Six months ago, we published a founder update, *Building the Engine of Growth*, about reaching AMP MVP0. We were honest about where we were: theory becoming testable reality, a living system ready to validate our core hypotheses, an invitation to a handful of early evangelists to help us shape what agentic marketing could become.
Today, we're back. And this time, the story is different.
MVP0 taught us that the engine works. The agents coordinate. The Knowledge Base learns. The content quality is real. What we discovered, however, is that the interface between human and agent, the cockpit, if you like, needed to evolve fundamentally before we could put this in the hands of ambitious marketing teams with confidence.
That evolution is AMP V1, and specifically its centrepiece: Chat Interface.
This isn't a UI refresh. It's a rethinking of what it means to work with a team of agents rather than simply prompt an AI tool.
When we built MVP0, we focused almost entirely on the right question: can the agents actually do the work? Can a coordinated mesh of specialist agents, Strategy, SEO, Content, Research, Brand QA, produce marketing output that meets or exceeds human quality, at a fraction of the time and cost?
The answer was yes. Emphatically.
But here's what we learned in testing: a powerful agent team with a weak interface is like hiring a brilliant marketing director and only ever communicating with them via text message. The capability is there. The coordination suffers. The trust doesn't build.
Our MVP0 interface was honest about its limitations. It was human-driven, chat-dependent, and linear. Every interaction required the human to initiate. There was no visibility into what the agents were doing, why they made certain decisions, or how their work connected to the broader brand strategy sitting in the Knowledge Base. There was no approval workflow. No quality gate you could see in action. No sense of a team working on your behalf.
For a human-in-the-loop model at an early stage, that was acceptable. For the platform we're building towards, one where agent teams handle most of the planning and execution while humans focus on creativity and strategy, it was a ceiling we needed to break through.
V1 breaks through it.
The design philosophy behind Chat Interface starts from a single question: what does it feel like to manage a high-performing team, rather than operate a tool?
If you've ever worked with a great team, you know the feeling. You set the direction. They execute. You get status updates without asking for them. Quality is checked before it reaches you. When something needs your decision, it surfaces, clearly, with context, without noise. When the work is done, it's done to a standard you'd be proud to put your name on.
That's the experience we're building.
In MVP0, if you wanted to know what the agents were doing, you had to ask. Now, you don't. The interface surfaces a content task queue: a live view of what content is being built by the agent mesh, what's queued, what's in review, and what's ready for your sign-off.
For product managers and CTOs evaluating agentic platforms, this is non-trivial. One of the most common objections we hear is: "How do I know what the AI is actually doing?" The answer, in Chat Interface, is: you can see it.
One of the most significant additions in V1 is a basic approval workflow. AMP doesn't just produce content and drop it in your inbox. It brings content to defined checkpoints, gates where a human reviews, approves, or redirects before the agent proceeds.
This is critical for two reasons. First, it builds trust: you only delegate fully once you've seen the quality consistently. Second, it aligns with how real marketing teams work. Senior marketers don't review every tweet, but they absolutely review campaign strategy, hero content, and anything client-facing.
AMP's approval workflow mirrors that reality.
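To make the checkpoint idea concrete, a gated task lifecycle can be sketched as a small state machine: agents move work forward, and only a human decision moves it past a review gate. The states and transitions below are illustrative assumptions for this post, not AMP's actual implementation:

```python
from enum import Enum, auto

class TaskState(Enum):
    QUEUED = auto()
    IN_PROGRESS = auto()
    IN_REVIEW = auto()    # waiting at a human checkpoint
    APPROVED = auto()
    REDIRECTED = auto()   # sent back to the agents with feedback

# Allowed transitions: agents advance work, humans gate it at IN_REVIEW.
TRANSITIONS = {
    TaskState.QUEUED: {TaskState.IN_PROGRESS},
    TaskState.IN_PROGRESS: {TaskState.IN_REVIEW},
    TaskState.IN_REVIEW: {TaskState.APPROVED, TaskState.REDIRECTED},
    TaskState.REDIRECTED: {TaskState.IN_PROGRESS},
    TaskState.APPROVED: set(),  # terminal: approved work ships
}

def advance(state: TaskState, target: TaskState) -> TaskState:
    """Move a task to `target`, rejecting any transition that skips a gate."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target
```

The point of modelling it this way is that nothing can reach `APPROVED` without passing through `IN_REVIEW`: the quality gate is structural, not a convention.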
Our Brand Messaging QA agent has always been one of AMP's structural differentiators. Every piece of content passes through a quality gate before delivery, checked for brand alignment, tone, persuasive structure, and SEO effectiveness. In MVP0, this happened in the background. You saw the output; you didn't see the work.
In V1, the QA process is surfaced in the interface. You can see the QA report, the agent's scoring, and any flags raised before content reaches you. For marketing leaders who've spent years explaining to boards why brand consistency matters, this is a feature that practically sells itself.
AMP is not one agent. It's a coordinated mesh, a Strategy Agent, an SEO Agent, a Content Agent, a Research Agent, a Customer Agent, and a Brand QA Agent, each a subject matter expert operating within a shared knowledge framework. MVP0's interface was just a form to request content. Now you can chat with your team.
You can see which agents are active on a given brief, how they're coordinating, and where handoffs are happening. For anyone who's tried to manage a fragmented marketing stack, briefing the SEO agency separately from the content team, separately from the social media manager, the experience of watching coordinated agents work in parallel is genuinely striking.
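In code terms, that kind of coordinated handoff can be pictured as a pipeline in which each specialist enriches a shared brief before passing it on, with the QA agent as the final gate. A toy sketch; only the agent names come from this post, and each agent's behaviour here is invented for illustration:

```python
# Toy handoff pipeline: each specialist agent enriches a shared brief
# and hands it to the next. Purely illustrative, not AMP's real code.

def strategy_agent(brief):
    brief["angle"] = "thought leadership"
    return brief

def research_agent(brief):
    brief["sources"] = ["analyst report", "competitor scan"]
    return brief

def content_agent(brief):
    brief["draft"] = f"{brief['topic']} piece, angled as {brief['angle']}"
    return brief

def brand_qa_agent(brief):
    # Quality gate: record a verdict rather than silently passing work through.
    brief["qa_passed"] = "draft" in brief and bool(brief["sources"])
    return brief

PIPELINE = [strategy_agent, research_agent, content_agent, brand_qa_agent]

def run_brief(brief):
    for agent in PIPELINE:
        brief = agent(brief)  # explicit handoff point between specialists
    return brief

result = run_brief({"topic": "agentic marketing"})
```

Because every handoff is an explicit step over a shared brief, the interface can show exactly which agent holds the work at any moment, which is what the visibility described above amounts to.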
We want to be transparent about something, because it matters for the people we're inviting into this next phase.
V1 is not a finished product. It is a living system in its next iteration. Some of what we've described above is in production. Some is in active development. Some will be shaped directly by the early adopters who join us in the coming months.
We're building for the market realities of 2027–2028, not 2025. And if that sounds familiar, it's because it's the same commitment we made with MVP0, to skate to where the puck is going.
Here's where we believe the puck is going:
2025: Marketing teams adopt AI tools. Productivity improves.
2026: Teams realise they have dozens of agents and workflows to maintain. New overhead emerges. The savings slow.
2027: The question shifts: "Can we have smarter agents that handle more?" Platforms offering SaaS-level agent coordination that self-heals and self-learns become the norm.
AMP is built for 2027. By the time the market asks the question, we intend to have two years of production data, cross-customer learning, and architectural refinement that cannot be replicated quickly.
V1 is the next step in that journey. And we need early adopters who understand, and want to help shape, where it's going.
As we enter V1, the market is more crowded and more confused than ever. Every week, a new AI tool promises "increased productivity". Most of them are productivity assistants wearing execution clothing.
We want to be unambiguous about what AMP is and isn't.
ChatGPT, and tools like it, are productivity assistants. They help a skilled human work faster. You prompt, they respond, you edit, you repeat. They have no memory of your brand, no awareness of your strategy, no connection to the other work your team is doing. They are excellent tools. They are not execution platforms.
AMP is an execution platform. The distinction is not semantic, it's structural.
⚡ The one-line version: ChatGPT is a calculator. AMP is an accountant who does your books, flags the risks, and files your returns, while you focus on growing the business.

When you use AMP, you're not prompting a model. You're briefing a team. The Strategy Agent frames the plan. The Research Agent analyses the market. The Content Agent drafts to brand. The QA Agent checks the work. The Distribution Agent sequences the channels. And the Knowledge Base, continuously updated with your brand guidelines, personas, competitive landscape, and past performance, is the institutional memory that ties it all together.
No prompt resets. No starting from scratch. No explaining your brand voice for the hundredth time.
As we move into V1, this distinction becomes the foundation of everything we're building in the interface. The agent task queue, the approval workflows, the multi-agent visibility, none of these make sense if you're thinking about AMP as a chat tool. They make complete sense if you're thinking about AMP as a marketing team that works 24×7.
"What excites me most about Chat Interface is not any single feature, it's what it represents architecturally," says Netanel Eliav, CTO of Jam 7. "In MVP0, the interface was a simple form to capture requests to help us validate our assumption around content quality. In V1, the interface becomes part of the agent system itself. Approval states, QA visibility, content queues, these aren't UX additions. They're coordination mechanisms that make the agent mesh more reliable, more auditable, and significantly more scalable. We're not just making it easier to use AMP. We're making AMP smarter every time a human interacts with it. That feedback loop, human judgement feeding directly back into agent performance, is what separates a genuinely intelligent system from a very fast content machine."
The technical foundation remains the ReAct framework combined with dynamic RAG, agents that reason step-by-step and retrieve relevant knowledge in real time from a centralised, continuously updated Knowledge Base. What V1 adds is the coordination layer: structured handoffs between agents, defined quality gates, and an interface that makes the entire process visible and controllable without requiring the human to drive every step.
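As a rough illustration of the reason-and-retrieve loop described above, here is a minimal ReAct-style step in Python: the agent records a thought, pulls relevant context from a knowledge store, and logs the observation on a scratchpad. The knowledge store and function names are assumptions made for this sketch; AMP's actual agents and Knowledge Base interface are not shown here:

```python
# Minimal ReAct-style step: reason -> retrieve -> record, repeated until done.
# Everything here is illustrative; AMP's real agent and Knowledge Base
# internals are not public.

KNOWLEDGE_BASE = {
    "brand_voice": "confident, plain-spoken, no jargon",
    "persona": "time-poor marketing leader",
}

def retrieve(query: str) -> str:
    """Stand-in for dynamic RAG: return the most relevant KB entry."""
    for key, value in KNOWLEDGE_BASE.items():
        if key in query:
            return value
    return ""

def react_step(task: str, scratchpad: list) -> str:
    """One reason/act iteration: think, look something up, log the result."""
    thought = f"Need context for: {task}"
    observation = retrieve(task)
    scratchpad.append((thought, observation))
    return observation

scratchpad = []
voice = react_step("draft intro in brand_voice", scratchpad)
```

The scratchpad is what makes the loop auditable: every retrieval an agent acts on is recorded alongside the reasoning that triggered it, which is the same property the V1 interface surfaces for humans.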
This is what SaaS-level scalability looks like when applied to AI agents. Not custom workflows per customer. Not brittle prompts hard-coded with brand knowledge. A mesh architecture that improves with every customer interaction and every human approval, across the entire platform, not just for one account.
We're entering V1 with a very specific ask: we want early adopters who want to build this with us, not just use it.
The early evangelist model we introduced with MVP0 worked. The feedback, the real-world briefs, the honest "this doesn't work for us" conversations, they shaped the roadmap that produced V1. We're doing it again, and the terms are the same.
You might be our ideal early adopter if you're a CMO or CTO evaluating platforms for your marketing team, a product manager who's watched their organisation buy ten AI tools and get ten percent improvement, or a founder who needs to scale marketing output without scaling headcount linearly. If that's you, we want to hear from you.
We're limiting this cohort to 10–15 early adopters to ensure every partnership gets the attention it deserves.
To apply, email Jason directly: jason@jam7.com
Include: your company name and website, your current marketing team size and budget, your biggest marketing challenge right now, and what you hope to achieve with AMP in the next six months.
For those who want to understand the full arc of where AMP is going, here's how we're thinking about it:
| Version | Mode | What Changes |
|---|---|---|
| MVP0 | Human-driven chat | Human asks, AI responds, content is created task by task |
| V1 | Agent-assisted | Agent task queue, QA visibility, approval workflows, multi-agent coordination |
| V2 | Agent-led with checkpoints | Agents initiate work, propose content, humans approve at strategic gates |
| Autonomous | Human oversight only | Dashboard of running campaigns, exception alerts, batch approvals |
We are firmly in the V1 phase. The destination is Autonomous. The journey is honest, disciplined, and grounded in real customer validation at every step.
**What did MVP0 prove, and what does V1 change?**

MVP0 proved the agent architecture works, that specialist agents coordinated through a shared Knowledge Base can produce brand-compliant, high-quality marketing content. V1 evolves the interface to match that capability: you can now see agents working, manage approval workflows, track QA in real time, and monitor multi-agent coordination. It's the difference between knowing you have a great team and being able to manage them properly.
**Is V1 a finished product?**

No, and we're deliberately transparent about that. V1 is a living system in active development. Some features are in production, others are being shaped right now, partly by the early adopters who join this cohort. If you want a polished, finished product, we're not the right fit yet. If you want to help build the platform that will define agentic marketing in 2027–2028, we'd love to talk.
**How is AMP different from ChatGPT or Claude?**

ChatGPT and Claude are productivity assistants: they help a skilled human work faster. AMP is an execution platform. It ingests your brand guidelines, personas, and strategic frameworks into a centralised Knowledge Base, then applies a coordinated mesh of specialist agents to plan, produce, QA, and distribute marketing content, without you having to prompt every step. The interface in V1 makes this coordination visible and controllable. It's a fundamentally different category.
**What does being an early adopter mean in practice?**

It means you're not just a customer, you're a design partner. Your real briefs, your feedback on what works and what doesn't, and your input on feature priorities directly shape the AMP roadmap. In return, you get preferential pricing locked in for two years, direct access to the founding team, and a meaningful competitive advantage as the market catches up to where you already are.
**What kinds of content can AMP produce?**

Across the core platform: blog posts, social media content, email sequences, ad copy, video scripts, case study drafts, and strategic marketing plans. All content passes through our Brand Messaging QA agent before delivery. In V1, the QA process is fully visible in the interface.
**How is our data handled?**

Your brand guidelines, content, and customer data are never shared with other customers. The only shared element is anonymised performance benchmarks (e.g. average engagement rates by content type in your sector) that help improve AI recommendations across the platform. All data is stored on EU-based servers with full GDPR compliance and enterprise-grade encryption.
**How long does setup take?**

Initial setup, brand ingestion, personas, positioning framework, takes one to two days if your brand assets are well-defined. You'll see your first AMP-generated content within days of completing setup. Full platform adoption typically takes two to four weeks. One honest note: we consistently find that the quality of what AMP produces is directly proportional to the clarity of your brand positioning going in. If your messaging needs work, we'll help you sharpen it first.
**What if AMP isn't the right fit for us?**

We want to know that. Early adopters are partners in validation: if we determine AMP isn't the right fit for your use case, we'll be direct about it, help you transition smoothly, and potentially use the gap you've identified to improve the platform for everyone else. No hard sell. No lock-in beyond the committed contract period.
**Can early adopters influence the roadmap?**

Absolutely and directly. Requests from five or more early adopters in the same cohort move to priority status. You'll have quarterly input sessions on feature prioritisation. This isn't a feedback form, it's a genuine design partnership.
The engine is running. The interface is evolving. The team is learning.
If you're ready to be part of what comes next, email jason@jam7.com
Jason Nash
Chief Product Officer, Jam 7
Netanel Eliav
Chief Technology Officer, Jam 7