Debate Location: Committee Room 14, House of Commons, London SW1A 0AA, United Kingdom
Debate Date: Tuesday, October 28, 2025
Also published on The Chartered Institute of Marketing website
On 28 October 2025, marketers will gather in Committee Room 14 at the House of Commons to debate the motion: “The privacy battle has been lost and your personal data will never be safe again.”
These are my own reflections on the motion, written before hearing the arguments for and against, as a way of testing my view of where marketing really stands on privacy today and what role AI and agentic systems might play in shaping its future.
They’ll also serve as my speaker’s notes when I’m invited to comment during the debate.
If we take a cold look at the state of data privacy today, it’s hard to argue that we’re winning.
Websites still set non-essential cookies before anyone clicks “accept”. Data leaks out of companies large and small almost every week. Consent has become a box-ticking exercise.
So, if the question is whether privacy as we practise it now has failed, then yes, it has. We lost that battle some time ago.
But the motion goes further. It says privacy will never be safe again.
That’s the part I reject. To claim that privacy will never be safe again is lazy fatalism. It assumes the story ends here.
New personal data is still being created every second, which means the fight isn’t over: what has already leaked is only part of the story, and everything created from now on can still be protected.
If that ever stopped, we’d have far bigger problems to worry about.
But as it stands, technology is already giving us a way to rebuild privacy.
Saying privacy is gone forever ignores how technology evolves. Every time marketing has over-reached, new tools and norms have eventually rebalanced the system.
Spam gave rise to filters.
Click fraud led to verification.
Data abuse will meet the same fate, not through moral awakening, but through better design.
We now have the building blocks to make privacy self-enforcing.
AI can already detect policy breaches, flag anomalies, and redact personal data automatically.
Agentic systems (AI that can act independently within guardrails) take that further.
They can apply consent rules, verify lawful use, and report misuse faster than any human team could.
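To make the redaction piece concrete, here is a toy sketch in Python. It is rule-based rather than the ML-driven detection the text refers to, and the patterns and placeholder format are invented for illustration.

```python
import re

# Rule-based stand-ins for the ML detectors the article refers to.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b07\d{3}\s?\d{3}\s?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace recognised personal data with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach Jo on 07700 900123 or jo@example.com"))
# -> Reach Jo on [UK_PHONE REDACTED] or [EMAIL REDACTED]
```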
That’s where Agentic Privacy comes in.
Agentic Privacy is privacy that moves with the data.
Every AI agent that touches information (a content generator, a bid optimiser, a segmentation engine) carries the consent and purpose of that data with it.
It knows what it can use, why it can use it, and it logs every action for review.
In practice, that means:
A campaign agent checks consent status before activating an audience.
A creative agent tags every output with its data sources.
A compliance agent monitors all of it in real time, flagging anything that steps outside policy.
If consent isn’t valid, the task doesn’t execute.
Privacy becomes a built-in rule, not an optional setting. That’s the real fix.
Not more paperwork, but systems that simply refuse to misbehave.
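A minimal sketch of that consent-gating pattern, assuming an in-memory consent record for simplicity. Every name here (ConsentRecord, Audience, activate_campaign) is hypothetical; a production system would sit on a consent-management platform rather than a Python object.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """The consent and purpose that travel with the data itself."""
    subject_id: str
    purposes: set[str]          # e.g. {"email_marketing"}
    expires: datetime

    def permits(self, purpose: str) -> bool:
        return purpose in self.purposes and datetime.now(timezone.utc) < self.expires

@dataclass
class Audience:
    name: str
    members: list[tuple[str, ConsentRecord]]   # each row carries its own consent

audit_log: list[dict] = []                     # every decision is logged for review

def activate_campaign(audience: Audience, purpose: str) -> list[str]:
    """Campaign agent: if consent isn't valid, the task doesn't execute."""
    activated = []
    for subject_id, consent in audience.members:
        allowed = consent.permits(purpose)
        audit_log.append({
            "agent": "campaign_agent",
            "subject": subject_id,
            "purpose": purpose,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        if allowed:
            activated.append(subject_id)
    return activated
```

The structural point is that the consent check and the audit entry are not optional steps someone can skip; they are the only path through which the task executes.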
Marketers like to talk about “owning the tech stack.” Soon, they’ll need to own their compliance stack too.
Internal agents can manage privacy inside organisations by verifying permissions, logging data movement, and maintaining audit trails automatically.
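One way an internal agent could keep that audit trail honest, sketched here as an illustration rather than a prescription: chain each entry to the one before it, so a retroactive edit anywhere breaks the chain and is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log in which each entry commits to the previous one,
    so tampering with any past entry breaks the chain."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, agent: str, action: str, dataset: str) -> None:
        entry = {
            "agent": agent,
            "action": action,
            "dataset": dataset,
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": self.entries[-1]["hash"] if self.entries else "genesis",
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

trail = AuditTrail()
trail.record("segmentation_engine", "read", "crm_export_2025_10")
trail.record("campaign_agent", "activate", "lapsed_buyers")
assert trail.verify()
```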
But some brands will always cut corners. That’s where external oversight matters.
Imagine ICO-approved crawler agents patrolling the web, spotting unlawful trackers or unapproved data transfers and issuing automated warnings.
It’s not about punishment; it’s about scale.
You can’t police billions of websites manually. Agents can, quietly, constantly, and without the backlog.
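A deliberately naive sketch of the first check such a crawler might run: visit a site as a brand-new visitor who has clicked nothing, and flag any non-essential cookies set before consent. The allow-list is invented for illustration, and a real auditor would need a headless browser to catch script-set cookies; response headers cover only the easy cases.

```python
import requests

# Illustrative allow-list of cookies a site might legitimately set before
# consent; a real regulator agent would maintain an authoritative one.
STRICTLY_NECESSARY = {"sessionid", "csrftoken"}

def audit_site(url: str) -> list[str]:
    """Fetch a page as a first-time visitor who has clicked nothing,
    and flag non-essential cookies set before any consent was given."""
    response = requests.get(url, timeout=10)
    return [
        cookie.name
        for cookie in response.cookies
        if cookie.name.lower() not in STRICTLY_NECESSARY
    ]

if __name__ == "__main__":
    for name in audit_site("https://example.com"):
        print(f"non-essential cookie set before consent: {name}")
```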
The next few years will define whether privacy remains a failed ideal or becomes a working feature of the digital economy.
We’ll see agentic systems not just optimising media spend but policing their own behaviour, verifying consent, tracing data lineage, and producing evidence on demand.
Regulators will use the same tools to monitor the market in real time.
When that happens, privacy stops being a static document and becomes a dynamic process: alive, measurable, and enforceable.
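Data lineage follows the same pattern: if every artifact an agent produces records what it was built from, producing evidence on demand is just a walk back up that chain. The names below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Anything an agent produces: a segment, a creative, a bid model."""
    artifact_id: str
    produced_by: str                          # which agent made it
    sources: list["Artifact"] = field(default_factory=list)

def lineage(artifact: Artifact, depth: int = 0) -> list[str]:
    """Produce a human-readable evidence trail for one artifact."""
    trail = [f"{'  ' * depth}{artifact.artifact_id} (by {artifact.produced_by})"]
    for src in artifact.sources:
        trail.extend(lineage(src, depth + 1))
    return trail

crm = Artifact("crm_export_2025_10", "data_warehouse")
segment = Artifact("lapsed_buyers", "segmentation_engine", sources=[crm])
ad = Artifact("retargeting_creative_42", "creative_agent", sources=[segment])
print("\n".join(lineage(ad)))
```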
So yes, we’ve lost the privacy battle as it’s fought today.
But “never safe again” is wrong.
Marketing created the data problem; it also holds the means to fix it.
If we embed accountability into the technology, and if every agent knows the limits of its own authority, privacy becomes possible again.
The first era of marketing used data to persuade.
The next will use intelligence to protect.
That’s how we win the battle that always matters: not for clicks or conversions, but for trust.
What is Agentic Privacy?
Agentic Privacy means privacy that moves with the data. Each AI system or agent carries its own record of consent, purpose and accountability, so it can prove what it used, why, and who authorised it. It turns privacy from a policy document into an operating rule.
How did marketing lose the privacy battle?
Automation outpaced regulation. Tools collect and process personal data faster than humans can check it, and consent banners became routine click-throughs. The result is scale without oversight: marketing systems that work perfectly from a performance point of view and terribly from a privacy one.
Can AI fix it?
Yes. AI can already identify policy breaches, redact personal data, and audit data use automatically. Agentic systems extend that by enforcing consent rules before actions happen, providing real-time accountability instead of after-the-fact investigations.
What would agentic systems change for marketers?
They’d make compliance part of the workflow rather than a separate step. Campaign agents could verify consent before activation, creative tools could tag their data sources, and compliance agents could flag issues as they occur. Privacy becomes built in rather than bolted on.