Onil Gunawardana on AI Agent Squads Giving Enterprise Marketers Superpowers

"The goal isn't to build a marketing department that runs itself," says enterprise AI expert Onil Gunawardana. "It's to give marketers superpowers—AI agent squads that handle the routine, statistical work so humans can focus on what's hard to automate: taste, context, and trade-offs."

Sit in on a weekly marketing meeting at a big company, and you can sense the untapped potential. The talent around the table is deep. But the next big advantage won't be a new channel—it'll be a new kind of teammate: one that gives all that talent superpowers.

Around the table, you have people with years of experience in brand, product, analytics, and sales. They argue about positioning, budgets, and what the quarter "needs." Then the meeting ends, and many of those same people go back to their desks and spend most of the week moving numbers between tools.

They export from one system, paste into another, and reconcile three slightly different dashboards. By Friday, they've spent more time feeding the machinery than thinking about what the machinery should do. The important questions—what to say, who to speak to, what bets to place—get squeezed between status updates and spreadsheet maintenance.

Given the kind of intelligent software we have now, that's starting to look like an opportunity.

Gunawardana has spent over 15 years leading data teams at top Silicon Valley companies, building platforms and machine learning products that deliver business impact. He's seen this pattern before. "We have models that can read more data than any human can follow," he explains. "They notice subtle patterns, generate decent drafts, and keep trying small variations until something works a little better. Keeping people glued to CSVs is like asking your head chef to spend the dinner rush washing dishes."

That's where AI agent squads come in. The point isn't to remove humans from marketing. It's to move them up the stack—from operating the machinery to conducting the orchestra.

And this isn't a vague promise. At Accenture, that shift reportedly shortened time-to-market by 25–55%.

From One-Off Outputs to Ongoing Agent Squads

Most people's experience of AI in marketing so far has been generative: you type "write a subject line for this webinar," and a few seconds later, you get something you can tweak.

"It feels like working with a very capable assistant over chat," Gunawardana says. "Helpful, responsive, and surprisingly good at drafts. But you still have to be in the loop for each step."

Agentic AI is different. The key distinction: with generative AI, progress happens turn-by-turn—you prompt, it responds, you prompt again. With an agent squad, progress happens between turns. You set a goal, and the system keeps working while you're doing something else.

It's closer to adding a team lead who not only does some of the work, but also coordinates a whole squad of AI agents on your behalf.

Instead of asking for a single artifact, you give it a higher-level project: a narrow goal inside your environment, like "keep this campaign healthy within these constraints," or "turn this launch brief into a complete set of ready-to-review assets." The system doesn't just spit out one thing. It breaks the work into pieces, connects to your existing tools, and starts doing the repetitive parts on its own.

"Your role doesn't disappear; it moves up a level," Gunawardana explains. "Instead of being the person doing every task, you become the person who designs the workflow: the goals, the handoffs, the escalation rules, and the guardrails."

He describes an AI agent as software that can understand a narrow goal and look into your systems—CRM, marketing automation, data warehouse, and analytics. It takes a small action, then watches what happened to decide what to do next. In practice, an AI agent squad sits on top of your customer data platform or warehouse, your activation tools, and your measurement layer.

The loop is what matters. Generative AI waits for you to prompt it again at every step. An AI agent squad keeps going until it hits a guardrail or something that actually requires judgment. That's when a human steps back in.

In marketing terms, guardrails are concrete things: spend caps, audience exclusions, brand voice constraints, and approval gates for regulated topics.
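Those guardrails are simple enough to write down as explicit checks. A minimal sketch in Python, where the class, thresholds, and category names are all hypothetical illustrations rather than any real product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Guardrails:
    """Hypothetical guardrail config for one campaign."""
    daily_spend_cap: float = 5_000.0  # block actions that exceed this
    excluded_audiences: set = field(default_factory=lambda: {"minors", "opted_out"})
    regulated_topics: set = field(default_factory=lambda: {"health", "finance"})

    def check(self, action: dict) -> str:
        """Return 'allow', 'block', or 'escalate' for a proposed action."""
        if action.get("audience") in self.excluded_audiences:
            return "block"
        if action.get("spend", 0.0) > self.daily_spend_cap:
            return "block"
        if action.get("topic") in self.regulated_topics:
            return "escalate"  # approval gate: route to a human
        return "allow"

rails = Guardrails()
print(rails.check({"audience": "new_signups", "spend": 800, "topic": "health"}))  # escalate
```

The point of the sketch is that guardrails are deterministic and auditable: the squad can act freely inside them, and anything regulated is escalated rather than decided.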

Here's what "between turns" looks like in practice. A cohort's conversion rate dips on Wednesday afternoon. The squad notices before the next standup, pauses a small slice of spend, and drafts three new subject lines and two landing-page variations that stay inside the brand rules. It routes the options to a human for a quick approval, then launches an A/B test and watches the results. By the time the team meets again, the squad has a short summary: what changed, what it tried, and what worked.
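That scenario follows a simple watch–act–escalate loop. A hedged sketch of one pass through it, where the threshold and every helper are toy stand-ins for real tool integrations:

```python
# Minimal sketch of the "between turns" loop from the scenario above.
# The threshold and all helpers are hypothetical stand-ins, not a real system.

DIP_THRESHOLD = 0.8  # act when conversion drops below 80% of baseline

def agent_step(cohort_rate, baseline_rate, draft_fn, approve_fn, launch_fn):
    """One pass of the loop: watch a metric, act small, gate on a human."""
    if cohort_rate >= DIP_THRESHOLD * baseline_rate:
        return "healthy"  # nothing to do between turns
    variants = draft_fn(n_subjects=3, n_pages=2)  # stay inside brand rules
    approved = approve_fn(variants)               # human approval gate
    if approved:
        launch_fn(approved)                       # start the A/B test
        return "testing"
    return "paused: awaiting human review"

# Toy stand-ins so the sketch runs end to end:
drafts = lambda n_subjects, n_pages: [f"subject_{i}" for i in range(n_subjects)]
approve = lambda variants: variants[:1]  # the human picks one option
launched = []
print(agent_step(0.02, 0.04, drafts, approve, launched.append))  # prints "testing"
```

Notice where the human sits: the squad drafts and launches, but nothing ships without passing through `approve_fn`.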

"The human team still decides whether the dip was a one-off or a signal," Gunawardana notes, "and whether the campaign needs a bigger rethink."

AI agent squads are particularly good at the parts of marketing that are mostly statistics: watching performance curves all day, turning simple rules into actions, and trying dozens of small experiments in the background. They are not meant to decide what kind of company you want to be.

Three Levels of Work

So, how far does this go?

Gunawardana says it helps to look more closely at the work inside a marketing org. Not all tasks are the same. Watch what people actually do, and you notice at least three levels.

At the 100-level are the tasks that the company has done thousands of times. Pulling standard weekly reports. Applying known rules like "pause ads with a cost per acquisition above this threshold." Cloning a proven nurture sequence and swapping out the offer. Turning decisions that have already been made into concrete actions.

At the 200-level are tasks that mix pattern and judgment. Designing a new welcome flow for a new segment. Interpreting why a specific cohort is suddenly churning. Deciding whether a surprising spike is real or just noise. There are examples to learn from, but no rote recipe.

At the 300-level are the calls that change the game. Repositioning the brand. Entering a new market. Responding to a public crisis. Deciding whether a campaign is acceptable from a regulatory or ethical point of view. These decisions don't just optimize a curve; they redraw it.

"AI agent squads belong primarily at the 100-level," Gunawardana says. "In the 200-level, they can suggest options, but humans choose. The 300-level remains human territory by design."

For example, the squad might surface three plausible explanations for a cohort drop and propose two test plans—but a human decides which explanation to bet on and what not to touch.
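The division of labor above reduces to one routing rule. A sketch, with illustrative labels rather than a real product interface:

```python
# Sketch of the 100/200/300 routing rule described above.
# Levels and action strings are illustrative, not a real API.

def route(task_level: int) -> str:
    """Decide who acts: the squad, the squad plus a human, or humans only."""
    if task_level < 200:
        return "squad executes within guardrails"       # 100-level: automate
    if task_level < 300:
        return "squad proposes options; human chooses"  # 200-level: assist
    return "human territory by design"                  # 300-level: judgment

print(route(100))  # squad executes within guardrails
print(route(230))  # squad proposes options; human chooses
print(route(300))  # human territory by design
```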

This 100/200/300 model reflects the same philosophy behind Gunawardana's 5Ps of Product—a framework he developed for taking products from concept to scaled revenue. The same principle applies: automate the repeatable, focus humans on judgment calls, and build the platform that lets both work together.

"An AI agent squad is what you get when you let software take the parts of a marketer's job that are mostly statistics and repetition," Gunawardana argues. "You keep the rest for people."

Over time, senior marketers look less like individual contributors and more like conductors. They decide which parts of the score belong to humans and which can be handed to an agent squad—rather than trying to play every instrument themselves.

What does conducting look like in practice? Two case studies offer a window.

Agent Squads in the Wild: Accenture's Campaign Engine

Accenture's marketing team runs hundreds of campaigns with nearly a thousand marketers worldwide. Their AI Refinery—fourteen AI agents—handles the routine, statistical work: research agents pull competitor positioning and market data, analytics agents watch performance dashboards, planning agents check calendars and flag scheduling conflicts, and drafting agents assemble strategy briefs.

The results: campaigns cut from 135 manual steps to 85, time-to-market improved 25–55%, and strategy briefs that took three weeks are now drafted in minutes.

That frees humans for taste, context, and trade-offs. A strategist who spent Monday assembling data now spends Monday deciding whether the campaign tone should be bold or reassuring—and whether this quarter's message still fits the market mood.

Agent Squads on the Page: Writer's Content Engine

Writer's AI platform handles the routine pipeline: one agent gathers competitive context, another drafts landing pages and emails, another checks every line against brand voice and legal rules, and another packages the assets for review.

A Forrester study on Writer found a 333% ROI, doubled efficiency, halved agency spend, and content reviews completed 85% faster.

Humans keep the hard parts. The squad can generate five ad variants in seconds—but a human decides which one captures the product's soul. The squad flags a phrase that might violate regulations—but a human weighs whether the risk is worth the impact.

"A squad can amplify whatever you measure," Gunawardana points out. "That's why humans still own the goals, the trade-offs, and the moments when restraint is the right move."

Superpowers, Not Substitutes

Seen this way, AI agent squads are not a new brain dropped on top of the org chart. They are a layer underneath it, and the human job shifts from operating the machinery to deciding how that machinery should behave.

Yes, previous waves of marketing automation promised similar transformations. The difference this time is that agent squads can handle ambiguity and operate autonomously within constraints. They don't just execute rules—they can reason about which rule applies. That's what makes the "between turns" work possible.

As more 100-level work moves to an AI agent squad, the constraint in marketing shifts. "The real change isn't 'human versus AI,'" Gunawardana says. "It's 'operator versus orchestrator.' The job becomes managing the squad: what it's allowed to touch, when it must stop, and how you measure whether it's helping."

The bottleneck moves too. It's no longer "how many reports can we pull?" or "how many variations can we try?" It becomes "how many good questions can we ask?" and "how many 200- and 300-level decisions can we make well?"

A marketer with an agent squad has more eyes on the data—software is always watching for patterns. More shots on goal—drafting and testing aren't scarce resources anymore. More headspace in strategy conversations—because they didn't spend the morning reconciling dashboards.

If you want to try this, here's a practical recipe: Pick one workflow you already understand well. Tag every task as 100, 200, or 300. Hand the AI agent squad a narrow slice of 100-level work first—something with clear success metrics. Put hard guardrails around spend and audiences. Require human approval at every point where judgment matters. Measure outcomes. Expand only when you trust the behavior.

There will always be pressure to push these systems higher up the ladder—to let them inform decisions that touch brand, ethics, or long-term positioning. Gunawardana's argument is that the most durable value comes from resisting that temptation.

"If we do this right, AI agent squads won't make marketing less human," he says. "They'll finally let the human parts of the job sit where they were supposed to be all along: at the top of the stack, not buried under spreadsheets."

The scarcest resource in enterprise marketing has never been data or channels. It's the focused thinking time of skilled people—their taste, context, and judgment on trade-offs. AI agent squads don't replace that thinking—they give it superpowers.

ⓒ 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.
