Gigi Gets Promoted: Moving from Assisted AI to Authoritative AI

February 11, 2026

5 min

We made a very intentional design decision when building Gigi: no change is made by Gigi without human direction. A human is always in the loop to approve a set of bid modifiers, update a media plan, extend flights, etc. This decision resonated with our customers because it mitigates the operational risk of adopting AI in the enterprise.

In effect, Gigi acts like an entry-level team member. When you assign an entry-level team member work, you expect to review that work before it is finalized. You assign Gigi a task, Gigi presents her chain of thought and work product, and you approve Gigi’s work. Since the tasks that Gigi is often assigned are lower-level ad ops tasks, this way of working seems natural.

Our design decision is not unique to Gigi. Many of the leading AI companies at the application layer have crafted similar workflows of “assisted AI”. As Jamin Ball of Altimeter Capital observes:

Most enterprises have adopted “assisted AI”. AI drafts the email, but a human sends it. AI suggests the next action, but a human approves it. AI flags a risk, but a human decides what to do. AI summarizes the ticket, the contract, the account, the customer, and then waits patiently for someone to act. This is the comfortable version of AI. It feels safe. It feels controllable. It also feels productive in a very local sense. People save time. Work moves a bit faster. Everyone can point to usage charts going up and feel good about progress.

We’ve seen this in practice. Our customers save large chunks of time in operating the Amazon DSP. Advertising performance improves. Better decisions are made.

But now, at the start of the year, our savviest customers asked us to amend this paradigm. They said: “We trust Gigi with certain tasks. I no longer want to be a bottleneck to her taking action. Can she do it automatically and simply provide a changelog of the completed work?”

We knew we’d get to this point, but we didn’t know it would happen so fast. These customers essentially promoted Gigi. She no longer needs approval to submit work and can move on to more challenging tasks. This is obviously an encouraging sign: our customers have built enough trust in Gigi to automate these tasks. More importantly, these steps toward automation let Gigi provide more value to our customers, because when AI is merely assistive, its effectiveness is limited:
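The shift described above can be pictured as a mode toggle on the same agent: in assistive mode a human gate must approve each proposed change before it is applied, while in authoritative mode the change applies immediately and is simply recorded in a changelog. The sketch below is purely illustrative; the `Agent`, `Mode`, and `propose` names are hypothetical and not Gigi's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Mode(Enum):
    ASSISTIVE = "assistive"          # a human approves before any change is applied
    AUTHORITATIVE = "authoritative"  # changes apply immediately; a changelog records them

@dataclass
class Agent:
    """Hypothetical sketch of an agent with a human-in-the-loop gate."""
    mode: Mode
    changelog: list = field(default_factory=list)

    def propose(self, task: str, apply_fn: Callable[[], str],
                approve: Callable[[str], bool]) -> bool:
        # In assistive mode, the human gate can reject the change outright.
        if self.mode is Mode.ASSISTIVE and not approve(task):
            self.changelog.append(f"REJECTED: {task}")
            return False
        # In authoritative mode (or once approved), apply and log the change.
        result = apply_fn()
        self.changelog.append(f"APPLIED: {task} -> {result}")
        return True

# Assistive: the human callback decides whether the change goes through.
gigi = Agent(Mode.ASSISTIVE)
gigi.propose("raise bid modifier +10%", lambda: "ok", approve=lambda t: True)

# Authoritative: the approve callback is never consulted; only the changelog records the work.
gigi.mode = Mode.AUTHORITATIVE
gigi.propose("extend flight by 7 days", lambda: "ok", approve=lambda t: False)
print(gigi.changelog)
```

The design choice worth noting is that promotion is just flipping the mode per task, so the audit trail (the changelog) stays identical in both modes; only the position of the human in the loop changes.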

What rarely happens in this mode is a step change in outcomes. Costs do not collapse. Cycle times do not fundamentally reset. Headcount plans do not bend. The organization still runs at the speed of human review, human judgment, and human bottlenecks. The AI is helpful, but it is not decisive.

We’re excited by the path forward. This year, as LLMs continue to see step-change improvements and Gigi becomes smarter and more reliable, we believe Gigi should proactively identify the work needed to achieve our customers’ goals. If she does this well, she will earn further trust, and our customers will want her to complete that proactive work automatically. That shift will transform Gigi into “authoritative AI”, which is when enterprises will see the most outsized returns from AI. Ball identifies what separates leading adopters:

The most advanced enterprises are not necessarily using more advanced models. They are not always spending more on infrastructure. What they are doing differently is deciding, explicitly, where software is allowed to take responsibility. They pick narrow domains. They define tight guardrails. They invest heavily in sources of truth. And then they let the system act.

We will continue to innovate and make Gigi smarter. We will also provide our customers with varying levels of control and allow them to choose whether Gigi is assistive or authoritative. As Ball concludes:

The biggest AI decision companies face has very little to do with AI at all. It is a decision about control. About trust. About whether software is allowed to do more than whisper suggestions into a human ear. Assistive AI saves time. Authoritative AI changes outcomes. And that line, more than any model choice or benchmark score, is where the real value starts to show up.

Like any promoted employee, Gigi's growth won't stop here. As she becomes more authoritative, she'll graduate from executing tactical tasks to proposing strategic initiatives. Human oversight will shift from approving individual bid changes to guiding broader campaign strategy, moving from “did you adjust this correctly?” to “should we pivot our approach?”

We’re excited to release task automation to a small group of customers this week. It is the first step in Gigi becoming authoritative AI.