
AI is the wolf we're trying to domesticate. Many digital agencies will get eaten

  • Writer: Kashif Hasan
  • Jan 22
  • 6 min read

How we’re using AI today in the enterprise


Enterprise AI has arrived. Governments have licensed it. Most big firms have rolled it out.


And yet, for most people actually using it, the day-to-day experience is quite underwhelming. This is despite the hype from big tech, the media, TikTok and LinkedIn, which tell us that we’re already living in the future: better keep up or get ready for redundancy, capped with permanent professional obsolescence.


That’s the nature of hype. And investor bubbles. But that feeling of being underwhelmed by your new Copilot assistant isn’t unique to you, and nor is it because the technology is weak. It’s because organisations are behaving exactly as they always do. When we look at how AI is actually being enabled inside large institutions, the pattern is predictable. We’re using AI for drafting docs, summarising meeting notes, rewriting emails and explaining things better than we normally would. In other words, in the enterprise, we’re speeding up work that (a) already existed and (b) is pretty low stakes.


Enterprises are machines. They optimise for resilience, defensibility, continuity and risk management. They hate to experiment freely. They’re driven to survive scrutiny. Legal, regulatory, reputational, political.


So when AI arrives, it is greeted in exactly the way you’d expect: as an anodyne productivity tool for tasks on the periphery of the core business. Something that helps employees move a little faster without changing how decisions are made and without exposing new, as-yet-unknown forms of risk.


Copilot and every variation of it are treated as tools, and tools - no matter how powerful - do not change operational behaviour on their own. They are useful. We are aware of them. We are all now AI-aware.


Which is why organisations license it. Pilot it. And encourage responsible use. But they keep it safely contained with minimal organisational consequence.


From the outside, this can look disappointing. But from the inside, it’s perfectly rational. AI is a wolf and the enterprise is attempting to domesticate it.


In the UK Government’s Copilot experiment, around 20,000 civil servants across 12 organisations took part. Users self-reported an average of 26 minutes saved per day.


17% reported no time savings at all.


Imagine a major institution granting access to seemingly infinite computational power and nearly 1 in 5 said, "meh".


But let's ignore that for a moment. When we look at those who claimed their 26 minutes of time saved, the expected pattern emerges:


• Summarising documents

• Drafting first versions

• Transcribing or summarising meetings


In several cases, Copilot didn’t save time at all. It created work. If people didn’t trust or use the output, or only did a task because Copilot nudged them to, that counted as negative time saved.


Microsoft’s own research shows that around three quarters of knowledge workers now use generative AI at work, and the majority bring their own tools.


Unsurprisingly, separate studies show many users actively conceal their usage because of optics, politics, and uncertainty about how it will be perceived.


So enterprises now run two AI realities:


1. The official CYA one: licences, pilots, centres of excellence

2. The private BYO one: discreet, personal shortcuts with plausible deniability


And that’s why, overall, the results feel underwhelming. Because AI transformation does not come from shaving a few minutes off meeting minutes. It comes from building systems that learn and harness knowledge.


Which brings us to the harder question.


How AI Could Be Used for Real Compounding Advantage


The real promise of AI is not productivity.


It’s memory.


Specifically, institutional memory that persists beyond individuals, projects, suppliers and restructures. Memory that can be queried, evaluated and improved over time. This is the difference between using AI to do work faster and using AI to make the organisation smarter.
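To make "memory that can be queried, evaluated and improved over time" concrete, here is a minimal sketch of what such a store might look like in code. Everything here is illustrative: the `MemoryStore` class, its methods, and the tag-based retrieval are assumptions, not a description of any real product; a production system would use semantic search and proper governance rather than an in-process list.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    """One unit of institutional memory: a decision plus its context."""
    decision: str
    context: str
    tags: set[str]
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    confidence: float = 0.5  # revised as outcomes feed back into the system

class MemoryStore:
    """A queryable store of decisions that outlives individuals and projects."""

    def __init__(self) -> None:
        self.entries: list[MemoryEntry] = []

    def record(self, decision: str, context: str, tags: set[str]) -> MemoryEntry:
        # Capture: the decision persists beyond the person who made it.
        entry = MemoryEntry(decision, context, tags)
        self.entries.append(entry)
        return entry

    def query(self, tag: str) -> list[MemoryEntry]:
        # Retrieve: context is available at the moment it's needed,
        # not reconstructed from scratch every time.
        return [e for e in self.entries if tag in e.tags]

    def feed_back(self, entry: MemoryEntry, outcome_good: bool) -> None:
        # Improve: outcomes adjust confidence, so the memory gets
        # better over time rather than just bigger.
        delta = 0.1 if outcome_good else -0.1
        entry.confidence = min(1.0, max(0.0, entry.confidence + delta))
```

The point of the sketch is the shape, not the code: capture, retrieval and feedback are three distinct operations, and it is the feedback loop that turns a document graveyard into memory.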


Most organisations are terrible at remembering because they operate in silos and they outsource projects and business processes. Knowledge and decisions are saved in slide decks, email threads, Slack messages, people’s heads. And eventually, people leave.


AI-native organisations will treat this as the real problem to solve. They ask a different question.


Not:


“Can AI help us complete this task faster?”


But:


“What knowledge needs to be nurtured and never lost again?”


Because once memory becomes a tangible asset, we set the stage for compounding competitive advantage.


Decisions improve because context is retrievable.


Workflows improve because outcomes feed back into the system.


Risk improves because behaviour is observable, testable, and auditable.


This is where the second-order effects appear.


Decision latency collapses. Because the space between question, context, judgement, and action shrinks. Institutional knowledge is available at the moment it’s needed, not reconstructed from scratch every time.


Learning compounds. The organisation’s error rate drops. Mistakes become harder to repeat.


Risk becomes programmable. Boundaries are explicit. Permissions are enforced. Outputs are logged. Behaviour is evaluated. Failure modes are designed in advance, not discovered in public.


And perhaps most quietly disruptive of all: the boundary between project and BAU dissolves. When project deliverables can be produced in a matter of prompts, change is cheap and reversible, and the old project lifecycle (discovery, design, delivery, handover, disintegration) starts to look absurdly old hat.


This is what it means to be AI-native. Structurally capable of learning.


But getting there requires something most organisations—and most suppliers—are deeply uncomfortable with.


Continuity.



The Role of Agencies and Why This Is All So Hard for Them


To understand why this transition is so difficult, you first have to understand why enterprises spend so much money on agencies and CMS vendors in the first place.


It isn’t because they want websites.


They want certainty.


They want decisions they can defend.

Partners they can point to.

Platforms that look legitimate to procurement, legal, security, and the board.


A website is just the visible artefact. The real product being purchased is reassurance: that someone reputable is accountable if things go wrong, that the work followed “best practice”, that the decision was reasonable at the time.


This is why the spend persists even when the output looks trivial in hindsight.


And it’s why the arrival of prompt-driven generation is so destabilising.


When pages, components, and integrations can be created in minutes, the visible work collapses. The thing agencies have historically sold—the labour of delivery—suddenly looks thin.


This is where most agencies panic.


They start talking about “consulting”.


But consulting, in its original sense, is not delivery with better slides. It is diagnosis. Pre-sales shaping. Understanding the business reality well enough to influence direction, not just implement instructions.


Most agencies aren’t set up for this. Not because they’re incompetent, but because their operating model actively works against it.


Time-and-materials rewards activity, not outcomes.

Projects end, and memory is flushed at handover.

Staff turnover resets context. And attrition is high.


“Best practice” quietly replaces understanding the client’s actual constraints.


So you get a strange theatre of cognitive dissonance.


Agencies talk about transformation while selling resourcing.

They claim outcomes while avoiding accountability for them.

They position themselves as strategic partners while remaining structurally temporary.


In a promptable world, this becomes impossible to sustain.


Because when delivery is cheap, value moves elsewhere.


It moves to people who can design systems that keep getting better after they leave.


This is the new role for third-party suppliers, if they want to remain relevant.


Not builders of pages—but architects of memory.

Not implementers of tools—but operators of safe autonomy.

Not sellers of platforms—but translators of risk, intent, and outcome.


Their value lies in designing institutional memory: what gets captured, how it’s structured, how it’s queried, how it’s governed. In orchestrating AI systems that can act—safely, auditably, reversibly—inside real enterprise constraints. In proving quality, not just claiming it. In designing adoption so humans trust the system enough to use it, and the system learns from that use.


This is harder work than building websites ever was.


It requires continuity.

It requires restraint.

It requires understanding how the business actually functions when nobody is presenting slides.


But it’s the only work that compounds.


In a world where generation is cheap, delivery is no longer the differentiator.


Learning is.


But there is a hidden threat all agencies will discover. Their best staff will have a eureka moment. They'll realise they can become autonomous. Why build agents for their bosses when they can build them for themselves? Agencies will need to acknowledge this risk as part of their investment in change.


Conclusions, for now...


And the uncomfortable truth is this:


Most organisations will remain AI-aware for a long time. Careful. Sensible. Incrementally better.


AI-native organisations will be rarer. Not because the tools are unavailable, but because the structural conditions required - memory, continuity, accountability - are difficult to create.


Agencies don’t disappear in this world.


The ones that survive will not be the fastest builders.


They will be the ones who inspire organisations to build lasting memory.

 
 
 
