A Case for Humans

Will AI eat humans?

This wave of technological disruption feels visceral because it is personal. Earlier waves of technological change predated our generation and displaced workers who relied on ‘muscle power’. Recent waves of disruption – the Internet, mobile phones and SaaS tooling – enabled us and rewarded years of education and hard-won expertise. But for the first time, the technology is coming for those whose power is knowledge. Lawyers, bankers, engineers – and yes, even VCs. We are not being asked to adopt new tools. We are being asked to justify ourselves against them.

So will AI eat humans? My friend recently reminded me of Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” We are likely doing both right now. In the short run, the hype still outpaces the reality. Most enterprise AI deployments remain messy, fragile, and far from autonomous. But in the long run, the transformation will be more profound than almost anything we can currently imagine. 

While the end state remains unknown, I want to make a case for humans. Not a sentimental one. A structural one. I want to explore why we remain essential, and exactly where.

A case for humans: the framework of context and trust

I believe the answer lives at the intersection of two dimensions: context and trust. The higher the requirement for either context or trust in any given interaction, the more irreplaceable the human element becomes. 

The X-Axis: Context

Context is the ability to adapt to the end user’s specific needs and environment. As Florian, CEO of our portfolio company Dataiku, the platform for AI success, puts it: “Context is king.” I think it can be expressed simply:

Complexity is the nature of the environment. How many moving parts, how many systems, how many exceptions? Critically, it is also how often this environment changes. A static, rule-bound process is one thing. A living, shifting one is another entirely. 

Specificity is the degree to which the work is bespoke to a particular customer, domain, or environment. 

Accessibility is how easy it is to reach the information that matters. Gathering context from structured, clear, readily available data is easier than from messy, disparate and undocumented knowledge.

The more complex and specific a workflow is, the more context it demands. But the denominator also matters: the more accessible the underlying knowledge, the more AI can absorb it. Where accessibility is low, humans remain.
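Putting the three components together as the ratio the paragraph above describes (the multiplicative form in the numerator is my reading; the text only commits to accessibility sitting in the denominator):

```latex
\text{Context required} = \frac{\text{Complexity} \times \text{Specificity}}{\text{Accessibility}}
```

Read this way, AI gains ground as accessibility rises, and humans hold ground where complexity and specificity outrun what the available data captures.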

As the agentic AI vision moves from concept to deployment, we are confronting precisely this friction. The dream is elegant: autonomous agents orchestrating workflows across platforms, anticipating needs, executing decisions. The reality is that most work happens across multiple systems and teams, with their often informal processes, unclean data, and deeply embedded institutional knowledge. Context does not always live in open APIs. It lives in hallway conversations, tribal memory, or the instinctive read of a room.

The Y-Axis: Trust

Trust is a word we use constantly and define rarely. I want to borrow a definition from Jamie Daum, CEO of our portfolio company Inforcer. Inforcer enables thousands of Managed Service Providers to adopt AI, a space where trust is the entire product. He defines it as: 

Intimacy is rapport, the accumulated warmth of human interaction, the sense of being understood. 

Credibility is expertise, deep domain knowledge, and a track record that speaks for itself. 

Risk is what you stand to lose – the regulatory exposure, the reputational damage, the cost of getting it wrong.
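One way to render this three-part definition as a formula, assuming intimacy and credibility build trust while risk erodes it (the exact arithmetic is my reading, echoing the classic trust-equation form):

```latex
\text{Trust} = \frac{\text{Intimacy} \times \text{Credibility}}{\text{Risk}}
```

The higher the risk, the more intimacy and credibility a vendor must accumulate to clear the same bar.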

The need for trust is obvious in things that feel personal, such as healthcare or education. But trust extends beyond. When a business chooses a security vendor, it is placing a bet on a relationship, not purchasing features. When a financial institution selects a fraud detection partner, it is asking whether it can trust this vendor with its reputation. The higher the stakes and the deeper the expertise required, the more the relationship depends on genuine human connection – and the harder it is for AI to substitute.

Plot any task, any role, any interaction on these two axes. As you move toward deeper context and higher trust, you enter the territory where humans are not just useful but indispensable. The upper right is where our humanity is not a nice-to-have. It is the product itself.

So what does a case for humans mean?

This framework has become a lens through which I think about investment opportunities, because without a 2×2, I’d be a philosopher, not a VC. The context-trust matrix is a map, and each quadrant tells you what kind of businesses win. It applies to the industry a business operates in, but equally to its product and go-to-market strategy – where the real question is what locks you in.

Low Context, Low Trust >> “Automate or Die”

The work is codifiable, the stakes are manageable, and the client does not need to trust you with much to let you start. Think BPOs: call centres, back-office processing, data entry. The work is transactional, already outsourced at a massive scale, and churn is high. This part of the market is ready for full agentic automation. The winning startup is an execution machine: it sells the outcome, not the tool. Speed and cost are everything. Build fast or be built over.

High Context, Low Trust >> “Embed or Lose”

The environment is messy: multiple systems, informal processes, undocumented knowledge, but the stakes are contained. Think custom software agencies or management consultants. The client needs you to deeply understand their world, but the engagement is project-based and can be replaced. The winning startup owns the context of how the customer actually works. The moat is products that remember and compound tribal knowledge and hybrid delivery models that encode it into repeatable playbooks. The deeper you embed, the harder it is to rip you out.

Low Context, High Trust >> “Earn It”

The work may not be technically complex, but the stakes are high. Clients hand over sensitive data, money, or legal exposure, and mistakes are expensive. Think accountants or lawyers. The process is largely standardised, but the client needs to believe someone is accountable when the audit lands. The winning startup here makes trust scalable without diluting it. Accountability is built into the product so humans can serve more clients without breaking the bond. AI handles the intelligence. Humans carry the weight.

High Context, High Trust >> “The Human Premium”

This is where the stakes are high, and the work is deeply entangled with the client’s reality. Think of regulated industries like healthcare, defence, or financial services, where knowledge is embedded, exposure is real, and the regulatory environment is constantly shifting. The client is not buying just software. They are buying a relationship with someone who understands their specific world and can be held accountable for the outcome. The winning startup here does not try to remove the human. It arms them. The product amplifies human judgment rather than replacing it, and commands the highest margins and deepest loyalty as a result.

Technology will always shift the boundary

Technology is, by definition, about change. And the boundaries of both context and trust will continue to shift, as they have many times before. Consider this. Only a decade ago, we would not have imagined a social media platform with enough context to recommend a perfect product we did not even know we needed. And two decades ago, we would not have had enough trust to enter our credit card details into a web browser to buy it. 

No quadrant is permanent, and the curve on the chart above is not fixed. It moves outward. What once required deep human involvement gradually becomes automatable and, one day, autonomous. What matters is what endures as it shifts. 

Paradoxically, this framework places humans at the centre of change. When IQ is commoditised, and any model can retrieve, synthesise, and reason, EQ becomes the scarcest resource. Not just emotional intelligence as a soft skill, but as the foundation of leadership and relationships – what I call “radical humanity”. It shapes how I think about every founder we back: the ones with the creativity and curiosity to ask the right questions and imagine new outputs, so that AI can do the work. The ones with resilience to keep building when the technology shifts beneath them, and with the self-awareness, empathy, and instinct for trust to bring people around them – team, customers, shareholders – towards a successful business outcome.

Will AI eat humans? Not the ones worth backing.
