iSAQB Software Architecture Gathering 2025

Principal Consultant at INNOQ
Published on August 28, 2025

Putting AI in APIs: Erik Wilde on New Creativity and Reliable Automation

An interview with Erik Wilde

Interfaces are a fundamental concept of software architecture; you design, implement, and redesign them all the time for "application programming". At what point do you start calling an interface an "API"? (The semantics of that word go beyond merely spelling out the abbreviation.)

You call an interface an API once it is meant to be used by others beyond its immediate implementation context. An interface is just a technical boundary; an API is a published contract. That means it is intentionally exposed, documented, and stable enough for others—inside or outside your team or system—to depend on. It is mostly this aspect of intentionality and a wider audience that sets an API apart.
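The distinction between a technical boundary and a published contract can be sketched in code. This is a minimal illustration with entirely hypothetical names: an internal helper that is free to change at any time, next to a versioned, documented entry point that others are invited to depend on.

```python
from dataclasses import dataclass

# Internal interface: a technical boundary, free to change at any time.
def _price_in_cents(quantity: int, unit_cents: int) -> int:
    return quantity * unit_cents

# Published API: an intentional, documented, versioned contract.
@dataclass(frozen=True)
class QuoteRequest:
    quantity: int
    unit_cents: int

@dataclass(frozen=True)
class QuoteResponse:
    total_cents: int
    api_version: str = "v1"  # versioned so consumers can rely on stability

def get_quote(request: QuoteRequest) -> QuoteResponse:
    """Public, stable entry point; breaking changes require a new version."""
    return QuoteResponse(
        total_cents=_price_in_cents(request.quantity, request.unit_cents)
    )
```

The underscore-prefixed function can be renamed or removed tomorrow; `get_quote` cannot, because it was exposed with the intent that a wider audience builds on it.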

Aren't the considerations for making an API useful and accessible to humans the same as those for making it accessible to "AI" (i.e., LLM-based automation)?

Humans and machines both need APIs to be accessible, but in different ways. For humans, documentation works best when APIs share consistent patterns, because that not only makes them easier to understand but also allows tools and practices to be reused across different APIs. Humans can also draw on a wider context without getting confused. Machines, by contrast, need each API to be described in a clear, self-contained way. Even as context windows grow, more context doesn’t always help — AI often struggles to use larger contexts effectively. Humans value APIs that are open, reusable, and adaptable in flexible ways, while machines benefit more from a guided layer of abstraction that emphasizes what can be achieved and how to do it, rather than exposing every possible operation.
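What a clear, self-contained, task-oriented description might look like can be sketched as follows. The shape is loosely modeled on MCP-style tool definitions (name, description, input schema); the tool itself and all field values are illustrative, not taken from any real system.

```python
# A self-contained tool description: the model sees only this text,
# so the description must carry the "when" and "why", not just the "what".
order_status_tool = {
    "name": "get_order_status",
    "description": (
        "Look up the current status of a customer order. "
        "Use this when the user asks where their order is. "
        "Returns one of: 'pending', 'shipped', 'delivered'."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order identifier, e.g. 'ORD-1234' (illustrative)",
            }
        },
        "required": ["order_id"],
    },
}
```

A human developer could work from a terse endpoint listing and fill the gaps from experience; the machine consumer gets everything it needs inside this one descriptor.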

You have addressed the environmental footprint of APIs in "Getting APIs to Work" (episodes with Phil Sturgeon, likely others...) in the past. When thinking about software efficiency and carbon awareness, does that fit well with what is currently being promoted as "Agentic AI"?

The environmental footprint of agentic AI is significant, because exploratory use by agents often drives more orchestration, more compute cycles, and higher energy use. That makes it seem at odds with the push for efficiency and carbon awareness in software and APIs. The way forward is to see them as complementary: agents can explore creative solutions and uncover new ways of doing things, but once a promising approach is found, it should be codified into a deterministic, repeatable workflow that is energy-efficient, scalable, and auditable. This balances the benefits of AI’s creativity with the need for sustainable and compliant operations, using as much AI as necessary but as little as possible. By designing architectures that make the transition from experimentation to efficient execution smooth and deliberate, we can address both the unease about AI’s unpredictability and the need to control its substantial energy footprint.
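The "explore, then codify" idea described above can be sketched as code. All names here are hypothetical: the point is that once an agent has discovered a working sequence of steps, that sequence is frozen into a deterministic workflow that runs repeatedly without any further model calls.

```python
from typing import Callable

Step = Callable[[dict], dict]

def codify(steps: list[Step]) -> Step:
    """Freeze an ordered list of steps into one deterministic,
    repeatable workflow -- no LLM in the loop at execution time."""
    def workflow(payload: dict) -> dict:
        for step in steps:
            payload = step(payload)
        return payload
    return workflow

# Steps an agent might have discovered during an exploratory run:
def validate(p: dict) -> dict:
    assert "amount" in p, "payload must contain an amount"
    return p

def convert_to_cents(p: dict) -> dict:
    return {**p, "amount_cents": round(p["amount"] * 100)}

# The codified result is cheap, auditable, and energy-efficient to rerun.
process_payment = codify([validate, convert_to_cents])
```

The expensive, exploratory phase happens once; every subsequent execution is ordinary, predictable software.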

How is MCP related to OpenAPI? Aren't they both trying to achieve the same thing? (That would be: standardizing how APIs are described and making them easily accessible.) Or is it closer to JSON:API? (Standardizing APIs themselves.)

MCP, OpenAPI, and JSON:API are all about exposing capabilities, but they target different consumers. MCP is designed specifically for LLMs, giving them tools and resources in a way that fits how they operate. OpenAPI, by contrast, is aimed at developers who want to consume HTTP APIs, focusing mainly on structuring endpoints and attaching schemas to them. JSON:API then adds another layer by standardizing how those schemas are structured and what common concepts an API should expose, so that developers benefit from conventions they already know and can reuse tools that support them. While it is possible to generate MCP servers automatically from OpenAPI, this usually doesn’t give the best results: for more complex APIs, a list of endpoints is not enough, because LLMs lack the implicit understanding that human developers bring when writing code. That’s the fundamental difference — OpenAPI and JSON:API assume a human developer can fill in the gaps, whereas MCP must provide enough task-oriented structure for an LLM to succeed without that human intelligence in the loop.
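Why a mechanical OpenAPI-to-MCP mapping falls short can be made concrete. This sketch uses illustrative paths and names throughout: the generated tool keeps the structure (endpoint, schema), but its description carries none of the intent a human developer would infer.

```python
# Illustrative OpenAPI-style operation (path and fields are made up).
openapi_operation = {
    "path": "/orders/{id}/refunds",
    "method": "post",
    "operationId": "createRefund",
    "requestBody": {
        "schema": {
            "type": "object",
            "properties": {"amount_cents": {"type": "integer"}},
        }
    },
}

def naive_tool_from_operation(op: dict) -> dict:
    """Mechanical mapping: structure survives, intent does not.
    The resulting description gives an LLM no guidance on when,
    why, or under which preconditions to call this operation."""
    return {
        "name": op["operationId"],
        "description": f"{op['method'].upper()} {op['path']}",
        "inputSchema": op["requestBody"]["schema"],
    }

tool = naive_tool_from_operation(openapi_operation)
# "POST /orders/{id}/refunds" is technically correct, but says nothing
# about partial refunds, idempotency, or what must happen beforehand.
```

A human developer reading the OpenAPI document fills those gaps from experience; a task-oriented MCP description has to spell them out.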

Do LLMs make certain approaches to automation obsolete? Or are they just another use case? (Due to the non-determinism, I suspect they don’t really replace reliable system integration.)

Automation is usually about reliability, repeatability, and efficiency — and LLMs don’t deliver that. They are not deterministic, not reliably reproducible, and not particularly efficient. What they do bring, though, is a new kind of creativity: the ability to bridge gaps, try out solutions, and handle messier parts of automation that traditional approaches cannot. The best way to see them is as another tool in the toolbox — one we can use selectively, for exploration or for certain parts of a process, but not for the parts that demand strict guarantees. Architectures that combine LLM-driven exploration with codified, deterministic workflows can get the best of both worlds: AI where creativity adds value, and traditional automation where reliability is essential.