When I talk to other engineering leaders about how we integrate AI, the question I get most often is: who's on your AI team?
We don't have one. That's intentional.
The argument for a dedicated AI team
The standard case: AI work requires specialised skills, specialised infrastructure, and a different development cadence than product engineering. If you put it in the product team, it gets deprioritised. If you put it with the data team, it lacks product context. A dedicated team gives you focus and ownership.
I've seen this work. At companies with a hundred-plus engineers and a genuine need for custom model development, it makes sense.
Why it doesn't work at our scale
We have six engineers. A dedicated AI team would be one or two people: a silo that everyone else depends on for AI work. Now you have a team that's either a bottleneck or irrelevant, depending on the quarter.
More importantly: the AI work we do is deeply integrated with the product and infrastructure work. The LLM prompt that processes a waste monitoring report is owned by the backend engineer who owns the report processing pipeline. The evaluation harness for the CV model is owned by the ML engineer who owns the CV pipeline. If these lived in a separate team, the ownership split would create coordination overhead we can't afford, and architectural decisions would get made without full context.
What we do instead
Every engineer on the team is expected to integrate AI tooling into their domain. This isn't a policy. It's a hiring criterion. When I interview candidates, I ask how they've used LLMs in their work in the past year. Not as a filter - plenty of good engineers haven't had the opportunity - but as a signal about how they think about tools.
We have a shared space (a Notion page that nobody loves but everyone uses) where engineers document what's working: which prompts, which models, which evaluation approaches. This is informal. It's enough.
When we have a new AI integration to build, it's a task for the engineer who owns the relevant system. They're closest to the problem. They make the integration decisions. They own the operational consequences.
The failure mode to watch
The risk in this model is that AI becomes everyone's responsibility and therefore nobody's. I've seen this happen in teams that distributed ownership without being specific about it.
We avoid this by being explicit: for each AI-integrated system, there is one named owner. They wrote the prompt, they own the evaluation, they get paged if it breaks. The responsibility isn't shared. The knowledge is shared.
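The convention is simple enough to sketch in a few lines. This is a hypothetical illustration, not our actual tooling; the system names, owner names, and the `owner_of` helper are all invented for the example. The point is the shape: a flat registry that maps every AI-integrated system to exactly one named owner, and fails loudly when a system has none.

```python
# Hypothetical ownership registry: one named owner per AI-integrated
# system. Systems and owners here are invented for illustration.
OWNERS = {
    "report-llm-prompt": "backend-engineer-a",  # waste report processing
    "cv-eval-harness": "ml-engineer-b",         # CV pipeline evaluation
}

def owner_of(system: str) -> str:
    """Return the single named owner for a system.

    Raises KeyError if no owner is registered - an unowned
    AI-integrated system is a bug in the org, not a default.
    """
    if system not in OWNERS:
        raise KeyError(
            f"No owner registered for {system!r}: "
            "every AI-integrated system needs exactly one owner"
        )
    return OWNERS[system]
```

Whether this lives in code, a CODEOWNERS file, or a paging rota matters less than the invariant it encodes: lookup always returns exactly one person, never a team and never nobody.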
On the actual work
Seventy percent of what we do with AI is not model development. It's integration, evaluation, prompt engineering, and tooling. The model development - when we do custom fine-tuning or architectural work on the CV pipeline - is specialised work that a specific engineer owns and leads. But that's a fraction of the total AI surface area.
The framing of "AI team" implies the model work is the main work. For most companies doing applied AI, the integration and operational work is the main work. Organise around what you actually spend time on.
What this requires
Engineers who are comfortable working at the edge of the stack. Not AI researchers. People who can build a production integration, instrument it, evaluate it, and debug it when it breaks. That's a software engineering skill set with an AI component, not a research skill set.
We're lucky to have found people like this. They're increasingly common.
With gusto, Fatih.