
I’ve been thinking about hiring a lot lately. Not in the abstract “future of work” TED-talk sense — in the very concrete “I need to find the right people for my team” sense.
I run alfatier, a small AI-native consulting firm in Hamburg. Our operating model is “5 perform like 20” — five humans working alongside AI agents, delivering what used to take a much larger team. (I unpacked that idea in The Post-Headcount Era — including a somewhat sobering 48-dimension comparison of what AI agents do better and what humans still own.) It works. But it’s forcing me to ask a question I don’t think our industry has answered yet: if AI agents keep getting better at the things we’ve always hired for, what should we actually be hiring for?
Most job postings I come across — including ones I’ve written myself in the past — are still built for a world that’s fading fast.
A formula that works for both sides
We use a simple model to describe what makes up an AI agent:
Agent = Prompt + Tools + Knowledge + Skills + Reasoning + Guardrails
An agent needs a clear role (Prompt), instruments it can use (Tools), a knowledge base (Knowledge), practiced capabilities (Skills), the ability to think and plan (Reasoning), and boundaries that keep it in line (Guardrails).
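To make that concrete, here's a minimal sketch of how those six blocks might look as an agent configuration. The class and field names are illustrative assumptions for this post, not any particular framework's API:

```python
from dataclasses import dataclass

# A deliberately simplified sketch; names are illustrative,
# not a real framework's API.
@dataclass
class AgentConfig:
    prompt: str            # role and goals: who the agent is, what it's for
    tools: list[str]       # instruments it can call: APIs, databases, code execution
    knowledge: list[str]   # what it can retrieve from: RAG sources, docs
    skills: list[str]      # practiced capabilities: workflows, automations
    reasoning: str         # how it thinks and plans, e.g. chain-of-thought
    guardrails: list[str]  # boundaries: rules, limits, escalation logic

# A hypothetical cloud-consulting agent assembled from the six blocks
cloud_agent = AgentConfig(
    prompt="You advise mid-sized clients on cloud migrations.",
    tools=["pricing_api", "terraform_plan", "code_execution"],
    knowledge=["azure_docs", "aws_whitepapers", "reference_architectures"],
    skills=["draft_solution_proposal", "cost_estimate", "architecture_review"],
    reasoning="plan-then-act with chain-of-thought",
    guardrails=["never commit to pricing", "escalate compliance questions to a human"],
)
```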
Nothing wild so far. But here’s what tripped me up: these six building blocks map onto humans just as well.
| Building Block | AI Agent | Human |
|---|---|---|
| Prompt | System prompt with role and goals | Self-concept, values, sense of purpose |
| Tools | APIs, databases, code execution | Software, methods, professional network |
| Knowledge | RAG, knowledge bases, documentation | Education, experience, domain context |
| Skills | Orchestrated workflows, automations | Practiced routines, trained capabilities |
| Reasoning | Chain-of-thought, planning | Judgment under uncertainty, intuition |
| Guardrails | Rules, limits, escalation logic | Ethics, common sense, knowing when to say no |
Now look at that table and ask: which building blocks do 90% of job postings focus on?
Tools. Knowledge. Skills.
And which ones are AI agents getting scarily good at?
Same three.
Let me show you what I mean
I want to use two real examples here, and I want to be upfront about something: the postings I'm about to reference are from well-established and respected companies. There's absolutely nothing wrong with them, and I genuinely hope both firms find the right people quickly. I'm using them not to criticize, but because they're near-perfect examples of how our entire industry writes job ads. I've written dozens of these myself over 25+ years. That's what makes them useful as reference points.
Example 1: CONET, a well-established German IT services company, posted an IT Cloud Consultant role. Here’s the gist of what they’re asking for:
Tasks: Client advisory, solution proposals, workshop moderation, keeping up with tech trends, working with sales on acquisition.
Requirements: Several years of cloud consulting experience. Solid understanding of cloud concepts and agile practices. Knowledge of current technology trends. Structured approach. Communication skills. Fluent German and English. Willingness to travel.
Example 2: Public Cloud Group (PCG), another respected player in the cloud space, is looking for a (Senior) AWS Cloud Consultant. Their posting asks for:
Tasks: Acting as a “Trusted Advisor” to enterprise clients, consultative selling, identifying optimization opportunities across modernization, cost reduction, security, and AI, plus conducting strategic workshops.
Requirements: Multiple years of AWS experience. Strong communication and stakeholder management skills. Fluency in German and English. The ability to coordinate between customers and internal technical teams.
Read those requirements again across both postings. “Several years of experience.” “Solid understanding.” “Knowledge of trends.” “Multiple years of AWS experience.”
Every single one targets Knowledge and Skills — the stuff that lives in documentation, certification prep courses, and years of repetition. The stuff an AI agent with access to Azure docs, AWS whitepapers, and every Terraform provider registry already has on lock before your first coffee is ready.
An AI agent knows every cloud architecture pattern. It has knowledge of current trends — real-time, not from last quarter’s re:Invent recap. It can draft solution proposals grounded in thousands of reference architectures.
What it can’t do: look a nervous CFO in the eye and make him believe that cloud migration won’t sink his company. Or pick up that the IT director in the workshop isn’t being difficult — he’s scared of becoming irrelevant. And then handle that gracefully instead of plowing through the next slide. These are the human capabilities I keep coming back to — the ones I explored in The Hybrid Team Revolution, where I argue that the entire professional services industry needs to rethink how it structures teams, prices work, and measures value when half the team runs on electricity.
So what would a different kind of job posting look like?
I’ve been working on this for alfatier. It’s not finished, and it probably never will be. But the starting question is fundamentally different: not “what should this person already know?” but “what can this person do that no agent ever will?”
Cloud Transformation Partner (m/f/d) — alfatier GmbH
You won’t work alone. You’ll work with AI agents that handle the heavy lifting — so you can do what no agent can.
Your role in a human-agent team:
You’re the person our clients trust. You translate between technology and business strategy. You figure out what the client actually needs — even when they haven’t figured it out themselves yet. Our AI agents deliver analysis, architecture options, and documentation. You deliver judgment, empathy, and decisions.
What matters — and what doesn’t:
Five years of Azure experience → Curiosity to get into any cloud platform, supported by AI tools that explain anything on demand
Certifications (AZ-104, AWS SAA, etc.) → The ability to critically evaluate textbook knowledge and know when best practices don’t fit the situation
Deep Terraform/Kubernetes expertise → The instinct to ask the right questions and evaluate AI-generated architectures, instead of hand-writing every YAML file
What we’re actually looking for:
Judgment. You know the difference between a technically correct solution and one that actually makes sense for this particular client. When the agent serves up five options, you know which one to recommend — and why.
Relationship intelligence. You can tell when a project is stuck for political reasons before it fails for technical ones. You build trust with people who’ve been burned by consultants before.
Comfort with ambiguity. You don’t panic when there’s no clear answer. You walk clients through uncertainty instead of faking confidence.
Learning velocity. You don’t need five years of Azure experience if you can build real understanding of a new domain in a week with AI support. What matters is how fast you form the right mental models — not how long your experience section is.
Orchestration instinct. You know when to let the agent run and when to step in. You’re the conductor, not the entire orchestra.
Ethical backbone. You’ll tell a client their project doesn’t make sense, even when saying yes would be easier and more profitable. No agent is ever going to make that call.
What you won’t need: A perfect CV. Five certifications that expire in two years. The ability to write Terraform in your sleep. Our agents do all of that. We need what they don’t have — a human.
Why this isn’t just philosophical musing
I’m not arguing that technical knowledge is irrelevant. A cloud consultant still needs to understand what a VPC is and why latency matters. But that kind of knowledge has stopped being a barrier to entry — it’s become a commodity. With AI assistance, someone can acquire in weeks what used to require years of hands-on grind.
The real differentiation is shifting to the two building blocks that AI agents are genuinely bad at: Reasoning in the human sense (making sound calls when things are messy and unclear) and Guardrails in the human sense (the ethical compass and common sense that tells you something is wrong even when the spreadsheet says otherwise).
Three practical consequences follow from this:
CVs are losing their signal. A list of certifications and tools tells me what someone learned in the past. When AI agents act as knowledge multipliers, your learning history matters less than your learning ability. The better interview question isn’t “Do you know Kubernetes?” It’s “Walk me through the last time you had to get up to speed on something completely unfamiliar. How did you do it? What did you get wrong first?”
So-called soft skills are becoming the actual hard requirements. “Strong communication skills” shows up in every job ad and gets tested in zero interviews. In human-agent teams, communication stops being a nice-to-have — it’s the whole job. If you can’t think clearly and express yourself well, you can’t steer an AI agent effectively. And if you can’t build trust with another human, you’ll eventually be replaced by someone — or something — that’s cheaper.
Hiring processes need to catch up. If I want to know whether someone has good judgment, no multiple-choice test and no certificate will tell me. I need to put them in a room with a messy scenario: “Here’s an AI-generated cloud architecture for a mid-sized manufacturer with 500 employees. What’s missing? What would you do differently? And how would you explain your recommendation to a managing director who doesn’t know what a subnet is?” That’s the new hiring test. And honestly, it’s a better one than what we had before.
The shift
We talk a lot about AI replacing people. I think that framing misses the point. AI is replacing tasks — specifically the knowledge-retrieval and skill-execution tasks that we’ve spent decades using as proxies for competence.
What it can’t replace is the stuff we never bothered to screen for properly: the ability to make a call when there’s no right answer, to hold a relationship together when things get tense, and to say “we shouldn’t do this” when everyone else in the room wants to move forward.
The future of hiring doesn’t belong to the best CV. It belongs to the best human.