By Jabulani Simplisio Chibaya
HARARE – ANTHROPIC’S engineers just showed the world something remarkable: with the right five-part structure and a little borrowed syntax from 1990s web technology, any professional can unlock results from AI that most teams spend months chasing. But there’s a catch — and it involves everything you thought you’d left behind when you stopped coding.
In a fifteen-minute video that spread rapidly through LinkedIn feeds last week, two engineers from Anthropic’s Applied AI team sat down in front of a browser console and did something surprisingly unflashy: they wrote a prompt. Not a clever hack. Not a multi-million-dollar fine-tuning run. Just a structured, deliberate, five-part text instruction — and then they showed what happened when they got it right. The results were quietly jaw-dropping. A travel itinerary, personalised to specification, formatted to the millimetre, produced in seconds. The kind of output that, eighteen months ago, would have taken a junior employee two hours to produce.
Hannah Moran and Christian Ryan, both carrying the title “Applied AI, Anthropic,” were not unveiling a new model. They weren’t announcing a product. They were teaching something that sounds mundane until the implications land: how to think about the structure of language when you’re speaking to a machine that has read almost everything humanity has ever written — and still needs you to be precise.
The session was titled, with charming understatement, Prompting 101. It has since become something of a masterclass for the tens of thousands of founders, executives, and engineers who caught it in their feeds. Because inside those fifteen minutes lives a framework that quietly obliterates the popular myth that AI is a magic box you can talk to like a friend and expect brilliance from.
“The model isn’t confused. It’s waiting for you to be clear.” — HANNAH MORAN, APPLIED AI, ANTHROPIC
The Five-Part Architecture of a Great Prompt
The first thing Moran and Ryan established was architectural. A great prompt, they argued, isn’t a sentence. It isn’t even a paragraph. It’s a document — one with a defined skeleton. They laid it out in five stacked layers, each doing a specific job that the others cannot do:
THE ANTHROPIC PROMPT ARCHITECTURE — FIVE LAYERS
- Task context. What is this AI, and what is its fundamental job in the world? Not “summarize this” but “You are a senior travel consultant specialising in adventure itineraries for experienced hikers.”
- Tone context. How should it speak? Authoritative and brief? Warm and exploratory? This is the register, the voice, the emotional temperature of every word the model will produce.
- Background data, documents, and images. The raw material. The facts the AI could not possibly know without you. The user’s profile. The PDF. The spreadsheet. The photo.
- Detailed task description and rules. The actual instruction, written with the specificity of a legal contract. What to include. What to exclude. Edge cases. Fallbacks. Constraints.
- Examples. The most underestimated layer of all. Show the model exactly what a perfect output looks like, and it will reproduce that pattern with striking fidelity.
The sequence matters. So does the completeness. Omit layer two and the model defaults to a generic, slightly corporate cadence you didn’t ask for. Skip layer four and you get technically correct output with the wrong structure. Drop examples entirely and you may get something good — or you may get something that requires three rounds of expensive re-prompting to correct.
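To make the skeleton concrete, here is a minimal sketch in Python of how the five layers stack into a single prompt string. Everything in it, from the persona to the helper name build_prompt to the placeholder content, is illustrative rather than taken from the Anthropic session:

# Minimal sketch: stacking the five layers into one prompt string.
# All names and content here are illustrative placeholders.
TASK_CONTEXT = ("You are a senior travel consultant specialising in "
                "adventure itineraries for experienced hikers.")
TONE_CONTEXT = "Write in a warm, concise voice. No filler."

def build_prompt(background: str, rules: str, example: str) -> str:
    """Assemble the layers in order: role, tone, data, rules, example."""
    return "\n\n".join([
        TASK_CONTEXT,                                   # 1. task context
        TONE_CONTEXT,                                   # 2. tone context
        f"<background>\n{background}\n</background>",   # 3. background data
        f"<rules>\n{rules}\n</rules>",                  # 4. detailed rules
        f"<example>\n{example}\n</example>",            # 5. examples
    ])

Each layer occupies its own clearly delimited slot, so a gap in any one of them is immediately visible. That is precisely the point of treating a prompt as a document rather than a sentence.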
“Disorganized prompts are hard for Claude to comprehend,” the slide behind them read. Which sounds almost too simple to be revelatory. And yet, if you have ever stared at a ChatGPT or Claude output and thought, “this is close but not quite right,” the odds are overwhelming that the failure was yours, not the model’s. The model isn’t confused. It’s waiting for you to be clear.
XML: The Unexpected Hero of the AI Age
Here is where the session took its most surprising turn — and where it began to matter enormously to anyone with a technical background, or anyone who thought their technical background had been made redundant.
The language Moran and Ryan reached for to organize their prompts was not Python. It was not JSON. It was not some exotic new AI-native format invented in a San Francisco lab. It was XML — the Extensible Markup Language, the somewhat unglamorous workhorse of web development from the 1990s and early 2000s, beloved by enterprise software architects and largely invisible to everyone else for the past decade.
<!-- A structured prompt using XML delimiters -->
<location>
{{LOCATION}}
</location>
<num_days>
{{NUM_DAYS}}
</num_days>
<user_preferences>
{{USER_PREFERENCES}}
</user_preferences>
Present your itinerary within <itinerary> tags.
Why XML? Because Claude — and, by extension, most large language models trained on internet-scale data — has seen an enormous volume of XML-tagged text during training. It understands that content between <location> tags is a place. That content between <user_preferences> tags is a person’s stated desires. The tags aren’t decoration. They are cognitive scaffolding for a system that processes everything as a continuous stream of tokens. They tell the model where one kind of meaning ends and another begins.
“XML tags are nice because you can actually specify what’s inside them,” Ryan noted during the session — a line that, if you have spent any time wrestling with ambiguous AI outputs, lands with the force of obvious-in-retrospect revelation. Claude, the team emphasised, understands all types of delimiters. But XML is preferred: the boundaries are unambiguous, and it is token-efficient, meaning you spend less of your limited context window on structural overhead and more on actual content.
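The payoff of those tags shows up downstream too. The sketch below assumes the official anthropic Python SDK and uses an illustrative model name and prompt: because the prompt asks for the answer inside <itinerary> tags, the calling code can extract it deterministically instead of guessing where the itinerary begins.

# Assumes the anthropic Python SDK and an ANTHROPIC_API_KEY in the
# environment; the model name and prompt content are illustrative.
import re
import anthropic

prompt = """<location>
Drakensberg, South Africa
</location>
<num_days>
3
</num_days>
Present your itinerary within <itinerary> tags."""

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model choice
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
text = message.content[0].text

# The prompt requested the answer inside <itinerary> tags, so the
# caller can pull it out with a simple, deterministic pattern match.
match = re.search(r"<itinerary>(.*?)</itinerary>", text, re.DOTALL)
itinerary = match.group(1).strip() if match else text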
The person who understands why the tags work will always outperform the person who only knows that they work.
Examples: The Layer Everyone Skips
If XML organisation was the session’s technical revelation, the deep treatment of examples — what machine learning practitioners call “few-shot prompting” — was its strategic one.
The principle is elegant: when you show a model one or two examples of exactly the output you want, you are not just guiding it. You are, in effect, teaching it a micro-task in real time, without any training run, without any fine-tuning, without any cost beyond the tokens it takes to write the example. The model generalises from what you show it with remarkable fidelity.
“Examples act as concrete templates,” the slide read, “making it easier for Claude to understand and replicate the desired output. This is especially crucial in tasks that require consistent formatting, specific jargon, or adherence to industry standards.” The team’s kicker was pointed: “It’s often much more efficient to just show Claude an example or two of your desired outputs rather than trying to encapsulate all the nuance with text descriptions.”
Consider what this means in practice. You are a management consultant who needs Claude to produce executive briefings in your firm’s house style — a very particular blend of assertive openings, numbered recommendations, and specific hedging language around risk. You could spend three hundred words trying to describe that style. Or you could paste in one exemplary briefing, tell Claude to match it, and watch the model reverse-engineer your house style from a single data point. The latter is faster, cheaper, and produces better output.
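As a sketch of that workflow, with a deliberately toy briefing standing in for the real house style, the few-shot version looks like this:

# Illustrative few-shot prompt: one toy briefing stands in for a real
# house-style document.
example_briefing = (
    "RECOMMENDATION: Consolidate the two regional vendors by Q3.\n"
    "1. Open contract renegotiations in June.\n"
    "2. Pilot the merged workflow with one business unit.\n"
    "RISK: Savings estimates assume stable freight costs; treat them "
    "as directional, not committed."
)

new_notes = "Raw notes from the current engagement go here."

prompt = (
    "Write an executive briefing matching the structure, tone, and "
    "hedging style of the briefing in the <example> tags.\n\n"
    f"<example>\n{example_briefing}\n</example>\n\n"
    f"<notes>\n{new_notes}\n</notes>"
)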
The benchmark the team cited: relevance, diversity, and quantity — at least three to five examples — give the model enough signal to generalise reliably. One example teaches a pattern. Three examples confirm it. Five examples lock it in.
The Uncomfortable Truth About “Easy” Prompting
Here is the part of the Anthropic session that the LinkedIn summary slides did not quite capture — the subtext that any experienced engineer or technical founder will have heard clearly, even if it wasn’t stated as bluntly as it deserved to be.
Prompting is genuinely more accessible than programming. You do not need to compile anything. You do not need to understand memory management, or recursion, or the difference between a class and an instance. The feedback loop is seconds, not hours. The barrier to entry has fallen off a cliff.
But accessibility is not the same as simplicity of understanding. The five-layer framework works because it maps onto a logical structure — persona, tone, context, rules, examples — that is, at its core, the same structure as a well-specified software function. XML works because it encodes semantic meaning in a way that the model can parse — which is, at its core, the same reason it was invented for data interchange between systems in the first place. Few-shot prompting works because language models are, mechanically, in-context learners — which is a non-trivial thing to understand if you want to use it strategically rather than accidentally.
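The analogy can be made literal. Restated as a purely illustrative function contract, the five layers look like this:

# Illustrative only: the five prompt layers restated as a function contract.
def generate_itinerary(
    location: str,            # background data (layer 3)
    num_days: int,            # background data (layer 3)
    preferences: list[str],   # background data (layer 3)
) -> str:
    """Act as a senior travel consultant (task context), in a warm,
    concise voice (tone context), obeying the stated rules (detailed
    task description) and matching the sample output (examples)."""
    raise NotImplementedError  # in prompting, the model supplies the body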
The person who understands why the tags work will always outperform the person who only knows that they work. The engineer who can think in terms of inputs, outputs, edge cases, and validation will write better prompts than the executive who cannot. The developer who can read a stack trace when Claude produces unexpected output will diagnose in thirty seconds a problem that the non-technical user will spend thirty minutes on.
Coding skills are not dead. They are metamorphosing. The market is not rewarding people who can write Python syntax from memory. It is rewarding people who can think algorithmically — who can decompose a messy real-world task into clean logical components, who can anticipate failure modes, who can reason about what a system needs to know versus what it already knows. That kind of thinking is what computer science has always been trying to teach. AI just changed the language in which you express it.
What This Means for Your Business, Right Now
The practical implications of the Anthropic session are both immediate and long-horizon. In the short term, any team deploying AI for repetitive, high-volume tasks — customer support drafts, report generation, document summarisation, data extraction — should audit their prompts against the five-layer framework immediately. The gap between a disorganised prompt and a structured one is not marginal. In enterprise deployments, teams routinely report output quality improvements of thirty to fifty percent simply from restructuring their system prompts, without changing the model or the pipeline.
In the medium term, the session points toward a coming bifurcation in the AI-enabled workforce. The first group will treat AI as a better search engine — ask it things, get answers, move on. The second group will treat it as a programmable collaborator — give it a persona, a framework, examples, constraints, and a clear success criterion, then iterate. The outputs the second group produces will be categorically different from what the first group gets, and the gap will only widen as models become more capable and reward structural clarity even more richly.
And in the long term — the horizon that matters most for founders and investors thinking about where moats will actually form — the signal from Anthropic’s session is that the value is migrating upward in the stack. The commodity layer is the model. The durable layer is the prompt architecture, the curated example library, the domain-specific context that makes a general-purpose intelligence perform like a specialist. That is intellectual property. That is defensible. That is, increasingly, what it means to build something that lasts.
Moran and Ryan ended their session with a call to experimentation that was, in retrospect, the most important thing they said. The framework is not a formula. It is a starting point. The best prompt engineers — the ones building products that will define the next decade of software — are the ones who understand the principles deeply enough to break the rules intelligently.
Which, it turns out, is exactly what the best engineers have always done.
Jabulani Simplisio Chibaya is a Data and AI Consultant specializing in data science, artificial intelligence, blockchain, and cryptocurrency innovation. A seasoned conference speaker, he also writes on the intersection of technology, regulation, and economic development. Contact: Cell: +263 778 921 881, Email: simplisiochibaya22@gmail.com, LinkedIn: https://www.linkedin.com/in/jabulani-simplisio-chibaya