Introducing:

Cog·ni·tec·ture

/ˌkɒɡ.nɪˈtɛk.tʃər/

noun


The craft of orchestrating AI with context — to outcomes you verify and own.

TL;DR

AI makes execution cheap & fast. Quality context & judgment are the new bottlenecks. The cognitecture craft is orchestrating AI with context, and applying your judgment to verify and own the outcome.


Naming the craft...

There are technical terms out there for working with AI agents, but none of them felt right for me.

"Orchestration" describes the coordination layer. "Prompt engineering" describes the interface.

Neither captures the human craft underneath.

The verbs are old. The economics are new: parallel cognition (the ability to run many agents at once) is now cheap, and the surface area for confident error has grown massively because of it.

The craft of exploiting that parallelism without losing track of what's actually real and worthwhile is what needs a name.

The human brings context, domain knowledge, stakeholder access, and judgment. Agents bring speed, parallelism, code generation, and pattern matching. Verification bridges both. That full practice is cognitecture.

My aha moment:

1 person, 22 repos, 6 weeks

In November and December of 2025 I challenged myself. One client. ~15 data sources: ticket sales, 4 ad platforms, POS, web analytics, tourism. The kind of project that needs a data team but can't afford one.

I had myself and Claude Code (and a bunch of bookmarked viral X threads about how to use it).

6 weeks later, I delivered 22 repos, 50+ data models, 8+ dashboards, for ~$100/month in GCP infrastructure. (Full case study)

What used to require a team and months, I delivered "alone" (hey Claude 👋) in weeks.

A client who couldn't justify a data team now has better infrastructure than most companies that can.

One thing became very clear to me: my work has changed forever.

Since then, I've been moving everything I do into this new paradigm.

Everything I do is now a git repo, and all of the context I create lives in .md files that are indexed and searchable by my agents.

Each task generates more context and awareness, which makes the next task a bit more frictionless and productive.
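As a minimal sketch of what "indexed and searchable" can mean in practice: walk a repo, load every markdown file, and rank files by how many query terms they contain. The function names and the crude term-count scoring are illustrative assumptions, not a description of any specific tool.

```python
# Sketch: index the .md context files in a repo so an agent (or you)
# can find relevant notes by keyword. Scoring here is a naive term
# count -- a stand-in for whatever search your agent actually uses.
from pathlib import Path

def build_index(root: str) -> dict[Path, str]:
    """Map every .md file under the repo root to its lowercase text."""
    return {p: p.read_text(encoding="utf-8").lower()
            for p in Path(root).rglob("*.md")}

def search(index: dict[Path, str], query: str) -> list[Path]:
    """Return files containing at least one query term, best match first."""
    terms = query.lower().split()
    scored = [(sum(t in text for t in terms), path)
              for path, text in index.items()]
    return [path for score, path in sorted(scored, reverse=True) if score]
```

The point isn't the search algorithm; it's that context written as plain files in a repo is trivially machine-readable, so every decision log and edge case you write down is immediately available to the next task.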

This is what I now call cognitecture. The craft of orchestrating AI with context — to outcomes you verify and own.

cog·ni·tect
/ˌkɒɡ.nɪˈtɛkt/
noun

A person who practices cognitecture. A way of working. AI amplifies it.

A cognitect doesn't just prompt. They gather context, decompose problems, direct agents, verify outputs against reality, integrate results into real systems with real stakeholders, and own the result. Not a role. A practice. The whole thing, end to end.

You want enough domain exposure to know what "right" looks like, and enough grasp of the architecture to look under the hood when something smells wrong.

Domain knowledge can now become software directly, without writing code.

The person with the domain exposure, who is experiencing the problems firsthand, is increasingly the one who builds their own solutions.

Think in tasks, not jobs

You don't have "a job." You probably never did. What you have is a constantly shifting mix of tasks. Some need your judgment. Some need your taste. Some are just you copy-pasting between systems while pretending it's strategic work.

AI doesn't replace jobs. It replaces tasks. And which tasks it can take over depends on two questions:

  1. Can the context be captured structurally?
  2. Is there a feedback loop to measure success?

Both yes — AI runs it end to end. You govern, not operate. Both no — you're still doing the work. AI is a fancy spellcheck here.

The real stuff is in between. Feedback loop works but context lives in someone's head? You become the context layer — codifying knowledge so AI can iterate, but let your guard down and garbage in still equals garbage out. Context exists but "good" is subjective? AI executes, you judge. You're now the bottleneck of a swarm of agents eager to produce more — as long as you can verify the output.

I mapped this into a 2×2 I call the AI Leverage Matrix.
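The two questions collapse into a lookup. As a sketch (the function name and quadrant wording are my shorthand for the roles described above, not official labels from the matrix):

```python
# Sketch of the AI Leverage Matrix: two yes/no questions map a task to
# an operating model. Labels paraphrase the quadrant descriptions above.

def leverage_quadrant(context_capturable: bool, feedback_loop: bool) -> str:
    if context_capturable and feedback_loop:
        return "AI runs it end to end; you govern"
    if feedback_loop:
        # Loop exists, but the context lives in someone's head.
        return "you become the context layer"
    if context_capturable:
        # Context exists, but "good" is subjective.
        return "AI executes; you judge"
    return "you still do the work; AI assists"
```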

It won't tell you if your job is safe. Wrong question. It tells you which of your tasks are about to change — and what your role becomes in each.

The career move isn't "learn prompting." It's making context capturable and building feedback loops. The economic incentive is there, so market forces will push in that direction. The question is whether you swim against that current or surf the wave.

Context is the new bottleneck

For most of human history, everyone was bottlenecked by execution capacity: your own hands, hours, and knowledge. That bottleneck has shifted. Not to "human skills" in the abstract. To context.

The quality of what AI produces is bounded by the context it has access to.

Code is unusually context-rich (a verbal description of how you're solving a problem), which is why agentic workflows succeed there early. A 2026 CMU/Stanford study mapping 72,000+ AI agent tasks to U.S. occupations confirmed this: agent development is overwhelmingly concentrated on programming — a domain covering just 7.6% of employment. Meanwhile, highly digitized fields like management (88% digitized) get barely 1% of benchmark coverage. The opportunity gap is massive.

Once you make any domain equally context-rich (thorough descriptions, domain knowledge, historical decisions, stakeholder preferences), it becomes just as solvable. Without rich context, even the most capable models produce fluent garbage.

Business logic that used to require years of specialized engineering is migrating from code to markdown. Domain experts can now encode their methodology directly.

In many internal workflows, the interface is disappearing. Everything will flow through agents. The person directing the agent becomes the interface.

Feedback loops close the gap

Context gets AI the right information. But how do you know the output is actually good?

A feedback loop is a measurable signal of success or failure that allows iteration. Automated tests, objective metrics, user behavior, ground truth comparisons — any signal the system can use to improve.

Without one, you ship and hope. With one, AI can self-correct. The difference between "generate and pray" and "generate, measure, improve" is the loop.

A loop is strong when it's fast (minutes, not months), frequent (enough volume to learn), aligned (measures what you actually care about), and actionable (you can change the process based on it).
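The "generate, measure, improve" cycle can be sketched in a few lines. Everything here is a placeholder under stated assumptions: `generate` stands in for whatever produces output (a model call, an agent run) and `score` for whatever measurable signal you have (a test suite, a metric, ground truth).

```python
# Sketch of a generate-measure-improve loop. `generate` and `score` are
# stand-ins for your task's producer and its measurable success signal.
from typing import Callable

def iterate(generate: Callable[[str], str],
            score: Callable[[str], float],
            threshold: float = 0.9,
            max_rounds: int = 5) -> tuple[str, float]:
    """Regenerate until the signal clears the bar or the budget runs out."""
    feedback = ""
    best, best_score = "", float("-inf")
    for _ in range(max_rounds):
        output = generate(feedback)
        s = score(output)
        if s > best_score:
            best, best_score = output, s
        if s >= threshold:
            break  # signal says it's good enough: stop iterating
        feedback = f"last attempt scored {s:.2f}; improve it"
    return best, best_score
```

Strip out `score` and this loop degenerates to "generate and pray": one shot, no signal, no improvement. The loop is the difference.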

Each quadrant has a different operating model — what AI can do, and what your job becomes. Explore the full playbook →

Context compounds

Knowing which context matters, how to structure it, how to keep it current, and how to verify agents are using it correctly; and building the feedback loops that confirm it's working. Those are the cognitect's two levers. They require domain exposure, taste, and rigor.

Every task generates more context: decision logs, verified outputs, stakeholder feedback, documented edge cases. A cognitect builds systems where that context accumulates and stays useful.

Economic modeling backs this up: in an AI-saturated economy, ground truth ownership, verification infrastructure, and trust-based network effects are the durable competitive moats — not execution capacity.

Will AI eventually close the gap with us humans on direction, taste, and integration? Maybe. But the window is open now. I'd rather build around it than wait to see if it closes.

The five cognitecture disciplines

If AI handles the execution, what's left for us humans? Five disciplines that together form the practice of cognitecture.

Direction — deciding what to do and why.

When agents can explore fifty paths in the time it used to take to explore one, choosing becomes harder, not easier. Cheap execution makes commitment the binding constraint. Direction without commitment is just a wish.

Rigor — verifying against reality.

Agents generate plausible, confident output at speed. That confidence is exactly what makes them dangerous without scrutiny. Catalini et al. model this from economic first principles: AI's binding constraint isn't intelligence — it's human verification bandwidth. The gap between what AI can generate and what humans can verify only widens as capability scales. Run the code. Query the data. Test the edge cases. Call the client. AI can assist verification, but the final check has to touch the real world. A cognitect needs to distrust their agents. Evaluate, debug, and steer when tools fail. If you can, you're practicing. If you can't, you're dependent.

Ownership — standing behind the outcome.

The decision to own the outcome is what makes the work matter. You define what "done" looks like. You set the risk tolerance. You live with what happens after it ships. If you can't be held to it, you don't own it.

Integration — surviving the real world.

The hardest part and the most human. Dealing with stakeholders, politics, compliance, security, and the distance between a prototype and production. Research confirms the gap: "Interacting with Others" dominates most real-world jobs but gets almost zero AI benchmark coverage, and 77% of real tasks span multiple domains — exactly the messy, cross-cutting work that agents handle worst. Context that isn't written down anywhere becomes the advantage: boardroom dynamics, physical constraints, off-the-record client preferences. AI knows what's been digitized. You know what hasn't.

Taste — trained judgment under constraints.

Subjectivity backed by enough reps that your instincts can be trusted under constraints of audience, risk, cost, time, and maintainability. Sometimes taste means choosing the less flashy output because it fits the constraint. The discipline includes continuing to do enough hands-on work that your taste stays sharp.

Yes, one letter away from DROID. Working on it.

Open loops

This manifesto makes claims I believe in but haven't fully resolved. Intellectual honesty means naming them.

The junior pipeline problem. If AI automates junior roles before they produce experts, who becomes the next generation of cognitects? Catalini et al. call this the "Missing Junior Loop" — the economy loses the mechanism by which tacit knowledge transfers generationally. Cognitecture assumes a supply of domain-exposed practitioners. It doesn't yet address how that supply sustains itself.

Scope of proof. The case study that triggered this manifesto — 1 person, 22 repos, 6 weeks — happened in code and data infrastructure. That's the domain where AI agents already work best, covering just 7.6% of U.S. employment. The framework is presented as universal, but the proof points live in the easiest quadrant. Whether cognitecture generalizes to management, legal, healthcare, and education is an open question, not a settled one.

The race to the bottom. Cognitecture argues for verification, rigor, and ownership. But firms that skip all three — shipping unverified AI output — enjoy short-term cost advantages over those who do the work. Catalini et al. model this as a predator-prey dynamic: "self-corrects only through catastrophic failure." Doing it right is more expensive than cutting corners, and the market doesn't always reward the careful. This is a structural pressure, not a character flaw, and the manifesto doesn't yet have an answer for it.

Own the outcome

Anyone can generate. The person who verifies, integrates, and stands behind the outcome is the one who matters. Doing that with artificial intelligence at scale is cognitecture. That's the bet.

The Cognitects Club

If your terminal history is 80% AI conversations, come hang out. Free community, zero pitch. Just people figuring this out together.

Further reading

These are the thinkers and pieces that have shaped my thinking, both the ones I agree with and the ones that challenge me.

The case for expansion:

The case for caution:

On the economics of disruption:

On the shape of AI agent development: