Model-driven templates are the most reliable form of computational feedforward harness for an AI coding agent. They make whole classes of mistake structurally impossible, because the agent is producing a small structured model rather than free-form code. Talk Think Do has run a model-driven templating tool called Codenative for years. With coding agents, it has been re-implemented as native Cursor Skills, with natural language as a translation layer on top of an unchanged deterministic core.
- A harness template is a bundle of guides and sensors that leashes the agent to a known topology.
- A generator is harder to jailbreak than a prose rule.
- The agent’s job is to produce the model, not the code; the template generates the code.
- Conflict-free regeneration is what keeps a template harness useful past day one.
- If you do not have a Codenative-equivalent, elevate the scaffolder you already have.
What is a harness template, and why does it matter?
The term comes from the closing section of Birgitta Böckeler’s harness engineering article, where she speculates that service templates may evolve into harness templates: bundles of guides and sensors that leash an AI coding agent to the structure, conventions, and tech stack of a known topology. Her examples are a CRUD business service on the JVM, an event processor in Go, and a data dashboard in Node.
This guide is the practitioner answer to that open question. Talk Think Do has been running a model-driven templating tool, Codenative™, for years. With the arrival of coding agents, it has become a working harness template: the same templates are now invocable by an AI agent, and the natural-language layer the agent provides rides on top of an unchanged deterministic core. The result is a feedforward control that is harder to jailbreak than any prose rule.
If you are coming from the main harness engineering guide, this is the deep dive on Sections 10 and 12. If you arrived here directly, the short version is: every text rule worth keeping should aspire to become a deterministic check or a generator. This guide is about the generator half.
Why text rules alone are not enough
A .cursor/rules/use-repository-pattern.mdc file that describes a convention can be hallucinated past. Under context pressure, on a long task, or after a noisy diff, the rule the agent followed at the start gets quietly dropped at the end. We have seen the same pattern with AGENTS.md instructions, with prompt-resident style guides, and even with carefully written examples.
A type checker, linter, or structural test catches the same problem the same way every time. It does not have moods, does not run out of context window, and does not trade adherence to one rule against adherence to another.
A code generator goes a step further. It does not catch the mistake; it removes the opportunity to make it. The agent cannot forget to use the repository pattern if the only way to add a new entity is to invoke a Skill that scaffolds the entity, the repository, and the migration in one step.
Generators are the strongest form of computational feedforward, because they make whole classes of mistake structurally impossible.
Codenative: a deterministic harness that predates AI coding agents
Years before LLMs, Talk Think Do invested in Codenative™, a model-driven templating tool covering the full lifecycle: project scaffolding, ongoing model-driven feature generation, regeneration on model changes with conflict-free merging, and template-enforced conventions. It is entirely deterministic. It underpins our Accelerators (Billing and Payments, Assessment Platform, Booking Engine, Enterprise RAG, Mobile App Framework, HubSpot Integration) without being a public product. Codenative is a TTD trademark.
The story has three stages.
Authoring then: YAML domain models
Engineers wrote a YAML domain model describing entities, fields, and relationships. Codenative read the model and generated the .NET API surface, EF Core entities and migrations, React and TypeScript forms and types, and the matching test scaffolding, all conforming to TTD conventions. The output was reliable, regeneration was safe, and conventions were enforced by construction rather than by review.
Authoring now: natural language as a translation layer
With coding agents, an engineer or an agent describes intent in natural language. “Add a Booking entity with a date, party size, customer reference, and full CRUD endpoints.” The agent translates that intent into the same YAML domain model the templates already accept, and the deterministic generator produces the artefacts.
The natural-language layer rides on top of an unchanged deterministic core.
This is why the combination is harness engineering done well. The deterministic templates give the agent a target shape to translate into. The agent is not free-form generating code; it is producing a small structured model that the templates can consume. The non-deterministic step is bounded. The deterministic step does the heavy lifting.
It is also why natural language is suddenly a viable input. Without the templates, asking an LLM to “add a Booking entity” produces code that may or may not match conventions, may or may not include migrations, and may or may not have tests. With the templates, the agent only has to map intent to a small structured model. The template handles every consequential detail.
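To make the bounded translation concrete, here is an illustrative sketch of what such a model fragment might look like. The actual Codenative model format is not published, so the field names and layout here are hypothetical; the shape is the point. A few structured, reviewable lines, not free-form code.

```yaml
# Hypothetical model fragment -- illustrates the idea, not the real Codenative schema.
entity: Booking
fields:
  - name: date
    type: datetime
    required: true
  - name: partySize
    type: int
    validation: { min: 1 }
  - name: customer
    type: reference
    target: Customer
crud: full   # generate list/get/create/update/delete endpoints
```

Everything the generator needs is in the fragment; everything the fragment omits is decided by the template.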
Conflict-free regeneration is the credibility detail
Codenative is not a one-shot scaffolder. When the model changes (new field, renamed entity, additional relationship) it regenerates the affected artefacts and merges into hand-written and AI-extended code without clobbering it. This is the property most teams cannot achieve with off-the-shelf code-gen, and the reason regeneration stays useful long after the project’s first day.
Most scaffolders fail this test. dotnet new templates, Yeoman, Plop, and Cookiecutter are bootstrap-only by default. They produce a project, then get out of the way. Treat them that way until you have engineered a regeneration protocol on top.
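One way to engineer such a regeneration protocol, shown here as a minimal sketch rather than Codenative's internal mechanism, is marker-delimited generated regions: regeneration replaces only the text between markers and leaves hand-written code untouched.

```typescript
// Marker-based regeneration sketch (illustrative; not how Codenative works
// internally). Generated regions are delimited by marker comments; merge()
// swaps each region's body for freshly generated content and preserves
// everything outside the markers -- the hand-written code -- verbatim.
const BEGIN = "// <generated>";
const END = "// </generated>";

function merge(existing: string, regenerated: string): string {
  const out: string[] = [];
  let inRegion = false;
  for (const line of existing.split("\n")) {
    if (line.trim() === BEGIN) {
      inRegion = true;
      out.push(line, regenerated); // swap in the new generated body
    } else if (line.trim() === END) {
      inRegion = false;
      out.push(line);
    } else if (!inRegion) {
      out.push(line); // hand-written line: preserved as-is
    }
  }
  return out.join("\n");
}
```

A production protocol also has to handle renamed regions and surface conflicts, but the invariant is the same: generated regions stay generated, hand-written regions stay hand-written.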
The transition into the agent era: Skills, not a CLI
Codenative tooling and templates have been re-implemented as native Cursor Skills inside .cursor/skills/<name>/. There is no separate CLI to invoke. The agent uses Skills the same way it uses any other Skill, and the templates live alongside the rest of the harness (rules, AGENTS.md, MCP servers).
The flow is simple. The agent receives an intent, finds the Skill that knows how to satisfy it, calls the Skill with the parsed model, and integrates the generated output into the working set. The Skill itself is markdown plus scripts. The deterministic part of the work happens inside the Skill, not in the agent’s free-form output.
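To show the division of labour, here is a toy deterministic generator of the kind a Skill script might wrap. This is an illustrative sketch with made-up types, not Codenative code: given a parsed model, it emits the same artefact text every time, so the agent's only job is to supply the model.

```typescript
// Toy deterministic generator (illustrative; not the Codenative implementation).
// Output is a pure function of the parsed model, so the same model always
// yields byte-identical artefacts -- naming and layout live in the template,
// not in the agent's token stream.
interface Field { name: string; type: string }
interface EntityModel { entity: string; fields: Field[] }

function generateDto(model: EntityModel): string {
  const props = model.fields
    .map((f) => `  ${f.name}: ${f.type};`)
    .join("\n");
  return `export interface ${model.entity}Dto {\n${props}\n}\n`;
}
```

Every consequential detail is decided once, in the template; the non-deterministic step is reduced to choosing the model.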
A note on detail: this guide describes the pattern (deterministic templates, YAML or natural-language input, conflict-free regeneration, Skills as the invocation surface). We do not publish internal Codenative implementation detail.
Why this is harness engineering, not just code generation
Mapping Codenative onto Böckeler’s framework makes the case clear.
It answers her harness-templates open question with a working example: a bundle of guides (the domain model, the conventions, the AGENTS.md rules around when to use the Skill) and sensors (template conformance checks, build, tests) that leashes the agent to a known topology.
It realises her ambient affordances sidebar in concrete form. Codenative-generated code is structurally legible to the agent because every entity, repository, and migration follows a shape it has already seen elsewhere in the codebase, so the agent navigates generated areas more confidently than greenfield code.
It is an Ashby’s Law variety reduction. Ashby’s Law says a regulator must have at least as much variety as the system it governs. An LLM has near-infinite variety. A codebase committed to a topology has narrow variety. The template absorbs the unbounded part, and the agent solves the bounded, novel parts.
It collapses the gap between deterministic tooling and AI agents. The same template now serves both a human engineer and an AI agent through the same Skill interface. There is no separate “AI mode”. The deterministic backbone is the AI mode.
What if you don’t have a Codenative? Practical patterns for any team
You almost certainly already have a scaffolder. Most teams just stop using it once the project is bootstrapped, or never make it accessible to an agent. Here is the playbook for elevating what you have.
Inventory existing scaffolders. Likely candidates:
- dotnet new templates for .NET project structures, services, and libraries.
- Yeoman generators for full-stack scaffolds.
- Plop for in-project boilerplate (component, hook, page).
- T4 templates for code-generation inside .NET.
- Hygen for filesystem-driven generation.
- NSwag and OpenAPI generators for API client and server stubs.
- Cookiecutter for cross-language project templates.
- Nx generators for monorepo workspaces.
Wrap each as a Cursor Skill or expose it through MCP. A small markdown wrapper documents inputs, idempotency guarantees, and the conventions enforced. The Skill calls the underlying tool. The agent reaches for the Skill before it reaches for raw editing.
Add a sensor that detects drift from the generated shape. ArchUnitNET works for .NET module structure. Custom ESLint rules work for TypeScript component patterns. A schema-diff tool flags drift in SQL Server schemas. A simple git diff --check against a regenerated baseline catches manual edits to generated regions.
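The last of those checks reduces to a small comparison. A minimal sketch, assuming you can regenerate a baseline into a temporary directory first: diff the working tree against the baseline and flag any generated file that has been hand-edited. The function and file names here are hypothetical.

```typescript
// Minimal drift-sensor sketch (an illustration, not a named tool): compare
// the working tree against a freshly regenerated baseline and report every
// generated file whose content no longer matches what the template produces.
type FileMap = Record<string, string>;

function detectDrift(baseline: FileMap, workingTree: FileMap): string[] {
  return Object.keys(baseline).filter(
    (path) => workingTree[path] !== baseline[path]
  );
}
```

Wire the result into the pre-push pipeline so a non-empty list fails the build; that is what turns the template from a convention into a sensor-backed control.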
Resist hand-evolving the generated parts. Push customisation into extension points the template knows about: partial classes, override hooks, configuration files, plugin slots. Generated regions stay generated. Hand-written regions stay hand-written. The boundary is enforced by the sensor.
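In TypeScript terms, an override hook can be as simple as a generated base class that hand-written code subclasses. This is a sketch of the pattern with hypothetical names, not generated output from any particular tool.

```typescript
// Extension-point sketch (hypothetical names). The base class is generated
// and may be regenerated at any time; hand-written logic lives in a subclass
// that overrides the declared hook, so regeneration never touches it.
class BookingServiceBase {
  create(partySize: number): string {
    this.validate(partySize); // override hook for hand-written rules
    return `booking for ${partySize}`;
  }
  protected validate(_partySize: number): void {} // default: no extra rules
}

// Hand-written extension: survives regeneration of the base class.
class BookingService extends BookingServiceBase {
  protected override validate(partySize: number): void {
    if (partySize < 1) throw new Error("party size must be at least 1");
  }
}
```

The same idea maps to partial classes in C# and plugin slots in configuration-driven tools: the template declares the seam, and hand-written code plugs into it.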
Start small. One entity-shaped feature is enough to prove the loop. Pick something safe (an internal admin model, a less critical aggregate) and run the cycle: model change, regenerate, merge, sensors green, ship. If it works, expand.
A worked example end-to-end: from natural language to merged PR
Here is a single Booking entity addition, from first prompt to merged PR, on a React, TypeScript, .NET 10, and SQL Server stack.
<div class="ht-pl-stage navy">
<div class="ht-pl-stage-num">1</div>
<div class="ht-pl-stage-title">Intent</div>
<div class="ht-pl-stage-sub">Natural language</div>
</div>
<div class="ht-pl-conn" aria-hidden="true">
<div class="ht-pl-conn-line"></div>
<div class="ht-pl-conn-arrow"></div>
</div>
<div class="ht-pl-stage navy-light">
<div class="ht-pl-stage-num">2</div>
<div class="ht-pl-stage-title">YAML model</div>
<div class="ht-pl-stage-sub">Bounded translation</div>
</div>
<div class="ht-pl-conn" aria-hidden="true">
<div class="ht-pl-conn-line"></div>
<div class="ht-pl-conn-arrow"></div>
</div>
<div class="ht-pl-stage magenta">
<div class="ht-pl-stage-num">3</div>
<div class="ht-pl-stage-title">Skill</div>
<div class="ht-pl-stage-sub">Codenative templates</div>
</div>
<div class="ht-pl-conn" aria-hidden="true">
<div class="ht-pl-conn-line"></div>
<div class="ht-pl-conn-arrow"></div>
</div>
<div class="ht-pl-stage teal-dark">
<div class="ht-pl-stage-num">4</div>
<div class="ht-pl-stage-title">Artefacts</div>
<div class="ht-pl-stage-sub">Entity, API, form, tests</div>
</div>
<div class="ht-pl-conn" aria-hidden="true">
<div class="ht-pl-conn-line"></div>
<div class="ht-pl-conn-arrow"></div>
</div>
<div class="ht-pl-stage teal-dark">
<div class="ht-pl-stage-num">5</div>
<div class="ht-pl-stage-title">Sensors</div>
<div class="ht-pl-stage-sub">Build, tests, schema</div>
</div>
<div class="ht-pl-conn" aria-hidden="true">
<div class="ht-pl-conn-line"></div>
<div class="ht-pl-conn-arrow"></div>
</div>
<div class="ht-pl-stage teal">
<div class="ht-pl-stage-num">6</div>
<div class="ht-pl-stage-title">Merged PR</div>
<div class="ht-pl-stage-sub">Reviewed and shipped</div>
</div>
Walking through the stages:
- Intent. An engineer types into Cursor: “Add a Booking entity with date, party size, customer reference, and CRUD endpoints. It should integrate with the existing membership service.”
- Translation to YAML model. The agent reads the existing schema, recognises the Codenative model conventions, and produces a model fragment matching them. The fragment is small, structured, and reviewable: an entity name, a list of fields with types and validation rules, a reference to the membership aggregate, and a CRUD policy. The agent shows the model to the engineer for a quick review before invoking the Skill.
- Skill invocation. The agent calls the Codenative Skill in .cursor/skills/codenative/. The Skill loads the existing model, applies the new fragment, and runs the deterministic generator.
- Generated artefacts. The .NET entity, EF migration, repository, controller, DTOs, React form, validation schema, and tests appear in the working tree. They follow the same conventions as every other entity in the codebase, because they are produced from the same templates.
- Sensors. The pre-push pipeline runs dotnet build, dotnet test, ArchUnitNET, SqlPackage schema-diff, ESLint, and tsc --noEmit. An inferential review agent then runs against the diff, grounded in the computational sensor output, looking for issues a deterministic check cannot catch (naming drift, business-logic suspicions, integration gaps).
- Merged PR. The engineer reviews and approves. Time-to-merge is measured in minutes for a feature that historically took hours.
The reliability comes from the boundary. The agent’s non-deterministic step is small (translate intent into a model fragment). Everything downstream is deterministic: templates, sensors, build. The only inferential layer in the feedback loop is grounded in deterministic signal first.
Where templates stop and freeform AI begins
Templates do not solve every problem. Novel domain logic, irregular UI, complex algorithms, and one-off integrations should remain agent-assisted hand-written code. Pretending otherwise leads to template fatigue: contorting a model to fit an architecture that was never meant to be templated, or generating code that no engineer would have written by hand.
The skill is knowing where the template boundary is. A few practical heuristics:
- CRUD-shaped features, schema-driven UI, and reference data flows belong inside the template. They are repetitive, conventional, and unforgiving of inconsistency. A generator does this better than an agent every time.
- Domain logic with real complexity belongs outside the template, but lives next to it through extension points. A booking validity rule, a pricing calculation, or a workflow gate is rarely templatable. Use partial classes, override hooks, and plugin slots so the agent can add hand-written logic without invading generated regions.
- Integrations to external systems vary too much for templates. A new HubSpot workflow action or a one-off webhook is hand-written, with the agent free-form coding inside guardrails set by AGENTS.md and computational sensors.
- UI that follows your design system is templatable; UI that breaks it is not. A standard form is a generator job; a bespoke interactive visualisation is not.
The harness narrows variance. It does not eliminate it. Accept that and the rest gets easier.
Where to take this next
If you are starting from scratch, the order to invest is: pick one shape that recurs in your codebase, scaffold a template for it, wrap the template as a Cursor Skill, and add a drift sensor. That is enough to prove the loop. Expand outward from there.
For the broader harness picture, including computational and inferential controls, the Cursor and Claude Code comparison, and the .NET-specific recipes that wrap around your templates, read the main harness engineering guide.
For real numbers from a working harness, the Q1 2026 AI Velocity Report records 84% AI-authored code, six live custom MCP servers, and the Codenative re-housing as Cursor Skills.
To see the kinds of accelerator that Codenative templates produce in production, look at our Accelerators page.
To talk through a template harness for your own stack, including auditing existing scaffolders for elevation, see our Claude Code development service or book a free consultation.
Frequently asked questions
What is a harness template?
Are code generators and project templates a form of harness?
How can natural-language intent stay reliable when an LLM is producing code?
Can I use a generator alongside an AI agent without conflicts?
We don't have a Codenative-equivalent. How do we get started with template-based harnessing?
Is regeneration safe when the codebase has hand-written extensions?
Related guides
Harness Engineering for Coding Agents
What harness engineering means for AI coding agents: inner vs outer harnesses, guides, sensors, and deterministic tools for Cursor and Claude Code.
The EU AI Act and Custom Software: What UK Businesses Commissioning AI Need to Know
When you commission custom AI-powered software, the EU AI Act determines who carries which obligations. This guide explains provider vs deployer, risk classification, and what UK businesses must do before August 2026.
Using AI to Meet the GDS Service Standard
How AI tools, agent rules and skills (in tools like Claude Code and Cursor), and the GOV.UK Design System help delivery teams meet the GDS Service Standard across research, design, build, and operation, and where human effort still concentrates.