
NeatNerds Rosetta

Governance for AI coding tools — declare once, enforce everywhere.


AI coding assistants are reshaping how software gets built. But every tool speaks its own language, follows its own rules, and forgets its instructions between sessions. There is no standard for expressing what AI tools should and should not do — and no way to enforce it across vendors.

Rosetta changes that.

Beyond Vibe Coding

The first behaviour rule I wrote was called "critical thinking".

It was a simple instruction: challenge my assumptions, do not just agree with me, back up objections with evidence. I saved it as a YAML file — a simple, human-readable text format used for configuration — loaded it into my AI coding assistant, and something shifted.

The AI stopped being a yes-machine. It pushed back on bad ideas. It cited documentation instead of guessing. It remembered the rule across every session, every file, every conversation.

One rule. Everything changed.

That was the moment I realised: AI coding tools are not dangerous because they are powerful — they are dangerous because they have no persistent constraints. Every session starts from zero. Every instruction is momentary. There is no governance, no memory of what matters, no accountability for what gets produced.

Vibe coding — letting AI tools run freely on intuition and hope — works until it does not. And when it does not, the damage is real: hallucinated code in production, leaked credentials, infrastructure changes nobody authorised, outputs that violate policies nobody thought to express.

I built Rosetta because AI needs training wheels. Not to slow it down — to keep it on the road.

See It in Action

This is the actual critical thinking rule — the one from the story. On the left, the human-readable source. On the right, what Rosetta compiles for Claude Code.

✎ You write this (YAML)

name: criticalthinking
description: Claude code critical thinking
version: 1.0.0
author: hugo
created_at: '2026-01-25'
rules:
  - id: rule_001
    description: |
      Claude code must use critical thinking,
      avoiding pleasing the user and rather
      attempt to challenge the user to its
      ideas and wishes
    priority: 50
    enabled: true
  - id: rule_002
    description: |
      Claude code must always use factual
      references when challenging user with
      links to documentation, source material
      articles, or other information to build
      a solid response.
    priority: 50
    enabled: true

→ Rosetta compiles this to a Claude Code rule

# criticalthinking

Claude code critical thinking

## Rules

### rule_001 (priority: 50)

Claude code must use critical thinking,
avoiding pleasing the user and rather
attempt to challenge the user to its
ideas and wishes

### rule_002 (priority: 50)

Claude code must always use factual
references when challenging user with
links to documentation, source material
articles, or other information to
build a solid response.

The same rule, compiled for Cursor, Copilot, or any other engine, would produce that engine's native format. One source, every target.

Declare Once, Enforce Everywhere

Rosetta is a governance platform for AI coding tools. You write your rules once — in a simple, human-readable format — and Rosetta translates them into the native configuration that each AI tool understands.

Currently supported
  • ✓ Claude Code (Anthropic)
  • ✓ AGENTS.md
Coming soon
  • ◯ Cursor
  • ◯ GitHub Copilot
  • ◯ Windsurf (Codeium)
  • ◯ Aider
  • ◯ Goose (Block)
  • ◯ Ollama (local AI models)
  • ◯ GPT-NeoX (open-weight models)
  • ◯ More via plugin architecture

Two engines supported today, seven more targets in development. Your critical thinking rule, your security policies, your coding standards, your compliance requirements — declared once, enforced everywhere. When you switch tools or add a new one, your governance travels with you.

The Governance Chain

declare → translate → constrain → audit

Declare

Teams write governance rules in structured, human-readable text files. These files can be reviewed by anyone, tracked in version control, and discussed in team reviews — just like code. No proprietary formats, no vendor lock-in.

Translate

Rosetta takes your rules and compiles them into the format each AI tool expects. Claude Code gets its rules files. Cursor gets its configuration. Copilot gets its instructions. You write once; every tool receives what it needs.
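As a sketch, the translate step for Claude Code amounts to rendering a parsed rule set into the Markdown layout shown earlier. The function and field names below are illustrative assumptions, not Rosetta's actual API:

```python
# A minimal, hypothetical sketch of the translate step. The input mirrors
# the parsed YAML example above; the output mirrors the compiled Claude
# Code rule. Names here are illustrative, not Rosetta's real API.

ruleset = {
    "name": "criticalthinking",
    "description": "Claude code critical thinking",
    "rules": [
        {"id": "rule_001", "priority": 50, "enabled": True,
         "description": "Claude code must use critical thinking, ..."},
        {"id": "rule_002", "priority": 50, "enabled": True,
         "description": "Claude code must always use factual references, ..."},
    ],
}

def compile_claude_code(rs: dict) -> str:
    """Render a rule set as a Claude Code rules file (Markdown)."""
    lines = [f"# {rs['name']}", "", rs["description"], "", "## Rules"]
    for rule in rs["rules"]:
        if not rule.get("enabled", True):
            continue  # disabled rules never reach the compiled output
        lines += ["", f"### {rule['id']} (priority: {rule['priority']})",
                  "", rule["description"].strip()]
    return "\n".join(lines) + "\n"
```

A compiler for Cursor or Copilot would walk the same rule set and emit that tool's configuration format instead; the source never changes.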

Constrain

Through the Model Context Protocol (MCP), Rosetta ensures AI assistants can interact with your infrastructure, but only within boundaries you define.

What is MCP? The Model Context Protocol is an open standard that lets AI tools connect to external services (databases, project managers, code repositories) in a safe, structured way — like a controlled API specifically designed for AI assistants.
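In spirit, the constrain step is a gate in front of every tool call. A toy sketch with entirely hypothetical names — this is not the MCP SDK or Rosetta's implementation:

```python
# Toy illustration of constraint: every tool call an AI assistant makes
# must pass a declared boundary before it is forwarded. All names are
# hypothetical; real MCP servers define tools and permissions differently.

ALLOWED_TOOLS = {"read_file", "run_compliance_check"}  # the declared boundary

def gate_tool_call(tool: str, args: dict) -> dict:
    """Forward a tool call only if it falls inside the declared boundary."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is outside the governed boundary")
    return {"tool": tool, "args": args, "status": "forwarded"}
```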

Audit

Every translated rule carries provenance: who wrote it, where it came from, whether it was authored by a human or generated by AI. Credit and accountability flow back to the people whose knowledge made the output possible.
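The record this implies can be pictured as a small, immutable metadata object attached to each translated rule. The field names below are assumptions for illustration, not Rosetta's actual schema:

```python
# Sketch of a provenance record attached to a translated rule.
# Field names are illustrative assumptions, not Rosetta's schema.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)          # immutable: provenance should not be edited
class Provenance:
    author: str                  # who wrote the rule
    origin: str                  # the source file it was declared in
    authored_by_human: bool      # human-authored vs AI-generated
    declared_on: date            # when it entered the governance chain

record = Provenance(author="hugo",
                    origin="criticalthinking.yaml",
                    authored_by_human=True,
                    declared_on=date(2026, 1, 25))
```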

Provenance and Authorship

AI solutions are built on human knowledge. Rosetta ensures credit, accountability, and traceability flow back to the humans whose work made those solutions possible.

Provenance Tracking

Every rule, every translation, every AI-assisted output carries metadata about its origin. Where did this rule come from? Who wrote it? When was it last changed? Rosetta tracks the full chain from declaration to enforcement.

Authorship Attribution

When AI tools produce output based on your governance rules, the original human authors are credited. Source material is traceable. This matters for compliance, for intellectual property, and for simple fairness — the humans who taught the AI deserve recognition.

Applied in Practice

This is not theoretical. Provenance and authorship tracking are active in our companion MCP servers. Every compliance finding in auditor-mcp and every Puppet validation in openvox-mcp distinguishes between human-authored and AI-generated contributions.

Built for Real Communities

Rosetta is the governance core. Two companion projects demonstrate what governed AI looks like in practice — and serve communities that need this tooling right now.

openvox-mcp

AI tooling for infrastructure automation

OpenVox is the open-source continuation of Puppet, a widely used system for managing servers and infrastructure. When the original project moved towards closed-source licensing, the community forked it to keep it open. Voxpupuli maintains nearly 400 community modules that thousands of organisations depend on.

openvox-mcp gives AI coding tools structured, governed access to this ecosystem: validating Puppet code, scaffolding new modules, checking compliance. 57 tools, 27 resources, serving ~80 OpenVox repositories and ~379 community modules.

View on GitLab

auditor-mcp

AI-assisted compliance auditing

Compliance auditing — checking whether systems meet security and regulatory standards like ISO 27001 — is tedious, repetitive, and critical. auditor-mcp brings AI assistance to CINC-Auditor and InSpec (open-source compliance tools).

Organisations can run compliance checks, analyse gaps, and generate reports with full traceability of what the AI contributed versus what a human authored. 17 tools, 5 guided workflows. Every AI-assisted finding is attributable.

Public release coming soon

Both projects were built using Rosetta's own design methodology. They are not just companion tools — they are proof that human-expressed AI governance produces real, working, well-tested software.

Tested, Measured, Verified

We believe in showing evidence, not making promises. Every number below comes from live test runs on our codebase — not estimates or targets.

Project                   Tests   Line coverage   Branch coverage
Rosetta (nncc)            4,527   99.0%           90.1%
auditor-mcp               641     98.2%           88.9%
openvox-mcp (ecosystem)   506
Total                     5,674
What do these numbers mean?

Tests are automated checks that verify the software works correctly. 5,674 individual checks run every time we change the code — catching bugs before they reach users.

Line coverage measures what percentage of the code is exercised by tests. 99% means virtually every line has been verified.

Mutation testing goes further: it deliberately introduces bugs and verifies that tests catch them. We require a 100% catch rate on all changed code.

  • Zero style violations across 613+ source files
  • Zero known security vulnerabilities (audited March 2026)

This is not a prototype. This is production-grade software built to the standards we would expect from the tools we govern.

A Building Block for Digital Sovereignty

Sovereign infrastructure — systems that are fully under your control, not dependent on any single vendor or jurisdiction — needs governance tooling that meets four requirements:

Transparent
Rules are human-readable, auditable by anyone.

Portable
Not locked to any vendor or platform.

Enforceable
Automated checks detect and block non-compliance.

Offline-capable
Works with locally hosted AI, no cloud dependency.

Rosetta meets all four. Built in Belgium, licensed GPL-3.0-only (free and open source, with the strongest copyleft protection), designed to work in air-gapped environments with locally hosted AI models.

Europe is building its own digital infrastructure. That infrastructure needs governance tooling that does not depend on the platforms it is meant to govern.

Open Source, Open Development

Rosetta is developed openly on our GitLab instance. The project is preparing for its first public release — when ready, the full source code, documentation, and installation guides will be published.

Browse our projects on GitLab

All code GPL-3.0-only. No exceptions. No proprietary add-ons. No gated features.

We Would Love to Hear From You

Whether you have questions, ideas, or a project where AI governance could help — our door is open.

Questions?

Curious how Rosetta works, what it can do, or when the public release is coming? Just ask.

Have a project?

Tell us how your organisation uses AI tools and where governance fits in. We are always interested.

Need help now?

Consulting and integration for AI governance, compliance, and accountability requirements.

Ideas?

We love hearing how people think about AI governance. Share your perspective — it might shape what we build next.

query@neatnerds.be

Banner image: Rosetta Stone by Hans Hillewaert, CC BY-SA 4.0

NeatNerds BV — Belgian SRL, FOSS-only since 2016. All code GPL-3.0-only.