
Rosett-AI

Governance for AI coding tools — declare once, enforce everywhere.


AI coding assistants are reshaping how software gets built. But every tool speaks its own language, follows its own rules, and forgets its instructions between sessions. There is no standard for expressing what AI tools should and should not do — and no way to enforce it across vendors.

Rosett-AI changes that.

Beyond Vibe Coding

The first behaviour rule I wrote was called critical thinking.

It was a simple instruction: challenge my assumptions, do not just agree with me, back up objections with evidence. I saved it as a YAML file — a simple, human-readable text format used for configuration — loaded it into my AI coding assistant, and something shifted.

The AI stopped being a yes-machine. It pushed back on bad ideas. It cited documentation instead of guessing. It remembered the rule across every session, every file, every conversation.

One rule. Everything changed.

That was the moment I realised: AI coding tools are not dangerous because they are powerful — they are dangerous because they have no persistent constraints. Every session starts from zero. Every instruction is momentary. There is no governance, no memory of what matters, no accountability for what gets produced.

Vibe coding — letting AI tools run freely on intuition and hope — works until it does not. And when it does not, the damage is real: hallucinated code in production, leaked credentials, infrastructure changes nobody authorised, outputs that violate policies nobody thought to express.

I built Rosett-AI because AI needs training wheels. Not to slow it down — to keep it on the road.

See It in Action

This is the actual critical thinking rule — the one from the story. On the left, the human-readable source. On the right, what Rosett-AI compiles for Claude Code.

✎ You write this (YAML)

name: criticalthinking
description: Claude code critical thinking
version: 1.0.0
author: hugo
created_at: '2026-01-25'
rules:
  - id: rule_001
    description: |
      Claude code must use critical thinking,
      avoiding pleasing the user and rather
      attempt to challenge the user to its
      ideas and wishes
    priority: 50
    enabled: true
  - id: rule_002
    description: |
      Claude code must always use factual
      references when challenging user with
      links to documentation, source material
      articles, or other information to build
      a solid response.
    priority: 50
    enabled: true

→ Rosett-AI compiles it to this (Claude Code Markdown)

# criticalthinking

Claude code critical thinking

## Rules

### rule_001 (priority: 50)

Claude code must use critical thinking,
avoiding pleasing the user and rather
attempt to challenge the user to its
ideas and wishes

### rule_002 (priority: 50)

Claude code must always use factual
references when challenging user with
links to documentation, source material
articles, or other information to
build a solid response.

The same rule, compiled for Cursor, Copilot, or any other engine, would produce that engine's native format. One source, every target.

Declare Once, Enforce Everywhere

Rosett-AI is a governance platform for AI coding tools. You write your rules once — in a simple, human-readable format — and Rosett-AI translates them into the native configuration that each AI tool understands.

Currently supported
  • ✓ Claude Code (Anthropic)
  • ✓ AGENTS.md
Coming soon
  • ◯ Cursor
  • ◯ GitHub Copilot
  • ◯ Windsurf (Codeium)
  • ◯ Aider
  • ◯ Goose (Block)
  • ◯ Ollama (local AI models)
  • ◯ GPT-NeoX (open-weight models)
  • ◯ More via plugin architecture

Two engines today, seven named targets in development, and more planned via the plugin architecture. Your critical thinking rule, your security policies, your coding standards, your compliance requirements — declared once, enforced everywhere. When you switch tools or add a new one, your governance travels with you.

The Governance Chain

declare → translate → constrain → audit

Declare

Teams write governance rules in structured, human-readable text files. These files can be reviewed by anyone, tracked in version control, and discussed in team reviews — just like code. No proprietary formats, no vendor lock-in.
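Because the declared files are plain structured text, they can also be checked mechanically before anything is translated. The sketch below is hypothetical Python, not Rosett-AI's actual validator: the required fields follow the YAML example shown earlier, and the 0 to 100 priority range is an assumption made for illustration.

```python
REQUIRED_RULE_FIELDS = {"id", "description", "priority", "enabled"}

def validate_ruleset(ruleset: dict) -> list:
    """Return a list of problems; an empty list means the ruleset is valid."""
    problems = []
    for key in ("name", "version", "rules"):
        if key not in ruleset:
            problems.append(f"missing top-level field: {key}")
    for i, rule in enumerate(ruleset.get("rules", [])):
        missing = REQUIRED_RULE_FIELDS - rule.keys()
        if missing:
            problems.append(f"rule {i}: missing {sorted(missing)}")
        if not 0 <= rule.get("priority", 0) <= 100:  # assumed priority range
            problems.append(f"rule {i}: priority out of range")
    return problems
```

A check like this can run as a pre-commit hook, so a malformed rule never reaches review, let alone an AI tool.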

Translate

Rosett-AI takes your rules and compiles them into the format each AI tool expects. Claude Code gets its rule files. Cursor gets its configuration. Copilot gets its instructions. You write once; every tool receives what it needs.
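To make the write-once, compile-per-target idea concrete, here is a minimal sketch of what a Claude Code backend could look like. It mirrors the Markdown example shown earlier on this page, but it is illustrative only, not the actual Rosett-AI compiler; skipping disabled rules is an assumption.

```python
def compile_claude(ruleset: dict) -> str:
    """Render a ruleset as a Markdown rule file for Claude Code,
    mirroring the compiled example shown earlier on this page."""
    lines = [f"# {ruleset['name']}", "", ruleset.get("description", ""), "", "## Rules"]
    for rule in ruleset["rules"]:
        if not rule.get("enabled", True):
            continue  # assumption: disabled rules are left out of the compiled file
        lines += ["", f"### {rule['id']} (priority: {rule['priority']})",
                  "", rule["description"].strip()]
    return "\n".join(lines) + "\n"
```

A Cursor or Copilot backend would be a sibling function with the same input and a different output shape, which is the whole point of the plugin architecture.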

Constrain

Through the Model Context Protocol (MCP), Rosett-AI ensures AI assistants can interact with your infrastructure, but only within boundaries you define.

What is MCP? The Model Context Protocol is an open standard that lets AI tools connect to external services (databases, project managers, code repositories) in a safe, structured way — like a controlled API specifically designed for AI assistants.
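The phrase "only within boundaries you define" reduces, at its core, to an allow-list check in front of every tool call. The sketch below is plain illustrative Python, not the MCP SDK; the tool names and the PolicyViolation type are hypothetical.

```python
class PolicyViolation(Exception):
    """Raised when an AI tool call falls outside the declared boundary."""

def guarded_call(allowed_tools, tool_name, fn, *args, **kwargs):
    """Run fn only if tool_name is on the governance allow-list."""
    if tool_name not in allowed_tools:
        raise PolicyViolation(f"tool '{tool_name}' is outside the declared boundary")
    return fn(*args, **kwargs)
```

In a real deployment the allow-list would itself come from a declared rule file, so the boundary is reviewable and version-controlled like everything else.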

Audit

Every translated rule carries provenance: who wrote it, where it came from, whether it was authored by a human or generated by AI. Credit and accountability flow back to the people whose knowledge made the output possible.

Provenance and Authorship

AI solutions are built on human knowledge. Rosett-AI ensures credit, accountability, and traceability flow back to the humans whose work made those solutions possible.

Provenance Tracking

Every rule, every translation, every AI-assisted output carries metadata about its origin. Where did this rule come from? Who wrote it? When was it last changed? Rosett-AI tracks the full chain from declaration to enforcement.
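A provenance record of this kind can be as small as an immutable metadata object attached to each rule and translation. The field names below are hypothetical, chosen to match the questions in the paragraph above, not Rosett-AI's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Provenance:
    """Origin metadata carried by a rule or a translated output."""
    author: str           # who wrote it
    source: str           # where it came from, e.g. the declared rule file
    authored_by_ai: bool  # AI-generated, as opposed to human-authored
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

Because the record is frozen, provenance cannot be silently edited after the fact; a change produces a new record, preserving the chain from declaration to enforcement.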

Authorship Attribution

When AI tools produce output based on your governance rules, the original human authors are credited. Source material is traceable. This matters for compliance, for intellectual property, and for simple fairness — the humans who taught the AI deserve recognition.

Applied in Practice

This is not theoretical. Provenance and authorship tracking are active in our companion MCP servers: every compliance finding in auditor-mcp and every Puppet validation in openvox-mcp distinguishes between human-authored and AI-generated contributions.

Built for Real Communities

Rosett-AI is the governance core. Two companion projects demonstrate what governed AI looks like in practice — and serve communities that need this tooling right now.

openvox-mcp

AI tooling for infrastructure automation

OpenVox is the open-source continuation of Puppet, a widely-used system for managing servers and infrastructure. When the original project moved towards closed-source licensing, the community forked it to keep it open. Voxpupuli maintains nearly 400 community modules that thousands of organisations depend on.

openvox-mcp gives AI coding tools structured, governed access to this ecosystem: validating Puppet code, scaffolding new modules, checking compliance. 57 tools, 27 resources, serving ~80 OpenVox repositories and ~379 community modules.

View on GitLab

auditor-mcp

AI-assisted compliance auditing

Compliance auditing — checking whether systems meet security and regulatory standards like ISO 27001 — is tedious, repetitive, and critical. auditor-mcp brings AI assistance to CINC-Auditor and InSpec (open-source compliance tools).

Organisations can run compliance checks, analyse gaps, and generate reports with full traceability of what the AI contributed versus what a human authored. 17 tools, 5 guided workflows. Every AI-assisted finding is attributable.

Public release coming soon

Both projects were built using Rosett-AI's own design methodology. They are not just companion tools — they are proof that human-expressed AI governance produces real, working, well-tested software.

Tested, Measured, Verified

We believe in showing evidence, not making promises. Our quality standards are enforced by automated tooling on every commit — not by discipline alone.

Continuous verification

Every change passes through automated tests, static analysis, and security audits before it can be merged. Coverage and quality metrics are tracked in our public CI pipelines — not static claims on a webpage.

Mutation testing

Beyond line coverage, we use mutation testing: the tooling deliberately introduces bugs into the code and verifies that the tests catch them. We require a 100% catch rate on all changed code — one of the strictest bars in software quality assurance.
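The idea can be shown in miniature: take a function, apply the kind of operator swap a mutation tool would inject, and check that the test suite fails on the mutant. Everything below is an illustrative example, not our actual test suite or tooling.

```python
def clamp(value, low, high):
    """Keep value inside [low, high]."""
    return max(low, min(value, high))

def clamp_mutant(value, low, high):
    """The same function with min and max swapped: the kind of small,
    deliberate bug a mutation-testing tool injects."""
    return min(low, max(value, high))

def suite_passes(fn):
    """A tiny test suite; returns True only if fn behaves like clamp."""
    return fn(5, 0, 10) == 5 and fn(-3, 0, 10) == 0 and fn(99, 0, 10) == 10

# The suite passes on the real function and fails on the mutant,
# so this mutant counts as caught (killed).
```

A 100% catch rate means every injected mutant is killed this way; any surviving mutant exposes behaviour the tests never actually check.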

Zero-tolerance quality gates

Zero style violations. Zero known security vulnerabilities. These are not aspirational targets — they are enforced pre-commit hooks that block non-compliant code from entering the repository.

Dependency auditing

Every dependency is audited for known CVEs on every pipeline run. Vulnerable dependencies block the build until resolved — not flagged for later review.

This is not a prototype. This is production-grade software built to the standards we would expect from the tools we govern. Verify it yourself — the pipelines are public.

A Building Block for Digital Sovereignty

Sovereign infrastructure — systems that are fully under your control, not dependent on any single vendor or jurisdiction — needs governance tooling that meets four requirements:

Transparent
Rules are human-readable, auditable by anyone.

Portable
Not locked to any vendor or platform.

Enforceable
Automated checks detect and block non-compliance.

Offline-capable
Works with locally hosted AI, no cloud dependency.

Rosett-AI meets all four. Built in Belgium, licensed GPL-3.0-only (free and open source, with the strongest copyleft protection), designed to work in air-gapped environments with locally hosted AI models.

Europe is building its own digital infrastructure. That infrastructure needs governance tooling that does not depend on the platforms it is meant to govern.

Open Source, Open Development

Rosett-AI is developed openly on our GitLab instance. The core compiler, all engine plugins, and extensions are published under GPL-3.0-only.

Rosett-AI core on GitLab

Engine plugins: gitlab.neatnerds.be/foss/rosett-ai/ — engines, extensions, desktops, and more.

All code GPL-3.0-only. No exceptions. No proprietary add-ons. No gated features.

We Would Love to Hear From You

Whether you have questions, ideas, or a project where AI governance could help — our door is open.

Questions?

Curious how Rosett-AI works, what it can do, or when the public release is coming? Just ask.

Have a project?

Tell us how your organisation uses AI tools and where governance fits in. We are always interested.

Need help now?

Consulting and integration for AI governance, compliance, and accountability requirements.

Ideas?

We love hearing how people think about AI governance. Share your perspective — it might shape what we build next.

query@neatnerds.be

Banner image: Rosetta Stone by Hans Hillewaert, CC BY-SA 4.0

NeatNerds BV — Belgian SRL, FOSS-only since 2016. All code GPL-3.0-only.