Show Your Work
The school rule that the industry forgot.
Why This Page Exists
AI is powerful. That’s exactly why it’s dangerous to use without disclosure.
When a tool can write code, generate documentation, and draft legal text, the people who depend on that output deserve to know what was human and what wasn’t. Not because AI output is bad — but because trust requires verifiability. I can’t ask you to trust my software if I won’t show you how it was made.
And yet, we’re in an industry that treats AI like a trade secret — something to hide because admitting you used it might make your work seem less valuable. Companies quietly feed code through AI, strip the attribution, and ship it as if a human wrote every line.
I think that’s backwards.
AI is a tool. I use it. I’m transparent about it. The work is better for it — not because the AI is smarter than me, but because the combination of human judgement and AI capability, with proper oversight, produces results that neither could alone. The key word is oversight. The AI helps; I decide. And you get to verify that for yourself.
Hiding AI involvement helps no one. It erodes trust, it undermines accountability, and it makes it impossible to audit quality. If the industry won’t set this standard, I will — starting with my own work.
The Bigger Picture
The threat isn’t hypothetical. Governments and institutions are already using AI-generated content to shape public perception — not with truth, but with volume. When fabricated text is indistinguishable from genuine analysis, and when the tools to produce it are available to anyone with an agenda, the only defence is radical verifiability. Not “trust me” — but “here’s the evidence, verify it yourself.”
That’s what AI transparency is: a defensive weapon against a world where truth is increasingly manufactured. Every commit trailer, every provenance file, every disclosed AI contribution is a small act of defiance against the idea that it doesn’t matter where information comes from.
I cannot fix the information climate. But I can make sure that everything leaving my hands is traceable, attributable, and honest. That’s the standard. It’s not negotiable.
How I Disclose
Every NeatNerds product follows the same disclosure protocol. This isn’t a policy document that lives in a drawer — it’s enforced by tooling, verified by CI pipelines, and visible in every repository.
Git Commit Trailers
Every commit that involves AI assistance carries a trailer:
- AI-Generated-By – the AI produced the primary content
- AI-Co-Author – the AI and a human wrote it together
- AI-Assisted-By – the AI provided suggestions; the human wrote it
- AI-Reviewed-By – the AI reviewed human-written code
Real example (from openvox-mcp):
feat(governance): community concerns pre-publication review
Systematic review of all open community concerns before mirroring
openvox-mcp to GitHub. 11 decisions across 8 concerns, all traced
to source threads.
Code changes:
- Accept both AI-Co-Author and Co-authored-by trailers as AI
disclosure in validate_commit
- Tier 2 enforcement changed from mandatory to advisory pending
formal community adoption
OP#897
AI-Co-Author: Claude Opus 4.6 (Anthropic)
This is a real commit. You can verify it yourself: clone the repository, run git log, read the trailers. Every NeatNerds repository follows the same protocol.
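That verification step can be done mechanically. Here is a minimal sketch, assuming a POSIX shell and a reasonably recent Git; the throwaway repository only makes the example self-contained — against a real clone, the final `git log` line is all you need:

```shell
# Sketch: list AI disclosure trailers from commit history.
# The throwaway repo below exists only so the example runs anywhere;
# in a real clone of a NeatNerds repository, skip straight to `git log`.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m "feat: example change" \
    -m "AI-Co-Author: Claude Opus 4.6 (Anthropic)"
# Print only the AI-Co-Author trailer value for each commit:
git log --format='%(trailers:key=AI-Co-Author,valueonly)'
```

The `%(trailers:key=…)` pretty-format makes the disclosure machine-readable, which is what lets CI pipelines check it rather than a human remembering to.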
Provenance Files
Every repository contains an .ai-provenance.yml file that documents:
- Which AI tools were used
- What model versions were involved
- The scope of AI involvement
- Session dates and context
This file is version-controlled — you can trace how AI involvement evolved over the life of a project.
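The actual schema isn't reproduced on this page, but an entry covering those four points might look like this hypothetical sketch (field names are illustrative, not the project's real format):

```yaml
# Hypothetical .ai-provenance.yml entry — field names are illustrative,
# not the actual NeatNerds schema.
tools:
  - name: Claude Code
    vendor: Anthropic
    model: claude-opus-4-6        # model version involved
    scope: code, documentation    # scope of AI involvement
    sessions:
      - date: 2025-01-15
        context: governance policy review
```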
SPDX Headers
All source files carry SPDX licence headers. These identify the licence, the copyright holder, and the file’s origin. Combined with commit trailers, they create a complete chain of attribution from the file level down to individual changes.
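For illustration, a typical SPDX header looks like the following — the licence identifier and copyright line here are placeholders, not taken from an actual NeatNerds file:

```
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: 2025 Example Author
```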
Human Review Gates
AI output doesn’t ship without review. Every contribution passes through:
- Linting — automated code style and quality checks
- Static analysis — structural and security analysis
- Mutation testing — verifying that tests actually catch bugs
- Test coverage — ensuring new code is exercised by tests
- Human review — a person reads and approves every change
The AI proposes. The CI pipeline validates. The human decides.
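As one illustration of how a gate like this can be enforced rather than merely promised, here is a minimal commit-msg hook sketch in shell. It is a hypothetical stand-in, not the actual validate_commit tooling mentioned in the commit above:

```shell
# Hypothetical commit-msg hook: reject commits that carry no AI
# disclosure trailer. Not the real openvox-mcp validate_commit logic.
has_ai_trailer() {
  grep -qE '^(AI-Generated-By|AI-Co-Author|AI-Assisted-By|AI-Reviewed-By|Co-authored-by):' "$1"
}

# In .git/hooks/commit-msg, Git passes the message file as $1:
#   has_ai_trailer "$1" || { echo "missing AI disclosure trailer" >&2; exit 1; }
```

A local hook can be bypassed with --no-verify, which is why the same check should run again in CI — the pipeline is the gate that actually holds.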
What This Means for You
As a user
You can inspect exactly what was AI-assisted in any NeatNerds product. Clone the repository, run git log, read the trailers. The provenance file tells you the rest. No black boxes.
As a contributor
If you contribute to a NeatNerds project, the same disclosure standards apply to you. Used Copilot to write a function? Add the trailer. Had ChatGPT help with a commit message? Disclose it. The rules are the same for everyone — including me.
As a client
Custom work follows the same transparency protocol. If I use AI tools during your engagement, you will know — through the same commit trailers and provenance files. This is not optional and it is not negotiable, even if you’d prefer not to know.
Why the Industry Needs This
AI disclosure is not a NeatNerds quirk. It’s a necessity that the software industry has been slow to adopt.
The problem: When AI-generated code ships without attribution, nobody can audit it. Was a security-critical function written by a human who understood the threat model, or by an AI that pattern-matched from training data? Without disclosure, you can’t tell. Without being able to tell, you can’t trust.
The regulatory direction: The EU AI Act requires transparency for AI systems. Public-sector procurement increasingly asks about AI involvement in deliverables. Belgian federal IT guidelines are moving toward disclosure requirements. The standard is coming whether the industry likes it or not — the only question is whether you get ahead of it or scramble to catch up.
What NeatNerds is doing about it:
- openvox-mcp enforces AI governance policies for the OpenVox Puppet ecosystem — attribution, licensing, security, and scope
- Rosett-AI lets you define and compile AI development rules across your entire team
- This page exists to show that the standard is achievable, practical, and not a burden — it’s just discipline
Work in Progress
I’ll be honest: this transparency standard is something I’m building, not something I’ve perfected. I’m establishing it with the help of the very AI tools I’m disclosing — which is either poetic or ironic, depending on your perspective.
Not every historical commit in every repository carries a perfect trailer. I've rewritten git history. I've forgotten to add provenance entries. I'm human, and this process is new for everyone.
What I can tell you is that going forward, the standard is set, the tooling enforces it, and the intent is genuine. If I fall short, it won’t be because I was hiding something — it’ll be because I’m still learning how to do this well.
Where to look:
- NeatNerds GitLab: gitlab.com/neatnerds — every repository, every commit
- Rosett-AI source: gitlab.com/neatnerds/NNSD/NeatNerds-ai/rosett-ai
- openvox-mcp governance policy: published in the repository
- This website’s own content: the ToS opens with a disclosure that it was co-written with Claude Code
Transparency isn’t a feature. It’s a habit — like a clean git log or a well-pressed suit. You either do it every time, or it means nothing.