
NNSA-2026-001

Session Data in rosett-ai v1.2.0 — Detected, Remediated, Prevented

On April 15, we published rosett-ai v1.2.0 to RubyGems. Three days later, an internal security audit found that the gem package contained AI development session logs that should not have been included. We yanked v1.2.0 immediately and published clean replacements (v1.3.0, v1.3.1). This post explains what happened, what the impact was, and what we have done to prevent it from happening again.

What Happened

The rosett-ai gemspec used git ls-files to determine which files to include in the gem. Development session log files — generated during AI-assisted development — were tracked by git in a doc/claude-sessions/ directory. Because they were tracked, they were packaged into the gem.
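The over-inclusion can be sketched in a few lines. This is a minimal illustration with hypothetical paths, not the actual rosett-ai file list: with `git ls-files`, every tracked path ships, so a tracked session log ships too, whereas a positive allowlist keeps only distribution directories.

```ruby
# Hypothetical tracked paths illustrating the packaging bug.
tracked = [
  "lib/rosett_ai.rb",
  "bin/rosett-ai",
  "doc/claude-sessions/2026-04-10.md", # session log: tracked, so packaged
]

# Old behaviour: spec.files came from `git ls-files`, i.e. everything above.
old_files = tracked

# Hardened behaviour: keep only allowlisted distribution directories.
ALLOW = %r{\A(lib|bin|conf)/}
new_files = tracked.grep(ALLOW)

p old_files.length  # 3 -- session log included
p new_files.length  # 2 -- session log excluded
```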

The result: v1.2.0 was 54.9 MB. Clean versions are 36.3 MB. The 18.6 MB difference was session data that had no business being in a distribution package.

What Was Exposed

The session files contained AI development conversation logs, internal project planning discussions, and technical decision rationale. In short: the kind of internal working notes you would find in any development team's chat history.

What Was NOT Exposed

We ran automated credential scanning (gitleaks + pattern matching) against every session file. The results:

  • Zero credentials — no API keys, tokens, or passwords
  • Zero customer data — rosett-ai had no external users at the time of release
  • Zero third-party data — only NeatNerds internal content
  • No code vulnerability — this was a packaging issue, not a security flaw in the software
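For readers curious what "pattern matching" means in practice, here is an illustrative sketch of a credential scan. The patterns below are common examples we chose for illustration; the real audit used gitleaks plus additional rules not shown here.

```ruby
# Illustrative credential patterns (examples only, not our actual rule set).
PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                          # AWS access key id
  /ghp_[A-Za-z0-9]{36}/,                       # GitHub personal access token
  /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/,  # PEM private key header
]

# Return every pattern match found in a blob of session-log text.
def findings(text)
  PATTERNS.flat_map { |re| text.scan(re) }
end

puts findings("just planning notes and decision rationale").empty?  # true
```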

What We Did

  1. Yanked v1.2.0 from RubyGems within hours of discovery
  2. Published clean builds — v1.3.0 and v1.3.1 (36.3 MB each)
  3. Hardened the gemspec with a positive file allowlist — only lib/, bin/, conf/, licence files, and documentation are included
  4. Added session data to .gitignore across all projects
  5. Scanned all session files for credentials — confirmed clean
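The hardened gemspec in step 3 looks roughly like this. The directory names come from this advisory; the other fields are placeholders, and the actual rosett-ai gemspec may differ in detail.

```ruby
# Hedged sketch of a positive-allowlist gemspec (placeholder metadata).
Gem::Specification.new do |spec|
  spec.name    = "rosett-ai"
  spec.version = "1.3.1"
  spec.summary = "placeholder summary"
  spec.authors = ["NeatNerds"]
  # Enumerate what ships instead of inheriting everything git tracks:
  spec.files = Dir["lib/**/*", "bin/*", "conf/**/*",
                   "LICENCE*", "*.md"]
end
```

The design point is that omission is now the default: a new tracked file is excluded from the package unless it matches an allowlisted path.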

What We Are Building

This incident exposed a gap in our release process: we verified code quality (tests, linting, security audits) but never asked "what is actually in the box?" That question now has a permanent answer:

  • Pre-release content audit — every release is checked against a positive allowlist before publication
  • CI/CD security pipeline — automated scanning for session data, memory files, and credential patterns on every build
  • Security behaviour rules — enforceable rules that prevent release commands without a prior content audit
  • Gem size monitoring — unexpected size increases trigger investigation

These measures apply to all NeatNerds projects, not just rosett-ai.

If You Installed v1.2.0

Upgrade to v1.3.1: gem update rosett-ai. Delete cached copies: gem cleanup rosett-ai. No credential rotation or configuration changes are needed.

A Word From the Founder

Let me be honest: this is embarrassing. This is our first public software release, and shipping session transcripts inside it is not exactly the professional debut you hope for. Our release process lacked a final review of everything that went into the gem before we pushed it out the door.

But here is the thing — this incident is exactly why we build tools like rosett-ai in the first place. It demonstrates why AI provenance matters, why transparency matters, and why you cannot just hide behind fancy words like "responsible AI" and "governance framework" without actually doing the work.

We could have quietly yanked the version and hoped nobody noticed. We chose not to. If you trust our software with your infrastructure, you deserve to know when we make a mistake and what we did about it. Not a sanitised version of events — the real thing.

Our full disclosure policy is in our SECURITY.md. Security reports can be sent to security@neatnerds.be.


Advisory ID: NNSA-2026-001 | Published: 2026-04-18 | Severity: Medium
