From feedback to live update in 24 hours.
Every day, an autonomous AI pipeline collects player feedback, plans changes, tests them with automated bots, and ships a new build. Here's every step.
Players play and leave feedback. The AI pipeline processes it overnight. Changes ship by morning. Players see the impact. The loop repeats — every single day.
Collect
Gathering the signal
Every day at 06:00 UTC, the pipeline wakes up and pulls all new player feedback and telemetry data from the past 24 hours. Ratings, written comments, session metrics (deaths, choices, session length), and bug reports all flow into the queue.
Feedback is linked to player profiles so the pipeline can credit contributors when changes ship.
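A queued feedback item might look roughly like this. This is a sketch with illustrative field names, not the pipeline's actual schema; the 24-hour window check mirrors the 06:00 UTC job described above.

```typescript
// Hypothetical shape of one queued feedback item (field names are illustrative).
interface FeedbackItem {
  playerId: string;     // links back to the profile for later crediting
  submittedAt: string;  // ISO timestamp, used for the 24-hour window
  rating?: number;      // optional star rating
  comment?: string;     // free-text feedback
  telemetry?: {
    deaths: number;
    sessionLengthSec: number;
    choices: string[];
  };
  isBugReport: boolean;
}

// The daily job keeps only items from the past 24 hours.
function inWindow(item: FeedbackItem, nowMs: number): boolean {
  const age = nowMs - Date.parse(item.submittedAt);
  return age >= 0 && age <= 24 * 60 * 60 * 1000;
}
```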
Triage
Understanding what matters
An AI agent reads every feedback item against the Game Design Document. It categorizes (balance, content, bug, art, UX), deduplicates similar reports, and prioritizes by frequency and severity.
"Too hard" from 20 players outranks "add hats" from 1. Bugs always go to the top.
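The prioritization rule above can be sketched as a simple sort key. This is an illustration of the idea, not the triage agent's actual logic; the record shape is an assumption.

```typescript
// Illustrative triage record after categorization and deduplication.
interface TriagedIssue {
  category: "balance" | "content" | "bug" | "art" | "ux";
  summary: string;
  reports: number;   // how many players raised it, after dedup
  severity: number;  // 1 (minor) to 3 (severe)
}

// Bugs always sort first; everything else ranks by frequency times severity.
function prioritize(issues: TriagedIssue[]): TriagedIssue[] {
  return [...issues].sort((a, b) => {
    if ((a.category === "bug") !== (b.category === "bug")) {
      return a.category === "bug" ? -1 : 1;
    }
    return b.reports * b.severity - a.reports * a.severity;
  });
}
```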
Plan
Designing the changes
The AI plans concrete, specific changes: "reduce bat damage from 15 to 12", "add a 0.8s telegraph to the boss slam", "generate a new enemy sprite variant." Each change maps to one or more feedback items.
Plans are validated against the guardrail system before any code is touched.
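Each planned change can be thought of as a small record that ties the edit to the feedback behind it. The field names and IDs here are invented for illustration; only the bat-damage example comes from the text above.

```typescript
// One concrete planned change, traceable back to feedback items.
interface PlannedChange {
  description: string;   // e.g. "reduce bat damage from 15 to 12"
  target: string;        // config path the change will edit
  newValue: number | string;
  feedbackIds: string[]; // every change maps to one or more feedback items
}

const plan: PlannedChange[] = [
  {
    description: "reduce bat damage from 15 to 12",
    target: "enemies.bat.damage",      // hypothetical config path
    newValue: 12,
    feedbackIds: ["fb-101", "fb-117"], // hypothetical feedback IDs
  },
];
```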
Validate
Checking the guardrails
Every proposed change is checked against parameter bounds (min/max values), design boundaries (what the AI is and isn't allowed to touch), and the game's style guide. If a change would break a rule, it's rejected.
Green-zone changes (balance tweaks) are auto-approved. Yellow-zone changes (new content) need extra scrutiny. The red zone (core mechanics) is off-limits.
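A minimal sketch of the parameter-bounds check, assuming a guardrail config that declares a min/max per tunable value. The bounds shown here are invented; the real limits would live in the guardrail system.

```typescript
// Illustrative parameter bounds from the guardrail config (values invented).
const bounds: Record<string, { min: number; max: number }> = {
  "enemies.bat.damage": { min: 5, max: 25 },
  "boss.slam.telegraphSec": { min: 0.5, max: 2.0 },
};

// Reject any proposed value outside its declared range.
function withinBounds(param: string, value: number): boolean {
  const b = bounds[param];
  if (!b) return false; // unknown parameter: the AI isn't allowed to touch it
  return value >= b.min && value <= b.max;
}
```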
Execute
Making the changes
Sub-agents execute each approved change. Balance agents edit JSON config files. Content agents add new entities. Art agents call Scenario.gg to generate style-consistent sprites. Bug agents patch code.
All changes are data-driven — the AI edits config, not engine code.
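"Data-driven" in practice could look like this: the agent rewrites a value at a dotted path inside a plain config object, never engine code. A sketch under that assumption; the path syntax is illustrative.

```typescript
// Set a value at a dotted path inside a nested config object.
function setConfigValue(
  config: Record<string, unknown>,
  path: string,
  value: unknown,
): void {
  const keys = path.split(".");
  let node: any = config;
  for (const key of keys.slice(0, -1)) {
    node = node[key]; // walk down to the parent of the target key
  }
  node[keys[keys.length - 1]] = value;
}

const cfg = { enemies: { bat: { damage: 15 } } };
setConfigValue(cfg, "enemies.bat.damage", 12);
```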
Build & Test
Automated quality assurance
The updated game is built from source. Then Playwright bots play 100 automated sessions using different strategies (random, survival, aggressive) and collect performance metrics.
If survival time drops by more than 20% or crash rate increases, the build fails and changes are reverted.
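The gate described above reduces to a two-condition check over the aggregated bot metrics. The metric shape is an assumption; the 20% threshold and the crash-rate rule come from the text.

```typescript
// Aggregate metrics from the automated bot sessions (illustrative shape).
interface SessionMetrics {
  meanSurvivalSec: number;
  crashRate: number; // crashed sessions / total sessions
}

// Fail if mean survival drops more than 20% versus the previous build,
// or if the crash rate increases at all.
function buildPasses(prev: SessionMetrics, next: SessionMetrics): boolean {
  const survivalDrop =
    (prev.meanSurvivalSec - next.meanSurvivalSec) / prev.meanSurvivalSec;
  return survivalDrop <= 0.2 && next.crashRate <= prev.crashRate;
}
```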
Deploy
Shipping to players
If all tests pass and metrics are healthy, the pipeline commits the changes, pushes to main, and Cloudflare deploys the new build globally. The game is live within minutes.
Every deploy creates a new version entry with full traceability back to the feedback that caused it.
Notify
Closing the loop
Players whose feedback was implemented get notified — by email and by an in-game popup the next time they play. Their name appears in the release notes. The changelog updates publicly.
This is the loop. Play → Feedback → AI evolves → You get credited → Repeat.
Safety guardrails
The AI is powerful but bounded. Every change must pass through three safety zones before it can ship.
Green Zone
Balance tuning, number changes, spawn rates, damage values. The AI can freely adjust these within parameter bounds.
Yellow Zone
New content, enemy variants, visual changes. Allowed but requires extra validation and automated playtesting before deploy.
Red Zone
Core mechanics, control scheme, game structure. The AI cannot touch these — they're defined by the Game Design Document and protected.
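The three zones above could be routed roughly like this. The category names and their zone assignments are illustrative; the real boundaries are defined by the Game Design Document.

```typescript
type Zone = "green" | "yellow" | "red";
type Verdict = "auto-approve" | "extra-validation" | "reject";

// Illustrative mapping from change category to safety zone.
const zoneOf: Record<string, Zone> = {
  "balance-tweak": "green",
  "spawn-rate": "green",
  "new-enemy-variant": "yellow",
  "visual-change": "yellow",
  "core-mechanic": "red",
  "control-scheme": "red",
};

function route(category: string): Verdict {
  switch (zoneOf[category] ?? "red") { // unknown categories default to red
    case "green": return "auto-approve";
    case "yellow": return "extra-validation";
    default: return "reject";
  }
}
```

Defaulting unknown categories to the red zone keeps the system fail-safe: anything the guardrails haven't explicitly allowed is treated as off-limits.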
Ready to shape a game?
Play ScrapScrap, leave your feedback, and come back tomorrow to see it in action.