What to look for in AI-generated diffs before you merge
A practical checklist for reviewing AI output — correctness, security, UX, and when to say “ship it.”
AI can move fast; your review is what keeps the app safe and coherent. You do not need to read every line like a traditional code review — you need a short, repeatable pass before you merge or deploy.
Does it do what you asked?
Run the app or the affected flow. If the feature is “add a forgot-password link,” click it end-to-end. AI often gets 90% right and misses one transition or empty state.
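The manual click-through is the real test, but even a tiny scripted check catches that last 10%. Here is a minimal sketch in TypeScript; renderLoginPage and the /forgot-password path are hypothetical stand-ins for whatever your diff actually touched.

```typescript
// Stand-in for the page the AI diff modified. In a real project this would
// be your rendered login page, not a hard-coded string.
function renderLoginPage(): string {
  return '<form>...</form><a href="/forgot-password">Forgot password?</a>';
}

// The check the review cares about: the new link actually exists
// and points where it should.
function hasForgotPasswordLink(html: string): boolean {
  return html.includes('href="/forgot-password"');
}

console.log(hasForgotPasswordLink(renderLoginPage())); // true
```

A check this small is not a substitute for clicking through the flow, but it documents what "done" meant when you reviewed.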
Security and data
- Secrets: No API keys, tokens, or passwords in source or in chat logs that get committed. Use environment variables and .env.local (gitignored).
- User input: Forms and URLs should be validated or escaped where the stack expects it. If something touches the database or auth, pay extra attention.
- Dependencies: If the diff adds packages, glance at what they are — stick to well-known names when possible.
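The first two points can be sketched in a few lines of TypeScript. This is one reasonable pattern, not the only one; the variable name API_KEY is a made-up example, and your framework may already provide an escaping helper you should prefer.

```typescript
// Secrets: read from the environment and fail loudly if missing,
// instead of hard-coding a key in source.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// User input: escape HTML special characters before interpolating
// untrusted text into markup. Many stacks do this for you; only add
// your own escaping where the stack expects raw strings.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Usage sketch (API_KEY is a hypothetical variable name):
// const apiKey = requireEnv("API_KEY");
// const safe = escapeHtml(userComment);
```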
UX and accessibility
- Focus and labels: Buttons and inputs should be reachable by keyboard; images need alt text when they carry meaning.
- Errors: Failed actions should show users a clear message, not a silent failure or a raw stack trace.
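For the errors point, one shape worth looking for in the diff: the raw error goes to logs, and the user sees a plain sentence. The message wording and the AbortError branch below are assumptions for illustration.

```typescript
// Sketch: translate an internal error into a user-facing message.
// Full detail goes to logs, never to the UI.
function userFacingMessage(err: unknown): string {
  console.error(err); // keep the stack trace for yourself

  // AbortError is what fetch throws on timeout/cancel in modern runtimes;
  // branch on whatever error types your app actually produces.
  if (err instanceof Error && err.name === "AbortError") {
    return "The request timed out. Please try again.";
  }
  return "Something went wrong. Please try again.";
}
```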
Consistency with the rest of the project
New code should feel like the same app: naming, file placement, and patterns that already exist. If the model invented a parallel “utils2” folder, consider asking it to align with your structure in a follow-up prompt.
When to stop reviewing and ship
If the flow works, nothing obvious is leaking, and the change matches your product’s bar, merge. You can always open a follow-up prompt for polish. VibeShare is full of apps that got better after the first deploy — not before.
For more vocabulary around prompts and workflows, see the glossary.