Code with AI
Tool-Agnostic Best Practices for Modern Development
Sergey Kurdin
Senior Developer at Charles River Labs (Apollo SA project) • 30+ years building software
Built and shipped at Marriott.com, Ski.com, It.com, Amazon, NIH, startups
PasteBar App Maintainer — Free Clipboard Manager for Mac & Windows (1.7k★)
1 / 29
Agenda
Foundation: Why AI coding now & mindset shift → Slides 3-6
Understanding AI: How LLMs work & context windows → Slides 7-12
Best Practices: Prompt patterns & planning-first approach → Slides 13-16
Git Workflow: Safe version control & incremental commits → Slides 15, 17
CLI-first Agents: Common commands & workflow (tool-agnostic) → Slides 21-23
Quality Control: Testing, reviews & catching bad patterns → Slides 18-19, 25
Limits & Safety: AI security guardrails → Slide 20
Human Skills: What matters MORE with AI → Slide 26
Key Takeaways: Start small, verify everything → Slides 27-28
2 / 29
Evolution of AI-Assisted Coding
Early Tools
Syntax highlighting → readability
Linting & type checking → fewer mistakes
AI Breakthrough
Natural-language Q&A
Inline autocomplete in IDEs
Modern Coding Agents
Repo-aware (understand structure & context)
Plan first, then implement
Plan → apply small step → test → iterate (reviewable diffs)
3 / 29
Why AI for Developers — Now
Less typing, more steering
Fast refactors and unit tests; iterate on variants
Faster docs and code reviews
Learns and follows your repo’s patterns
Produces code aligned with project style
✓ You guide, review, and accept ✓ AI accelerates mechanics ✓ You own design and code quality
💡 Pro Tip: Generate and reuse AGENTS.md with repo-specific instructions.
4 / 29
Current Tooling Landscape
Autocomplete copilots — inline suggestions
Chat assistants — design, code, docs, Q&A
CLI-first coding agents (Most Powerful)
Understand the repo
Plan → Review → Implement workflow
Direct file editing with reviewable changes
✓ Focus on patterns, not brands — most agents support similar flows.
5 / 29
The Mindset Shift
Build systems that solve problems
Shift from coder → architect
AI suggests; you guide & control
Your responsibilities:
Define the problem clearly
Curate context
Review changes thoroughly
Enforce quality gates
Result: Fewer keystrokes, more impact
6 / 29
What is AI? (LLMs in 2 Minutes)
LLMs are trained on text/code patterns
They are not databases or compilers
They predict the next token
Strengths: recall, mimicry, scaffolding, breadth
Weaknesses: hallucinations, no runtime awareness
⚠️ Think “autocomplete on steroids,” not a truth engine.
7 / 29
How LLMs Generate Code
Tokenization → Probabilities → Decoding
Output quality depends on: prompt clarity, curated context, explicit constraints
Temperature controls creativity vs. safety
Always validate with build/run/tests
✓ Ask for plans first → apply small steps → compile → test.
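Temperature can be made concrete with a tiny sketch. The softmax function below is illustrative only (not a real decoder): it shows how dividing logits by the temperature sharpens or flattens the probability distribution the model samples from.

```typescript
// Illustrative sketch of temperature in decoding (not a real LLM).
// logits -> softmax with temperature -> probability distribution over tokens.
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled); // subtract max for numeric stability
  const exps = scaled.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Low temperature sharpens the distribution (safer, more repetitive);
// high temperature flattens it (more varied, more risk).
const logits = [2.0, 1.0, 0.1];
console.log(softmaxWithTemperature(logits, 0.5)); // peaked on the top token
console.log(softmaxWithTemperature(logits, 2.0)); // flatter across tokens
```

Either way, the model is still sampling from a distribution, which is why the output must be validated with build/run/tests.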
8 / 29
Understanding Models & Context Windows
Limits vary by provider; effective window < advertised
Specialization: general vs. code-tuned models
Choose model by task: coding vs. explaining
✓ Code models: strong patterns • General models: strong explanations.
9 / 29
Why Context Matters
Models only see a limited window
Too much context → truncation
Provide minimal relevant files + explicit acceptance criteria
Priority: 1) error & stack 2) target files & imports 3) types/interfaces 4) tests 5) project config
✓ Mantra: Curate → Confirm → Constrain → Verify
10 / 29
The Confidence Calibration Problem
LLMs are confident even when wrong
Red flags: invented APIs, deprecated methods, odd features
Mitigate: cross-check docs, be strict with new libs, track recurring hallucinations
⚠️ Example: non-existent useState2(), Array.prototype.contains()
11 / 29
Mental Model: Architecting with AI
Decompose → write acceptance criteria
Constrain scope → request small changes
Iterate quickly → run, test, review each step
Treat AI like a junior dev: plan first, ask “why,” request alternatives
⚠️ Review diffs visually; never rely only on passing output.
12 / 29
General Best Practices
Before Code
Clarify intent & constraints
Provide only relevant context
During Code
Plan → review → implement
Request small diffs
Compile early; iterate
After Code
Review/ask AI to explain its diff
Lint, typecheck, build, test
Manual tests + unit tests
Security & performance checks
✓ Workflow: Plan → Implement → Test → Review
13 / 29
Prompt Patterns That Work (CLI Agents)
Structure: Role • Context • Task • Constraints • Verification
Be explicit about files/scope, runtime/versions, safety expectations
Prefer plan → apply; keep steps small & reviewable
Keep outputs focused (commands, concise summaries)
Example Prompt (Refactor, Diff-Applied):
Role: Senior TypeScript engineer.
Context: Node 20, Jest; repo uses src/ and test/.
Target: src/auth/token.ts#getUserToken duplicates retry/backoff logic.
Task: Extract retry/backoff into src/utils/retry.ts and reuse it in getUserToken
without changing external behavior.
Constraints:
• Keep public signatures stable
• Do not change unrelated modules
• Update test/auth/token.spec.ts to cover 429/503 with exponential backoff (max 3 attempts)
Verification:
• build passes (Node 20)
• npm run typecheck
• npm test -- test/auth/token.spec.ts
Out of scope:
• Do not modify unrelated modules or configuration files
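The helper that the example prompt asks for might look like the sketch below. The file path (src/utils/retry.ts) comes from the prompt itself; the function name `withRetry` and its parameters are illustrative assumptions, not part of the prompt.

```typescript
// Sketch of the extracted helper from the example prompt (src/utils/retry.ts).
// Retries transient failures (e.g. 429/503) with exponential backoff, max 3 attempts.
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

getUserToken would then wrap its HTTP call in `withRetry(...)`, keeping its public signature stable as the constraints require.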
14 / 29
Version Control & AI Patterns
The Incremental Staging Pattern
1) New branch: git switch -c ai/<feature>
2) Clean working dir: git status
3) Ask AI for a plan → approve small, focused changes
4) Stage incrementally: git add -p (review hunks)
5) Test locally (lint, typecheck, compile, unit tests)
6) Good → commit: git commit -m "[AI]: Extract retry logic"
7) Bad → git restore / git revert and retry
8) Repeat small steps
git switch -c ai/refactor-retry
git add -p # Review hunks interactively
git commit -m "[AI]: Extract retry logic"
git restore --staged . # Unstage if needed (avoid reset --hard)
15 / 29
Planning-First for Big Features
Plan upfront → Break into manageable subtasks → Reduce risk
Step 1: Describe the Feature
Current state, requirements, constraints, screenshots/wireframes
"As a product architect, summarize this feature spec: [details]"
Step 2: Request Implementation Plan
Break into subtasks (<100 LOC each)
"Create plan: subtasks, dependencies, tests; save as [feature]-plan.md"
Step 3: Review & Approve
Approve plan or request changes; then implement subtask 1
"Start implementation of subtask 1 from [feature]-plan.md"
16 / 29
Essential Git Commands for AI Work
git switch -c ai/<feature> — Dedicated branch
git add -p — Stage hunks
git restore --staged . — Unstage to retry
git diff --staged — Verify staged changes
git stash — Quick save when off-track
git commit -m "[AI] checkpoint: desc" — Checkpoint commit
git revert HEAD — Safe undo
git log --grep="[AI]" — Track AI commits
git rebase -i — Clean before PR
pre-commit hooks — auto-lint/format
⚠️ Avoid reset --hard
17 / 29
Testing AI-Generated Code
AI code needs extra testing — it makes creative mistakes
Property-based testing: define invariants; auto-generate inputs
Mutation testing: ensure tests catch injected bugs
Weakness detection: ask AI to break its own code (edge cases)
Browser testing via tools/APIs: scripted navigation & assertions
Golden Rule: AI code → Human tests | Human reviews → AI tests
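The property-based idea can be sketched without any library (real projects would typically use one such as fast-check). The sort invariant below is an illustrative stand-in for the AI-generated code under test: instead of hand-picking cases, generate many random inputs and assert the invariant for every one.

```typescript
// Minimal property-based testing sketch: random inputs, one invariant.
function isSorted(xs: number[]): boolean {
  return xs.every((x, i) => i === 0 || xs[i - 1] <= x);
}

function randomArray(maxLen = 20): number[] {
  const len = Math.floor(Math.random() * maxLen);
  return Array.from({ length: len }, () => Math.floor(Math.random() * 100) - 50);
}

// Property: sorting any array yields a sorted array of the same length.
for (let i = 0; i < 500; i++) {
  const input = randomArray();
  const output = [...input].sort((a, b) => a - b);
  if (!isSorted(output) || output.length !== input.length) {
    throw new Error(`Invariant violated for input ${JSON.stringify(input)}`);
  }
}
console.log("500 random cases passed");
```

Random generation routinely surfaces the edge cases (empty arrays, duplicates, negatives) that AI-generated code tends to miss.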
18 / 29
Performance Awareness & AI-Assisted Reviews
AI code is often correct but not optimal — profile and review
Perf Issues to Watch
N+1 DB queries • Nested loops • Blocking I/O • Memory leaks
AI Double-Loop Reviews
Risk analysis (security/perf) • Missing test cases • Draft PR descriptions • Edge case reviews
"Find 3 issues in this diff and suggest fixes"
⚠️ Never skip human review—assume AI is wrong until tests & profiling say otherwise
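The N+1 pattern from the list above can be shown with a toy in-memory store (the map and function names are illustrative; a real ORM or database behaves the same way): the naive version issues one query per user, the batched version issues one query total.

```typescript
// Illustrative N+1 vs. batched access with an in-memory store.
const postsByUser = new Map<number, string[]>([
  [1, ["a"]],
  [2, ["b", "c"]],
  [3, []],
]);
let queryCount = 0;

// N+1: one query per user — easy for an AI to generate, slow at scale.
function postsForUsersNaive(userIds: number[]): string[][] {
  return userIds.map((id) => {
    queryCount++;
    return postsByUser.get(id) ?? [];
  });
}

// Batched: one query for all users (e.g. WHERE user_id IN (...)).
function postsForUsersBatched(userIds: number[]): string[][] {
  queryCount++;
  return userIds.map((id) => postsByUser.get(id) ?? []);
}
```

Both return the same data; only profiling or a query counter reveals the difference, which is why "correct but not optimal" code survives review so often.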
19 / 29
Data & Safety Guardrails
Protect customer data, secrets, and IP
Never paste secrets — use placeholders or env vars; configure redaction
Mask customer data — use synthetic examples
Tag AI commits — prefix [AI]
Review licenses for generated code/deps
Enterprise/local models for private repos
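A minimal sketch of the env-var approach (the `API_KEY` name is an illustrative assumption): the secret lives in the environment, and prompts only ever mention the placeholder.

```typescript
// Read secrets from the environment instead of pasting them into prompts or code.
function getApiKey(): string {
  const key = process.env.API_KEY; // set via shell, .env, or a secret manager
  if (!key) throw new Error("API_KEY is not set");
  return key;
}

// In prompts, refer to the placeholder ("call the API using $API_KEY") —
// the agent never needs the real value to write correct code.
```

The same pattern applies to customer data: hand the agent synthetic records shaped like the real ones, never the real ones.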
20 / 29
CLI-First AI Workflow (Tool-Agnostic)
Automation, consistency, CI/CD-friendly
Why CLI?
Scriptable: wrap prompts in bash/python
Portable: terminals, servers, remote
Standardize: share configs & aliases
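The "scriptable" point can be sketched in a few lines: a template helper that any bash, Python, or Node wrapper could use before piping the result into an agent. The template fields and file names here are illustrative assumptions.

```typescript
// Sketch: build a reusable prompt from a shared team template.
function buildPrompt(template: string, vars: Record<string, string>): string {
  // Replace {name} placeholders; leave unknown ones intact so gaps are visible.
  return template.replace(/\{(\w+)\}/g, (_, name) => vars[name] ?? `{${name}}`);
}

const refactorTemplate =
  "Role: Senior {lang} engineer.\nTask: Refactor {file}; keep public signatures stable.";

const prompt = buildPrompt(refactorTemplate, {
  lang: "TypeScript",
  file: "src/auth/token.ts",
});
console.log(prompt);
// Pipe the result into whichever CLI agent your team uses.
```

Checking templates like this into the repo is one concrete way to "share configs & aliases" across the team.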
Common Agent Capabilities
Plan / Review / Diff / Apply
Mention files or folders
Search or fuzzy-find files
Resume sessions, compact/summarize
✓ Flow: Plan → Diff preview → Apply small step → Test → Commit
21 / 29
Common CLI Agent Commands (Generic)
Session & Config
/init — generate AGENTS.md / set context
/status — show settings
/model — pick model/effort level
/new or /resume — session control
Workflow
/plan — propose steps
/diff — preview changes
/apply — apply edits
/review — critique changes
/mention — add files to context
@ — fuzzy file search
About AGENTS.md Files
CLI agents automatically detect and use AGENTS.md files for context
Use multiple AGENTS.md files for different repo areas:
backend/AGENTS.md — API-specific patterns & rules
frontend/AGENTS.md — UI conventions & components
libs/AGENTS.md — Component library guidelines
Each file provides domain-specific context to guide AI behavior
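A backend AGENTS.md might look like the sketch below. The stack lines echo the Node 20/Jest setup used elsewhere in this deck; every rule shown is an illustrative example, not a prescription.

```markdown
# AGENTS.md (backend/) — illustrative example
## Stack
- Node 20, TypeScript strict mode, Jest
## Conventions
- Services live in src/services/, one class per file
- All DB access goes through the repository layer
## Rules
- Never edit generated files
- Run `npm run typecheck && npm test` before proposing a commit
```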
22 / 29
CLI Agent Workflow Example
1) Start & Create Plan
$ claude   # or: codex
→ Plan retry logic with exponential backoff; save to retry-plan.md
2) Review Plan
$ cat retry-plan.md
← Approve before applying
3) Execute Approved Step
$ claude   # or: codex
→ Implement step 1 from retry-plan.md; run tests after changes
4) Stage & Commit
$ git add -p
$ git commit -m "[AI]: Implement retry logic step 1"
5) Test & Push
$ npm test
$ git push -u origin ai/retry-logic
→ Open PR for review
23 / 29
The Context Switching Cost
Batch similar tasks (tests → refactors)
Save successful prompts in team KB
Build context once and reuse
When stuck: clear context, simplify, provide a working example
15-minute rule: change approach or restart
💡 Provide a minimal snippet instead of the whole file for specific edits
⚠️ If the session gets long/noisy, summarize or start fresh
24 / 29
Bad Patterns in AI Code
Generic names: data, item, result
Structure issues: deep nesting, clever one-liners, unnecessary indirection
Missing safeguards: no error boundaries, hardcoded values, swallowed errors
Debug artifacts: stray logs, redundant imports, overused type assertions
⚠️ Catch & fix these before opening a PR
25 / 29
Human Skills That Matter More
System design (AI can’t architect)
Code review (3× more important)
Specification writing is the new programming
Domain expertise differentiates you
Debugging intuition
⚠️ AI amplifies both good and bad practices
26 / 29
Key Takeaways
AI is a tool, not a replacement
Small, verifiable changes win
Context is everything
Don’t trust — verify
Your skills become more valuable
Plan, guide, review — you stay in control
Start small. Ship safely. Measure impact.
27 / 29
Practical AI Workflow
Effective Prompts
Planning:
"Create plan for [feature]. Save to [feature].md for review"
Bugfix:
"Fix [bug] in [file]. Include test."
Refactor:
"Extract [logic] to [target dir]."
Testing:
"Add tests for [file] edge cases."
Docs:
"Document [API] with examples."
Git Safety Flow
Start with clean dir
Have AI make one small change
git add -p → review
Test locally first
Pass? → commit
Fail? → git restore --staged
Iterate with a better prompt
Golden Rule: Smaller changes = Easier rollbacks
28 / 29
Thank You!
AI Coding Best Practices & Patterns — Tool-Agnostic
29 / 29