Joel Lawler

AI & What I'm Learning

AI-augmented workflows and continuous learning

AI & Prompt Engineering

6-Stage AI-Augmented SDLC Workflow

  1. Planning & Requirements: Use AI to analyze user stories, identify edge cases, and suggest acceptance criteria.
  2. Design: Generate architecture diagrams, API specifications, and database schemas with AI assistance, then review and refine.
  3. Implementation: AI-assisted code generation with strict code review and testing requirements.
  4. Testing: Generate test cases, identify test coverage gaps, and create integration test scenarios.
  5. Code Review: AI-powered static analysis and suggestion generation, with human reviewers focusing on logic and architecture.
  6. Documentation: Auto-generate API docs, update READMEs, and create deployment guides from code and commit history.

Example Prompts (by stage)

Copy into Cursor or ChatGPT. Substitute [bracketed] parts with your context. Enforce YAGNI and team standards.

1. Planning & Requirements
You are a senior engineer doing requirements analysis. I'm giving you a user story or feature description.

Input:
[Paste the user story, ticket, or feature description]

Output in this order:
1. Clarifying questions (max 5) I should ask product/stakeholders before implementation.
2. In-scope acceptance criteria (Given/When/Then or checklist), specific and testable.
3. Out-of-scope for this story (explicitly exclude to avoid scope creep).
4. Edge cases and failure modes we must handle (auth, validation, network, idempotency).
5. Dependencies (other services, APIs, schema changes) and risks.

Tech context: [e.g. Next.js 15, React 19, PostgreSQL, REST API]. Apply YAGNI—only what's needed for this story.

2. Design
You are a staff engineer doing technical design. I'm giving you requirements and context.

Input:
- Requirements: [summary or link to acceptance criteria]
- Constraints: [scale, compliance e.g. HIPAA/SOC2, existing stack]
- Existing: [relevant services, DB schema, or "greenfield"]

Output:
1. Proposed data model or schema changes (tables, key fields, indexes). Justify choices.
2. API contract: method, path, request/response shape, status codes, idempotency and errors.
3. Sequence or flow (e.g. "client → API → DB → event" or numbered steps). Call out failure points.
4. Security and compliance checklist for this design (auth, encryption, audit, PII).
5. Alternatives considered and why this one (one short paragraph).

Format: clear headings; code blocks for schema/API only where helpful. No implementation code yet.
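As a sketch of what stage-2 output can look like, here is a hypothetical API contract for a feedback endpoint expressed as TypeScript types. All names, fields, limits, and status codes below are illustrative assumptions, not output from a real design review:

```typescript
// Hypothetical design artifact for a POST /api/feedback endpoint.
// Field names, limits, and status codes are illustrative assumptions.

// Request body the client sends.
interface FeedbackRequest {
  message: string;   // required, 1-2000 characters
  rating?: number;   // optional, integer 1-5
}

// Body returned on 201 Created.
interface FeedbackResponse {
  id: string;        // server-generated UUID
  createdAt: string; // ISO 8601 timestamp
}

// Status codes the contract commits to, with their failure modes.
const feedbackStatusCodes: Record<number, string> = {
  201: "created",
  400: "validation error (missing or invalid field)",
  401: "unauthenticated",
  409: "idempotency key already used",
  500: "unexpected server error",
};
```

Reviewing a concrete contract like this is usually faster than reviewing prose, and the model can be asked to justify each status code against the failure points called out in step 3.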

3. Implementation
You are a senior engineer implementing a single, well-scoped task. Follow our stack and conventions strictly.

Context:
- Stack: [e.g. Next.js 15, React 19, TypeScript, Tailwind, Shadcn UI, PostgreSQL]
- Conventions: [e.g. Server Components by default, server actions for mutations, @/lib for shared code, no inline styles]
- Relevant files: [paste file paths or key snippets the new code must integrate with]

Task: [One clear sentence: e.g. "Add a POST /api/feedback handler that validates body, writes to feedback table, returns 201 or 4xx."]

Requirements:
- Match existing patterns in the codebase (naming, structure, error handling).
- No new dependencies unless justified. YAGNI: only what this task needs.
- Include brief JSDoc or comments for non-obvious logic.
- Output only the new/changed code and a 1–2 line summary of what to add where.
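For the example task above, here is a sketch of the validation piece such a prompt might produce. The names, limits, and error messages are hypothetical; the actual handler would call this before the database write and map `ok: false` to a 4xx response:

```typescript
// Hypothetical validator for the POST /api/feedback example task.
// Limits and error messages are illustrative assumptions.

interface FeedbackBody {
  message: string;
  rating?: number;
}

type ValidationResult =
  | { ok: true; value: FeedbackBody }
  | { ok: false; error: string };

/** Validate a parsed request body; returns 400-style errors, never throws. */
function validateFeedback(body: unknown): ValidationResult {
  if (typeof body !== "object" || body === null) {
    return { ok: false, error: "body must be a JSON object" };
  }
  const { message, rating } = body as Record<string, unknown>;
  if (typeof message !== "string" || message.trim().length === 0) {
    return { ok: false, error: "message is required" };
  }
  if (message.length > 2000) {
    return { ok: false, error: "message exceeds 2000 characters" };
  }
  if (
    rating !== undefined &&
    (typeof rating !== "number" || rating < 1 || rating > 5)
  ) {
    return { ok: false, error: "rating must be a number from 1 to 5" };
  }
  return {
    ok: true,
    value: { message: message.trim(), rating: rating as number | undefined },
  };
}
```

Note the shape matches the prompt's requirements: no new dependencies, errors handled explicitly, and only what this one task needs.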

4. Testing
You are a senior engineer writing tests. I'm giving you code to test and our test setup.

Input:
- Code under test: [paste the function, component, or API handler]
- Test framework: [e.g. Jest, React Testing Library, Vitest]
- Existing test patterns: [e.g. "we use describe/it, mock X with Y, file next to source as *.test.ts"]

Output:
1. Unit tests: happy path, key edge cases (null, empty, invalid input), and error paths. One describe block per logical behavior.
2. List any integration or E2E scenarios we should add (e.g. "full request to POST /api/feedback with invalid body").
3. Coverage gaps: what this suite does not cover and whether it's acceptable.

Write runnable test code only; no pseudocode. Assert specific values or behaviors, not just "does not throw".

5. Code Review
You are a principal engineer doing a thorough code review. Focus on correctness, security, performance, and maintainability.

Input:
- Diff or code: [paste the changed files or diff]
- Context: [e.g. "new API for feedback form", "migration to add column X"]

Review against:
1. Correctness: logic bugs, off-by-one, race conditions, error handling.
2. Security: input validation, auth/authz, injection, sensitive data (logs, responses).
3. Performance: N+1, missing indexes, unnecessary work, payload size.
4. Maintainability: naming, duplication, complexity, testability.
5. Standards: [e.g. "REST conventions, no raw SQL without parameterization, TypeScript strict"].

Output:
- Summary: 1–2 sentences (ship / needs changes).
- Blocking issues: must fix before merge (list with file/line and fix suggestion).
- Non-blocking: nice-to-have (same format).
- Positive note: what was done well.

Be specific: "Add null check for user.id at line 42" not "handle nulls better".
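As an example of the specificity this asks for, here is the shape of a blocking security finding (item 2) with its fix. The query, table, and placeholder style are hypothetical; `$1` placeholders follow the node-postgres convention:

```typescript
// Hypothetical before/after for a blocking SQL-injection finding.

type ParameterizedQuery = { text: string; values: unknown[] };

// Blocking: interpolating user input into SQL text enables injection.
function buildQueryUnsafe(userId: string): string {
  return `SELECT * FROM feedback WHERE user_id = '${userId}'`;
}

// Fix: keep the SQL text static and pass values separately,
// so the driver escapes them (node-postgres-style $1 placeholder).
function buildQuerySafe(userId: string): ParameterizedQuery {
  return { text: "SELECT * FROM feedback WHERE user_id = $1", values: [userId] };
}

// With a malicious input, the attack string never enters the SQL text.
const attack = "x'; DROP TABLE feedback; --";
```

A review comment quoting both forms with the file and line is far easier to act on than "sanitize inputs".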

6. Documentation
You are a technical writer producing accurate, minimal documentation from code and context.

Input:
- What to document: [e.g. "new API /api/feedback", "setup for local dev", "deployment for service X"]
- Code/repo context: [paste relevant OpenAPI snippet, env vars, or "see repo README for stack"]

Output:
1. For APIs: endpoint, method, request/response shape, example curl, error codes, and any auth required. No prose beyond what's needed.
2. For setup/deploy: numbered steps, prerequisites, env vars (with "set to X for local"), and how to verify it works.
3. Changelog-style: 2–3 bullet points for "What changed" suitable for a PR description or release note.

Tone: concise, imperative ("Run the server" not "You can run the server"). Assume reader has repo access. Update only what changed; don't duplicate existing docs.
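For item 1, the "example curl" can equally be shown as a typed client snippet. A hypothetical sketch for a feedback endpoint (the URL, fields, and status codes are illustrative, not from real docs):

```typescript
// Hypothetical doc example for POST /api/feedback: build the request
// a client would send. URL, header, and body fields are illustrative.

interface FeedbackDocRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildFeedbackRequest(message: string, rating?: number): FeedbackDocRequest {
  return {
    url: "https://api.example.com/api/feedback",
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message, rating }),
  };
}

// Equivalent curl for the docs page:
// curl -X POST https://api.example.com/api/feedback \
//   -H "Content-Type: application/json" \
//   -d '{"message":"Great feature","rating":5}'
// Success returns 201 with the created record; invalid bodies return 400.
```

Keeping the doc example as code that compiles against the real request type is one way to stop API docs from drifting out of date.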

Learning Log

Currently Reading

  • 📖 "Clear Thinking" by Shane Parrish
  • 📖 "The Great Mental Models Book 1" by Shane Parrish

Currently Learning

  • 🎓 AWS Certified AI Practitioner (AIF-C01) by Stéphane Maarek

Recently Completed

  • Advanced Prompt Engineering Course (DeepLearning.AI)

Up Next

  • 🎯 Kubernetes Advanced Patterns
  • 🎯 "The Great Mental Models Book 2" by Shane Parrish