Clayton Davis Blog

The blog of an overly opinionated cloud consultant

28 Oct 2025

Enabling an AI-Powered SDLC by Shifting Left

Why the bottleneck isn’t your engineers—it’s your requirements.

If your AI-enablement rollout starts and ends with “developer enablement,” you’ll hit a ceiling fast. Sustainable acceleration happens when product and engineering co-author richer requirements and context—up front—so AI can one-shot features instead of thrashing on code iterations.

The Core Idea: Spend Cycles Where They Compound

In a traditional flow, we rush scoping, write a thin ticket, and toss it over the wall to get it done fast. Engineers try to fill in the gaps, iterate with AI (or not), and bounce PRs through review. There's lots of motion in UAT as teams iterate on those lackluster requirements, but not all of it is progress.

In an AI-driven SDLC, the cheapest place to invest time is before code exists—while the information surface is still small. Every sentence of clarified intent becomes leverage:

  • Clearer requirements → fewer AI misfires
  • Richer context → higher-fidelity first drafts
  • Shared constraints → less rework and debate

When your prompt + context are strong, the model can produce code, tests, telemetry, and docs that land close to done on the very first pass. The goal is one-shot features: merge after a single human pass, not five.


Why Product Must Lead

Product managers think in problems, users, outcomes, edge cases, and acceptance. That’s exactly the shape a model needs. Put differently, product managers are natural prompt architects—they just don’t call it that.

What product managers contribute that engineers can't easily retrofit later:

  • User & domain semantics: language the model should mirror in UX and docs
  • Outcomes over tasks: target behaviors and success metrics, not implementation details
  • Edge-case narratives: the “weird but real” flows that cause 80% of rework
  • Non-functionals in business terms: what “fast,” “secure,” and “compliant” actually mean for this user and environment

Once that foundation exists, developers layer technical context—house rules, arch constraints, interfaces, data models, code layout, relevant files—to guide codegen toward your stack and standards.

Sequence matters: Product shapes the what and why; engineering binds the how to your system.


Shift-Left, Practically: The Five Context Blocks

Replace thin tickets with a single, long-form context artifact attached to the work item/ticket. Think of it as the “meal kit” the model cooks from. Five blocks:

  1. Problem & Users

    • Persona, pain, outcome, success criteria, business rules
    • Edge cases
  2. Behavior & Acceptance

    • Happy path + alternative flows
    • Given/When/Then examples in the language of the domain
    • UX notes, empty-state copy, error messages
  3. System Constraints

    • Target services/APIs, data ownership, event shapes, idempotency rules
    • Privacy/compliance flags (PII, HIPAA/PCI), tenancy model
  4. Quality & Operability

    • Performance SLOs (p95, payload sizes), error budgets, timeouts/retries
    • Observability contract: logs, metrics, traces, event names, alerts
  5. House Rules (Reusable)

    • Architecture patterns (e.g., hexagonal), error handling, security posture
    • Style guides, lint rules, testing pyramid, feature flags, i18n/a11y

Tip: Keep Block 5 as a versioned “House Rules Manifest” referenced by every ticket. It shouldn’t be retyped per ticket; engineers maintain it.
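
To make this concrete, here’s a sketch of what a five-block artifact might look like as YAML attached to a ticket. Every name and value in it (the feature, the persona, the manifest tag) is illustrative, not prescriptive:

```yaml
# Illustrative five-block context artifact for a hypothetical
# "export invoices as CSV" feature. All values are made up.
problem_and_users:
  persona: "Finance admin at a mid-market tenant"
  pain: "Manually copies invoice data into spreadsheets every month"
  outcome: "Self-serve CSV export of filtered invoices"
  success_criteria: "Exports complete without a support ticket"
  edge_cases:
    - "Tenant with zero invoices in the selected range"
    - "Export requested while a prior export is still running"

behavior_and_acceptance:
  happy_path: "Filter invoices, click Export, receive download link"
  acceptance:
    - given: "a tenant with 500 invoices in March"
      when: "the admin exports March with status=paid"
      then: "the CSV contains only paid March invoices, UTF-8, header row"
  ux_notes: "Empty state copy: 'No invoices match your filters.'"

system_constraints:
  services: [billing-api, export-worker]
  data_ownership: "billing-api owns invoice records"
  idempotency: "re-submitting the same export request returns the same job id"
  compliance: ["PII: customer names and emails appear in the export"]

quality_and_operability:
  slo: "p95 export start < 2s; file ready < 60s for 10k rows"
  observability: "emit export.requested / export.completed events with tenant_id"

house_rules: "house-rules-manifest@v3"  # Block 5: versioned, engineer-owned
```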

The Goal: The “One-Shot” Loop

  1. Capture → Expand: Product managers draft Blocks 1–2; an LLM expands scenarios and acceptance tests.
  2. Bind to System: Dev adds Blocks 3–5 (constraints, house rules).
  3. Generate: AI creates code + tests + telemetry + docs in a branch with an ephemeral environment.
  4. HITL Review: Developers review code prior to deployment; Product/QA click through; approve or annotate gaps.
  5. Lock-In: Regenerate tests from the approved behavior (post-acceptance).
  6. Merge & Monitor: Ship behind a flag; watch SLOs and business KPIs.

Aim for merge after one human pass. If you’re averaging 3–5 passes, your context is too thin or your house rules are too implicit. Retros should start asking how the context could have been written better to get to a single pass.
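
If you want to encode the loop rather than leave it to tribal memory, a pipeline definition along these lines is one option. This is a sketch of the process, not any real CI vendor’s schema; all stage and field names are assumptions:

```yaml
# Hypothetical one-shot loop expressed as pipeline stages.
pipeline: one-shot-feature
stages:
  - name: capture-expand
    owner: product
    produces: [block-1-problem, block-2-behavior]
    assist: "LLM expands scenarios and acceptance tests"
  - name: bind-to-system
    owner: engineering
    produces: [block-3-constraints, block-4-quality, block-5-house-rules-ref]
  - name: generate
    owner: ai-agent
    produces: [code, tests, telemetry, docs]
    target: "feature branch with ephemeral environment"
  - name: hitl-review
    owner: [engineering, product, qa]
    gate: "approve or annotate gaps"
  - name: lock-in
    owner: engineering
    produces: ["tests regenerated from approved behavior"]
  - name: merge-and-monitor
    gate: feature-flag
    watch: [slos, business-kpis]
```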


Why Developer-Only Enablement Tops Out

Starting with IDE copilots and codegen feels great—until ambiguity, rework, and cross-team alignment swamp the gains. Symptoms of the ceiling:

  • Tickets read like “Build X” with no narrative or acceptance
  • PRs bounce on “this isn’t how we do logging/auth/validation”
  • AI produces plausible but wrong flows (right nouns, wrong verbs)
  • Designers and product managers review late, creating late-stage pivots

The fix isn’t “more AI in the editor.” It’s shifting left so the AI has something trustworthy to aim at.

Of note: fewer requirements plus faster iteration is a model that works really well for quick POCs and pilots. AI’s ability to iterate quickly on feedback makes it a super tool there. But that model doesn’t scale into well-established product orgs.


Context Engineering: Make the Implicit Explicit

Think of context engineering as curating the ingredients list for the model. Yes, some is unique per ticket—edge cases, constraints. The rest is reusable doctrine:

  • Architecture & Integration: service boundaries, retries, idempotency
  • Security: authZ model, PII boundaries, secrets handling
  • Testing: fixture conventions, contract test philosophy, snapshot policy
  • Observability: log fields, metric names, trace spans, alert thresholds
  • UX Writing: tone, capitalization, error style, empty states
  • Performance: p95 targets by endpoint class, budgets by screen

Maintain these as machine-readable manifests (Markdown/YAML) and reference them by version in tickets. When the manifest changes, you get governance and reproducibility “for free.”
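A minimal sketch of what such a versioned manifest could look like follows; the categories mirror the list above, but every field name and value is an assumption to adapt to your org:

```yaml
# house-rules-manifest@v3 -- illustrative only.
version: 3
architecture:
  pattern: hexagonal
  integration: "retries: 3 with exponential backoff; all writes idempotent"
security:
  authz: "role-based, enforced at the API gateway"
  pii: "never logged; masked in traces"
  secrets: "injected at runtime, never committed"
testing:
  fixtures: "factory functions per aggregate; no shared mutable fixtures"
  contracts: "consumer-driven contract tests on every public API"
  snapshots: "UI only; reviewed like code"
observability:
  log_fields: [tenant_id, request_id, actor]
  metric_naming: "service.verb.noun (e.g., billing.create.invoice)"
  alerting: "page on SLO burn rate, not raw error count"
ux_writing:
  tone: "plain, active voice"
  errors: "say what happened and what to do next"
performance:
  p95_by_endpoint_class: { read: "200ms", write: "500ms", export: "60s" }
```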


Tooling Without the Religion

You don’t need a perfect stack to start. Useful building blocks:

  • Capture: auto-ingest Zoom/Meet transcripts; label speakers & decisions
  • Expansion: LLM agent that turns raw notes into Blocks 1–2
  • Binding: templates/snippets for Blocks 3–5, backed by a house-rules repo
  • Generation: your favorite codegen (Cursor, Copilot, v0, or in-house agent)
  • Preview: ephemeral environments on every PR
  • Lock-in: test synthesis (unit/contract/e2e) from accepted behavior
  • Traceability: store prompts, manifests, model versions with the PR (one possible record shape is sketched below)

Keep it tool-agnostic; the process is the moat. Layer AI into your existing process for the best adoption.
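
For the traceability item above, one lightweight approach is a small provenance record committed alongside the PR. The shape and filename below are assumptions, not a standard:

```yaml
# .ai-trace.yaml -- hypothetical provenance record stored with the PR.
ticket: PROJ-1234
context_artifact: "tickets/PROJ-1234/context.md"
house_rules: "house-rules-manifest@v3"
model: "example-model-2025-06"   # placeholder; record your actual model version
prompts:
  - stage: generate
    sha256: "..."                # hash of the exact prompt used
outputs: [code, tests, telemetry, docs]
reviewed_by: [dev, product, qa]
```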


Anti-Patterns to Avoid

  • Thin Tickets, Thick PRs: If the thinking happens in the PR, you’re late.
  • Prompt Drift: Manually copying context; move to manifests with versions.
  • Micro-slicing Too Early: Models need adjacent context; don’t starve them.
  • Tool Fetish: Swapping vendors instead of fixing upstream ambiguity.
  • Governance Afterthought: No prompt logs, no model versions, no audit trail.

How to Start (30-Day Playbook)

Week 1 – Define the House

  • Write v1 of your House Rules Manifest (architecture, testing, observability, security)
  • Convert 2 recent features into the five-block format to calibrate

Week 2 – Product as Prompt Architects

  • Train product managers on Blocks 1–2; run a transcript-to-context workshop
  • Add acceptance examples in domain language to three upcoming tickets

Week 3 – Bind & Generate

  • Engineers add Blocks 3–5; wire your repo to generate code + tests + telemetry
  • Turn on ephemeral envs per PR; require product click-through before merge

Week 4 – Lock & Measure

  • Synthesize lock-in tests post-acceptance
  • Track: “passes per feature,” “time-to-first-accept,” “rework PRs,” and “% one-shot merges” (suggested definitions below)
  • Retrospect: where context was missing, move it upstream into manifests
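
Those four metrics are easy to talk past each other on unless you pin down the formulas. One possible set of definitions, offered as a starting point rather than a standard:

```yaml
# Suggested definitions for the Week 4 metrics; tune to your own workflow.
metrics:
  passes_per_feature:
    definition: "review-and-revise cycles between first AI draft and merge"
    target: "<= 1"
  time_to_first_accept:
    definition: "hours from ticket ready to first product click-through approval"
  rework_prs:
    definition: "PRs opened to fix behavior within 14 days of a feature's merge"
  one_shot_merge_rate:
    definition: "merges_with_one_human_pass / total_feature_merges"
    target: "trending upward week over week"
```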

What Good Looks Like

  • Tickets read like small, coherent “PRDs” with domain narratives and explicit acceptance
  • Engineers add a short binding addendum, not a novel
  • First AI draft is runnable and mostly correct; review focuses on UX and naming, not correctness
  • Tests, telemetry, and docs appear with minimal prompts
  • Teams debate product behavior in the ticket, not implementation in the PR
  • You regularly see one-shot merges for well-bounded features

The Payoff

Shifting left is not busier work—it’s cheaper work. It replaces five cycles of code churn with one cycle of intentionality. And it scales: each clarified ticket becomes new training data for your organization’s “collective prompt,” raising the floor for every future feature.

If you’re serious about an AI-driven SDLC, don’t start with more autocomplete. Start by teaching your organization to write down what it really means. Then let the models do what they do best.