Framework

Edge cases are the difference between "works" and "works reliably."

Systematic edge case discovery:

Category | Examples | Test
-----------|-------------------|-------------------
Empty states | No data, new user, no history | Does system handle gracefully?
Boundary values | Min/max values, zero, negative | Do calculations handle them?
Extreme scale | 1M items, 1M users | Does performance hold?
Concurrent operations | Same user, two sessions | Does system conflict?
Degradation | API down, network slow | Does UX degrade gracefully?
Bad data | Invalid input, corrupted data | Does system reject cleanly?

For each category: List 5-10 specific edge cases your feature must handle.

Actionable Steps

1. Create Edge Case Matrix

List all features + categories above. Fill in specific edge cases.

2. Test Each Edge Case

Don't assume—test. Automated tests for clear edge cases, manual tests for complex ones.

3. Document Handling

For each edge case: How does system handle it? Is this acceptable?

Why Ad-Hoc Edge Case Discovery Fails

Typical PM brainstorm:

PM: "OK, what edge cases should we handle?"

Person 1: "What if there's no data?"

Person 2: "What if it's really slow?"

Person 3: "What if users click really fast?"

PM (5 min later): "OK, I think we've got them all. Ship it."

Month 1 in production: Users find 10 edge cases nobody thought of. 3 bugs per week.


Framework: Systematic Edge Case Discovery

Category 1: Empty/Null States

Edge Case | Expected Behavior
-----------|-------------------
New user, no data | Show template/onboarding
Zero inventory | Show "out of stock"
No permissions | Show "upgrade to see"
Missing field | Use default or skip
Null image | Show placeholder
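The empty/null rows above can be handled with explicit fallbacks rather than scattered null checks. Here is a minimal Python sketch; `render_product_card` and the placeholder path are hypothetical names, not part of any real codebase:

```python
PLACEHOLDER_IMAGE = "/static/placeholder.png"  # hypothetical asset path

def render_product_card(product: dict) -> dict:
    """Build a display model that tolerates missing or empty fields."""
    return {
        # Missing field -> default, per the table above
        "name": product.get("name") or "Untitled product",
        # Null image -> placeholder
        "image": product.get("image") or PLACEHOLDER_IMAGE,
        # Zero inventory -> explicit "out of stock" label
        "availability": "in stock" if product.get("inventory", 0) > 0 else "out of stock",
    }
```

The point is that every empty state in the table maps to one deliberate line of fallback logic, not an accidental crash.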

Category 2: Boundary Values

Edge Case | Expected Behavior
-----------|-------------------
Min value (0, 1) | System handles, doesn't crash
Max value (1B items) | System handles gracefully
Negative numbers | Reject or convert to 0
Very long strings (10K chars) | Truncate or warn, don't corrupt
Very small/large numbers | Calculate correctly
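Boundary rows like these translate directly into a validator that probes exactly at, just below, and just above each limit. A minimal sketch, assuming hypothetical limits of 1 and 1000 (substitute your feature's real bounds):

```python
MIN_QTY, MAX_QTY = 1, 1000  # hypothetical limits from the feature spec

def validate_quantity(qty: int) -> int:
    """Reject out-of-range values before they reach any calculation."""
    if qty < MIN_QTY or qty > MAX_QTY:
        raise ValueError(f"quantity must be between {MIN_QTY} and {MAX_QTY}")
    return qty
```

The cases worth testing are the boundaries themselves: 0, 1, 1000, 1001, and a negative number.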

Category 3: Concurrency/Race Conditions

Edge Case | Expected Behavior
-----------|-------------------
Simultaneous purchases, last item | One succeeds, one "out of stock"
User logs in twice (desktop + mobile) | Session conflict resolved
Edit + delete simultaneously | Conflict resolution
Payment processed twice | Idempotent (charged once)
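The "payment processed twice" row is usually solved with an idempotency key: retries and double-clicks replay the same key, and the server charges at most once per key. A minimal in-memory sketch (the class and receipt format are illustrative, not a real payments API):

```python
import threading

class PaymentProcessor:
    """Idempotent-charge sketch: replays of the same key charge once."""
    def __init__(self):
        self._receipts = {}           # idempotency key -> receipt
        self._lock = threading.Lock()
        self.charges = 0              # how many real charges happened

    def charge(self, idempotency_key: str, amount_cents: int) -> str:
        with self._lock:              # serialize writers, even across threads
            if idempotency_key in self._receipts:
                return self._receipts[idempotency_key]  # replay: no new charge
            self.charges += 1
            receipt = f"receipt-{idempotency_key}"
            self._receipts[idempotency_key] = receipt
            return receipt
```

The same pattern (key + lock + dedupe) covers the double-click checkout and retry-after-timeout rows.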

Category 4: State Transitions

Edge Case | Expected Behavior
-----------|-------------------
User cancels mid-checkout | Cancel gracefully
User refreshes during submission | Don't resubmit
Browser back button | Show stale or fresh, but don't repeat action
User closes app mid-sync | Resume cleanly or restart
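The "refresh during submission" row is commonly guarded with a one-shot form token: the refresh replays the old token, which the server rejects, so the order isn't created twice. A minimal sketch with hypothetical names:

```python
import secrets

class CheckoutForm:
    """One-shot submission token: replayed tokens are rejected."""
    def __init__(self):
        self._valid_tokens = set()
        self.orders = []

    def render(self) -> str:
        token = secrets.token_hex(8)   # embedded in the page as a hidden field
        self._valid_tokens.add(token)
        return token

    def submit(self, token: str, cart_id: str) -> bool:
        if token not in self._valid_tokens:
            return False               # replayed or stale token: ignore resubmit
        self._valid_tokens.remove(token)
        self.orders.append(cart_id)
        return True
```

Browser back and tab close fall out of the same idea: a stale token cannot repeat the action.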

Category 5: Data Quality/Corruption

Edge Case | Expected Behavior
-----------|-------------------
Malformed input (SQL injection) | Sanitize or reject
Invalid email | Validate or reject
Corrupted DB record | System handles, doesn't crash
Stale cache after restart | Refresh, don't serve stale
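The data-quality rows reduce to one rule: validate or reject at the boundary, so bad data never reaches storage. A minimal sketch (the function name and the deliberately coarse email pattern are illustrative assumptions):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # deliberately coarse check

def clean_signup(form: dict) -> dict:
    """Validate-or-reject: malformed input raises instead of persisting."""
    email = str(form.get("email", "")).strip()
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email")
    # For SQL, never concatenate input into the query string; bind parameters
    # instead, e.g. cursor.execute("INSERT INTO users(email) VALUES (?)", (email,))
    return {"email": email}
```

Parameterized queries, not string sanitizing, are what actually close the SQL-injection row.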

Real-World Example: E-Commerce Checkout Edge Cases

Systematic discovery found 23 edge cases:

  • Empty/Null (5): No address, no payment, zero-price item, expired cart, expired coupon
  • Boundary (6): $0.01 min, $100K max, qty 1-1000, long addresses, negative discounts, money precision
  • Concurrency (4): Simultaneous buy last item, double-click checkout, payment + inventory race, timeout handling
  • State (5): Browser back, page refresh, tab close, navigate away + return, address changed
  • Data Quality (3): Special characters in address, invalid postal code, stale inventory cache

Result: 0 critical bugs in first month. Feature shipped with confidence.


Anti-Pattern: "We'll Find Edge Cases in QA"

The Problem: Edge cases surface late, in QA or in production, and every one triggers rework.

The Fix: Discover edge cases systematically during PRD writing. Build guards. QA tests them (not discovers them).


Actionable Steps

Step 1: Use the 5-Category Framework

List edge cases in each category for your feature.

Step 2: Create an Edge Case Matrix

FEATURE: Checkout

Empty/Null: No address, No payment, Zero-price item
Boundary: Min $0.01, Max $100K, Qty 1-1000
Concurrency: Simultaneous buy, Double-click
State: Browser back, Refresh, Tab close
Data: Special chars, Invalid postal code

Gate: All tested ✓
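The matrix above is simple enough to keep as structured data, which makes the gate checkable. A minimal sketch, assuming the five categories from the framework (all names are illustrative):

```python
EDGE_CASE_MATRIX = {
    "Empty/Null":  ["No address", "No payment", "Zero-price item"],
    "Boundary":    ["Min $0.01", "Max $100K", "Qty 1-1000"],
    "Concurrency": ["Simultaneous buy", "Double-click"],
    "State":       ["Browser back", "Refresh", "Tab close"],
    "Data":        ["Special chars", "Invalid postal code"],
}

REQUIRED_CATEGORIES = {"Empty/Null", "Boundary", "Concurrency", "State", "Data"}

def matrix_gaps(matrix: dict) -> set:
    """Categories with no edge cases listed; the gate fails if non-empty."""
    return {c for c in REQUIRED_CATEGORIES if not matrix.get(c)}
```

An empty `matrix_gaps` result is the "all categories covered" half of the gate; testing each row is the other half.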

Step 3: Assign Ownership

For each edge case: Who tests? Who fixes if broken?

Step 4: Test Each Edge Case

  • Automated for data/boundary cases
  • Manual for concurrency/state cases
  • Don't assume, verify
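The automated arm of Step 4 is often a table-driven test: one row per edge case, each stating its input and its expected result or expected rejection. A minimal sketch around a hypothetical `apply_discount` function:

```python
def apply_discount(price_cents: int, percent: int) -> int:
    """Hypothetical system under test: discount must stay within 0-100."""
    if percent < 0 or percent > 100:
        raise ValueError("discount must be between 0 and 100")
    return price_cents * (100 - percent) // 100

# One row per edge case: (inputs, expected result or expected exception)
CASES = [
    ((1, 0), 1),               # min price, no discount
    ((10_000_000, 100), 0),    # max price, full discount
    ((500, -10), ValueError),  # negative discount rejected
    ((500, 101), ValueError),  # >100% rejected
]
```

Adding an edge case then means adding a row, not writing a new test.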

Step 5: Document Expected Behavior

For each edge case, write: Trigger + Expected Behavior + Actual Behavior + Status
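That four-field record is small enough to keep as structured data alongside the matrix. A minimal sketch (the class name and status values are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class EdgeCaseRecord:
    """One documented edge case: trigger + expected + actual + status."""
    trigger: str
    expected: str
    actual: str = ""
    status: str = "untested"   # untested | pass | fail
```

Records start as "untested" and are only marked "pass" once the actual behavior has been observed, which keeps Step 4 and Step 5 honest about what was verified.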


PMSynapse Connection

Edge case discovery is tedious. PMSynapse's Edge Case Generator reads your PRD and suggests edge cases: "You mentioned 'buy items'. Here are 12 edge cases to consider." By systematically surfacing edge cases, PMSynapse reduces the chance production discovers them first.


Key Takeaways

  • Edge cases aren't exceptional—they're normal in production. Plan for them.

  • Systematic beats ad-hoc. Use 5 categories (Empty + Boundary + Concurrency + State + Data) to find 80% upfront.

  • Document expected behavior. For each edge case, write what should happen before engineering builds.

  • Test before shipping. Automated for data, manual for concurrency/state. Don't ship untested edge cases.

  • QA confirms, doesn't discover. If QA finds edge cases, your PRD discovery was incomplete.

Systematic Edge Case Discovery: Finding the Scenarios You Haven't Considered

Article Type

SPOKE Article — Links back to pillar: /prd-writing-masterclass-ai-era

Target Word Count

2,500–3,500 words

Writing Guidance

Provide a systematic methodology: user state combinations, error paths, boundary conditions, time-based scenarios, permission boundary tests, and data migration edge cases. Reference PRD's guided decomposition concept. Soft-pitch: PMSynapse flags gaps in edge case coverage automatically.

Required Structure

1. The Hook (Empathy & Pain)

Open with an extremely relatable, specific scenario from PM life that connects to this topic. Use one of the PRD personas (Priya the Junior PM, Marcus the Mid-Level PM, Anika the VP of Product, or Raj the Freelance PM) where appropriate.

2. The Trap (Why Standard Advice Fails)

Explain why generic advice or common frameworks don't address the real complexity of this problem. Be specific about what breaks down in practice.

3. The Mental Model Shift

Introduce a new framework, perspective, or reframe that changes how the reader thinks about this topic. This should be genuinely insightful, not recycled advice.

4. Actionable Steps (3-5)

Provide concrete actions the reader can take tomorrow morning. Each step should be specific enough to execute without further research.

5. The Prodinja Angle (Soft-Pitch)

Conclude with how PMSynapse's autonomous PM Shadow capability connects to this topic. Keep it natural — no hard sell.

6. Key Takeaways

3-5 bullet points summarizing the article's core insights.

Internal Linking Requirements

  • Link to parent pillar: /blog/prd-writing-masterclass-ai-era
  • Link to 3-5 related spoke articles within the same pillar cluster
  • Link to at least 1 article from a different pillar cluster for cross-pollination

SEO Checklist

  • Primary keyword appears in H1, first paragraph, and at least 2 H2s
  • Meta title under 60 characters
  • Meta description under 155 characters and includes primary keyword
  • At least 3 external citations/references
  • All images have descriptive alt text
  • Table or framework visual included