When Your API Is Up But Unusable: The 3AM Pager Story Every Developer Fears

It was 3 AM when the pager went off. Your API dashboard showed green lights everywhere, but customers were screaming about timeouts. This isn't fiction: it's exactly what happened to Stripe in March 2022, when median response times skyrocketed from 120ms to over 3 seconds, causing chaos across thousands of integrated businesses [1]. The terrifying truth? Your system can be 'up' while being completely unusable, and most monitoring setups won't catch it until it's too late.

The Silent Killer: Latency as a First-Class Signal

Most developers treat uptime as their primary metric, but latency is the silent killer that destroys user experience. When Stripe's incident occurred, their services were technically healthy: no 500 errors, no crashed pods. Yet response times had degraded to the point where client timeouts and duplicate transactions became the norm [1]. The lesson? Track latency percentiles, not just averages. Averages hide the tail: your mean can look acceptable while the slowest 5% of requests take seconds.

💡 Key Insight: Use histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) in Prometheus to track your 95th percentile latency; this is what your worst-served users actually experience [2].
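The mean-versus-percentile gap is easy to see in a tiny, self-contained Python sketch. The latency numbers below are made up for illustration, and the percentile uses the simple nearest-rank method rather than Prometheus's bucket interpolation:

```python
# Sketch: why mean latency hides tail pain.
# 90 fast requests at 100ms plus 10 slow ones at 5s: the mean looks
# tolerable while 1 in 10 users waits five seconds.
def mean(samples):
    return sum(samples) / len(samples)

def percentile(samples, p):
    """Nearest-rank percentile (0 < p <= 100)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [100] * 90 + [5000] * 10

print(mean(latencies_ms))            # 590.0 -- looks "fine" on a dashboard
print(percentile(latencies_ms, 95))  # 5000  -- what the tail actually sees
```

The same data, two very different stories: the average would never page anyone, while the p95 shows exactly the kind of degradation Stripe's users felt.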

The Detective Work: Following the Breadcrumbs

When that alert hits ('API response time increased from 200ms to 2s'), where do you even start? Here's the systematic approach that separates junior engineers from SRE pros:

1. Query Prometheus for the smoking gun: check both latency percentiles and error rates with rate(http_requests_total{status=~"5.."}[5m]) / rate(http_requests_total[5m]) [3]
2. Trace the path: use OpenTelemetry to find the exact service causing the bottleneck by filtering traces with duration > 1s and following parent_span_id relationships [4]
3. Correlate with system metrics: in Grafana, check CPU usage (rate(container_cpu_usage_seconds_total[5m])), memory pressure (container_memory_working_set_bytes), and database connection pools (pg_stat_activity_count) [5]

⚠️ Watch Out: Many teams stop at step 1 and blame 'the network' or 'cloud provider issues.' The real culprit is often hiding in plain sight within your own services. Effective incident response also requires team coordination and clear communication channels.
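For intuition on what the step-1 PromQL ratio computes, here is a minimal Python sketch of the same arithmetic applied to two snapshots of monotonic counters taken a window apart. The counter values are invented for illustration, and real Prometheus rate() additionally handles counter resets, which this sketch ignores:

```python
# rate(x[5m]) is roughly (counter_now - counter_5m_ago) / 300s.
# The error ratio divides the 5xx rate by the total request rate.
def per_second_rate(prev, curr, window_s):
    return (curr - prev) / window_s

def error_ratio(prev_total, curr_total, prev_5xx, curr_5xx, window_s=300):
    total_rate = per_second_rate(prev_total, curr_total, window_s)
    if total_rate == 0:
        return 0.0  # no traffic in the window, nothing to blame
    return per_second_rate(prev_5xx, curr_5xx, window_s) / total_rate

# 30,000 requests over 5 minutes, 1,500 of them 5xx -> 5% error rate
print(error_ratio(100_000, 130_000, 2_000, 3_500))  # 0.05
```

A ratio like 0.05 paired with a latency spike tells you whether you are looking at a slow-but-succeeding system or one that is actively failing, which changes where you look next.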

The Plot Twist: It's Not What You Think

Here's where most incident responses go wrong: you find the slow service and immediately start optimizing code. But what if the real issue is architectural? Stripe discovered their problem wasn't just slow queries; it was how failures cascaded through their system [1]. When one service slowed down, client retries amplified the problem 10x, creating a thundering herd that overwhelmed their infrastructure.

🔥 Hot Take: Your retry logic might be your biggest liability. Implement exponential backoff with jitter, and consider circuit breakers that fail fast when downstream services are struggling [6]. The best failure mode isn't retrying forever; it's graceful degradation with partial functionality.
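Both patterns fit in a few lines of Python. This is a deliberately minimal sketch, not a production client: the base delay, cap, and failure threshold are illustrative assumptions, and a real circuit breaker would also add a half-open recovery state:

```python
import random

def backoff_delays(base=0.1, cap=10.0, attempts=5, rng=random.random):
    """Full-jitter exponential backoff: sleep a random amount in
    [0, min(cap, base * 2**attempt)] so synchronized clients don't
    retry in lockstep and amplify the thundering herd."""
    return [rng() * min(cap, base * 2 ** a) for a in range(attempts)]

class CircuitBreaker:
    """Open after `threshold` consecutive failures, then fail fast
    instead of hammering a struggling dependency."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1  # count the failure, let the caller see it
            raise
        self.failures = 0       # any success closes the circuit again
        return result
```

Failing fast is the point: once the circuit opens, callers get an immediate error they can degrade around, instead of queueing retries behind a service that is already drowning.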

The War Room: Real-Time Incident Response

Picture the scene: engineers huddled around screens, Slack channels blowing up, executives demanding updates. This is where your observability setup either saves you or betrays you. The teams that handle incidents well have three things in common:

- Pre-built dashboards that show the full story: not just metrics, but traces, logs, and business impact correlated together [7].
- Runbooks that specify exact queries to run, like checking recent deployments with git_commit tags or comparing performance across availability zones [8].
- Blameless post-mortems that focus on systemic improvements rather than individual mistakes [9].

🎯 Key Point: The goal isn't to prevent incidents (that's impossible). The goal is to detect them faster, respond more effectively, and learn continuously from each experience.

Real-World Case Study: Stripe

In March 2022, Stripe experienced a three-hour API latency surge in which median response times for key endpoints rose from 120ms to over 3 seconds, causing timeouts, duplicate transactions, and user frustration across thousands of integrated businesses.

Key Takeaway: Design for slow failure, not just hard failure. Track latency as a first-class signal, since systems can remain 'up' while being unusable. Separate client retries from core queues to prevent amplification loops, and implement graceful degradation with partial fallbacks.

Incident Response Flow

flowchart TD
    A[Alert: 200ms → 2s latency] --> B[Prometheus Query]
    B --> C[95th Percentile Check]
    C --> D[Error Rate Analysis]
    D --> E[OpenTelemetry Traces]
    E --> F[Identify Bottleneck Service]
    F --> G[Grafana System Metrics]
    G --> H[CPU/Memory/DB Analysis]
    H --> I{Recent Deployment?}
    I -->|Yes| J[Rollback Strategy]
    I -->|No| K[Infrastructure Issue]
    J --> L[Monitor Recovery]
    K --> M[Scale/Fix Resources]
    L --> N[Post-Mortem]
    M --> N

Did you know? The term 'observability' was coined by Hungarian engineer Rudolf Kálmán in the 1960s for control theory, but wasn't applied to software systems until Google's SRE team popularized it in the 2010s [10].

Key Takeaways

- Always track 95th percentile latency, not just averages
- Use OpenTelemetry traces to identify bottlenecks across service boundaries
- Implement circuit breakers to prevent cascade failures
- Separate client retries from core queues to avoid amplification loops
- Build pre-configured dashboards for rapid incident response

References

[1] The Stripe Latency Post-Mortem Every Engineer Should Read Before Launching Their API (blog)
[2] Prometheus Querying Documentation (documentation)
[3] Histograms and Summaries in Prometheus (documentation)
[4] Grafana Dashboard Best Practices (documentation)
[5] Circuit Breaker Pattern (blog)
[6] Google SRE Book: Monitoring Distributed Systems (documentation)
[7] Kubernetes Observability Best Practices (documentation)
[8] Site Reliability Engineering (documentation)
[9] Distributed Systems Observability (documentation)
[10] Chaos Engineering for Resilience (documentation)
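The key decision point in the flowchart above can be sketched as a tiny triage helper. The 2x-degradation threshold and the outcome labels here are illustrative assumptions, not from any real runbook:

```python
# Hypothetical triage helper mirroring the flowchart's branch:
# a latency regression right after a deploy points to rollback;
# otherwise look at infrastructure capacity.
def triage(p95_ms, baseline_p95_ms, recently_deployed, factor=2.0):
    if p95_ms < baseline_p95_ms * factor:
        return "healthy"
    return "rollback" if recently_deployed else "scale-or-fix-resources"

print(triage(2000, 200, recently_deployed=True))   # rollback
print(triage(2000, 200, recently_deployed=False))  # scale-or-fix-resources
print(triage(210, 200, recently_deployed=True))    # healthy
```

Encoding even a crude rule like this into a runbook or alert annotation gives the 3 AM responder a default next action instead of a blank page.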


Wrapping Up

The next time your pager goes off at 3 AM, remember Stripe's lesson: your system can be 'up' while being completely broken. The difference between a minor incident and a company-wide outage comes down to whether you're tracking the right signals. Start treating latency as a first-class metric, build those correlation dashboards before you need them, and design for graceful degradation rather than hard failures. Your future self—and your customers—will thank you.

Satishkumar Dhule
Software Engineer
