<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Rahul Srivastava Collections!]]></title><description><![CDATA[Thoughts, insights and explorations on quality engineering, software testing, DevOps, AI innovations and software development best practices. 

You might also see some of my personal ramblings, please ignore :)]]></description><link>https://nohappypath.com</link><image><url>https://cdn.hashnode.com/uploads/logos/69c55e5f10e664c5daf8a077/0e0ae41c-149a-4634-bf82-52e84085bec8.png</url><title>Rahul Srivastava Collections!</title><link>https://nohappypath.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 24 Apr 2026 13:37:37 GMT</lastBuildDate><atom:link href="https://nohappypath.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Layer You Think You Climbed]]></title><description><![CDATA[Higher value work isn't a vibe.

It has a shape.
And if you want to know whether AI has actually moved you up or just made you faster at standing still, you have to be able to see the shape clearly en]]></description><link>https://nohappypath.com/the-layer-you-think-you-climbed</link><guid isPermaLink="true">https://nohappypath.com/the-layer-you-think-you-climbed</guid><category><![CDATA[AI]]></category><category><![CDATA[Productivity]]></category><category><![CDATA[Futureofwork]]></category><category><![CDATA[Career]]></category><category><![CDATA[Critical Thinking]]></category><dc:creator><![CDATA[Rahul Srivastava]]></dc:creator><pubDate>Sat, 11 Apr 2026 12:21:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/4d5ddb2a-61f3-4535-ad6d-df8d3d5ee64b.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<h3>Higher value work isn't a vibe.</h3>
</blockquote>
<p>It has a shape.</p>
<p>And if you want to know whether AI has actually moved you up or just made you faster at standing still, you have to be able to see the shape clearly enough to find yourself inside it.</p>
<p>Think about your work week as three layers stacked on top of each other.</p>
<h2>The Three Layers</h2>
<h3><strong>"Execution"</strong> is the bottom layer</h3>
<p>It's the doing: drafting the deck, writing the status update, running the standard analysis, formatting the report, summarising the meeting, and working through the checklist. Anything where "done" is definable in advance and the steps could, in principle, be written down for someone else to follow.</p>
<p>This is the layer where the floor has dropped out.</p>
<p>Not because AI does Execution work perfectly (it doesn't, and anyone who tells you otherwise hasn't shipped anything important recently), but because the time cost has collapsed far enough that nobody pays a premium for execution alone anymore.</p>
<p>What used to take three hours of competent effort now takes twenty minutes of competent direction.</p>
<p>The work still happens. It's just no longer scarce, and value tracks scarcity, not effort.</p>
<p><em>If most of your week is Execution hours, you're standing on the layer that's being commoditised under your feet, in real time. The discomfort of that sentence is the point.</em></p>
<h3><strong>"Judgement"</strong> is the layer above Execution</h3>
<p>It's the two questions, both of which AI cannot answer for you, no matter how good the model gets.</p>
<p><strong>The Upstream Question</strong></p>
<p>Of all the things you could point this machine at, which one actually matters here? For <em>this</em> stakeholder, in <em>this</em> situation, given everything you know that isn't written down anywhere: the history, the politics, the thing the last project taught you that nobody documented.</p>
<p>Choosing the right problem is a different skill from solving it, and AI helps with the second much more than the first.</p>
<p><strong>The Downstream Question</strong></p>
<p>Now that the output exists, is it right?</p>
<p>Not "does it look right", does it hold up?</p>
<p>Against reality, against context, against what could actually go wrong if someone acts on it?</p>
<p>Verification isn't checking grammar; it's the harder act of catching fluent wrongness, which is the defining failure mode of AI-assisted work.</p>
<blockquote>
<h3>AI doesn't shrink the Judgement layer, it expands it.</h3>
</blockquote>
<p>There's now ten times more output flying around than there used to be, and the people who can decide what's worth producing and whether the result holds up are scarcer, not more plentiful.</p>
<p>The premium on Judgement is going up, not down, and it will keep rising as models become more fluent without becoming more accountable.</p>
<h3><strong>"Creativity"</strong> is the Top Layer</h3>
<p>The smallest of the three, the one most often misunderstood, and the one where AI is least dangerous and least useful at the same time.</p>
<p>Creativity here doesn't mean "being creative" in the art-class sense.</p>
<p>It means the move that reframes the problem:</p>
<ul>
<li><p>Noticing that the question everyone is answering is the wrong question.</p>
</li>
<li><p>Seeing the connection between two things that nobody put together.</p>
</li>
<li><p>Producing the option that wasn't on the list because nobody knew the list was incomplete.</p>
</li>
</ul>
<p>AI is a strong assistant to creativity. It can generate variations, surface adjacent ideas, break you out of a rut, give you ten bad options so the eleventh good one becomes visible.</p>
<p>But the originating spark, the <em>wait, what if we're solving the wrong problem entirely</em>, still comes from a human with context and stakes.</p>
<blockquote>
<h3>AI expands within the frame you give it</h3>
</blockquote>
<p>It does not hand you a new frame, and it does not know when the old frame has become useless.</p>
<p>The reframing is yours.</p>
<p>AI eats the bottom, the upper layers get more valuable, and you climb. This is the version of the story everyone is telling each other, and it's directionally correct, but it's also incomplete in a way that matters.</p>
<h2>The Part that isn't told</h2>
<p>AI makes Execution-layer work <em>feel</em> like Judgement-layer work.</p>
<p>The output is comprehensive, confident and well-structured. You read it, and your brain quietly files it under <em>I thought this</em>.</p>
<p>You didn't, you recognised it.</p>
<p>Those are two different operations, and only one of them is yours, but the interface gives you no way to tell them apart from the inside.</p>
<p>There's no indicator, no warning.</p>
<p>No little icon that lights up when you've stopped thinking and started just nodding along to fluent prose.</p>
<p>There's a name for what happens to people in this situation.</p>
<blockquote>
<h3>Metacognitive decoupling</h3>
</blockquote>
<p>Your confidence in your own work rises faster than your actual capability, and there is no internal alarm, because the outputs keep getting better.</p>
<p>The thing producing the outputs is improving; you are not. But because the only signal you can see is the output, you experience the improvement as your own.</p>
<p>The feeling of having climbed the stack is real; the climbing is not.</p>
<p>This isn't a hypothetical risk.</p>
<p>In 2025, researchers studied experienced gastroenterologists at four endoscopy centres in Poland after AI polyp-detection tools were introduced into their daily practice. The study tracked <strong>the doctors' ability to find polyps <em>without</em> the AI assistance</strong>.</p>
<p>After a few months of routine use, their unaided detection rate had dropped measurably, by around six percentage points compared with the pre-AI baseline.</p>
<p>These were experienced physicians. The tool adopted to make them better had quietly made them worse. Nobody noticed in real time, because the AI-assisted numbers looked great.</p>
<p>The decay was invisible until somebody specifically looked for it.</p>
<p>The same pattern shows up in legal work, software engineering, financial analysis, and strategic planning, wherever AI routinely handles the tasks through which humans used to develop and maintain their skills.</p>
<p>The domain changes, but the pattern doesn't.</p>
<p>People who use AI a lot get better at using AI and worse at the underlying judgment, and they almost always think it's the other way around.</p>
<p>Call it <em>Sleeping Driver mode</em>.</p>
<p>The car is moving. You're in it, but you're just not the one driving: you're monitoring, occasionally, when something feels off, and the rest of the time the automation is doing the thinking for you.</p>
<p>From the outside, it looks identical to driving. From the inside, when nothing has gone wrong yet, it feels identical too.</p>
<p>The difference only becomes visible the moment you're asked to take the wheel, and by then, it's information about something you can't undo.</p>
<h3>What naming it does and doesn't do</h3>
<p>Be suspicious of any blog post that names a problem and implies the naming is half the solution.</p>
<p>It isn't.</p>
<p>Knowing about <em><strong>Sleeping Driver mode</strong></em> is like knowing about confirmation bias. You can name it, you can explain it to someone else, and you'll still fall into it tomorrow.</p>
<p>Naming a habit doesn't change the habit.</p>
<p>The trap lives in the small choices you make when nobody's watching: whether you read the AI output carefully or just skim it, whether you check the statistic or trust it, whether you ask the model to argue against your own instinct or take its first answer and move on.</p>
<p>So I won't pretend reading this has fixed it. It hasn't.</p>
<p>What you have, if any of this landed, is a sharper question to carry into next week.</p>
<p>Of the AI-assisted hours you put in, how many were actually Judgement hours, and how many were Execution hours wearing Judgement's clothes? You won't know in the moment. You'll only know later, when you try to reconstruct what you were thinking — without the document in front of you.</p>
<p>If you can reconstruct it, you were driving.</p>
<p>If you can't, the car was.</p>
<p>For most people who use AI a lot, the honest answer is <em>some of both, and more of the second than I'd like</em>.</p>
<p>That's not a verdict, it's a baseline.</p>
<p>Once you can see it, you can do something about it, and that's a longer conversation than this post can hold. I'll come back to it in the next blog.</p>
<p>For now, the work is just noticing.</p>
<p>Over the next week, when you finish a piece of AI-assisted work and feel that small flush of satisfaction at how good it looks, pause for the length of a breath and ask yourself which layer you were actually on while you were producing it.</p>
<p>Not which layer you wish you were on, which one were you actually on?</p>
<p>The answer will be more honest than you expect, and more useful.</p>
<hr />
<p><em>From a book I'm writing on the five skills that separate people who use AI from people who think with it. If this landed, that's the gap it's about.</em></p>
<hr />
<p><strong>Reference</strong></p>
<p>Budzyń K, Romańczyk M, Kitala D, et al. <em>Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.</em> The Lancet Gastroenterology &amp; Hepatology, 2025;10(10):896–903. <a href="https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract">https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract</a></p>
]]></content:encoded></item><item><title><![CDATA[Shift Left and Continuous Testing: Stop Treating Quality as a Phase]]></title><description><![CDATA[If it hurts, do it more frequently, and bring the pain forward.
— Jez Humble & Dave Farley, Continuous Delivery (2010)

Shift left and continuous testing are two of the most misunderstood terms in sof]]></description><link>https://nohappypath.com/shift-left-and-continuous-testing-stop-treating-quality-as-a-phase</link><guid isPermaLink="true">https://nohappypath.com/shift-left-and-continuous-testing-stop-treating-quality-as-a-phase</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[Quality Engineering]]></category><category><![CDATA[Devops]]></category><category><![CDATA[shiftlefttesting]]></category><category><![CDATA[continuous testing]]></category><category><![CDATA[continuous deployment]]></category><category><![CDATA[SDLC]]></category><dc:creator><![CDATA[Rahul Srivastava]]></dc:creator><pubDate>Thu, 02 Apr 2026 15:30:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/31757e3c-463c-4653-a3d7-60e71be64f36.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p><strong>If it hurts, do it more frequently, and bring the pain forward.</strong></p>
<p>— Jez Humble &amp; Dave Farley, <em>Continuous Delivery</em> (2010)</p>
</blockquote>
<p>Shift left and continuous testing are two of the most misunderstood terms in software engineering. Not because they're complex, but because everyone thinks that they're already doing them.</p>
<p>They are not.</p>
<p>In the <a href="https://nohappypath.com/the-traditional-quality-model-is-costing-you-more-than-you-think">last</a> post on the traditional quality model, the math was clear. A defect caught in requirements costs $1, but the same defect in production costs $10,000 or more.</p>
<p>The traditional quality model will always carry these costs unless the development workflow ensures that defects are detected as early as possible.</p>
<p><strong>Shift Left Testing</strong> and <strong>Continuous Testing</strong> are the two disciplines that directly address what's broken in the traditional model.</p>
<p>But both are surrounded by misconceptions and poor implementations.</p>
<p>Let's clear those up and see what each discipline actually means in practice.</p>
<h2>Shift Left Is Not about Moving Testers Earlier</h2>
<img src="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/cd271301-4ba5-4db5-aa7c-58973bbd111c.png" alt="" style="display:block;margin:0 auto" />

<p>Most teams hear "shift left" and immediately think,</p>
<p>"Let's get QA involved in sprint planning. Have testers review stories earlier. Write test cases before code."</p>
<p>That's a start. But it's not shift left. That's just earlier QA.</p>
<p>Shift left is a fundamentally different relationship between quality and the development process. It's the recognition that quality decisions are made throughout the entire lifecycle, in requirements, in architecture, in design, in code review, and that the people making those decisions need quality thinking, not just quality checking at the end.</p>
<p>The difference matters enormously.</p>
<p>Checking quality at the end means someone inspects what has already been built and flags the problems. Shifting quality left means quality constraints shape what gets built in the first place.</p>
<p>One is reactive. The other is preventive. And prevention, as we established, is cheaper by orders of magnitude.</p>
<h2>What Shift Left Actually Looks Like</h2>
<p>I have implemented shift-left practices across five different industries — aviation, healthcare, government, fintech, and enterprise software.</p>
<p>The pattern is consistent across domains.</p>
<h3>It begins with the Requirements.</h3>
<p>Not with test cases, with the requirements themselves. Before a single line of code is written, the question is: how will we know this works?</p>
<p>Not "what tests will QA run," but "what does correct behaviour actually mean for this feature, in this context, for this user?"</p>
<p>This question sounds simple, but it is surprisingly hard to answer well. Most teams skip it and discover the ambiguity later, in production, at 10x the cost.</p>
<h3>It extends into architecture.</h3>
<p>Quality-relevant decisions like how failures are handled, how the system degrades under load, and how errors are surfaced to users are architectural decisions.</p>
<p>If QA, or quality thinking more broadly, is not in the room when those decisions are made, QA inherits the consequences without having influenced them.</p>
<details>
<summary>A Real Example Of What This Costs</summary>
<p>I was working with a payment team building a flight booking platform.</p><p>The system was integrated with an external payment gateway that uses 3D Secure authentication, a redirect flow where the customer is sent to their card issuer's page to verify the transaction before returning to complete the booking.</p><p>The architecture also had a seat-hold timer.</p><p>When a customer selected a seat, the inventory system held it for fifteen minutes while they completed payment.</p><p>Standard practice, and it was logical on paper.</p><p>The design review had happened without QA, and then they were brought in to test it after the payment integration was already complete.</p><p>During Testing, as always, we test not only the happy flow but the edge cases too, so we started asking all the relevant questions, but there was one which nobody had asked during architecture -</p><p><strong><em>What happens if the 3D Secure redirect takes longer than the hold window? On slow networks, on mobile, with certain card issuers that trigger a full challenge flow requiring OTP, the authentication could easily push past fifteen minutes.</em></strong></p><p>The answer was not good.</p><p>The inventory service would release the seat the moment the hold expired. It had no awareness of an in-progress payment. The payment gateway, completely decoupled, would continue processing the charge.</p><p>The customer would complete 3DS authentication, the payment would succeed, and the booking would fail because the seat was already gone.</p><p>No refund trigger. No retry on the hold. No meaningful error message to the customer. 
Just a successful charge and a seat that belonged to someone else.</p><p>A serious bug, and not only that.</p><p>Three services would now need to change.</p><p>The payment service, the inventory management layer, and the booking orchestration service all had to be redesigned to coordinate state across a flow none of them had been built to share.</p><p>The question "What happens if 3DS takes longer than the hold timer?" would have been a thirty-minute conversation, but it never happened until we found the issue during testing, six months after the architecture was locked.</p><p>By then, the cost of the answer had multiplied by an order of magnitude.</p>
</details>
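<p>The missing coordination in that story can be sketched in a few lines. This is a hedged illustration, not the team's actual fix: the <code>SeatHold</code> class, the <code>payment_in_progress</code> flag, and the fifteen-minute constant are all hypothetical, and the real remediation spanned three services rather than one object.</p>

```python
class SeatHold:
    """Sketch: a seat hold that is aware of an in-progress payment.

    Hypothetical illustration of the coordination the original design
    lacked; names and the hold window are illustrative only.
    """

    HOLD_SECONDS = 15 * 60  # the standard fifteen-minute hold window

    def __init__(self, created_at: float):
        self.created_at = created_at
        self.payment_in_progress = False  # set when the 3DS redirect begins

    def expired(self, now: float) -> bool:
        # The question nobody asked at design time, answered in code:
        # never release a seat while a payment is still being authenticated.
        if self.payment_in_progress:
            return False
        return now - self.created_at > self.HOLD_SECONDS
```

<p>The thirty-minute design conversation largely amounts to deciding which service owns that flag and which event flips it; without an answer, the inventory service releases seats blind.</p>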

<h3>It lives in code review.</h3>
<p>Unit testing, defensive coding, meaningful error handling, and observable instrumentation are quality practices that developers own. Not because QA handed them a checklist, but because the team has a shared definition of what "done" actually means.</p>
<p>Done doesn't mean code merged.</p>
<p>Done means observable, testable, and deployable with confidence.</p>
<h3>It changes who asks the hard questions.</h3>
<p>In a shift left environment, the question "what could go wrong here?" isn't only asked by QA during test execution. It's asked by developers writing the story, architects designing the solution, and product managers defining the acceptance criteria.</p>
<p>Quality thinking becomes distributed. The defect detection surface moves left, toward the source.</p>
<blockquote>
<p>Quality becomes everyone's responsibility</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/69a1e466-8306-4b68-8da2-a00c212e8a27.webp" alt="" style="display:block;margin:0 auto" />

<p>When people say "Quality is everyone's responsibility", this is where it gets uncomfortable: shift left is not only a QA initiative, it's an engineering culture change.</p>
<p>QA can advocate for it, QA can model it, but QA can't impose it.</p>
<blockquote>
<p>If leadership doesn't understand that quality is everyone's responsibility, shift left remains a slogan.</p>
</blockquote>
<p>It's clear now that shift left moves the defect detection surface earlier, but earlier is not enough on its own.</p>
<p>You also need faster, and that's where continuous testing comes in.</p>
<hr />
<h2>Continuous Testing Is Not Running More Tests</h2>
<img src="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/57a3fa0e-e979-401c-987b-8ccc77b51275.webp" alt="" style="display:block;margin:0 auto" />

<p>Here's the second common misunderstanding.</p>
<p>Teams adopt CI/CD pipelines and run their test suites automatically on every commit. They call this continuous testing.</p>
<p>It's not.</p>
<p>Running the same tests faster is automation.</p>
<p>Continuous testing is a different thing entirely; it's the integration of quality signals throughout the delivery pipeline in a way that makes the cost of a defect visible at the moment it's introduced.</p>
<p>The distinction is timing, again, but finer-grained.</p>
<p>In traditional testing, you find out about a defect when QA runs the test suite, usually at the end of a sprint, during a testing phase, or in UAT.</p>
<p>In automated regression, you find out faster, but still after the fact.</p>
<p>In continuous testing, you find out at the point of change: the commit, the build, the deployment, before the defect has had time to compound into something harder to unpick.</p>
<p>That 10,000x cost multiplier operates at every stage, not just between requirements and production. A bug caught at the commit level costs almost nothing. The same bug caught after it's merged into main and touched five other services costs significantly more, even if it's caught before production.</p>
<p>Continuous Testing compresses that timeline to the minimum.</p>
<h2>What Continuous Testing Actually Looks Like</h2>
<h3>It starts with fast feedback loops</h3>
<p>The test suite has to be fast enough that developers don't route around it.</p>
<p>A suite that takes forty minutes to run gets run at the end of the day, if at all.</p>
<p>According to DORA research, high-performing teams maintain test suites that provide feedback quickly enough to keep developers in flow. The moment that loop breaks, developers treat the pipeline as an obstacle rather than a quality gate. Speed is a quality property of the test suite itself. Most teams treat it as an afterthought.</p>
<blockquote>
<p>Speed is a quality property of the test suite itself</p>
</blockquote>
<h3>It requires the right distribution of tests - The Test Pyramid</h3>
<p>The test pyramid isn't just a concept; it's an engineering constraint.</p>
<p>Fast unit tests at the base. Component tests in the middle, using mocks to avoid external dependencies. Contract tests at the integration boundary, verifying that services honour the agreements they make with each other without requiring full end-to-end execution. Integration tests after that, validating real user flows. And at the top, a small number of critical end-to-end tests.</p>
<p>Too many slow, expensive end-to-end tests and your continuous testing pipeline crawls.</p>
<p>The distribution determines the speed, and the speed determines whether the pipeline is trusted.</p>

<blockquote>
<ul>
<li><p>Write tests with different granularity</p>
</li>
<li><p>The higher-level you get the fewer tests you should have</p>
</li>
</ul>
<p><em><strong>—</strong></em> Mike Cohn</p>
</blockquote>
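<p>The bottom two layers of the pyramid can be sketched concretely. A minimal illustration with hypothetical names (<code>apply_discount</code>, <code>FareService</code>, its gateway); in a real suite, the two kinds of test would be tagged separately so CI can run the cheap layer on every commit and the rest later.</p>

```python
from unittest.mock import Mock


def apply_discount(price: float, percent: float) -> float:
    """Pure logic: the kind of code the wide unit-test base covers."""
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_unit():
    # Unit layer: no I/O, runs in microseconds, safe to run on every commit.
    assert apply_discount(200.0, 10) == 180.0


class FareService:
    """Component under test; the gateway is injected so tests can mock it."""

    def __init__(self, gateway):
        self.gateway = gateway

    def quote(self, route: str) -> float:
        base = self.gateway.base_fare(route)  # a network call in production
        return apply_discount(base, 10)


def test_quote_component():
    # Component layer: real service logic, mocked external dependency.
    gateway = Mock()
    gateway.base_fare.return_value = 200.0
    assert FareService(gateway).quote("DEL-BOM") == 180.0


test_apply_discount_unit()
test_quote_component()
```

<p>Both tests run without any deployed environment, which is exactly what keeps the base of the pyramid fast enough to trust.</p>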
<h3>It means testing at multiple levels simultaneously</h3>
<p>Not sequentially.</p>
<p>Unit tests run at commit. Integration tests run at build. Performance baselines run at merge. Security scans run alongside. Production monitoring runs continuously. Each layer catches different failure modes at the appropriate cost.</p>
<h3>It requires production to be part of the quality signal</h3>
<p>This is where most teams stop short.</p>
<p>They treat production as outside the quality boundary, something that happens after QA signs off. But production is where real users encounter real failure modes under real conditions.</p>
<p>Feature flags, canary deployments, observability pipelines, anomaly detection: these are continuous testing practices. They extend quality visibility beyond the testing environment into the place where quality actually matters.</p>
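<p>A canary rollout can be as small as a deterministic bucketing function. A minimal sketch, assuming user IDs are stable strings; the function name and feature key are hypothetical, and production systems typically use a flag service rather than hand-rolled hashing.</p>

```python
import hashlib


def in_canary(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Deterministically place a user in a 0-99 bucket for this feature.

    The same (feature, user) pair always lands in the same bucket, so the
    canary cohort stays stable while its error rates and latencies are
    compared against the control group.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) % 100 < rollout_percent


# Route 5% of users to the new code path; production metrics, tagged by
# cohort, become the quality signal that decides whether to widen the rollout.
serve_new_flow = in_canary("user-1042", "new-checkout-flow", 5)
```

<p>Determinism is the design choice that matters: a random coin flip per request would smear the cohorts and make the before/after comparison meaningless.</p>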
<h3>It changes how you think about test ownership</h3>
<p>In a continuous testing environment, tests aren't only owned by QA.</p>
<p>They're also owned by the team that owns the functionality. QA sets the standards, defines the coverage criteria, and validates the approach. But the developer who writes the feature writes the unit tests. The team that owns the service, along with QA, maintains its integration tests.</p>
<p>QA then focuses on what requires QA-level expertise: exploratory testing, failure mode analysis, quality strategy, and the hard edge cases that automation can't anticipate.</p>
<h2>You Cannot Test What You Cannot See</h2>
<p>This is the insight most shift-left and continuous testing articles miss entirely, and it's the one that matters most.</p>
<p>Before a team invests in expanding test coverage, they need to be able to see their system behaving in production.</p>
<p>Logs that tell a story.</p>
<p>Metrics that surface anomalies before users notice them.</p>
<p>Traces that reveal what actually happened when something went wrong, not just that it went wrong.</p>
<blockquote>
<p>Instrumentation has to come before automation, not after.</p>
</blockquote>
<p>Automation on top of a system you cannot observe just produces faster failures that are harder to diagnose. I've seen teams with 90% test coverage ship defects that took weeks to root-cause in production, not because their tests were wrong, but because they had no visibility into what the system was actually doing once it left the test environment.</p>
<p>The test suite tells you what the system does in the conditions you anticipated. Instrumentation tells you what it does in the conditions you didn't.</p>
<p>Both are necessary. The order matters.</p>
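<p>"Logs that tell a story" means structured, correlated events rather than free-text lines. A minimal sketch, with illustrative field names and event names; real systems would use a logging library's JSON formatter and propagate the correlation id automatically.</p>

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("payments")


def log_event(event: str, **fields) -> str:
    """Emit one JSON object per event so logs can be queried, not just grepped."""
    record = {"event": event, "ts": round(time.time(), 3), **fields}
    line = json.dumps(record)
    log.info(line)
    return line


# One booking's story, reconstructable later by filtering on booking_id:
log_event("payment.started", booking_id="BK-1042", amount=129.99, gateway="3ds")
log_event("payment.redirected", booking_id="BK-1042", issuer_challenge=True)
log_event("payment.failed", booking_id="BK-1042", reason="seat_hold_expired")
```

<p>With every line carrying the same <code>booking_id</code>, the 3DS-versus-seat-hold failure described earlier becomes a query instead of a multi-week investigation.</p>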
<h2>Where Most Teams Get It Wrong</h2>
<p>There's a failure mode I've seen consistently across organisations that attempt shift left and continuous testing without understanding what they're actually changing.</p>
<p>They treat it as a tooling problem.</p>
<p>They buy a CI/CD platform, adopt a test automation framework, mandate a shift left policy, and wonder why nothing changes. The bugs still pile up. Releases still slip. QA still becomes the bottleneck.</p>
<p>The tools are not the problem. The mental model is.</p>
<blockquote>
<p>Shift left and continuous testing are not automation initiatives. They're quality philosophy changes that have technical implementations.</p>
</blockquote>
<p>The philosophy has to come first.</p>
<p>What does quality mean for this system?</p>
<p>Who is responsible for it?</p>
<p>At what point in the workflow is a quality decision being made?</p>
<p>Where is the defect detection surface today, and what would it cost to move it left?</p>
<p>Until a team can answer those questions, adding more tools makes the situation more complicated without making it better.</p>
<h2>The Pattern That Actually Works</h2>
<p>Across fifteen years and multiple industries, the pattern that produces results looks the same everywhere.</p>
<p>It starts with a definition.</p>
<p>What does quality mean for this specific system, for these specific users, in this specific context?</p>
<p>That definition doesn't come from a template. It comes from engineering leadership understanding the product well enough to articulate what failure actually costs.</p>
<p>It continues with shared ownership. Not "QA is responsible for quality"; that sentence is a trap.</p>
<blockquote>
<p>Quality is a property of how the team works, not a function that gets delegated.</p>
</blockquote>
<p>QA brings expertise. Everyone brings ownership.</p>
<p>It requires instrumentation before automation.</p>
<p>See the system first and then test it.</p>
<p>And it demands patience.</p>
<p>Shift left and continuous testing don't produce results in a sprint. The defect curve takes time to change. The cost curve takes time to visibly improve. Teams that abandon the approach after three months because they can't see the numbers move are making the same mistake as organisations that treat quality as a phase, optimising for the short term, paying for it in the long run.</p>
<p>The traditional model doesn't fail loudly. It fails slowly, invisibly, until the bill arrives all at once.</p>
<hr />
<h2>References &amp; Further Reading</h2>
<ul>
<li><p>Continuous Delivery — Jez Humble &amp; Dave Farley — <a href="https://continuousdelivery.com">https://continuousdelivery.com</a></p>
</li>
<li><p>Agile Testing: A Practical Guide for Testers and Agile Teams — Lisa Crispin &amp; Janet Gregory — <a href="https://agiletester.ca">https://agiletester.ca</a></p>
</li>
<li><p>Continuous Testing in DevOps — Dan Ashby — <a href="https://danashby.co.uk/2016/10/19/continuous-testing-in-devops">https://danashby.co.uk/2016/10/19/continuous-testing-in-devops</a></p>
</li>
<li><p>The Test Pyramid — Martin Fowler — <a href="https://martinfowler.com/bliki/TestPyramid.html">https://martinfowler.com/bliki/TestPyramid.html</a></p>
</li>
<li><p>Shift Left Testing — IBM Developer — <a href="https://developer.ibm.com/articles/shift-left-and-shift-right-testing-strategies">https://developer.ibm.com/articles/shift-left-and-shift-right-testing-strategies</a></p>
</li>
<li><p>State of DevOps Report — DORA Research Program — <a href="https://dora.dev/research">https://dora.dev/research</a></p>
</li>
<li><p>Accelerate: The Science of Lean Software and DevOps — Nicole Forsgren, Jez Humble, Gene Kim — <a href="https://itrevolution.com/product/accelerate">https://itrevolution.com/product/accelerate</a></p>
</li>
<li><p>Testing in Production: the safe way — Cindy Sridharan — <a href="https://medium.com/@copyconstruct/testing-in-production-the-safe-way-18ca102d0ef1">https://medium.com/@copyconstruct/testing-in-production-the-safe-way-18ca102d0ef1</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Traditional Quality Model Is Costing You More Than You Think]]></title><description><![CDATA[Every team I've worked with underestimates the cost of finding bugs late. Not by a little, by orders of magnitude. The math is brutal, the pattern is predictable, and yet organisations keep repeating ]]></description><link>https://nohappypath.com/the-traditional-quality-model-is-costing-you-more-than-you-think</link><guid isPermaLink="true">https://nohappypath.com/the-traditional-quality-model-is-costing-you-more-than-you-think</guid><category><![CDATA[Software Testing]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[SDLC]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[Quality Engineering]]></category><dc:creator><![CDATA[Rahul Srivastava]]></dc:creator><pubDate>Thu, 26 Mar 2026 20:32:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/30136b26-1753-4e45-a00a-6730d9eb34ba.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every team I've worked with underestimates the cost of finding bugs late. Not by a little, by orders of magnitude. The math is brutal, the pattern is predictable, and yet organisations keep repeating the same mistake: treating quality as a phase instead of a discipline.</p>
<p>Here's why that decision costs far more than anyone budgets for.</p>
<h2>The Numbers Don't Lie</h2>
<blockquote>
<p>Most defects end up costing more than they would have cost to prevent them. Defects are expensive when they occur, both the direct costs of fixing the defects and the indirect costs because of damaged relationships, lost business and lost development time.</p>
<p><em><strong>— Kent Beck, Extreme Programming Explained</strong></em></p>
</blockquote>
<p>This isn't abstract philosophy. The numbers are real.</p>
<p>To understand why the costs increase in this manner, consider what happens to a single requirements error depending on when it gets found:</p>
<ul>
<li><p>If you make a requirements error and find it during the requirements phase, it is inexpensive to fix. You merely change a portion of your requirements model. A change of this scope is on the order of $1.</p>
</li>
<li><p>If you do not find it until the design stage, it is more expensive to fix. Not only do you have to change your analysis, but you also have to reevaluate and potentially modify the sections of your design that were built on the faulty analysis. This change is on the order of $10.</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/da022e1b-f124-4d05-8f87-8771d9e5dbde.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p>If you do not find the problem until programming, you need to update your analysis, design, and potentially scrap portions of your code, all because of a missed or misunderstood user requirement. This error is on the order of $100, because of all the wasted development time based on the faulty requirement.</p>
</li>
<li><p>Furthermore, if you find the error during the traditional testing stage, it is on the order of $1,000 to fix (you need to update your documentation and scrap/rewrite large portions of code).</p>
</li>
<li><p>Finally, if the error gets past you into production, you are looking at a repair cost on the order of $10,000+ to fix (you need to ship updated code, fix the database, restore old data, etc.).</p>
</li>
</ul>
<p>That's a <strong>10,000x cost multiplier</strong> from requirements to production. The same defect. Just found it later.</p>
<p>This isn't a testing problem. It's a timing problem.</p>
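<p>The escalation above is just a geometric progression, and a tiny sketch makes the compounding explicit. The $1 base and 10x-per-phase factor are the illustrative figures from the list above, not measured data:</p>

```python
# Illustrative cost model: each phase a defect survives multiplies
# its fix cost by roughly 10x. The $1 base and 10x factor are the
# hypothetical order-of-magnitude figures used in this post.
PHASES = ["requirements", "design", "coding", "testing", "production"]

def fix_cost(found_in: str, base_cost: float = 1.0) -> float:
    """Approximate cost to fix a requirements-phase defect
    that is not discovered until the `found_in` phase."""
    return base_cost * 10 ** PHASES.index(found_in)

for phase in PHASES:
    print(f"{phase:>12}: ${fix_cost(phase):,.0f}")
```

<p>Run it and the 10,000x spread between the first and last phase falls straight out of the model; nothing about the defect changed, only when it was found.</p>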
<h2>The Traditional Quality Model — And Why It Fails</h2>
<p>Most organisations I have seen operate with a version of the same model, and it looks something like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/69c55e5f10e664c5daf8a077/0cf6c995-781c-44c8-a1fa-5e853665f8ca.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p>Testing is a phase at the end of the development cycle.</p>
</li>
<li><p>QA members are rarely involved in early stages like planning and design.</p>
</li>
<li><p>Many architectural, requirements, and design flaws are not discovered and corrected until significant effort has already been wasted on their implementation.</p>
</li>
<li><p>Testing time gets squeezed and usually becomes a bottleneck, leaving less time to fix defects.</p>
</li>
<li><p>Last-minute fixes carry a high risk of "breaking" existing functionality, jeopardising the release date.</p>
</li>
</ul>
<p>Sound familiar? It should. This is the default in most organisations, and the consequences are entirely predictable every time. Bugs pile up, releases slip, teams burn out, and everyone points fingers at QA for "not catching it."</p>
<p>The uncomfortable truth is that QA can't catch what it was never involved in building. When testing is a phase at the end of a cycle, it inherits every bad decision made before it arrived.</p>
<p>Once you see these numbers clearly, the traditional model doesn't just look inefficient — it looks indefensible. That's what drives serious engineering organisations to rethink quality from the ground up.</p>
<hr />
<h2>The Questions This Forces Us to Ask</h2>
<p>Once you accept the compounding cost of late defect detection, the right questions become obvious:</p>
<ul>
<li><p>How do we detect defects earlier — or better yet, prevent them entirely?</p>
</li>
<li><p>How do we bring QA and testing activities into the earlier stages of the SDLC?</p>
</li>
<li><p>How do we reduce the total cost of development and testing?</p>
</li>
<li><p>How do we make the whole team — not just QA — responsible for quality?</p>
</li>
</ul>
<p>These aren't QA questions. They are engineering leadership questions.</p>
<p>And the answers require more than better test coverage; they require a fundamentally different approach to how quality is embedded into the workflow.</p>
<p>So how do we actually achieve that?</p>
<p>Essentially, the development workflow should be designed so that defects are detected as early as possible, through processes that enable the team to detect early and detect often.</p>
<hr />
<blockquote>
<h3><strong>Detect Early, Detect Often</strong></h3>
</blockquote>
<p>In essence, processes and conventions should be designed around moving defect detection as early in the workflow and as close to the developer’s coding environment as possible.</p>
<p>This way, the same compounding effects that inflate the cost of late defect detection instead work in favour of software quality and resilience.</p>
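<p>As one concrete (hypothetical) example of moving detection into the developer's coding environment, a Git pre-commit hook can run fast checks before a commit is even created. The specific tools here (ruff for static analysis, pytest for unit tests) are assumptions, not prescriptions; substitute whatever your stack uses:</p>

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit: fail the commit if fast checks fail.
# Assumes ruff and pytest are installed; a "slow" marker keeps long-running
# suites out of the inner loop (they still run in CI).
set -e
ruff check .                # static analysis: catches defects before any test runs
pytest -q -m "not slow"     # fast unit tests only; a non-zero exit blocks the commit
```

<p>The point isn't the tooling, it's the placement: the same check that would fail in a CI pipeline hours later fails on the developer's machine in seconds, at the cheap end of the cost curve.</p>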
<p>Knowing the problem is only half the equation. The practical answer lies in two disciplines that directly address everything broken about the traditional model: <strong>Shift Left Testing</strong> and <strong>Continuous Testing</strong>.</p>
<blockquote>
<h3>Shift Left And Continuous Testing</h3>
</blockquote>
<p>In the next post, I'll break down exactly what both mean in practice: not the buzzword definitions, but how to actually implement them in a real team environment, with real constraints and real delivery pressure.</p>
<p>Because understanding why the traditional model fails is step one. Knowing what to replace it with is where the real work begins.</p>
<hr />
<p><strong>References &amp; Further Reading</strong></p>
<ul>
<li><p><a href="https://www.researchgate.net/publication/280937479_Cost_Effective_Software_Test_Metrics">Cost Effective Software Test Metrics — ResearchGate</a></p>
</li>
<li><p><a href="https://deepsource.io/blog/exponential-cost-of-fixing-bugs/">Exponential Cost of Fixing Bugs — DeepSource</a></p>
</li>
<li><p><a href="https://www.launchableinc.com/customers/a-silicon-valley-icon-reduces-slow-delivery-cycles-by-testing-faster-and-improves-developer-happiness-case-study">Launchable — Reducing Slow Delivery Cycles Case Study</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>