In a field often dominated by rigid frameworks, our SaaS and Software expert, Vijay Raina, challenges the status quo. A specialist in enterprise SaaS technology with a deep focus on software design and architecture, Vijay proposes a fundamental shift in how we think about quality assurance. His “Testing Vial” concept moves the conversation away from counting test types and toward a more holistic, business-centric approach to building robust and maintainable software.
Today, we sit down with Vijay to explore this innovative methodology. We’ll discuss how focusing on business entities can protect teams from the turmoil of major refactoring and how architectural tests act as a crucial guardrail for system design. He’ll also share practical insights into using test categories to visualize a feature’s impact and explain how a seemingly simple test can expose deep-seated design flaws, ultimately leading to a more resilient and understandable codebase.
The article criticizes the Testing Pyramid’s tight coupling to implementation details. How does the Testing Vial’s focus on business entities, rather than specific test counts, practically help a team avoid massive test rewrites during a major refactor, such as replacing a SQL database with MongoDB?
That’s the core of the problem the Vial aims to solve. I’ve lived through those refactors, and it’s a painful experience. You feel like you’re making a positive architectural change, but you spend 80% of your time fighting a sea of red unit tests that are no longer relevant. With the Pyramid, your tests are often verifying that a UserRepository’s save method calls the right ORM function. When you swap SQL for MongoDB, that method and its underlying logic are completely thrown away, and so are the tests. The Vial flips the script. Instead of testing the how, we test the what. A test categorized under “User” would validate the business process: “When a new user registers, their profile information must be retrievable.” That test doesn’t care if you’re using SQL or MongoDB; it only cares that the user’s data is persisted and can be fetched. So, when you perform that massive refactor, that higher-level integration test—your safety net—should still pass with minimal changes. You gain the confidence to evolve your technology stack without being chained to a brittle test suite.
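To make that concrete, here is a minimal sketch in Java with JUnit 5. The UserService facade, the UserProfile record, and the in-memory stub are illustrative stand-ins, not code from any real project; the point is that the test pins down the business contract and never touches the persistence layer:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("User") // categorized by business entity, not by technology layer
class UserRegistrationTest {

    // Hypothetical business-facing contract: the test depends only on this
    // interface, never on the repository or ORM behind it.
    interface UserService {
        String register(String email, String displayName);
        UserProfile findProfile(String userId);
    }

    record UserProfile(String email, String displayName) {}

    // In-memory stand-in for whatever store backs the service.
    static UserService inMemoryUserService() {
        Map<String, UserProfile> store = new HashMap<>();
        return new UserService() {
            public String register(String email, String displayName) {
                String id = UUID.randomUUID().toString();
                store.put(id, new UserProfile(email, displayName));
                return id;
            }
            public UserProfile findProfile(String userId) {
                return store.get(userId);
            }
        };
    }

    @Test
    void registeredUserProfileIsRetrievable() {
        UserService users = inMemoryUserService();

        String id = users.register("ada@example.com", "Ada Lovelace");

        assertEquals("Ada Lovelace", users.findProfile(id).displayName());
        assertEquals("ada@example.com", users.findProfile(id).email());
    }
}
```

In a real suite, the stub would be replaced by the production service wired to SQL today and MongoDB tomorrow; the test body would not change at all.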
You introduce Architectural Tests as the “cap for the vial.” Could you walk me through a real-world scenario where an ArchUnit test you wrote caught an unintended dependency between modules? Please describe the violation it found and the steps your team took to resolve it.
Absolutely. On one project, we had a modular monolith with clearly defined boundaries: a “Products” module and a “Pricing” module. The rule was simple: “Pricing” could depend on “Products” to get product information, but “Products” should know nothing about “Pricing.” This keeps the concerns separate. One day, a developer was working on a feature in the “Products” module and needed to display a price. To take a shortcut, they added a direct dependency from a class in the “Products” module to a service in the “Pricing” module. The code worked, and the local unit tests passed. However, our build pipeline failed immediately. The ArchUnit test, which we had written during the initial design phase, caught it cold. The test simply stated: “Classes in the ‘Products’ namespace must not have dependencies on classes in the ‘Pricing’ namespace.” The error message was crystal clear. It wasn’t a cryptic null pointer; it was a direct violation of our agreed-upon architecture. The resolution was straightforward: the team reverted the change and implemented the feature correctly by having the user interface call both modules and combine the data, preserving the architectural integrity. That little test saved us from introducing a subtle coupling that would have caused major headaches down the road.
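The rule itself is only a few lines with ArchUnit’s JUnit 5 integration. A sketch with illustrative package names:

```java
import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

@AnalyzeClasses(packages = "com.example.shop")
class ModuleBoundaryTest {

    // Fails the build the moment anything under ..products.. reaches
    // into ..pricing.., no matter how indirect the shortcut.
    @ArchTest
    static final ArchRule products_must_not_depend_on_pricing =
            noClasses().that().resideInAPackage("..products..")
                    .should().dependOnClassesThat().resideInAPackage("..pricing..")
                    .because("Pricing may depend on Products, never the reverse");
}
```

When the rule trips, ArchUnit reports the specific classes and the offending dependency, which is why the failure read as a design violation rather than a cryptic stack trace.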
The article highlights using test categories like “Cart” or “User” with code coverage tools. Could you detail the process of using this combination to analyze the impact of a change to the “Product” entity and how this insight helps you better plan your work?
This is where the strategy becomes a powerful analytical tool. Imagine you need to add a new attribute to the “Product” entity, say, “SupplierID.” The immediate work seems contained. But is it? Using the Vial’s categories, the first thing we’d do is run all tests tagged with the “Product” category and generate a code coverage report. This report acts like a heat map, visually showing us every single line of code across the entire application that was executed by our product-related tests. We might see the expected coverage in the Product service, but we might also see highlighted code paths in the “Cart” module (for price recalculations), the “User” module (for purchase history), and even the “Shipping” module (for supplier-specific logistics). Suddenly, the “small change” has a visible blast radius. This insight is gold for planning. We can now accurately estimate the testing effort, identify which other teams need to be notified, and proactively look for potential regressions in seemingly unrelated areas. It transforms impact analysis from guesswork into a data-driven process.
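Mechanically, this hangs on tagging each test with every entity it exercises. A minimal, illustrative JUnit 5 sketch (the Product and Cart types here are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.math.BigDecimal;
import java.util.List;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Both tags, because the behavior under test spans both entities: a
// category-filtered "Product" run will include this test, and its
// coverage will light up code paths in the Cart module.
@Tag("Product")
@Tag("Cart")
class CartRepricingTest {

    record Product(String id, BigDecimal price) {}

    record Cart(List<Product> items) {
        BigDecimal total() {
            return items.stream()
                        .map(Product::price)
                        .reduce(BigDecimal.ZERO, BigDecimal::add);
        }
    }

    @Test
    void cartTotalReflectsCurrentProductPrices() {
        Cart cart = new Cart(List.of(
                new Product("p-1", new BigDecimal("10.00")),
                new Product("p-2", new BigDecimal("5.50"))));

        assertEquals(new BigDecimal("15.50"), cart.total());
    }
}
```

With Maven Surefire, for example, running mvn test -Dgroups=Product selects only the “Product”-tagged tests; run that under a coverage tool such as JaCoCo and the resulting report is the heat map described above.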
You mention that a single test with too many categories is a “sign of a poor design.” Can you share an anecdote where discovering a test tagged with “User,” “Cart,” and “Product” led your team to refactor a component? What was the problem and the ultimate solution?
I remember this one vividly. We had a test for a checkout process that was tagged with “User,” “Cart,” “Product,” and “Payment.” The test name was something like Test_UserCheckout_WithValidItem_AndSufficientFunds_Succeeds. When we looked at the test categories in our test runner, this one test lit up like a Christmas tree across multiple business domains. It was a huge red flag. We dug into the code it was testing and found a single monolithic method called ProcessOrder(). This one method was doing everything: validating the user’s session, checking product stock levels against inventory, calculating the cart total, and then calling the payment gateway. It was a classic example of a “God object” that violated the Single Responsibility Principle. The test, with its many tags, was screaming this at us. The solution was a significant but necessary refactoring effort. We broke that massive ProcessOrder method down into a coordinating service that delegated work to smaller, focused components: a StockValidator, a CartCalculator, and a PaymentProcessor. Each new component had its own set of tightly scoped tests with only one or two categories. The result was a system that was far easier to understand, maintain, and test independently.
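The post-refactor shape looked roughly like the sketch below. The three component names come from that refactoring; their interfaces are simplified here for illustration:

```java
import java.math.BigDecimal;

// Each focused component owns one business concern and gets its own
// narrowly tagged tests ("Product", "Cart", or "Payment").
interface StockValidator { void ensureInStock(String productId, int quantity); }
interface CartCalculator { BigDecimal total(String cartId); }
interface PaymentProcessor { void charge(String userId, BigDecimal amount); }

// The coordinating service only orchestrates; no business rule lives
// here, so its own test needs little more than a happy-path check.
final class CheckoutService {
    private final StockValidator stock;
    private final CartCalculator cart;
    private final PaymentProcessor payments;

    CheckoutService(StockValidator stock, CartCalculator cart, PaymentProcessor payments) {
        this.stock = stock;
        this.cart = cart;
        this.payments = payments;
    }

    void processOrder(String userId, String cartId, String productId, int quantity) {
        stock.ensureInStock(productId, quantity); // the old stock check
        BigDecimal amount = cart.total(cartId);   // the old total calculation
        payments.charge(userId, amount);          // the old gateway call
    }
}
```

CheckoutService itself carries a thin orchestration test, while the StockValidator tests take the “Product” tag, the CartCalculator tests take “Cart,” and the PaymentProcessor tests take “Payment.”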
The Testing Diamond struggles with slow, complex integration tests. Since the Vial also embraces them, what specific strategies or tools, such as TestContainers or ephemeral environments, have you found most effective for managing test data and speeding up feedback cycles on a large-scale project?
This is a critical challenge, and you can’t just embrace integration tests without a strategy to manage them. For component-level integration, TestContainers has been a game-changer. On a recent project, every developer running tests locally would get a fresh, isolated Docker container for our PostgreSQL database. This completely eliminated the “it works on my machine” problem and killed issues with dirty data from previous test runs. The tests became 100% repeatable and reliable. For larger, system-wide validation involving multiple microservices, we’ve had great success with ephemeral environments. Our CI/CD pipeline is configured to spin up a complete, lightweight instance of our entire application stack for every single pull request. This allows us to run a small, critical set of end-to-end tests against a realistic environment before any code is merged. The feedback is incredibly fast—we know within minutes if a change has caused a cross-service regression. This approach combines the realism of a staging environment with the speed and isolation needed for a fast-flowing development process.
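For readers who have not used it, the database setup is remarkably small. A minimal sketch, assuming the Testcontainers JUnit 5 extension, the PostgreSQL JDBC driver on the classpath, and a local Docker daemon:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class PostgresIsolationTest {

    // A fresh, throwaway PostgreSQL instance per test class: no shared
    // state, no dirty data left over from a previous run.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:16-alpine");

    @Test
    void freshDatabaseIsReachable() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
            assertTrue(rs.next());
        }
    }
}
```

Because the container’s lifecycle is owned by the test framework, every run starts from a clean database and tears it down afterward, which is exactly what makes the suite repeatable.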
What is your forecast for the future of software testing, especially as systems become more distributed and complex?
I believe the future lies in moving further away from implementation-specific testing and doubling down on behavior-driven validation. As we build more complex systems with microservices, serverless functions, and third-party APIs, the idea of testing a single “unit” in isolation becomes less and less meaningful. The real business risk isn’t in a single function’s logic; it’s in the interactions and contracts between these distributed components. Therefore, I foresee a greater emphasis on tools and strategies that validate business workflows across service boundaries. Concepts like the Testing Vial, which categorize tests by business domain, and practices like consumer-driven contract testing will become standard. We’ll see more sophisticated use of ephemeral environments and service virtualization to make this kind of high-level testing faster and more accessible to every developer, shifting the focus from “did I write the code right?” to “did I build the right thing, and does it work with everything else?”
