Vijay Raina is a seasoned expert in enterprise SaaS technology and a prominent thought leader in software design and architecture. With an extensive background in building scalable systems, he has witnessed the industry’s evolution from the rigid, monolithic structures of the past to the fluid, high-velocity environments of modern cloud-native development. His insights bridge the gap between traditional engineering discipline and the contemporary need for developer joy and system adaptability.
This conversation explores the transition from the long-standing SOLID principles to the more holistic CUPID properties. We delve into how composability and predictability replace complex dependency hierarchies, the importance of writing idiomatic code that feels native to its language, and the necessity of aligning technical nomenclature with business reality. By focusing on the developer experience and system behavior, these themes offer a roadmap for creating software that is as easy to maintain as it is to build.
Traditional design principles like SOLID emerged during the era of monolithic desktop software and strict class hierarchies. How does the shift toward microservices and serverless change our design priorities, and what specific “over-engineering” traps should modern teams avoid when managing complex dependencies?
The landscape of software development has shifted dramatically since the late 1990s and early 2000s, when the principles behind SOLID were codified as the gold standard for C++ and Java class hierarchies. In today’s world of microservices and serverless functions, strict adherence to those five principles can inadvertently lead to a dense, impenetrable thicket of code. One of the most common over-engineering traps I see is the explosion of interfaces created for single-method classes, which significantly increases the cognitive load for any developer trying to navigate the system. While SOLID focuses on the internal mechanics of how to build an engine, modern priorities should shift toward how the car actually drives, emphasizing properties that are observable rather than just rules that are enforced. By prioritizing developer experience and system behavior, teams can avoid the technical debt that comes from applying rigid, micro-level rules to a macro-level ecosystem.
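As a minimal sketch of that interface-explosion trap, consider a hypothetical discount rule written both ways (all names here, like DiscountCalculator and apply_ten_percent_discount, are illustrative, not from any real codebase):

```python
from abc import ABC, abstractmethod

# The over-engineered shape: a one-method interface, a concrete class,
# and an injection point -- three artifacts for one line of logic.
class DiscountCalculator(ABC):
    @abstractmethod
    def calculate(self, price: float) -> float: ...

class TenPercentDiscountCalculator(DiscountCalculator):
    def calculate(self, price: float) -> float:
        return price * 0.9

class CheckoutService:
    def __init__(self, calculator: DiscountCalculator):
        self._calculator = calculator

    def total(self, price: float) -> float:
        return self._calculator.calculate(price)

# The observable behavior is identical to a plain function,
# which carries far less cognitive load.
def apply_ten_percent_discount(price: float) -> float:
    return price * 0.9

print(CheckoutService(TenPercentDiscountCalculator()).total(100.0))  # 90.0
print(apply_ten_percent_discount(100.0))  # 90.0
```

Both versions drive the same; only the second one lets a new developer see the whole rule in one glance.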
Favoring “Data-In, Data-Out” structures over complex dependency injection can reduce lifecycle coupling between components. How do you implement these Unix-inspired, composable modules in practice, and what are the trade-offs when using standard data structures like JSON instead of proprietary custom objects?
Implementing composability requires us to move away from components that are deeply aware of each other’s internals and toward a model where the “glue” between them is as thin as possible. In practice, this means focusing on functions that transform data rather than objects calling methods on other objects; for instance, a simple discount logic function should just take a price and return a new one without needing a complex service hierarchy. Using standard data structures like JSON, Lists, or Dictionaries—the modern equivalent of the 1970s Unix text stream—allows modules to act as pipes in a larger pipeline. The primary trade-off is moving away from the safety of proprietary, deeply nested custom objects, but the reward is that testing becomes trivial because you no longer need complex mocking frameworks to verify your results. This “Unix-inspired” approach ensures that each module does one thing well and remains small enough to be easily replaced or rearranged as the system evolves.
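A small sketch of this "thin glue" style, using plain dictionaries as the pipe between steps (the order shape and function names are hypothetical):

```python
# Each step is a pure data-in, data-out transformation over a plain dict.
# The "glue" is just function composition -- no service hierarchy,
# no lifecycle coupling, no mocking framework needed to test it.
def apply_discount(order: dict, rate: float) -> dict:
    return {**order, "total": round(order["total"] * (1 - rate), 2)}

def add_tax(order: dict, tax_rate: float) -> dict:
    return {**order, "total": round(order["total"] * (1 + tax_rate), 2)}

order = {"id": "A-1001", "total": 200.00}

# Modules act as pipes: the output of one is the input of the next.
priced = add_tax(apply_discount(order, rate=0.10), tax_rate=0.05)
print(priced["total"])  # 189.0
```

Because each function only reads and returns standard data structures, a test is just an input dict and an assertion on the output dict.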
Predictability relies on functions performing exactly what their names imply without hidden side effects or global states. What strategies do you use to maintain this “least astonishment” principle in large codebases, and how does this approach improve a system’s observability when errors occur?
Maintaining the “least astonishment” principle starts with the absolute commitment to ensuring that a function’s name is an honest contract of its behavior. If a developer calls a function named get_user_balance(), that function must never secretly trigger a refresh of a session token or update a database in the background. My strategy involves making code deterministic, so that given the same input, it consistently produces the same output without altering any hidden global variables. This directly impacts observability because when a failure occurs, a predictable system will generate an error message that tells you exactly why and where the problem originated. Instead of cryptic messages caused by secondary side effects, you get clear, actionable data that makes debugging a straightforward process rather than a mystery.
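A sketch of what that honest contract looks like in practice, assuming a simple in-memory account mapping (the data shape is illustrative):

```python
# A predictable read: same input, same output, no hidden writes.
# The lookup never refreshes a token or touches a database in the background.
def get_user_balance(accounts: dict, user_id: str) -> float:
    try:
        return accounts[user_id]
    except KeyError:
        # Fail loudly with actionable data instead of a cryptic secondary error.
        raise LookupError(f"No account found for user {user_id!r}") from None

accounts = {"u-42": 150.0}
print(get_user_balance(accounts, "u-42"))  # 150.0
```

When the function fails, the error names the exact input that caused it, which is precisely the observability payoff described above.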
Writing code that ignores a language’s specific idioms can create significant friction for the rest of the team. How do you balance following these “native vibes” with maintaining consistent patterns across a multi-language organization, and where does AI provide the most value in this process?
Every programming language has a unique “vibe,” such as being “Pythonic” or following “The Go Way,” and fighting against these idioms creates unnecessary friction for every developer who touches the code. For example, trying to force heavy Java-style class hierarchies into a Python project results in verbose code that is harder to read than a simple, idiomatic list comprehension. In a multi-language organization, I encourage teams to embrace these native patterns rather than forcing a single architectural style on every stack, as it respects the expertise of the developers using those tools. This is where Artificial Intelligence and LLMs provide immense value, as they are exceptionally good at taking a logic block and rewriting it to be idiomatic in a different language like Kotlin or JavaScript. While AI can ensure the syntax feels native and predictable, it still requires the human architect to ensure the overall logic fits the specific business context.
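To make the contrast concrete, here is a hypothetical example of the same filtering logic written with a Java-style class and then as an idiomatic comprehension:

```python
# Java-flavored Python: a collector class where none is needed.
class ActiveUserNameCollector:
    def __init__(self, users):
        self._users = users

    def collect(self):
        result = []
        for user in self._users:
            if user["active"]:
                result.append(user["name"].title())
        return result

# Pythonic: the same logic as one readable list comprehension.
def active_user_names(users):
    return [u["name"].title() for u in users if u["active"]]

users = [{"name": "ada", "active": True}, {"name": "bob", "active": False}]
print(active_user_names(users))  # ['Ada']
```

The comprehension is the version a Python developer expects to find; the class is the "friction" the answer warns about.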
Bridging the “translation tax” between business stakeholders and engineers often requires using a ubiquitous language within the source code. How do you transition from technocentric naming to domain-driven logic, and how does this shift impact the way non-technical teams understand the development process?
Transitioning to domain-driven logic involves a conscious shift from technocentric naming, like process_data_table_row(), to names that reflect real-world business actions, such as submit_insurance_claim(). By adopting the “Ubiquitous Language” from Domain-Driven Design, we use the exact words that stakeholders use, such as “Premium,” “Escrow,” or “Claim,” directly within the source code. This eliminates the “translation tax” because a non-technical stakeholder can practically read the logic and understand the business rules governing the software. When the code reflects the domain, the gap between the product team and the engineering team shrinks, leading to a much more collaborative environment where everyone is speaking the same language. This transparency helps stakeholders understand the development process more clearly, as they can see their own business requirements mirrored in the structure of the system.
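A brief sketch of what that renaming buys, using hypothetical insurance-domain names (Premium, coverage_limit, and the rejection rule are invented for illustration):

```python
from dataclasses import dataclass

# Domain-driven version: the identifiers are the stakeholders' own words,
# so the rule "claims above the coverage limit are rejected" reads
# almost verbatim in the source code.
@dataclass
class Premium:
    coverage_limit: float

def submit_insurance_claim(premium: Premium, claim_amount: float) -> str:
    if claim_amount > premium.coverage_limit:
        return "rejected: claim exceeds coverage limit"
    return "accepted"

print(submit_insurance_claim(Premium(coverage_limit=5000.0), 7200.0))
```

A product owner who has never seen Python can still follow this logic, which is exactly the "translation tax" being eliminated.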
Evaluating software based on properties like joy and adaptability is often more subjective than checking for rule compliance. During a code review, what specific questions should a reviewer ask to determine if a module is truly composable and predictable, and how do you measure these qualities?
During a code review, we should move beyond checking for linter compliance and instead ask questions that reveal the true qualities of the code, such as “Can I use this logic in a CLI tool as easily as in a Web API?” To determine if a module is Unix-inspired, I ask if the function is trying to be too “smart” or if it is focused on one simple, well-defined job. Predictability can be measured by passing a null value to a function and seeing if the resulting error is helpful or if it exposes a hidden weakness in the system’s logic. We also look for “God Objects” or “Manager” classes that handle everything from validation to persistence, as these are clear indicators of poor composability. Ultimately, if the code looks like the official documentation for the language and uses domain terms rather than database terms, we know we are achieving a system that is both joyful to work with and highly adaptable.
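The two review probes above — "does this logic run in a CLI as easily as a Web API?" and "is the error for a null input helpful?" — can be sketched like this (the validation rule and handler shape are hypothetical):

```python
# A composable core: one small, well-defined job, importable into any shell.
def validate_claim_amount(amount):
    if amount is None:
        raise ValueError("claim amount is required; got None")
    if amount <= 0:
        raise ValueError(f"claim amount must be positive; got {amount}")
    return amount

# "CLI" usage: call the function directly.
print(validate_claim_amount(250.0))  # 250.0

# "Web API" usage: the same function behind a request-handler shell.
def handle_request(payload: dict) -> dict:
    try:
        return {"status": 200, "amount": validate_claim_amount(payload.get("amount"))}
    except ValueError as err:
        return {"status": 400, "error": str(err)}

print(handle_request({"amount": None})["status"])  # 400
```

Passing None yields an error that names the problem directly, and the core function needed no changes to move between contexts — both review questions pass.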
What is your forecast for modern software craftsmanship?
I forecast a future where software craftsmanship moves away from the rigid, rule-bound frameworks of the past and toward a high-velocity development model that prioritizes human-centric properties. We will see a decline in the obsession with complex architectural patterns that were designed for a different era, and instead, the most successful teams will be those that favor clarity and simplicity over theoretical perfection. As AI continues to assist with the mechanical and idiomatic aspects of coding, the role of the developer will shift toward becoming a curator of domain logic and a guardian of system predictability. Ultimately, our goal will be to build systems that don’t just work, but are genuinely enjoyable for the next generation of developers to read, maintain, and expand.
