How Do You Measure A Feature’s Real Impact?

In the world of software development, the gap between shipping a feature and truly understanding its impact can feel vast. We’re joined by Vijay Raina, a thought leader in enterprise SaaS, who specializes in cutting through the noise of vanity metrics to find the real story behind user behavior. He champions a data-driven approach that connects UX work directly to product strategy. Today, we’ll explore his insights on quantifying a feature’s target audience before it’s even built, defining what meaningful user engagement actually looks like, and using a simple framework to turn raw data into a clear roadmap for what to build, improve, or retire next. We’ll also unpack the common pitfall of equating conversion rates with good design and discuss how to have more productive conversations with stakeholders about the true value of user experience.

The article distinguishes between feature usage and the true target audience, noting that more people may have the underlying problem than currently use the existing solution. Could you walk us through your process for quantifying this potential target audience before a feature is even shipped, perhaps with a real-world example?

That’s one of the most common traps teams fall into. They look at an existing, poorly designed “Export” button that gets 5% usage and conclude that only 5% of users need to export data. The reality is often much larger. My process is about triangulation. First, we look at the quantitative data we do have, like that 5% usage, as a baseline. But then, we immediately turn to qualitative sources. We’ll dive into support tickets and comb through customer feedback channels, searching for keywords related to the problem. You’d be amazed at how many people write in saying, “How can I get my data out to show my boss?” That’s a clear signal. Finally, we might run a simple, one-question survey in-app for a specific user segment, asking, “How do you currently share your project results with your team?” The combination of existing behavior, direct feedback, and survey responses gives us a much more realistic picture. We might discover that the actual audience with this problem is closer to 40% of users, but the current solution is so buried or difficult that they’ve resorted to taking screenshots or manually copying data—painful workarounds that signal a huge opportunity.
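
To make the triangulation concrete, here is a minimal Python sketch that blends the three signals described above into a rough audience estimate. The function, its inputs, and the way the signals are combined are illustrative assumptions, not a prescribed formula.

```python
# Hypothetical sketch: triangulating the potential target audience for an
# "Export" feature from three signals. All inputs and weights are illustrative.

def estimate_target_audience(
    total_users: int,
    feature_users: int,       # users who clicked the existing export button
    ticket_mentions: int,     # support tickets mentioning the underlying problem
    survey_responses: int,    # users who answered the in-app survey
    survey_need_yes: int,     # of those, users describing the need or a workaround
) -> dict:
    """Combine behavioral, support, and survey signals into a rough estimate."""
    baseline = feature_users / total_users            # e.g. the 5% usage floor
    survey_rate = survey_need_yes / survey_responses  # share reporting the problem
    ticket_rate = ticket_mentions / total_users       # conservative lower bound

    # Treat the survey as the primary estimate; the other two act as sanity bounds.
    return {
        "baseline_usage": baseline,
        "support_signal": ticket_rate,
        "estimated_audience": max(baseline, survey_rate),
    }


if __name__ == "__main__":
    estimate = estimate_target_audience(
        total_users=10_000,
        feature_users=500,    # the 5% who find today's buried export button
        ticket_mentions=320,
        survey_responses=400,
        survey_need_yes=160,  # 40% describe screenshots or manual copying
    )
    print(estimate)  # {'baseline_usage': 0.05, ..., 'estimated_audience': 0.4}
```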

For the Adoption metric, you emphasize tracking “meaningful engagement” over simple clicks. Beyond sharing an export URL, what are some other signals of meaningful adoption you’ve used, and how do you decide which ones are most relevant for a completely new or different type of feature?

Meaningful engagement is all about validating that the user has solved the problem the feature was designed for. A click just tells you they found the button. For a dashboard feature, for instance, meaningful adoption isn’t just loading the page. It’s applying a filter, changing a date range, or saving a custom view. Those actions scream, “I am using this to analyze my data and find insights.” For a new collaboration tool, it might be successfully tagging a colleague and leaving a comment that gets a reply. The key to defining this for a brand-new feature is to go back to the core user story. Before we write a single line of code, we ask, “What is the successful outcome for the user?” For an export feature, the outcome isn’t clicking “export”; it’s having the file in their hands. So, tracking the successful download or the sharing of that URL is the true measure. If we were building a new AI-powered scheduling assistant, the metric wouldn’t be opening it; it would be the successful booking of a meeting with zero manual edits. It’s always about tracking the completion of the job-to-be-done.
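
One way to operationalize "meaningful engagement" is to enumerate the events that represent the completed job-to-be-done and count only users who trigger them. The sketch below does that for a hypothetical export and dashboard feature; the event names and the event-log shape are assumptions, not tied to any particular analytics tool.

```python
# Hypothetical sketch: counting "meaningful adoption" rather than raw clicks.
# Event names and the log format are illustrative.

# Events that signal the user actually completed the job-to-be-done,
# per feature (a finished export download, not just a button click).
MEANINGFUL_EVENTS = {
    "export": {"export_download_completed", "export_url_shared"},
    "dashboard": {"filter_applied", "date_range_changed", "view_saved"},
}

def adoption_rate(event_log, feature, exposed_users):
    """Share of exposed users who performed at least one meaningful event.

    event_log: iterable of (user_id, feature, event_name) tuples
    exposed_users: set of user ids who saw or could use the feature
    """
    meaningful = MEANINGFUL_EVENTS[feature]
    adopters = {
        user for user, feat, event in event_log
        if feat == feature and event in meaningful
    }
    return len(adopters & exposed_users) / len(exposed_users)


events = [
    ("u1", "export", "export_button_clicked"),      # found the button, not adoption
    ("u1", "export", "export_download_completed"),  # job done: counts
    ("u2", "export", "export_button_clicked"),      # clicked but never finished
]
print(adoption_rate(events, "export", exposed_users={"u1", "u2", "u3"}))  # ~0.33
```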

The TARS framework measures retention and then uses a Customer Effort Score specifically for those retained users. Can you elaborate on why it’s crucial to survey only retained users for satisfaction, and how this insight helps you prioritize improving “Liability” features (high retention, low satisfaction)?

This is probably the most powerful and misunderstood part of the process. If you survey every user who ever clicked on a feature, you’re mixing signals. You’ll get feedback from people who used it once and it wasn’t for them, people who couldn’t figure it out, and people who use it every single day. It’s noise. By surveying only the retained users—those who come back time and again—you are isolating the feedback from your most important cohort: the people for whom this feature is a necessity. This is where you find gold, especially for those “Liability” features. These are features with high retention because people have to use them for their workflow, but low satisfaction because they are clunky, slow, or confusing. The feedback from this group is a clear, flashing signal for your roadmap. They are essentially telling you, “We are locked into using this, and it is painful. Please, make it better.” It immediately elevates that feature’s priority over a shiny new project because improving it will deliver immediate relief and gratitude from a captive audience.
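
To make this step concrete, here is a minimal Python sketch of the two pieces described above: limiting the effort-score survey to retained users, and bucketing a feature by retention and the satisfaction of those retained users. The activity threshold, the 1-to-7 effort scale, the cutoffs, and the label for the fourth quadrant are assumptions for illustration, not part of the framework as stated.

```python
# Hypothetical sketch: survey only retained users, then place a feature in the
# retention-vs-satisfaction 2x2. Thresholds and data shapes are illustrative.

def retained_users(usage_by_user, min_active_weeks=3):
    """Users who came back repeatedly: active in at least min_active_weeks weeks."""
    return {user for user, weeks in usage_by_user.items()
            if len(weeks) >= min_active_weeks}

def classify_feature(retention_rate, avg_ces, retention_cut=0.4, ces_cut=4.0):
    """Map a feature to a quadrant using retention and retained-user satisfaction.

    avg_ces is assumed to be on a 1-7 "it was easy" scale, higher = better.
    """
    high_retention = retention_rate >= retention_cut
    high_satisfaction = avg_ces >= ces_cut
    if high_retention and high_satisfaction:
        return "Core"
    if high_retention and not high_satisfaction:
        return "Liability"       # people must use it, but it is painful: fix first
    if not high_retention and high_satisfaction:
        return "Overperforming"  # loved by a smaller niche
    return "Low traction"        # placeholder label for the fourth quadrant


usage = {"u1": {1, 2, 3, 4}, "u2": {1}, "u3": {2, 3, 5}}
survey_pool = retained_users(usage)   # only u1 and u3 receive the CES survey
print(survey_pool)
print(classify_feature(retention_rate=0.62, avg_ces=2.8))  # -> "Liability"
```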

The article argues that conversion rate is not a true UX metric. When stakeholders are laser-focused on conversion, how do you communicate this distinction and pivot the conversation toward more direct UX metrics, such as task completion time or error reduction, to demonstrate design impact?

I never start by telling a stakeholder that their focus on conversion is wrong; it’s the ultimate business goal, after all. Instead, I frame the conversation around leading and lagging indicators, a concept that usually resonates well. I explain that conversion rate is a lagging indicator. It tells us the result of many different activities—marketing campaigns, pricing changes, brand perception, and yes, user experience. It’s the final score of the game. Our UX work, however, focuses on the leading indicators—the specific plays that win the game. I’ll say something like, “To improve our overall conversion, let’s focus on the part we can directly control and measure: how easily users can accomplish their goals.” We can show that our redesign reduced the time to complete a critical task from three minutes to ninety seconds, or that it cut task-related errors by 50%. These are tangible, UX-driven wins. When you can demonstrate that you’re making users more successful and efficient, it becomes a much more logical and compelling story to say that these improvements are what will ultimately contribute to that lagging indicator of higher conversion.
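
As a rough illustration of the leading indicators mentioned here, the sketch below computes task completion rate, median completion time, and errors per session from before-and-after usability sessions. The session record shape and the sample numbers are hypothetical.

```python
# Hypothetical sketch: comparing task completion time and error rate before and
# after a redesign. Session records and sample values are illustrative.

from statistics import median

def task_metrics(sessions):
    """sessions: list of dicts with 'duration_s', 'errors', 'completed' fields."""
    completed = [s for s in sessions if s["completed"]]
    return {
        "completion_rate": len(completed) / len(sessions),
        "median_time_s": median(s["duration_s"] for s in completed),
        "errors_per_session": sum(s["errors"] for s in sessions) / len(sessions),
    }


before = [{"duration_s": 180, "errors": 2, "completed": True},
          {"duration_s": 210, "errors": 3, "completed": False}]
after = [{"duration_s": 90, "errors": 1, "completed": True},
         {"duration_s": 95, "errors": 0, "completed": True}]

print("before:", task_metrics(before))
print("after:", task_metrics(after))   # time roughly halved, fewer errors
```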

After calculating the S÷T score and mapping features on the 2×2 matrix, you get categories like “Overperforming” and “Core.” Could you share an anecdote about a time this visualization led to a surprising decision or a significant shift in your team’s product roadmap priorities?

Absolutely. We had a feature for creating custom data reports that the whole team, from product to engineering, was convinced was a “Core” feature. We were planning a major, six-month overhaul to add even more capabilities to it. When we finally ran the TARS analysis and plotted it on the matrix, we were shocked. It landed squarely in the “Overperforming” quadrant: exceptionally high satisfaction but much lower retention than we assumed. It wasn’t a daily-use tool for everyone. After we got over the initial surprise, we dug in and found it was used intensely by a small segment of our users for critical monthly or quarterly reporting, and they absolutely loved it. For them, it was a game-changer. This visualization completely changed our roadmap. We scrapped the massive overhaul and instead shifted to a strategy of protecting and maintaining its excellence for that power-user niche. We then reallocated that huge chunk of engineering time to a “Liability” feature—a clunky settings panel—that was frustrating a much broader set of users every single day. The matrix gave us the clarity and confidence to make a pivot that was better for all our users.

Do you have any advice for our readers on how a small team, with limited resources, can begin implementing a framework like TARS to make more data-informed decisions about their product features?

My best advice is not to get overwhelmed or to aim for perfection. Start small. You don’t need a sophisticated data science team to begin. Pick just one feature you’re working on or one that you’ve always debated. For ‘Target Audience,’ simply talk to a handful of customers or read through the last month of support tickets. For ‘Adoption,’ most product analytics tools can track a single event that signifies meaningful use. For ‘Retention,’ a simple cohort chart can show you whether people are coming back. And for ‘Satisfaction,’ use a free tool to pop up a one-question survey after a user has successfully used the feature three times. The goal at the beginning is not to have statistically perfect data; it’s to build the muscle and the culture of asking the right questions. This framework provides the questions. Once you start answering them, even with simple data, you will already be making smarter, more user-centric decisions than you were before.
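
For the "simple cohort chart" idea, here is a minimal sketch that builds weekly retention cohorts from a plain event log, assuming nothing more than (user, week) usage records. Field names and week numbering are illustrative.

```python
# Hypothetical sketch: a simple weekly retention cohort for one feature,
# built from a plain event log of (user_id, week_number) usage records.

from collections import defaultdict

def weekly_cohort_retention(events):
    """Return {cohort_week: [share active 0 weeks later, 1 week later, ...]}."""
    first_week = {}
    active = defaultdict(set)                 # week -> users active that week
    for user, week in events:
        first_week[user] = min(week, first_week.get(user, week))
        active[week].add(user)

    cohorts = defaultdict(set)                # cohort week -> users who started then
    for user, week in first_week.items():
        cohorts[week].add(user)

    last_week = max(active)
    table = {}
    for start, users in sorted(cohorts.items()):
        row = []
        for offset in range(last_week - start + 1):
            returning = users & active[start + offset]
            row.append(len(returning) / len(users))
        table[start] = row
    return table


log = [("u1", 1), ("u2", 1), ("u1", 2), ("u3", 2), ("u1", 3)]
for cohort, rates in weekly_cohort_retention(log).items():
    print(f"week {cohort} cohort:", [f"{r:.0%}" for r in rates])
```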
