With the release of Michele A. Williams’s new book, Accessible UX Research, the conversation around true digital inclusion has reached a new level of urgency. To unpack these ideas, we sat down with Vijay Raina, a leading voice in enterprise SaaS technology and software architecture. Vijay has spent over two decades helping top tech companies bridge the gap between powerful technology and the people who use it, making him the perfect guide to explore how these principles apply in the real world.
In our conversation, we explore the subtle but powerful ways a “disability mindset” can limit a product’s potential and how to shift from a compliance-focused approach to one of genuine inclusion. Vijay shares practical strategies for recruiting participants with disabilities, designing research that is itself accessible, and transforming raw data into compelling narratives that drive real organizational change. Finally, we discuss how to build a lasting culture of accessibility that moves beyond a single project and becomes woven into a company’s DNA.
The book’s summary emphasizes confronting our “disability mindset.” How does this mindset create blind spots in traditional research? Can you share a specific, step-by-step action a team can take to shift from a compliance checklist to a more genuinely inclusive approach in their next project?
The “disability mindset” is a trap many well-intentioned teams fall into. It’s the tendency to view accessibility as a list of technical requirements, like the WCAG success criteria, that you check off at the end of a project to avoid legal trouble. This creates enormous blind spots because it mistakes compliance for experience. You might have a perfectly compliant component that is still frustrating or completely unusable for someone in their actual daily context. It’s the difference between a building having a ramp and that ramp being so steep it’s dangerous to use.
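As a rough, hypothetical illustration of that compliance-versus-experience gap (a minimal TypeScript/DOM sketch, not an example from the book; the link text and URL are invented), both links below would pass a typical automated check that every link has an accessible name, yet only the second one makes sense to a screen reader user skimming a list of links out of visual context.

```typescript
// A minimal sketch of the compliance-versus-experience gap described above.
// Both links have an accessible name, so a typical automated checker reports
// no error; only the second is genuinely usable when a screen reader user
// pulls up a list of links out of visual context. Names and URLs are invented.

function compliantButConfusingLink(): HTMLAnchorElement {
  const link = document.createElement("a");
  link.href = "/reports/q3";
  link.textContent = "Read more"; // has a name, but "Read more" about what?
  return link;
}

function compliantAndUsableLink(): HTMLAnchorElement {
  const link = document.createElement("a");
  link.href = "/reports/q3";
  // The accessible name carries the context on its own.
  link.textContent = "Read the Q3 accessibility report";
  return link;
}

document.body.append(compliantButConfusingLink(), compliantAndUsableLink());
```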
To break out of this, a team can start their next project with a “journey-based assumption audit.” First, before writing a single line of code, map out the primary user journey. Second, hold a workshop dedicated to challenging your assumptions, specifically asking: “Who are we excluding with this flow?” Actively brainstorm how a screen reader user, a keyboard-only user, or someone with cognitive disabilities might experience this journey differently. Finally, commit to recruiting participants from these groups for formative research—not just summative testing at the end. This simple, three-step process reframes accessibility from a late-stage quality check to a foundational principle of good design.
The content notes that recruiting disabled participants “is not always easy.” What are some of the most common pitfalls researchers encounter during recruitment, and could you walk us through a successful strategy or a specific channel you’ve used to find and ethically engage participants for a study?
One of the biggest pitfalls is approaching recruitment transactionally. Researchers often blast out a generic call for participants on broad platforms, treat disabled communities as a monolith, and then wonder why they get no response. Another common mistake is using inaccessible tools for the recruitment itself—like a sign-up form that doesn’t work with a screen reader. This sends a clear message that you haven’t done the basic work and erodes trust before the study even begins. You’re essentially asking for help making your product accessible while using an inaccessible process to do it.
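To make that sign-up form failure concrete, here is a minimal, hypothetical TypeScript/DOM sketch (the field names are invented) contrasting an input that leans on a placeholder with one that has a programmatically associated label, the kind of small detail that determines whether a prospective participant can even complete the form.

```typescript
// A hedged sketch of the recruitment-form problem described above.
// The field names are invented; the point is the pattern, not the product.

// Anti-pattern: a placeholder is not a reliable label. Screen reader support
// for placeholders varies, and the hint disappears as soon as the user types.
function unlabeledEmailField(): HTMLInputElement {
  const input = document.createElement("input");
  input.type = "email";
  input.placeholder = "Email address";
  return input;
}

// Fix: a programmatically associated <label>, so the field has a stable
// accessible name regardless of visual styling.
function labeledEmailField(): HTMLDivElement {
  const wrapper = document.createElement("div");

  const label = document.createElement("label");
  label.htmlFor = "signup-email";
  label.textContent = "Email address";

  const input = document.createElement("input");
  input.type = "email";
  input.id = "signup-email";
  input.autocomplete = "email";

  wrapper.append(label, input);
  return wrapper;
}

document.body.append(unlabeledEmailField(), labeledEmailField());
```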
A strategy that has worked for me is to build genuine, long-term relationships with disability advocacy organizations. Instead of a cold email asking for “participants,” I’ll reach out to a local or online group to introduce myself and my team’s goals. I make it clear we want to be partners, not just extractors of information. We’ll offer to sponsor an event or provide a free accessibility workshop for their members. This builds trust over time, and when we do have a study, we have a community that knows we are sincere. It’s about contributing to the community, not just taking from it.
According to the article, the research itself must be accessible, not just the final product. What’s a common mistake teams make when designing a prototype or facilitating a session for participants with diverse abilities? Please share an anecdote where one small adjustment made a huge difference for a participant.
A very common mistake is creating high-fidelity, visually polished prototypes that are completely unusable for anyone who isn’t a sighted mouse user. Teams spend weeks making a prototype look perfect in Figma, full of complex drag-and-drop interactions and unlabeled icons, only to realize during a session with a screen reader user that it’s just a silent, unnavigable image. They end up having to narrate the screen, which hopelessly biases the session and robs the participant of their agency. The research becomes about testing the researcher’s ability to describe an interface, not the participant’s ability to use it.
I remember a session for a new data dashboard where this exact thing happened. The team was devastated. For the next session with another blind participant, we made a radical adjustment. We built a parallel prototype in a simple text document. It was just headings, links, and text describing the data. It was ugly, but it was structured semantically. The difference was night and day. The participant could fly through the document with their screen reader, independently discovering insights and critiquing the information architecture. That single change transformed the session from a frustrating, guided tour into a true, user-led evaluation.
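For a sense of what that parallel, text-first prototype might look like, here is a minimal, hypothetical sketch (TypeScript/DOM; the dashboard title, figures, and URLs are invented): just a heading, text, and descriptive links, approximating the kind of semantic structure the answer describes.

```typescript
// A hedged sketch of the "parallel prototype" idea: the same dashboard
// content expressed as plain semantic structure (headings, text, and links)
// instead of a silent visual mock-up. Names, figures, and URLs are invented.

interface Metric {
  label: string;
  value: string;
  detailUrl: string;
}

function buildTextPrototype(title: string, metrics: Metric[]): HTMLElement {
  const main = document.createElement("main");

  // A real heading gives screen reader users a landmark they can jump to.
  const heading = document.createElement("h1");
  heading.textContent = title;
  main.appendChild(heading);

  // A list exposes how many items there are and lets users skip between them.
  const list = document.createElement("ul");
  for (const metric of metrics) {
    const item = document.createElement("li");
    item.textContent = `${metric.label}: ${metric.value}. `;

    // A descriptive link still makes sense in a screen reader's link list.
    const detail = document.createElement("a");
    detail.href = metric.detailUrl;
    detail.textContent = `View details for ${metric.label.toLowerCase()}`;
    item.appendChild(detail);

    list.appendChild(item);
  }
  main.appendChild(list);

  return main;
}

document.body.appendChild(
  buildTextPrototype("Monthly support dashboard", [
    { label: "Open tickets", value: "128", detailUrl: "/tickets/open" },
    { label: "Average response time", value: "3.2 hours", detailUrl: "/metrics/response-time" },
  ])
);
```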
The book positions the researcher as a “storyteller, educator, and advocate” when sharing findings. How does this framing change the way accessibility issues are presented to stakeholders? Could you share a tactic for turning raw data into a compelling story that successfully drives organizational change?
This framing is absolutely crucial. When you present accessibility findings as just a list of bugs or WCAG failures, stakeholders see it as a cost center—a list of chores to be done. It’s impersonal and easy to deprioritize. But when you position yourself as a storyteller, you’re not just reporting on issues; you are conveying the human impact of design decisions. You’re connecting a “bug” to a real person’s frustration, exclusion, or inability to do their job. This changes the conversation from “How much will this cost to fix?” to “How can we not fix this for our users?”
A powerful tactic is to lead with the story, not the data. Instead of starting a presentation with a slide full of charts, start with a powerful, anonymized quote or a short video clip from a research session showing the moment of struggle. For one project, we had data showing a 90% failure rate on a critical checkout process for screen reader users. But what got the vice president’s attention was a two-minute audio clip of a participant saying, “I guess this website just isn’t for people like me.” We followed that clip with the data, and in that moment, the issue became a moral and business imperative, not just a line item on a spreadsheet. The fix was funded that same day.
The book aims to help readers “establish a culture for accessibility.” Beyond specific studies, how can a research team build lasting trust and confidence around inclusion with their engineering and product partners? Please give a real-world example of how inclusive research findings directly influenced a product roadmap.
Building a culture isn’t about one perfect study; it’s about consistency, collaboration, and making accessibility feel like a shared goal, not a top-down mandate. The best way to build trust is to demystify the work. Invite engineers and product managers to be observers in research sessions so they can witness the challenges firsthand. Create an open channel where developers can ask “dumb questions” without fear of judgment. When a team feels like they are partners in discovery rather than just recipients of bug reports, their entire mindset shifts.
I saw this happen with a team working on a project management tool. The core feature was a visual, card-based board, which our research showed was completely unusable for employees with low vision who used screen magnifiers. The initial reaction was defensive; a non-visual board seemed impossible. But the researchers brought the lead engineer into the next session. He watched a participant struggle for 20 minutes to simply find their assigned tasks. The engineer left that session and, on his own initiative, built a basic prototype for a “list view” of the board. This wasn’t on any roadmap, but seeing the problem directly inspired a solution. That “list view” became one of the product’s most praised features and fundamentally changed how the company thought about building inclusive products from the start.
What is your forecast for the role of inclusive research in product development?
My forecast is that inclusive research is going to rapidly move from a specialized discipline, often siloed within an “accessibility team,” to a core competency expected of all UX researchers and product teams. Companies are beginning to understand that this isn’t just about mitigating legal risk or being ethical—it’s about innovation and market growth. By testing with users at the margins, you uncover usability problems that affect everyone and discover novel solutions you would have never found by only testing with the “average” user. In the next few years, I predict that a product team launching a major feature without having conducted research with disabled participants will be seen as just as negligent as launching without any user research at all. It will become a non-negotiable part of a healthy, mature product development lifecycle.
