Stack Internal Unveils Its 2025 Year in Review and AI Vision

With a career centered on enterprise SaaS technology, Vijay Raina has become a leading voice in how large organizations build, share, and leverage internal knowledge. As Stack Internal wrapped up a transformative year, marked by a strategic rebrand and a deep integration with AI, we sat down with Vijay to discuss the intricate dance between human collaboration and artificial intelligence.

Our conversation explored the thoughtful design behind celebrating contributors, the strategy for embedding knowledge directly into team workflows, and the secure architecture required to connect private data with powerful AI tools. We also delved into the customer-driven pivot to an “AI-first” mission and looked ahead at the ambitious goal of breaking down information silos across the enterprise.

The “Your 2025 Stacked” review introduces unique contributor categories like “Most dependable” and “Most connected.” Can you describe the process for selecting these specific metrics, and share an anecdote about how a client used this data to recognize key team members?

We realized early on that measuring impact can’t just be about who posts the most. True collaboration is nuanced. We wanted to celebrate the different archetypes of a healthy knowledge community. There’s the person who shows up every single day—that’s “Most dependable.” Then there’s the person who bridges gaps between departments, answering questions across a wide array of tags—that’s “Most connected.” We chose these categories to reflect the human behaviors that truly strengthen an organization. I heard from one of our enterprise clients that they were completely surprised by their “Most connected” contributor. It was a junior engineer who wasn’t a formal team lead, but the data clearly showed they were the go-to person for three different product teams. This gave leadership a new lens to see who their real influencers were, beyond the org chart.

You launched digests for Slack and Microsoft Teams in 2025. Beyond simply pushing notifications, what was the core strategy for integrating into these platforms, and what specific metrics demonstrated that this approach improved knowledge discovery and engagement within teams’ existing workflows?

Our core strategy was to bring the knowledge to where the work happens, eliminating the friction of context switching. No one wants another tool to check. We designed the digests not as an alert system, but as a weekly pulse of the community’s brain. By surfacing top unanswered questions and highlighting key contributors directly in a Slack or Teams channel, we were embedding the act of knowledge sharing into their daily conversation. The proof was in the engagement data. We saw a significant uptick in answers and votes on questions that were featured in the digests. It wasn’t just passive consumption; it was active problem-solving. It confirmed that we were successfully turning a notification into a catalyst for collaboration right within their flow of work.
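The digest workflow described above can be pictured as assembling a structured message payload before posting it to a channel webhook. A minimal sketch in Python, assuming a Slack Block Kit layout; the question fields and contributor list are illustrative assumptions, not Stack Internal's actual digest format:

```python
# Build a weekly digest payload for a Slack channel webhook.
# The question fields and Block Kit layout here are illustrative
# assumptions, not Stack Internal's actual implementation.

def build_digest(questions, contributors):
    """Return a Slack Block Kit payload summarizing the week."""
    blocks = [{"type": "header",
               "text": {"type": "plain_text", "text": "Weekly Knowledge Digest"}}]
    # Surface the top unanswered questions so the channel can jump in.
    for q in sorted(questions, key=lambda q: q["views"], reverse=True)[:3]:
        blocks.append({"type": "section",
                       "text": {"type": "mrkdwn",
                                "text": f"*Unanswered:* <{q['url']}|{q['title']}> "
                                        f"({q['views']} views)"}})
    # Highlight key contributors from the week.
    blocks.append({"type": "context",
                   "elements": [{"type": "mrkdwn",
                                 "text": "Top contributors: " + ", ".join(contributors)}]})
    return {"blocks": blocks}
```

Posting this payload to a channel on a schedule is what turns a passive notification into a prompt for answers and votes, right where the conversation already happens.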

The MCP server is a major development connecting internal knowledge to AI like GitHub Copilot. Could you provide a step-by-step breakdown of how this feature ensures an AI-generated response is grounded and secure, using only a company’s trusted internal data?

This is absolutely critical, and we built it with a security-first mindset. The process is a secure, closed loop. When an employee asks a question in a tool like GitHub Copilot, the query doesn’t just go out to a generic AI model. First, our MCP server intercepts it. It then performs a targeted search against the organization’s private Stack Internal knowledge base—all the trusted articles and verified answers. The server then packages the most relevant, trusted information as context and feeds it to the generative AI. The AI then formulates its response based only on that verified, internal data. This ensures the answer is grounded in the company’s reality, not the public internet, and that sensitive internal knowledge never leaves their secure ecosystem.
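The closed loop described here follows the familiar retrieval-augmented generation pattern: retrieve from the private knowledge base, then constrain the model to that context. A minimal sketch, with a naive keyword scorer standing in for the real search and an illustrative prompt shape; none of these function names come from Stack Internal's actual MCP server:

```python
# Sketch of the grounded-answer loop: retrieve from the private KB,
# then constrain the model to only that context. The scoring method
# and prompt format are illustrative assumptions.

def retrieve(query, knowledge_base, top_k=2):
    """Rank internal articles by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc["text"].lower().split())), doc)
              for doc in knowledge_base]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Drop documents with no overlap so irrelevant content never
    # reaches the model as "context".
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query, docs):
    """Package only verified internal content as context for the model."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in docs)
    return ("Answer using ONLY the internal sources below. "
            "If they are insufficient, say so.\n\n"
            f"{context}\n\nQuestion: {query}")
```

The key property is that the generative model only ever sees the packaged context, so its answer is grounded in the organization's verified content rather than the public internet.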

The rebrand to Stack Internal marked an “AI-first era.” What specific pain points for your 100+ enterprise organizations prompted this shift, and how did features like the LangChain loader and SDK directly address their need for more scalable, AI-native collaboration?

The shift was a direct response to what our largest clients were telling us. They were all grappling with the same paradox: immense excitement about the potential of generative AI, paired with a deep-seated fear of its risks—data leaks, inaccurate or “hallucinated” answers, and a lack of control. They couldn’t just plug their organization into a public AI. They needed a way to fuel these new tools with their own trusted knowledge. The rebrand was our commitment to solving that. Features like the LangChain loader and our SDK were the practical tools to make it happen. They provide the secure, scalable building blocks for companies to create their own custom, AI-native applications, ensuring that their AI initiatives are powered by reliable, curated internal information, not a black box.
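The loader pattern referenced above can be sketched without the library itself: LangChain document loaders are classes whose `load()` method returns documents carrying `page_content` and `metadata`. Everything below (the class name, field names, and the verified-only filter) is an assumption for illustration; a real integration would call the Stack Internal API and subclass LangChain's loader base class:

```python
# Sketch of the document-loader pattern LangChain uses: a class whose
# load() returns documents with page_content plus metadata. The class
# name, fields, and filtering rule are illustrative assumptions.

class InternalKnowledgeLoader:
    def __init__(self, articles):
        # In practice this would take an API endpoint and access token
        # rather than an in-memory list.
        self._articles = articles

    def load(self):
        """Convert raw articles into LangChain-style document dicts."""
        return [{"page_content": a["body"],
                 "metadata": {"source": a["url"], "verified": a["verified"]}}
                # Only trusted, curated content feeds downstream AI tools.
                for a in self._articles if a["verified"]]
```

Filtering to verified content at load time is what keeps the downstream AI application out of "black box" territory: every document it reasons over is traceable to a curated source.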

Looking ahead, your “Knowledge Ingestion” initiative will pull content from systems like Confluence. What is the biggest technical challenge in de-siloing this information, and can you describe the ideal “problem solved” moment you envision for a user once this is fully implemented?

The biggest technical hurdle isn’t just moving data; it’s understanding context and maintaining trust. A Slack conversation has a very different structure and level of verification than a formally approved Confluence document. The challenge is to ingest this diverse content, intelligently parse its meaning and context, and integrate it into a single, reliable source of truth without creating a noisy, untrustworthy mess. The ideal “problem solved” moment I envision is this: an engineer is stuck on a deployment issue. They ask a question, and the system instantly provides a synthesized answer that pulls a verified code block from a Stack Internal article, a key decision from a historical Slack thread, and a link to the official architecture document from Confluence. That single, comprehensive answer, delivered in seconds, is the ultimate goal—eliminating the hours they would have spent hunting across three different systems.
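One way to think about de-siloing content with different verification levels is to normalize every source into a single schema that carries an explicit trust tier, so a synthesized answer can lead with the most reliable material. A minimal sketch; the schema, source names, and tier ordering are assumptions, not Stack Internal's actual ingestion design:

```python
# Normalize records from different systems into one schema with an
# explicit trust tier, so a synthesized answer can rank its sources.
# The source names and tier ordering are illustrative assumptions.

TRUST_TIER = {"stack_internal_verified": 3, "confluence": 2, "slack": 1}

def normalize(record, source):
    """Map a source-specific record onto a unified, rankable shape."""
    return {"source": source,
            "trust": TRUST_TIER[source],
            # Slack threads carry a "topic" where articles carry a "title".
            "title": record.get("title") or record.get("topic", "(untitled)"),
            "text": record["text"]}

def rank_for_answer(records):
    """Most-trusted sources first, so verified content leads the synthesis."""
    return sorted(records, key=lambda r: r["trust"], reverse=True)
```

In this framing, the verified article, the Confluence document, and the historical Slack thread from the example all land in one ranked list, which is what makes a single comprehensive answer possible.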

What is your forecast for how generative AI will fundamentally change the way large organizations manage and surface trusted internal knowledge over the next few years?

My forecast is that the idea of a “knowledge base” as a destination people have to visit will dissolve. Instead, trusted knowledge will become an intelligent, ambient layer that powers every other application in the enterprise stack. The focus will shift dramatically from knowledge capture to knowledge activation. It will no longer be about employees searching for information. It will be about information finding the employee at the exact moment of need, presented contextually within their workflow—whether in their IDE, their CRM, or their chat client. The most critical function of a platform like ours will be to serve as the curator and guarantor of trust, ensuring that the AI powering these experiences is fueled exclusively by verified, up-to-date, and secure internal knowledge.
