Can AI Finally Solve the Cybersecurity Talent Shortage?

Our SaaS and software expert, Vijay Raina, is a specialist in enterprise SaaS technology and tools. He also provides thought leadership in software design and architecture. In our conversation, we explored the evolution of Security Operations Centers (SOCs), delving into how AI is not just a buzzword but a practical force multiplier for security teams of all sizes. We discussed the surprising security gaps even in large enterprises, especially around critical SaaS platforms, and how a data-first approach to AI can cut through the noise of false positives. The discussion also covered the mechanics of AI-driven investigations that interact directly with employees and the creative potential of custom, natural-language-based automation agents, ultimately painting a picture of a more proactive and intelligent future for cybersecurity.

Your platform aims to amplify small security teams, making them function like a much larger one. Could you walk me through the key differences in your approach when you’re augmenting an existing team versus building a security operations center from the ground up for a startup?

It’s a fascinating spectrum, and our approach really depends on where a company is in its security journey. For a startup or a high-growth company that doesn’t have a SOC, we’re essentially providing one in a box. They can build a fully functional security operations center in days, bypassing the immense effort of buying disparate tools, hiring dedicated detection engineers, and staffing a team of analysts. We provide the detections, the triaging, the investigation capabilities, and the response framework right out of the gate. On the other hand, when we work with an established organization that already has a SOC, our goal is amplification. Imagine you have a lean team of two or three analysts; our mission is to make them feel and operate like a team of ten. We fill the gaps where they can’t hire fast enough—a very real problem given the skills shortage—and use our AI SOC to augment their existing capabilities, handling the high-volume, repetitive tasks so they can focus on the most critical threats.

It’s surprising that even large enterprises may lack security coverage for critical SaaS platforms like GitHub or OpenAI. What are the common reasons for this oversight, and what are the first few practical steps a company should take to start monitoring these internet-facing services effectively?

You’d be shocked. We work with Fortune 2000 companies and find that even they often have significant blind spots when it comes to monitoring their most critical, internet-facing SaaS applications. Think about it: so much intellectual property and sensitive data now live in platforms like GitHub, Snowflake, or OpenAI, yet they frequently lack dedicated detection and response coverage. This happens because traditional security was built around the network perimeter and endpoints, and many teams haven’t adapted their strategies to this new, decentralized reality. The first practical step is acknowledging this gap and prioritizing these services based on the data they hold. The next step is to find a solution that can tap directly into these platforms via APIs to ingest logs, event data, and configurations. You don’t need to boil the ocean; start with one or two critical services and begin establishing a baseline of normal activity. The key is to get visibility where your most valuable data actually resides today.
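To make that first step concrete, here is a minimal sketch of what "tap directly into these platforms via APIs" can look like for one service. It pulls recent audit events from a GitHub organization and tallies activity per actor as a crude starting baseline. It assumes a GitHub Enterprise Cloud organization (the audit-log endpoint requires it) and a token with read access; the ORG name and GITHUB_TOKEN variable are placeholders, not anything specific to the platform discussed here.

```python
# Minimal sketch: pull recent audit events for a GitHub organization and
# tally per-actor activity as a crude "normal behavior" starting point.
# ORG and GITHUB_TOKEN are placeholders supplied by the operator.
import os
from collections import Counter

import requests

ORG = "your-org"                      # placeholder organization name
TOKEN = os.environ["GITHUB_TOKEN"]    # read-only token

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/audit-log",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"per_page": 100},         # the most recent 100 events is enough for a first look
    timeout=30,
)
resp.raise_for_status()

# Count events per actor; unusually heavy actors are a starting point for review.
baseline = Counter(event.get("actor", "unknown") for event in resp.json())
for actor, count in baseline.most_common(10):
    print(f"{actor}: {count} events")
```

From there, the same pattern repeats for each additional SaaS service: connect, ingest, and let the baseline accumulate before tightening detections.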

Traditional anomaly detection often struggles with high rates of false positives. How does your approach of combining statistical modeling with LLM-based triaging specifically reduce this noise and increase the reliability of alerts? Could you provide an example of how the AI adds crucial context?

Anomaly detection absolutely has a bad reputation for being noisy, and that’s because traditional statistical models lack context. Our approach is layered to solve exactly this problem. We still use robust statistical modeling to establish a baseline of what’s normal, but the real magic happens when we layer on our AI agents, which are powered by large language models. These agents act as a knowledge layer. For instance, a developer might perform actions in a cloud environment—like spinning up resources or accessing data in unusual ways—that look exactly like an attacker’s movements to a purely statistical model. This would typically trigger a low-fidelity, noisy alert that an analyst would have to chase down. However, our AI agent enriches this raw alert with business context. It understands this user is a developer, that this activity is consistent with their role, and can then intelligently decide that this is likely benign, effectively weeding out the false positive before it ever reaches a human. This allows us to analyze even the lowest-level signals, let the machine stitch them together, and only escalate truly high-fidelity threats.
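The layering he describes can be illustrated with a short sketch: a statistical check flags an outlier, and only then is an LLM "triage agent" handed business context before a verdict is reached. The vendor's actual pipeline is proprietary; the OpenAI client below is a stand-in for the triage agent, and lookup_user_context() is a hypothetical directory/HR enrichment step.

```python
# Illustrative sketch only: statistical baseline first, LLM triage with context second.
import statistics

from openai import OpenAI


def is_statistical_outlier(count: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Flag an event count more than z_threshold standard deviations above the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return (count - mean) / stdev > z_threshold


def lookup_user_context(user: str) -> str:
    """Hypothetical enrichment: role, team, and typical activity for the user."""
    return "jsmith is a platform developer; provisioning cloud resources is routine for this team."


history = [4, 6, 5, 7, 5, 6, 4]   # daily resource-creation counts for this user
today = 38                        # today's count: anomalous to a purely statistical model

if is_statistical_outlier(today, history):
    client = OpenAI()             # reads OPENAI_API_KEY from the environment
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a SOC triage assistant. Answer 'benign' or 'escalate' with one sentence of reasoning."},
            {"role": "user", "content": f"Alert: user jsmith created {today} cloud resources today vs. a baseline of ~5. Context: {lookup_user_context('jsmith')}"},
        ],
    )
    print(verdict.choices[0].message.content)
```

The point of the sketch is the ordering: the cheap statistical model does the broad screening, and the contextual reasoning is reserved for the small set of events that survive it.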

Rather than heavily fine-tuning your models, you emphasize a data-first approach that focuses on providing LLMs with strong context. What are the key trade-offs with this strategy, and how does your upfront data engineering and enrichment work lead to more predictable and reliable results?

This is a fundamental part of our philosophy. Many competitors take an overlay approach, trying to make sense of alerts from other third-party systems. We believe that leads to guesswork. Instead, we took a data-first approach from day one. The trade-off is that it requires a massive upfront investment in data engineering. We ingest raw data, build semantics around it, create relationships, and enrich it extensively before it ever gets near an LLM. The payoff, however, is immense predictability and reliability. Think of it as giving an expert a well-organized, concise briefing document instead of a 100-page book and asking for a summary. By narrowing the scope of the data and providing rich, domain-specific context, we remove the guesswork for the LLM. It can then apply its powerful general intelligence and reasoning capabilities to a clean, well-understood problem. This is how you get human-like reasoning at machine scale, and it drastically reduces the need for constant, brittle fine-tuning of the models themselves.
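The "briefing document" analogy can be made tangible with a small sketch: a raw log event is reduced to a compact, semantically labeled record before any model sees it. The field names and the enrich() helper are hypothetical; the point is narrowing scope and attaching context up front rather than asking a model to interpret raw logs.

```python
# Minimal sketch of the "briefing document" idea: raw event in, concise labeled record out.
import json
from dataclasses import asdict, dataclass


@dataclass
class Briefing:
    event: str              # what happened, in plain terms
    actor_role: str         # business context about the user
    asset_sensitivity: str  # what the touched data means to the company
    prior_history: str      # relationship to earlier activity


def enrich(raw_event: dict) -> Briefing:
    """Hypothetical enrichment: map a raw log line to a concise, labeled briefing."""
    return Briefing(
        event=f"{raw_event['user']} exported {raw_event['rows']} rows from {raw_event['table']}",
        actor_role="data analyst on the BI team",
        asset_sensitivity="table contains aggregated, non-PII metrics",
        prior_history="similar export volumes seen weekly for the past quarter",
    )


raw = {"user": "akim", "rows": 120_000, "table": "analytics.daily_metrics"}
# This concise record, not the raw logs, is what the LLM reasons over.
print(json.dumps(asdict(enrich(raw)), indent=2))
```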

Security teams often spend significant time manually verifying alerts with employees. Could you walk me through the step-by-step process of how your AI agent investigates a potential threat, from initial detection to interacting with a user on Slack to confirm or deny the activity?

This manual verification process is a huge, asynchronous time sink for most SOCs. Our AI agent automates this beautifully. Let’s say our system detects a suspicious login or an unusual data access pattern. The first step is the initial detection and triage, where the AI assesses the event against behavioral models and contextual data. If it can’t be immediately dismissed and requires user verification, the agent moves to the investigation phase. It will automatically ping the user directly in Slack with a clear, concise message like, “Hi, we detected a login to your account from a new device in an unfamiliar location. Was this you?” The user can respond with a simple yes or no. This response is then fed directly back into the investigation as a critical piece of evidence. If the user confirms the activity, the alert is closed as a false positive. If they deny it, the system immediately escalates it as a confirmed threat and can trigger automated response actions, all without a human analyst having to send a single message or wait around for a reply.
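The user-verification step he describes maps cleanly onto Slack's standard bot API. The sketch below covers only the outbound prompt; capturing the yes/no reply would require a Slack interactivity handler (or reading the thread), which is omitted here. The token, email, and message wording are assumptions for illustration, not the vendor's actual integration.

```python
# Sketch of the verification prompt only: DM the affected employee and ask yes/no.
import os

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

# Resolve the employee's Slack ID from their corporate email.
user = client.users_lookupByEmail(email="jsmith@example.com")["user"]["id"]

# Open a DM and send the verification prompt.
dm = client.conversations_open(users=user)["channel"]["id"]
client.chat_postMessage(
    channel=dm,
    text=(
        "Hi, we detected a login to your account from a new device in an "
        "unfamiliar location. Was this you? Please reply yes or no."
    ),
)
# A "no" (or a timeout) would escalate the alert; a "yes" closes it as benign.
```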

You enable customers to build their own “automation agents” using natural language for bespoke response actions. Can you share an anecdote about a unique or particularly creative automation agent a customer has built, and how it helped them move from reactive threat hunting to proactive defense?

This is where things get really exciting, because we’re moving beyond canned, step-by-step playbooks. One of the most creative uses I’ve seen involved a customer dealing with persistent password spray attempts. Instead of just blocking IPs reactively, they built a proactive automation agent using simple natural language. They instructed it: “Look for all threats where I see password spray attempts. Extract the source IPs from these events. Now, keep a running list of these IPs and generate a daily report of any activity, successful or not, from any IP on that list.” What they created was a self-maintaining threat intelligence feed tailored to their environment. It transformed a reactive, manual threat hunting exercise into an automated, proactive defense mechanism. If any of those previously hostile IPs ever showed up again, even in a seemingly benign way, the security team knew about it instantly, allowing them to preempt a potential breach.
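Under the hood, that natural-language instruction amounts to a self-maintaining watchlist. Here is a rough sketch of the idea: remember source IPs seen in password-spray alerts, then flag any later event from those IPs. The event shapes and the JSON file store are assumptions made for illustration.

```python
# Rough sketch of a password-spray IP watchlist with a daily report.
import json
from pathlib import Path

WATCHLIST = Path("spray_ip_watchlist.json")


def load_watchlist() -> set[str]:
    return set(json.loads(WATCHLIST.read_text())) if WATCHLIST.exists() else set()


def save_watchlist(ips: set[str]) -> None:
    WATCHLIST.write_text(json.dumps(sorted(ips)))


def update_from_alerts(alerts: list[dict]) -> set[str]:
    """Add source IPs from password-spray alerts to the running watchlist."""
    ips = load_watchlist()
    ips |= {a["source_ip"] for a in alerts if a.get("type") == "password_spray"}
    save_watchlist(ips)
    return ips


def daily_report(events: list[dict], ips: set[str]) -> list[dict]:
    """Return every event, successful or not, that came from a watched IP."""
    return [e for e in events if e.get("source_ip") in ips]


alerts = [{"type": "password_spray", "source_ip": "203.0.113.7"}]
events = [{"source_ip": "203.0.113.7", "action": "login_success", "user": "jsmith"}]
print(daily_report(events, update_from_alerts(alerts)))
```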

For companies considering an AI-driven security platform, the setup process can seem daunting. Given your system’s single-tenant architecture and API-based data ingestion, what does the onboarding process look like from day one, and how quickly can a team start seeing meaningful results?

We’ve worked hard to make the onboarding process as painless as possible because we know that time-to-value is critical. From day one, the process is straightforward. Because we use a single-tenant architecture, every customer gets their own dedicated cloud account and their own Snowflake data warehouse—this is crucial for data security and integrity. Getting the data in is entirely API-based. For major cloud providers, it’s a matter of granting us read-only access roles. For SaaS apps like GitHub or OpenAI, it’s just about connecting to their APIs to pull logs and event data. The lift for the customer is incredibly light; we’ve seen companies onboard four or five data sources and get fully set up in just three to four hours. The best part is that if they have historical data—we ideally like a 90-day window—we ingest it immediately and begin building behavioral models on day one. This means they can start seeing meaningful, contextualized alerts and insights almost instantly.
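For a sense of what the historical backfill looks like on the customer side, here is a minimal sketch for one source: with a read-only role in place, it pulls a window of AWS CloudTrail management events so behavioral baselines can start building from day one. The 90-day window matches the figure above, and it is also the retention limit of CloudTrail's lookup API, so a longer history would come from an S3 or data-lake export instead; the credentials and role setup are assumed to exist already.

```python
# Minimal sketch: backfill recent CloudTrail events using read-only credentials.
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")   # uses the read-only role's credentials
end = datetime.now(timezone.utc)
start = end - timedelta(days=90)

paginator = cloudtrail.get_paginator("lookup_events")
count = 0
for page in paginator.paginate(StartTime=start, EndTime=end):
    count += len(page["Events"])          # in practice these would be loaded into the warehouse
print(f"Backfilled {count} CloudTrail events from the last 90 days")
```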

What is your forecast for AI’s role in security operations over the next five years?

Over the next five years, I believe AI will fundamentally reshape the security operations landscape from a reactive to a proactive and predictive posture. We’ll move beyond just using AI for triaging alerts. We will see AI agents taking on a more autonomous role in investigations, dynamically pulling evidence from disparate systems and reasoning about threats with minimal human oversight. The true game-changer will be the democratization of security expertise. Instead of needing a PhD in data science to hunt for threats, security analysts will be able to simply ask natural language questions and direct AI agents to perform complex, proactive threat hunts on their behalf. This will empower smaller teams to defend themselves with the sophistication of a massive enterprise, finally allowing them to get ahead of the attackers instead of constantly playing whack-a-mole.
