Fable Security via 360 MAGAZINE.

Your Human Risk Playbook for Secure Generative AI Use


By Nicole Jiang, CEO, Fable Security

Enterprises are adopting generative AI in a big way. People are using tools like ChatGPT, Gemini, and Claude to speed up coding, do research, polish marketing copy, summarize contracts, and brainstorm ideas. The productivity upside is real, but so are the risks: if you're not careful, you can expose customer data, source code, intellectual property, non-public financials, protected health information, and more. Unlike traditional cyber threats, these exposures don't come from an external attacker; they arise because everyday employees move too fast and often don't understand the consequences.

The only way to deal with this risk is to see it clearly. Gather signals from your workspace and security stack, normalize them, and look for patterns. Your security team may be able to surface some issues from individual tools, but they won't have an easy way to correlate those signals and identify people's risky behavior, and, more importantly, they won't be empowered to intervene: to make employees aware, suggest alternative ways to get their jobs done, and protect your sensitive data in the process.

Take unsanctioned generative AI tools as an example. An employee might paste content or upload a document into a public AI application to move faster on a project. If you can see that activity in context (employee, role, access, geo, tenure, and behavior history) and you can also see that the document has already been flagged by DLP as sensitive, you now have something actionable.
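
To make that concrete, here is a minimal sketch, in Python, of what that correlation decision could look like once the signals sit in one place. Every name in it (the lookup tables, fields, and IDs) is a made-up illustration, not a real product API:

```python
# Illustrative only: deciding whether an AI upload event is actionable by
# joining it with DLP labels and employee context. The dictionaries stand in
# for feeds a human risk platform would maintain from DLP and IAM.
from dataclasses import dataclass


@dataclass
class UploadEvent:
    employee_id: str
    destination: str   # e.g. "chat.openai.com"
    file_hash: str


dlp_labels = {"abc123": "Confidential - Customer Data"}    # from DLP
employee_directory = {                                     # from IAM/HR
    "e-042": {"role": "Account Executive", "tenure_months": 4, "geo": "US"},
}
sanctioned_ai_apps = {"copilot.example.com"}


def assess(event: UploadEvent) -> dict | None:
    """Return an actionable finding if sensitive data went to an unsanctioned AI app."""
    if event.destination in sanctioned_ai_apps:
        return None                       # sanctioned tool: nothing to do
    label = dlp_labels.get(event.file_hash)
    if label is None:
        return None                       # not flagged as sensitive: lower priority
    context = employee_directory.get(event.employee_id, {})
    return {
        "employee_id": event.employee_id,
        "destination": event.destination,
        "dlp_label": label,
        **context,                        # role, tenure, geo inform the intervention
    }


finding = assess(UploadEvent("e-042", "chat.openai.com", "abc123"))
if finding:
    print(finding)   # hand off to the intervention workflow
```

The point isn't the code; it's that the decision only becomes possible when DLP, network, and identity signals share a common key: the employee.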

Here are some of the signals we see in Fable:

  • IAM (Okta, Azure AD) → who’s adopting and provisioning access to AI
  • EDR (CrowdStrike, SentinelOne) → endpoint activity such as copy-paste
  • DLP (Microsoft Purview, Netskope) → sensitive data categorizations
  • SASE (Netskope, Zscaler) → sensitive data uploads to AI

To name a few.
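
Each of those feeds speaks its own dialect, so the first step is normalization: mapping raw events onto one shape keyed by employee so they can be grouped and compared. A rough sketch, using invented payload fields rather than the vendors' real export schemas, might look like this:

```python
# Illustrative only: normalizing events from different tools into one schema.
# The raw field names ("user", "app", "label", "ts") are placeholders, not the
# vendors' actual formats.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Signal:
    employee: str       # resolved identity, e.g. a work email from IAM
    source: str         # "okta", "crowdstrike", "purview", "netskope", ...
    kind: str           # "ai_app_provisioned", "clipboard_paste", "ai_upload", ...
    detail: str
    timestamp: datetime


def from_sase(raw: dict) -> Signal:
    return Signal(raw["user"], "netskope", "ai_upload",
                  raw["app"], datetime.fromisoformat(raw["ts"]))


def from_dlp(raw: dict) -> Signal:
    return Signal(raw["user"], "purview", "sensitive_label",
                  raw["label"], datetime.fromisoformat(raw["ts"]))


def by_employee(signals: list[Signal]) -> dict[str, list[Signal]]:
    """Once everything is a Signal, 'finding patterns' starts as a group-by."""
    grouped: dict[str, list[Signal]] = defaultdict(list)
    for s in signals:
        grouped[s.employee].append(s)
    return grouped
```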

Most teams responsible for managing human risk can't assemble this picture without chasing it down across separate tools they may not even have access to. What's missing is a clear, unified view of employee risk, both inherent and behavioral, and the ability to intervene quickly and intelligently. When you can see patterns across systems and act on them in real time, you reduce exposure without slowing the business down.
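
As a rough illustration, that unified view can start very simply: an inherent-risk score based on who the person is (role, level of access) plus a behavioral score based on what they have recently done, with an intervention triggered when the sum crosses a threshold. The weights and threshold below are invented for the example:

```python
# Illustrative only: combining inherent and behavioral risk per employee.
# Weights and the threshold are arbitrary examples, not recommendations.
ROLE_WEIGHTS = {"engineer": 3, "finance": 4, "sales": 2}
EVENT_WEIGHTS = {"ai_upload": 5, "clipboard_paste": 2, "sensitive_label": 3}


def inherent_risk(role: str, privileged_access: bool) -> int:
    return ROLE_WEIGHTS.get(role, 1) + (5 if privileged_access else 0)


def behavioral_risk(recent_event_kinds: list[str]) -> int:
    return sum(EVENT_WEIGHTS.get(kind, 1) for kind in recent_event_kinds)


def needs_intervention(role: str, privileged: bool, recent: list[str],
                       threshold: int = 12) -> bool:
    return inherent_risk(role, privileged) + behavioral_risk(recent) >= threshold


# A privileged engineer who just uploaded a DLP-flagged file to an AI app.
print(needs_intervention("engineer", True, ["sensitive_label", "ai_upload"]))  # True
```

In practice you would weigh far more context than this, but the shape of the decision (score, compare, act) stays the same.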

Once you've identified the most problematic data-sharing behaviors in your enterprise, you'll want to take action in the moment using an automated, AI-generated intervention. That may be a quick nudge in Slack or Teams, or a personalized, roughly 60-second video briefing that references the person, their precise behavior, your company's policy, the sanctioned applications you want to guide them to, specific calls to action, and whom to contact if they have questions.
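
The mechanics of the Slack nudge itself are straightforward; the value is in the context you put into it. Here's a minimal sketch using Slack's Web API via slack_sdk, with a placeholder token, user ID, and wording; in practice the message would be generated from the finding and your own policy:

```python
# Illustrative only: sending a real-time nudge over Slack. The token, user ID,
# and message text are placeholders.
import os

from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])


def nudge(finding: dict) -> None:
    message = (
        f"Hi! We noticed a file labeled '{finding['dlp_label']}' was shared with "
        f"{finding['destination']}, which isn't an approved AI tool. "
        "Per our AI acceptable-use policy, please use the sanctioned assistant instead. "
        "Questions? Reach out in #security-help."
    )
    # chat.postMessage accepts a user ID as the channel to send a direct message.
    client.chat_postMessage(channel=finding["slack_user_id"], text=message)


nudge({
    "dlp_label": "Confidential - Customer Data",
    "destination": "chat.openai.com",
    "slack_user_id": "U0123456789",
})
```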

With a modern human risk platform, you can suss out risky behavior and respond in a way that supports both security and productivity. Instead of simply blocking an action or assigning long, generic training that feels punitive, you can notify employees as soon as you detect risk, explain why you're reaching out, and point them to the right alternative action or remediation. The goal isn't to police people's behavior, but rather to help them make safe decisions while also protecting your systems and data.