AI and SaaS Auth

tags : #area/watch #AI #SaaS #MFA #SSPM
source : (sponsored, caution!)
date : 2023-06-27

A February 2023 generative AI survey of 1,000 executives revealed that 49% of respondents were already using ChatGPT and 30% planned to adopt the ubiquitous generative AI tool soon. Ninety-nine percent of those using ChatGPT claimed some form of cost savings, and 25% attested to reducing expenses by $75,000 or more. Since the researchers conducted this survey a mere three months after ChatGPT's general availability, today's ChatGPT and AI tool usage is undoubtedly higher.

1 — Threat Actors Can Exploit Generative AI to Dupe SaaS Authentication Protocols

AI's ability to impersonate humans exceedingly well renders weak SaaS authentication protocols especially vulnerable to hacking. According to Techopedia, threat actors can misuse generative AI for password guessing, CAPTCHA cracking, and building more potent malware. These methods may sound limited in their attack range, but the January 2023 CircleCI security breach shows how little it takes: it was attributed to a single engineer's laptop becoming infected with malware.

Likewise, three noted technology academics recently posed a plausible hypothetical for generative AI running a phishing attack:

"A hacker uses ChatGPT to generate a personalized spear-phishing message based on your company's marketing materials and phishing messages that have been successful in the past. It succeeds in fooling people who have been well trained in email awareness, because it doesn't look like the messages they've been trained to detect."

Beyond implementing multi-factor authentication (MFA) and physical security keys, security and risk teams need visibility and continuous monitoring for the entire SaaS perimeter, along with automated alerts for suspicious login activity.
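
To make that last point concrete, here is a minimal sketch of what automated alerting on suspicious login activity can look like, assuming login events are already being pulled from each SaaS app's audit logs. The user, countries, field names, and thresholds are hypothetical placeholders, not any vendor's schema.

```python
from datetime import datetime, timedelta

# Countries each user has previously logged in from (seeded from history).
KNOWN_LOCATIONS = {"alice": {"FR", "DE"}}
# Rolling window of recent login timestamps per user.
RECENT_LOGINS = {}

def is_suspicious(event):
    """Flag logins from a never-seen country, or bursts of rapid logins."""
    user, country, ts = event["user"], event["country"], event["timestamp"]

    # New-country check: this user has never logged in from here before.
    if country not in KNOWN_LOCATIONS.get(user, set()):
        return True

    # Velocity check: more than 5 logins inside a 10-minute window.
    window = [t for t in RECENT_LOGINS.get(user, []) if ts - t < timedelta(minutes=10)]
    window.append(ts)
    RECENT_LOGINS[user] = window
    return len(window) > 5

event = {"user": "alice", "country": "BR", "timestamp": datetime(2023, 6, 27, 3, 12)}
if is_suspicious(event):
    print(f"ALERT: review login for {event['user']} from {event['country']}")
```

Real deployments would replace the in-memory dictionaries with the SSPM or SIEM pipeline already in place; the point is that the detection logic itself is simple once the SaaS perimeter is actually visible.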

2 — Employees Connect Unsanctioned AI Tools to SaaS Platforms Without Considering the Risks

AI tools, like most SaaS apps, use OAuth access tokens for ongoing connections to SaaS platforms. Consider an AI scheduling assistant connected to a user's workspace: once the authorization is complete, its token maintains consistent, API-based communication with Gmail, Google Drive, and Slack accounts, all without requiring the user to log in or authenticate at any regular interval. The threat actor who can capitalize on this OAuth token has stumbled on the SaaS equivalent of spare keys "hidden" under the doormat.
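
A minimal sketch of why that matters, using Slack's real chat.postMessage endpoint with a placeholder token value: whoever holds the token can drive the API directly, and no login page or MFA prompt ever enters the picture.

```python
import requests

# Hypothetical token value captured by an attacker; the endpoint is real.
STOLEN_TOKEN = "xoxb-placeholder"

resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
    json={"channel": "#general", "text": "sent with nothing but the token"},
)

# With a valid token Slack answers ok=true and the message posts; at no
# point is a password, login page, or MFA challenge involved.
print(resp.json())
```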

Security and risk teams often lack the SaaS security tooling to monitor or control this attack surface. Legacy tools like cloud access security brokers (CASBs) and secure web gateways (SWGs) won't detect or alert on AI-to-SaaS connectivity.
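
One way to start recovering that visibility, sketched below for Google Workspace: the Admin SDK Directory API's tokens.list call enumerates the OAuth grants each user has approved. Credential setup (a service account with delegated admin access) is elided, and `creds` and `user_emails` are assumptions for the sketch.

```python
from googleapiclient.discovery import build

def list_oauth_grants(creds, user_emails):
    """Print every third-party OAuth grant per user, with its scopes."""
    directory = build("admin", "directory_v1", credentials=creds)
    for user in user_emails:
        tokens = directory.tokens().list(userKey=user).execute()
        for token in tokens.get("items", []):
            # displayText is the app name the user saw at consent time;
            # scopes shows exactly what the grant can reach.
            print(user, "->", token.get("displayText"), token.get("scopes"))
```

Even a periodic dump like this surfaces unsanctioned AI assistants holding broad Gmail or Drive scopes, which is precisely the blind spot CASBs and SWGs leave open.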

3 — Sensitive Information Shared with Generative AI Tools Is Susceptible to Leaks

The data employees submit to generative AI tools — often with the goal of expediting work and improving its quality — can end up in the hands of the AI provider itself, an organization's competitors, or the general public.

A March 2023 bug inadvertently enabled ChatGPT users to see other users' chat titles and histories in the website's sidebar. Concern arose not just over leaks of sensitive organizational information but also over user identities being revealed and compromised. OpenAI, the developer of ChatGPT, responded by announcing the ability for users to turn off chat history. In theory, this option stops ChatGPT from sending data back to OpenAI for product improvement, but it leaves employees responsible for managing their own data retention settings. Even with chat history turned off, OpenAI retains conversations for 30 days and exercises the right to review them "for abuse" prior to their expiration.

This bug and the data retention fine print haven't gone unnoticed. In May 2023, Apple restricted employees from using ChatGPT over concerns of confidential data leaks. While the tech giant took this stance as it builds its own generative AI tools, it joined enterprises such as Amazon, Verizon, and JPMorgan Chase in the ban. Apple also directed its developers to avoid GitHub Copilot, owned by top competitor Microsoft, for automating code.

Common generative AI use cases are replete with data leak risks. Consider a product manager who prompts ChatGPT to make the message in a product roadmap document more compelling. That product roadmap almost certainly contains product information and plans never intended for public consumption, let alone a competitor's prying eyes. A similar ChatGPT bug — which an organization's IT team has no ability to escalate or remediate — could result in serious data exposure.

Stand-alone generative AI does not create SaaS security risk. But what's isolated today is connected tomorrow. Currently, ChatGPT's Slack integration demands more work than the average Slack connection, but it's not an exceedingly high bar for a savvy, motivated employee. The integration uses OAuth tokens exactly like the AI scheduling assistant example described above, exposing an organization to the same risks.