From automated emails to AI-powered writing assistants, small and mid-sized businesses (SMBs) are adopting artificial intelligence at lightning speed. But while tools like ChatGPT promise productivity gains, they’re also opening the door to new and often overlooked cyber threats. During our most recent masterclass on AI Cyber Risks, hosted with Check Point, cybersecurity expert Keely Wilkins broke down how different types of AI tools, especially in the hands of resource-constrained SMBs, can quietly erode security and compliance frameworks.
Catch up with the full recording now: Risks and Rewards: Integrating AI into Cybersecurity Practices
Two Deployment Models, One Shared AI Cyber Risk
Understanding where and how AI is deployed is critical to understanding its risk profile. Here are the two most common types that SMB policyholders are likely using:
1. SaaS-Based AI Tools
These are plug-and-play platforms accessed through a browser, requiring no installation or IT support. They’re fast, cheap (often free), and frictionless, which is exactly why they’ve taken over SMB environments. But their convenience is also their weakness.
Because employees can sign up without oversight, SaaS tools contribute to shadow IT (unapproved technologies operating outside the organization’s control). This opens the door to data leakage, third-party exposure, and compliance violations.
Common examples in use include:
- ChatGPT (generative AI)
- Grammarly (writing assistant)
- Jasper (content creation)
- Otter.ai (meeting transcription)
2. On-Premise or Proprietary AI Tools
These are developed or deployed in-house, often by larger enterprises such as banks or healthcare providers. They allow for greater control, privacy, and customization. However, they’re not foolproof: these deployment models require clear documentation, model explainability, and internal risk assessments. Without that governance, they can create just as much exposure as their SaaS counterparts.
While less common among SMBs due to their complexity and cost, some midsize businesses in regulated sectors may opt for hybrid solutions or licensed enterprise AI tools.
What Cyber Insurance Professionals Need To Consider
AI tools are no longer “emerging tech”; they’re already part of your clients’ day-to-day operations. But many organizations, especially SMBs, lack formal policies around AI usage. That makes it difficult to know:
- What data is being shared (and where it’s going)
- Whether AI outputs are accurate, secure, or legally compliant
- Who’s responsible for oversight, updates, and usage policies
As AI adoption accelerates, so does AI cyber risk, especially when these tools are deployed without visibility, governance, or basic security hygiene.
Turning AI Cyber Risk into Business Opportunity
For cyber underwriters and brokers, understanding how AI tools are used helps shift conversations from technical jargon to practical risk management. Policyholders should be asked:
- Do you have an AI usage policy in place?
- Who approves and monitors AI tool adoption?
- Are SaaS tools being used without IT approval?
- Is sensitive data being entered into third-party platforms?
Being able to ask the right questions, and interpret the answers, gives cyber insurance professionals a competitive edge in a softening market.
Watch the full recording for a complete cheatsheet and additional helpful tips from Check Point.