AI risk is not one thing: “AI risk is a huge basket of different things that can go wrong with different types of AI systems, some of them accidental, some of them deliberate and malicious.” In her session at the CIA Cyber Insurance Bootcamp 2025, Josephine Wolff, Professor of Cybersecurity Policy at The Fletcher School at Tufts University, drew on her cybersecurity roots to explore how the lessons of cyber insurance can inform the emerging AI insurance market.
AI Risk in Practice
To illustrate the very real issues already emerging, Wolff pointed to bias in AI-driven hiring systems. As companies increasingly rely on automated resume screening, patterns in the underlying data can quietly perpetuate discrimination. “These systems can make decisions that seem objective, but often reflect the biases of their training data,” she noted, citing Mobley v. Workday, Inc., a major U.S. court case from 2025 that showed the legal system is beginning to treat these risks as more than hypothetical.
That single example encapsulates the core challenge of AI insurance: identifying, quantifying, and allocating responsibility for harms that are both technologically and ethically complex.
From Cyber Insurance to AI Insurance
While today’s AI insurance market is still finding its footing, Wolff emphasized that this stage mirrors the early, cautious evolution of cyber insurance. The industry’s next task, she suggested, is to define what “AI risk” really means before it can learn how to price it.
Josephine Wolff’s session took place at the 2025 Cyber Insurance Bootcamp, which brought together top industry minds for an intensive, no-nonsense learning experience focused on the trends that will shape cyber risk in 2026.