
What happens when software being deployed at breakneck speed becomes too risky for insurers to cover? According to a Financial Times report, we're about to find out.
Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from corporate policies. One underwriter told the FT that AI model outputs are "too much of a black box."
The story highlights why the industry's concerns are well founded. Google's AI Overviews falsely claimed a solar company was facing legal trouble, prompting a $110 million lawsuit in March. Air Canada was ordered last year to honor a discount its chatbot invented. And fraudsters used a digitally cloned executive on a convincing video call to steal $25 million from Arup, the London-based design and engineering firm.
What worries insurers most isn't a single large payout but systemic risk: thousands of claims arriving at once because a widely used AI model got something wrong. As an Aon executive explained, insurers can absorb a $400 million loss to one company; what they can't absorb is an agentic AI failure that triggers 10,000 losses simultaneously.
