![](https://sp-ao.shortpixel.ai/client/to_auto,q_glossy,ret_img,w_800,h_450/https://readof.com/wp-content/uploads/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic.jpg)
AI poisoning could turn open models into destructive "sleeper agents," says Anthropic
(credit: Benj Edwards | Getty Images) Imagine downloading an open source AI language model, and all seems well at first, but it later turns malicious. On Friday, Anthropic, the maker of ChatGPT competitor Claude, released a research paper