Your Conversations or Your Consent: Anthropic’s New Policy for AI Training
Claude users now have until September 28 to decide whether their conversations can be used to train Anthropic’s AI models. Previously, consumer chat data was deleted within 30 days, or retained for up to two years if flagged for policy violations. Under the updated policy, data from users who do not opt out can be retained for up to five years. Enterprise customers, including Claude for Work and Claude Gov, are exempt.
Anthropic frames the change as a benefit to users, saying it will improve model safety and strengthen skills such as coding and reasoning. Yet the core objective is clear: collecting large-scale, high-quality data to keep Claude competitive with rivals such as OpenAI and Google. Access to this wealth of real-world conversational data is essential for building more accurate and robust AI models.
The change reflects a wider industry trend as companies face growing scrutiny of their data practices. OpenAI, for example, is currently under a court order to retain all consumer ChatGPT data indefinitely. Many users are likely to click through consent prompts without understanding them, underscoring how difficult meaningful consent has become in the AI era.
Anthropic’s consent interface pairs a large “Accept” button with a smaller data-sharing toggle that is switched on by default, a design experts warn encourages inadvertent consent. Balancing AI innovation with privacy obligations remains a critical challenge for Anthropic and other AI developers.
