The Default Trap: Why Anthropic's Data Policy Change Matters
- #User Consent
- #Data Policy
- #AI Privacy
- Anthropic changed its data policy: consumer conversations with Claude are now used to train its models by default unless users opt out.
- Previously, explicit consent was required for training data use; now, users must navigate settings to opt out.
- Data from users who don't opt out can be retained for up to five years.
- Business and enterprise customers are exempt from this change, highlighting differing value exchanges for consumer vs. business users.
- Anthropic justifies the change as necessary for improving AI safety and capabilities, shifting from voluntary to presumed consent.
- The article emphasizes the importance of not relying on defaults in AI services, as terms can change without notice.
- Users are encouraged to actively manage their privacy settings and stay informed about updates to terms of service.
- The opt-out option is not prominently displayed, potentially leading many users to unknowingly contribute their data.
- The piece advises treating AI tools like rental cars: inspect the terms and conditions regularly to understand the current agreement (a minimal monitoring sketch follows this list).
- The author opted out not due to sensitivity of data but to maintain conscious control over personal information sharing.
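The "inspect regularly" advice can be partially automated. Below is a minimal Python sketch, not from the article, that hashes a policy page on each run and flags when its contents change. The URL is a placeholder assumption (point it at whatever terms page you actually track), and dynamic page elements can cause false positives, so treat a flag as a prompt to reread the page, not proof of a policy change.

```python
# Minimal sketch: detect changes to a terms-of-service page by hashing
# its contents on each run and comparing against the last recorded hash.
import hashlib
import pathlib
import urllib.request

# Assumption: placeholder URL; replace with the policy page you monitor.
TERMS_URL = "https://www.anthropic.com/legal/consumer-terms"
STATE_FILE = pathlib.Path("terms_hash.txt")  # hash from the previous run


def fetch_terms(url: str) -> bytes:
    """Download the raw page body."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()


def check_for_changes() -> bool:
    """Return True if the page differs from the last recorded version."""
    current = hashlib.sha256(fetch_terms(TERMS_URL)).hexdigest()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else None
    STATE_FILE.write_text(current)

    if previous is None:
        print("First run: baseline hash recorded.")
        return False
    if current != previous:
        print("Terms page changed since last check; review it manually.")
        return True
    print("No change detected.")
    return False


if __name__ == "__main__":
    check_for_changes()
```

Run it from cron or a scheduled task (e.g., weekly). Hashing raw HTML is the simplest design but also the noisiest; extracting visible text first (for instance with an HTML parser) would reduce false positives at the cost of an extra dependency.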