
Anthropic has announced significant changes to Claude's privacy policy and to how it handles consumer user data. The company behind the widely used AI chatbot Claude said on Thursday that it will begin training its AI models on user data unless people opt out by September 28. Anthropic also revealed that, for consumers who do not opt out of model training, it is extending its data retention period for conversations with Claude to five years.
Anthropic previously did not train its AI systems on user conversations; unless legally required to retain them, the company deleted all user prompts and outputs within 30 days. In cases of policy violations, however, it retained users' inputs and outputs for up to two years.
The changes will affect all users on Claude's consumer tiers, including subscribers to Claude Free, Pro, and Max, and they extend to Claude Code when used from a linked account. The updated privacy policy will not, however, apply to any of the company's commercial offerings, including Claude for Work, Claude Gov, Claude for Education, or the API, even when the API is accessed through third-party services such as Google Cloud Vertex AI and Amazon Bedrock.
Anthropic claims that training on user data will help improve model safety, making its systems more accurate at detecting harmful content and less likely to misclassify benign conversations. The company added, "You'll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users." Anthropic will soon begin presenting existing users with a pop-up window titled "Updates to Consumer Terms and Policy." The accompanying prompt includes the option "You can now improve Claude," which is turned on by default, so unsuspecting users may tap Accept and have their conversations used to train the company's AI models.