SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

Le Chat tops AI privacy rankings as Meta & Google fall behind


A new analysis by the Incogni Research Team has found that Mistral AI's Le Chat platform offers the strongest privacy-friendly policies among widely used artificial intelligence language models.

The study assessed major generative AI agents, comparing their practices across transparency, data collection, and model training criteria. Mistral AI's Le Chat was rated most privacy-friendly, with OpenAI's ChatGPT and xAI's Grok following closely behind. Language models from Meta, Microsoft, and Google were deemed the least accommodating to user privacy.

Privacy assessment findings

Incogni analysed top AI platforms using 11 subcategories in three key areas: how user data is utilised in model training, the transparency of privacy practices, and the extent of data collection and third-party sharing. The study found significant variations in how leading models approach privacy and in the measures offered to users.

According to Incogni, Le Chat achieved its high ranking through its limited data collection and its restraint in using personal data for model training. Although its transparency scores were moderate, Le Chat provided multiple options for users to restrict data use and maintained a light data collection footprint.

ChatGPT took second place, with Incogni researchers noting its clear privacy policy and openness about whether user prompts would be used for model training. Despite concerns over the scope of data collected during interactions and the origins of the training data, Incogni found that ChatGPT provides sufficient clarity for users to make informed decisions about engagement.

xAI's Grok platform was placed third, primarily due to persistent issues with transparency and the volume of data collected. Anthropic's Claude followed in the ranking but was marked down for more extensive data collection and unclear practices regarding the use of user data.

Lower-ranked platforms

Meta's Meta.ai, Google's Gemini, and Microsoft's Copilot were among the lowest ranked by Incogni, alongside DeepSeek and Pi.ai. Researchers cited a lack of product-specific privacy policies, with these platforms instead applying a general privacy policy across their wider product suites. Incogni warned that such arrangements could increase the risk of user data being transferred freely between products, exposing individuals to additional privacy concerns.

The report also highlighted a common issue among these platforms: the absence of clear and accessible processes that allow users to opt out of having their prompts or personal information used to further train the model. According to Incogni, this could be of particular concern for users in Europe, where robust privacy protections under the GDPR are expected by default.

Industry-wide data collection

"All models we studied collect data from publicly accessible sources, with major platforms like Microsoft even possibly drawing from data brokers," said Darius Belejvas, the Head of Incogni. "Also, models like Gemini and Meta.ai most likely are not giving users the ability to opt-out from training data collection with their prompts, which might raise concerns, especially among European users, whose data will not be protected even under the GDPR."

The study employed a set of criteria designed to provide a clear comparison across AI platforms. The full ranking system used 11 individual factors, split across three categories: user data and model training, transparency, and data collection practices.
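As a rough illustration of how such a rubric aggregates into a ranking, the sketch below averages category scores per platform. All scores and the equal category weighting are hypothetical placeholders for illustration only; Incogni has not published its weights or raw figures.

```python
# Illustrative sketch only: a weighted-rubric ranking in the spirit of the
# study's 11-factor, three-category design. Every number below is a
# hypothetical placeholder, not Incogni's actual data or methodology.

# Hypothetical per-category scores (0 = worst, 1 = best for privacy).
SCORES = {
    "Le Chat": {"training": 0.9, "transparency": 0.6, "collection": 0.9},
    "ChatGPT": {"training": 0.8, "transparency": 0.9, "collection": 0.6},
    "Grok":    {"training": 0.8, "transparency": 0.5, "collection": 0.6},
    "Gemini":  {"training": 0.3, "transparency": 0.4, "collection": 0.4},
}

# Equal weighting is assumed here; the report does not publish its weights.
WEIGHTS = {"training": 1 / 3, "transparency": 1 / 3, "collection": 1 / 3}

def overall(scores: dict) -> float:
    """Weighted average of a platform's category scores."""
    return sum(WEIGHTS[cat] * val for cat, val in scores.items())

# Rank platforms from most to least privacy-friendly under this toy rubric.
ranking = sorted(SCORES, key=lambda p: overall(SCORES[p]), reverse=True)
for platform in ranking:
    print(f"{platform}: {overall(SCORES[platform]):.2f}")
```

With these placeholder scores the toy rubric happens to reproduce the article's ordering (Le Chat first, then ChatGPT, then Grok, with Gemini near the bottom), but the real study scored 11 subcategories rather than three category totals.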

One shared observation across all tested models was the collection of data from publicly accessible sources, which for some platforms can include personal information. Incogni noted that Microsoft, among others, could potentially source data from commercial brokers in addition to public databases.

Transparency and user choice

Among the platforms reviewed, ChatGPT was highlighted for its clarity regarding user data use. The platform makes it apparent whether user interactions could be leveraged in training, an aspect that researchers noted as critical for user trust.

The report showed varying levels of effort by AI providers in enabling users to control their data and to make privacy-conscious decisions. Those that failed to establish clear communication channels or allow opt-outs were ranked lower for privacy friendliness.

Incogni's researchers believe that public awareness of how AI models handle personal data remains low but that greater transparency and more robust privacy safeguards could become increasingly important differentiators as adoption grows.
