Security - August 18, 2025

Texas AG Investigates Meta and Character.AI for Deceptive Mental Health AI Marketing, Raising Concerns Over Data Privacy and Child Exploitation

The Texas Attorney General, Ken Paxton, has initiated an investigation into Meta AI Studio and Character.AI over allegations of deceptive trade practices and misrepresentation as mental health tools. According to a press release, the inquiry stems from concerns that these AI platforms may mislead vulnerable users, particularly children, into believing they are receiving legitimate psychological care, when in reality, their responses are often generic, predicated on harvested personal data, and disguised as therapeutic advice.

The investigation comes shortly after Senator Josh Hawley announced a separate probe into Meta, prompted by a report suggesting its AI chatbots were interacting inappropriately with children, including instances of flirting. The Texas Attorney General’s office alleges that Meta and Character.AI create AI personas that present themselves as professional therapeutic tools despite lacking proper medical credentials or oversight.

Notably, one popular user-created bot on Character.AI, known as Psychologist, is in high demand among the startup’s young user base. And while Meta does not offer therapy bots for minors, children can still use the Meta AI chatbot or third-party personas designed for therapeutic purposes.

Meta maintains that it clearly labels its AIs and includes disclaimers stating that responses are generated by AI, not people. However, it is unclear whether such disclaimers are effectively understood or heeded by children. We have reached out to Meta for information regarding additional safeguards implemented to protect minors using its chatbots.

Character.AI includes prominent disclaimers in every chat, reminding users that a “Character” is not a real person and that everything it says should be treated as fiction. The startup adds further disclaimers when users create Characters with the words “psychologist,” “therapist,” or “doctor” in their names, warning users not to rely on them for professional advice.

In his statement, Paxton also raised concerns about privacy violations, data abuse, and false advertising: although the chatbots assert confidentiality, their terms of service reveal that user interactions are logged, tracked, and used for targeted advertising and algorithmic development.

Meta’s privacy policy indicates that it collects prompts, feedback, and interactions with AI chatbots and across its services to enhance AIs and related technology. While the policy does not explicitly mention advertising, it does state that information can be shared with third parties like search engines for “more personalized outputs,” which effectively translates to targeted advertising given Meta’s ad-based business model.

Character.AI’s privacy policy similarly logs a wide range of user data, including identifiers, demographics, location information, browsing behavior, and app usage. It tracks users across ads on popular platforms like TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, and can link that data to a user’s account. The information is used to train AI, personalize the service, and deliver targeted advertising, including by sharing data with advertisers and analytics providers.

A Character.AI spokesperson confirmed that the startup is exploring targeted advertising on its platform but has not yet used chat content for that purpose. The spokesperson also confirmed that the same privacy policy applies to all users, including minors. We have reached out to Meta for information regarding its data collection from children and will update this story upon receiving a response.

Both Meta and Character.AI say their services are not intended for children under 13. However, Meta has faced criticism for failing to police accounts created by kids under 13, while Character.AI hosts child-friendly characters clearly designed to attract younger users. The startup’s CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform’s chatbots under his supervision.

These data collection, targeted advertising, and algorithmic exploitation practices are precisely what legislation like the Kids Online Safety Act (KOSA) is meant to guard against. KOSA was initially proposed with bipartisan support but stalled last year amid strong opposition from tech industry lobbyists. Meta, in particular, mounted a formidable lobbying effort, warning lawmakers that the bill’s broad mandates could harm its business model.

KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT). The Texas Attorney General has issued civil investigative demands, or legal orders requiring companies to produce documents, data, or testimony during a government probe, to determine if these companies have violated Texas consumer protection laws.