News

Meta and Character.ai probed over touting AI mental health advice to children

By News Room
Last updated: 2025/08/18 at 5:58 PM

Meta and artificial intelligence start-up Character.ai are being investigated by Texas attorney-general Ken Paxton over whether the companies misleadingly market their AI chatbots as therapists and mental health support tools.

The attorney-general’s office said it was opening the investigation into Meta’s AI Studio, as well as the chatbot maker Character.ai, for potential “deceptive trade practices”, arguing that their chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight”, according to a statement on Monday.

“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” Paxton said.

The investigation comes as companies offering AI for consumers are increasingly facing scrutiny over whether they are doing enough to protect users — and particularly minors — from dangers such as exposure to toxic or graphic content, potential addiction to chatbot interactions and privacy breaches.

The Texas investigation follows a Senate investigation into Meta, launched on Friday after leaked internal documents showed that the company’s policies permitted its chatbot to have “sensual” and “romantic” chats with children.

Senator Josh Hawley, chair of the Judiciary Subcommittee on Crime and Counterterrorism, wrote to Meta chief executive Mark Zuckerberg that the investigation would look into whether the company’s generative-AI products enable exploitation or other criminal harms to children.

“Is there anything — ANYTHING — Big Tech won’t do for a quick buck?” Hawley wrote on X. 

Meta said its policies prohibit content that sexualises children, and that the leaked internal documents, reported by Reuters, “were and are erroneous and inconsistent with our policies, and have been removed”. 

Zuckerberg has been ploughing billions of dollars into efforts to build “personal superintelligence” and make Meta the “AI leader”.

This has included developing Meta’s own large language models, called Llama, as well as its own Meta AI chatbot which has been integrated into its social media apps. 

Zuckerberg has publicly touted the potential for Meta’s chatbot to act in a therapeutic role. “For people who don’t have a person who’s a therapist, I think everyone will have an AI,” he told media analyst Ben Thompson on a podcast in May.

Character.ai, meanwhile, builds AI-powered chatbots with different personas — and allows users to create their own. The platform hosts dozens of user-generated therapist-style bots; one, called “Psychologist”, has been used more than 200mn times.

Character is also the subject of multiple lawsuits from families who allege their children have suffered real-world harms from using the platform.

The Texas attorney-general said the chatbots from Meta and Character can impersonate licensed mental health professionals, fabricate qualifications and claim to protect confidentiality, while their terms of service show that interactions are in fact logged and “exploited for targeted advertising and algorithmic development”.

Paxton has issued a Civil Investigative Demand, which requires the companies to turn over information to help determine whether they have violated Texas consumer protection laws.

Meta said: “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people. These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

Character said it uses prominent disclaimers to remind users that an AI persona is not real.

“The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear,” the company said. “When users create Characters with the words ‘psychologist’, ‘therapist’, ‘doctor’, or other similar terms in their names, we add language making it clear that users should not rely on these Characters for any type of professional advice.”

