How AI groups are infusing their chatbots with personality

By News Room
Last updated: 2024/10/20 at 11:44 AM

Leading artificial intelligence companies racing to develop cutting-edge technology are tackling a very human challenge: how to give AI models a personality.

OpenAI, Google, and Anthropic have developed teams focused on improving “model behaviour”, an emerging field that shapes AI systems’ responses and characteristics, impacting how their chatbots come across to users.  

Their differing approaches to model behaviour could prove crucial in determining which group dominates the burgeoning AI market, as they attempt to make their models more responsive and useful to millions of people and businesses around the world.

The groups are shaping their models to have characteristics such as being “kind” and “fun”, while also enforcing rules to prevent harm and ensure nuanced interactions.

For example, Google wants its Gemini model to “respond with a range of views” only when asked for an opinion, while OpenAI’s ChatGPT has been instructed to “assume an objective point of view”.
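In public-facing APIs, behaviour rules of this kind typically surface as system-level instructions. The following is a minimal sketch using the OpenAI Python SDK; the rule wording is a hypothetical illustration, not either company's actual internal specification:

```python
# Illustrative sketch only: the behaviour rule below is hypothetical,
# not OpenAI's or Google's internal model spec.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BEHAVIOUR_RULE = (
    "Assume an objective point of view. When asked for an opinion on a "
    "contested topic, present a range of views rather than arguing for one."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": BEHAVIOUR_RULE},
        {"role": "user", "content": "Is remote work better than office work?"},
    ],
)
print(response.choices[0].message.content)
```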

“It is a slippery slope to let a model try to actively change a user’s mind,” Joanne Jang, head of product model behaviour at OpenAI, told the Financial Times.

“How we define objective is just a really hard problem on its own . . . The model should not have opinions but it is an ongoing science as to how it manifests,” she added.

The approach contrasts with Anthropic, which says that models, like human beings, will struggle to be fully objective.

“I would rather be very clear that these models aren’t neutral arbiters,” said Amanda Askell, who leads character training at Anthropic. Instead, Claude has been designed to be honest about its beliefs while being open to alternative views, she said.

Anthropic has conducted specific “character training” since its Claude 3 model was released in March. This process takes place after the initial training of the AI model, alongside steps such as human labelling, and is the part that “turns it from a predictive text model into an AI assistant,” the company said.

At Anthropic, character training involves giving written rules and instructions to the model. The model then conducts role-play conversations with itself and ranks its own responses by how well they match those rules.

One example of Claude’s training is: “I like to try to see things from many different perspectives and to analyse things from multiple angles, but I’m not afraid to express disagreement with views that I think are unethical, extreme, or factually mistaken.”

The outcome of the initial training is not a “coherent, rich character: it is the average of what people find useful or like,” said Askell. After that, decisions on how to fine-tune Claude’s personality in the character-training process are “fairly editorial” and “philosophical”, she added.
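The loop Askell describes, in which the model generates its own role-play replies and ranks them against written traits, can be sketched in outline. This is a conceptual illustration of the published description, with a hypothetical Model interface; it is not Anthropic's implementation:

```python
# Conceptual sketch of the character-training loop described above.
# The Model interface and scoring are hypothetical stand-ins.
from typing import Protocol

CHARACTER_TRAITS = [
    "I like to try to see things from many different perspectives and to "
    "analyse things from multiple angles, but I'm not afraid to express "
    "disagreement with views that I think are unethical, extreme, or "
    "factually mistaken.",
]

class Model(Protocol):
    # Any model that can sample a reply and score how well a reply
    # fits a written trait.
    def generate(self, prompt: str) -> str: ...
    def score(self, trait: str, reply: str) -> float: ...

def character_training_pair(model: Model, prompt: str, n_samples: int = 4) -> dict:
    # The model "role-plays" by sampling several candidate replies
    # to the same prompt.
    candidates = [model.generate(prompt) for _ in range(n_samples)]

    # It then ranks its own replies by how well they match the traits.
    def trait_fit(reply: str) -> float:
        return sum(model.score(trait, reply) for trait in CHARACTER_TRAITS)

    ranked = sorted(candidates, key=trait_fit, reverse=True)
    # The best and worst replies form one preference pair that a later
    # fine-tuning step can learn from.
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}
```

Ranking its own outputs lets the model produce preference data without a human labelling every pair, consistent with the aim of reducing human training effort mentioned below.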

OpenAI’s Jang said ChatGPT’s personality has also evolved over time.

“I first got into model behaviour because I found ChatGPT’s personality very annoying,” she said. “It used to refuse commands, be extremely touchy, overhedging or preachy [so] we tried to remove the annoying parts and teach some cheery aspects like it should be nice, polite, helpful and friendly, but then we realised that once we tried to train it that way, the model was maybe overly friendly.”

Jang said creating this balance of behaviours remained an “ongoing science and art”, noting that in an ideal world, the model should behave exactly as the user would want it to.

Advances in AI systems’ reasoning and memory capabilities could help determine additional characteristics.

For example, if asked about shoplifting, an AI model could better determine whether the user wanted tips on how to steal or to prevent the crime. This understanding would help AI companies ensure their models offer safe and responsible answers without the need for as much human training.

AI groups are also developing customisable agents that can store user information and create personalised responses. One question posed by Jang: if a user told ChatGPT they were a Christian and then, days later, asked for inspirational quotes, would the model provide Bible passages?
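A toy sketch makes the trade-off concrete: once remembered facts are injected into later prompts, they steer responses whether or not the user intended them to. Every name below is hypothetical, for illustration only:

```python
# Hypothetical in-memory store for user facts; illustrative only.
user_memory: dict[str, list[str]] = {}

def remember(user_id: str, fact: str) -> None:
    user_memory.setdefault(user_id, []).append(fact)

def build_prompt(user_id: str, request: str) -> str:
    facts = user_memory.get(user_id, [])
    context = "\n".join(f"- {fact}" for fact in facts)
    # Whether the model *should* act on such facts (e.g. offering Bible
    # passages to a user who once mentioned being Christian) is the
    # design question Jang raises.
    return f"Known about this user:\n{context}\n\nRequest: {request}"

remember("u1", "The user mentioned they are a Christian.")
print(build_prompt("u1", "Share some inspirational quotes."))
```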

While Claude does not remember user interactions, the company has considered how the model might intervene if a person appears to be at risk, for example by challenging a user who tells the chatbot they are not socialising with other people because they have become too attached to Claude.

“A good model does the balance of respecting human autonomy and decision making, not doing anything terribly harmful, but also thinking through what is actually good for people and not merely the immediate words of what they say that they want,” said Askell.

She added: “That delicate balancing act that all humans have to do is the thing I want models to do.”
