
Keep Humans At The Center Of AI Decision Making


Beena Ammanath – Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of “Trustworthy AI” and “Zero Latency Leadership”


In this era of humans working with machines, being an effective leader with AI takes a range of skills and activities. Throughout this series, I’m providing an incisive roadmap for leadership in the age of AI, and an important part of leading effectively today is making sure your people are at the center of decision-making.

In popular discussions on artificial intelligence, there can be a sense that the machine stands alone, distinct from human intelligence and capable of functioning independently, indefinitely. It has led to some consternation around the mass elimination of jobs and the unfounded fear that the future of business is in replacing humans with machines. This is wrongheaded, and in fact, holding this assumption may actually limit potential value and trust in AI applications.

The reality is that behind every AI model and use case is a human workforce. Humans do the hard, often-unsung work of creating and assembling the data and enabling technologies, using the model to drive business outcomes, and establishing governance and risk mitigation to support compliance. Put another way, without humans, there can be no AI.

Yet, while the human element is a key to unlocking valuable, trustworthy AI, it is not always given the attention and investment it is due. The imperative today is to orient AI programs around humans working with AI, not simply alongside it, because that orientation has a direct impact on AI ethics and business value.

Two areas of AI development and use illustrate the point: how training data is curated, and why AI outputs must be validated.

The Risks In Data Annotation

AI models are largely trained on annotated data. Annotating text, images, sentiments and other data at scale is a time-consuming, highly manual effort. In this process, human workers follow instructions from engineers to label data in a particular way, according to whatever a given model needs. Matters of trust and ethics grow out of this. Are the human annotators injecting bias into the training set by virtue of their personal biases? For example, if an annotator is color blind and asked to annotate red apples in a set of images, they might fail to label the images correctly, leading to a model that is less capable of spotting red apples in the real world.
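One practical safeguard, not prescribed in this article but common in annotation workflows, is to route a sample of items to more than one annotator and inspect where they disagree; a skewed disagreement pattern can surface exactly the kind of annotator bias described above. A minimal sketch in Python, using made-up labels for the red-apple example:

```python
from collections import Counter

# Hypothetical labels from two annotators for the same ten images
# ("red_apple" vs "other"); a real overlap sample would be much larger.
annotator_a = ["red_apple", "other", "red_apple", "red_apple", "other",
               "red_apple", "other", "red_apple", "red_apple", "other"]
annotator_b = ["red_apple", "other", "other", "red_apple", "other",
               "other", "other", "red_apple", "red_apple", "other"]

# Raw agreement rate across the overlapping sample.
agreement = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)

# Where the annotators disagree, count which label each one chose.
# A one-sided pattern (e.g., annotator B rarely labels "red_apple")
# is a signal to review instructions, tooling, or annotator fit.
disagreements = Counter(
    (a, b) for a, b in zip(annotator_a, annotator_b) if a != b
)

print(f"agreement: {agreement:.0%}")
for (label_a, label_b), count in disagreements.items():
    print(f"A said {label_a!r}, B said {label_b!r}: {count} time(s)")
```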

Separately, what are the ethical implications for the humans engaged in this work? While red apples are innocuous, some data might contain disturbing content. If a model is intended to assess vehicle damage based on accident photos, human annotators might be asked to scrutinize and label images that contain things better left unseen. Here, organizations have an obligation to weigh the benefits of the model against the repercussions for the human workforce. Whether it is red apples or crashed cars, the insight is to keep humans at the center of decision-making and account for risks to the employee, the enterprise, the model and the end user.

The Importance Of Output Validation

With machine learning and other more traditional types of AI, model management requires ongoing attention to outputs to account for and correct issues such as model drift and brittleness. With the emergence of generative AI, validating outputs becomes even more critical for risk mitigation and governance.
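As an illustration of what that ongoing attention can look like in practice, here is a minimal sketch (my own, not drawn from the article) of a monitor that compares recent accuracy on human-labeled feedback against the accuracy measured at deployment. The window size and tolerance are purely illustrative:

```python
from collections import deque

class DriftMonitor:
    """Flags possible model drift when recent accuracy on labeled
    feedback falls well below the accuracy measured at deployment."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth) -> None:
        """Log one prediction against the human-provided label."""
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def drift_suspected(self) -> bool:
        """True once a full window of feedback shows accuracy
        dropping below the baseline by more than the tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent feedback yet
        recent_accuracy = sum(self.outcomes) / len(self.outcomes)
        return recent_accuracy < self.baseline - self.tolerance

# Usage: record each (prediction, label) pair as human feedback arrives,
# and route the model for human review when drift_suspected() returns True.
monitor = DriftMonitor(baseline_accuracy=0.92)
```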

Generative AI, such as large language models (LLMs), has rightly created excitement and urgency around how this new type of AI can be used across myriad use cases, both complementing the existing AI ecosystem with upstream deployments and enabling downstream use cases, such as natural language chatbots and assistive summaries of documents and datasets. Generative AI creates data that is (usually) as coherent and accurate as real-world data. If a prompt for an LLM asks for a review of supply chain constraints over the past month, a model with access to that data could output a tight summary of constraints, suspected causes and remediation steps. That summary provides insight that the user relies on to make decisions, such as changing a supplier that regularly encountered fulfillment issues.

But what if the summary is incorrect, and the LLM has (without any malicious intent) cited a constraint that does not exist and, even worse, invented a rationalization for why that “hallucination” is valid? The user is left to make decisions based on false information, which has cascading business implications. This exemplifies why output validation is necessary for generative AI deployments.

To be sure, not all inaccuracies bring the same level of risk and consequence. If using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be fairly easy to identify and the outcomes are lower stakes for the enterprise. When it comes to other applications that concern mission-critical business decisions, however, the tolerance for error is low. This makes a “human in the loop” who validates model outputs more important than ever before. Generative AI hallucination is a technical problem, but it requires a human solution.
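In code, a human-in-the-loop gate can be as simple as refusing to release high-stakes generated text until a reviewer has signed off. The sketch below is illustrative only; `Draft`, `release` and the risk flag are hypothetical names, and a real system would tie the review step back to the source data the summary claims to describe:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    prompt: str
    text: str
    high_stakes: bool  # e.g., feeds a supplier or spend decision

def release(draft: Draft, human_review: Callable[[Draft], bool]) -> Optional[str]:
    """Return the draft text only if it is low stakes or a human
    reviewer approves it; otherwise hold it back for rework."""
    if not draft.high_stakes:
        return draft.text      # low-risk content, e.g., a marketing email
    if human_review(draft):    # reviewer checks claims against source data
        return draft.text
    return None                # rejected: send back for correction

# Example with a stand-in reviewer that rejects anything unverified.
draft = Draft(
    prompt="Summarize last month's supply chain constraints",
    text="Supplier X missed several shipments...",
    high_stakes=True,
)
approved = release(draft, human_review=lambda d: False)
print("released" if approved else "held for human correction")
```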

Deloitte, where I’m the Global Head of the AI Institute, calls this the “Age of With,” an era characterized by humans working with machines to accomplish things neither could do independently. The opportunity is limited only by the imagination and the degree to which risks can be mitigated. Recognizing and prioritizing the human element throughout the AI lifecycle can help organizations build AI programs they can trust.
