Jodi Daniels is a privacy consultant and Founder/CEO of Red Clover Advisors, one of the few Women’s Business Enterprises focused on privacy.
In fiction, AI isn’t exactly benevolent. Sci-fi classics prefer the dystopian view of AI technology, from the malevolent HAL in 2001: A Space Odyssey to the sinister Cylons of Battlestar Galactica. Even the family-friendly WALL-E assumed that AI would turn humans into tech-dependent layabouts.
But the real-world application of AI is much more nuanced. Take one of the hottest AI topics as an example: ChatGPT and generative AI.
Amassing 100 million monthly active users just two months after it went live, ChatGPT is now the fastest-growing consumer application ever. But private citizens and businesses alike are using the tool without fully understanding its data privacy and ownership consequences.
That’s not to say that businesses shouldn’t use generative AI tools. They should, however, work to understand the parameters and limitations of tools like ChatGPT just like they would analyze any other software package or program prior to adoption.
What Is ChatGPT And Generative AI?
ChatGPT is an AI program of a type known as generative AI.
These programs are essentially built to produce outputs based on prompts or queries from a user. Generative AI doesn’t stick to a set of preformulated answers but rather generates the answers based on its own “experience,” accessible data and the user’s input.
ChatGPT in particular is a language model developed by OpenAI, designed to function as a conversation between the human input and program output.
Privacy And Data Implications Of Generative AI
If you’re a privacy-minded business owner or professional (and you should be!), remember that most generative bots, including ChatGPT, do not guarantee data privacy.
This is especially thorny for businesses whose contracts are designed to ensure privacy or confidentiality for clients. If a business enters any client, customer or partner information into a chatbot, that AI may use that information in ways that businesses can’t reliably predict.
What Data Does ChatGPT Collect?
Another privacy concern is that ChatGPT automatically opts every user into data collection. While ChatGPT’s terms and conditions state that users can opt out of data collection, they also note that opting out may limit the kinds of answers users receive.
Indeed, even the program itself cautions users: “I am not able to ensure the security or confidentiality of information…and the conversations may be used and stored for research or training purposes.”
If your business is moving forward with ChatGPT or another generative AI product, make sure you vet the service like you would any other vendor by asking questions like:
- What security measures are in place?
- What will happen with the data that is collected?
- Will data be separate or combined?
Businesses concerned with data privacy may find the most peace of mind by steering clear of ChatGPT unless they fully understand these terms and conditions for the product.
While data privacy regulators are still evaluating these systems, there are potential issues, especially related to Europe’s General Data Protection Regulation (GDPR). (Reminder that the U.S. does not have a comprehensive federal privacy law, and only some states offer privacy laws.)
GDPR’s Article 17 establishes the “right to erasure.” This allows users to withdraw consent and require an entity to delete their data. Businesses are also legally required to notify any third party with access to that person’s data of the withdrawn consent, and that third party must also “forget” the person’s data.
But this is where technology and the law diverge: Chatbots don’t have the complete ability to remove individual pieces of data from their training. As such, businesses that are subject to GDPR regulations may face long-term consequences for how they use AI bots. In a strong case of “an ounce of prevention is worth a pound of cure,” at the time of this writing, Italy, an EU member, has moved forward with a ChatGPT ban over concerns about GDPR violations.
As a result, businesses that rely on generative AI may be held liable for any violations resulting from their use of AI.
Moreover, there are copyright, privacy and security issues at play. A business could be put in a precarious position if it uses generative AI in a way that uses consumer data that runs counter to contractual obligations.
You wouldn’t introduce new software or a new process to your employees without a solid plan for training and documentation. Generative AI shouldn’t be the exception. Before your employees start inputting data into ChatGPT, determine company expectations and training. Issues to address should include:
- What’s the company policy on using generative AI?
- How can it/can’t it be used for different roles and responsibilities?
- What happens if policies are violated?
Data Privacy Risks
The ramifications of generative AI programs like ChatGPT will continue to emerge over the next decade. Without a clear strategy that centers on privacy, businesses can put their profitability and reputation at risk.
But this risk is preventable. Businesses that proactively address how their teams use generative AI and their privacy responsibilities can reap the benefits of AI—and establish themselves as privacy leaders.
Forbes Business Council is the foremost growth and networking organization for business owners and leaders.