FTC OpenAI GPT-4 overview:
- Who: A tech policy group is asking federal regulators to block OpenAI’s artificial intelligence product GPT-4.
- Why: The group says the product does not meet federal standards.
- Where: The complaint was filed with the U.S. Federal Trade Commission.
- What are my options: Consumers looking for alternatives to OpenAI’s technology may be interested in AI products from other companies, such as Microsoft.
A tech policy group is asking federal regulators to block the OpenAI artificial intelligence (AI) product GPT-4, saying the product does not meet federal standards.
The Center for Artificial Intelligence and Digital Policy lodged a complaint with the Federal Trade Commission (FTC) on March 30, saying the newly released OpenAI software is “biased, deceptive, and a risk to privacy and public safety.”
The group has asked the commission to open an investigation into the software’s maker, OpenAI, and to block future commercial releases of the company’s Generative Pre-trained Transformer 4 (GPT-4).
“We are at a critical moment in the evolution of AI products,” said Merve Hickok, chair and research director of the Center for Artificial Intelligence and Digital Policy.
“We recognize the opportunities and we support research. But without the necessary safeguards established to limit bias and deception, there is a serious risk to businesses, consumers, and public safety.”
Risks include proliferation of weapons and disinformation, policy group says
The software is a multimodal large language model designed to analyze vast datasets in order to recognize and mimic human speech patterns and text, and it is expected to disrupt multiple industries with its capabilities.
The Center for Artificial Intelligence and Digital Policy argues GPT-4 poses many risks beyond disrupting human jobs, including disinformation and cybersecurity threats that OpenAI itself has acknowledged, even as the company “disclaims liability for the consequences that may follow.”
It said OpenAI’s GPT-4 system card, which is a technical description of the product, describes almost a dozen major risks posed by the product, including “Disinformation and influence operations,” “Proliferation of conventional and unconventional weapons” and “Cybersecurity.”
The FTC complaint also faults OpenAI for non-compliance with the FTC’s guidance dating back to 2020 on the use of AI. The commission said any AI should be “transparent, explainable, fair, and empirically sound while fostering accountability.”
“OpenAI’s product GPT-4 satisfies none of these requirements,” the complaint states. “It is time for the FTC to act. There should be independent oversight and evaluation of commercial AI products offered in the United States.”
In a March 20 guidance blog post, an attorney in the FTC’s Division of Advertising Practices warned that the FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.
The attorney added if the tool is designed to help people, companies should also be asking “whether it really needs to emulate humans or can be just as effective looking, talking, speaking, or acting like a bot.”
The news comes after Microsoft announced it will implement AI language technology — including ChatGPT — in some of its popular Microsoft 365 (Office) business apps, including Word, Excel and PowerPoint.
What do you think about the regulation of AI? Let us know in the comments!