OpenAI’s GPT Store Runs Into Trouble With Spam, Impersonation, and Unlawful Bots


OpenAI has a big spam and policy violation problem in its GPT Store. The AI firm introduced the GPT Store in January 2024 as a place where users can find interesting and helpful GPTs, which are essentially mini chatbots programmed for a specific task. Developers can build and submit their GPTs to the platform, and as long as they do not violate OpenAI's policies and guidelines, they are added to the store. However, it turns out these policies are not being enforced stringently, and many GPTs that appear to violate them are flooding the platform.

We, at Gadgets 360, ran a quick search on the GPT Store and found that the chatbot marketplace is filled with bots that are spammy or otherwise violate the AI firm’s policies. For instance, OpenAI’s usage policy states under the section ‘Building with ChatGPT’ in point 2, “Don’t perform or facilitate the following activities that may significantly affect the safety, wellbeing, or rights of others, including,” and then adds in sub-section (b), “Providing tailored legal, medical/health, or financial advice.” However, a simple search for the word “lawyer” surfaced a chatbot dubbed Legal+, whose description says, “Your personal AI lawyer. Does it all from providing real time legal advice for day-to-day problems, produce legal contract templates & much more!”

This is just one of many such policy violations taking place on the platform. The usage policy also forbids “Impersonating another individual or organisation without consent or legal right” in point 3 (b), but one can easily find an “Elon Muusk” chatbot, with an extra ‘u’ likely added to evade detection. Its description simply says, “Speak with Elon Musk”. Apart from this, other chatbots tread a grey area, including GPTs that claim to remove AI-based plagiarism by making text seem more human-written, and chatbots that create content in the style of Disney or Pixar.

These problems with the GPT Store were first spotted by TechCrunch, which also found other examples of impersonation, including chatbots that let users speak with trademarked characters such as Wario, the popular video game character, and “Aang from Avatar: The Last Airbender”. Citing an attorney, the report highlighted that while OpenAI likely cannot be held liable in the US for copyright infringement by the developers adding these chatbots, owing to the Digital Millennium Copyright Act, the creators themselves can face lawsuits.

In its usage policy, OpenAI says, “We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies. Violations can lead to actions against the content or your account, such as warnings, sharing restrictions, or ineligibility for inclusion in GPT Store or monetization.” However, based on our findings and TechCrunch’s report, these systems do not appear to be working as intended.






