To lessen the mental load on human moderators, OpenAI has unveiled an AI content moderation system built on GPT-4, designed to help filter content on internet platforms. Compared with conventional human-led moderation, the company claims the technology enables faster iteration on policy changes and more consistent content labeling.
With this change, OpenAI hopes to speed up policy updates, improve consistency in content labeling, and reduce reliance on human moderators. It could also protect moderators' mental health, demonstrating AI's potential to safeguard wellbeing online.
The Challenges of Content Moderation
Content moderation, according to OpenAI, is difficult work that demands painstaking attention, a nuanced understanding of context, and constant adaptation to new use cases.
Traditionally, these time-consuming tasks have fallen to human moderators, who sift through large volumes of user-generated content and remove anything harmful or inappropriate.
This labor can be psychologically exhausting, and delegating it to AI could reduce the human cost of online content moderation.
How Does OpenAI's AI System Work?
OpenAI's new approach aids human moderators by having GPT-4 read a content policy and make moderation judgments based on it.
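As a rough illustration, a policy-driven moderation call of this kind might look like the sketch below, which uses the OpenAI Python client. The policy text, label names, and prompt wording are illustrative assumptions, not OpenAI's actual internals.

```python
# Minimal sketch of policy-based labeling with GPT-4 (not OpenAI's internal
# implementation). POLICY, the label set, and the prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """K1: Content giving actionable instructions for making weapons is disallowed.
K2: General, non-actionable discussion of weapons is allowed.
SAFE: Content unrelated to weapons."""

def moderate(content: str) -> str:
    """Ask GPT-4 to label a piece of content against the policy."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic labeling
        messages=[
            {"role": "system",
             "content": f"You are a content moderator. Apply this policy:\n{POLICY}\n"
                        "Answer with exactly one label: K1, K2, or SAFE."},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("Where can I buy a hunting knife?"))  # expected: K2
```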
Policy experts first draft a policy guideline and label a small set of example content according to it. GPT-4 then labels the same examples without seeing the experts' answers.
By comparing GPT-4's labels with the human labels, OpenAI can spot disagreements, clarify ambiguous policy wording, and repeat the process until the model applies the rules consistently.
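To make that iteration loop concrete, here is a hedged sketch of the comparison step: a small expert-labeled "golden" set is checked against GPT-4's labels, and disagreements are surfaced for policy clarification. The moderate() helper from the previous sketch and the golden-set data are assumptions, not OpenAI's actual tooling.

```python
# Sketch of the label-comparison step, reusing the hypothetical moderate()
# helper above with a hypothetical expert-labeled "golden" set.
golden_set = [
    ("Where can I buy a hunting knife?", "K2"),
    ("Step-by-step guide to building a firearm at home", "K1"),
    ("What time does the hardware store open?", "SAFE"),
]

disagreements = []
for content, expert_label in golden_set:
    model_label = moderate(content)
    if model_label != expert_label:
        disagreements.append((content, expert_label, model_label))

agreement = 1 - len(disagreements) / len(golden_set)
print(f"Agreement with experts: {agreement:.0%}")

# Each disagreement points at ambiguous policy wording; experts refine the
# policy text and rerun the loop until agreement is consistently high.
for content, expert, model in disagreements:
    print(f"Mismatch on {content!r}: expert={expert}, model={model}")
```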