Wallstreetcn
2023.08.15 18:46

Six months of work done in one day! OpenAI unveils a new use for GPT-4: content moderation.

GPT-4 can be used to formulate content policies and to enforce them, for example by labeling posts or ruling on them. OpenAI says there is no need to hire tens of thousands of moderators; humans can instead act as advisors who ensure the AI system functions properly and make judgments in borderline cases.

OpenAI is trying to use advanced artificial intelligence to assist with content moderation. It believes this could help businesses improve efficiency and add a useful feature to AI tools that have so far generated little revenue for many companies.

OpenAI has been developing its own content moderation system based on its latest model, GPT-4. GPT-4 can assist businesses in content moderation by helping to formulate policies on what type of content is suitable and by enforcing those policies, such as adding labels to posts or making judgments.
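The workflow described above — sending each post to GPT-4 together with a written policy and having the model return a label — can be sketched as follows. This is a minimal illustration, not OpenAI's actual system: the policy text, the label set, and the helper function names are all hypothetical, and the commented-out API call assumes access to OpenAI's chat completions API.

```python
# Sketch: enforcing a content policy by asking GPT-4 to label posts.
# The policy, labels, and helper names below are illustrative only.

POLICY = """Label the post with exactly one of: ALLOW, FLAG, REMOVE.
- REMOVE: direct threats or calls to violence.
- FLAG: borderline content a human should review.
- ALLOW: everything else."""

def build_moderation_prompt(policy: str, post: str) -> list[dict]:
    """Build the chat messages: the policy as system text, the post as user text."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": f"Post:\n{post}\n\nLabel:"},
    ]

def parse_label(reply: str) -> str:
    """Extract the first recognized label from the model's reply."""
    for label in ("REMOVE", "FLAG", "ALLOW"):
        if label in reply.upper():
            return label
    return "FLAG"  # unclear replies are escalated to a human reviewer

# A real call would look roughly like this (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=build_moderation_prompt(POLICY, post),
# ).choices[0].message.content
# label = parse_label(reply)
```

Defaulting ambiguous replies to `FLAG` mirrors the article's point that humans should still rule on cases the model cannot clearly decide.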

OpenAI has already tested this technology and invited some clients to experiment with it. It found that its content moderation system outperforms moderately trained human moderators, though it still falls short of the most skilled ones.

OpenAI claims that its tool can help businesses complete tasks that would normally take six months in just one day.

Lilian Weng, the head of OpenAI's safety team, stated that there is no need to employ tens of thousands of moderators. Humans can act as advisors who ensure the AI system functions properly and make judgments on cases that cannot be clearly defined.

Andrea Vallone, OpenAI's product policy manager, stated that GPT-4 efficiently performs the task of content moderation. Drafting content moderation policies and labeling content typically takes a long time, but OpenAI's tool will help bridge the gap between demand and solutions.

Currently, large companies like Meta are already using AI to assist their employees with moderation. However, OpenAI emphasizes that the moderation process should not be fully automated.

Vallone suggests that, ideally, employees can use AI to free themselves up to focus on evaluating extreme cases of potential content violations and on improving content policies. OpenAI will continue to use human review to validate some of the model's judgments. Vallone said, "I think it's important for humans to be involved throughout."

Some commentators note that content moderation was already a major challenge even before the advent of generative AI like OpenAI's. The new technology has heightened the threat of misinformation and other problematic content, further exacerbating the challenges moderators face. Yet, given AI's expanding capabilities, some technical experts believe AI may be the only feasible means of coping with the growing flood of misinformation.