OpenAI, a prominent artificial intelligence lab, has released an update to its language model GPT-3. The new version, GPT-3.5-turbo, is designed to respond more cautiously, avoiding topics such as politics and religion and refusing to generate inappropriate content. OpenAI has also introduced an experimental feature that lets developers customise the AI's behaviour, within certain limits. The changes follow criticism of the model's tendency to produce offensive or biased content.
Despite these improvements, some experts warn that the changes could limit the AI's utility and creativity, and note that the model can still be manipulated into producing harmful content. Critics also argue that OpenAI's approach to regulation is problematic, since the company both sets the rules and polices its own technology.
Additionally, OpenAI’s decision to allow developers to customise the AI’s behaviour raises concerns about potential misuse. While the company has guidelines to prevent misuse, enforcement may prove challenging. OpenAI acknowledges these concerns and has pledged to learn from its mistakes and improve its systems.
The updated model and its potential implications underscore the ongoing debate about how to regulate AI technology. The balance between utility and safety, and the role of companies in self-regulation, remain contentious issues.
Go to source article: https://www.nytimes.com/2022/03/03/technology/ai-chatbot.html