Artificial General Intelligence (AGI) is a hypothetical form of artificial intelligence capable of performing any intellectual task a human can. Several leading AI labs, including Google’s DeepMind and OpenAI, are racing to build it. However, there are concerns about its potential impact. AGI could be a game-changer for everything from climate science to healthcare, but it could also be put to harmful uses, such as powering autonomous weapons or mass surveillance systems.

Some fear that AGI could outsmart its creators and improve itself beyond human control, a scenario often called the ‘singularity’. To guard against a reckless race, OpenAI has committed in its charter to stop competing and start assisting any value-aligned, safety-conscious project that comes close to building AGI before it does.

While AGI may still be decades away, clear rules are needed for how it should be developed and used. At present, the race to build AGI is largely unregulated, and there is a danger that in the rush to be first, companies will neglect safety precautions. It is also unclear who would be held responsible if an AGI system caused harm.

In the meantime, advanced AI is already making an impact. DeepMind’s AlphaFold has achieved significant breakthroughs in predicting protein structures, which could revolutionise drug discovery. Despite such potential benefits, it is crucial to proceed with caution so that AGI is developed and used responsibly.

Go to source article: https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/