Deep learning, a subset of artificial intelligence, is transforming many areas of technology, from driverless cars to voice-activated assistants. The technique trains large neural networks to recognise patterns in data, and is now being applied to complex prediction problems. However, how these networks arrive at their answers is not fully understood, even by the researchers who build them.
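The core idea can be seen in miniature. The sketch below (an illustrative toy, not any system from the article) trains a tiny two-layer network with gradient descent to recognise the XOR pattern; every name and hyperparameter here is an assumption chosen for the example.

```python
import numpy as np

# Illustrative toy network, not the article's systems: learn XOR by
# adjusting weights to reduce prediction error on example data.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    return h, sigmoid(h @ W2 + b2)    # network prediction

_, p = forward(X)
loss0 = float(np.mean((p - y) ** 2))  # error before training

lr = 1.0
for _ in range(5000):
    h, p = forward(X)
    # Backpropagation: gradients of the mean squared error.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, p = forward(X)
loss_final = float(np.mean((p - y) ** 2))  # error after training
```

Even in this toy case, the trained model is just the weight arrays `W1` and `W2`: arrays of numbers that produce the right answers without anything in them "explaining" why a given input was classified one way, which is the opacity problem the article describes, scaled down.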
Despite the mystery surrounding deep learning, it’s clear that it can make remarkably accurate predictions. For instance, it can identify a tumour in medical scans better than a human radiologist, or predict with 75% accuracy which hospitalised patients will develop sepsis, a dangerous condition that is typically hard to spot in advance.
Yet the workings of these systems are more of a mystery than they need to be, owing to a lack of transparency in the technology industry. Companies like Google, Facebook, and Microsoft are not required to disclose publicly how their algorithms work. This opacity could have significant societal consequences, including the potential for algorithmic bias to go undetected.
Moreover, as deep learning evolves it grows more complex, and the risk of something going wrong increases. When an algorithm makes a mistake, it can be hard to establish why. That gap in understanding is especially troubling in high-stakes domains such as healthcare and self-driving cars.
In conclusion, while deep learning offers enormous potential, its opaque nature and the industry's lack of transparency pose real challenges. Greater openness and understanding are essential as we continue to adopt and develop this powerful technology.
Go to source article: https://mobile.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html