Machine learning is increasingly being used to predict crime and aid policing. PredPol, a predictive policing software package, uses machine learning algorithms to calculate crime risk for areas as small as 500 by 500 feet. The software analyses historical crime data and patterns to predict future offences. The Chicago Police Department utilises a similar tool, the Strategic Subject List, which predicts who is likely to be involved in a shooting, either as a victim or a perpetrator.
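PredPol's actual model is proprietary, so the following is only a minimal sketch of the general grid-scoring idea: bucket historical incidents into 500-by-500-foot cells and rank cells by count as a crude risk proxy. The coordinates and the `hotspot_cells` helper are hypothetical, not from PredPol.

```python
from collections import Counter

CELL_FT = 500  # PredPol reportedly scores 500-by-500-foot boxes

def cell_of(x_ft, y_ft):
    """Map a point (in feet from a city origin) to its grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hotspot_cells(incidents, top_k=3):
    """Rank grid cells by historical incident count -- a crude risk proxy."""
    counts = Counter(cell_of(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top_k)]

# Hypothetical incident coordinates, not real data
history = [(120, 80), (450, 300), (130, 95), (2600, 2550), (140, 60)]
print(hotspot_cells(history, top_k=2))  # → [(0, 0), (5, 5)]
```

Real systems weight recent incidents more heavily and model contagion between nearby cells, but the output is the same shape: a ranked list of small boxes for patrols to visit.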

Despite the potential benefits, there are concerns about the use of machine learning in law enforcement. Critics argue that it can reinforce existing biases in the criminal justice system, because the historical data it learns from may reflect societal prejudices. For example, if police have historically targeted certain neighbourhoods, the software may keep directing officers to those areas, perpetuating a cycle of over-policing.
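This feedback loop can be made concrete with a toy simulation (every number here is invented): two areas have identical true crime rates, each day's single patrol goes to whichever area has more recorded crime, and only crime in the patrolled area gets recorded.

```python
def simulate(rounds=50):
    """Two areas with identical true crime; patrols follow the records."""
    recorded = [10.0, 5.0]   # area 0 starts with more historical records
    true_rate = 1.0          # one real offence per area per round -- identical
    for _ in range(rounds):
        # Send the single daily patrol to the area with more recorded crime.
        patrol = 0 if recorded[0] >= recorded[1] else 1
        # Crime happens in both areas, but only the patrolled one is recorded.
        recorded[patrol] += true_rate
    return recorded

print(simulate())  # → [60.0, 5.0]
```

Although both areas offend at the same rate, area 0's small initial head start captures every subsequent record, and the data appears to confirm the original targeting.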

On a personal level, machine learning is also being used to predict health risks. The University of Nottingham developed an algorithm that can predict heart disease and diabetes more accurately than doctors. The algorithm analyses patient data, including age, gender, ethnicity, and family history, to predict the likelihood of developing these conditions.
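The Nottingham model itself is not described in detail here, but risk prediction of this kind is typically a classifier such as logistic regression over patient features. A minimal sketch with invented coefficients (nothing below reflects the actual study):

```python
import math

# Invented coefficients for illustration only -- NOT from the Nottingham study.
WEIGHTS = {"age": 0.05, "family_history": 1.2}
BIAS = -5.0

def risk(patient):
    """Logistic model: estimated probability of developing the condition."""
    z = BIAS + sum(w * patient[k] for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

print(round(risk({"age": 60, "family_history": 1}), 3))
```

A trained model would fit these weights from thousands of patient records rather than hard-coding them, and would use many more features, but the output is the same: a probability that can be compared against a clinical threshold.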

Machine learning is a powerful tool, but its use in predicting crime and health risks raises questions about bias and privacy. It’s crucial that these issues are addressed to ensure the fair and ethical use of this technology.

Go to source article: http://readwrite.com/2016/11/09/machine-learning-used-pl1/