Dangers and Challenges of Artificial Intelligence

Increasing reliance on AI systems also poses potential risks.

Underuse and overuse of AI

AI can help successfully implement EU programs such as the Green Deal and open up competitive advantages. If the EU misses these opportunities, the consequences could be negative: economic stagnation, inadequate infrastructure, lower investment, and poorer prospects for citizens and businesses. But overuse of AI can be just as problematic, for example investing in applications that prove not to be useful or applying AI to problems it is not suited for.

Liability: who is responsible in the event of damage?

A significant challenge is determining who is responsible for damages caused by AI-based devices or services. For example, in an accident involving a self-driving vehicle, should damages be covered by the owner, the vehicle manufacturer, or the programmer?

If manufacturers were free of any responsibility, they would have no incentive to provide a good product or service, and people’s trust in new technologies would suffer. Conversely, overly strict liability rules could nip innovation in the bud.

Threats to fundamental rights and democracy

The results an AI system produces depend on how it is designed and what data it uses. Both the data and the design can be intentionally or unintentionally biased. For example, essential factors of a problem may not be embedded in the algorithm, or the algorithm may be programmed to reflect and replicate structural biases. In addition, the use of numbers can make AI appear fact-based and accurate even when it is not (“mathwashing”).

If not used properly, AI could lead to decisions influenced by ethnicity, gender, or age in job hiring or lending.
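A toy sketch of how this happens, using invented data (no real dataset or hiring system is implied): a naive rule “learned” from historically biased hiring decisions simply reproduces the bias against one group, even though both groups have identical experience levels.

```python
# Historical records: (years_experience, group, hired).
# In this invented history, group "B" was systematically rejected
# despite having the same experience as group "A".
history = [
    (5, "A", True), (3, "A", True), (1, "A", False),
    (5, "B", False), (3, "B", False), (1, "B", False),
]

def learn_hire_rate(records):
    """Learn a per-group hire rate from past decisions."""
    counts = {}
    for _, group, hired in records:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + (1 if hired else 0))
    return {g: k / n for g, (n, k) in counts.items()}

rates = learn_hire_rate(history)
# The "model" reproduces the past: group B's learned hire rate is 0.0,
# so it would never be hired, although the relevant qualification
# (experience) is identical across groups.
print(rates)
```

The point is not the simplistic model but the mechanism: any system optimized to match historical decisions will treat past discrimination as a pattern worth replicating unless the data or objective is explicitly corrected.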

There are also potentially serious privacy and data protection implications. For example, AI can be used for facial recognition or for tracking and profiling individuals online.

AI may also pose a threat to democracy. In this context, for example, websites are often criticized for tending to show users only information that matches their previous online behavior (“filter bubbles”) rather than creating an environment for pluralistic, equally accessible, and inclusive public debate. AI can even be used to create highly realistic fake videos, audio recordings, and images known as “deep fakes.” These mechanisms can contribute to polarization and election manipulation.

Tracking and profiling can also impact freedom of assembly and demonstration.

Impact on jobs

The use of AI in the workplace is expected to eliminate many jobs. Although artificial intelligence is also expected to create new ones, education and training will play a critical role in preventing long-term unemployment and building a skilled workforce.

Competition

The accumulation of data can also distort competition, as players with more information gain advantages over their competitors.

Security risks

AI tools that people physically interact with, or that are even implanted in the human body, can pose serious safety risks, as they can be poorly designed, misused, or hacked.

Challenges related to transparency

Imbalances in information access can also be exploited. For example, based on a person’s online behavior or other data, a provider can use AI to predict how much that person is willing to pay, without their knowledge. Another transparency issue is that it is sometimes unclear whether users are interacting with an AI application or a human.

Murat Durmus (Author of the book “The AI Thought Book”)
