Which Cognitive Biases One Should Pay Particular Attention To When Developing AI Systems

Murat Durmus (CEO @AISOMA_AG)
3 min read · Jan 2, 2024

When developing AI systems, paying particular attention to several cognitive biases is essential to ensure fairness, accuracy, and effectiveness. If not addressed, these biases can lead to skewed outcomes, perpetuate existing inequalities, and undermine AI systems’ credibility and ethical standards.

Here are some fundamental biases to consider:

Confirmation Bias: This is the tendency to favor information that confirms pre-existing beliefs or hypotheses. In AI, this might mean an algorithm disproportionately learns from data that confirms its existing patterns while ignoring contradictory data.

Algorithmic Bias: This arises when an AI system generates systematic errors that create unfair outcomes, such as privileging one arbitrary group of users over others. It often stems from biases in the data used to train the AI or in the algorithms’ design.

Overconfidence Effect: This is the tendency to overestimate the accuracy of one’s predictions or decisions. In AI, this could manifest as over-reliance on the accuracy of an algorithm’s predictions without sufficient regard for potential errors or limitations.
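
One way to make this tendency visible is a simple calibration check: compare how confident the model claims to be with how often it is actually right. The sketch below is illustrative only, using made-up toy numbers and a binning scheme of my own choosing rather than any specific library or project; a positive gap between average confidence and accuracy within a bin is a sign of overconfidence.

```python
import numpy as np

def calibration_report(confidences, correct, n_bins=5):
    """Compare stated confidence to observed accuracy in equal-width bins.

    confidences: predicted probabilities for the chosen class (0..1)
    correct:     1 if the prediction was right, 0 otherwise
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so that confidence 1.0 is included
        mask = (confidences >= lo) & (confidences < hi) if hi < 1.0 else (confidences >= lo)
        if mask.sum() == 0:
            continue
        avg_conf = confidences[mask].mean()
        accuracy = correct[mask].mean()
        print(f"[{lo:.1f}, {hi:.1f}): confidence {avg_conf:.2f} vs. accuracy {accuracy:.2f} "
              f"(gap {avg_conf - accuracy:+.2f}, n={int(mask.sum())})")

# Toy data (hypothetical): accuracy lags stated confidence by about 0.1,
# so every bin should show a positive gap, i.e. overconfidence.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
hit = (rng.uniform(size=1000) < conf - 0.1).astype(int)
calibration_report(conf, hit)
```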

Sampling Bias: This occurs when the data used to train an AI system do not represent the broader population or the environment where the AI will be applied. This can lead to AI systems that perform well on training data but poorly in real-world scenarios.
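
A basic safeguard here is to compare the composition of the training data with the population the system is meant to serve. The sketch below is a minimal illustration under assumed inputs; the group labels and population shares are hypothetical and would have to come from the actual application context.

```python
from collections import Counter

def representation_gap(train_groups, population_shares):
    """Compare each group's share of the training data to its share of the
    population the model is meant to serve."""
    counts = Counter(train_groups)
    total = sum(counts.values())
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        print(f"{group:>10}: train {observed:.1%} vs. population {expected:.1%} "
              f"(difference {observed - expected:+.1%})")

# Hypothetical example: a training set drawn mostly from group "A",
# while the target population is far more evenly distributed.
train_groups = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population_shares = {"A": 0.50, "B": 0.30, "C": 0.20}
representation_gap(train_groups, population_shares)
```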

Anchoring or Focalism: This bias involves relying too heavily on initial information (the “anchor”) when making decisions. In AI, this might mean an algorithm gives disproportionate weight to specific early data points, influencing subsequent analysis and decisions.

Automation Bias: This is the tendency to rely excessively on automated systems, which can lead to errors, particularly when those systems are used as decision-making aids.

Groupthink or Bandwagon Effect: In AI development, this bias can occur when teams working on AI algorithms conform to a consensus without critically evaluating alternative ideas or perspectives.

Availability Heuristic: This is the tendency to overestimate the importance of readily available information. For AI, this might mean algorithms being unduly influenced by more recent or more easily accessible data.

Negativity Bias: AI systems may be designed to give more weight to adverse outcomes or data, which can skew results and decision-making processes.

Ethnocentrism or Cultural Bias: When AI systems are developed with a focus on one cultural or ethnic group, they may not perform well for users from different backgrounds.


To mitigate these biases, AI development should include diverse datasets, regular auditing for bias, transparent and explainable AI algorithms, and a multidisciplinary approach involving ethicists, sociologists, and domain experts alongside AI developers and data scientists.
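
As one concrete form of such auditing, the following sketch computes the rate of positive decisions per group, a simple demographic parity check. The predictions, group labels, and the single metric shown are illustrative assumptions; real audits typically combine several fairness metrics with domain knowledge.

```python
import numpy as np

def positive_rate_by_group(y_pred, groups):
    """Report the share of positive decisions per group; a large gap between
    groups is a red flag worth investigating in a bias audit."""
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    for g, r in rates.items():
        print(f"group {g}: positive rate {r:.2f}")
    print(f"demographic parity gap: {max(rates.values()) - min(rates.values()):.2f}")
    return rates

# Hypothetical audit of a model's binary decisions for two groups
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["x", "x", "x", "x", "x", "x", "y", "y", "y", "y", "y", "y"])
positive_rate_by_group(y_pred, groups)
```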
