12 Steps to put AI Ethics into Practice

Murat Durmus (CEO @AISOMA_AG)
3 min read · Oct 21, 2022

Today I would like to give you a quick overview of the 12 essential steps for putting AI ethics into practice.

1. Justify the choice of introducing an AI-powered service

Before considering how to mitigate the risks associated with AI-powered services, organizations planning to deploy them should lay out the objectives assigned to those services and how they are expected to benefit the various stakeholders (such as end users, consumers, citizens, and society at large).

2. Adopt a multistakeholder approach

Project teams should identify the stakeholders, both internal and external, who should be involved in each particular project and provide them with relevant information about the envisioned usage scenarios and the specification of the AI system under consideration.

3. Consider relevant regulations and build on existing best practices

When considering the risks and benefits of specific AI-powered solutions, include applicable human and civil rights in impact assessments.

4. Apply risks/benefits assessment frameworks across the lifecycle

An essential distinction between AI software and traditional software development is the learning aspect: the underlying model evolves with data and use. Any sensible risk assessment framework therefore has to cover both build-time (design) and runtime (monitoring and management), and it should be amenable to assessment from a multistakeholder perspective in both phases.
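
As a minimal sketch of this idea, the same risk can be recorded at design time and reassessed at runtime in a shared register, so drift between the two phases is visible to all stakeholders. All names, phases, and the 1-5 severity scale below are illustrative assumptions, not part of any prescribed framework:

```python
from dataclasses import dataclass, field

# Illustrative lifecycle phases for an AI risk register (assumed names).
DESIGN, RUNTIME = "design", "runtime"

@dataclass
class Risk:
    description: str
    stakeholder: str  # who is exposed to this risk
    scores: dict = field(default_factory=dict)  # phase -> severity (1-5)

    def assess(self, phase: str, severity: int) -> None:
        """Record (or update) the severity observed in a given phase."""
        self.scores[phase] = severity

    def drifted(self) -> bool:
        """True if runtime monitoring found the risk worse than at design time."""
        return self.scores.get(RUNTIME, 0) > self.scores.get(DESIGN, 0)

# A model whose bias risk grows as live data shifts the distribution:
r = Risk("biased loan decisions", stakeholder="applicants")
r.assess(DESIGN, 2)   # judged low at build time
r.assess(RUNTIME, 4)  # monitoring shows it has worsened in operation
print(r.drifted())    # True -> trigger a stakeholder re-review
```

The point of the sketch is that assessment is not a one-off design artifact: the same record is updated while the system runs.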

5. Adopt a user-centric and use-case-based approach

To ensure that risks/benefits assessment frameworks are effectively actionable, they should be designed from the perspective of the project teams and around specific use cases.

6. Lay out a risk prioritization scheme

Diverse stakeholders have different risk/benefit perceptions and levels of tolerance. Therefore, it is essential to implement processes explaining how risks and benefits are prioritized and competing interests resolved.
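
One common prioritization scheme (a generic risk-matrix approach, not one prescribed by this article) scores each risk as likelihood times impact and addresses the highest scores first; the risks, scales, and scores below are invented for illustration:

```python
# A minimal likelihood-x-impact prioritization sketch (1-5 scales assumed).
risks = [
    {"name": "privacy leak", "likelihood": 2, "impact": 5},
    {"name": "model drift",  "likelihood": 4, "impact": 3},
    {"name": "UI confusion", "likelihood": 5, "impact": 1},
]

def priority(risk: dict) -> int:
    """Classic risk-matrix score: likelihood times impact."""
    return risk["likelihood"] * risk["impact"]

# Highest-scoring risks are addressed first; ties and competing stakeholder
# interests are resolved by the escalation process the organization defines.
for risk in sorted(risks, key=priority, reverse=True):
    print(risk["name"], priority(risk))
```

Making the scoring rule explicit is what lets stakeholders with different risk tolerances argue about the inputs rather than the outcome.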

7. Define performance metrics

In consultation with key stakeholders, project teams should define clear metrics for assessing the AI-powered system’s fitness for its intended purpose. Such metrics should cover the system’s narrowly defined accuracy and other aspects of the system’s more broadly defined fitness for purpose (including factors such as regulatory compliance, user experience, and adoption rates).
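
A hypothetical fitness-for-purpose check might combine narrow accuracy with the broader metrics the article mentions (compliance, adoption) under agreed thresholds; every metric name, value, and threshold below is an illustrative assumption:

```python
# Hypothetical fitness-for-purpose report: narrow accuracy plus broader
# metrics, each with a threshold agreed with key stakeholders.
metrics = {
    "accuracy":       {"value": 0.91, "threshold": 0.90},
    "user_adoption":  {"value": 0.55, "threshold": 0.60},
    "audit_findings": {"value": 0,    "threshold": 0},  # regulatory compliance
}

def fit_for_purpose(metrics: dict) -> list:
    """Return the metrics that currently fail their agreed threshold."""
    failing = []
    for name, m in metrics.items():
        if name == "audit_findings":
            ok = m["value"] <= m["threshold"]  # fewer findings is better
        else:
            ok = m["value"] >= m["threshold"]  # higher is better
        if not ok:
            failing.append(name)
    return failing

print(fit_for_purpose(metrics))  # ['user_adoption']
```

Here the system passes its accuracy bar yet still fails fitness for purpose on adoption, which is exactly the distinction the step draws.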

8. Define operational roles

Project teams should clearly define human agents’ roles in deploying and operating any AI-powered system. The definition should include a precise specification of the responsibilities of each agent required for the effective operation of the system, the competencies needed for filling the role, and the risks associated with a failure to fill the positions as intended.

9. Specify data requirements and flows

Project teams should specify the volumes and nature of data required for any AI-powered system’s practical training, testing, and operation. In addition, project teams should map data flows expected with the system’s operation (including data acquisition, processing, storage, and final disposition) and identify provisions to maintain data security and integrity at each stage in the data lifecycle.
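
Such a data-flow map can be as simple as a structured list covering each lifecycle stage with its security and integrity provisions; the stage names follow the text above, while the data items and controls are illustrative assumptions:

```python
# Sketch of a data-flow map for an AI system. Stages follow the lifecycle
# named in the text; the data items and controls are assumed examples.
data_flow = [
    {"stage": "acquisition", "data": "user transactions",
     "controls": ["TLS in transit", "consent check"]},
    {"stage": "processing",  "data": "feature vectors",
     "controls": ["pseudonymization"]},
    {"stage": "storage",     "data": "training set",
     "controls": ["encryption at rest", "access logging"]},
    {"stage": "disposition", "data": "expired records",
     "controls": ["scheduled deletion"]},
]

def uncontrolled_stages(flow: list) -> list:
    """Flag any lifecycle stage with no security/integrity provision."""
    return [s["stage"] for s in flow if not s["controls"]]

print(uncontrolled_stages(data_flow))  # [] -> every stage has a provision
```

An empty result is the goal: no stage of the data lifecycle is left without an explicit provision.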

10. Specify lines of accountability

Project teams should map lines of responsibility for the outcomes (both intermediate and final) generated by any AI-powered system. Such a map should enable a third party to trace responsibility for any unexpected outcome of the system.
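
In its simplest hypothetical form, such a map assigns exactly one named owner to each outcome, and anything unmapped is itself an audit finding; the outcomes and roles below are invented for illustration:

```python
# Hypothetical accountability map: each system outcome has one named owner,
# so a third party can trace responsibility for unexpected results.
accountability = {
    "training data selection": "data engineering lead",
    "model sign-off":          "ML product owner",
    "runtime monitoring":      "operations team",
    "incident response":       "on-call engineer",
}

def owner_of(outcome: str) -> str:
    """Look up who is responsible; an unmapped outcome is an audit gap."""
    return accountability.get(outcome, "UNASSIGNED - audit gap")

print(owner_of("model sign-off"))    # ML product owner
print(owner_of("customer appeals"))  # UNASSIGNED - audit gap
```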

11. Support a culture of experimentation

Organizations should advocate a right to experiment with AI-powered services before deployment in order to encourage calculated risk-taking. This requires establishing feasibility and validation studies, encouraging collaboration across departments and fields of expertise, and sharing knowledge and feedback via a dedicated platform.

12. Create educational resources

Building a repository of the various risks/benefits assessment frameworks used, their performance, and their revised versions is key to developing a solid organizational capability for deploying AI-powered services.

Murat Durmus

(Author of the Book “The AI Thought Book”)

— —

Dear Readers,

You can get the eBook “THE AI THOUGHT BOOK” for only $0.99 on Amazon for a limited time.

THE AI THOUGHT BOOK

