The Quest For AGI

Murat Durmus (CEO @AISOMA_AG)
2 min read · Mar 1, 2024
(created with DALL-E)

Below, you can see a post by Sam Altman from February 16. I can’t stop thinking about his statement, “extremely focused on making AGI.”

(source: X)

I am by no means an enemy of technological progress, but the longer and more deeply I think about it, the more I conclude that we as a society are not yet ready for AGI. Of course, there is no clear definition of AGI, but let's start from the widely used one: an artificial general intelligence (AGI) is a hypothetical type of intelligent agent which, if realized, could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks (source: Wikipedia).

We do not yet sufficiently understand how these systems achieve their results, how they function internally, or how they evolve. The alignment problem, the control problem, the ethical shortcomings, and ultimately the emergent properties and behavior of such systems all remain unresolved.

I don’t know how you see it, but the focus in the field of AI should not be on creating AGI. Rather, we should first work to resolve the shortcomings mentioned above in a way that serves the interests of all.

Maybe I’m thinking too pessimistically in this matter.
