Duplication versus Simulation

Murat Durmus (CEO @AISOMA_AG)
4 min read · Jun 4, 2023

There is still much confusion about this point in the AI community. Therefore, with this article, I want to present my view on the relationship between duplication and simulation because it is of great importance that there is clarity here.

The philosopher John Searle attached great importance to this point by explaining that a simulation is not a duplication. A machine cannot duplicate human thought; at best, it can simulate it. That simulation and duplication are two entirely different things is a point on which I fully agree with him.

Suppose we have two objects in front of us, say, an Audi A4 (neither my favorite car, nor do I drive one) and a second object that someone claims to be a “duplicate” or a “model” of the Audi A4. What exactly does that mean? What is a model of the A4? It means precisely what a ten-year-old interested in car models understands by it: there is a direct correspondence between the external stimuli, internal states, and behavior of the A4 and the inputs, internal states, and outputs of the model. The correspondence does not have to be one hundred percent (just as no two human brains are identical); some external stimuli, states, and behaviors of the real A4 may not be present in the model. If, for example, you go to Ingolstadt (Audi headquarters) and look at a model of the A4 in the wind tunnel, you will see that the seats, the navigation system, and all the other equipment details that make up many of the internal states of the “real” Audi A4 are missing, for the simple reason that they are irrelevant to the purpose of the model, i.e., testing the aerodynamic properties of the real car.

Nevertheless, the model’s external stimuli, states, and behaviors correspond directly to a subset of the actual A4’s inputs, states, and actions. Such a correspondence establishes a model relationship between the actual A4 and the object in the wind tunnel. Note that the model is simpler than the object it replicates because it has fewer states. This property is characteristic of model relationships: models are always simpler than their originals.
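The idea can be sketched in a few lines of code. This is a toy illustration with hypothetical state names (not real engineering data): a model keeps only the subset of the original’s states that matters for its purpose, so it always has strictly fewer states.

```python
# Hypothetical states of the real car (illustrative names only).
original_a4 = {
    "aero_drag", "seat_position", "navigation", "engine_temp", "wheel_angle",
}

# The wind-tunnel model cares only about aerodynamics.
wind_tunnel_model = {"aero_drag", "wheel_angle"}

# The model relationship: every model state corresponds to a state of the
# original, but not the other way around -- the model is simpler.
assert wind_tunnel_model <= original_a4           # subset of the original's states
assert len(wind_tunnel_model) < len(original_a4)  # strictly fewer states
```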

What about a simulation?

Let’s take a printer of brand X, whose operating instructions assure me that it can imitate, i.e., simulate, another type of printer, e.g., an HP LaserJet Plus. What does it mean to say that my X machine can simulate another device?

It means that the inputs and states of the HP machine can be encoded into states of my machine, and those states of my machine can then be decoded into the correct outputs that an actual HP printer would produce. What is important is that, in a certain sense, my machine has to be more complicated than the HP for such a dictionary of encodings and decodings to exist. To be more precise: to encode the inputs and states of the HP into the states of my simulator, my machine must have more states than the HP printer, if both devices are regarded as abstract machines. Therefore, the simulator (my printer) must be more complex than the simulated object (the HP printer). In general, a simulation is always more complex than the system it simulates.
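A minimal sketch of why the simulator needs more states, using made-up printer states and inputs (the names are assumptions for illustration): the encoding dictionary must map each distinguishable (state, input) situation of the HP injectively to a simulator state, so the simulator ends up with more states than the HP has on its own.

```python
from itertools import product

# Hypothetical abstract machines: the simulated "HP" printer.
hp_states = {"idle", "printing", "error"}
hp_inputs = {"job", "reset"}

# To simulate the HP, machine X must encode every (HP state, HP input)
# situation injectively -- one distinct simulator state per situation.
encoding = {pair: f"X_{i}" for i, pair in enumerate(product(hp_states, hp_inputs))}
x_states = set(encoding.values())

# The dictionary is injective, so the simulator needs one state per
# situation to encode -- more states than the HP machine itself has.
assert len(x_states) == len(hp_states) * len(hp_inputs)
assert len(x_states) > len(hp_states)
```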

These short, perhaps even commonplace and casual, explanations of models and simulations can be translated into exact mathematical terms. They yield criteria that can, in principle, be verified, and with which we can distinguish a program that models human thought processes from one that merely simulates them. In this context, it is striking that a simulation of the brain requires a system with more states than the brain itself. This fact justifiably casts much doubt on whether the brain can ever be simulated.

The brain, with its approximately 100 billion neurons, has at least 2 to the power of 10 to the power of 11 possible states, a number that deserves the highest respect in every sense, for it exceeds even the number of protons in the known universe (about 10 to the power of 79) by a factor that is itself still roughly 2 to the power of 100 billion. This number is so large that it is difficult to express in words, let alone to imagine. We can therefore safely assume that there will be no simulation of a human brain in the medium or even the long term (the EU-funded Human Brain Project pursues a similar objective).
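The magnitudes above can be checked with a quick back-of-the-envelope computation, under the stated simplifying assumption of roughly 10^11 neurons, each treated as merely on or off. The number 2^(10^11) cannot be computed directly, but its number of decimal digits follows from a logarithm:

```python
import math

neurons = 10**11  # assumption: ~100 billion neurons, each binary

# Number of decimal digits of 2**neurons, via log10 -- the number itself
# is far too large to ever write out.
digits = neurons * math.log10(2)  # about 3.01e10 digits

# 10**79 protons is a number with a mere 80 digits.
proton_digits = 79

print(f"2^(10^11) has about {digits:.2e} decimal digits")
assert digits > proton_digits  # the brain's state count dwarfs the proton count
```

So even the *digit count* of the brain's state space (about thirty billion digits) vastly exceeds the proton count itself, which supports the pessimism about a full simulation.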

A simulation is always more complex than the system it simulates.

Brain models are an entirely different matter, and it is fortunate that “strong AI” in the human sense needs models, not simulations. All in all, though, I have the impression that the thinking-machine debate is a battle between philosophers, not computer scientists and programmers.

However, for my part, I can conclude this brief excursion with an unambiguous and definitive statement: whatever the outcome of the question of “strong AI” in the human sense, the result will radically change our self-image and our view of our place in the cosmic order.

Murat

(The text is an excerpt from the Book “Mindful AI: Reflections on Artificial Intelligence”)
