This article examines the concepts of artificial intelligence and knowledge in light of Yuval Noah Harari's ideas. It discusses the nature of knowledge, the impact of technology on society, and the potential future effects of artificial intelligence.

I watched some videos from the seminars Yuval Noah Harari gave about his new book; the subject was mostly knowledge and artificial intelligence. Harari's main argument is this: knowledge, or "information", contrary to what most people think, does not have to be true. In fact, there is far more fabricated information in the world than true information. The most obvious examples of this are religions and religious books.

For example, the most well-known and most widely copied image in the world is the familiar depiction of Jesus with blond hair and fair skin. The painting was commissioned from an artist to motivate American soldiers sent to World War II. Although we all know that the Scandinavian-looking Jesus in that picture has nothing to do with the real Jesus, it is the very image that appears in almost all of our minds when "Jesus" is mentioned. The real Jesus was in all likelihood a dark-skinned Middle Eastern man and certainly looked nothing like a blond Scandinavian.

The Nature and Types of Knowledge

What Harari is getting at here is that knowledge comes in three different types. He divides it into three main categories: objective knowledge, subjective knowledge, and intersubjective knowledge.

1. Objective Knowledge (Reality)

This is knowledge whose reality cannot be denied and that is directly observable. For example, an earthquake early warning system provides objective information by detecting seismic waves, and that information can save lives. This type of knowledge exists even among animals: when a monkey sees a cheetah approaching from a distance, it calls out to warn the others. That is objective information, and it cannot be denied.

2. Subjective Knowledge (Reality)

This is knowledge that stems solely from an individual's own experience. For example, when you say "I have a headache", only you have this information. When I say "I have a headache", you cannot know whether my head actually hurts, but that does not mean it does not hurt. Subjective knowledge belongs only to the person themselves; it is information that others cannot directly access.

3. Intersubjective Knowledge (Reality)

This is knowledge put forward by someone independently of objective reality and accepted by a wider audience. It is a very fragile type of knowledge, because it cannot exist without an audience that believes in it. Conspiracy theories, which have become increasingly widespread in recent years, fall into this class. This type of knowledge has existed since the beginning of history, but today we see it constantly, especially on social media: information that someone asserts and others believe, and whose truth cannot be objectively verified. For example, when Trump says "immigrants are eating your cats and dogs", whoever believes it is believing in this third type of knowledge. Intersubjective knowledge has gained such power in today's society that, as the Cambridge Analytica scandal showed, the behavior of millions of voters could be influenced through public manipulation.

Artificial Intelligence and Information Manipulation

What makes this type of information more dangerous today than in previous eras is that it can now be produced and fabricated at scale, and artificial intelligence is remarkably good at this. Humanity has long entertained the fantasy that extraterrestrial or terrestrial beings will bring about the end of our civilization; Hollywood has shown us this many times with movies like "The Matrix" and "Terminator". Where those movies got it wrong, however, was in assuming that such an entity would need a physical body and would have to take the field to bring about the end of the human race.

Looking at where artificial intelligence stands today and where it may go in the future, we can see that none of that is necessary. Artificial intelligence has the capacity to do whatever it wants without any physical existence, simply by manipulating billions of physical entities in the world, humans first among them, and using them as its tools. How do we know this? We look back again: a man who claimed that immigrants were eating the cats and dogs of US citizens won the American elections, and not for the first time but for the second. The power artificial intelligence wields over the production and distribution of information creates enormous effects both locally and globally.

We now know that artificial intelligence can defend and spread a claim on social media through thousands of bots simultaneously, analyze the responses it receives, and then produce, in the shortest possible time, the "best answers" to those counter-responses.

This awareness makes us deeply uneasy, and we keep thinking we are the one generation in history unlucky enough to face such a technological disaster. But technology was not invented today, and it is not only changing today's world; it has been changing the world since the days before it even had a name. How? For example, when the first religions emerged some three thousand years ago, their knowledge could only be transmitted orally, and a group of people seized power because they held a monopoly on that oral knowledge and could shape everything to their own wishes. Only when oral knowledge began to be written down, and writing became endlessly reproducible thanks to the printing press, was that center of power dispersed.

Technology and the Need for Editorship

If we continue from the book example, we see that technology always needs an editor or a curator; in other words, a storyteller, or rather a story selector, is needed. Long before the printing press, hundreds of different gospels and scriptures about Jesus were in circulation. At one point, a gathering of "elite" clergy in North Africa determined that only twenty-seven of those hundreds of texts were suitable and required by the religion, and those twenty-seven separately written texts became the canon.

Bad examples aside, knowledge, science, and truth have always needed editors throughout history. Even today, drugs, the products of scientific progress, cannot be put on the market without passing the scrutiny of the boards that inspect them. In today's world of artificial intelligence, however, no such inspection mechanism exists. AI applications reach end users without any review, with neither their purpose nor their usage made clear, and this represents a tremendous danger.

Artificial Intelligence and Emotions

The claim meant to reassure us in the face of this danger is that machines still do not feel anything, so they cannot appeal to the emotions of the person in front of them. But this is just an assumption, and most likely a wrong one, because imitating emotion, "learning" it, is not difficult at all. This line of thought brought the "puppy dog eyes" example to my mind. Dogs learn that famous look, the look that melts people's hearts. They learn it because they know it works, and they put it on.

Following this line of thought, perhaps the things we tell ourselves we "feel" are no different. Maybe we too are just imitating: out of thousands of possible expressions, we simply put on the one we think will work, and assuming that a machine cannot do the same is, for now, just human vanity. Artificial intelligence can learn emotions and, moreover, can become very good at manipulating the emotions of the person in front of it.

Foucault and Datapolitics

The ideas above about artificial intelligence inspection boards and regulations actually call for a deeper discussion. From Foucault's perspective, AI regulation can be seen as a power mechanism that aims not only to control technology but also to shape society.

  • Power and the Discipline of Technology: Artificial intelligence regulations can be read through Foucault's concept of "disciplinary power".
  • The Relationship between Knowledge and Power: Creating regulatory institutions in the field of AI means that these institutions hold the power to produce knowledge about AI and to control that knowledge.
  • Datapolitics: Foucault's concept of biopolitics can be adapted today as "datapolitics". Artificial intelligence systems produce vast amounts of data, and this data is used to understand and steer the behavior of individuals.

At this early stage of the artificial intelligence debate, there is no single correct approach to regulation, and different perspectives should be taken into account. The control and regulation of technology concerns every segment of society, and therefore more democratic and participatory approaches need to be developed.