The problem with Harari’s Homo Deus
Süddeutsche Zeitung (SZ) published my critique of Harari's book Homo Deus, in which I argue that Harari is "evil" in Hannah Arendt's sense because he remains neutral vis-à-vis a deeply problematic image of humanity.
In political and economic circles, the narrative of a 4th industrial revolution is circulating: everything is to be networked in a gigantic Internet of Things. "Knowledge" will magically flow from A to B. Books and art will be created by algorithms that are more creative and far smarter than humans. Self-driving cars will ferry nano-optimized superhumans through the alleys of Italy. Wars, if they still exist, will be fought autonomously by weapons systems, and so on. The story of the 4th industrial revolution is fantastic, ubiquitous and seemingly inevitable. Those who do not "upgrade" themselves will soon be superfluous.
Is this grand story of our future true?
In his book Homo Deus, Yuval Noah Harari, the loudest mouthpiece of this future, writes: "As long as all sapiens ... believe in the same story, they all follow the same rules." (p. 198) He explains why we would be well advised to believe in this 4th revolution: thanks to technology, he claims, we have already overcome war, hunger and disease, and soon we could be "superhumans." With smug detachment, Harari pieces together a whole armada of scientifically one-sided and half-researched facts into his myth of Homo Deus, which has sold millions of copies, and cements in his readers this seemingly inevitable story of our future. But where does this author stand, who is considered a guru today and is asked for advice by the greats of our time? Mark Zuckerberg, for example, asked him whether technology today unites or fragments humanity, and the CEO of Axel Springer Verlag asked what publishers could do to survive in the digital world.
Getting to the bottom of Harari
At the Institute for Information Systems and Society at the Vienna University of Economics and Business (WU), Harari was put under the microscope. After several scientists publicly complained about Harari's misrepresentations, his statements on technological developments were put to the test. For this purpose, 268 text passages from the book Homo Deus were subjected to a content analysis: wherever Harari comments on algorithms, AI, or other future technologies, four young academics led by Esther Görnemann and Sarah Spiekermann analyzed both the content of the statements and his emotional stance. The business information scientist Heval Cokyasar grouped the text passages into four themes, all of which identify Harari's story of the future as problematic:
1. Harari overestimates technology.
2. He propagates a reductionist view of humanity.
3. He describes a human-enhancement imperative.
4. He presents his story as inevitable, although he of all people, as a historian, should know that history is never strictly causal and always turns out differently than expected.
Harari overestimates technology
Harari considers the "old-fashioned material-based economy" (p. 27), whose 16,000 suppliers are unfortunately still needed to produce a chip, to be outdated. In 60 passages he extols technological capabilities in a way that can only leave a business information scientist scratching his head like one of the many "monkeys" to which his peers are compared in the book. Harari believes that algorithms will magically go their own way where no human has ever been and where no human can follow (p. 531). And although it is true that algorithms exhibit behavior that no one understands anymore, for example in financial markets, that does not mean it is good. "If we give Google and its competitors access to our biometric devices, to our DNA scans, and to our medical records, we will get omniscient health services" (p. 454), Harari writes, leaving aside the fact that an omniscient health service provider does not automatically guarantee health, and that machines may hold information but cannot really 'know' anything. AI and biotechnology could soon overtake our societies and economies, he writes (p. 507). Which technical path is supposed to lead there, however, remains unclear. That we often feel overtaken in the jungle of poorly functioning IT systems is probably true. But amid this misery of ubiquitous digitalization chaos, our Kindle will at least soon know what makes us laugh and what makes us angry (p. 464), says Harari. He neglects the fact that technical sensors can only observe that something is happening, not why. Overlooking these small but unfortunately relevant realities leads the historian to hopelessly overestimate technology. Even death is for him only "a technical problem" (p. 36). But will that bother him? After all, he writes himself (p. 234): "If one sticks ... to nothing but reality, without adding any fiction to it, few will follow one." Should politicians follow someone who adds in fiction?
Harari dramatically misrepresents human beings
Harari's story of the future becomes really technology-friendly when it comes to humans. The many comparisons of humans to pigs, chimpanzees, wolves, etc. prepare the reader for the idea that humans, as they are now, cannot remain so. In Harari's world, our decisions are merely "a chain reaction of biochemical events" that, like a computer algorithm, are "each determined by a preceding event" (p. 381). The comparison of humans to an algorithm, repeated at least two dozen times in the book, is as technically debunked as the historical belief that humans work like clocks. Specifically, following Shannon and Weaver, the term "information" is defined quite differently in computer science than in anthropology, which is concerned with experiential content and meaning. Humans are corporeal units in which brain and body are integrated, and mental processes continuously change the body. Who has ever seen software permanently change the underlying hardware? Human identity also does not consist of statically observable or retrievable memory data, like data on a hard drive. The stubbornly repeated analogy is rhetorically clever: by reducing human beings to algorithms, Harari can then compare them to our powerful computers and declare them "obsolete data-processing machines" (p. 534). This chain of arguments only works, of course, because he discards a few other fundamental aspects of being human along the way. The "I" is an "invented story" (p. 409), our human actions are "full of lies and gaps" (p. 403), even the idea that humans should be individuals at all is a "questionable liberal conviction" (p. 403). In general, man is not free and "freedom" is an "empty concept" (p. 381).
Scientifically informed readers' hackles rise at such striking statements, especially since Mr. Harari sells his book as non-fiction rather than fiction, where it actually belongs. In a nonfiction book, one would expect the spectrum of positions at least to be mentioned here and there; otherwise, one bends reality too far in the direction of fiction. But Harari does not exercise this caution. On the contrary, he himself notes: "Instead of altering their story to fit reality, they can alter reality to fit their stories." (p. 250)
Sixty times in the book, Harari describes technology as godlike, magical, or outright as the next stage of evolution that will overtake humans. These statements are never questioned. Rather the opposite: in most cases (more precisely, in 55 cases, or 92%) the author seems to consider this narrative settled. It is similar with text passages that propagate a societal duty to physically and mentally enhance humans; here too, the author seems to consider this narrative the only possible scenario in 75% of the cases. At no point does Harari question this narrative or formulate possible alternatives. Thus, unlike a critical nonfiction writer, Harari is by no means neutral. An overall view of his work reveals him as a clear proponent of the supposedly unstoppable 4th revolution. In contrast to the coming data-processing systems, man appears puny to Harari: "What is the advantage of man over chickens, please?" he asks matter-of-factly (p. 516).
Is Harari good or evil?
Now one has to ask whether Harari is not nevertheless a "good" guy. Perhaps he simply has not kept track of the many complex debates about technical limits and about the nature of human beings. Perhaps, in the stress of success, he skipped fact-checking, as his doctoral advisor Steven Gunn recently noted with regard to Harari's doctorate. Even so, he would still have had the opportunity to take a critical stance toward what he found.
Looking at Harari's emotional stance toward the content he describes, one finds that the author is for the most part neutral toward it. To be precise, in 12% of his techno-euphoric sketches he embraces them positively, in 18% he is pessimistic, and in 70% he is neutral. When it comes to the human being as an "obsolete data-processing machine," he affirms this attitude in 5% of the relevant text passages, is pessimistic about it in 8%, and remains neutral in 87%. It is this emotional neutrality, even in the face of the most inhuman statements, that leaves the reader unsure, in the end, where Harari himself actually stands. Hannah Arendt would have had a clear answer to this question: according to the philosopher, anyone who remains neutral about contempt for humanity is "evil," as banal as that may sound. The distance with which Harari discusses the "common medical folk" (p. 428) and pokes fun at what a virtual world designed by an insurance agent would probably look like (p. 441) fits this picture well.
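For transparency, stance percentages of this kind can be reproduced from coded passages in a few lines. The following is a minimal sketch, assuming each passage was assigned exactly one stance label; the raw counts (7, 11 and 42 of the 60 techno-euphoric passages) are illustrative, back-calculated from the 12%/18%/70% reported above, not taken from the study's data set.

```python
from collections import Counter

# Illustrative coding of the 60 techno-euphoric passages by emotional stance,
# back-calculated from the reported shares (12% positive, 18% pessimistic, 70% neutral).
codes = ["positive"] * 7 + ["pessimistic"] * 11 + ["neutral"] * 42

def stance_shares(codes):
    """Return each stance's share of all coded passages, rounded to whole percent."""
    counts = Counter(codes)
    total = len(codes)
    return {stance: round(100 * n / total) for stance, n in counts.items()}

print(stance_shares(codes))  # → {'positive': 12, 'pessimistic': 18, 'neutral': 70}
```

The same function applied to the 5%/8%/87% coding of the "obsolete data-processing machine" passages would work identically; only the label counts change.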
Harari does not want to question technology at all. He claims to write in order to "enable us to think about our future far more imaginatively than before" (p. 535), yet what he presents us with is nothing more than the one flat transhumanist story on which the very IT industry that has arguably made him a star is getting rich. "By choosing to tell one [of possibly several alternative stories] ... we also choose to silence the other" (p. 242), he writes himself. Policymakers and think tanks who buy into Harari's one story, which is not only scientifically dubious but carries massive negative externalities for society, should know that they are falling for an all-too-widespread myth. It aims at nothing but further increasing the power of the elites who already manipulate the collective consciousness: the owners of the major U.S. IT companies and AI technologies.
The quotes in this text are translations from the German edition of Homo Deus; all page numbers refer to the German edition.