
Why AI Hysteria Misses Humanity's True Potential

Summary report written by David Brunner, CEO & Founder, and Anupriya Ankolekar, Co-Founder & Principal Scientist, of ModuleQ

3 June 2024 - Across the large Western economies, citizens are broadly opposed to AI. This is the troubling conclusion of the 2024 Edelman Trust Barometer. In the US, the UK, Germany, and France, less than 20% of the population “embrace the growing use of AI”, while 50% or more “reject the growing use of AI”. Given the tremendous and transformative potential of AI, this widespread antagonism is alarming. Unchecked, it may threaten the prosperity and sustainability of these societies. The Barometer finds many underlying concerns. At the top of the list are threats to privacy and “the possibility that it may devalue what it means to be human”.

While privacy has been the subject of much discussion and legislation, devaluing “what it means to be human” seems to be a thornier challenge. Extrapolating from the extraordinary achievements of Large Language Models (LLMs) such as GPT-4, prominent scientists and entrepreneurs speculate that AI will surpass human intellect, rendering human beings superfluous and obsolete. To the extent that our sense of self-worth comes from our ability to make unique and valuable contributions to society, that would certainly devalue “what it means to be human”.

But this fear is unfounded, a misguided conclusion drawn from two mistaken premises: first, an overestimation of the abilities of AI and, second—equally important but often overlooked—an underestimation of humanity. The abilities of AI are extraordinary and superhuman, yet they are also much narrower than they appear. Conversely, the unique abilities of human beings are far more profound, powerful, and mysterious than generally acknowledged. This underestimation likely stems from the great influence of economic theory on public discourse; that theory rests on assumptions about human behavior that obscure the complexities at the heart of our humanity.

Is the reductive caricature of human nature at the core of economic science the reason citizens in strongly neoliberal nations such as the US, the UK, and Germany are much more hostile toward AI than those of Japan, India, and China?

Perhaps a more nuanced view of AI’s limitations and humanity’s self-actualising power can help overcome this fear and facilitate broader adoption of AI in ways that complement human work and serve the wider interests of humanity.

FutureMatters is a platform for thought leaders, practitioners, and industry players to share their insights on emerging opportunities and challenges in today's world.