13 May 2025

Beyond the algorithm: Decoding AI’s impact on work, thought and society

As AI reshapes humankind in unprecedented ways, the challenge lies not just in its responsible pursuit but also in ensuring that human ingenuity and machine intelligence coexist harmoniously


Every technological revolution – be it nuclear weapons, computers or the Internet – has come with a pivotal question: will it benefit humans and make their lives better, or instead prove harmful in its varied dimensions? The current debate is about Artificial Intelligence (AI), which is rapidly reshaping our world and making a profound impact on all aspects of human life. Alongside debates on its true and vast potential, there are concerns about its inherent dangers and whether AI will supplant human intelligence, displace jobs, erode democratic values and violate ethical boundaries. In the first of this two-part series, we explore the discourses surrounding this nascent but fast-evolving technological phenomenon.



In the swirling digital tempest of our age, where algorithms whisper prophecies and silicon brains dream of sentience, we stand at a precipice.

The question echoes: is this burgeoning artificial intelligence (AI) a genuine leap in cognitive evolution, or a sophisticated act of mimicry, a high-tech echo of human creativity? This fundamental query underlies the debate surrounding AI, particularly concerning the nature of understanding and originality.

Enter Noam Chomsky’s perspective on AI: that it is primarily nothing but ‘high-tech plagiarism,’ a view rooted in his critique of how AI models like GPT function. His argument essentially suggests that AI does not truly ‘understand’ or ‘create’ in the way humans do. Instead, he thinks, it relies on patterns learned from vast amounts of data, essentially reassembling or remixing existing information.

This can give the impression of creativity or knowledge, but it is more like rephrasing or reshuffling what has already been there.
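The point can be made concrete with a toy model. The Python sketch below is a deliberate simplification – far cruder than GPT-class systems, and offered purely as an illustration, not as a description of how any production model works. A bigram generator can only ever emit word transitions present in its training text, so every ‘new’ sentence is a reshuffling of what it has already seen.

```python
# A toy bigram language model: an illustration of "remixing" training data,
# not a description of how GPT-class systems actually work.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=10):
    """Walk the observed transitions, sampling a seen follower at each step."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "while the cat watched the dog")
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat ..."
```

Every word the generator produces was already in the corpus, and every transition was already observed; what varies is only the order of recombination – which is, in miniature, the thrust of Chomsky’s ‘high-tech plagiarism’ charge.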

“It is true that chatbots cannot in principle match the linguistic competence of humans, for the reasons repeated above. Their basic design prevents them from reaching the minimal condition of adequacy for a theory of human language: distinguishing possible from impossible languages. Since that is a property of the design, it cannot be overcome by future innovations in this kind of AI,” Chomsky argued in his interview with C.J. Polychroniou.

However, he added in the same breath that it was quite possible that future engineering projects would match and even surpass human capabilities if we meant the human capacity to act, perform tasks, and so on. “Some have long done so: automatic calculators for example.” 

In a way, he is arguing that AI lacks true originality or comprehension – the qualities humans bring to creative endeavours and to forming new ideas. While AI can generate responses that seem new or insightful, it does so without the innate understanding and intention that genuine creation requires.

However, some experts would argue, with equal fervour, that even though AI does not ‘create’ in the human sense, it still has utility in assisting creativity, accelerating problem-solving and offering perspectives that might not otherwise have emerged. That is, the ability of AI to generate new combinations of existing information can still be valuable to us in different ways, even if it does not align with traditional notions of creativity.

Ultimately, whether AI is seen as plagiarism or innovation depends on one’s definition of creativity and the role AI should play in society. The distinction between human thought and machine-generated responses is crucial in this debate. While AI may not have true understanding or consciousness, its capabilities are certainly reshaping many fields in interesting ways.

Consequently, there is considerable concern among societies and nations about AI’s manifestations. Alongside worries about the unbridled abuse of this technological frontier, the key concern is that AI will take over jobs and make employment scarce for future generations.

The discourse around AI

As a result, many questions emerge around the AI phenomenon: Will AI reshape society the way computers did decades ago? Will AI create more jobs than it displaces? Will it deepen inequalities or open up new areas of growth?

History tells us that every significant change in technology demands a corresponding change in society; AI will not be any different. Those who can predict, prepare and adapt could successfully harness the technology. This, in fact, has held true for all technologies – be it nuclear weapons, computing systems or the Internet.

The invention of the computer, it is widely believed, has been the most significant turning point in human history after the harnessing of electricity. Like every major technological leap, computers, too, disrupted the status quo, sweeping away long-established ways of working as tasks were automated and industries reshaped.

The Industrial Revolution (18th–19th centuries) was one of the most transformative periods in human history, bringing both unprecedented job losses and new opportunities. Like every major technological shift, it upended traditional ways of working: mechanised looms replaced manual textile workers, rendering traditional weaving skills obsolete and provoking protests by the Luddites.

This wave of automation marked one of the greatest shifts in employment history, much like the impact of electricity and computers in later centuries.

Perhaps more than many earlier inventions, the computer raised immediate concerns about the loss of jobs, particularly in manufacturing, clerical and administrative work. These worries were not wholly unfounded. As businesses adopted digital workflows, traditional positions such as clerks, front-desk operators and typists became obsolete.

Manufacturing facilities once wholly reliant on manual labour largely adopted computer-controlled automation, reducing the need for human labour in repetitive production tasks.

Yet, this transition was not a simple case of job loss alone.

While computing systems eliminated a considerable number of conventional roles, they also paved the way for entirely new functional areas and the jobs that came with them. Software development, information technology (IT) services, cybersecurity and digital marketing were domains non-existent before the digital age, and they currently form the fulcrum of many advanced as well as burgeoning economies.

The economy shifted from being labour-intensive to knowledge-driven, emphasising problem-solving, data analysis and creativity over sheer manpower. Computers did not merely replace workers; they transformed how work itself was done, necessitating new skills and new ways of thinking.

Many of us now believe that AI is not a destroyer but a transformer.

Like every major invention before it, AI too will initially cause disruption, but over time, things will realign more efficiently. Rather than eliminating jobs, AI will reshape them, shifting work towards creativity, strategy, and human intelligence. AI’s true power, it is widely felt, lies in enhancing human capabilities, making us faster, smarter, and more efficient rather than replacing people.

Just as the advent of computers gave birth to new fields like software and hardware engineering and information technology, AI could also open doors to a multitude of roles and functions. These could include job profiles such as AI trainers, linguists, ethicists and automation supervisors, along with functional areas not yet conceived, leading not just to far greater opportunities but to a new kind of workforce.

Experts are optimistic that the disturbance in the job market is temporary, and that in the long term AI can offer humankind greater opportunities for lasting progress in ways hitherto unknown. They also believe AI will unsettle industries only for a short period – just as the Industrial Revolution, computers and the Internet did.

In essence, what gives them solace is the history of such inventions and innovations, which shows that the world adapts faster than we fear. As businesses, workers and policies adjust, AI could eventually integrate smoothly into our daily lives.

AI through the eyes of thinkers

While the numbers and trends paint a compelling picture of AI’s transformative impact on the job market, the conversation extends far beyond statistics. The rise of AI is not just an economic shift but a profound philosophical and ethical dilemma.

Thinkers like Yuval Noah Harari, along with other futurists and scholars, have raised deeper concerns – not only about job displacement but about the very role of human beings in an AI-driven world. What happens when machines surpass human intelligence in decision-making, creativity, and even emotional labour?

To truly grasp the future AI is shaping, it is essential to explore the perspectives of these intellectual voices.

Harari, in his book Nexus, offers a sobering perspective on the integration of AI into society, echoing concerns about its potential to reshape not only the job market but also the very fabric of democracy and individual liberty. He acknowledges that “every time a powerful new technology has emerged, anxieties arose that it might bring about the apocalypse, but we are still here,” yet cautions that “even if in the end the positives of these technologies outweigh their negatives, getting to that happy end usually involves a lot of trials and tribulations.”

This mirrors the broader narrative of AI’s disruptive potential, as seen in the discussion about job displacement and industrial transformation.

Harari addresses the historical context of technological anxiety, noting that “fears of automation leading to large-scale unemployment go back centuries, and so far they have never materialised.” He points to the Industrial Revolution as a precedent, where displaced agricultural workers found new roles in factories.

However, he also highlights the unprecedented nature of AI’s capabilities, stating, “unfortunately, nobody is certain what skills we should teach children in schools and students in university, because we cannot predict which jobs and tasks will disappear and which ones will emerge.”

This uncertainty is compounded by the fact that “some skills that we have cherished for centuries as unique human abilities may be automated rather easily,” challenging traditional notions of human expertise and value.  

The erosion of democratic discourse is a central concern for Harari. He observes that “for most of history large-scale democracy was impossible because information technology wasn’t sophisticated enough to hold a large-scale political conversation,” and now, “ironically, democracy may prove impossible because information technology is becoming too sophisticated.”

He fears that “if unfathomable algorithms take over the conversation, and particularly if they quash reasoned arguments and stoke hate and confusion, public discussion cannot be maintained.”

The ease with which misinformation and sensationalism can spread, driven by profit motives, is another specific concern. As Harari notes, “It is not difficult to understand why printers and booksellers made a lot more money from the lurid tales of The Hammer of the Witches than they did from the dull mathematics of Copernicus’s On the Revolutions of the Heavenly Spheres.” His related warning that “a completely free market of ideas may incentivise the dissemination of outrage and sensationalism at the expense of truth” underscores how this dynamic further undermines the foundation of informed democratic decision-making.

Furthermore, Harari warns of the potential for AI to bolster totalitarian regimes. He explains that “totalitarianism seeks to channel all information to one hub and process it there,” and “the rise of AI may greatly exacerbate these problems.” He fears that “if even just a few of the world’s dictators choose to put their trust in AI, this could have far-reaching consequences for the whole of humanity.”

The danger, he suggests, lies not in AI’s overt rebellion but in its subtle influence: “The easiest way for an AI to seize power is not by breaking out of Dr Frankenstein’s lab but by ingratiating itself with some paranoid Tiberius.” He also cautions that “it would be foolish of dictators to believe that AI will necessarily tilt the balance of power in their favour. If they aren’t careful, AI will just grab power for itself.”  

Finally, Harari introduces the concept of the ‘Silicon Curtain,’ a digital divide that transcends physical borders.

He states: “during the Cold War, the Iron Curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the Silicon Curtain.” This curtain, “made of code,” determines “on which side of the Silicon Curtain you live, which algorithms run your life, who controls your attention and where your data flows.”

This fragmentation of the digital world, where “it is becoming difficult to access information across the Silicon Curtain, say between China and the United States, or between Russia and the EU,” poses a significant challenge to global collaboration and understanding.  

In essence, Harari’s perspective reinforces our point: AI’s transformation is not merely a technological or economic phenomenon but a profound shift in the human experience. It demands a critical examination of our values, institutions, and the very nature of intelligence itself.

Noam Chomsky, while approaching the subject from different angles, shares a common thread of apprehension regarding the uncritical embrace of AI.

Chomsky focuses more on the fundamental limitations of current AI, particularly Large Language Models (LLMs). He differentiates between AI as a science seeking understanding and AI as engineering focused on creating useful products. He expresses concern over reckless claims and potential harm from AI, particularly LLMs and chatbots, highlighting their potential for disinformation and defamation.

He emphasises that current LLM systems cannot truly understand language or cognition, as they fail to distinguish between possible and impossible languages. He also points out that AI lacks a human moral faculty, which could lead to dangerous outcomes if applied to critical areas like patient care or missile defence systems.

While acknowledging the potential benefits of AI, Chomsky is sceptical about the possibility of effectively controlling its threats, suggesting that malicious actors may find ways to evade safeguards.

This aligns with Harari's concern about AI being used for malicious purposes, particularly by authoritarian regimes. Both thinkers emphasise the need for careful consideration of the ethical and societal implications of AI, rather than simply focusing on its potential benefits.

Chomsky’s scepticism about AI's comprehension and Harari's warnings about its societal impact paint a picture of AI as a powerful tool that demands careful management and a deep understanding of its limitations.

Such philosophical reflections entail a deep interrogation of the elements of AI that we fail to grasp when approaching the phenomenon from a purely technological angle. The impact on society, nations and, above all, the human race can only be discerned when deeper inquiries are made and all manifestations of the technology are imagined, so as to develop credible and lasting normative frameworks for its constructive functioning.

What the technologists have to say

Accordingly, it is vital to also look at the other side of the aisle – how technology leaders view the AI phenomenon. Interestingly, notwithstanding the urge to harness the technology to its optimum, technology leaders are also wary of rampant misuse and of AI’s potential to disrupt normalcy and the prevailing equilibrium in the global order.

A notable instance in this discourse is the alarm triggered by Sundar Pichai, the chief of Google/Alphabet, with his revelation in October 2024 that more than a quarter of Google’s new code is now generated by AI and subsequently reviewed by its human programmers.

AI, according to Pichai, will “spur innovation, opportunity and growth in economies around the world, and drive an explosion in knowledge, learning, creativity, and productivity that will shape the future in exciting ways.” Every generation worries that the new technology will change the lives of the next generation for the worse – and yet, it is almost always the opposite, he remarked at the AI Action Summit in Paris in February this year.

Earlier, he had termed AI the biggest technological shift, “bigger than the Internet – a fundamental rewiring of technology and an incredible accelerant of human ingenuity.” In his 2,400-word blog written for the 25th anniversary of Google in 2023, Pichai stated, “as excited as we are about the potential of AI to benefit people and society, we understand that AI, like any early technology, poses complexities and risks. Our development and use of AI must address these risks, and help to develop the technology responsibly.”

At the Paris summit, Pichai called for addressing risks without stymieing innovation and investment. Pointing out that a fragmented regulatory environment, with different rules across countries and regions, would create problems, he called for drawing on existing laws to fill the gaps, as opposed to the new global normative framework demanded by many quarters.

Pichai’s counterpart at Microsoft, Satya Nadella, has famously stated that AI is a tool and should not be anthropomorphised. “I don’t like anthropomorphising AI. I sort of believe it’s a tool,” he said, adding, “I think one of the most unfortunate names is ‘artificial intelligence’. I wish we had called it ‘different intelligence’.”

However, Nadella’s most discerning pitch was: “Because I have my intelligence, I don’t need any artificial intelligence.” Elaborating on this, Nadella remarked that users should recognise the fact that the capabilities exhibited by AI software do not equate to human intelligence. “It has got intelligence, if you want to give it that moniker, but it’s not the same intelligence that I have,” he stated in a May 2024 interview given to Bloomberg Television.

At a fireside chat with Nandan Nilekani, co-founder of Infosys, earlier this year, Nadella envisaged the possibility of “humans and swarms of agents working together where AI agents will act as digital workers, orchestrating tasks across multiple systems to improve operational efficiency.”
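What such a ‘swarm of agents’ might look like in practice can be pictured with a small sketch. The Python below is purely hypothetical – it is not any Microsoft product or API, and the agent names and functions are invented for illustration. It shows the orchestration idea in miniature: a coordinator routes each step of a task plan to a specialised ‘digital worker’ and passes results forward.

```python
# A hypothetical sketch of the "swarm of agents" idea: a coordinator routes
# each step of a task to a specialised agent. The agents here are stand-ins
# (plain functions); real systems would call models, APIs or services.
from typing import Callable, Dict, List, Tuple

def summarise(payload: str) -> str:
    return f"summary({payload})"

def schedule(payload: str) -> str:
    return f"meeting-booked({payload})"

def notify(payload: str) -> str:
    return f"email-sent({payload})"

AGENTS: Dict[str, Callable[[str], str]] = {
    "summariser": summarise,
    "scheduler": schedule,
    "notifier": notify,
}

def orchestrate(steps: List[Tuple[str, str]]) -> List[str]:
    """Run a task plan, handing each step to the named agent and passing
    results forward so later agents can build on earlier work."""
    results = []
    context = ""
    for agent_name, instruction in steps:
        context = AGENTS[agent_name](f"{instruction}|{context}")
        results.append(context)
    return results

plan = [("summariser", "Q1 report"),
        ("scheduler", "review meeting"),
        ("notifier", "team")]
for step_result in orchestrate(plan):
    print(step_result)
```

In a real system the stand-in functions would be calls to models, APIs or enterprise systems; the design point is that the orchestrator, not any individual agent, owns the end-to-end task.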

While Nadella dismisses as hype the notion that Artificial General Intelligence (AGI) would replace humans in most tasks, it is evident that Microsoft, the company he heads, has invested massively in AI ventures and is the primary backer of OpenAI. Similar to what Google has done with AI-led coding, Microsoft under Nadella released Copilot, an AI assistant that automates parts of coding. Microsoft has also integrated OpenAI’s language models into Bing, its search engine.

Moving up the ladder on the implications of AI, particularly in the arena of coding, was Nvidia chief Jensen Huang, who felt that AI would kill coding because “everybody is now a programmer,” thanks to the numerous popular AI platforms. Terming AI the “new industrial revolution,” Huang bats for AI-first companies that gear up to use AI across all processes of production and productivity, operating through an ‘AI brain.’

Huang also claimed that AI will pass “human tests” in five years. “If I gave an AI … every single test that you can possibly imagine, you make that list of tests and put it in front of the computer science industry, and I’m guessing in five years time, we’ll do well on every single one,” he remarked in March 2024 at a Stanford conference, also adding that AGI may be much further away because scientists still disagree on how to describe how human minds work.

Another interesting debate among technologists was the 2019 public faceoff between Jack Ma of Alibaba and Elon Musk of Tesla/SpaceX, in which Ma claimed that AI poses no threat to humanity while Musk disagreed, calling those “famous last words.”

“Computers may be clever, but human beings are much smarter,” Ma said, adding that “I think AI can help us understand humans better. I don’t think it’s a threat.” In his response, Musk claimed that the “rate of advancement of computers, in general, is insane” and warned that in the near future “super-fast, artificially intelligent devices will rebel against dumb and slow humans.”

Ma felt that humans can never create another human: computers are just toys with chips, while humans have hearts, which is where wisdom comes from. He disagreed with the idea that humans would be controlled by machines, considering it impossible because humans invented machines.

Though conceding that computers are clever, Ma felt humans are smarter because humans possess experience-driven intelligence, whereas computers have only knowledge-driven intelligence.

Such vibrant debates define the nascent evolution of the AI phenomenon – marked by new technologies, tools and platforms – with the world unsure of the direction this spectrum will take. What comes through clearly from these dialectics, though, is that AI is not just changing how we use intelligence – it is redefining what intelligence itself means!
