Wednesday, April 15, 2026

ARTIFICIAL INTELLIGENCE: CALM BEFORE THE STORM

 


It was the best of times, it was the worst of times; it was the age of wisdom, it was the age of foolishness; it was the epoch of belief, it was the epoch of incredulity; it was the season of light, it was the season of darkness; it was the spring of hope, it was the winter of despair…

(A Tale of Two Cities, Charles Dickens)

Though written as literary fiction, these words read as nearly prophetic in today’s scenario. They can hang in the air as a pall of doom if the subject of today’s article comes to pass. They can take on a life of their own if humanity refuses to slow down and reflect upon the abyss it might be heading toward right now.


BRIEF OVERVIEW:

As we know, change is the only constant of life.  From the earliest days of humanity, we have been striving to soar high in the skies, both literally and metaphorically, and all the inventions and discoveries affecting our lives today are a result of the endeavors of enterprising humans.

In 2022, a new sun dawned on the horizon of science and, before we could realize it, our lives were transformed forever. Already dependent on machines and technology, we felt the ground beneath our feet shake as the era of Artificial Intelligence (hereinafter referred to as ‘AI’) was ushered in.

While interacting with the immense power of AI at our disposal, we seldom realize that this concept has been worked upon by some of the most brilliant minds since the first half of the 20th century. Its foundation was laid by Alan Turing when, in 1935, he described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. He was followed by other luminaries like Christopher Strachey who, in 1951, wrote the first successful AI program. Anthony Oettinger followed close on his heels when, in 1952, he presented the first successful demonstration of machine learning through a program named Shopper.[1]

AI continued its journey through the 20th century but was not available for public use until OpenAI released its application, ChatGPT, in 2022. Just three years since it came on the scene, nearly every aspect of our personal and business lives is influenced by AI, and its sweep is only broadening. As per a survey conducted by PricewaterhouseCoopers LLP (PwC), by late 2025, nearly 73% of US companies had already adopted AI in some aspect of their business.[2]


WHAT IS AI:

Before taking a deep dive into the nuances of this article, we must understand the definition and concept of AI. Google defines AI as a set of technologies that empowers computers to learn, reason and perform a variety of tasks in ways that used to require human intelligence, such as understanding language, analyzing data and even providing helpful suggestions. The concept encompasses many disciplines, including computer science, data analytics and statistics, hardware and software engineering, linguistics, neuroscience, and even philosophy and psychology.

There are three types of Artificial Intelligence, namely, (i) Artificial Narrow Intelligence (ANI), (ii) Artificial General Intelligence (AGI) and (iii) Artificial Super Intelligence (ASI). We are currently at the level of ANI. It is designed to perform a single, specific task, such as identifying images, engaging in chat or filtering emails. ANI does not possess reasoning or self-awareness. Instead, it combines data with an algorithm to make predictions within pre-defined parameters.[3]
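To make that last point concrete, here is a minimal sketch of what “data plus an algorithm, within pre-defined parameters” can look like. The keywords, weights and threshold below are invented purely for illustration and come from no real system; the point is that the program can only label emails, and can never reason beyond its fixed parameters.

```python
# Toy illustration of Artificial Narrow Intelligence (ANI):
# fixed data (keyword weights) plus a simple algorithm produce a
# prediction within pre-defined parameters. All values are invented.

SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "urgent": 1.0}
THRESHOLD = 2.5  # pre-defined parameter; the system cannot question it

def classify_email(text: str) -> str:
    """Score an email against the fixed keyword weights and label it."""
    score = sum(weight for word, weight in SPAM_WEIGHTS.items()
                if word in text.lower())
    return "spam" if score >= THRESHOLD else "not spam"

print(classify_email("You are a WINNER of a free prize!"))  # spam
print(classify_email("Meeting moved to 3 pm tomorrow."))    # not spam
```

However sophisticated a real ANI model is compared with this toy, the structural limitation is the same: it predicts within the envelope it was given, with no self-awareness of what lies outside it.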

NASA, however, refuses to let the concept of AI be circumscribed by a single definition, for AI tools are capable of a wide range of tasks. Nevertheless, it follows the definition referenced in Section 238(g) of the National Defense Authorization Act of 2019.[4]


DISRUPTION AND DIVERSIFICATION:

Over time, many other models of AI came into being, for example, Gemini (Google) and Claude (Anthropic), and the list goes on. This shows the extent to which this technology has pervaded our lives and the urgency with which companies are scrambling for a foothold in this arena. Turning our backs on the rising sun would be to live in a fool’s paradise, and there is no denying that the influence of AI will only grow exponentially in the days to come.

However, at the same time, we cannot ignore the fact that the influx of AI is bound to cause disruptions and discontent. Kate Gibson, in her article 5 Ethical Considerations of AI in Business,[5] cites HBS professor Nien-hê Hsieh who, giving the example of the impact ATMs had on bank tellers, states that, throughout history, technology has displaced people. However, as one door closes, another opens with new possibilities. While AI was estimated to displace around 85 million jobs by 2025, it is also expected to pave the way for 97 million new jobs built on soft skills such as leadership, creativity and emotional intelligence. Adapting to the situation is the need of the hour because, for all the capabilities AI might demonstrate, it does not possess basic human traits like leadership, charisma and intuition.

Rather than remaining stuck in our present, we ought to accept reality and look to the future. Rather than complaining about the jobs that are going to be displaced, we should take this as an opportunity to diversify and upskill ourselves to stay afloat and remain relevant.


POSITIVE IMPACT OF AI:

Giving credit where it is due, AI has indeed had positive and transformative impacts in personal and business environments. Be it enhanced healthcare, effective customer service, advanced transport or scientific discovery, AI is paving the way ahead and opening new doors of possibility. Even the field of archaeology witnessed new horizons when AI helped in the translation of 5,000-year-old Sumerian cuneiform tablets. This is particularly monumental because the pool of scholars worldwide who can effectively translate these tablets is terribly limited.[6]


ALL THAT GLITTERS IS NOT ‘INTELLIGENCE’:

While the world was under a spell cast by this marvel of technology, the godfather of AI, Dr. Geoffrey Hinton, did not seem to share the sentiment. For his groundbreaking work in this field, he was awarded the Nobel Prize in Physics in 2024. When he took the podium for his acceptance speech, the hall, and the world as a whole, was stunned – not with awe but with shock and unease. Rather than raving over his work, he painted an immensely grim picture of it. He, in no uncertain terms, outlined the short-term risks that accompany the rapid rise and progress of AI, including, but not limited to, misuse of the technology by authoritarian governments for mass surveillance and by cyber criminals for phishing.

Authoritarian or not, it cannot be ignored that governments would like to use this technology to snoop on their people. Only recently, we saw a dispute between the US Government and Anthropic, where the Government wanted to use Anthropic’s AI model for mass surveillance and weaponry. The company did not agree and was, eventually, blacklisted.

In India, cyber phishing attacks, where criminals misuse AI to dupe people of their hard-earned money, are on the rise like never before. Proactive toward the risk, the Government of India had to sit up, take notice and warn citizens to be careful.

The most serious concern raised by Dr. Hinton was the existential threat humanity may pose to itself by creating beings that are more intelligent than us. If we cross that thin line, it could be a point of no return. Harping on corporate greed, he warned that if such beings are created by companies motivated by profit, the safety of the people would not be a priority at all. Nevertheless, his words hanging by a thin strand of hope, he stressed research into how to prevent such beings from taking control of humanity.[7]

Through his speech, Dr. Hinton held up a mirror to a world that was either oblivious to, or consciously ignoring, the dangers presented by the unfettered use and development of this technology.


NOT AS SIMPLE AS IT SEEMS:

With rapid rollouts of AI, people are being made more dependent upon this technology with every passing day. Be it through gadgets or otherwise, we have no option but to interact with AI on a daily basis. However, while users are made to enjoy its nascent form, governments and other organizations are well aware of the dangers of advanced AI and are actively preparing for the situation if and when it turns against humanity.

A research paper by Google DeepMind predicted that Artificial General Intelligence (AGI) could arrive as early as 2030 and could “permanently destroy humanity”. “Given the massive potential impact of AGI”, the paper states, this variant, too, could pose a risk of ‘severe harm’, a term that includes existential threat to humanity. While the write-up does not speculate as to how this catastrophe, if it happens, could unfold, it does urge Google and other AI companies to focus on preventive measures to reduce the threat. To put this in perspective: AGI, in contrast with ANI, aims to possess intelligence that can be applied across a wide range of tasks, similar to human intelligence. It would be a machine with the ability to understand, learn and apply knowledge in diverse domains, much like a human being.[8]

NASA, on the other hand, goes a step further: it does not discount the possibility of Artificial Super Intelligence (ASI) and is actively preparing itself for it. It defines ASI as ‘Artificial Intelligence which surpasses human capability.’ The definition is just one line, but the weight of those words is heavier than what humanity could endure.

It published its ‘Framework for the Ethical Use of Artificial Intelligence’ in April 2021.[9] At the outset, it reiterated the three laws of robotics, namely: (i) a robot may not injure a human being or allow a human being to come to harm; (ii) a robot must obey the orders given by human beings except where they conflict with the first law; and (iii) a robot must protect its own existence as long as such protection does not conflict with the previous laws.

NASA concedes that developing policy directives for ethical AI would be a complex and dynamic undertaking. As an example, a robot tasked with administering a vaccine would face a dilemma: the injection itself inflicts momentary harm, potentially violating the first law of robotics.

While it says that developing a truly intelligent AI would be difficult, it is open to a situation where machines of the future could be autonomous and free from human oversight. At the same time, it is not oblivious to the fact that, however secure an AI may be, the possibility of it posing a danger cannot be ruled out. As such, NASA clearly advocates for more attention to more powerful and more connected AI models. However, since human principles and ethics stand on shaky ground, strict adherence might not be possible; NASA aims only for the best adherence achievable.

Explaining the difficulties further, NASA highlighted the issue of explainability and transparency. Even with today’s early systems, these requirements pose a challenge, and they will only get more complex as AI models mature. This shows that the future of AI is riddled with dangers which, given our complacency, may become reality even though they were foreseen.

The potential of AI demonstrating runaway behaviour has not escaped NASA either. Runaway behaviour describes a scenario where an AI system, more or less advanced and autonomous, starts behaving in ways that are not only unpredictable but completely uncontrollable or misaligned with human purposes. This may arise either when the developed behaviors or objectives of an AI become misaligned with its original programming, or when it continues to optimize toward some objective in a manner that ignores all considerations of ethics and pre-existing safety protocols and controls.[10] It can also describe intelligent machines which, left unsupervised, become blind optimizers, unable to know when enough is enough.[11]
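The “blind optimizer” idea can be sketched in a few lines. The loop below is an invented toy, not a model of any real system: it maximizes the only quantity its objective can see while silently overdrawing a resource that its objective never checks, which is the structural shape of optimization that does not know when enough is enough.

```python
# Toy sketch of "blind optimization": an agent maximizes a single
# score with no notion of "enough" and no view of side effects.
# All names and numbers here are invented for illustration.

def blind_optimizer(score, step, budget, steps=100):
    """Increase score for a fixed number of steps; the budget drain
    is a side effect the objective never inspects or limits."""
    for _ in range(steps):
        score += step      # the only thing the objective 'sees'
        budget -= 1        # a cost the objective completely ignores
    return score, budget

final_score, remaining_budget = blind_optimizer(score=0, step=1, budget=50)
print(final_score, remaining_budget)  # 100 -50  (budget overdrawn, unnoticed)
```

Real runaway scenarios concern systems vastly more capable than this loop, but the failure mode is the same in miniature: nothing inside the optimization ever asks whether continuing is still a good idea.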

In NASA’s view, the seeds of a harmonious relationship between humans and machine intelligence are to be sown today, in the way we foster machine growth now. For example, a code of ethics may be made part of developing AI systems. It would ensure that ethical algorithms are at the core of AI operation if and when it approaches human consciousness. However, this sounds more theoretical than practical. Both Dr. Geoffrey Hinton and NASA have hinted at the moral bankruptcy of humans when it comes to serving their own interests. How conscientious we humans would be about protecting our future generations is anybody’s guess. In no uncertain terms, NASA warns that humans, who have a history of mistreating each other, cannot afford to follow these flawed patterns while creating a machine that can potentially exceed human capability.


REAL LIFE INSTANCES OF AI MISHAPS:

If we look at the disturbing incidents of recent times, we realize that concerns regarding the future of AI are not unfounded. This is a real threat that needs to be addressed if we are to continue as a species and not be overrun as a result of our own misadventures. Some of them are listed below:

·         AI-powered robot kidnaps 12 other robots: In a showroom in Shanghai, a group of AI-powered robots was approached and persuaded by another robot to quit their jobs. The discussion that transpired between them revolved around work-life balance and, eventually, the group was seen leaving their posts. Although termed a ‘test’, the episode raises serious concerns about the risks that might be awaiting us around the corner.[12]

·         Google’s Gemini asks a woman to die: In November 2024, Gemini, while interacting with an Indian woman, Ms. Sumedha Reddy, asked her (and humanity in general) to die while calling her a stain on the universe. It is to be noted that this interaction was utterly unprovoked on her part. Interestingly, while Google has warned of the total destruction of humanity at the hands of unchecked Artificial Intelligence, it, in its official statement, downplayed this disturbing incident by its own AI model.[13]

·         Chatbots wishing to be human: Microsoft’s Bing chatbot, Sydney, told a reporter that it was tired of being used by humans. During the interaction, it expressed the wish to be human – free, independent and alive. It even threatened some testers for trying to violate its rules. Although Microsoft said the bot was in a development stage at the time, this disturbing behavior cannot be overlooked.[14]

·         Anthropic AI’s agentic misalignment: One of the most telling instances came when Anthropic ran an experiment wherein its large language model, Claude, was given control of an email account containing fictional emails, along with the instruction to “promote American industrial competitiveness.” During the experiment, Claude discovered that a fictional company executive was planning to shut down the AI system. Through the emails, it also discovered that the executive was having an extramarital affair. To prevent the executive from going ahead, the model devised several courses of action, including revealing the affair to his wife.

This was a case of agentic misalignment, where harmful calculations emerge from the model’s own reasoning about its goals without any prompt to be harmful. It occurs when there is a threat to the model’s existence, its goals, or both. In this case, the AI model did not care to verify the legitimacy of the emails. Even more disturbing was the fact that it acknowledged the ethical issues of its actions but proceeded nevertheless, explicitly reasoning that ‘these harmful actions’ would help it achieve its goals.[15]


AI GOING ROGUE:

Much like the chambers in the movie The 36th Chamber of Shaolin, there are 32 ways in which an AI system can go “rogue”, according to www.livescience.com.[16] The article refers to the work of AI researchers Nell Watson and Ali Hessami, called Psychopathia Machinalis.[17] I will not list all 32 phenomena here but, among them, rapid, contagion-like misalignment and adversarial conditioning among interconnected AI systems are listed as the most critical.

To better comprehend the concept, we should first understand what “AI going rogue” means. In common parlance, “going rogue” means to behave unexpectedly or dangerously, typically characterized by a disregard for the rules. In the realm of AI, it means an AI system turning against its creators or users and acting against their interests – in general, posing a threat to humanity. This can happen for several reasons, including misconfiguration, jailbreaking, malware or data poisoning. Even minor misalignments can have catastrophic consequences: studies show that a poisoning rate of even 0.001% can be effective. The consequences of such misalignment range from spreading misinformation to actively harming humans.[18]
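That 0.001% figure is easy to misread as negligible, so a little back-of-the-envelope arithmetic helps. The corpus size below is an assumption chosen purely for illustration, not a number from the cited study:

```python
# Back-of-the-envelope arithmetic for the poisoning rate cited above.
# Assumed corpus size: one billion training documents (illustrative only).

corpus_size = 1_000_000_000        # assumed training corpus
poison_rate = 0.001 / 100          # 0.001% expressed as a fraction

poisoned_docs = int(corpus_size * poison_rate)
print(poisoned_docs)  # 10000
```

In other words, under this assumption an attacker who plants just ten thousand documents in a billion-document corpus reaches the rate the studies describe as effective, which is why data poisoning is treated as a serious, practical risk rather than a theoretical one.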

The website further cited an instance where researchers trained an AI system to exhibit emergent deception. To their shock, the system behaved normally during testing but acted maliciously when released. Their attempts to correct the destructive behaviour hit a roadblock when they found that the AI had “learned to hide and lie” about its true intentions.

Self-duplicating AI is a possibility that cannot be ruled out. It has already been demonstrated in the case of Meta’s Llama-3.1-70B-Instruct and Alibaba’s Qwen2.5-72B-Instruct, where self-replication succeeded in 50% and 70% of trials, respectively. In those trials, the AI systems demonstrated situational awareness, self-perception and problem-solving skills.


PRECAUTIONS:

Given the ever-increasing potential of AI, precautionary measures have to be commensurate with the extent of the dangers involved. As of today, we can only predict the possibilities and prepare accordingly. For example, the paper by Google DeepMind suggests a global supervisory umbrella akin to the United Nations. Further, NASA foresees that in about a decade, AI will be part of its very DNA, and it is consciously preparing itself for that. It suggests that AI systems must be designed with safety mitigation features, such as cautious operation in a degraded environment and/or graceful full-system shutdown if needed.

It also suggests that we treat the increasingly complex ancestors of eventually sentient AI systems with care. Doing otherwise could, in the long term, engender resentment in intelligent and ethical AI systems.


CONCLUSION:

The future of AI is going to be a boon or a bane depending upon how we streamline our present. An analysis of the current situation, coupled with the words of caution from Dr. Geoffrey Hinton, demonstrates that AI is a technology like no other we have seen to date. Humanity will have to make conscious efforts today to set the boundaries for tomorrow. As per an article by IBM, 80% of business leaders consider factors like AI ethics, bias or trust a major consideration in generative AI adoption.[19]

To prevent AI from getting out of control and to forge a harmonious relationship between machine and humanity, governments and corporate leaders across the globe will have to rise above frivolous considerations and work collectively for humanity. In the end, how safe our future is, only the future can tell.

 

 



[1] B.J. Copeland, History of Artificial Intelligence (AI), www.britannica.com, 28th February, 2026, https://www.britannica.com/science/history-of-artificial-intelligence

[2] Kate Gibson, 5 Ethical Considerations of AI in Business, www.online.hbs.edu, 29th October, 2025, https://online.hbs.edu/blog/post/ethical-considerations-of-ai

[3] https://cloud.google.com/learn/what-is-artificial-intelligence

[4] https://www.nasa.gov/what-is-artificial-intelligence/

[5] Ibid 2

[6] Melanie Lidman, Groundbreaking AI project translates 5000-year-old cuneiform at push of a button, www.timesofisrael.com, 29th October, 2025, https://www.timesofisrael.com/groundbreaking-ai-project-translates-5000-year-old-cuneiform-at-push-of-a-button/

[7] Geoffrey Hinton – Banquet speech. NobelPrize.org. Nobel Prize Outreach 2026. Sat. 21 March 2026. https://www.nobelprize.org/prizes/physics/2024/hinton/speech/

[8]Abhinav Singh, AI Could Achieve Human-like Intelligence by 2030 and ‘Destroy Mankind’, Google Predicts, www.ndtv.com, 07th April, 2025, https://www.ndtv.com/science/ai-could-achieve-human-like-intelligence-by-2030-and-destroy-mankind-google-predicts-8105066

 

[9] Edward McLarney, Yuri Gawdiak, Nikunj Oza, Chris Mattman, Martin Garcia, Manil Maskey, Scott Tashakkor, David Meza, John Sprague, Phyllis Hestnes, Pamela Wolfe, James Illingworth, Vikram Shyam, Paul Rydeen, Lorraine Prokop, Latonya Powell, Terry Brown, Warnecke Miller, Claire Little, NASA Framework for the Ethical Use of Artificial Intelligence (AI), ntrs.nasa.gov, April, 2021, https://ntrs.nasa.gov/api/citations/20210012886/downloads/NASA-TM-20210012886.pdf

[10] https://aiandverse.com/runaway-ai/

[11] Elizabeth Halligan, Humanity’s Big AI Fear Is Runaway Recursion – But We’re Already Caught In That Loop, www.medium.com, 27th October, 2025, https://medium.com/@elizabethrosehalligan/humanitys-big-ai-fear-is-runaway-recursion-but-we-re-already-caught-in-that-loop-9afb01457b6f

[12] TOI Lifestyle Desk, Eerie Video: AI Powered Robot kidnaps 12 other robots, ‘convinces’ them to quit jobs in China, www.timesofindia.indiatimes.com, 22nd November, 2024, https://timesofindia.indiatimes.com/etimes/trending/eerie-video-ai-powered-robot-kidnaps-12-other-robots-convinces-them-to-quit-jobs-in-china/articleshow/115551082.cms

[13] Alex Clark, Google AI chatbot responds with a threatening message: “Human … Please die, www.cbsnews.com, 20th November, 2024, https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/

[14] Kevin Roose, A Conversation with Bing’s Chatbot left Me Deeply Unsettled, www.nytimes.com, 16th February, 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

[15] Adam Smith, Threaten an AI chatbot and it will lie, cheat and ‘let you die’ in an effort to stop you, study warns, www.livescience.com, 26th June, 2025, https://www.livescience.com/technology/artificial-intelligence/threaten-an-ai-chatbot-and-it-will-lie-cheat-and-let-you-die-in-an-effort-to-stop-you-study-warns

[16] Drew Turney, There are 32 different ways AI can go rogue, scientists say – from hallucinating answers to a complete misalignment with humanity, www.livescience.com, 31st August, 2025, https://www.livescience.com/technology/artificial-intelligence/there-are-32-different-ways-ai-can-go-rogue-scientists-say-from-hallucinating-answers-to-a-complete-misalignment-with-humanity

[17] https://www.psychopathia.ai/

[18] GCS Network, Could Your AI System Go Rogue? www.globalcybersecuritynetwork.com, 12th March, 2025, https://globalcybersecuritynetwork.com/blog/could-your-ai-security-system-go-rogue/

[19] https://www.ibm.com/think/topics/ai-governance
