The AI Wars

(Representational Image: iStock)

This April, more than 9,000 Hollywood screenwriters, meeting under the banner of the Writers Guild of America, authorised a strike with ninety-eight percent of the vote. One of the major issues in dispute was the screenwriters’ demand that Artificial Intelligence (AI) be used only for research or ideation, and not as a means to replace them. The strike, which began on 2 May, ended only on 27 September, after 148 days, with the acceptance of that demand.

This episode is proof of the power of the AI-powered language models that entered the public domain late last year. The most popular of these, ChatGPT, amassed one hundred million users within two months of its release, and had one billion visitors in its first four months. The popularity of ChatGPT is easily understood: it is free, easily accessible, and capable of self-supervised learning on large data sets, which enables it to write essays, answer questions, create computer applications, write computer code, build resumes, write Excel formulas, summarise content, write cover letters, and more.

The enormous capabilities of AI-powered language models have raised concerns ranging from intrusion into the privacy of netizens to the provision of misleading answers that appear correct, but chiefly about their potential to replace humans in hundreds of millions of jobs. Such fears have led a number of educational and other institutions to ban ChatGPT and similar AI chatbots.


Experts are already predicting that AI chatbots could be the nemesis of analytical and writing skills, just as pocket calculators (and later mobile phones) killed arithmetical ability in children. More dangerously, AI chatbots could disincentivise learning. Which student would willingly spend hours reading bulky tomes or trawling the internet to identify relevant content, another few hours summarising it, and some more hours writing a worthwhile essay or paper, when all these tasks could be accomplished in less than a minute by giving a few simple commands to ChatGPT?

Even highly specialised business reports that would readily pass muster with higher-ups can be generated by ChatGPT in seconds. No wonder there is a move to ban AI chatbots in renowned academic institutions. According to the Cambridge University Rule Book: “Content produced by AI platforms, such as ChatGPT, does not represent the student’s own original work so would be considered a form of academic misconduct to be dealt with under the University’s disciplinary procedures.”

Associated Press, which describes itself as an ‘independent global news organization dedicated to factual reporting’, recently trained AI software to write short news stories automatically. This initiative produced twelve times more stories and also freed reporters to write more in-depth pieces. Newsquest, one of the biggest publishers of regional newspapers in England, is recruiting an “AI-powered reporter” who would use artificial intelligence to “create national, local, and hyper-local content.” A company called Real Fast Reports, with at least a thousand subscribers, uses AI to create end-of-year reports for students, at £10 a year. Teachers have only to provide some basic details of the pupil, and Real Fast Reports compiles the report, in seconds, in perfect prose.

On the downside, the rise and rise of AI, which has opened up such immense possibilities for the human race, also poses an existential threat to it: hypothetically, artificial-intelligence systems might in due course replace human intelligence and develop their own goals and intentions, making humans irrelevant or even extinct. This is not a doomsday prediction but a real possibility, which should worry governments the world over.

Exactly a year ago, the US White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights”, which begins by highlighting the dangers of AI: “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services… Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent.”

Approximately sixty countries now have national AI strategies that lay down policies against the irresponsible use of artificial intelligence. UNICEF has released a paper, “Artificial Intelligence and Children’s Rights”, to help stakeholders better understand and frame policies, and to address the potential impact of artificial intelligence on children.

On the other hand, the ultimate goal of the large corporations behind the development of AI platforms is to build machines that can operate like humans in physical spaces, such as factories and offices, far beyond the capabilities of current AI chatbots. With the world’s biggest corporations pouring billions of dollars into research, this goal may be reached sooner rather than later, a conclusion that follows from the superfast development of AI-powered machines, which were at the concept stage ten years ago, to their present-day sophistication.

The history of AI-powered chatbots is quite interesting. Demis Hassabis, a video-game designer and artificial-intelligence researcher, co-founded a company named DeepMind in 2010 that sought to design computers that could learn how to think like humans. Google purchased DeepMind in 2014, making billions of dollars available to DeepMind’s founders for AI research, which culminated in DeepMind-Google releasing Bard as a rival to ChatGPT in 2023. Meanwhile, Elon Musk of Tesla fame and Sam Altman, a software entrepreneur, co-founded OpenAI, a non-profit artificial-intelligence research lab, to develop open-source AI software and check Google’s growing dominance of the AI field. Such were Musk’s fears of unregulated AI that he even tried to persuade US President Barack Obama to regulate its development.

Somewhere along the line, Microsoft invested heavily in OpenAI, prompting OpenAI to transform itself into a for-profit corporation, which led Musk to withdraw from it. Thereafter there was no holding back, and OpenAI released ChatGPT on 30 November 2022.

Miffed at the failure of his plans, Musk launched xAI, a company to challenge DeepMind-Google and OpenAI-Microsoft. Musk has brought his tremendous financial and technological capabilities to the table: in addition to AI-powered Tesla self-driving cars, Musk owns Neuralink, which aims to implant microchips in human brains; Optimus, a human-like robot; and Dojo, a supercomputer that can use millions of videos to train an artificial neural network to simulate the human brain.

Musk’s plan is to train xAI on X (Twitter)’s more than a trillion tweets, which encompass all kinds of conversations, arguments, news, and interests. Simultaneously, Musk has restricted the number of tweets a user can see per day, denying Google and Microsoft access to the X database.

xAI also has access to the 160 billion frames of video that Tesla receives and processes daily from the cameras mounted on its cars. This video data of humans navigating real-world situations is far more advanced than the text-based documents that chatbots like ChatGPT access. Such data could create AI for physical, humanoid robots, like the Optimus robot being developed by Tesla, far more sophisticated than text-generating chatbots.

The obvious danger being overlooked is that self-learning AI systems might turn hostile to the human species and threaten our existence, or make humans redundant. With some of the world’s largest corporations, Google, Microsoft and Tesla, trying to win the AI race, it is only a question of time before robots with almost human intelligence come into existence. Unfortunately, these humanoids will not have our morals, discretion or aesthetics, and the tale of Frankenstein’s monster, which killed its creator, could well come true in our lifetime.

Overawed by money power, national Governments take no action to regulate the development of AI; rather, Governments have themselves joined the fray. The US National Security Commission on AI asked for $40 billion to “expand and democratise federal AI research and development,” the British Government has committed £1 billion, and Baidu, a Chinese corporation worth forty-eight billion dollars, has joined the AI race.

We should, perhaps, heed the warning of the late physicist Stephen Hawking: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

(The writer is a retired Principal Chief Commissioner of Income Tax)
