OpenAI fiasco triggers serious calls for guardrails around AI industry

A couple of days before his ouster, Altman had said at a tech event that big regulatory changes were not needed for current AI models, but would be soon.


The nail-biting drama of Sam Altman being sacked from OpenAI, joining the Satya Nadella-led Microsoft and then returning to OpenAI, all within a span of six days, has put governments and regulators on alert, and calls to place guardrails around the AI industry are now more vocal than ever.


“We don’t need heavy regulation here or probably for the next couple generations. But at some point, when a model can do the equivalent output of a whole company, or a whole country, or a whole world, maybe we do want some collective supervision around that,” he said on a panel at the Asia-Pacific Economic Cooperation summit in San Francisco.


The OpenAI fiasco has, however, renewed calls to regulate AI in a way that prevents such episodes from recurring.

France, Germany and Italy have reached an agreement on how AI should be regulated.

However, businesses and tech groups have cautioned the European Union against excessive regulation of foundation models in upcoming AI rules.

“For Europe to become a global digital powerhouse, we need companies that can lead on AI innovation also using foundation models and the Global Partnership on Artificial Intelligence (GPAI),” DigitalEurope, whose members include Airbus, Apple, Ericsson, Google, LSE and SAP, wrote in a letter.

In India, concerns over deepfakes have prompted the government to warn social media platforms to remove altered audio and video from their platforms or face action.

The government on Friday gave social media platforms a seven-day deadline to align their policies with Indian regulations in order to address the spread of deepfakes on their platforms.

Deepfakes could be subject to action under the current IT Rules, particularly Rule 3(1)(b), which mandates the removal of 12 types of content within 24 hours of receiving user complaints, said Minister of State for Electronics and IT, Rajeev Chandrasekhar.

The government will also take action against 100 per cent of such violations under the IT Rules in the future.

“They are further mandated to remove such content within 24 hours upon receiving a report from either a user or government authority. Failure to comply with this requirement invokes Rule 7, which empowers aggrieved individuals to take platforms to court under the provisions of the Indian Penal Code (IPC),” the minister said.

“For those who find themselves impacted by deepfakes, I strongly encourage you to file FIRs at your nearest police station,” said Chandrasekhar, adding that the IT Ministry will help aggrieved users in filing FIRs in relation to deepfakes.

India is mulling regulation to tame the spread of deepfakes and other user harms that AI can bring along, said Union IT Minister Ashwini Vaishnaw.

After meeting representatives from large social media platforms and other stakeholders, the minister said India will draft new rules to spot and limit the spread of deepfakes. The new regulation will also strengthen the reporting process for such deepfake videos.

After the successful AI Safety Summit in the UK, the Global Partnership on Artificial Intelligence (GPAI) in Delhi next month will further deliberate upon the risks associated with AI — in the presence of world leaders — before a global framework is reached in Korea next year.
