Analyst Fears Overregulation of A.I. Amid Threats of Human Extinction Raised by Tech Leaders

3rd June 2023, New York / New Delhi: Many tech leaders and A.I. experts have recently expressed fears that A.I. (Artificial Intelligence) could lead to the extinction of humanity, equating its dangers with those of nuclear war, in a declaration signed by them on the webpage of the Center for AI Safety.

The signatories include some of the most prominent tech leaders: Bill Gates; Demis Hassabis, CEO of Google DeepMind; Sam Altman, CEO of OpenAI; Ilya Sutskever, Co-Founder of OpenAI; Kevin Scott, CTO of Microsoft; Jaan Tallinn, Co-Founder of Skype; and Adam D’Angelo, CEO of Quora.

Amid such threat concerns and the negative sentiment building against A.I. developments throughout the world, 5 Jewels Research’s analyst fears overregulation of A.I. by various governments and other regulatory bodies around the world.

Giving his analyst perspective on the recent Center for AI Safety campaign, which terms the danger of A.I. an extinction threat to humanity, Chief Analyst of 5 Jewels Research Mr Sumant Parimal said: “A.I. innovations and developments have been happening for many decades, but since the emergence of Generative A.I. in recent years, A.I. threat perceptions have gone up, and we have seen many social media and online campaigns about the highest levels of risk, including the extinction of humanity due to A.I. Owing to such negative campaigns, good use cases of A.I. very often get undermined, and the positive aspects of A.I. technology get overshadowed by its potential misuses and dangers. Under such negative sentiment against A.I., we fear that many governments may start overregulating A.I. under public pressure, and the world may miss many A.I.-led innovation opportunities.”

“Analysts, consultants, architects, programmers, developers, and data scientists are the key designers of A.I. and digital systems under the patronage of CTOs, CIOs, and other CxOs. It is time to sensitize them to the ethical use of A.I. Some regulations, SOPs (Standard Operating Procedures), and codes of conduct may need to be released and practiced to prevent possible misuse of A.I., but the pre-existing bias against A.I. shown by equating it with pandemics and nuclear war may mean that the tech leaders who have supported the Center for AI Safety campaign actually defeat the core objective of A.I.: augmenting humans and humanity. A.I. was once seen as the most empowering tool for humanity, and it has in fact enabled many use cases where it helped paralyzed persons to talk and walk, and helped doctors identify cancers and other diseases well in advance, thus saving many lives. But today A.I. is being portrayed as the biggest threat to humanity. If this trend continues, then tomorrow some powerful groups may start portraying electricity as a threat to humans. As an electrical engineer, I can tell you that electricity, particularly high-voltage electric current, is indeed very dangerous, and if not handled properly many people may get electrocuted and die. But over time we have ensured the safe production, transmission, and use of electricity in houses, enterprises, establishments, and on the streets. Similarly, it is possible to ensure the same level of safe use of A.I. systems with optimum regulation,” Sumant Parimal added.

Recently, the Australian government began evaluating possible regulations for A.I., and the Italian government temporarily banned the use of ChatGPT. The European Union has proposed a groundbreaking piece of legislation, the European AI Act, to heavily restrict the use of AI in critical infrastructure, education, law enforcement, and the judicial system.
