AI has had a free run. Now, India is weighing standards.

Source: Live Mint

From diagnosing diseases to conjuring cool images, artificial intelligence (AI) has long had a free run. That may be about to end.

The Bureau of Indian Standards (BIS) is preparing a comprehensive set of standards for AI-related applications in India, two people directly involved in the process said. The bureau is consulting the ministries of consumer affairs, information technology and education, as well as AI industry partners to develop these standards, the people cited above said on the condition of anonymity.

Apart from Generative AI, which throws up realistic images and text, AI finds use in a wide range of fields, including healthcare, finance, transportation, education and customer service. However, the explosive growth of AI applications has come with rising concerns over ethics and trustworthiness, necessitating a robust regulatory framework to ensure responsible use, the people cited above said.

The initiative by BIS, which comes under the consumer affairs ministry, aims to take a structured approach to regulating AI, covering the entire lifecycle of AI applications from development to deployment and their eventual impact, one of the two people cited above said.

Queries emailed to spokespersons of BIS as well as the ministries of consumer affairs and IT remained unanswered till press time.

Unintended consequences feared

“AI technologies are advancing rapidly, but the absence of clear standards and regulations could lead to unintended consequences, especially in areas where trust and transparency are paramount,” the second person said. “Our goal is to create a framework that not only guides developers but also protects users and stakeholders from potential risks,” he added.

The BIS framework will aim to ensure that all parties, regardless of their level of AI expertise, understand the processes they need to follow to enable coherent and effective stakeholder engagement. The framework would provide guidance for AI applications based on a common set of rules that include “make,” “use,” and “impact” perspectives, the people cited above said. These perspectives are intended to offer a comprehensive view of AI applications, taking into account both the functional and non-functional characteristics of AI, such as trustworthiness and risk management.

Experts say the new BIS standards will bring much-needed order to a sector that has so far operated without a formal rulebook.

“It’s a welcome move. The introduction of these standards will bring much-needed uniformity to the sector and establish consistency across the industry. Currently, in the absence of a formal framework, those in power set the rules, but this initiative will put an end to that,” said Pawan Duggal, a cybersecurity expert.

“As a result, it will benefit consumers, make the ecosystem more robust, and help eliminate inconsistencies in the practices and procedures that must be followed in cybersecurity,” Duggal said over the phone.

Wide-ranging applications 

AI’s impact on society has been transformative. In healthcare, AI-powered systems can diagnose diseases, analyse medical images, and develop personalized treatment plans. In finance, AI algorithms can detect fraud, predict market trends, and optimize investment portfolios. In transportation, autonomous vehicles driven by AI have the potential to improve safety and efficiency. In education, AI-powered tutoring systems can provide personalized learning experiences and adapt to the needs of individual students.

However, the advance of AI has been accompanied by controversy.

Last year, The New York Times sued ChatGPT creator OpenAI for allegedly using millions of NYT articles to train chatbots that now compete with it. In May, AI chipmaker Nvidia was sued by three authors for unpermitted use of copyrighted works to train its NeMo AI platform. In the same month, a professional model in India sent a legal notice to the Advertising Standards Council of India stating that travel portal Yatra Online had used her facial features in an advertisement. In December, the Delhi High Court asked the Centre to respond to a public interest litigation against the unregulated use of AI and deepfakes. Earlier this month, the Bombay High Court granted interim relief to singer Arijit Singh in his copyright suit against AI platforms, which he said were violating his personality rights.

“While AI presents unparalleled opportunities for innovation, economic growth, and public welfare, it also introduces significant ethical, privacy, and security challenges. Without well-defined laws, the unchecked deployment of AI could lead to misuse, discrimination, and violations of fundamental rights,” said Dr. Vishal Arora, chief of business transformation and operation excellence at Gurugram’s Artemis Hospitals.

“Clear regulations are essential to safeguard against biases in AI algorithms, protect personal data, and ensure transparency in AI-driven decision-making processes. As India aspires to be a global leader in the AI domain, creating a legal ecosystem that supports responsible AI usage is not just necessary but urgent, ensuring that the benefits of AI are maximized while mitigating potential risks,” Arora said.

Growing AI market

According to a recent report by Boston Consulting Group (BCG) and Nasscom, India’s AI market is growing at 25-35% annually, and is projected to reach around $17 billion by 2027. The report predicted that the demand for AI talent in India may grow at 15% annually by 2027, as investments in AI continue to rise.

“Since 2019, global investments in AI have grown at a 24% annual rate, reaching nearly $83 billion in 2023. Most of this investment has been in AI technologies like data analytics, Generative AI, and machine learning platforms,” the report highlighted.

Consumer experts have raised concerns about the societal impact of AI, emphasizing the importance of transparency and accountability in AI applications.

“AI is not just a technological innovation. It has profound implications for society. Ensuring that AI applications are trustworthy and that stakeholders are aware of their roles and responsibilities is crucial for fostering public confidence in these technologies,” said Manish K. Shubhay, partner at The Precept-Law Offices and consumer rights advocate.
