Regulatory and Legal Challenges with AI in India
India has long been at the forefront of adopting and scaling technology, and artificial intelligence (AI) is no exception. Even before the recent surge in AI-driven tools, India was already demonstrating use of early forms of machine intelligence through machine-to-machine (M2M) technologies, which enable automated systems to communicate with minimal human intervention. This adaptability has paved the way for the sophisticated adoption of AI, particularly in the field of financial technology.
Indian fintech institutions have extensively adopted AI for fraud detection, risk assessment, transaction monitoring, and algorithmic trading, as well as for customer service via virtual assistants. AI has also become embedded in consumer-facing platforms, powering content recommendations, targeted advertising, generative tools, and voice-based interactions. Together, these developments mark a significant shift in how digital content is created and consumed.
Despite its strong technological focus and the rapid adoption of AI, India's legal framework faces significant challenges due to the lack of a dedicated, comprehensive AI law. Currently, AI is addressed through related and sector-specific legislation, such as data protection and intellectual property laws. While the Digital Personal Data Protection Act 2023 provides a basic framework for regulating personal data, which is relevant to AI insofar as AI systems process such data, it does not fully address AI-specific concerns such as algorithmic accountability and automated decision-making. Similarly, intellectual property law is being tested by questions of ownership, authorship, and infringement in AI-generated and AI-assisted content, particularly with regard to training data and generated outputs.
In the absence of a unified AI law, India has taken gradual regulatory steps to place due diligence obligations on digital platform intermediaries, including amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021 (Intermediary Guidelines) under the framework of the Information Technology Act 2000. These amendments primarily focus on AI-generated content, including deepfakes, and strengthen platform accountability and disclosure requirements.[1] Nevertheless, these measures remain fragmented and reactive, underscoring the need for a comprehensive, principles-based AI framework.
The legal path ahead for AI in India is expected to be gradual and principles-based, building on existing frameworks while introducing targeted regulations for emerging issues such as AI safety, synthetic content, transparency, and accountability. India may also draw on international approaches and evolving global standards to inform its domestic framework. The long-discussed Digital India Act, which is expected to modernise India's internet and technology ecosystem, may introduce a unified digital governance law that specifically addresses AI. Overall, the approach is likely to be evolutionary rather than revolutionary, balancing innovation with safeguards and gradually aligning with global best practices in AI governance.