Last verified: 13 May 2026
The AI Ethics and Accountability Bill 2025 was introduced in the Lok Sabha on 17 December 2025, two days before the Winter Session ended. Listed as Bill No. 59 of 2025, it is the first dedicated Bill on artificial intelligence ever tabled in either House of Parliament. A Bharatiya Janata Party Member of Parliament from Madhya Pradesh introduced it as a Private Member’s Bill. Even though it received minimal floor time, the document raises two questions any compliance officer needs to answer before drafting an AI policy.
The first is whether the Bill is law. It is not. A Private Member’s Bill is one introduced by a Member who is not a Government Minister, and the track record for such Bills is poor. Hundreds are tabled in every Parliament (during the 17th Lok Sabha’s tenure, 2019 to 2024, 729 were introduced in the Lok Sabha and 705 in the Rajya Sabha), yet only fourteen have ever been passed into law since 1952, the last one in 1970. The odds of this Bill reaching the statute book are low, and any guidance that treats it as imminent is wrong.
The second is whether AI compliance can wait until the Bill becomes law. It cannot. Six weeks before the tabling, on 5 November 2025, the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines. The Guidelines are not statute, but they are the Government’s stated position. They set out seven principles, called Sutras (Trust; People First; Innovation over Restraint; Fairness and Equity; Accountability; Understandable by Design; Safety, Resilience and Sustainability), and they favour a light-touch approach built on existing law rather than a new standalone AI statute. The Bill is the opposite signal.
The compliance reality sitting between these two documents is harder than either suggests. The Digital Personal Data Protection Act 2023 governs personal data, including AI training data tied to identifiable individuals. The Information Technology Amendment Rules 2026, operational from 20 February 2026, require Synthetically Generated Information labelling, a three-hour takedown window for content flagged unlawful by a court or the appropriate government, and a tighter two-hour window for deepfakes and non-consensual intimate imagery. The Bharatiya Nyaya Sanhita 2023 covers AI-driven impersonation fraud through its cheating-by-personation provisions. The EU AI Act provides the international benchmark.
This guide walks through the Bill clause by clause as introduced, sets it against the instruments above, integrates the Delhi High Court personality-rights judgments delivered between 2023 and 2025, and ends with a three-tier compliance roadmap a Board can act on. The Bill may not become law. The compliance architecture around it already exists.
The Artificial Intelligence (Ethics and Accountability) Bill, 2025 is a Private Member’s Bill introduced in the Lok Sabha on 17 December 2025 as Bill No. 59 of 2025. It proposes a statutory Ethics Committee for AI, mandatory bias audits, restrictions on AI surveillance, and penalties up to ₹5 crore. It is not yet law.
Why India needs an AI law in 2026, and why this Bill landed when it did
For the better part of a decade, India’s AI policy has lived in PDFs that nobody had to obey. That’s the regulatory void the AI Ethics and Accountability Bill 2025 is trying to fill, and it’s the reason the Bill feels both inevitable and rushed. Generative AI tools went mainstream in late 2022.
Indian users adopted them at startling speed. But the legal architecture stayed frozen at the level of advisory documents and ministerial speeches. Why the lag?
Here’s the thing. AI policy in India started as aspirational strategy, not law. NITI Aayog put out the “National Strategy for AI” in 2018 and followed it with the “Responsible AI for All” papers in 2021. The IndiaAI Mission, launched by MeitY in 2024, layered a national programme on top, but its mandate was capacity, compute, and chips, not statutory regulation.
The 2018 strategy and the 2021 papers articulated principles (safety, equality, inclusivity, privacy, transparency, accountability), but neither carried statutory force. They were intended to seed conversation, not bind anyone. The harms they anticipated, however, started showing up in court papers before Parliament caught up.
NITI Aayog’s Responsible AI papers and what they left undone
The NITI papers did something useful: they gave Indian regulators a vocabulary. Words like “algorithmic bias” and “explainability” entered policy discourse through them. But they didn’t create a regulator, didn’t define a regulated entity, and didn’t propose a remedy.
The 2021 paper acknowledged this gap explicitly. It noted that future legislative action would be required. None followed. By 2023, the Data Protection Board of India was being created under the DPDP Act, but its remit was personal data, not AI conduct.
AI sat in a separate, unowned policy lane.
Looking back at the timeline, the pattern is hard to miss. India moved from voluntary principles in 2018-21, to a narrow advisory in 2024, to soft-law governance frameworks in November 2025, to enforceable subordinate legislation on specific risks (the IT Rules 2026 on deepfakes) in February 2026. The architecture has been built bottom-up, in fragments. This Bill is the first attempt to put a roof on it.
The March 2024 MeitY Advisory and the industry pushback that softened it
On 1 March 2024, MeitY issued a now-famous Advisory requiring intermediaries to seek explicit Government permission before deploying “under-tested” or “unreliable” AI models in India. The startup ecosystem reacted sharply. Within fourteen days, on 15 March 2024, the Advisory was reissued with the permission requirement dropped. The episode demonstrated two things: the government had real concerns about AI harms, and it was not yet prepared to legislate them through hard rules.
That equilibrium held for nearly two years, until the Bill landed.
What changed in 2025: deepfakes, ChatGPT-era harms, and the personality-rights judgments
What broke the equilibrium was harm. Deepfake videos targeting Bollywood actors, female public figures, journalists, and ordinary citizens migrated from edge cases to weekly headlines. The Delhi High Court began issuing personality-rights injunctions one after another.
The first was the Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023 ruling in September 2023, which expressly extended personality protection to AI-generated voice, image, and likeness. A journalist-led PIL, Rajat Sharma v. Union of India, pushed MeitY to publicly commit to deepfake regulation. By late 2025, courts were doing the work statute should have done. The Bill is, in part, Parliament catching up.
A common question that surfaces on Quora and LinkedIn is whether AI ethics and AI regulation are the same thing. And they are not. Ethics is the set of principles a developer voluntarily adopts; regulation is the set of obligations a developer is forced to comply with.
The NITI papers were ethics. This Bill is, formally, regulation. The fight inside Indian policy circles is whether ethics-led soft-law (MeitY’s path) or accountability-led hard-law (the Bill’s path) is the better way forward.
Here’s the thing experienced practitioners know: the Bill’s most important effect may not be enactment at all. Even unenacted Bills shape contract terms, board-level risk reporting, and litigation strategy. The DPDP Bill drafts circulated for years; they were cited in privacy writ petitions before the Act passed.
The AI Ethics Bill is likely to do the same. The mistake we see most often is treating “not yet law” as “not yet relevant.” It’s already relevant.
The pitfall worth flagging upfront: do not write organisational AI policy that assumes this Bill is on the statute book. The text could change materially through Standing Committee review (if referred) or could be displaced entirely by a government-sponsored Digital India Act provision. Build a policy that complies with what is enforceable today, and tracks the Bill’s themes as a directional signal.
What the AI Ethics and Accountability Bill 2025 actually proposes
The Bill is short by Indian statutory standards. It runs slightly under 30 pages as introduced, organised into a preamble, definitions, principles, an Ethics Committee chapter, developer and deployer duties, penalty and grievance provisions, and miscellaneous clauses. The architecture roughly tracks what the EU AI Act borrowed from a decade of European data law (definitions, principles, obligations, oversight body, penalties, appeals). The AI Ethics and Accountability Bill 2025 is recognisably a regulatory statute in form, even if its substance is uneven.
So what does it actually say, clause by clause?
How the Bill defines AI, AI system, high-risk AI, developer, and deployer
The Bill’s definition of an “AI system” is deliberately broad: any computational system that, for a defined set of human-determined objectives, can generate outputs (predictions, recommendations, content, decisions) influencing physical or virtual environments. The breadth has been flagged by commentators (more on this below) because it captures everything from a fraud-detection model to an email spam filter. It defines “developer” as the entity that designs, trains, or codes the system, and “deployer” as the entity that puts it into commercial or public-facing use. It treats large language models and other generative systems within the same envelope, without a dedicated GPAI chapter of the kind the EU AI Act now has.
The seven core principles the Bill borrows from international frameworks
The Bill lists seven principles that look familiar to anyone who has read the OECD AI Principles or the NITI Aayog papers: safety and security, fairness and non-discrimination, transparency and explainability, accountability, privacy, human oversight, and respect for human autonomy. Each principle is then operationalised, in patchier fashion, in the developer and deployer duty chapters. This part gets overlooked: principles are easy to draft and hard to enforce. The enforcement question is where the Bill diverges sharply from international counterparts.
Right to explanation, transparency by design, and algorithmic bias
The Bill creates a statutory “right to explanation” for any person subjected to an automated decision with significant legal or commercial impact. This is the clause most directly responsive to the European Union’s GDPR-era debate. It also requires “transparency by design”, meaning developers must document model architecture, training data sources, and bias-mitigation measures before deployment. Eight headline obligations, distilled from the Bill, look like this in practice:
- Establish a statutory Ethics Committee for AI with cross-sectoral composition.
- Mandate pre-deployment bias audits for high-risk AI systems.
- Create a right to explanation for affected individuals on automated decisions.
- Restrict AI surveillance to “lawful purposes” with safeguards.
- Impose registration and documentation duties on developers of high-risk systems.
- Impose risk-assessment and transparency-notice duties on deployers.
- Impose civil penalties up to ₹5 crore for non-compliance.
- Create criminal liability for wilful or repeat violations.
Chapter and clause structure as introduced
The Sansad PDF lists the Bill’s chapters as: preliminary, principles, Ethics Committee, obligations of developers, obligations of deployers, restrictions on high-risk AI, penalties and offences, grievance redressal and appeals, and miscellaneous. Specific clause numbers (the ones a compliance officer will eventually cite to a Board) should be read off the as-introduced PDF directly, because the Bill could be amended during Standing Committee review. The right discipline today is to internalise the structure, not memorise clause numbers that may shift.
In practice, the most useful exercise for an in-house counsel is to map each of these chapters against the company’s existing DPDP compliance file. A great deal of the documentation, risk-assessment, and grievance-redressal architecture overlaps. We’d recommend treating the Bill as DPDP-adjacent for internal planning, even though it’s constitutionally a separate instrument.
Worth flagging: the pitfall here is the opposite of what most write-ups warn about. The risk isn’t that compliance teams will under-prepare. The risk is that they’ll over-engineer a compliance regime for a Bill that may never pass in its current form, and bake those costs into product road-maps before they need to.
The proposed Ethics Committee for Artificial Intelligence
If the Bill has a centrepiece, it’s the proposed Ethics Committee for Artificial Intelligence. It would be a statutory body with powers to receive complaints, conduct audits, issue advisories, and refer matters for prosecution. The closest existing parallel in Indian law is the Data Protection Board of India under the Digital Personal Data Protection Act, 2023. And the Ethics Committee would sit alongside the Board, not subsume it.
So why a separate committee?
The drafting logic is that AI conduct (model design, training, deployment) isn’t the same as personal-data processing. The Board handles the latter. The Committee, the Bill imagines, handles the former. In theory, the two would coordinate; in practice, the overlap on automated decision-making, profiling, and AI training on personal data is so substantial that practitioners are already asking whether one body should do both jobs.
Composition and appointment
The Bill provides for a chairperson with judicial or technical eminence, plus members drawn from technology, law, ethics, civil society, and (this is the part most write-ups miss) sectoral expertise. Composition is not fully fleshed out in the as-introduced text; specifics, including selection process and tenure, are likely to be sharpened during Standing Committee review. Practitioners reading the Bill against the DPDP precedent will recognise the pattern: composition clauses get redrafted, sometimes substantially, before enactment.
Powers, complaints process, and timelines
The Committee would have suo motu inquiry powers, the power to direct disclosures from developers and deployers, the power to recommend penalties to a designated authority, and the power to refer matters for criminal investigation. Complaint timelines are notional in the as-introduced text. The discipline a compliance team can build today is procedural readiness: who in the organisation receives a Committee notice, who responds, what’s the disclosure protocol.
Coordination with sectoral regulators
Here’s the gap the Bill leaves wide open. The RBI is already drafting AI principles for the financial sector; SEBI has been thinking about AI in algo-trading and robo-advisory disclosures; IRDAI on AI in claims processing; ICMR on AI in clinical research. The Bill doesn’t say how the Ethics Committee coordinates with any of these sectoral regulators.
The MeitY Guidelines, by contrast, explicitly designate sectoral regulators as the immediate AI-rules layer. If the Bill passes without a coordination clause, the country will have one body with cross-cutting authority and four sectoral regulators with conflicting rule-books. That’s a real second-order risk worth tracking.
Adjacent regulatory architecture provides a useful comparator: how sectoral regulators like the Online Gaming Authority of India plug into existing law is the kind of design question the Bill has not yet answered.
Appeals
The Bill provides for appeals against Committee orders, but the appellate forum and process are placeholder-style as introduced. Likely candidates: the proposed Digital India Authority, an existing tribunal, or the High Court of the relevant jurisdiction. Compliance teams should assume the High Court route until clarity emerges, because that is the constitutional default for any regulatory order absent a specifically designated tribunal.
A community insight worth flagging: corporate respondents on LinkedIn keep asking whether the Committee would have prosecutorial standing or only advisory authority. And the short answer, as drafted, is investigative and advisory, with referral powers. Prosecution remains with the appropriate criminal court under existing statutes. The pitfall to avoid is assuming the Committee is a court.
It’s a regulator with teeth, not a tribunal with finality.
Who the Bill regulates: developers, deployers, and users
The Bill’s two-actor model (developer and deployer) borrows from the EU AI Act and from international AI governance literature. It’s a useful binary, but it cracks in places where the Indian market doesn’t fit the European template. Where does an Indian SaaS company that licenses an LLM from a US developer, fine-tunes it on Indian-language data, and resells it to a domestic bank actually sit?
Developer duties
Developers, under the Bill, bear the heaviest obligations: registration of high-risk systems, mandatory bias audits before deployment, model documentation (architecture, training data sources, intended use cases, known limitations), and a duty to withdraw or remediate systems found to be biased. The drafting style here is closest to the EU AI Act’s GPAI obligations, though without the EU’s compliance gradient between systemic and non-systemic models. A developer registering a high-risk hiring AI would, on paper, be in the same compliance bracket as one registering a high-risk facial-recognition system. That uniformity is a deliberate drafting choice.
It’s also one of the Bill’s most-criticised features.
Deployer duties
Deployers, meanwhile, must conduct risk assessments before deployment, issue transparency notices to users interacting with the system, maintain a grievance-redressal channel, and report material incidents. The duties echo the DPDP Act’s data fiduciary obligations, which is unsurprising; the Bill’s drafter has clearly studied that statute. For an in-house counsel mapping out compliance posture, the deployer duties are the more immediately operational set. They map onto product and customer-experience workflows, not just engineering ones.
Extraterritorial reach
Will the Bill apply to OpenAI, Anthropic, Google DeepMind, and Meta? The as-introduced text contemplates extraterritorial application where AI systems are deployed in India or where outputs are directed at Indian users, regardless of where the developer is incorporated. This mirrors the extraterritorial framing of Section 3 of the Digital Personal Data Protection Act, 2023.
Enforcement against foreign developers will, of course, depend on practical execution (notice, service, asset attachment). But the jurisdictional anchor’s there. The data-privacy questions agentic AI raises become more pointed under this kind of jurisdictional reach, because agentic systems often loop back into multiple developer stacks.
Open-source AI, academic research, and the missing carve-out
Here’s the Bill’s most consequential silence. There’s no clear carve-out for open-source models, academic research, or sandbox deployment. The developer duties (registration, audit, documentation, withdrawal) apply, on a plain reading, to a graduate student fine-tuning a Hugging Face model for a thesis.
If enacted without amendment, this would freeze a meaningful share of India’s academic AI research community and significantly constrain the open-source ecosystem. The downstream consequence is non-obvious but important. Universities will reroute AI research through foreign partnerships to avoid Indian registration burdens, and India’s contribution to global open-source AI will shrink. Drafters of the Standing Committee response, if any, should flag this carve-out as a priority.
The better approach, in our view, is for businesses to maintain two parallel inventories: one of AI systems they develop, and one of AI systems they deploy. The dividing line matters because the obligations differ, and a single organisation can easily wear both hats for different products. A common question practitioners raise is whether a buyer of an off-the-shelf SaaS AI tool becomes a “deployer.” Reading the Bill, yes, if the buyer puts the system into customer-facing or operationally consequential use.
That’s a wider net than most procurement teams currently appreciate. The pitfall, worth flagging now: don’t treat the developer-deployer line as fixed. Many Indian companies will sit on both sides simultaneously, and the documentation burden compounds when both hats apply.
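To make the dual-inventory discipline concrete, here is a minimal sketch of what such a register could look like in code. Nothing in the Bill mandates a format; the field names, the role split, and the duty lists below are our own distillation of the developer and deployer chapters described above.

```python
# Illustrative sketch only: the Bill mandates no particular register format.
# Field names and the obligation lists are our assumptions, distilled from
# the developer/deployer duty chapters described above.
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    DEVELOPER = "developer"   # designs, trains, or codes the system
    DEPLOYER = "deployer"     # puts it into commercial or public-facing use

@dataclass
class AISystemRecord:
    name: str
    roles: set[Role]                  # one organisation can wear both hats
    high_risk: bool                   # surveillance, hiring, credit, policing, healthcare
    training_data_sources: list[str] = field(default_factory=list)
    upstream_vendor: str | None = None

def obligations(record: AISystemRecord) -> list[str]:
    """Rough mapping of a record to the Bill's duty chapters (as introduced)."""
    duties = []
    if Role.DEVELOPER in record.roles and record.high_risk:
        duties += ["registration", "pre-deployment bias audit",
                   "model documentation", "withdrawal/remediation duty"]
    if Role.DEPLOYER in record.roles:
        duties += ["risk assessment", "transparency notice",
                   "grievance channel", "incident reporting"]
    return duties

# A SaaS firm that fine-tunes a licensed LLM and resells it sits in both inventories:
saas = AISystemRecord("fine-tuned credit LLM", {Role.DEVELOPER, Role.DEPLOYER},
                      high_risk=True, upstream_vendor="US model developer")
print(obligations(saas))
```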
High-risk AI: surveillance, hiring, credit, policing, and healthcare
The Bill’s restrictions are sharpest on what it calls “high-risk AI systems”: those used in surveillance, employment and hiring, credit scoring, predictive policing, and healthcare. The list isn’t exhaustive in the as-introduced text. The Ethics Committee would have the power to add categories by notification. So what counts as “high-risk” in the first place?
The Bill’s high-risk definition and the absence of a tiered classification
The Bill defines high-risk AI as any AI system whose use could materially affect individual rights, safety, livelihood, or access to essential services. The drafting is functional rather than tier-based. Compare this with the EU AI Act, which uses a four-tier risk classification (unacceptable, high, limited, minimal) with prescribed prohibited practices at the unacceptable tier.
And the Indian Bill, as drafted, has high-risk and not-high-risk. Nothing more granular. The consequence is real: a deployer cannot easily know in advance which category a borderline system falls into. The Ethics Committee’s first big institutional job, if the Bill becomes law, will be to publish classification guidance.
AI surveillance and the “lawful purposes” critique
The Bill restricts AI surveillance to “lawful purposes” with safeguards. The phrase “lawful purposes” is the single most-criticised drafting choice in the Bill, and the criticism is doctrinally serious. In Shreya Singhal v. Union of India, (2015) 5 SCC 1, the Supreme Court struck down Section 66A of the Information Technology Act, 2000 for vagueness; the court held that an offence definition cannot rest on terms that fail to give citizens fair notice of what is prohibited.
The constitutional bedrock against vague-purpose surveillance was laid even earlier in Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1, which recognised privacy as a fundamental right under Article 21. A “lawful purposes” standard that does not specify the conditions, the proportionality test, or the oversight architecture is a constitutional challenge waiting to happen. Constitutional lawyers are watching this clause specifically.
AI in employment, hiring, and the algorithmic discrimination problem
The Bill flags AI in employment and hiring as high-risk and proposes pre-deployment bias audits. Here’s the thing: the Indian market context is unusual. Resume-screening AI is widely used by Tier-1 and Tier-2 companies, but caste, religion, and regional discrimination aren’t coded into European bias-audit frameworks, which focus on protected characteristics under European law.
An Indian bias-audit framework would have to be designed for Indian protected categories: gender, caste, religion, region, age, disability. But the Bill does not yet specify the audit framework. This is a place where Indian drafting cannot lift from the EU.
AI in credit scoring and the RBI overlap
AI in credit underwriting, including the “buy now pay later” ecosystem, is already on the RBI’s radar. The Bill creates obligations that overlap with the RBI’s draft Framework on Responsible and Ethical Enablement of AI in financial services. The risk for a fintech is duplicative compliance: a bias audit acceptable to the Ethics Committee may not match what the RBI eventually requires. Until coordination clauses are added, fintechs should plan for both, not either.
AI in predictive policing
Predictive policing AI is the most constitutionally exposed category. India doesn’t yet have a major reported judicial decision on a domestic predictive-policing system. The Wisconsin Supreme Court ruling in State v. Loomis, 881 N.W.2d 749 (Wis. 2016) is the closest comparator: it allowed the use of a proprietary risk-assessment tool in sentencing but required disclosure of the tool’s known limitations.
If predictive policing reaches Indian courts, expect citation to both Loomis and Puttaswamy. The Bill’s right to explanation could, in principle, give Indian defendants a stronger right than US defendants currently have under Loomis. But whether it is enacted, of course, is another question.
What experienced practitioners know is that the high-risk category is where most actual litigation will happen, with or without the Bill. The pitfall is to treat “high-risk AI” as a single bucket. It is five very different problems in a trench coat: surveillance is a constitutional problem, hiring is a discrimination problem, credit is a sectoral-regulator problem, policing is a criminal-procedure problem, and healthcare is a clinical-evidence problem. A blanket compliance approach will fail in at least one of them.
Penalties, criminal liability, and the grievance mechanism
The Bill’s enforcement teeth are the part that has driven most of the headlines. ₹5 crore caught the attention of Boards. Criminal liability caught the attention of compliance officers. The grievance mechanism is the part that will matter most to ordinary affected individuals.
Each of these works in a different way.
The ₹5 crore civil penalty and tiered structure
The Bill proposes a civil penalty ceiling of ₹5 crore for non-compliance with developer or deployer duties. The actual penalty levied would be tiered against factors like severity of harm, scale of deployment, prior conduct, and good-faith remediation efforts. The structure tracks the DPDP Act’s penalty design under the Schedule, where specific contraventions carry specific ceilings. The Ethics Committee would recommend; a designated authority would impose.
The structure is unfamiliar to a sector used to the SEBI-style direct adjudication model, and that unfamiliarity is a feature rather than a bug.
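To see how a tiered ceiling works mechanically, consider the hypothetical sketch below. The Bill names the tiering factors but prescribes no formula, so every weight here is invented purely for illustration; the only anchored number is the ₹5 crore cap.

```python
# Hypothetical illustration only. The Bill lists tiering factors (severity,
# scale, prior conduct, good-faith remediation) but prescribes no formula;
# the weights below are invented to show how a tiered ceiling might operate.
CAP_INR = 5_00_00_000  # the ₹5 crore ceiling (Indian digit grouping)

def indicative_penalty(severity: float, scale: float,
                       prior_violations: int, remediated: bool) -> int:
    """severity and scale normalised to [0, 1]; output capped at ₹5 crore."""
    base = CAP_INR * (0.6 * severity + 0.4 * scale)
    base *= 1.0 + 0.25 * prior_violations      # repeat conduct aggravates
    if remediated:
        base *= 0.5                             # good-faith remediation mitigates
    return min(int(base), CAP_INR)

# A first-time, remediated failure on a mid-scale deployment:
print(f"₹{indicative_penalty(0.4, 0.5, 0, True):,}")
```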
Criminal liability for repeat or wilful violations
Repeat or wilful violations attract criminal liability under the Bill, in addition to civil penalties. The exact maximum imprisonment term isn’t extractable from the early-stage commentary on the as-introduced text; compliance teams should read it off the Sansad PDF directly rather than off secondary write-ups. Separately, AI-enabled impersonation and AI-enabled cheating remain prosecutable today under Section 318 of the Bharatiya Nyaya Sanhita, 2023 (cheating) and Section 319 of the BNS (cheating by personation). Those provisions don’t go away.
A deepfake-driven fraud is already a BNS offence; the Bill adds AI-specific liabilities on top, not in substitution.
The grievance mechanism
Here’s how the two-tier escalation actually works. Any person aggrieved by an AI system’s decision or output can file a complaint to the deployer (first tier) and, on unsatisfactory resolution, escalate to the Ethics Committee (second tier). The 30-day window standard the DPDP Rules use for grievance resolution is a plausible default; the Bill leaves the timeline to subordinate rule-making. Compliance teams should design grievance workflows that can be scaled into both DPDP and AI grievance pipelines, because the same complaint will often raise both data and AI questions.
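A minimal sketch of that two-tier routing, assuming the 30-day DPDP-style first-tier window (the Bill itself leaves the timeline to subordinate rules), might look like this:

```python
# Sketch of a two-tier grievance workflow. The 30-day first-tier window is an
# assumption borrowed from the DPDP Rules' grievance standard; the Bill leaves
# the actual timeline to subordinate rule-making.
from datetime import datetime, timedelta

FIRST_TIER_WINDOW = timedelta(days=30)  # assumed default, not statutory

class Complaint:
    def __init__(self, filed_at: datetime, raises_personal_data_issue: bool):
        self.filed_at = filed_at
        self.raises_personal_data_issue = raises_personal_data_issue
        self.resolved = False

    def route(self, now: datetime) -> list[str]:
        """Returns the forums a complaint should currently sit before."""
        forums = ["deployer grievance channel"]            # first tier
        if not self.resolved and now > self.filed_at + FIRST_TIER_WINDOW:
            forums.append("Ethics Committee (escalation)")  # second tier
        if self.raises_personal_data_issue:
            forums.append("DPDP grievance officer")         # same facts, parallel track
        return forums

c = Complaint(datetime(2026, 3, 1), raises_personal_data_issue=True)
print(c.route(datetime(2026, 4, 15)))
```

The parallel-track line is the operational point: the same complaint will often need both the AI pipeline and the DPDP pipeline, which is why the section above recommends a single scalable grievance workflow.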
Compensation for victims of algorithmic harm
The Bill contemplates compensation for victims of demonstrated algorithmic harm. The compensation framework is sketched, not detailed. Practitioners will recognise the pattern from the DPDP Act, where the compensation question was eventually punted to subordinate rule-making and to the Board’s case-by-case discretion. Expect the same here.
In practice, the criminal-liability clauses won’t move many cases. They’re deterrent provisions. Civil penalties and grievance-driven compensation will do most of the work.
And the pitfall worth flagging is the misreading of “criminal liability” as automatic on first contact. It applies to wilful or repeat conduct, which means a single, good-faith compliance failure followed by remediation is unlikely to attract prosecution. Compliance posture should focus on remediation evidence, not on avoidance.
A common question on LinkedIn legal-tech groups: can a class of users sue jointly under the Bill? The text is silent on class action explicitly, but Indian consumer protection law and writ jurisdiction both permit collective actions. Expect tactical use of consumer fora for AI grievances once the Bill (or any successor instrument) is in force.
How the Bill interacts with the DPDP Act, IT Rules 2026, and the Copyright Act
This is the section every compliance officer reads first. The honest answer to “how do I comply today” is: with everything that’s already enforceable, while the Bill is debated. Three instruments are doing the actual regulatory work right now. The Bill, if enacted, would sit on top of them, not in place of them.
So what’s enforceable today, and what’s merely proposed?
The DPDP Act 2023 overlap
The Digital Personal Data Protection Act, 2023 is enforceable and operational, with the DPDP Rules 2025 building the procedural architecture. Data fiduciaries face obligations on consent, purpose limitation, security, breach notification, and grievance redressal. Where AI systems train on or process personal data, DPDP applies directly.
The Ethics Committee, if constituted, would not override the Data Protection Board on personal-data questions; the two bodies would coordinate. For a working compliance map, in-house counsel should treat the operational compliance regime under the DPDP Rules 2025 as the floor and read the Bill’s themes as the directional ceiling.
The IT (Intermediary Guidelines) Amendment Rules 2026
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on 10 February 2026 and operational from 20 February 2026, are the single most important enforceable AI-adjacent instrument in India today. They mandate Synthetically Generated Information (SGI) labelling on intermediary platforms, compress the takedown window for content flagged illegal by a court or the appropriate government to three hours (down from the earlier 24-36 hours), apply a tighter two-hour window to non-consensual intimate imagery and deepfakes, and require declaration mechanisms for AI-generated content. These obligations are live now. A private user complaint, on its own, does not start the three-hour clock: the trigger is a court order or an authorised government direction.
The practical effect is a clear shift in the regulatory architecture. The IT Rules 2026 move the compliance burden onto platforms: intermediaries are now expected to identify and label AI-generated content, act on flagged unlawful content within the two-hour or three-hour windows, and lose Section 79 safe-harbour protection if they fail. The default has flipped from “platforms are passive conduits” to “platforms are responsible for the AI content they host.” Any organisation that does not have an SGI policy operational already is in non-compliance with subordinate legislation. The IT Rules 2026 are also the practical reason India can claim to have “AI rules” today: not the Bill, not the MeitY Guidelines, but these Rules. Treat them as the most concrete present-day compliance demand.
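The window arithmetic is simple enough to encode directly in a compliance tool. A minimal sketch, assuming a simplified two-category split (the category names are ours, not the Rules’):

```python
# Deadline arithmetic under the IT Amendment Rules 2026, as described above.
# Simplified sketch: the trigger is a court order or an authorised government
# direction, not a private user complaint; category names are our shorthand.
from datetime import datetime, timedelta

WINDOWS = {
    "deepfake_or_ncii": timedelta(hours=2),       # deepfakes, non-consensual intimate imagery
    "other_flagged_unlawful": timedelta(hours=3), # other court/government-flagged content
}

def takedown_deadline(order_received: datetime, category: str) -> datetime:
    """Clock starts on receipt of the court order / government direction."""
    return order_received + WINDOWS[category]

order = datetime(2026, 3, 2, 14, 0)
print(takedown_deadline(order, "deepfake_or_ncii"))        # 16:00 same day
print(takedown_deadline(order, "other_flagged_unlawful"))  # 17:00 same day
```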
The Copyright Act 1957 gap
The Copyright Act, 1957 does not address AI training-data appropriation, AI-generated content authorship, or the fair-use status of model training. This gap is the subject of ANI Media Pvt. Ltd. v. Open AI Inc. & Anr., CS(COMM) 1028/2024, where the Delhi High Court is being asked to decide whether training an LLM on copyrighted news content is an infringement. The Bill does not close this gap. And read together, the Copyright Act, the IT Rules 2026, and the Bill leave training-data copyright in a doctrinal limbo.
A working assumption for content publishers: opt-out mechanisms and licensing protocols matter more than statutory clarity for now.
The IPC/BNS overlap
Section 318 of the BNS (cheating) and Section 319 of the BNS (cheating by personation) already apply to AI-enabled fraud and deepfake impersonation. They’ve done so since the BNS came into force in 2024. A criminal complaint for a deepfake-driven scam doesn’t have to wait for the AI Ethics Bill. Existing criminal law already covers it, supplemented by IT Act provisions for the digital element.
The Bill adds AI-specific liabilities on top, but doesn’t displace the existing toolkit. The disambiguation matters because reader confusion on this point is the single largest risk in writing about the Bill. The short version, issue by issue: deepfakes? IT Rules 2026 today, BNS 318-319 today, Bill someday. AI surveillance? Constitutionally tested under Puttaswamy today, Bill someday. AI in credit? RBI guidance today, Bill someday. Training-data copyright? Copyright Act today (with the ANI case running), Bill never (as drafted).
Bottom line: a common practitioner question is whether a company can ignore the Bill entirely if it’s fully DPDP and IT Rules compliant. The honest answer is yes, for now. But ignore-mode is a poor governance posture. The Bill’s themes will surface in vendor contracts, board reporting, and litigation strategy well before formal enactment.
Anticipate the themes; comply with the present. The pitfall to avoid is the inverse: building an AI-Bill-first compliance regime that under-invests in DPDP and IT Rules 2026 readiness. Those are today’s obligations. The Bill is tomorrow’s, at best.
| Issue | Enforceable today | Proposed under AI Ethics Bill 2025 | Soft-law / MeitY Guidelines |
|---|---|---|---|
| Deepfakes | BNS s.318, s.319; IT Act s.66D; IT Amendment Rules 2026 (operational 20 Feb 2026). | Civil penalty up to ₹5 crore plus a dedicated synthetic-content regime. | MeitY Advisory Mar 2024; India AI Governance Guidelines (5 Nov 2025). |
| AI surveillance | Puttaswamy privacy doctrine; DPDP Act 2023 (where personal data processed). | High-risk AI category likely to apply; Ethics Committee oversight envisaged. | Responsible AI for All principles (NITI Aayog 2021). |
| Algorithmic discrimination | Article 14/15 constitutional remedies; sectoral anti-discrimination rules. | Fairness / non-discrimination obligations on developers and deployers. | Fairness principle in MeitY Guidelines and NITI Aayog papers. |
| AI in credit decisions | RBI fair-practice circulars; DPDP Act consent and purpose limitation. | High-risk classification with transparency and explainability duties. | MeitY Guidelines themes; no binding rule specific to credit AI. |
| AI in hiring | DPDP Act for candidate data; general labour-law non-discrimination. | High-risk system; bias-audit and human-oversight obligations. | MeitY Guidelines fairness and accountability principles. |
| Training-data copyright | Copyright Act 1957; ANI v. OpenAI litigation pending in Delhi HC. | Bill does not resolve the training-data question; gap remains. | No binding MeitY guidance; matter is judicially live. |
| AI grievance redressal | Consumer Protection Act; DPDP Act grievance officers for data fiduciaries. | Dedicated grievance mechanism under the Ethics Committee framework. | MeitY Guidelines recommend redressal channels; not enforceable. |
| AI registration | No dedicated AI registration regime in force today. | Registration / notification regime envisaged for high-risk systems. | Voluntary disclosures encouraged by MeitY Guidelines. |
How India’s AI Ethics Bill compares globally
The fastest way to understand the AI Ethics and Accountability Bill 2025 is to read it next to its international counterparts. The patterns it borrows, and the choices where it diverges, both tell a story. Five jurisdictions are doing real AI law: the European Union, the United States, China, Brazil, and India. India’s doing it twice, in fact, because the MeitY Guidelines and the Bill represent two different philosophies in the same country.
So which of those five templates is the Indian Bill actually closest to?
India AI Ethics Bill vs MeitY India AI Governance Guidelines
The MeitY Guidelines, published 5 November 2025, articulate seven “Sutras”, create an Advisory AI Governance Group, propose a Technology and Policy Expert Committee, and signal an AI Safety Institute. The architecture is deliberately soft: principles, advisory bodies, no civil or criminal penalties. The Bill, six weeks later, is the opposite: hard statute, statutory Committee, civil and criminal liability. Two regulatory philosophies, one country, six-week interval.
The market response so far has been to take the Guidelines seriously (because they’re in force, even if non-binding) and to treat the Bill as a directional signal. In practice, both documents shape compliance behaviour even though neither is, strictly, enforceable against industry.
India AI Ethics Bill vs EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) uses a four-tier risk classification, prohibits a defined set of unacceptable practices, regulates high-risk AI through pre-market conformity assessment, and adds a separate chapter for General Purpose AI models. Penalties go up to €35 million or 7% of global turnover, whichever is higher. But the Indian Bill, by contrast, has no risk tiers, no enumerated prohibitions, no conformity assessment, no dedicated GPAI chapter, and a fixed ₹5 crore ceiling. The drafting’s less mature, and that’s being polite about it.
India AI Ethics Bill vs US AI Bill of Rights and Executive Order
The US framework is overwhelmingly voluntary. The “AI Bill of Rights” issued by the Office of Science and Technology Policy is a white paper. The 2023 Executive Order, since rescinded, directed federal agencies to develop AI risk-management practices. State-level laws (California, Colorado, Illinois) are doing the substantive work.
India’s Bill is closer to the EU template than the US template: mandatory, statutory, penalty-backed.
India AI Ethics Bill vs China Generative AI Measures and Brazil PL 2338/2023
China’s Interim Measures for Generative AI Services (effective August 2023) impose pre-deployment security assessments, content moderation, and provider registration. China’s regime is enforcement-heavy and content-focused. Brazil’s PL 2338/2023 is in legislative limbo, structurally closest to the EU AI Act with tiered risk and a federal supervisory authority. The Indian Bill sits closer to Brazil’s structural template than to China’s content-focused approach.
| Jurisdiction | Instrument type | Risk classification | Penalties | Enforcement body | Notes |
|---|---|---|---|---|---|
| India (AI Ethics Bill 2025) | Proposed Private Member’s Bill | None (binary high-risk / not) | Up to ₹5 crore + criminal liability | Proposed Ethics Committee | Not law yet; sits beside DPDP and IT Rules 2026 |
| India (MeitY Guidelines Nov 2025) | Soft-law / advisory | None | None | AIGG / TPEC / AISI | In force as guidance; non-binding |
| European Union (AI Act 2024) | Regulation, in force | 4 tiers (unacceptable, high, limited, minimal) | Up to €35M or 7% global turnover | AI Office + Member State authorities | Most mature global framework |
| United States | Voluntary frameworks + EO | None at federal level | None federally | None centralised | State-level mandates emerging |
| China (GenAI Measures 2023) | Binding interim measures | Provider category | Administrative penalties | CAC (Cyberspace Administration) | Content-moderation focus |
| Brazil (PL 2338/2023) | Bill, in legislative process | Tiered (closely follows EU) | Proposed, ~2% global turnover | Proposed supervisory authority | Closest structural analogue to EU |
The pitfall, worth flagging: don’t assume India’s Bill will end up looking like the EU AI Act if it ever becomes law. Indian statutory drafting tends to add or remove tiers, simplify or compound categories, and recast obligations during Standing Committee review. The current text is the start of a conversation, not its end. What experienced practitioners know is that comparative tables age fast in this space.
The EU AI Act itself is being interpreted through implementing acts month by month.
| Feature | India: AI Ethics Bill 2025 | India: MeitY Guidelines (Nov 2025) | EU AI Act 2024 | US AI Bill of Rights + EO | China GenAI Measures 2023 | Brazil PL 2338/2023 |
|---|---|---|---|---|---|---|
| Instrument type | Private Member’s Bill (Bill No. 59 of 2025) | Non-binding policy guidelines | Binding EU Regulation | Blueprint + Executive Order (non-statutory) | Sector-specific binding measures | Draft federal Bill (pending) |
| Risk classification | Categorical (high-risk AI listed) | Principles-based; no risk tiers | Four-tier (unacceptable / high / limited / minimal) | Sector-led; rights-based framing | Service-specific (generative AI focus) | Tiered, modelled on EU approach |
| Penalties | Civil up to ₹5 crore; criminal liability for repeat or wilful violations | None (non-binding) | Up to €35 million or 7% of global turnover | Agency-level enforcement; no central penalty schedule | Service suspension, fines, criminal referrals | Proposed administrative fines |
| Enforcement body | Proposed Ethics Committee for AI | MeitY (advisory) | National competent authorities + EU AI Office | FTC, EEOC and sectoral regulators | Cyberspace Administration of China (CAC) | Proposed federal authority |
| Status | Introduced 17 Dec 2025; awaiting Standing Committee | Released 5 Nov 2025; in force as soft-law | Adopted 2024; phased application | EO active; legislative track open | In force from Aug 2023 | Under parliamentary consideration |
| Notes | Categorical penalty ceiling lower than EU; criminal hook present | Designed to complement, not substitute, the Bill | Most comprehensive horizontal AI law | Rights-led rather than risk-led | Tightest content-moderation duties globally | Closely tracking the EU template |
What courts have already done: the personality-rights AI cases
Indian courts haven’t waited for Parliament. Between September 2023 and 2025, the Delhi High Court issued a sequence of personality-rights injunctions against AI-generated misuse that, taken together, amount to a judicially-built mini-statute on AI impersonation. Why does that matter? Because the AI Ethics and Accountability Bill 2025 is, in part, Parliament catching up to what the courts had been doing case by case.
The five cases below are the live judicial backdrop against which the Bill must be read.
The Anil Kapoor judgment
Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023 is the foundational ruling. The Delhi High Court, hearing a suit by a Bollywood actor against multiple defendants using AI-generated voice, image, and likeness without consent, granted an injunction protecting personality rights against AI-generated misuse. The court grounded the protection in Article 21 (right to privacy and dignity) and common-law passing off. The reasoning is straightforward: a person’s name, image, voice, and likeness are protectable interests, and the use of AI tools to misappropriate them does not change the underlying wrong.
The Amitabh Bachchan parent case
Amitabh Bachchan v. Rajat Nagi & Ors., CS(COMM) 819/2022 is the doctrinal predecessor. The court granted an interim injunction in 2022, before generative-AI deepfakes became a mass phenomenon, protecting the actor’s persona from unauthorised commercial use. The framework Anil Kapoor and the cases that followed all build on this. And the doctrine is older than the technology.
The Jackie Shroff ruling
Jaikishan Kakubhai Saraf alias Jackie Shroff v. The Peppy Store & Ors., CS(COMM) 389/2024 extended the personality-rights doctrine expressly to AI chatbots, e-commerce platforms, and social media accounts using the actor’s persona. It was, as far as Indian case law goes, the first reported judgment to name AI chatbots as injuncted entities. The judgment matters because it shifted the doctrine from human impersonators to algorithmic ones.
Aishwarya Rai and AI-generated explicit imagery
Aishwarya Rai Bachchan v. Aishwaryaworld.com & Ors., CS(COMM) 956/2025 applied the personality-rights doctrine to AI-generated explicit content. The court reinforced Article 21 grounding and expanded the remedy to include domain take-downs, search-engine de-indexing requests, and dynamic injunctions against future violators. The case is significant because the harm pattern (AI-generated explicit content) extends far beyond celebrities; the doctrine, however, was developed in celebrity cases first because they were the litigants with resources to pursue injunctions.
The Abhishek Bachchan ruling and platform-level orders
Abhishek Bachchan v. The Bollywood Tee Shop & Ors., CS(COMM) 960/2025, decided 10 September 2025, is the most operationally important of the personality-rights rulings for compliance teams. The plaint named seventeen defendants: e-commerce sellers using the actor’s image on merchandise, YouTube channels publishing AI-generated videos using his name and likeness, Google LLC as the YouTube operator, and the Ministry of Electronics and Information Technology and the Department of Telecommunications as nodal defendants. The Delhi High Court granted an interim injunction, directed Google LLC to take down the AI-video YouTube channels within seven days of notice, and directed MeitY and the Department of Telecommunications to issue blocking directions to disable URLs hosting infringing content. The ruling is significant for three reasons. First, it confirmed that personality-rights remedies reach AI-generated content as a matter of settled doctrine, not novel claim. Second, it brought platform intermediaries and the executive into the order, not just primary infringers. Third, it tied the personality-rights doctrine to operational platform-level enforcement, which is the same architecture the IT Rules 2026 use. The takeaway for compliance: AI cannot be used to commercially exploit a person’s voice or likeness in India, regardless of whether the AI Ethics Bill ever passes.
The Rajat Sharma deepfakes PIL
Rajat Sharma v. Union of India is the journalist-led PIL that put public pressure on MeitY to regulate deepfakes. The Delhi High Court issued notice to MeitY and asked for a response on the regulatory framework. The PIL is widely credited with accelerating MeitY’s pre-Bill activity, including the March 2024 Advisory and the eventual IT Rules 2026. The case shows the role of strategic litigation in pushing executive and legislative branches into action.
In practice, the personality-rights line of cases gives compliance teams a usable doctrine even without the Bill. If a deepfake or AI misuse incident occurs today, the litigation path is clearer than the regulatory path. We’d recommend that any organisation doing AI-driven persona work (voice cloning, image generation, deepfake detection) build its compliance posture around the personality-rights case law as the immediate live standard.
The pitfall to avoid: assuming that case-law remedies obviate statutory rules. They don’t. Injunctions reach individual defendants. Statute reaches structural conduct. The two are complementary, not substitutable. The Bill (or whatever instrument replaces it) is needed for structural reform; the cases are doing the work in the meantime.
ANI v. OpenAI and the training-data copyright gap
The most legally consequential AI litigation in India right now isn’t a deepfake injunction. It’s a copyright suit. ANI Media Pvt. Ltd. v. Open AI Inc. & Anr., CS(COMM) 1028/2024 is the first reported Indian generative-AI copyright case, and its outcome could shape AI training practices for every developer with India exposure. Why is this case the right way to understand the Bill’s biggest gap?
The suit
Asian News International, the news agency, filed suit at the Delhi High Court in November 2024, seeking ₹2 crore in damages and a permanent injunction against OpenAI. The two claims are interlinked. First, training ChatGPT on ANI’s copyrighted news content (the appropriation claim).
Second, ChatGPT’s “hallucinations” attributing fabricated statements to ANI (the misattribution claim). The Delhi High Court reserved the interim order in April 2026. The substantive order is awaited.
OpenAI’s defence
OpenAI’s defences are familiar to anyone following US AI copyright litigation: territorial jurisdiction (servers are outside India, training occurred outside India), fair-use-equivalent doctrine (transformative use of publicly accessible content), and standing (whether ANI’s interest in news content is the kind of copyright Indian law recognises in the AI training context). Each of these defences will, if accepted, materially constrain how Indian courts approach AI training-data claims.
What this case exposes about the Bill
The AI Ethics and Accountability Bill 2025 is silent on training-data copyright. It doesn’t create a copyright regime for AI training. It doesn’t modify the Copyright Act 1957. And it doesn’t address fair-use-equivalent doctrine for transformative AI training.
This silence is the Bill’s single most-criticised drafting gap. MediaNama flagged it on day one. LexOrbis flagged it in its January 2026 analysis. The reason matters: training data is the upstream input that determines whether downstream AI conduct is even possible. A statute that regulates outputs without addressing inputs is structurally incomplete.
In practice, the ANI v. OpenAI case will set the doctrinal direction for Indian AI training-data law for the next decade, regardless of whether the Bill passes. Compliance teams at content publishers, content platforms, and AI developers should be tracking it closely. The mistake we see most often is treating the case as a celebrity dispute; it is, in reality, the foundational case for Indian AI copyright doctrine.
A community question that surfaces repeatedly: would an opt-out mechanism for training data solve the problem? The honest answer is that opt-out works only if the upstream developer respects the opt-out signal. That’s a contractual or regulatory question, not a technical one. The Bill, as drafted, doesn’t address it.
The pitfall: don’t wait for the Bill to address training-data copyright. It won’t. Read the Bill alongside the Copyright Act and the ANI case as a three-part frame.
Criticisms and gaps in the Bill: what the drafting got wrong
A thorough explainer can’t stop at description. The Bill has been the subject of substantive criticism from commentators, academics, and practitioners. Some of the criticisms are drafting-level, fixable in Standing Committee. But others are structural, harder to fix.
Reading them together gives a clear-eyed picture of what the Bill is missing.
The “lawful purposes” surveillance loophole
The single most-cited criticism, articulated in a December 2025 Deccan Herald op-ed by a Jindal Global Law School academic, is that the Bill’s “lawful purposes” standard for AI surveillance is too vague to survive a constitutional challenge. As discussed earlier in the high-risk AI section, the Shreya Singhal ruling is the doctrinal anchor here. A vague offence definition fails constitutional muster. And the Bill’s surveillance clause, as drafted, would, in our view, be vulnerable to challenge from day one.
The fix is simple but politically charged: define “lawful purposes” with specificity (statutory authorisation, proportionality test, oversight architecture, sunset clauses).
No risk-tier classification
Unlike the EU AI Act’s four-tier classification, the Bill operates on a high-risk / not-high-risk binary. The consequence is a loss of proportionality: a spam filter or a basic recommendation engine faces the same (near-zero) scrutiny as a system sitting just below the high-risk line, while the most dangerous AI carries exactly the same obligations as a borderline case just above it. Risk-tiering exists in the EU AI Act for a reason: it lets compliance burdens scale with actual risk. The Indian Bill loses this calibration.
Silence on training data, AI-generated content liability, and data ownership
MediaNama’s analysis of the Bill flagged three structural silences. First, no training-data regime. Second, no clear allocation of liability for AI-generated content (is the developer, the deployer, or the user responsible for a defamatory output?). Third, no answer to who owns AI-generated outputs. Each silence is a gap that will be litigated in the absence of statute. The Bill is, in this sense, narrower than the regulatory question it sets out to answer.
No carve-out for open-source, academic, or sandbox AI
The developer duties (registration, audit, documentation, withdrawal) apply on a plain reading to open-source contributors, academic researchers, and sandbox deployments. The downstream effect is to freeze a meaningful share of India’s open-source AI ecosystem and to push academic AI research abroad. In our view, this is the most consequential second-order risk of the Bill as drafted, and it’s fixable with a single clause.
No coordination architecture with sectoral regulators
The Bill creates the Ethics Committee but does not specify how it coordinates with the RBI, SEBI, IRDAI, ICMR, and other sectoral regulators that already have AI rules in train. Without a coordination clause, fintech companies, healthtech companies, and regtech vendors face overlapping and potentially conflicting rules. The MeitY Guidelines handle this differently, designating sectoral regulators as the immediate AI-rules layer. The Bill should adopt a similar coordination architecture.
A common practitioner observation, often raised by senior partners at tier-1 firms reviewing the Bill for clients: the drafting reflects an ambition larger than the as-introduced text supports. The Standing Committee, if seized of the Bill, will need to redraft major sections. But whether the Standing Committee gets seized is itself uncertain, given the Private Member’s Bill pathway.
The pitfall to avoid: don’t assume the Bill will be redrafted to fix all of these. Private Members’ Bills, when they progress at all, often progress with cosmetic amendments only. Substantive redrafting usually requires the government to take ownership and reintroduce the Bill as a government-sponsored measure. Whether that happens is the central political question for 2026.
What businesses should do today: a 3-tier compliance roadmap
What should Indian compliance teams be doing right now, while the Bill is debated, while the MeitY Guidelines circulate as soft law, while the IT Rules 2026 are operational, and while the DPDP Act is in force? The honest answer depends on the size and exposure of the organisation. Here’s a three-tier compliance roadmap that compliance heads can adapt for Board reporting. Where does your organisation fit?
Tier 1: Startup deployer under ₹50 crore revenue
For early-stage startups deploying AI in customer-facing workflows, the immediate compliance demands are DPDP-aligned consent flows, IT Rules 2026 SGI labelling (if user-generated content is involved or AI is generating content for distribution), and a basic AI register listing every AI tool in production use. The bias-audit obligation is not yet mandatory under any enforceable instrument, but a minimum-viable bias-audit document for each high-risk system (hiring, credit scoring, recommendation) creates a defensible posture. Expected compliance burden: low, manageable in-house with a part-time legal lead.
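For the minimum-viable bias-audit document, a structured stub is enough to create the defensible posture described above. A sketch, using the Indian protected categories discussed earlier in this guide; no enforceable instrument prescribes these fields yet, so they are our assumptions:

```python
# Minimum-viable bias-audit record, per the Tier 1 posture above. No Indian
# audit framework is prescribed yet; fields and categories are our assumptions,
# using the Indian protected categories discussed earlier in this guide.
PROTECTED_CATEGORIES = ["gender", "caste", "religion", "region", "age", "disability"]

def bias_audit_stub(system_name: str, use_case: str) -> dict:
    """A defensible-posture template, not a statutory format."""
    return {
        "system": system_name,
        "use_case": use_case,                    # hiring, credit scoring, recommendation
        "audit_date": None,                      # fill on completion
        "metrics_by_category": {c: {"selection_rate": None, "error_rate": None}
                                for c in PROTECTED_CATEGORIES},
        "mitigations": [],                       # e.g. threshold adjustment, re-sampling
        "sign_off": {"legal": None, "engineering": None},
    }

print(bias_audit_stub("resume screener v2", "hiring")["metrics_by_category"]["caste"])
```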
Tier 2: Mid-cap developer or SaaS company
Mid-cap developers and SaaS companies face a more complex stack. The minimum compliance set: an AI register with model documentation (architecture, training data sources, intended use cases, known limitations), vendor contract clauses requiring bias-audit warranties and training-data lineage disclosures from upstream suppliers, board-level AI risk reporting on a quarterly cadence, a designated grievance-redressal channel, and incident-response protocols. The procurement-contract change is non-trivial and worth flagging separately: vendor contracts signed today should include AI ethics clauses that survive the eventual enactment of the Bill or its successor instrument.
This is where AI tools that Indian lawyers are already using in practice become genuinely useful, because legal-tech-AI deployment is itself a Tier 2 compliance problem the same firms are solving for their clients.
Tier 3: Listed company or public-sector deployer
Listed companies, public-sector undertakings, and regulated entities (banks, insurers, healthcare providers) face the highest compliance demand. The recommended set: a formal AI governance committee at the senior management level with Board reporting authority, third-party AI audits on an annual cadence, voluntary alignment with the MeitY Guidelines’ seven Sutras, designated AI Compliance Officer role with explicit responsibility for Bill-readiness, sectoral-regulator coordination protocols (RBI for banks, SEBI for listed entities, IRDAI for insurers), and litigation-readiness for the personality-rights and copyright case law discussed earlier.
The procurement contract change
Procurement contracts are the single highest-impact compliance lever a Board can pull. The practical reality is that inserting AI ethics clauses today (bias-audit warranties, training-data lineage disclosures, indemnities for third-party algorithmic harm, audit-rights provisions, withdrawal-and-remediation obligations) builds a compliance posture before any statute requires it. The EU AI Act has already made these clauses standard in European AI procurement. The same template can be adapted for Indian use.
We’d recommend that any organisation procuring AI today insert these clauses by default, on the same logic that DPDP-aligned data-processing clauses became standard before the Act was even notified.
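One way to make that default operational is to encode the clause set as a review checklist that procurement runs over every AI vendor contract. A minimal sketch; the five clause names track the list above, but the missing_clauses helper and the contract representation are our own assumptions:

```python
# The five clause families this guide recommends for AI procurement contracts.
AI_ETHICS_CLAUSES = [
    "bias-audit warranty",
    "training-data lineage disclosure",
    "indemnity for third-party algorithmic harm",
    "audit-rights provision",
    "withdrawal-and-remediation obligation",
]

def missing_clauses(contract: dict[str, bool]) -> list[str]:
    """Return the AI ethics clauses a vendor contract does not yet contain."""
    return [clause for clause in AI_ETHICS_CLAUSES if not contract.get(clause, False)]

# Example: a legacy SaaS contract that pre-dates the AI clause template.
legacy_contract = {"audit-rights provision": True}
print(missing_clauses(legacy_contract))  # four gaps to raise at the next renewal
```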
The internal AI Officer role
The DPDP Act created a Data Protection Officer role for significant data fiduciaries. The AI Ethics Bill, if enacted, would create regulatory exposure that maps neatly onto a designated AI Compliance Officer role. Even without enactment, large organisations are already starting to designate this role. Job postings on LinkedIn signal early movement.
Compliance heads should think about the AI Officer function as a complement to the DPO function, not a replacement. In practice, the better approach for most organisations is to start with Tier 1 hygiene (consent flows, SGI labelling, AI register) and scale up to Tier 2 and Tier 3 as the Bill’s status crystallises. The pitfall is to over-engineer for Tier 3 demand when the organisation’s actual exposure is at Tier 1. Calibrate compliance investment to actual exposure, not to worst-case statutory projections. That’s the discipline.
Tier 1 checklist (startup deployer):
- Map every AI tool you build or use against the DPDP Act’s data-fiduciary obligations.
- Maintain a synthetic-content labelling protocol for any user-generated AI content.
- Adopt the MeitY India AI Governance Guidelines as your written internal policy floor.
- Document a basic grievance channel and publish a designated point of contact.
- Watch the Bill’s Standing Committee report before committing to heavy controls.
Tier 2 checklist (mid-cap developer or SaaS company):
- Run a high-risk AI inventory aligned to the Bill’s likely high-risk categories.
- Build bias and fairness audits into release gates for hiring, credit, and content AI (a minimal sketch follows this list).
- Operationalise DPDP Act consent flows and breach-notification playbooks for AI pipelines.
- Build training-data provenance logs to manage Copyright Act 1957 exposure.
- Stand up a cross-functional AI ethics review forum reporting to the board.
Tier 3 checklist (listed company or public-sector deployer):
- Treat the Bill’s high-risk regime as your current minimum design standard.
- Run independent algorithmic impact assessments for surveillance, credit, and policing AI.
- Build human-in-the-loop overrides for any AI decision affecting rights, money, or liberty.
- Publish a public AI register and an annual transparency report; brief the audit committee.
- Coordinate with the Data Protection Board of India and sectoral regulators on AI use cases.
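On the Tier 2 bias-audit item, the sketch below shows what a release gate could look like at its simplest: compare selection rates across groups and block release if the ratio falls below a floor. The four-fifths (0.8) threshold is borrowed from the US disparate-impact heuristic, and the passes_bias_gate function and data shape are our own assumptions; neither the Bill nor the MeitY Guidelines prescribes a fairness metric:

```python
# Minimal bias gate: block release if the worst group's selection rate falls
# below 80% of the best group's (the "four-fifths" disparate-impact heuristic).
def selection_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def passes_bias_gate(results: dict[str, list[bool]], threshold: float = 0.8) -> bool:
    """results maps each group (e.g. a gender bucket) to per-candidate outcomes."""
    rates = {group: selection_rate(o) for group, o in results.items() if o}
    if len(rates) < 2:
        return True  # nothing to compare against
    worst, best = min(rates.values()), max(rates.values())
    return best > 0 and (worst / best) >= threshold

# Example: a hiring model's shortlisting outcomes by group.
audit = {"group_a": [True, True, False, True], "group_b": [True, False, False, False]}
print(passes_bias_gate(audit))  # False: 0.25 / 0.75 ≈ 0.33, below the 0.8 floor
```

A real audit would look well beyond selection-rate ratios, but even a crude gate of this kind produces the documentation trail the Bill’s developer duties contemplate.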
What happens next: Standing Committee, debate, and enactment odds
Here’s the question every reader of this guide should hold in mind: what’s the realistic probability that the AI Ethics and Accountability Bill 2025 becomes law? The honest base-rate answer is low. The directional-signal answer is high. Both are correct, and a sensible compliance posture treats them as compatible.
What a Private Member’s Bill is, and the enactment base rate
A Private Member’s Bill is one introduced by an MP who is not a Government minister. Hundreds are tabled in every Parliament (729 were introduced in the Lok Sabha and 705 in the Rajya Sabha during the 17th Lok Sabha), yet only fourteen have ever been passed into law since 1952, the last one in 1970. The base-rate enactment probability is in the low single digits. The fourteen enactments include some genuinely consequential statutes, but they are statistical outliers, not the modal outcome.
Standing Committee referral and its likelihood
The next procedural step, if the Bill is taken up at all, is referral to a Standing Committee for examination. The Standing Committee process typically runs six to twelve months and can substantially redraft the Bill before it returns for debate. Whether the Bill is referred at all depends on parliamentary calendar pressure, the Government’s appetite for engaging with the Bill’s themes, and the political salience of AI harms in the news cycle.
The Digital India Act as the more likely vehicle
The more likely path to hard AI law in India runs through the Digital India Act, the long-anticipated replacement for the IT Act, 2000. Public consultation on the Digital India Act is expected in 2026. If the Government wants to legislate on AI, the cleaner route is to fold AI provisions into the Digital India Act rather than to advance a Private Member’s Bill. Compliance teams should track the Digital India Act consultation closely; that is where the next material AI provisions are most likely to land.
Five second-order shifts to watch in 2026-2028
Looking ahead, five non-obvious downstream shifts are likely to play out regardless of whether the Bill itself becomes law. First, the AI Compliance Officer role will move from voluntary to expected at large companies. Second, AI-liability insurance products will emerge, with D&O policies starting to carve in or carve out AI-decision liabilities.
Third, sectoral regulators (RBI, SEBI, IRDAI) will issue AI rules in their domains, which will in practice be the binding AI law for those sectors. Fourth, AI-grievance litigation will multiply in consumer fora and writ jurisdiction, with the Bill cited as persuasive material even before enactment. Fifth, mandatory AI labelling (currently SGI under the IT Rules 2026) will likely expand to cover deployers and developers, not just intermediaries.
What experienced practitioners know is that the Bill’s most important function may be agenda-setting, not enactment. Indian Private Member’s Bills rarely become law directly, but they regularly act as policy precursors. The Rights of Transgender Persons Bill, 2014, introduced as a PMB by a Member of the Rajya Sabha, did not become law itself; the Government later introduced and passed the Transgender Persons (Protection of Rights) Act, 2019, which carried forward the PMB’s core themes. The AI Ethics and Accountability Bill 2025 is well positioned to play the same role for whatever AI legislation comes next, most plausibly through the Digital India Act consultation. Even if the Bill dies in the Lok Sabha, it has already shifted the conversation: vendor contracts, board reporting, sectoral-regulator drafts, and litigation strategy are all moving in the Bill’s direction. The mistake we see most often is treating “did it pass” as the only question worth tracking.
The harder question is “what did it move?” The pitfall: don’t under-invest in tracking parliamentary process. Even Private Members’ Bills can be folded into government-sponsored Bills, can shape Standing Committee reports on adjacent legislation, and can be cited in budget speeches. The 2026 Budget Session will be the next material checkpoint for AI-policy signals.
FAQ: AI Ethics and Accountability Bill 2025
1. What is the Artificial Intelligence (Ethics and Accountability) Bill, 2025? It’s a Private Member’s Bill introduced in the Lok Sabha on 17 December 2025 as Bill No. 59 of 2025. It would create a statutory Ethics Committee for AI, mandate bias audits, restrict AI surveillance, and impose civil penalties up to ₹5 crore plus criminal liability. Not yet enacted.
2. Is the AI Ethics and Accountability Bill, 2025 a law in India? No. It’s a Private Member’s Bill. As of May 2026 it hasn’t been debated, hasn’t been referred to a Standing Committee, and hasn’t been enacted. Compliance obligations on AI conduct today flow from the DPDP Act 2023, the IT Rules 2026, and the BNS.
3. Who introduced the AI Ethics and Accountability Bill in the Lok Sabha? The Bill was introduced by a Bharatiya Janata Party Member of Parliament from Madhya Pradesh; the introducer’s name appears on the public Sansad.in record. The Bill is listed as Bill No. 59 of 2025 (as introduced) and is hosted on the Lok Sabha Bills portal.
4. When was the AI Ethics and Accountability Bill, 2025 introduced? The Bill was introduced on 17 December 2025, during the Winter Session of the Lok Sabha. The Sansad.in PDF and contemporaneous reporting by Bar & Bench, MediaNama, and SCC OnLine all carry the date.
5. What is a Private Member’s Bill, and can the AI Ethics Bill become law? A Private Member’s Bill is one introduced by an MP who isn’t a Government minister. Hundreds are tabled in every Parliament (729 in the Lok Sabha and 705 in the Rajya Sabha during the 17th Lok Sabha), yet only fourteen have been passed into law since 1952, the last in 1970. Base-rate enactment probability is in the low single digits. Even without enactment, the Bill shapes vendor contracts, regulatory drafts, and litigation strategy.
6. What is the maximum penalty under the AI Ethics and Accountability Bill, 2025? The Bill proposes a civil penalty ceiling of ₹5 crore for non-compliance with developer or deployer duties. Penalty amounts are tiered by severity, scale of deployment, prior conduct, and remediation efforts. Repeat or wilful violations attract criminal liability over and above civil penalties.
7. Does the AI Ethics Bill create criminal liability? Yes, for repeat or wilful violations of developer or deployer duties, in addition to civil penalties. The exact maximum term should be read directly off the Sansad PDF rather than off secondary commentary. Existing BNS liability for cheating-by-impersonation continues to apply to AI-enabled fraud.
8. What is the Ethics Committee for Artificial Intelligence proposed in the Bill? A statutory body with powers to receive complaints, conduct audits, issue advisories, refer matters for prosecution, and recommend penalties. It would sit alongside the Data Protection Board of India, with both bodies coordinating on overlapping matters. It’s the Bill’s centrepiece.
9. Who will sit on the AI Ethics Committee? The Bill provides for a chairperson with judicial or technical eminence, plus members drawn from technology, law, ethics, civil society, and sectoral expertise. Specifics on selection, tenure, and quorum are likely to be developed during Standing Committee review or in subordinate rule-making.
10. What counts as a “high-risk AI system” under the Bill? Surveillance, hiring and employment, credit scoring, predictive policing, and healthcare are flagged as high-risk. The list isn’t exhaustive; the Ethics Committee could add categories by notification. The Bill doesn’t adopt a tiered risk classification like the EU AI Act’s four-tier system.
11. Does the Bill apply to AI surveillance? Yes. The Bill restricts AI surveillance to “lawful purposes” with safeguards. Constitutional commentators argue the standard is too vague and fails the precision test the Supreme Court set in Shreya Singhal, the Section 66A judgment. Surveillance must still satisfy the Puttaswamy privacy framework.
12. What obligations does the Bill place on AI developers? Registration of high-risk systems, pre-deployment bias audits, model documentation (architecture, training data, intended use, known limitations), and a duty to withdraw or remediate biased systems. On a plain reading these duties apply to open-source and academic models too.
13. What obligations does the Bill place on AI deployers? Pre-deployment risk assessment, transparency notices to users interacting with the AI system, a designated grievance-redressal channel, and material-incident reporting. The duties echo the DPDP Act’s data fiduciary obligations and are the more operationally relevant set for most Indian companies.
14. How does the AI Ethics Bill interact with the DPDP Act, 2023? The DPDP Act governs personal-data processing and is enforceable today. The AI Bill, if enacted, would govern AI system conduct. The two overlap on automated decisions, AI training on personal data, and grievance redressal. The Ethics Committee and the DPB would coordinate, not subsume.
15. Does the AI Ethics Bill cover deepfakes? Partially, through personality and surveillance restrictions. The enforceable deepfake rules today come from the IT Amendment Rules 2026 (operational 20 February 2026): SGI labelling plus a two-hour takedown window for deepfakes (and three hours for other unlawful content) once a court or the appropriate government flags it. IT Rules 2026 compliance is the immediate demand.
16. How does the India AI Ethics Bill compare with the EU AI Act? India: principles-led, no risk tiers, single Ethics Committee, ₹5 crore civil cap plus criminal liability. EU: four-tier risk classification, prohibited practices, GPAI chapter, conformity assessment, penalties up to €35 million or 7% of global turnover. Structurally simpler, substantively narrower.
17. How does the AI Ethics Bill differ from the MeitY India AI Governance Guidelines (Nov 2025)? The Bill is proposed hard law: statutory, penalty-backed. The Guidelines are soft law: principles-based, advisory, non-binding. Both arrived within a six-week window in late 2025, representing two competing regulatory philosophies. The Guidelines are in force as guidance; the Bill is not law.
18. What should compliance teams be doing today, before the Bill is debated? Focus on what’s enforceable: DPDP Act 2023, IT Rules 2026 SGI labelling, BNS 318-319 readiness, plus voluntary alignment with MeitY’s seven Sutras. Maintain an AI register, insert AI ethics clauses in vendor contracts, and designate an AI Compliance Officer at large organisations.
References
Case Law
- Abhishek Bachchan v. The Bollywood Tee Shop & Ors., CS(COMM) 960/2025: Delhi High Court, 10 September 2025
- Aishwarya Rai Bachchan v. Aishwaryaworld.com & Ors., CS(COMM) 956/2025: Delhi High Court, 9 September 2025
- Amitabh Bachchan v. Rajat Nagi & Ors., CS(COMM) 819/2022: Delhi High Court, 25 November 2022
- ANI Media Pvt. Ltd. v. Open AI Inc. & Anr., CS(COMM) 1028/2024: Delhi High Court (interim order reserved, April 2026)
- Anil Kapoor v. Simply Life India & Ors., CS(COMM) 652/2023: Delhi High Court, 20 September 2023
- Jaikishan Kakubhai Saraf alias Jackie Shroff v. The Peppy Store & Ors., CS(COMM) 389/2024: Delhi High Court, 18 July 2024
- Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1: Supreme Court of India, 9-judge Constitution Bench, 24 August 2017
- Rajat Sharma v. Union of India: Delhi High Court (PIL on deepfakes, 2024)
- Shreya Singhal v. Union of India, (2015) 5 SCC 1: Supreme Court of India, 24 March 2015
- State v. Loomis, 881 N.W.2d 749 (Wis. 2016): Wisconsin Supreme Court (global comparator)
Statutes, Bills, and Subordinate Legislation
- Copyright Act, 1957: Copyright Office, Government of India
- Information Technology Act, 2000: Ministry of Electronics and Information Technology (note: Section 66A struck down in Shreya Singhal, 2015)
- Bharatiya Nyaya Sanhita, 2023: Ministry of Home Affairs; sections cited: 318, 319
- Digital Personal Data Protection Act, 2023: Ministry of Electronics and Information Technology; sections cited: 3
- Regulation (EU) 2024/1689, European Union Artificial Intelligence Act (comparator)
- Artificial Intelligence (Ethics and Accountability) Bill, 2025 (Bill No. 59 of 2025, Lok Sabha; as introduced, 17 December 2025): contemporaneous coverage by Bar & Bench (17 December 2025). The official Bill text is hosted on the Lok Sabha Bills portal at Sansad.in; the portal endpoint was inaccessible at the time of last verification.
- Digital Personal Data Protection Rules, 2025: Ministry of Electronics and Information Technology
- MeitY India AI Governance Guidelines, 5 November 2025: Press Information Bureau / MeitY
- Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026: Ministry of Electronics and Information Technology (notified 10 February 2026; operational 20 February 2026)
This article is for informational and educational purposes only and does not constitute legal advice. Bill text and clauses are cited as introduced; readers must verify against the current Lok Sabha record before relying on any specific clause. For specific legal guidance on AI compliance or any matter arising from the AI Ethics and Accountability Bill 2025, consult a qualified legal professional.