By Mahak Yadav and Avani Raj.
The increasing use of artificial intelligence in international arbitration raises important questions for India’s arbitration regime under the Arbitration and Conciliation Act, 1996, which remains silent on AI-assisted decision-making. This article examines whether AI-assisted arbitral awards are compatible with the statutory framework, particularly Sections 31 and 34, and with Supreme Court jurisprudence on reasoned awards and public policy. It argues that while AI is not per se impermissible, its use is normatively justified only in an assistive, human-in-the-loop role that ensures transparency, accountability, and confidentiality.
Keywords: Artificial Intelligence; Arbitration and Conciliation Act, 1996; Section 34 Judicial Review; Reasoned Arbitral Awards; Public Policy and Patent Illegality; Human-in-the-Loop Adjudication; Confidentiality in Arbitration; Algorithmic Bias.
The use of artificial intelligence (“AI”) in arbitration has recently gained institutional acceptance at the international level. Arbitral bodies such as the American Arbitration Association and the International Centre for Dispute Resolution have introduced AI-assisted tools to support the issuance of arbitral awards, while the China International Economic and Trade Arbitration Commission has issued the Asia-Pacific region’s first Guidelines on the Use of AI in Arbitration. These developments reflect a broader shift toward efficiency-driven adjudication in dispute resolution. However, the Indian arbitration regime under the Arbitration and Conciliation Act, 1996, remains silent on the permissibility and scope of AI-assisted decision-making. This silence is particularly significant given the statutory emphasis on procedural flexibility, grounded in party autonomy under Section 19, and the requirement of reasoned arbitral awards under Section 31. Indian courts, in landmark cases such as ONGC v. Saw Pipes and Associate Builders v. DDA, have consistently emphasized that arbitral awards must reflect an independent application of mind and adherence to the principles of natural justice.
Against this background, this article pursues two aims: first, to examine whether AI-assisted arbitration is feasible within the statutory framework of the Act and the scope of judicial review under Section 34; and second, to assess whether its adoption in Indian arbitrations is desirable, balancing efficiency gains against concerns of transparency, bias, and accountability.
Section 34 of the Arbitration and Conciliation Act, 1996, circumscribes judicial interference with arbitral awards to narrowly defined grounds, reflecting the legislative policy of minimal court intervention. The provision permits setting aside an arbitral award where the procedure adopted violates the parties’ agreement or the Act, where the award contravenes the public policy of India, or where it suffers from patent illegality.
In ONGC v. Saw Pipes Ltd., the Supreme Court expanded the scope of “public policy” to include patent illegality, while subsequently calibrating this expansion in Associate Builders v. DDA by clarifying that interference is warranted only where the award is perverse, irrational, or reflects no application of mind. In Ssangyong Engineering & Construction Co. Ltd. v. NHAI, the Court further narrowed the scope of review post the 2015 amendments, holding that courts cannot reappreciate evidence and may intervene only where the award contravenes fundamental notions of justice or suffers from patent illegality.
Against this backdrop, AI-assisted reasoning raises questions about whether such awards meet the requirement of conscious and independent adjudication. Indian courts have consistently treated the requirement of a “reasoned award” under Section 31(3) as an integral component of natural justice. In Som Datt Builders v. State of Kerala, the Supreme Court held that reasons must disclose a rational nexus between the material on record and the conclusions reached, even if they are concise. Similarly, in Dyna Technologies v. Crompton Greaves Ltd., the Court observed that reasons are the “heartbeat” of an arbitral award and that an absence of intelligible reasoning may attract interference under Section 34. If an award is substantially generated by AI, issues arise regarding attribution of reasoning and decision-making. A “black-box” AI outcome lacking explainability or traceable reasoning may render the award vulnerable to challenge for perversity or patent illegality, especially where the tribunal cannot demonstrate independent application of mind to the facts and law. Further, reliance on AI tools trained on opaque datasets may raise concerns under Section 18 of the Act, which mandates equal treatment of parties, especially if algorithmic bias or data asymmetry can be shown to have influenced the outcome.
Section 34 does not prohibit the use of technological assistance in arbitration, provided the tribunal retains control over the decision and the award reflects independent application of mind. Courts assess the substance of the reasoning rather than the mode of assistance used. Consequently, AI-assisted arbitration is not per se incompatible with Section 34. However, its permissibility depends on transparency and demonstrable human oversight. In the absence of a statutory or institutional framework regulating AI use, awards substantially reliant on AI-generated reasoning are likely to face closer scrutiny under the grounds of patent illegality and conflict with public policy. This is because opaque or unregulated AI use may compromise the arbitrator’s independent application of mind, due process and the requirement of reasoned awards.
The likely benefits of integrating AI into Indian arbitration should be evaluated by its impact on efficiency and fairness in arbitral decision-making. Recent empirical and institutional developments suggest that AI can make the arbitral process more efficient. However, the unregulated use of AI in arbitral decision-making raises serious concerns regarding transparency, bias, accountability, explainability, and confidentiality, each of which bears directly on the validity of the resulting award.
Internationally, arbitral institutions have taken a cautious yet inconsistent approach to the use of AI. Guidelines issued by professional bodies such as CIArb, SVAMC, and the AAA-ICDR permit AI as a supportive tool, subject to human oversight and disclosure. By contrast, several leading institutions, including the ICC, ICSID, the LCIA, and SIAC, have yet to adopt rules governing AI in the arbitral process. This uneven landscape reflects an emerging consensus that while AI can assist with procedural tasks, its role in core decision-making remains contested.
Empirical evidence underscores the distinction between assistive and substitutive uses of AI. The 2025 International Arbitration Survey by White & Case collected 2,402 questionnaire responses and 117 interviews from a diverse cross-section of the international arbitration community, including in-house counsel from the public and private sectors, arbitrators, private practitioners, representatives of arbitral institutions, academics, tribunal secretaries, experts, and third-party funders. Support for AI in administrative tasks is strong: 77% of respondents favoured using AI to calculate interest, costs, and damages, and 66% supported using it to summarize submissions. By contrast, only 23% supported using AI for legal reasoning, and a majority opposed its use to evaluate merits or credibility. This skepticism is reinforced by recent research showing that AI adjudicators apply the law consistently and strictly, whereas human adjudicators weigh a wider context and draw on moral reasoning in their decisions. The central concern is that AI biases and errors can pass unnoticed, a risk compounded by the “black-box” nature of large language models.
Apart from concerns about bias and explainability, using AI in arbitration presents serious challenges to the confidentiality and privacy that are essential to arbitral proceedings under Section 42A. Arbitration often involves sharing sensitive commercial information, trade secrets, and personal data. This makes deploying AI particularly delicate, especially when using third-party or cloud-based tools. The 2023 BCLP Annual Arbitration Survey highlights data protection and confidentiality as major concerns for arbitration users regarding AI adoption. In response, the SVAMC Guidelines stress that AI must be used in a way that respects confidentiality obligations. They also warn against processing confidential information without permission and proper safeguards. Therefore, confidentiality should be a key limit on AI-assisted arbitration, necessitating clear disclosure requirements, security standards, and restrictions on data retention to maintain the legitimacy of arbitration.
It is important to test these concerns against real-world experience, and recent cases show that uncritical reliance on AI harms procedural integrity. In Mata v. Avianca, a US court sanctioned lawyers for submitting AI-generated fake citations, illustrating the risks of error and loss of credibility when AI outputs are not verified. In LaPaglia v. Valve Corp., a party challenged the award, alleging that the arbitrator’s reliance on AI for reasoning amounted to an unauthorized delegation of adjudicatory authority. Although neither is an Indian case, both illustrate how AI can jeopardize fairness, party autonomy, and independent adjudication, principles that Indian courts treat as essential to the legitimacy of an award.
The Indian judiciary and policy discourse have likewise taken a cautious approach to AI. The Kerala High Court’s guidelines prohibit the use of AI in judicial reasoning, citing concerns of data security, privacy, and public confidence. The Supreme Court of India’s Centre for Research and Planning, in its white paper on AI and the judiciary, advocates a governance framework centered on human-in-the-loop oversight, mandatory verification protocols, and transparency obligations whenever AI assistance is used. This cautious position was judicially reaffirmed in Kartikeya Rawal v. Union of India, where, while dismissing a PIL seeking regulation of AI in the judiciary, the Supreme Court categorically assured that AI would not be permitted to overtake judicial decision-making, emphasising that technology must remain strictly subordinate to human judgment. Although these guidelines address the courts, they offer useful guidance for the use of AI in arbitration, since arbitral awards are ultimately reviewed under Section 34 of the A&C Act.
Unregulated use of AI in arbitration sits uneasily with Indian arbitration law. Sections 18 and 31 mandate equal treatment of parties and reasoned awards, and AI systems that rely primarily on probabilistic pattern matching rather than genuine reasoning strain both mandates. Apple’s study The Illusion of Thinking shows that large reasoning models can reproduce learned patterns but falter when confronted with novel or complex problems. These limitations make it difficult for an arbitrator to explain, defend, and accept accountability for AI-influenced decisions, thereby undermining transparency, accountability, and clarity.
This is not to negate the role of AI in Indian arbitration altogether. The T.K. Viswanathan Committee treats AI as a useful tool for reducing delay and procedural friction, and Pyrrho Investments Ltd. v. MWB Property Ltd. illustrates judicial acceptance of AI for technical tasks such as predictive coding under human oversight. However, extending AI into substantive legal reasoning risks diluting the statutory mandates under Sections 18 and 31 of the A&C Act, which require impartial treatment, intelligible reasoning, and a demonstrable application of mind. Accordingly, AI assistance can be normatively justified only where it operates in an assistive, human-in-the-loop capacity, supported by standards on disclosure, verification, explainability, and accountability. Such a calibrated approach preserves efficiency gains while remaining faithful to the foundational principles of arbitral legitimacy under Indian law.
AI-assisted arbitration occupies a delicate position in India’s arbitration system. While AI can significantly improve efficiency in procedural and administrative tasks, its unchecked use in decision-making poses serious challenges under the Arbitration and Conciliation Act, 1996. Indian arbitration law insists on independent adjudication, well-reasoned awards, equality of parties, fair procedure, and confidentiality, qualities that could be undermined if arbitral decisions are shaped by opaque or unexplainable AI systems lacking human oversight or safeguards for sensitive information. In the absence of a specific regulatory framework, the validity of AI-assisted awards will turn on demonstrable human oversight and transparency, together with robust protection of arbitral confidentiality. Any future use of AI should therefore follow a human-in-the-loop model, with clear rules on disclosure, data security, and restrictions on the use and storage of confidential information. A balanced and calibrated approach is essential to capture efficiency gains while preserving the confidentiality, trust, and legal integrity on which arbitration in India depends.
Copyright© 2026 Milon K. Banerji Centre for Arbitration Law instituted at NALSAR University of Law, Hyderabad. All rights reserved.