The Legal Personality of AI in Arbitration: Can Machines Be Arbitrators?
- Narmadha Ragunath
The rapid integration of Artificial Intelligence (AI) into dispute resolution mechanisms has ushered in a transformative era for arbitration. From document review and predictive analytics to Online Dispute Resolution (ODR), AI is no longer a peripheral tool but an active participant in the adjudicatory process. This raises a provocative and complex question: can AI systems transcend their role as facilitators and assume the position of arbitrators? In other words, can machines possess legal personality sufficient to adjudicate disputes?
This article critically examines whether AI can be recognised as an arbitrator within the existing legal framework, the doctrinal challenges of assigning legal personality to machines, and the implications for procedural fairness, accountability and legitimacy in arbitration.
Understanding Legal Personality in Arbitration
Legal personality refers to the capacity of an entity to hold rights and obligations under the law. Traditionally, arbitrators are natural persons entrusted with adjudicatory authority based on party consent. Arbitration laws across jurisdictions, including the UNCITRAL Model Law and the Arbitration and Conciliation Act, 1996 (India), implicitly assume that arbitrators are human.
Key attributes of an arbitrator include:
Independence and impartiality
Capacity to exercise judgment and discretion
Accountability for misconduct or bias
AI, however, lacks consciousness, intent and moral agency, qualities that are foundational to traditional notions of legal personality.
AI in Arbitration: Current Role and Capabilities
AI is already transforming arbitration through:
Predictive analytics (forecasting case outcomes)
Document automation and review
Case management systems in ODR platforms
Decision-support tools for arbitrators
Some advanced systems can simulate reasoning based on large datasets and past precedents. However, simulation is not equivalent to adjudication. AI operates through algorithms and data patterns, not through normative judgment or legal reasoning grounded in values.
The Core Question: Can AI Be an Arbitrator?
1. Party Autonomy vs Legal Constraints
Arbitration is fundamentally rooted in party autonomy. If parties agree to appoint an AI system as an arbitrator, should that agreement be upheld?
While party autonomy is broad, it is not absolute. Arbitration frameworks impose mandatory requirements to ensure fairness and enforceability. Most national laws:
Require arbitrators to be natural persons
Impose duties that presuppose human cognition (e.g., disclosure of bias)
Thus, even if parties consent, an AI arbitrator may not satisfy statutory requirements, rendering the award vulnerable to challenge.
2. The Challenge of Legal Personality for AI
Granting AI the status of an arbitrator would require recognising it as a legal person or at least a quasi-legal entity.
Three theoretical models emerge:
Tool Theory: AI is merely an instrument used by human arbitrators
Agency Theory: AI acts as an agent under human supervision
Electronic Personhood Theory: AI is granted independent legal status
The third model is the most controversial. Recognising AI as an “electronic person” raises unresolved questions:
Who bears liability for erroneous decisions?
Can AI be held accountable for bias embedded in algorithms?
How can independence and impartiality be assessed?
At present, no jurisdiction formally recognises AI as a legal person capable of adjudication.
3. Due Process and Procedural Fairness
Arbitration must comply with principles of natural justice, including:
Audi alteram partem (right to be heard)
Nemo judex in causa sua (absence of bias)
AI systems present significant concerns:
Opacity (“black box” problem): Decisions may not be explainable
Algorithmic bias: AI may replicate biases in training data
Lack of transparent reasoning: most laws require awards to state intelligible reasons, which an AI system may be unable to supply
Courts reviewing arbitral awards require clear reasoning. An AI-generated award that cannot adequately explain its decision-making process risks being set aside.
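To make the algorithmic-bias concern concrete, the following deliberately simplified sketch (all data and names hypothetical) shows how a pattern-matching system trained on skewed past outcomes simply reproduces that skew, deciding on the basis of an attribute that should be legally irrelevant:

```python
from collections import Counter

# Hypothetical toy dataset of past awards: each record pairs an
# irrelevant attribute (the claimant's region) with the outcome.
# The data is deliberately skewed: region "A" claimants mostly won.
history = [
    ("A", "claimant_wins"), ("A", "claimant_wins"),
    ("A", "claimant_wins"), ("A", "claimant_wins"),
    ("B", "respondent_wins"), ("B", "respondent_wins"),
    ("B", "respondent_wins"), ("B", "claimant_wins"),
]

def train(data):
    """Learn the majority outcome per region -- a stand-in for any
    model that extracts statistical patterns from past awards."""
    model = {}
    for region in {r for r, _ in data}:
        outcomes = Counter(o for r, o in data if r == region)
        model[region] = outcomes.most_common(1)[0][0]
    return model

model = train(history)

# The "model" now decides purely on the irrelevant attribute,
# replicating the bias embedded in its training data:
print(model["A"])  # claimant_wins
print(model["B"])  # respondent_wins
```

A real system would be far more sophisticated, but the structural point is the same: without human scrutiny of the training data, historical bias becomes the decision rule, and the resulting "award" offers no reasoning a reviewing court could assess.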
4. Enforcement Challenges
Under instruments like the New York Convention, 1958, arbitral awards must meet procedural and substantive standards.
An AI arbitrator raises enforcement issues:
Is the tribunal properly constituted?
Does the award meet due process requirements?
Can an AI-generated award be attributed to a “tribunal”?
If courts refuse recognition, appointing an AI arbitrator becomes practically pointless.
Comparative Developments and Emerging Trends
While fully autonomous AI arbitrators are not yet recognised, hybrid models are emerging:
AI-assisted arbitration: AI supports human arbitrators in decision-making
ODR platforms: Automated systems resolve low-value disputes (e.g., e-commerce)
Algorithmic mediation: AI suggests settlement outcomes
These developments indicate a gradual shift, not toward replacing arbitrators but toward augmenting human decision-making.
Normative and Ethical Considerations
The prospect of AI arbitrators raises broader concerns:
Legitimacy: Would parties trust a machine to resolve complex disputes?
Human element: Arbitration often involves equity, fairness and contextual judgment
Ethics: Who is responsible for ensuring fairness in AI systems?
Arbitration is not merely a technical process; it is deeply human, involving discretion, empathy and normative reasoning, qualities that AI currently cannot replicate.
The Way Forward: Regulation, Not Replacement
Rather than granting AI independent adjudicatory authority, a more viable approach involves:
Regulating AI use in arbitration
Ensuring transparency and explainability of algorithms
Establishing ethical standards for AI deployment
Retaining human oversight in all decision-making processes
International bodies and arbitral institutions may soon develop guidelines governing AI in arbitration, balancing innovation with due process.
Conclusion
The idea of AI as an arbitrator challenges foundational principles of arbitration law. While technological advancements have significantly enhanced efficiency, the leap from assistance to adjudication is neither legally nor philosophically justified at present. AI lacks legal personality, moral agency and accountability, attributes essential for an arbitrator. Granting machines the power to decide disputes risks undermining procedural fairness, enforceability and the legitimacy of arbitration itself.
For now, the future lies not in replacing arbitrators with machines, but in integrating AI as a powerful tool under human control. Arbitration must evolve with technology, but not at the cost of its core values.
