Abstract

This paper presents the first comprehensive empirical evaluation of Large Language Models’ (‘LLMs’) performance in Indian legal education. We compare six Artificial Intelligence (‘AI’) chatbots with law students at the National Law School of India University, Bengaluru, across four subjects: Contract Law, Corporate Law, Criminal Procedure, and Jurisprudence. Our findings show that LLMs achieve performance comparable to human students (B+ grade), with newer commercial models consistently outperforming older and open-source alternatives. We also find that while LLMs excel in theoretical subjects and structured legal analysis, they show limitations in handling jurisdiction-specific knowledge and complex scenario-based reasoning. These findings have important implications for legal education in diverse jurisdictions and highlight the need for adaptive pedagogical approaches in an AI-augmented legal landscape.

Digital Object Identifier (DOI)

10.55496/RZKH7712
