

By: Shruti Singh, Advocate


As artificial intelligence (AI) is increasingly integrated into the legal sector, chatbots are emerging as useful tools for lawyers and legal practitioners. OpenAI’s ChatGPT, built on the GPT-3 family of large language models, stands out as a revolutionary force poised to transform various aspects of legal practice, ranging from legal research to document creation to distributing general legal information to the public. This paper delves into the potential applications of chatbots such as ChatGPT within the legal domain. It also discusses the accompanying challenges and ethical considerations that must be carefully navigated when using such technology. A key objective of the paper is to offer insights into the evolution of these chatbots and to anticipate their impact on the legal profession. As AI advances, understanding the implications of incorporating chatbots into legal workflows becomes ever more important. In doing so, the paper sheds light on both the promise of this technological shift and its complexities.


On November 30, 2022, OpenAI introduced ChatGPT, a highly sophisticated chatbot. The majority of the content in this paper was created in under an hour using prompts in ChatGPT, highlighting its potential implications for legal services and society. ChatGPT’s responses were not flawless and occasionally presented challenges; moreover, utilizing an AI tool for legal services raises significant regulatory and ethical concerns. Despite its imperfections, ChatGPT highlights the potential of artificial intelligence, signaling an imminent transformation in how we access information, receive legal services, and prepare for careers. This technological advancement prompts reflection on the changing role of knowledge workers and on the attribution of work, such as identifying the authorship of written content. There are also concerns about potential misuse of, and overreliance on, information generated by these tools. The disruptions resulting from the rapid development of AI are no longer distant; they are already here. This paper offers a glimpse into what lies ahead, emphasizing the need for careful consideration and ethical scrutiny as we navigate the impact of AI on our lives and society.


The legal field is undergoing a profound evolution fueled by the integration of new technologies. Lawyers are increasingly shifting their focus from traditional concerns such as billable hours and case management towards technology-driven consulting services aimed at optimizing practice efficiency. This transition brings new responsibilities, particularly in data analytics and the use of artificial intelligence (AI) tools such as ChatGPT. These tools are reshaping tasks like document drafting, human-AI collaboration on compliance, and contract lifecycle management.

The adoption of AI marks a significant milestone in reshaping the legal landscape, empowering professionals to augment their practices and expand their skill sets while upholding core values. Integrating ChatGPT into existing systems facilitates process improvements, benefiting both clients and lawyers and providing a competitive advantage. Moreover, AI technologies such as ChatGPT, with their vast access to information and rapid data assimilation, can also help students complete curricular activities.

The fusion of instructors’ expertise with existing AI tools elevates the proficiency of legal practice, enabling simulations that reflect cutting-edge AI advancements. These simulations can be seamlessly integrated into classes, offering tailored instruction and insights into how specific cases unfold using particular technologies. As technological progress continues to reshape various industries, legal professionals must adapt swiftly while maintaining their established practices and values.

Lawyers are entrusted with acquiring new knowledge and understanding of AI platforms, along with fostering awareness of ethical considerations linked to these transformative changes. Navigating this evolving landscape requires legal practitioners to strategically leverage technological advancements for success while upholding the integrity of their roles.


In the Supreme Court of India, the translation of legal documents between English and vernacular languages is facilitated by SUVAS (Supreme Court Vidhik Anuvaad Software).


The Punjab & Haryana High Court utilized ChatGPT for input on a bail petition involving allegations of a brutal fatal assault. The presiding judge sought a broader perspective on bail jurisprudence related to cruelty. Importantly, this ChatGPT reference does not express an opinion on the case’s merits, emphasizing its sole purpose to provide a comprehensive understanding of bail considerations in cases involving cruelty.

Bail jurisprudence in cases of cruel assault depends on specific circumstances, local laws, and the severity of the crime. Judges may be cautious when cruelty is involved, considering the accused’s potential danger to the community and flight risk. Factors such as the severity of the assault, criminal history, and evidence strength are crucial in bail decisions. Despite the seriousness of the charges, the presumption of innocence prevails, and bail may be granted if the accused does not pose a risk to the community or a flight risk.

The reference to ChatGPT is explicitly stated as non-binding, indicating that trial courts will not consider these comments in their proceedings. The Supreme Court’s Artificial Intelligence Committee is actively exploring AI applications in the judicial sector, focusing on document translation, legal research assistance, and process automation.

If the case goes to court based on “suspicion” made by AI, then on what basis will the suspicion be measured?  

To gain further insight into this question, we can turn to the case of Sharad Birdhichand Sarda v. State of Maharashtra (1984). In that case, the entire proceedings relied heavily on circumstantial evidence, specifically the necessity of establishing a chain of circumstances linking the purported letters attributed to the deceased with certain key witnesses. It is imperative to recognize that the validation of such a chain hinges on quantifiable circumstances.

The Division Bench of the Bombay High Court presided over the appeal and the Criminal Revision application. The appellant’s appeal succeeded in part concerning his conviction and sentence under Section 120B of the Indian Penal Code, 1860. However, the court upheld his conviction and death sentence under Section 302 of the Code. In a contrasting outcome, the appeal of accused nos. 2 and 3 was allowed in full, leading to their acquittal. Additionally, the Criminal Revision Application was dismissed.

This judicial action by the Division Bench illustrates the intricate nature of the legal proceedings and the nuanced evaluation of evidence. The reliance on circumstantial evidence, particularly in establishing a coherent chain of events, emphasizes the importance of measurable circumstances in determining guilt or innocence. The varied outcomes for different accused parties further highlight the complexity of criminal cases and the necessity for a thorough examination of evidence in the pursuit of justice.

The Division Bench of the Bombay High Court, in the mentioned case, acknowledged the necessity of fulfilling the five golden principles outlined by the Supreme Court, drawing on Hanumant v. The State of Madhya Pradesh (1952), to establish guilt beyond a reasonable doubt. These principles require fully established circumstances, consistency with the hypothesis of guilt, the conclusive nature of the circumstances, the exclusion of alternative hypotheses, and a complete chain of evidence.

Quoting Justice Fazal Ali, “Suspicion, no matter how substantial, cannot substitute legal proof. A moral conviction, however genuine, lacks legal support.” The acquittal stemmed from the failure to satisfy the five golden principles articulated by the Supreme Court, building on Hanumant v. The State of Madhya Pradesh (1952):

1. The circumstances supporting the guilt conclusion must be fully established.

2. The facts should align exclusively with the hypothesis of guilt, excluding all other explanations.

3. The circumstances must conclusively indicate guilt.

4. They should eliminate every conceivable alternative hypothesis, leaving only the one to be proven.

5. A complete chain of evidence must eradicate any reasonable doubt of the accused’s innocence, demonstrating that, in all human probability, the accused committed the act.

An inherent challenge in integrating artificial intelligence (AI) into surveillance cameras lies in the reliance on data to validate findings, such as assessing the likelihood of an individual committing a crime based on prior criminal history. The datasets these AI cameras rely on often contain outdated information, potentially leading to the misidentification of individuals as suspects and raising concerns about the absence of reasonable suspicion in such cases.


When law enforcement agencies focus on individuals with prior criminal records, those individuals and their associates become overrepresented in police records, which in turn skews the data available for training. Barocas and Selbst assert that organizations must conscientiously decide which categories of data to use in their AI program’s feature selection.

Consider a hypothetical scenario in which the government employs AI trained on the last decade’s data to identify repeat offenders in the XX neighborhood. Even though the training data is devoid of sensitive information like race, religion, or sexual orientation, the AI learns that individuals from this region are more prone to criminal activity. While the system draws on ostensibly objective criteria for its predictions, suppose the AI’s forecast nonetheless exhibits racial or religious bias. Innocent individuals from a specific group might suffer if law enforcement acts on this forecast, labeling them as potential criminals in the XX region.
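The proxy-bias mechanism in this hypothetical can be illustrated with a minimal sketch. The records, neighborhood labels, and frequency-based “risk score” below are entirely invented for illustration and do not describe any deployed system: a predictor that never sees a protected attribute can still reproduce the skew in historical arrest data through a correlated feature such as locality.

```python
# Invented illustration: a "risk score" computed only from (neighborhood,
# rearrested) pairs -- no race or religion anywhere -- still reproduces the
# skew baked into historical policing data, because neighborhood acts as a
# proxy feature for the group that was policed more heavily.

# Hypothetical records; heavier past policing of "XX" inflates its arrest
# counts regardless of the true rate of offending there.
history = [
    ("XX", True), ("XX", True), ("XX", True), ("XX", False),
    ("YY", True), ("YY", False), ("YY", False), ("YY", False),
]

def rearrest_rate(records, area):
    """Naive risk score: share of past records from this area marked rearrested."""
    outcomes = [rearrested for a, rearrested in records if a == area]
    return sum(outcomes) / len(outcomes)

# The ostensibly objective feature (locality) carries the historical skew forward:
print(rearrest_rate(history, "XX"))  # 0.75 -- residents flagged as "high risk"
print(rearrest_rate(history, "YY"))  # 0.25
```

The point of the sketch is that removing sensitive attributes from the feature set, as in the scenario above, does not by itself prevent discriminatory outcomes when other features encode the same information.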

This hypothetical scenario squarely violates Section 7 of the proposed Data Protection Bill, 2022, as well as Article 15(1) of the Constitution, owing to the absence of proper consent. Photos and texts are collected without individuals’ consent, underscoring the urgent need to enact and enforce comprehensive data protection legislation.


Ankit Sahni, a multidisciplinary artist and lawyer, used the AI tool RAGHAV to create ‘Suryast’ in 2020. The US Copyright Office (USCO) initially refused copyright registration on the ground that the work lacked the human authorship required for protection. Despite Sahni’s argument that RAGHAV’s contributions were unique, the USCO maintained that the final image was a derivative work primarily authored by RAGHAV.

Sahni’s subsequent attempts to secure registration focused on RAGHAV as assistive software, highlighting human elements in the base image, and asserting the non-derivative nature of the work. However, the USCO rejected these arguments based on legal precedent and copyright office guidance.

Interestingly, while the USCO’s stance aligned with denials of protection for synthetic creations, including the Thaler case, the Indian Copyright Office initially granted registration to ‘Suryast’ in November 2020. This marked Sahni as the first person to receive copyright protection for an AI-generated piece. However, a subsequent withdrawal notice raised questions about the legal status of RAGHAV, showcasing a lack of clarity in India’s approach.

The disparity in global legal interpretations is evident as Canada recognized Sahni’s co-authorship with the AI tool, while the Beijing Internet Court acknowledged AI-generated content for copyright protection based on originality and human oversight. This divergence raises questions about whether non-human AI entities can be considered authors and the necessity of human co-authorship, highlighting the need for a cohesive international legal framework.

In conclusion, these cases illustrate the evolving intersection of AI, legal proceedings, and copyright law, prompting ongoing discussions about the role of AI in decision-making, authorship, and the protection of creative works.


The integration of AI into the judicial system poses the ‘centaur’s dilemma’: maintaining human control over AI while ensuring just and reasonable results. This dilemma reflects the trade-off between the swiftness of AI-driven judgments and the human element of fairness. It is crucial to consider Cesare Beccaria’s principles embedded in constitutional democracies, encompassing due process, equal treatment, fairness, and transparency, as reflected in Articles 14 and 21 of the Indian Constitution. The Supreme Court’s stance in Zahira Habibullah Sheikh and Ors. v. State of Gujarat and Ors. emphasizes the inherent right of every stakeholder to a fair trial, free from bias or prejudice. The use of AI-enabled technologies should therefore be evaluated against these fundamental values.


In conclusion, the integration of ChatGPT and similar AI technologies into legal matters brings about a paradigm shift with significant consequences that must be carefully considered. As demonstrated throughout this discourse, ChatGPT’s introduction into legal workflows offers both promise and challenges. 

Firstly, the evolution of ChatGPT signifies a remarkable advancement in leveraging AI for legal services, from assisting in legal research to aiding in document drafting and even providing insights into complex legal matters. However, alongside its potential benefits, the utilization of ChatGPT raises critical regulatory and ethical concerns. The imperfections inherent in AI tools like ChatGPT underscore the need for robust regulatory frameworks and ethical guidelines to govern their use in legal contexts. 

Moreover, the implications of ChatGPT for the legal industry extend beyond mere efficiency gains. They prompt a reevaluation of traditional legal practices and necessitate adaptation to an emerging technological landscape. Legal professionals are tasked with acquiring new knowledge and skills related to AI platforms while remaining vigilant about ethical considerations and preserving the integrity of their roles.

The case studies presented further highlight the intricate interactions between AI, legal proceedings, and copyright law. They underscore the ongoing debate surrounding AI’s role in decision-making processes, the attribution of authorship, and the protection of creative works. These cases serve as catalysts for ongoing discussions and the development of a cohesive international legal framework to address the complexities of AI integration in legal matters. Additionally, the incorporation of AI into the judicial system must navigate the “centaur’s dilemma,” balancing the efficiency of AI-driven judgments with the fundamental principles of fairness, due process, and equal treatment embedded in constitutional democracies. Ensuring that AI-enabled technologies align with these core values is paramount to upholding the inherent right of every individual to a fair trial, free from bias or prejudice.

In essence, the consequences of using ChatGPT in legal matters are multifaceted, encompassing technological advancements, regulatory challenges, ethical considerations, and implications for legal practices and principles. As we navigate this evolving landscape, it is imperative to approach the integration of AI in legal contexts with caution, foresight, and a steadfast commitment to upholding the principles of justice and fairness. Only through thoughtful deliberation and collaborative efforts can we harness the full potential of AI while mitigating its potential risks and ensuring its responsible use in the legal domain.