AI REGULATION & ITS LEGAL LIABILITY
This article has been written by Jiya Javrani.
Jiya Javrani is a 5th-year B.A.LL.B. student at Dr. D.Y. Patil Law College, Pune. She is a certified PoSH and POCSO trainer and a mediator, with a keen academic interest in criminal law and emerging legal issues. Her current research focuses on AI regulation and its legal liability, and she is eager to contribute to the growing discourse in this field, with an interest in publishing further research papers on contemporary legal topics.
ABSTRACT
The rapid integration of artificial intelligence (AI) into everyday systems has introduced significant legal and ethical challenges. As AI systems gain greater autonomy, issues concerning legal accountability, ownership of generated content, and the transparency of decision-making processes remain inadequately addressed by current legal frameworks. This article explores these developing issues from a legal standpoint, assessing the implications of AI-related harm, the complexities of attributing authorship to works produced by AI, and the opacity of algorithmic decision-making. By reviewing international approaches and policy suggestions, it underscores the urgent need for adaptable legal frameworks that reflect the evolving nature of AI technologies. The research aims to stimulate conversation regarding regulations that strike a balance between accountability and fostering innovation.
INTRODUCTION
Artificial intelligence ought not to be regarded as a substitute for human intelligence; rather, it functions as a means to amplify human creativity and innovative thought. At the same time, one of the most notable dangers associated with artificial intelligence is that people may hastily believe they understand it completely. These two statements sit in tension: the first presents AI as a tool that fosters and enriches human intellect, while the second warns that it may be deployed in ways that stifle our creativity and pose a threat to us. Our intelligence is what characterizes our humanity, and in this sense AI acts as an extension of that essential quality. Yet it is also said that predicting the future is not a form of magic; it is artificial intelligence. We are therefore left uncertain whether the regulation of artificial intelligence is advantageous or harmful.
Although there is no uniformly agreed-upon definition, AI is generally understood to refer to "machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention."
Even though many people are not well-acquainted with it, AI is a technology that is revolutionizing all aspects of life. It serves as a versatile tool that allows individuals to reconsider how they assimilate information, evaluate data, and apply the insights gained to enhance their decision-making processes. This paper examines the legal uncertainties surrounding AI, focusing on liability for harm, ownership of content, and the need for transparency—issues that require immediate legal consideration as AI increasingly influences human behavior.
Legal Liability: Who is Responsible When AI Causes Harm?
LIABILITY FOR DAMAGE
At a time when artificial intelligence is beginning to take the place of human judgment, have we truly considered the implications of an AI surveillance system failing to recognize a face, or mistakenly flagging innocent conduct as inappropriate behavior? What happens if an AI financial analyst makes an error on a quantitative trading platform? What if delivery robots or zip-line drones make a mistake? What if medical AI, which supports doctors in diagnostics, misdiagnoses conditions such as cancer, analyzes X-rays incorrectly, or inaccurately predicts patient outcomes? We have not thoroughly contemplated the consequences of these potential errors, nor who would be held accountable when AI causes harm. Although AI is a product of human creation, the question remains: who bears full responsibility and accountability for the resulting damage?
Imagine that a self-driving car is involved in an accident that causes you personal injury. What should happen in that scenario? Who is accountable for compensating you for your losses? Whom should we hold responsible? The fact that AI-driven products learn on their own complicates the process of assigning liability to any specific individual. Numerous parties are involved in the creation of an AI system: the designer, the manufacturer, the programmer, the developer, the user, and the AI itself. As a result, it remains uncertain where the lines of liability are drawn. Current thinking suggests that answers to this dilemma may be found within existing laws on contract, consumer protection, and tort.
The European Commission released a report offering various recommendations for addressing liability concerning artificial intelligence. One of these recommendations advocates strict liability, which assigns responsibility for certain actions without requiring proof of fault on the part of the accused. Strict liability would rest with the person in charge of the associated risk: whoever controls the AI and the factors that gave rise to the risk should be held accountable for the resulting damage.
For instance, a victim of a car accident typically has a strict-liability claim against the owner of the car (i.e., the person who takes out motor vehicle liability insurance) and a fault-based liability claim against the driver, both under national civil law, as well as a claim under the Product Liability Directive against the producer if the car was defective.
The same would apply to operators, manufacturers, designers, developers, and programmers. Suppose two or more parties contribute different components of the AI, and the victim demonstrates that the AI caused the harm but cannot identify the specific element responsible; in that case, all parties will be jointly and severally liable to the victim. Furthermore, the report concludes that there is no need to grant AI a distinct legal personality: disputes can be resolved by assigning liability to the individuals and entities behind the AI.
AI vs Human Content: Issues of Attribution and Authenticity
OWNERSHIP OF CONTENT
Under U.S. law, copyrightable works must be the product of human authorship, meaning that material generated autonomously by AI cannot be copyrighted. In other words, if you produce an item entirely through an AI tool, that creation is not yours; it falls into the public domain. According to the U.S. Copyright Office, if AI generates content for you, your brand does not own that work. But what constitutes sufficient human involvement to secure copyright? Where a work includes AI-generated content, the Office will evaluate whether the AI's contribution is the result of mechanical reproduction or instead reflects the author's own original mental conception, to which the author gave visible form. The determination depends on the specific circumstances, particularly how the AI tool functions and how it was used to produce the final piece, and must be approached as a case-by-case inquiry.
The Office confirms that using AI to assist in the creative process, or including AI-generated material in a larger human-authored work, does not bar copyrightability. The Office is closely monitoring factual and legal developments concerning AI and copyright, and may offer additional guidance in the future on registration and other copyright matters linked to this technology.
As an industry, we need to:
a) Assess whether the specific tool we are utilizing genuinely qualifies as true AI based on a legitimate learning model.
b) Understand what the learning model actually entails and how it can potentially become our own learning framework.
Ultimately, we must emphasize that it is your narrative, and the way you convey it, that matters.
Comparative analysis: Transparency in AI
Black box AI describes an artificial intelligence system whose internal workings remain hidden from its users. Users can observe the system's inputs and outputs, but they cannot scrutinize the underlying processes that generate those outputs. Such systems are built from large datasets using intricate deep learning techniques, and even their developers do not completely grasp how they function. While these sophisticated black boxes can yield impressive results, their lack of transparency can cast doubt on their outputs: users find it difficult to verify a model's outputs when they are unaware of its internal mechanisms.
Furthermore, opaque models can hide vulnerabilities, making it challenging to detect risks related to cybersecurity and privacy. Some AI creators and programmers deliberately keep the inner workings of their technologies undisclosed before public launch, frequently to safeguard intellectual property: the developers understand the system's functionality completely, yet keep both the source code and the decision-making procedures confidential.
White box AI stands in contrast to black box AI. It denotes an AI system defined by transparent and accessible processes: users can understand how the AI processes information, analyzes it, and reaches its conclusions. Transparent models build confidence, allow outcomes to be validated, and make it easier to correct errors and improve effectiveness. However, it is difficult to convert every AI system into a white box. Explainable AI offers a middle path: it does not provide direct insight into the model's internals, but delivers an explanation, produced alongside the model, of its behavior.
A “black box” system is difficult to understand, whereas explainable AI and interpretable machine learning allow organizations to access the decision-making process of the technology and make necessary changes. Explainable AI also enhances the user experience: clarifying how decisions are reached fosters confidence in the precision and dependability of the results.
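To make this distinction concrete, the following is a minimal illustrative sketch in Python (assuming the scikit-learn library; the model and data are synthetic and hypothetical, not drawn from any system discussed above). It trains an opaque ensemble model of the kind commonly treated as a black box, then applies permutation importance, one common post-hoc explainability technique, to estimate how strongly each input feature influences the model's predictions.

# A minimal sketch of black-box prediction versus post-hoc explanation,
# assuming scikit-learn is installed; all data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic records standing in for, e.g., loan or diagnostic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest behaves as a "black box": it returns predictions,
# but its internal decision process is not directly readable.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("prediction:", model.predict(X_test[:1]))

# Post-hoc explainability: permutation importance measures how much the
# model's accuracy drops when each feature is shuffled, summarizing the
# model's behavior without exposing its internals.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")

Importance scores of this kind do not open the black box itself; they only summarize its observable behavior, which is precisely the compromise that explainable AI offers to users and regulators.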
COMPARATIVE LEGAL FRAMEWORKS:
EUROPEAN UNION
LEGAL FRAMEWORK: EU Artificial Intelligence Act (AI Act)
POSITION ON AI LIABILITY: The EU AI Act, enacted in 2024, is the first comprehensive regulation of artificial intelligence in the world. While it does not explicitly establish a new system of legal accountability, it introduces a risk-based classification of AI systems (unacceptable, high, limited, and minimal risk), imposing more stringent requirements on those identified as high risk. The obligations placed on AI developers and users, such as ensuring transparency, maintaining data quality, and providing human oversight, indirectly shape legal accountability. Noncompliance may lead to penalties or civil liability under the national laws of Member States.
USA
LEGAL FRAMEWORK: Sectoral approach (no unified law)
POSITION ON AI LIABILITY: The United States lacks a comprehensive federal regulation for AI, instead utilizing a sector-oriented regulatory framework, such as the FDA for the healthcare sector and the SEC for finance. Legal responsibility for damages related to AI is addressed under existing tort and consumer protection laws, resulting in a disjointed and case-by-case system.
The 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence aims to enhance federal oversight by promoting:
- Privacy protections through the implementation of privacy-preserving technologies and explicit guidelines for the application of federal data;
- Equity and civil rights by setting best practices in housing, federal programs, and the criminal justice system to reduce algorithmic biases;
- Safeguards for consumers and patients, especially in the healthcare and education fields;
- Assistance for workers, including principles to mitigate job loss and a report on AI’s effects on the labor market;
- Encouraging innovation and competition by advancing research, creating equitable AI ecosystems, and facilitating skilled immigration in areas vital for AI progress.
INDIA
LEGAL FRAMEWORK: Information Technology Act, 2000 and Digital Personal Data Protection Act, 2023
POSITION ON AI LIABILITY: Currently, India does not have a dedicated legal framework for artificial intelligence and instead relies on general laws like the IT Act, 2000, and the DPDP Act, 2023, which offer limited provisions mainly focused on data protection and cybersecurity.
These laws fail to specifically address the accountability of AI systems or their creators, leading to a lack of regulations to tackle the harm that AI may cause. Recognizing this gap, NITI Aayog published ethical guidelines for ‘Responsible AI’ in 2021, advocating for principles such as transparency and accountability. However, these guidelines do not carry legal force, and India remains in the early stages of developing thorough regulations for AI.
CHINA
LEGAL FRAMEWORK: Regulations on Deep Synthesis Technology (2023), Generative AI Regulation (2023), Personal Information Protection Law (2021), and Cybersecurity Law (2017)
POSITION ON AI LIABILITY: China has developed one of the most thorough and enforceable regulatory systems for AI globally, utilizing a state-led, command-and-control strategy. It is among the pioneers in adopting binding laws specifically designed to regulate AI technologies. Significant regulations include the Regulations on Deep Synthesis Technology (2023), which require AI-generated content to be labeled and hold platforms responsible for the misuse or dissemination of false information, and the Generative AI Regulation (2023), which establishes standards for transparency, safety, and supervision of content for AI tools.
These regulations are supported by wider legislation such as the Personal Information Protection Law (2021) and the Cybersecurity Law (2017), which outline principles for data privacy and security in AI applications. Although China does not grant a separate legal status to AI systems, it imposes strict liability on developers, providers, and platforms engaged in the creation and management of AI technologies. The regulatory framework in China prioritizes compliance, risk management, and public welfare, positioning it as one of the most proactive in the international arena.
CANADA
LEGAL FRAMEWORK: Artificial Intelligence and Data Act (AIDA) – Proposed
POSITION ON AI LIABILITY: In 2022, Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 to regulate AI systems that significantly affect society. Although not yet in force, AIDA seeks to create transparency and safety obligations for developers and users, along with regulatory oversight and penalties for non-compliance. While it does not specifically address AI liability, it lays the groundwork for legal accountability by requiring due diligence and risk-management practices. The Privacy Commissioner of Canada has emphasized that regulating AI is expected to be a key legislative priority, both to protect citizens and to encourage a thriving digital economy.
CONCLUSION
The growing capabilities of artificial intelligence require a reassessment of traditional legal concepts. As machines operate with reduced human intervention, the legal system must shift from a reactive approach to a proactive one. Future regulatory frameworks should focus on clearly defining accountability, enhancing human oversight, and encouraging the development of transparent systems. By implementing these measures, the law can act not only as a response to technological advancements but also as a forward-thinking structure that supports ethical and responsible AI development.
REFERENCES:
- Artificial Intelligence and Legal Liability, https://arxiv.org/pdf/1802.07782
- Black Box AI, IBM, https://www.ibm.com/think/topics/black-box-ai
- Explainable AI, IBM, https://www.ibm.com/think/topics/explainable-ai
- Journal of Lifestyle and SDGs Review: Comparative Analysis of Laws in AI
- United States Approach to Artificial Intelligence, European Parliamentary Research Service, https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/EPRS_ATA(2024)757605_EN.pdf
- EU Artificial Intelligence Act (AI Act) and European Parliament Reports
- Information Technology Act, 2000 and Digital Personal Data Protection Act, 2023
- Artificial Intelligence and Data Act (AIDA), proposed
- Regulations on Deep Synthesis Technology (2023), Generative AI Regulation (2023), Personal Information Protection Law (2021), and Cybersecurity Law (2017)