When Algorithms Break the Law: Who’s Accountable?

Introduction: The Rise of Algorithmic Power

In the 21st century, algorithms have quietly assumed roles once reserved for human judgment: approving loans, screening job candidates, recommending criminal sentences, and curating news feeds. Artificial Intelligence (AI) and Machine Learning (ML) systems now shape decisions that affect millions, often with limited transparency or oversight.

While the efficiency and scale of algorithmic systems are undeniable, they also raise an unsettling question: when an algorithm causes harm, who is responsible?

This question lies at the intersection of law, ethics, and technology — where established legal doctrines meet emerging forms of automated agency. Unlike human actors, algorithms do not possess intent, emotion, or moral reasoning. Yet, their outputs can produce outcomes that are discriminatory, invasive, or even unlawful.

From biased hiring algorithms to faulty facial recognition leading to wrongful arrests, the consequences of algorithmic decisions are no longer theoretical. They demand urgent legal and policy responses that reimagine accountability in the age of autonomy.

Understanding Algorithmic Decision-Making

An algorithm is, in essence, a set of coded instructions that process input data to generate an output. In AI systems, these algorithms can learn from large datasets — identifying patterns, predicting outcomes, and improving performance over time.

The key distinction between traditional software and AI-based systems is opacity. Many modern algorithms operate as “black boxes” — even their developers may not fully understand how specific outputs are derived. This opacity complicates accountability because errors, biases, or harms are difficult to trace back to individual human actions.

Consider a few examples:

  • A facial recognition system misidentifies a person, leading to wrongful arrest — as documented in multiple cases in the U.S. and U.K.
  • An AI recruitment tool filters out female candidates because its training data reflects past gender bias in hiring.
  • A predictive policing model disproportionately flags marginalized communities as “high risk” due to biased historical crime data.

In each instance, the harm originates not from a malicious human actor, but from complex data interactions encoded in digital systems. The legal system, however, is built around the assumption of human agency — and therein lies the problem.
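
To make the hiring example above concrete, the following minimal Python sketch (with invented toy data and feature names, not drawn from any real system) shows how a screener that simply learns historical hire rates ends up reproducing the bias already present in its training data.

```python
# A minimal, hypothetical sketch (toy data, invented feature names) of how
# historical bias in training data can be reproduced by an automated screener.
# Illustrative only; not a depiction of any real recruitment system.

from collections import defaultdict

# Hypothetical historical hiring records. The "hired" labels reflect a biased
# past practice: candidates with a career gap were rarely hired.
historical_records = [
    {"experience": 5, "career_gap": False, "hired": True},
    {"experience": 4, "career_gap": False, "hired": True},
    {"experience": 6, "career_gap": True,  "hired": False},
    {"experience": 7, "career_gap": True,  "hired": False},
    {"experience": 3, "career_gap": False, "hired": True},
    {"experience": 8, "career_gap": True,  "hired": False},
]

def learn_hire_rates(records):
    """'Train' by measuring the historical hire rate for each value of career_gap."""
    counts = defaultdict(lambda: [0, 0])  # value -> [hired, total]
    for r in records:
        counts[r["career_gap"]][0] += int(r["hired"])
        counts[r["career_gap"]][1] += 1
    return {value: hired / total for value, (hired, total) in counts.items()}

def screen(candidate, hire_rates, threshold=0.5):
    """Shortlist a candidate only if similar candidates were hired historically."""
    return hire_rates[candidate["career_gap"]] >= threshold

hire_rates = learn_hire_rates(historical_records)

# Two equally experienced candidates: the one with a career gap is filtered out,
# not because of ability, but because the past data encoded a biased pattern.
print(screen({"experience": 6, "career_gap": False}, hire_rates))  # True  -> shortlisted
print(screen({"experience": 6, "career_gap": True},  hire_rates))  # False -> filtered out
```

Nothing in the sketch mentions gender, and no engineer writes a discriminatory rule; the disparity enters entirely through the historical labels, which is exactly why such harms are hard to trace back to a single human act.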

The Legal Problem: When Code Causes Harm

When a product or service causes injury, traditional legal frameworks — such as tort law or product liability — determine who bears responsibility. But algorithmic systems challenge these foundations.

Who is the liable party when harm is caused by an autonomous system?

  • The developer, who designed the code?
  • The company that deployed it?
  • The data providers, whose information shaped its biases?
  • Or the user, who relied on its outputs?

This chain of accountability is often diffuse and obscured. Algorithms can evolve through self-learning, making their outputs unpredictable even to their creators. Legal doctrines like mens rea (intent) and causation are strained when applied to non-human actors.

Moreover, algorithms are not “persons” under law. They lack legal personality, moral responsibility, and intent — yet their autonomous actions can have real-world consequences akin to human conduct.

This accountability gap risks creating a “responsibility vacuum,” where victims of algorithmic harm face uncertainty about legal recourse, and corporations evade liability by blaming the system’s complexity or autonomy.

Existing Legal Frameworks and Their Limits

1. Indian Context

India’s primary cyber legislation, the Information Technology Act, 2000, was never designed for algorithmic governance. It addresses data security, electronic contracts, and cybercrime but remains silent on AI liability, algorithmic bias, or automated decision-making.

The Digital Personal Data Protection Act, 2023 (DPDP), introduces principles of consent, purpose limitation, and lawful processing — but stops short of regulating autonomous algorithmic actions.

NITI Aayog’s “Responsible AI for All” framework recognizes transparency, accountability, and inclusivity as key pillars, yet these remain policy aspirations rather than enforceable obligations. There is no binding requirement for algorithmic audits, human oversight, or explainability mechanisms.

2. Global Developments

Globally, regulators are beginning to respond. The European Union’s AI Act establishes a risk-based framework, classifying AI systems by level of risk (unacceptable, high, limited, or minimal), with compliance obligations that scale accordingly. It also mandates transparency, documentation, and human oversight for high-risk applications.

The EU’s General Data Protection Regulation (GDPR) already grants individuals the right not to be subject to decisions based solely on automated processing (Article 22), together with the right to meaningful information about the logic involved in such processing.

In contrast, the United States follows a sectoral approach — regulating AI through existing consumer protection, civil rights, or competition laws. The Blueprint for an AI Bill of Rights (2022) outlines principles like algorithmic discrimination protection and human alternatives but lacks statutory force.

These frameworks mark progress, but none fully resolve the accountability dilemma. They rely on corporate self-assessment, and enforcement mechanisms remain limited. The law still lags behind the speed of technological innovation.

Models of Accountability: Who Should Bear the Burden?

Scholars and policymakers have proposed several competing models for allocating responsibility when algorithms cause harm.

1. Developer Accountability

This model attributes responsibility to software engineers or designers who code the system. If an algorithm’s architecture or training data introduces bias, the developer may be liable for negligence.
Challenge: Developers often work within large teams or under corporate directives, limiting their individual agency. Additionally, the complexity of AI models makes it nearly impossible to foresee every potential bias or malfunction.

2. Corporate Accountability

This is the most pragmatic and legally coherent approach. The company deploying the algorithm — as a “legal person” — should bear primary liability for harms caused by its systems. This aligns with traditional doctrines of vicarious liability and product liability.

Corporations are best positioned to implement risk assessments, audit mechanisms, and compensation frameworks. They also profit from algorithmic deployment and should therefore shoulder the corresponding risks.

3. Shared Responsibility

A multi-tiered framework distributes accountability across the AI lifecycle — from data collection and model design to deployment and post-market monitoring.

  • Developers ensure ethical design.
  • Companies conduct transparency and fairness audits.
  • Regulators establish oversight mechanisms.

This collaborative model echoes the “three lines of defense” approach in governance — balancing innovation with safeguards.

4. Algorithmic Personhood (Speculative Model)

Some scholars propose granting limited legal personhood to AI systems, akin to corporations or autonomous entities, allowing direct attribution of rights and duties.
However, this raises philosophical and ethical challenges: Can an algorithm possess intent or understanding? Who represents it in court? While conceptually intriguing, algorithmic personhood risks diluting human accountability rather than strengthening it.

Ethical and Governance Perspectives

While law grapples with liability, ethics offers a complementary lens — emphasizing responsibility before regulation.

Global frameworks such as the OECD Principles on AI and the UNESCO Recommendation on the Ethics of Artificial Intelligence emphasize the values of transparency, fairness, and accountability. These principles converge around four key pillars:

  • Transparency: Algorithms should be explainable and auditable.
  • Fairness: Systems must avoid bias and discrimination.
  • Accountability: Clear lines of responsibility must exist for outcomes.
  • Explainability: Decisions must be interpretable by humans.

India’s “Responsible AI for All” echoes these values, advocating a human-centric AI approach. However, without statutory backing, these principles remain voluntary.

Ethical AI governance demands structural changes — not just voluntary codes. Organizations should embed “ethics by design” — integrating fairness and explainability at every development stage. Regular algorithmic audits, impact assessments, and oversight boards can strengthen accountability and public trust.
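
What a “regular algorithmic audit” might look like in practice can be illustrated with a small sketch. The following Python fragment (invented group labels, log data, and a 0.8 threshold loosely echoing the US “four-fifths” heuristic; all assumptions for illustration) compares selection rates across groups and flags results that warrant human review. It is a simplified example of one audit metric, not a complete fairness methodology.

```python
# A minimal, hypothetical sketch of one piece of an algorithmic audit: comparing
# selection rates across groups and flagging possible disparate impact. Group
# labels, data, and the 0.8 threshold are assumptions for illustration only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: sel / total for group, (sel, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes logged from an automated screening system.
audit_log = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(audit_log)
ratio = disparate_impact_ratio(rates)
print(rates)                                  # roughly {'group_a': 0.67, 'group_b': 0.33}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                               # audit flag; threshold is an assumption
    print("Potential disparate impact: refer for human review and documentation.")
```

A real audit would of course span many more metrics, the provenance of the training data, and the kind of documentation and oversight obligations outlined in the reform proposals below.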

The Way Forward: Building an Accountable Algorithmic Future

To close the accountability gap, India and other digital democracies must undertake coordinated legal and institutional reforms.

1. Legal Reform

India requires a dedicated AI Liability Framework or amendments to the IT Act that address algorithmic harms explicitly. This should include:

  • Mandatory human oversight for high-risk AI systems.
  • Disclosure obligations for automated decision-making.
  • Compensation mechanisms for victims of algorithmic harm.
  • Statutory recognition of algorithmic audits and risk assessments.

Parliament could also consider model provisions from the EU AI Act and OECD recommendations, adapting them to India’s socio-legal realities.

2. Institutional Oversight

Creating an Algorithmic Accountability Authority (AAA) — akin to the Data Protection Board under the DPDP Act — could ensure compliance, conduct investigations, and issue penalties. Such a body could also coordinate with sectoral regulators (e.g., RBI, SEBI, TRAI) to address algorithmic risks in finance, telecom, and e-commerce.

3. Industry and Academic Collaboration

Accountability cannot be achieved by law alone. Multi-stakeholder participation — academia, industry, civil society, and government — is essential. Research centers like CRGCL can play a crucial role in:

  • Conducting empirical studies on AI bias and governance.
  • Proposing legal models for algorithmic liability.
  • Facilitating dialogue between technologists and lawmakers.

Collaborative policy innovation can ensure that India’s AI ecosystem grows with integrity and fairness.

4. Cultural Change in Tech Development

Ultimately, accountability must become a cultural value in technology design — not merely a compliance checkbox.
Developers should view themselves as custodians of ethical responsibility, not just coders of functionality.
Corporate boards must prioritize algorithmic transparency as part of ESG (Environmental, Social, and Governance) reporting.

Conclusion

As algorithms increasingly shape our social, economic, and political realities, the question of accountability becomes central to digital governance. Law must evolve from regulating technology to ensuring responsibility within technological ecosystems.

Accountability is not a barrier to innovation — it is the foundation of trust in a digital society. Without it, technological progress risks eroding the very rights and values it seeks to advance.

The path forward requires more than new statutes; it demands a shift in perspective — from “Who can we blame?” to “How can we build systems that answer for their actions?”

In a world where machines can act, judge, and decide — justice demands that we also teach them to answer.

Suggested Reading

  • EU Artificial Intelligence Act (2024)
  • NITI Aayog: Responsible AI for All (2021)
  • OECD Principles on AI (2019)
  • UNESCO Recommendation on the Ethics of Artificial Intelligence (2021)
  • Blueprint for an AI Bill of Rights (White House, 2022)