Boards must decide who authorises agentic AI – The HinduBusinessLine

Clipped from: https://www.thehindubusinessline.com/opinion/boards-must-decide-who-authorises-agentic-ai/article70635722.ece

Given the speed with which agentic AI functions, it is vital for the board to define authority and assign responsibility

AI: Fixing accountability | Photo Credit: anyaberkut

Artificial intelligence is no longer merely advising executives. Agentic AI systems, deployed at scale, increasingly act on behalf of organisations. Across financial services, travel, infrastructure and digital platforms, advanced AI systems are beginning to execute consequential decisions autonomously. They block transactions, approve or deny permissions, adjust prices, resolve customer disputes and execute trades, often in milliseconds.

No human reviews these decisions in real time. There is no committee vote before a suspicious wire transfer is stopped or a credit application is declined. Oversight, if it occurs, happens after the fact. Yet most corporate governance frameworks still assume that a person remains “in the loop” for high-risk decisions. That assumption is increasingly misaligned with operational reality, and it creates a governance exposure that boards of directors cannot afford to ignore.

Much of the public debate about AI has focused on accuracy, bias and ethics. A more basic question has received less attention: who authorises these systems to act without human approval? Delegated authority is, ultimately, a board responsibility.

Consider an example. A fraud detection system must decide within milliseconds whether to block a $50,000 wire transfer flagged as suspicious. If the system waits for human review, the money is gone. So it acts. If it blocks a legitimate payment, a business may miss payroll. If it lets a fraudulent transfer proceed, a customer suffers a loss. Either outcome carries financial, legal and reputational consequences. Where agentic systems are deployed, thousands of such decisions occur daily. The speed that makes these systems valuable also makes contemporaneous human supervision impossible.

Delegation factor

This is not merely automation. It is delegated authority operating at machine speed. And corporate boards understand delegation well. Directors routinely approve credit limits for lending officers, authorise trading mandates for investment desks and set capital allocation thresholds for senior executives. In each case, authority is explicit and bounded.

Accountability is clear. A chief risk officer is responsible for loans issued within approved limits, even if she never reviews them individually. A head of trading is accountable for algorithmic strategies operating within a defined mandate. Responsibility attaches to the delegation.

Agentic systems represent a similar transfer of decision rights, except the delegate is software. When deployed, these systems are effectively exercising authority on behalf of the organisation. Yet in many companies, that delegation has never been formally acknowledged at the board level. Autonomy often emerges incrementally, through vendor platforms, system integrations or efficiency initiatives. By the time directors fully understand the extent of automation, it is already embedded in core processes. Authority exists in code, but not in board minutes.

The legal treatment of autonomous vehicles offers a useful comparison. When a self-driving car causes harm, prosecutors do not automatically charge the board of directors. Liability typically attaches to the manufacturer, operator or insurer. Individual criminal exposure arises only when there is evidence of personal wrongdoing or gross negligence. That allocation reflects a simple principle: the authority to deploy autonomous driving is explicitly regulated, documented and insured. Delegation is formal. The debate is not whether machines may drive, but under what authorised conditions they may do so.

Corporate AI deployments rarely receive the same governance treatment. If a lending algorithm violates fair-lending rules, a pricing model discriminates or a trading system contributes to market instability, regulators will ask a basic question: Who authorised the system to act? Boards that cannot answer with documented limits and clearly assigned accountability will face scrutiny grounded not in technical error but in governance failure. The issue will not be whether the model was perfect. It will be whether the board understood and formally approved the authority it had effectively delegated.

Addressing this risk does not require directors to become experts in machine learning. It requires them to perform a familiar function: define authority and assign responsibility.

At a minimum, boards should require: a clear inventory of decisions currently being made autonomously; explicit approval of which categories of decisions may be delegated to agentic systems; defined financial and operational limits on that autonomy; and a named executive accountable for outcomes within those limits. Autonomy should never be accidental. It should be authorised.

The writer is a tech entrepreneur and former Managing Director of CGI India

Published on February 16, 2026
