Humans in the Loop
Rebecca Crootof, Margot E. Kaminski, W. Nicholson Price II. 2023.
The paper makes four contributions. First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop.
Second, we identify "the MABA-MABA trap," which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decisionmaking process. Regardless of whether the law governing these systems is old or new, inadvertent or intentional, it rarely accounts for the fact that human-machine systems are more than the sum of their parts: they raise their own problems and require their own distinct regulatory interventions.
But how to regulate for success? Our third contribution is to highlight the panoply of roles humans might be expected to play, to assist regulators in understanding and choosing among the options.
For our fourth contribution, we draw on legal case studies and synthesize lessons from human factors engineering to suggest regulatory alternatives to the MABA-MABA approach. Namely, rather than carelessly placing a human in the loop, policymakers should regulate the human-in-the-loop system as a whole.
The "MABA-MABA trap" refers to a common governance error with human-in-the-loop systems. It's definition is: allocating tasks based on what "Men Are Better At" versus what "Machines Are Better At". The trap lies in the seductive but false assumption that simply inserting a human into an automated process will combine the best of both worlds, creating a superior hybrid system.
This "slap a human in it" approach is dangerous because it ignores a critical fact: human-machine systems are more complicated than the sum of their parts. Instead of marrying the strengths of each, these hybrid systems can actually exacerbate the worst of each while introducing entirely new sources of error.
- For example, a human might be asked to take over from an autonomous vehicle moments before a crash, a situation that sets the human up for failure and blame.
The MABA-MABA trap distracts policymakers from more effective—though more complex—regulation that addresses the hybrid system as a whole, accounting for issues like interface design, bungled handoffs, and inadequate training.
Nine Potential Roles for a Human in the Loop
To regulate systems effectively, the paper argues that policymakers must first be clear about why a human is being included. The authors identify nine potential roles, which are not mutually exclusive:
- Corrective: The human is there to improve the system's performance and accuracy. This includes correcting factual errors, tailoring general recommendations to specific situations, and counteracting algorithmic bias.
- Resilience: The human acts as a backstop or fail-safe, capable of taking over or shutting down the system during a malfunction or emergency.
- Justificatory: The human's purpose is to provide reasons for a decision, which helps make the outcome more palatable and legitimate to those affected by it.
- Dignitary: In this role, the human presence is meant to protect the dignity of the person affected by the decision, ensuring they are not treated as a mere object or "data shadow" by a machine.
- Accountability: The human is included to ensure someone can be held legally liable or morally responsible for the system's outcomes. Cynically, this can mean the human serves as a "liability sponge" or "moral crumple zone," designed to absorb blame and protect the system's creators.
- Stand-In Roles: Here, the human serves as abstract proof that something has been done to address the risks of automation, whether or not they have any real power or effect.
- Friction: The human is intentionally included to slow down the pace of an automated system, which can be a benefit when algorithmic speed is a source of harm.
- "Warm Body": This role is about job protection, where a human is kept in the loop primarily to preserve their employment rather than for a specific functional purpose.
- Interface: The human acts as a go-between, helping users interact with a complex algorithmic system by translating inputs and explaining outputs.
Regulating the Hybrid System as a Whole
Because hybrid systems are distinct systems, success depends on interfaces, handoffs, training, and organizational context, not merely on inserting a person. The paper's recommendations are to:
- Be explicit about which role(s) the human is expected to play and why.
- Consider the legal, technical, organizational, and societal context.
- Regulate the whole hybrid system, drawing on human factors engineering.
Three safety-critical exemplars (railroads, nuclear reactors, and medical devices) show how detailed interface rules, training requirements, resilience planning, and monitoring can be embedded in regulation. Key design lessons include:
- Minimizing operator information load.
- Preventing over-reliance and skill fade.
- Enabling smooth transfers of control.
- Building safe-failure modes.
- Logging for post-incident analysis.
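To make the design lessons above concrete, here is a minimal, purely illustrative Python sketch that is not from the paper: a hypothetical handoff manager for a human-in-the-loop system that caps operator information load, logs every transfer of control for post-incident analysis, and falls back to a safe state when the human does not take over in time. All names (HandoffManager, Alert, request_takeover) are assumptions made for this example.

```python
"""Illustrative sketch only: a hypothetical control-handoff manager, loosely
reflecting the human-factors lessons above (limit information load, smooth
transfer of control, safe-failure mode, and logging)."""

import logging
import time
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("handoff")


@dataclass
class Alert:
    message: str
    priority: int  # higher = more urgent


@dataclass
class HandoffManager:
    takeover_deadline_s: float = 10.0   # time the human has to respond
    max_alerts_shown: int = 3           # cap information load on the operator
    audit_trail: list = field(default_factory=list)

    def _record(self, event: str) -> None:
        # Keep a timestamped record for post-incident analysis.
        self.audit_trail.append((time.time(), event))
        log.info("%s", event)

    def request_takeover(self, alerts: list[Alert], human_acknowledged) -> str:
        # Show only the few most urgent alerts rather than everything at once.
        visible = sorted(alerts, key=lambda a: -a.priority)[: self.max_alerts_shown]
        self._record(f"takeover requested; alerts shown: {[a.message for a in visible]}")

        deadline = time.monotonic() + self.takeover_deadline_s
        while time.monotonic() < deadline:
            if human_acknowledged():   # e.g. button press or wheel torque
                self._record("human confirmed takeover; control transferred")
                return "human_in_control"
            time.sleep(0.1)

        # Safe-failure mode: never dump control on an unready human.
        self._record("no acknowledgement; entering minimal-risk fallback")
        return "minimal_risk_fallback"
```

As a usage example, `HandoffManager(takeover_deadline_s=1.0).request_takeover([Alert("obstacle ahead", 9)], human_acknowledged=lambda: False)` returns `"minimal_risk_fallback"` after one second and leaves an audit trail of the failed handoff, illustrating how a regulator-mandated fallback and log might behave; it is a sketch of the general idea, not a prescription from the paper.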