Christopher Noessel
Review
2017 was an unusual time to write a book on agentive technology, given the explosion of opportunities that natural language processing and generative AI have brought since. Nevertheless, the book's greatest strength lies in its meta-analysis of historical papers on agentic systems. The author's thesis that tools evolve to become more agentive over time proves particularly relevant now that we stand at the threshold of agents capable of performing most knowledge work.
Key Takeaways
The 20% that gave me 80% of the value.
Agentive technology represents an evolution from simple tools to sophisticated systems that act on our behalf, merging sensing capabilities with physical actions. Early devices, such as the thermostat, evolved from manually controlled instruments to smart agents that learn from user behaviour and environmental cues. This transformation underscores how technology shifts from merely reducing physical work to also handling complex information tasks autonomously, thereby supporting human goals with minimal intervention.
At the core of this evolution is the recognition that agentive systems are more than just automated tools. They continuously monitor data streams and take action when necessary, blending narrow artificial intelligence with human oversight. Users engage with these systems by configuring their goals, preferences, and permissions, then monitoring their performance through ambient feedback and subtle notifications. Even as these agents operate autonomously, they remain designed to facilitate smooth handoffs and allow human intervention when unexpected situations arise, preserving the necessary balance between automated efficiency and human judgment.
Designers of agentive technology must consider both the technical and human factors involved. Establishing clear communication about what the system can and cannot do is essential, as is creating interfaces that allow for behaviour tuning and transparent feedback. Systems must be capable of handling exceptions—whether by alerting users to resource constraints or by facilitating seamless transitions when manual control is required. In doing so, designers help maintain trust and ensure that users remain comfortable with the level of autonomy delegated to the machine.
Evaluating these systems calls for a blend of traditional usability assessments and new heuristics that capture the unique dynamics of agentive interaction. It is important to measure how accurately the system responds to triggers, how confidently users can rely on it, and how well it supports overall task performance without eroding essential human skills. This evaluation process is critical not only for refining performance but also for addressing the broader ethical and societal implications.
The rise of agentive technology brings with it significant ethical and societal challenges. Embedded biases, privacy concerns, and accountability issues require careful attention, as autonomous systems can affect decision-making processes in unforeseen ways. Moreover, over-reliance on these systems risks diminishing human skills, potentially leading to stratified service quality and increased vulnerabilities if the systems fail. As technology evolves, designers and organisations must advocate for transparent, ethical practices that keep human interests at the forefront.
Looking ahead, the future of agentive technology lies in its potential to act as a true partner in achieving human goals. This requires a shift in design practices—from creating tools for manual task execution to developing systems that empower users, encourage skill retention, and adapt fluidly to changing conditions. The challenge for practitioners will be to integrate new frameworks and collaborative models that support this transition while ensuring that the benefits of agentive technology are realised in a responsible and sustainable manner.
Historical frameworks highlight key considerations:
- Fitts (1951) compared human and machine capabilities
- Bainbridge (1983) warned removing humans degrades troubleshooting ability
- Hoffman-Woods (2002) argued humans keep machines context-aligned
- Bradshaw noted autonomy emerges from human-machine interaction
- Parasuraman et al. (2000) described a manual-to-autonomous spectrum
- Johnson et al. defined virtues like clarity and humility for human-machine teamwork
- Proactive Resource Management (2004) described six modes, ranging from preparation to finalisation
Deep Summary
Longer form notes, typically condensed, reworded and de-duplicated.
Part 1: Seeing
Technology isn't a collection of tools and gadgets; instead, think of it as an ever-evolving human problem-solving force.
The thermostat evolved from tool to agent. Invented centuries ago, it has gone through a number of evolutions. Expect almost every device you use to do the same: to move from tool to agent.
- A physical column of mercury rises with the temperature, closing off the heat source, then opens it again as it shrinks and cools, automating temperature control with a feedback loop.
- Mercury was replaced by electric circuits, which were cleaner and allowed more programmability.
- Nest thermostats learn preferences from adjustments you make, but also other inputs like your location and calendar.
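A minimal sketch, not from the book, of that tool-to-agent shift in code: a fixed mercury-style feedback loop next to a version that learns its setpoint from manual overrides and uses context such as whether anyone is home. All class and method names are illustrative.

```python
class MercuryThermostat:
    """The original feedback loop: a fixed setpoint opens and closes the heat source."""

    def __init__(self, setpoint: float) -> None:
        self.setpoint = setpoint

    def control(self, current_temp: float) -> str:
        return "heat_on" if current_temp < self.setpoint else "heat_off"


class LearningThermostat:
    """An agentive version: infers its setpoint from overrides and context."""

    def __init__(self, initial_setpoint: float) -> None:
        self.setpoint = initial_setpoint
        self.overrides: list[float] = []

    def record_override(self, user_setpoint: float) -> None:
        # Each manual adjustment is treated as a signal about the user's real preference.
        self.overrides.append(user_setpoint)
        self.setpoint = sum(self.overrides) / len(self.overrides)

    def control(self, current_temp: float, anyone_home: bool = True) -> str:
        # Context (location, calendar) can suppress heating when nobody is home.
        if not anyone_home:
            return "heat_off"
        return "heat_on" if current_temp < self.setpoint else "heat_off"


nest_like = LearningThermostat(initial_setpoint=20.0)
nest_like.record_override(21.5)
print(nest_like.control(19.0, anyone_home=True))  # heat_on
```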
Agentive Technology is Here
- The evolution of tools can be viewed as iterated solutions to some core human need.
- Agents are a natural solution to a great many computable human problems, as designers attempt to reduce effort and maximise results.
- The two ways tools help reduce work:
- Reducing physical work by abstracting it away (e.g. handheld fan to desk fan)
- Reducing information work - giving us metrics and rules to help make decisions
- Tools become agentive when you connect information awareness with the ability to do the physical work.
- Tools start out manual and evolve to reduce physical effort. Others evolve to help with information work, measuring and regulating actions. A few systems then combine the physical and information work to become agentive.
- This has happened to writing, music and search.
- Writing: Writing → Typing → spell check → smart reply → LLMs
- Music: Playing → Recording → Radio → Spotify
- Search: Index cards → Google → Did you mean? → LLMs
- An agent is a piece of narrow artificial intelligence that acts on behalf of its user.
- Categories of AI:
- Artificial Super Intelligence (strong): Capabilities advanced far beyond human intelligence
- Artificial General Intelligence: General problem solving similar to human intelligence
- Artificial Narrow Intelligence (weak): Constrained AI that is fantastic at narrow tasks but can't generalise
- Dimensions of intelligence for agents:
- Nuanced: It has a more human-like and detailed understanding of its environment.
- Expansive: It can track and process larger and more complex sets of data.
- Insightful: It can draw smarter conclusions from what it observes and act on them.
- Strategic: It can plan how to achieve goals by considering resources and constraints.
- Adaptive: It can adjust its behavior based on feedback and changing conditions.
- Self-improving: It can learn and get better over time by refining its predictive models.
- Similar to intelligence, agency can be thought of as a spectrum.
Acting on your behalf, doing a thing while out of sight and out of mind, is foundational to the notion of what an agent is, why it's new, and why it's valuable.
Agentive technology watches a data stream for triggers and then responds with narrow artificial intelligence to help its user accomplish some goal. In a phrase, it’s a persistent, background assistant.
- Agents might also have advanced features: infer what you want, adapt, learn to make better predictions, it might make itself obsolete.
- A good agent does a task for you per your preferences. It does so out of sight.
- When designing an agentive experience, the goal is to make the touchpoints clear and actionable, and help the user keep the agent on track.
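In code, that persistent background loop is small. A minimal sketch, assuming a numeric data stream and a user-supplied trigger rule; the flight-price example and all names are illustrative, not from the book.

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Agent:
    """Watches a data stream and acts when a user-defined trigger fires."""
    trigger: Callable[[float], bool]   # the user's rule, e.g. price below a threshold
    action: Callable[[float], None]    # what to do on the user's behalf
    notify: Callable[[str], None]      # the touchpoint back to the user

    def run(self, stream: Iterable[float]) -> None:
        for reading in stream:
            if self.trigger(reading):
                self.action(reading)
                self.notify(f"Acted on reading {reading}")


# Example: a flight-price watcher that books (or alerts) below a threshold.
agent = Agent(
    trigger=lambda price: price < 300,
    action=lambda price: print(f"Booking flight at £{price}"),
    notify=print,
)
agent.run([420, 380, 295])
```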
What Agentive Technology Isn't:
- Assistive Technology: It supports the user in making informed decisions or performing tasks, but it does not take independent action on the user’s behalf.
- Conversational Agents: These interfaces focus on real-time, human-like dialogue to assist a user, rather than autonomously and persistently acting for the user behind the scenes.
- Robotics: While robots may contain agentive software, they are primarily about physical embodiments performing tasks in the real world, and the mere presence of a robot body does not make technology agentive.
- Service by Software: It delivers value—often “backstage”—through software, but is not necessarily granted the autonomy to act for the user in pursuit of the user’s goals.
- Automation: Its aim is to remove humans from a process entirely, whereas agentive technology focuses on serving a human’s intentions, keeping the user “in the loop” while taking actions on their behalf.
- These questions help determine whether a task is a good candidate for an agentive solution (a rough screening sketch follows the list):
- Can it be delegated?
- Is the trigger measurable?
- Does it require human oversight / focus?
- Can it be done successfully without user input or preference settings?
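One rough way to operationalise these questions is a screening check; the field names and the all-must-pass rule are my assumptions, not the author's.

```python
from dataclasses import dataclass


@dataclass
class TaskProfile:
    can_be_delegated: bool
    trigger_is_measurable: bool
    needs_constant_human_focus: bool
    works_without_user_input: bool


def is_agentive_candidate(task: TaskProfile) -> bool:
    # A task suits an agent when it can be handed off, its trigger can be sensed,
    # it doesn't demand continuous human attention, and defaults or preference
    # settings can stand in for live user input.
    return (
        task.can_be_delegated
        and task.trigger_is_measurable
        and not task.needs_constant_human_focus
        and task.works_without_user_input
    )


print(is_agentive_candidate(TaskProfile(True, True, False, True)))  # True
```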
Agentive Technology can Change the World
- Agentive technology:
- Can move us from discrete usage to continuous usage (constantly keeping an eye out for us).
- Can do the monotonous work in the middle we’re not good at (e.g. autopilot)
- Can do things we're unwilling to do - they act for you while your attention is elsewhere, on things too boring to do yourself
- Can encourage discovery through drift / serendipity
- Can help us achieve goals with minimal effort
You can even say that an agent is the ultimate expression of goal-focused design thinking, because it gets users to their goals with the least effort possible.
- Agents that focus on the goal will win out over agents that focus on the task
Six Takeaways from the History of Agentive Thinking
"The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." (Ada Lovelace)
- Paul Fitts published a paper in 1951 that compared what humans are better at and what machines are better at (HABA-MABA):
    - Humans are better at:
        - Easily recalling relevant memories from a vast lifetime of experiences.
        - Detecting subtle audio-visual signals.
        - Perceiving emergent patterns in light and sound.
        - Improvising and using flexible procedures to solve problems.
        - Using inductive reasoning to infer new conclusions from givens.
        - Passing judgment on the value or "rightness" of a thing.
    - Machines are better at:
        - Maintaining perfect and erasable short-term memory (avoiding human biases).
        - Performing tasks and responding to stimuli with high speed.
        - Handling great forces with speed and precision.
        - Maintaining consistency across repetitive tasks, without boredom.
        - Applying deductive reasoning by eliminating false hypotheses.
        - Managing many complex operations simultaneously.
- In 1983, Bainbridge published "Ironies of Automation": when you take people away from regular practice at working as part of a system, they become worse at preventing, troubleshooting, and remedying problems in that system.
- The Un-Fitts list, from Hoffman and Woods's 2002 paper, is more complicated, less memetic, and more true.
- How Machines Are Constrained:
- Sensitivity to context is low and is ontology-limited
- Sensitivity to change is low and recognition of anomaly is ontology-limited
- Adaptability to change is low and is ontology-limited
- They are not "aware" of the fact that the model of the world is itself in the world
- What Machines Need People For:
- Keep them aligned to the context
- Keep them stable given the variability and change inherent in the world
- Repair their ontologies
- Keep the model aligned with the world
- How People Are Not Limited:
- Sensitivity to context is high and is knowledge- and attention-driven
- Sensitivity to change is high and is driven by the recognition of anomaly
- Adaptability to change is high and is goal-driven
- They are aware of the fact that the model of the world is itself in the world
- Why People Create Machines:
- Help them stay informed of ongoing events
- Help them align and repair their perceptions because they rely on mediated stimuli
- Affect positive change following situation change
- Computationally instantiate their models of the world
- The Seven Deadly Myths of Autonomous Systems - by Jeffrey Bradshaw
- Autonomy is not a one-dimensional trait but rather a balance between self-sufficiency and self-directedness
- The concept of "levels of autonomy" oversimplifies complex, context-dependent capabilities
- Autonomy emerges from integrated human-machine interactions, not as a plug-and-play feature
- True autonomy doesn't exist - all machines require some level of human oversight
- Full autonomy doesn't eliminate human involvement - collaboration remains essential
- Increased autonomy transforms task dynamics rather than simply multiplying capabilities
- Pursuing full autonomy often creates new complexities and increased oversight demands
- Practical examples of ways that software might help its users, from A Model for Types and Levels of Human Interaction with Automation (Raja Parasuraman et al., 2000).
- Each level describes the tool as one of the following (see the enumeration sketch after the list):
- Fully manual
- Showing the user every option
- Narrowing the options
- Suggesting the "best"
- Asking the user to approve an action
- Giving the user time to veto a selected action
- Keeping the user informed of actions that have been taken
- Responding to user inquiry about actions that have been taken
- Deciding when to inform the user of actions that have been taken
- Fully autonomous
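The ten gradations read naturally as an ordered enumeration. A sketch with paraphrased labels; the names are mine, not Parasuraman et al.'s exact wording.

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Rough paraphrase of Parasuraman et al.'s manual-to-autonomous spectrum."""
    FULLY_MANUAL = 1           # the human does everything
    SHOW_ALL_OPTIONS = 2       # the computer offers a full set of alternatives
    NARROW_OPTIONS = 3         # the computer narrows the alternatives down
    SUGGEST_BEST = 4           # the computer suggests one "best" alternative
    EXECUTE_IF_APPROVED = 5    # executes the suggestion if the human approves
    ALLOW_VETO = 6             # allows a limited time to veto before acting
    INFORM_AFTER_ACTING = 7    # acts, then keeps the human informed
    INFORM_IF_ASKED = 8        # acts, informs only if the human asks
    INFORM_IF_IT_DECIDES = 9   # acts, informs only if it decides to
    FULLY_AUTONOMOUS = 10      # acts entirely on its own
```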
A better model is to do away with thinking of agency as having levels. It is better to think of the workflows, goals, and tasks of the individual or team, and then build agentive (or assistive) tools that enable mutual observability, predictability, and directability. Think of agency as fluid.
- Seven Cardinal Virtues of Human-Machine Teamwork: Examples from the DARPA Robotic Challenge (Johnson et al.)
- Clarity – Focusing on mission performance ensures decisions are made with straightforward, effective guidance.
- Humility – Recognising and embracing the limits of automation promotes necessary human–machine collaboration.
- Resilience – Designing systems to plan for and recover from failure creates robust and adaptable operations.
- Helpfulness – Enabling mutual support between humans and machines enhances team effectiveness.
- Cohesiveness – Maintaining observability, predictability, and directability unifies team actions into a coordinated whole.
- Integrity – Seamlessly integrating algorithms and interfaces ensures a dependable, mutually supportive system.
- Thrift – Dynamically right-sizing human involvement optimises both cost efficiency and overall performance.
- The agentive/assistive line will be blurry. From "Six Modes of Proactive Resource Management: A User-Centric Typology for Proactive Behaviour" (2004):
- Preparation: Can objects and places know when people are headed their way and prepare themselves for use?
- Optimisation: Can the agent observe available possibilities and pick the right one for the user's goals?
- Advising: Can agents observe tasks in progress and suggest better or alternate options?
- Manipulation: What can the agent do on its own when it is absolutely certain it is what the user wants?
- Inhibition: Can the agent understand enough of the context to know what is welcome and suppress the rest?
- Finalisation: What can the agent end or close when it is no longer in use?
Part 2: Doing
Chapter 5: A Modified Frame for Interaction
- Human Loop: See → Think → Do
- Computer Loop: Input → Process → Output
As we move from assistive to agentive technologies, the computer does the heavy lifting, with only occasional human interaction.
User Experience Journey:
- Setting Up the Agent
- Understanding the agent's capabilities and limitations.
- Conveying your goals and preferences.
- Granting your permissions and authorisation.
- Taking the agent out for a test drive.
- Launching the agent.
- Discovering and adding new capabilities as they become available or grow popular.
- Seeing What the Agent Is Doing
- Monitoring what's happening.
- Receiving notifications of successes and problems.
- Having or Helping the Agent Do Stuff
- Pausing and restarting the agent.
- Playing alongside the agent.
- Tuning triggers and behaviors such that they perform better in the future.
- Handing off the task to some intermediate person, or even a different, non-human actor.
- Practicing the main task to maintain skills.
- Taking over the task from the agent.
- Handing the task back to the agent.
- Disengaging from the Agent
- The user's no longer needing the agent.
- The user's passing.
Sensing technologies that enable agents to ‘see’: Object recognition, face recognition, biometrics, gaze monitoring, natural language processing, voice recognition, handwriting recognition, sentiment, gesture recognition, activity recognition, affect recognition, personality insights.
Chapter 6: Ramping Up with an Agent
Some agents operate in a domain small enough that they need little to no setup, but more sophisticated, powerful, and interesting agents will require conscious attention to set them up to perform well. There are five key aspects to design:
- Conveying capability - making it clear to potential users what an agent can do. Consider sharing sample outputs / results from agents and allowing users to conduct a trial run.
- Conveying limitations - making it clear to potential users what an agent can't do. Avoid anthropomorphisation: use more constrained signals to convey that the agent's capabilities are less than human.
- Getting goals, preferences, and permissions - use smart defaults, gather what you need implicitly, get permission for private data, and observe how users currently conduct the task. Set up explicit triggers and exceptions to a set of actions and constraints (a setup sketch follows this list).
- Test driving - If the risks are small - an agent can just launch. Some agents need to be test driven, to build trust and transparency about how they’ll operate before launch. Users can monitor a test drive on a more compressed schedule.
- Launch - If agents appear idle then users will need some assurance that they’re patiently waiting for a trigger. Provide monitoring feedback and the ability to pause and resume the agent.
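A sketch of what goals, preferences, and permissions might look like gathered into one setup object, with smart defaults, least-private permission defaults, and a compressed test-drive period. Every field name here is illustrative, not the book's.

```python
from dataclasses import dataclass, field


@dataclass
class AgentSetup:
    """Illustrative bundle of goals, preferences, and permissions gathered at setup."""
    goal: str = "keep the home between 18 and 22 degrees"        # smart default
    preferences: dict[str, float] = field(
        default_factory=lambda: {"night_setback": 16.0}          # tunable later
    )
    # Permissions default to the least-private option and must be granted explicitly.
    may_read_calendar: bool = False
    may_read_location: bool = False
    test_drive_days: int = 7    # compressed trial run before the real launch

    def missing_permissions(self) -> list[str]:
        # Surface anything the user still needs to grant for full capability.
        wanted = {"calendar": self.may_read_calendar, "location": self.may_read_location}
        return [name for name, granted in wanted.items() if not granted]


setup = AgentSetup(may_read_location=True)
print(setup.missing_permissions())  # ['calendar']
```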
Chapter 7: Everything Running Smoothly
- Users need clear, accessible controls to pause and restart an agent, with visual cues to indicate a paused state and potential missed opportunities.
- Even when operating autonomously, the agent should offer monitoring tools (like ambient displays and clear graphics) so users can check status, view results, and see upcoming triggers.
- Provide an option for users to “play” alongside the agent—using a virtual dataset or a subset of resources—to compare performance, build trust, and retain their own task skills.
- Notifications should inform users of completed actions (using subtle audio or log entries) and deliver helpful suggestions only when confidence is high, avoiding unnecessary interruptions (see the routing sketch after this list).
- Routine contact from the agent helps keep the system “in mind” without being intrusive, with respectful defaults and opt-out controls for frequency.
- When trends become concerning, the agent must send concise, friendly alerts detailing the issue, thresholds, current actions, and recommended user responses.
- Critical problems or significant deviations should trigger immediate outreach from the agent to alert the user of potential issues.
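A sketch of that notification routing: stay quiet below a confidence threshold, use subtle channels for routine successes, and escalate only for concerning or critical situations. The threshold value and channel strings are assumptions, not from the book.

```python
def route_notification(message: str, confidence: float, severity: str) -> str:
    """Pick an unobtrusive channel unless the situation genuinely warrants attention.

    severity: 'routine', 'concerning', or 'critical' (illustrative labels).
    """
    if severity == "critical":
        return f"IMMEDIATE ALERT: {message}"           # prominent outreach
    if severity == "concerning":
        return f"Friendly heads-up: {message}"         # concise, early warning
    if confidence >= 0.9:
        return f"Log entry / subtle chime: {message}"  # completed-action notice
    return ""                                          # below threshold: stay quiet


print(route_notification("Filter needs replacing soon", 0.95, "routine"))
```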
Chapter 8: Handling Exceptions
- Physical agents may include built-in controls like buttons, displays, touchscreens, microphones, and speakers (e.g., stopping a Roomba by stepping in its way).
- Designers are increasingly shifting control and display functions to the cloud, allowing users to interact via desktops or smartphones, which necessitates robust security and authentication features.
- Trust
- The level of trust in an agent depends on the risk associated with its tasks; low-risk tasks (e.g., cleaning an unswept floor) are trusted more readily than high-risk tasks (e.g., financial decisions or physical safety).
- Trust is built gradually over multiple interactions but can be quickly eroded by failures; factors like task complexity, scope layers, and the provider’s reputation all influence trust levels.
- Resource Management and Physical Interventions
- Agents must monitor limited resources (e.g., batteries, storage, bandwidth) and alert users well in advance to manage or replenish these resources.
- When agents encounter physical problems (e.g., a Roomba getting stuck or a pet feeder's bin jamming), users need clear alerts so they can make simple manual corrections without disrupting the agent's function.
- Trigger Tuning:
- Agents rely on triggers to act, but false positives (e.g., playing an unwanted song) and false negatives (e.g., missing a desired action) can occur.
- Users should have options to skip specific cases, add items to blacklists or whitelists, and modify or add rules using a constrained natural language builder that provides previews and highlights edge cases (see the sketch after this list).
- Behaviour Tuning:
- Interfaces should allow users to adjust an agent’s goals (desired outcomes) and methods (how it achieves those outcomes) through both physical demonstrations and virtual tools, such as menus or conversational interfaces.
- Clear, user-friendly rule modification interfaces can help refine how agents act, ensuring that adjustments are intuitive and reduce the risk of errors.
- Handoff, Takeback, and Disengagement
- When an agent cannot handle a situation, it should facilitate a handoff (user takes control) and later allow for a takeback (user returns control to the agent), ensuring smooth transitions.
- Agents must provide polite, painless options for disengagement, allowing users to opt out gracefully, and should have protocols for handling sensitive scenarios like a user's death (e.g., transferring estate management tasks).
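Picking up the trigger-tuning bullet above: a rough sketch, not the book's design, of how skip/blocklist/allowlist rules and a preview might fit together. All names are placeholders.

```python
from dataclasses import dataclass, field


@dataclass
class TriggerRules:
    """Illustrative trigger rules a user can tune after false positives/negatives."""
    blocklist: set[str] = field(default_factory=set)  # never act on these
    allowlist: set[str] = field(default_factory=set)  # always act on these
    default_allow: bool = True                        # what to do otherwise

    def should_act(self, item: str) -> bool:
        if item in self.blocklist:
            return False   # user skipped this case: fixes a false positive
        if item in self.allowlist:
            return True    # user added this case: fixes a false negative
        return self.default_allow

    def preview(self, candidates: list[str]) -> list[str]:
        # Show the user what the current rules would and wouldn't trigger on.
        return [item for item in candidates if self.should_act(item)]


rules = TriggerRules(blocklist={"smooth jazz"})
print(rules.preview(["smooth jazz", "post-rock"]))  # ['post-rock']
```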
Chapter 9: Handoff and Takeback
- Handoff and Takeback as Critical Challenges:
- Early automation envisioned computers as replacements with humans as fail-safes, but computers lack key human abilities like pattern recognition, inductive reasoning, and flexible memory, necessitating human oversight.
- Trust and control become complicated because users often lose system-level expertise and vigilance, degrading their ability to manage emergencies.
- Human Limitations and Vigilance
- Research shows human vigilance significantly declines after about 30 minutes, making continuous monitoring impractical for critical takeover tasks.
- Overreliance on automation can erode users’ skills, leaving them as ineffective as novices when emergencies occur.
- Design Strategies to Mitigate Expertise Loss
- Intermediary Control: Use remote operators or share sensor data among nearby agents (e.g., vehicles) to assist during critical handoff scenarios.
- User Handoff and Regular Practice: Incorporate routine takeover drills (both real and high-fidelity virtual practice) to keep user skills fresh and ensure smooth transitions during emergencies.
- Interface and Notification Considerations
- Provide trending monitors and high-information alarms that alert users well before a critical threshold is reached, with clear, glanceable information (a trend-projection sketch follows this list).
- Design persistent maps and assistive cues to quickly build user situational awareness during handoff events, ensuring immediate recognition and action.
- In emergencies, use prominent visual and control affordances to signal urgent actions (e.g., “grab the steering wheel”) and facilitate clear, low-distraction takeback controls, allowing users to signal readiness for the agent to resume control.
- After takeback, include a brief reassurance period to help users regain confidence and recover from the critical event.
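One way to read the "alert well before a critical threshold" advice is to project the current trend forward and warn while there is still time for a calm handoff. A rough sketch under that assumption; the linear projection and the 30-minute warning window are placeholders, not from the book.

```python
def minutes_until_threshold(readings: list[float], threshold: float,
                            interval_minutes: float = 1.0) -> float | None:
    """Linear projection of when a monitored value will cross a threshold.

    readings: recent samples, oldest first. Returns None if no crossing is projected.
    """
    if len(readings) < 2:
        return None
    rate = (readings[-1] - readings[0]) / ((len(readings) - 1) * interval_minutes)
    if readings[-1] >= threshold:
        return 0.0                      # already at or past the critical level
    if rate <= 0:
        return None                     # not trending toward the threshold
    return (threshold - readings[-1]) / rate


# Warn the user long before the handoff becomes urgent.
eta = minutes_until_threshold([60.0, 64.0, 69.0], threshold=90.0)
if eta is not None and eta < 30:
    print(f"Heads-up: projected to reach the critical level in ~{eta:.0f} minutes")
```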
Chapter 10: Evaluating Agents
- You can evaluate agentive systems using both traditional usability methods and additional heuristics tailored to agent-specific behaviours.
- Even agents "that just do their thing" include interfaces (e.g., sign-up, notifications, exception handling) that must be assessed.
- Use rapid, in-progress prototypes (paper, digital, or with a human acting as the agent) to gather early feedback on agent interactions.
- For live agents, combine quantitative usage metrics with qualitative ethnographic research and lab tests to understand user behaviour and trigger handling.
- Assess traditional task-related interfaces using established usability principles, while applying specialised heuristics for agentive aspects.
- Verify that the agent triggers appropriately (true positives/negatives) and performs as specified through external audits or measurements (a metrics sketch follows this list).
- Measure user confidence via surveys on clarity, progress visibility, and reliability of the agent's performance.
- Evaluate perceived value by determining if users find the agent worth the investment in terms of cost, hassle, and overall benefit.
- Analyse cooperation by testing how well the agent allows users to direct or intervene, handles exceptions, and collaborates with other systems.
- Compare objective performance data (with and without the agent) to assess the overall value added to the system.
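For the "triggers appropriately" check, ordinary classification metrics work. A minimal sketch over a hypothetical audit log of (agent fired, user actually wanted the action) pairs; the log format is an assumption.

```python
def trigger_accuracy(events: list[tuple[bool, bool]]) -> dict[str, float]:
    """Compute precision and recall for an agent's trigger decisions.

    events: (agent_fired, user_wanted_action) pairs taken from an audit log.
    """
    tp = sum(1 for fired, wanted in events if fired and wanted)
    fp = sum(1 for fired, wanted in events if fired and not wanted)
    fn = sum(1 for fired, wanted in events if not fired and wanted)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # how often firing was right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # how often wanted actions fired
    return {"precision": precision, "recall": recall}


log = [(True, True), (True, False), (False, True), (True, True)]
print(trigger_accuracy(log))  # precision 2/3, recall 2/3
```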
Part 3: Thinking
Chapter 11: How Will Our Practice Evolve?
- Agentive technology demands new vocabulary, techniques, and testing methods distinct from traditional tool design.
- Designers must introduce and advocate for agentive solutions by sharing vocabulary, anecdotes, and case studies with stakeholders.
- Early agents will be small and buggy; work will focus on creating smarter defaults, refined rules, and exception handling that align with users' mental models and emotional contexts.
- Agents should serve as temporary scaffolds that support skill acquisition and then gradually recede, allowing users to regain full autonomy.
- Agentive technology represents the forefront of narrow AI, with general AI (AGI) expected in several decades, guiding the evolution of current practices until AGI potentially renders traditional agent designs obsolete.
Chapter 12: Utopia, Dystopia and Cat Videos
- Agentive technology is inherently non-neutral, embedding biases and values that can influence society.
- Predictions about new tech tend to swing between utopian and dystopian extremes, while the actual effects are more nuanced.
- Autonomous agents raise significant ethical and accountability challenges due to their independent decision-making.
- Software agents can exhibit biases or be manipulated, as seen in examples like the non-random shuffle favouring certain music.
- Limited user ability to inspect agent code makes open-source approaches helpful, but expertise remains a barrier.
- People or companies may game the data-streams driving agents, potentially distorting agent behaviour.
- Detailed user models built by agents pose serious risks for privacy and identity theft.
- Autonomous decision-making in critical systems (e.g., self-driving cars) brings ethical dilemmas similar to the trolley problem.
- The drive to make agents smarter could trigger an arms race toward artificial general intelligence (AGI), complicating ethical oversight. The potential emergence of AI personhood raises complex legal and cultural questions about rights and responsibilities.
- Anthropomorphic qualities of agents can be exploited for social engineering, undermining security.
- Widespread use of agents might lead to stratified services, where less profitable customers receive lower-quality support.
- The proliferation of agents may overwhelm users, necessitating intermediary management systems similar to hierarchical structures.
- Increased reliance on agents risks the gradual loss of human skills, even if some specialised expertise is maintained.
- Society’s growing dependence on agents creates vulnerabilities if these systems fail, underscoring the need for robust handoff and resilience mechanisms.
Chapter 13: Your Mission, Should You Choose to Accept It
- Agentive technology forces us to reconsider how we discuss and design interactions in a world where machines exhibit more agency.
- There’s a tension between technology's capabilities and the need for human ethical decisions.
- Agentive technology is a promising new frontier that requires its own vocabulary, techniques, and evaluation methods.
- Designers must shift from building tools for manual task execution to creating effective, humane agents that manage tasks.
- There is a need to integrate complementary models and patterns for assistive and agentive technologies in our design practices. Sharing case studies, design heuristics, and development libraries is essential to evolve the practice of agentive technology.
- Organisations should spread these new models across teams to enhance collaboration and innovation in agent design.
- Effective agentive technology supports human decision-making, enabling users to manage tasks rather than perform them manually. Future designs should empower users, making technology a partner that enhances human capabilities rather than replacing them. The goal is to create technology that works seamlessly for people, making our lives easier and more efficient.