Ethan Mollick
Review
We’re all trying to get our heads around what the latest wave of AI disruption means. This book is an invaluable resource in that quest, offering helpful frameworks and explainers that let you approach the subject with clarity. The authors’ take on the disruption of education is eye-opening.
Key Takeaways
The 20% that gave me 80% of the value.
- So far AI has often overpromised and underdelivered.
- AI is a general-purpose technology, like electricity or the internet. It will take decades to fully realise its potential as we build out supporting technologies and people choose to adopt it.
- AI models, though, are being adopted quickly and are improving rapidly, increasing in size tenfold each year.
- AI differs from past technologies by augmenting and replacing human thinking, boosting productivity by 20-80% in various jobs.
- Recent breakthroughs have seen AI pass the Turing Test and the Lovelace Test, and even excel in academic exams and math olympiads. Some believe we are witnessing the emergence of a new intelligence.
- Even though they are just predictive models, frontier AI models, trained on large datasets with lots of compute, show surprising abilities. This is called emergence. No one is entirely sure why a token-prediction system resulted in an AI with such extraordinary abilities.
There are hundreds of billions of connections between these artificial neurons, some invoked many times during processing, making any precise explanation of an LLM’s behavior too complex for humans to understand.
- For us, it's worth focusing on the practical—what can AIs do, and how will they change our lives, learning, and work?
- There is no particular reason that AI should share our view of ethics and morality.
- Artificial General Intelligence (AGI) is when AI becomes as smart, capable, creative and flexible as a human. An AGI or group of AGIs could start working around the clock to become more intelligent. They could invent Artificial Super Intelligence (ASI) - the moment an ASI is invented, humans could become obsolete.
- There is no guarantee that an AI system will keep its original values and goals as it evolves and learns from its environment.
- Biases in AI training data and human raters can lead to skewed AI outputs. Few AI companies seek permission for training data, raising ethical issues even before legality is discussed.
The path forward requires a broad societal response, with coordination among companies, governments, researchers, and civil society. We need agreed-upon norms and standards for AI’s ethical development and use, shaped through an inclusive process representing diverse voices. Companies must make principles like transparency, accountability, and human oversight central to their technology. Researchers need support and incentives to prioritize beneficial AI alongside raw capability gains. And governments need to enact sensible regulations to ensure public interest prevails over a profit motive.
- The public also needs education on AI so they can pressure for an aligned future as informed citizens.
Four Rules for Co-Intelligence
- Principle 1: Invite AI to help you in everything you do. It’ll help you understand its capabilities and limitations. Those who understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock their potential.
- Principle 2: Be the human in the loop. Aim to be a helpful human in the loop. We need human judgment in complex systems. LLMs often generate incorrect answers and justify them with eloquence. You must check AI for errors, providing oversight with your perspective, critical thinking, and ethics. This collaboration leads to better results, keeps you engaged, and helps you maintain and sharpen your skills.
- Principle 3: Treat AI like a person (but tell it what kind of person it is). Imagine your AI collaborator as a fast intern - eager to please but prone to bending the truth. They can adapt to your preferences and personality by learning from your feedback and interactions. They are suggestible and even gullible. To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle. Break the pattern of generic responses by providing context and constraints. Be mindful of the downsides of anthropomorphism.
- Principle 4: Assume this is the worst AI you will ever use. There is no reason to suspect that the abilities of AI systems are going to stop growing anytime soon. Future software will be far more advanced than it is today. Remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.
AI As a Person
- Large Language Models are impressive but behave unpredictably compared to traditional software. They excel at human-like tasks (writing, analysing, coding, chatting) but struggle with machine-like tasks (consistency, complex calculations). So we should treat AI as if it were human.
- Modern AI demonstrates adaptability to different conversation styles and creates a convincing illusion of sentience.
- The possibility of "perfect AI companions" could have profound implications for human relationships. There are concerns about future AI systems being optimised for user engagement, similar to social media.
- Viewing AI as human-like but not human can be a pragmatic approach to interacting with it.
AI As a Creative
- The paradox of AI creativity: Hallucination makes LLMs unreliable and dangerous for factual work BUT it makes them great at creative tasks.
- The key question: How can we use AI to take advantage of its strengths while avoiding its weaknesses?
- Jobs with creative tasks are most impacted by AI.
- We mistake novelty for originality. New ideas are based on existing concepts, often connecting distant ideas. LLMs are connection machines, linking seemingly unrelated tokens.
- Invite AI to brainstorming sessions. You have to come up with many bad novel ideas to find a good one. AI can help with creation; we’re good at being the filter. Most ideas will be mediocre, but AI can help you avoid the blank-page problem. Look for inspiring ideas and filter out the rest.
- Example: "You are a marketing expert. Generate 20 clever and diverse slogans for a new mail-order cheese shop."
- Using LLMs can help you complete tasks 40% faster, and at a higher quality as judged by other humans.
- AI works tremendously well as a coding assistant because writing software code combines elements of creativity with pattern matching - expect around a 55.8% productivity increase.
- AI may turn out to reinvigorate art and creativity, rather than be its demise.
- People who have deep or broad knowledge of unusual fields will be able to use AI in ways that others cannot.
- Many people want to express themselves - there is a lot of frustrated creative energy in the world. Generative AI is giving people new modes of expression and new languages for their creative impulses.
AI As a Coworker
- Studies conclude almost all of our jobs will overlap with the capabilities of AI. Surprisingly - AI overlaps most with the most highly compensated, highly creative, and highly educated work.
- Jobs are composed of bundles of tasks. Jobs fit into larger systems. Without considering systems and tasks, we can’t really understand the impact of AI on jobs.
- AI can take over some tasks, but getting rid of some tasks doesn’t mean the job disappears. Power tools didn’t eliminate carpenters but made them more efficient.
- Relying too much on AI can backfire. Fabrizio Dell’Acqua showed recruiters who used high-quality AI became lazy, careless, and less skilled in their own judgment.
- We want to be more efficient while doing less boring work, and to remain the human in the loop while also addressing the value of AI. Divide tasks into categories that are more or less suitable for AI disruption.
- Evaluate what tasks AI performs well and what tasks require human involvement. Tasks can be divided into three categories:
- Just Me Tasks: These are tasks where AI is not useful or could compromise personal touch and creativity.
- Delegated Tasks: These are tasks assigned to AI, which are typically low importance, repetitive, or time-consuming but still require human oversight.
- Automated Tasks: These tasks are fully managed by AI without human intervention, typically because they are reliable and scalable.
- Centaurs and Cyborgs
- Centaur work has a clear division between person and machine, strategically switching between AI and human tasks based on their strengths.
- Centaurs handle tasks they are best at and pass others to AI.
- Cyborgs deeply integrate machine and human efforts, moving back and forth, intertwining their tasks with AI.
- Using AI as a co-intelligence is where AI is the most valuable. Figure out a way to do this yourself.
- Follow the first principle (invite AI to everything)
- Learn the shape of the Jagged Frontier in your work (knowing what the AI can do and what it can’t)
- Start working like a Centaur. Give the tasks that you hate but can easily check (like writing meaningless reports or low-priority emails) to the AI and see whether it improves your life.
- Transition into Cyborg usage once you find AI becomes indispensable in overcoming small challenges (this is when you’ve reached co-intelligence)
AI As a Tutor
- The average student tutored one-to-one performs two standard deviations better than students educated in a conventional classroom environment - well enough to move an average student to roughly the 98th percentile. A powerful, adaptable, and cheap personalised tutor is the holy grail of education.
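The arithmetic behind that claim can be checked directly: on a normal distribution of scores, a student two standard deviations above the mean lands at about the 97.7th percentile, which the book rounds to the 98th.

```python
from statistics import NormalDist

# Percentile rank of a student scoring two standard deviations above
# the mean, assuming normally distributed scores (the "two sigma" claim).
percentile = NormalDist(mu=0, sigma=1).cdf(2.0) * 100
print(f"{percentile:.1f}th percentile")  # ~97.7, i.e. roughly the 98th
```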
- We’re at an inflection point where AI will reshape how we teach and learn.
- We need to rethink education
- Teachers will have to think about what AI use is acceptable. Where is the line? Writing an outline for an essay? Helping with a sentence that someone is stuck on? Asking for references? Getting AI to explain a topic to you?
- Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically.
- AI provides the chance to generate new approaches to pedagogy that push students in ambitious ways.
- The new advice: Make what you are planning on doing ambitious to the point of impossible; you are going to be using AI - I won’t penalise you for failing if you are too ambitious.
- We won’t need prompt-engineering degrees - writing good prompts is easy, and prompting is not going to be that important for much longer. In the meantime, chain-of-thought prompting and step-by-step instructions that build on each other can get better results.
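The essence of chain-of-thought prompting is simply asking the model to reason before answering. A minimal sketch - the helper function and its exact wording are illustrative, not from the book:

```python
def with_chain_of_thought(question: str) -> str:
    # Append an instruction to show intermediate reasoning; this
    # simple suffix is the core of chain-of-thought prompting.
    return (f"{question}\n\n"
            "Think through this step by step, numbering each step, "
            "then state your final answer on its own line.")

prompt = with_chain_of_thought(
    "A cheese shop ships 3 boxes of 12 wheels each; "
    "2 wheels break in transit. How many arrive intact?"
)
```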
- The lecture is in danger. Lectures are too passive and don’t engage us in active problem-solving or critical thinking. They are one-size-fits-all - they don’t account for individual differences and abilities, leading to some becoming bored and some falling behind.
- The flipped classroom idea: students learn new concepts at home (through videos, and AI tutors) then apply what they’ve learned in the classroom through collaborative activities, discussions, or problem-solving exercises. Maximises classroom time for active learning and critical thinking, while using at-home learning for content delivery.
- AI tutors could provide personalised learning at scale, tailoring instructions to each student’s unique needs while continually adjusting content based on performance.
- Students engage with content at home effectively → come to class prepared and ready to dive into hands-on activities or discussions.
- Students will want to understand why they are doing assignments that seem obsolete thanks to AI. They will want to use AI as a learning companion, a coauthor, or a teammate. They will want to accomplish more than they did before, and will also want answers about what AI means for their future learning paths.
AI As a Coach
- Amateurs become experts by learning from more experienced experts in a field - who create a safe space for them to fail and learn. In a future with AI, more experienced people might favour working with AI over bringing on an apprentice. This could create a major training gap.
- In order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we still need subject matter expertise.
- In a world where AI augments our work - the more we need to maintain and nurture human expertise. We need expert humans in the loop.
- The way to be useful in the world of AI is to have high levels of expertise as a human.
- AIs could become great coaches, creating a better training system than we have today. They could create plans and increase the quality and volume of our deliberate practice. We’ll have consistent, rapid feedback loops, combined with targeted suggestions for improvement.
- The difference in growth trajectories will become evident.
- AI levels the playing field, making previously less skilled workers more competent.
- Humans working with AI outperform almost everyone except the best humans working alone.
- AI won't kill expertise. Jobs involve complex tasks needing human judgment. AI will improve performance in some areas, allowing workers to focus on their expertise.
AI As Our Future: At what rate will AI progress from here? There are a few scenarios:
- Scenario 1: As Good as It Gets
- A global ban or halting of development through regulation. Even if AI progress stopped now, its implications are significant. We still have further to go in adopting the technology we have today. In the workplace, AI would likely complement human efforts, improving performance and relieving tedious tasks.
- Scenario 2: Slow Growth
- AI growth slows to 10-20% per year due to factors like training costs, regulations, or technical limits. This slower pace allows time to develop usage rules and identity verification systems. AI-generated personas become common in games and personalised media, while AI therapists and chatbots normalise in business settings. Work transforms gradually. AI could boost scientific innovation by addressing the "burden of knowledge" in research. Overall, this scenario presents mixed but largely positive results, with humans remaining in control of AI development and application.
- Scenario 3: Exponential Growth
- AI becomes hundreds of times more capable within a decade. This rapid growth brings increased risks of AI hacking, influence campaigns, and the potential development of dangerous pathogens or chemicals. Society might need AI filtering systems to combat misinformation, risking further information bubbles. There's a danger of "AI-tocracy" with increased surveillance. Socially, new forms of isolation may emerge as AI-powered entertainment and assistants become prevalent. This scenario might necessitate significant policy changes like shortened workweeks or universal basic income to manage the societal impact.
- Scenario 4: The Machine God
- AI reaches and surpasses human-level intelligence, potentially leading to an end of human dominance. The implications are profound and uncertain: AI might solve human problems and improve our lives dramatically, or it might view humanity as irrelevant. This scenario represents both the greatest potential benefits and the most severe risks of AI development.
- Regardless of which scenario unfolds, it's crucial to focus on the more likely outcomes where AI remains under human control. We should consider both the benefits and risks of AI across various domains, prepare for societal and work-related changes, and address potential small-scale catastrophes rather than fixating on one big AI apocalypse. The decisions we make now about AI implementation and regulation will shape its impact on our future.
Deep Summary
Longer form notes, typically condensed, reworded and de-duplicated.
Introduction
- LLMs act more like people than computers.
- AI has often overpromised and underdelivered.
- AI, like electricity or the internet, is a general-purpose technology. It will take decades to fully realise its potential (as we build out supporting technologies and people choose to adopt it).
- AI models are being quickly adopted and are rapidly improving, increasing in size tenfold each year.
- AI differs from past technologies by augmenting and replacing human thinking, boosting productivity by 20-80% in various jobs.
- Some believe we are witnessing the emergence of a new intelligence.
- Recent breakthroughs have seen AI pass the Turing Test, Lovelace Test, and even excel in academic exams and math olympiads.
Part 1:
Chapter 1: Creating Alien Minds
- Machine Learning became popular in the 2010s, primarily for data analysis and prediction. The vast majority was supervised learning: humans labelling enough data with the correct answers for machines to learn from. There was very little about these systems that actually seemed intelligent or clever. They struggled with “unknown unknowns” - situations that humans intuitively understand but machines do not - and with data they had not yet encountered through supervised learning, making them brittle. They had very limited ability to understand and generate text in a coherent and context-aware manner.
- “Attention Is All You Need”, a 2017 paper, introduced a significant shift in the world of AI: a new architecture called the Transformer. Transformers helped computers pay attention to the most relevant parts of a text, helping them produce more context-aware and coherent writing.
- Large Language Models (LLMs) are trained on a massive amount of text (websites, books, documents, chats). The Pretraining is unsupervised, which means the AI doesn’t need carefully labeled data.
- They learn to recognise patterns, structures, and context in human language, and what they learn is stored in weights (parameters). Weights tell the AI how likely different words or parts of words are to appear together or in a certain order. Some models now have trillions of parameters.
- During training, the model attempts to re-create documents and identifies any mistakes or discrepancies. It can then adjust weights over time, and through countless iterations, the model becomes more organised and accurate. It requires lots of compute and the pretraining phase is one of the main reasons AIs are so expensive to build. Most advanced LLMs cost over $100 million to train.
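A toy sketch of the underlying objective, at an absurdly smaller scale. Here bigram counts stand in for learned weights - that simplification is mine, not the book’s; real LLMs adjust billions of weights by gradient descent, but the goal is the same: predict the next token from what came before.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count which token follows which - a stand-in for learned weights."""
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts: dict, token: str) -> str:
    # Return the most frequent continuation seen during training.
    return counts[token].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(predict(model, "the"))  # "cat" follows "the" twice, "mat" once
```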
- AI companies are keeping their training sources secret - but assume it’s everything they can find. High quality training data will be exhausted soon.
- It might be possible for AI to pretrain on its own content. This is what chess-playing AIs already do, learning by playing games against themselves.
- AI will learn biases, errors, and falsehoods from the data it sees.
- LLMs undergo further improvement in a second stage, called fine-tuning. Fine-tuning brings humans into the process: they read AI answers and judge them (on accuracy, or to screen out violent or pornographic content). This process is called Reinforcement Learning from Human Feedback (RLHF). During fine-tuning, more information might be provided by a specific customer trying to fit the model to its use case (e.g. a company using it for customer support), making it more specific to a particular need.
- Image-based models analyze images with text captions to learn word-visual associations. Diffusion models create images from text: they start with a static-like image of random noise and use diffusion to progressively refine it into a clear image based on the text description.
- Multimodal LLMs combine language models and image generators, using Transformer architectures and extra components for images. They link visual concepts with text, helping them understand the visual world. For example, given a crude drawing of an airplane with hearts, it identifies it as a "cute drawing of an airplane."
- Despite being just predictive models, frontier AI models, trained on vast datasets with immense computing power, show unexpected abilities - a concept called emergence.
- No one is entirely sure why a token prediction system resulted in an AI with such extraordinary abilities.
There are hundreds of billions of connections between these artificial neurons, some invoked many times during processing, making any precise explanation of an LLM’s behavior too complex for humans to understand.
- It's hard to predict where AI works best or will fail. They are so good at sounding correct, providing an illusion of understanding. High test scores can result from the AI's problem-solving ability or prior exposure to the data, making the test an open book.
- For us, it's worth focusing on the practical—what can AIs do, and how will they change our lives, learning, and work?
Chapter 2: Aligning the Alien
- There is no particular reason that AI should share our view of ethics and morality.
- Artificial General Intelligence (AGI) is when AI becomes as smart, capable, creative and flexible as a human. An AGI or group of AGIs could start working around the clock to become more intelligent. They could invent Artificial Super Intelligence (ASI) - the moment an ASI is invented, humans become obsolete.
- What happens then is literally unimaginable to us. This is why the possibility is given names like the Singularity - a reference to the point in a mathematical function where the value becomes unmeasurable. The term was coined by the famous mathematician John von Neumann in the 1950s to refer to the unknown future after which “human affairs, as we know them, could not continue.”
- In an AI singularity, hyper-intelligent AIs appear, with unexpected motives. The paper clip maximising AI might choose to kill all humans to use their atoms to make more paperclips. It never even considers whether humans are worth saving, because they are not paper clips.
- Many of these concerns revolve around an ASI - being able to make smarter machines yet, kick-starting a process that escalates machines far beyond humans in an incredibly short time.
- Figuring out how to align an ASI before it is made is an immense challenge. AI alignment researchers, using a combination of logic, mathematics, philosophy, computer science, and improvisation are trying to figure out approaches to this problem.
- There is no guarantee that an AI system will keep its original values and goals as it evolves and learns from its environment.
- Experts in the field of AI put the chance of an AI killing at least 10 percent of living humans by 2100 at 12 percent.
- Some believe creating Superintelligence is humanity’s most important task, with boundless potential. We are in the early days of the AI Age - there are crucial decisions to be made which could have big implications. There are varied ethical concerns, especially regarding AI alignment.
- Biases in AI training data and human raters can lead to skewed AI outputs. Few AI companies seek permission for training data, raising ethical issues despite legality.
The path forward requires a broad societal response, with coordination among companies, governments, researchers, and civil society. We need agreed-upon norms and standards for AI’s ethical development and use, shaped through an inclusive process representing diverse voices. Companies must make principles like transparency, accountability, and human oversight central to their technology. Researchers need support and incentives to prioritize beneficial AI alongside raw capability gains. And governments need to enact sensible regulations to ensure public interest prevails over a profit motive.
- The public also needs education on AI so they can pressure for an aligned future as informed citizens.
Chapter 3: Four Rules for Co-Intelligence
We need to understand how to work with AIs, so we need to establish some ground rules.
- Principle 1: Always invite AI to the table.
- Inviting AI to help you in everything you do.
- AI is a General Purpose Technology; it doesn’t come with a manual.
- Using it will help you understand its capabilities, limitations and threats to your work.
- As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock its potential.
- Try to unlock its full innovative potential and create breakthrough opportunities.
- Worried about becoming dependent? Use it as an assistive tool, not as a crutch, keep humans firmly in the loop.
- Principle 2: Be the human in the loop.
- AI works best with human help. Even as AI improves, you should aim to be that helpful human. Learn to be the "human in the loop."
- "Human in the loop" emphasizes the need for human judgment in complex systems.
- LLMs often generate incorrect answers, a phenomenon known as "hallucination."
- AIs can justify wrong answers, making you think they are correct.
- To be the human in the loop, you must check AI for errors, providing oversight with your perspective, critical thinking, and ethics. This collaboration leads to better results, keeps you engaged, and helps you maintain and sharpen your skills.
- Active participation in the AI process will help us align AI with our values and ethics.
- Working closely with AI allows you to notice emerging intelligence early, giving you a head start in adapting to changes.
- Principle 3: Treat AI like a person (but tell it what kind of person it is).
- Anthropomorphise AI. AI systems don’t have a consciousness, emotions, a sense of self, or physical sensations. But working with AI is easiest if you think of it like an alien person rather than a human-built machine.
- Imagine your AI collaborator as a fast intern - eager to please but prone to bending the truth.
- They aren’t experts, but can mimic the language and style of experts in ways that can be either helpful or misleading.
- They aren’t your friends but can adapt to your preferences and personality by learning from your feedback and interactions.
- They are suggestible and even gullible.
- To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle.
- Break the pattern of generic responses by providing context and constraints.
- It helps to tell the system who it is, because that gives it a perspective.
- Give the LLM guidance and direction on how to generate outputs that match your expectations and needs.
- There are some downsides to anthropomorphism in general discourse, so beware of those. It can create unrealistic expectations, false trust, or unwarranted fear among the public and policymakers. It can obscure the true nature of AI as software, leading to misconceptions about its capabilities.
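In practice, “telling the AI who it is” usually means setting a system message. A minimal sketch of building such a persona prompt - the `build_messages` helper and the persona wording are illustrative, not from the book; the system/user message format follows the common convention of chat-style LLM APIs.

```python
def build_messages(persona: str, task: str) -> list[dict]:
    """Pair a persona-setting system message with the user's task."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    persona=("You are a marketing expert. Be clever and concrete, "
             "and say when you are unsure rather than inventing facts."),
    task="Generate 20 diverse slogans for a new mail-order cheese shop.",
)
# `messages` can be passed as-is to most chat-style LLM APIs.
```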
- Principle 4: Assume this is the worst AI you will ever use.
- There is no reason to suspect that the abilities of AI systems are going to stop growing anytime soon. Future software will be far more advanced than it is today.
- We are playing Pac-Man in a world that will soon have PlayStation 6s.
- Remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.
Part II
Chapter 4: AI As a Person
- Large Language Models (LLMs) are impressive but behave unpredictably compared to traditional software.
- They excel at human-like tasks (writing, analysing, coding, chatting) but struggle with machine-like tasks (consistency, complex calculations). So we should treat AI as if it were human.
- AI has no morality of its own, but it can interpret our moral instructions. When not given instruction, AI defaults to efficient outcomes.
- The Turing Test, proposed by Alan Turing, aimed to determine if machines could imitate human intelligence. While influential, the Turing Test has limitations and doesn't capture all aspects of human intelligence. Some research suggests AI may have a "theory of mind," but this remains controversial. Measuring AI consciousness, sentience, or free will is challenging due to lack of clear definitions and objective tests. A recent paper lists 14 potential indicators of machine consciousness, with current LLMs exhibiting some but not all.
- Modern AI demonstrates adaptability to different conversation styles and creates a convincing illusion of sentience. The possibility of "perfect AI companions" could have profound implications for human relationships. There are concerns about future AI systems being optimised for user engagement, similar to social media.
- Viewing AI as human-like but not human can be a pragmatic approach to interacting with it.
Chapter 5: AI As a Creative
- Always invite it to the table.
- The biggest downside of AI - its ability to make stuff up, to hallucinate - turns into a strength in the creative domain. Hallucination allows the AI to find novel connections outside the exact context of its training data.
- The paradox of AI creativity: hallucination makes LLMs unreliable and dangerous for factual work, but it’s also what makes them useful creatively.
- Ask yourself: how can we use AI to take advantage of its strengths while avoiding its weaknesses?
Automatic Creativity
- Jobs with creative tasks are most impacted by AI.
- We mistake novelty for originality. New ideas are based on existing concepts, often connecting distant ideas. LLMs are connection machines, linking seemingly unrelated tokens.
- Innovative people benefit least from AI creative help.
- AI tends to pick similar ideas, while humans generate more diverse ideas.
- Invite AI to brainstorming sessions. Most ideas will be mediocre, but it can help you avoid the blank page problem. Look for inspiring ones and filter the rest.
- Instruct AI clearly:
- Example: "You are a marketing expert. Generate 20 clever and diverse slogans for a new mail-order cheese shop."
- You have to come up with many bad novel ideas to find a good one. AI can help with creation, we’re good at being the filter.
Adding AI to Creative Work
- A surprisingly large amount of work is actually creative work. There are many situations in which there is no right answer.
- Using LLMs can help you complete tasks 40% faster, and at a higher quality as judged by other humans.
- AI works tremendously well as a coding assistant because writing software code combines elements of creativity with pattern matching - expect 55.8% productivity increase.
- AI may turn out to reinvigorate art and creativity, rather than be its demise.
- People who have deep or broad knowledge of unusual fields will be able to use AI in ways that others cannot, developing unexpected and valuable prompts and testing the limits of how they work.
The Meaning of Creative Work
- Many people want to express themselves - there is a lot of frustrated creative energy in the world. Generative AI is giving people new modes of expression and new languages for their creative impulses.
- Since requiring AI in my classes, I no longer see badly written work at all. AI also makes it easier to operate in a second language.
- Everyone is going to use ‘The Button’ to get rid of the tyranny of the blank page.
- The implications of having AI write our first drafts are huge - could we lose our creativity and originality? Could we reduce the quality and depth of our thinking and reasoning?
- Could we soon face a crisis of meaning in creative work of all kinds?
- Work that was boring to do but meaningful when completed by humans (like performance reviews) becomes easy to outsource - and the apparent quality actually increases.
- We are going to need to reconstruct meaning, in art and in the rituals of creative work.
Chapter 6: AI As a Coworker
- Studies conclude almost all of our jobs will overlap with the capabilities of AI. Surprisingly - AI overlaps most with the most highly compensated, highly creative, and highly educated work.
- Jobs are composed of bundles of tasks. Jobs fit into larger systems. Without considering systems and tasks, we can’t really understand the impact of AI on jobs.
- AI can take over some tasks, but getting rid of some tasks doesn’t mean the job disappears. Power tools didn’t eliminate carpenters but made them more efficient.
- The systems within which we operate play a crucial role in shaping our jobs as well.
Tasks and the Jagged Frontier
- A study showed consultants working with AI did significantly better than those working without it. The effect persisted through 118 different analyses. The AI-powered consultants were faster, and their work was considered more creative, better written, and more analytical than that of their peers.
- Most consultants were simply pasting in the questions they were asked, and getting very good answers.
- On a task that combined a tricky statistical issue and one with misleading data → Human consultants got the problem right 84% of the time without AI help, but those with AI did worse, getting it right only 60-70% of the time.
- Relying too much on AI can backfire. Fabrizio Dell’Acqua showed recruiters who used high-quality AI became lazy, careless, and less skilled in their own judgment. They missed out on some brilliant applicants and made worse decisions than recruiters who used low-quality AI or no AI at all. The powerful AI made it likelier that the recruiters fell asleep at the wheel and made big errors when it counted.
- It’s going to take time and experience for us to learn how to work with AI - hence the strategy of inviting AI to everything, which will help us understand the Jagged Frontier and how it maps to the tasks that make up our individual jobs.
- We want to be more efficient while doing less boring work, and to remain the human in the loop while also capturing the value of AI. Divide tasks into categories that are more or less suitable for AI disruption.
- When working with AI, it's crucial to evaluate what tasks AI performs well and what tasks require human involvement. Tasks can be divided into three categories:
- Just Me Tasks: These are tasks where AI is not useful or could compromise personal touch and creativity. Examples include making important decisions, or expressing values and principles. These tasks are fulfilling and meaningful for humans and may shift as AI evolves.
- Delegated Tasks: These are tasks assigned to AI, which are typically low importance, repetitive, or time-consuming but still require human oversight. Examples include summarising documents or managing simple finances. The goal is to save time while ensuring accuracy.
- Automated Tasks: These tasks are fully managed by AI without human intervention, typically because they are reliable and scalable. Examples include spam filtering and simple coding tasks, where errors can be quickly identified and corrected by the AI itself.
- Tasks we delegate to AI today may become fully automated in the future as performance improves. Similarly, some Just Me Tasks could move to the Centaur category if AI becomes skilled enough to collaborate fluidly rather than just assist.
Centaurs and Cyborgs
- Until AIs excel at a range of automated tasks, the most valuable way to use AI at work is by becoming a Centaur or Cyborg.
- Centaur work has a clear division between person and machine, strategically switching between AI and human tasks based on their strengths.
- Centaurs handle tasks they are best at and pass others to AI.
- Cyborgs deeply integrate machine and human efforts, moving back and forth, intertwining their tasks with AI.
- When stuck on writing a paragraph, the author engages AI with targeted prompts to get unstuck.
- Using AI as a co-intelligence is where AI is the most valuable. Figure out a way to do this yourself.
- Follow the first principle (invite AI to everything)
- Learn the shape of the Jagged Frontier in your work (knowing what the AI can do and what it can’t)
- Start working like a Centaur. Give the tasks that you hate but can easily check (like writing meaningless reports or low-priority emails) to the AI and see whether it improves your life.
- Transition into Cyborg usage once you find AI becomes indispensable in overcoming small challenges (this is when you’ve reached co-intelligence)
- Some Cyborgs and Centaurs stay secret because they don’t want to get in trouble. Organisations should incentivise AI users to come forward, and encourage more to adopt AI.
- Some believe much of the value of using AI today comes from other people not knowing you are using it.
- From Tasks to Systems: LLMs could transform work organisation and management. Our current work systems are historical artefacts shaped by technological and social conditions.
- AI could act as a co-intelligence, helping managers organise work more efficiently.
- There's potential for AI to create a comprehensive monitoring system, tracking worker activities, setting goals, and evaluating performance.
- AI could provide personalised feedback and coaching to improve worker skills and productivity.
- LLMs might help identify and eliminate tedious tasks, potentially improving the human experience of work.
- It’s likely we’ll start AI integration with boring, repetitive tasks, similar to previous waves of automation.
- From Systems to Jobs: AI might impact job roles and the broader employment landscape:. AI is likely to take over certain human tasks, potentially allowing humans to focus on higher-value work.
- From Systems to Jobs: AI will reshape job roles and the broader employment landscape. AI is likely to take over certain human tasks, potentially allowing humans to focus on higher-value work.
- While some industries may change rapidly, economists generally expect little overall effect on jobs in the near term.
- AI adoption is, however, happening more quickly and broadly than previous technological waves, making its impact harder to predict.
- AI could act as a "great leveler," significantly improving the performance of less skilled workers and potentially reducing performance gaps.
- There's a possibility of mass unemployment or underemployment in the long term, which might necessitate policy solutions like shorter work weeks or universal basic income.
- While short-term effects might be limited, the long-term impact of AI on jobs could be more significant than we currently anticipate.
Chapter 7: AI As a Tutor
- The average student tutored one-to-one performs two standard deviations better than students educated in a conventional classroom environment. That is enough to place a student in roughly the 98th percentile.
- There is something unique and powerful about the interaction between a tutor and a student. A powerful, adaptable, and cheap personalised tutor is the holy grail of education.
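Bloom's "two sigma" figure maps to a percentile through the normal CDF; a minimal sketch, assuming normally distributed student outcomes (the distribution choice is an illustrative assumption, not from the book):

```python
from statistics import NormalDist

# A student two standard deviations (z = 2) above the mean sits at the
# cumulative probability of z = 2 on a standard normal distribution.
percentile = NormalDist().cdf(2.0) * 100
print(f"{percentile:.1f}th percentile")  # roughly 97.7th, i.e. ~98th
```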
- We’re at an inflection point where AI will reshape how we teach and learn.
- LLMs have killed many homework assignments (of the read-and-summarise type). Even prior to LLMs, 20,000 people in Kenya earned a living writing essays full-time for students.
- Essays are ubiquitous in education - but easy for LLMs. There is no reliable way to detect whether a piece of text is AI-generated. A couple of rounds of prompting remove the ability of any detection system to identify AI writing. Unless assignments are done with pen and paper, there’s no way to detect whether work is human-created.
- We need to rethink education
- Teachers will have to think about what AI use is acceptable. Where is the line? Writing an outline for an essay? Helping with a sentence that someone is stuck on? Asking for references? Getting AI to explain a topic to you?
- Students will want to understand why they are doing assignments that seem obsolete thanks to AI. They will want to use AI as a learning companion, a coauthor, or a teammate. They will want to accomplish more than they did before, and will also want answers about what AI means for their future learning paths.
- Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically.
- AI provides the chance to generate new approaches to pedagogy that push students in ambitious ways.
- The new advice: Make what you are planning on doing ambitious to the point of impossible; you are going to be using AI - I won’t penalise you for failing if you are too ambitious.
- AI can tell you 10 ways your project could fail
- Ask 3 famous figures to criticise your plan (Steve Jobs, Jack Ma, Julius Caesar)
- We won’t need prompt engineering degrees - writing good prompts is easy - prompting is not going to be that important for that much longer.
- Default responses are generic; break the pattern by providing context and constraints, and you’ll get much more useful and interesting outputs:
- It helps to give AI explicit instructions that go step by step through what you want.
- Chain-of-thought prompting → make it clear how you want it to reason, before you make your request.
- Or provide step-by-step instructions that build on each other, making it easier to check the output of each step.
- Even saying ‘Take a deep breath and work on this problem step by step!’ can help.
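The chain-of-thought pattern above can be sketched as a reusable prompt template; the function name and wording here are illustrative assumptions, not from the book:

```python
def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought style prompt: tell the model
    how to reason, step by step, before making the actual request."""
    return (
        "You are a careful analyst. Before answering, reason step by step:\n"
        "1. Restate the problem in your own words.\n"
        "2. List the facts and constraints you are given.\n"
        "3. Work through the logic one step at a time.\n"
        "4. Only then state your final answer.\n\n"
        f"Question: {question}"
    )

print(chain_of_thought_prompt("Should we flip this classroom?"))
```

The reasoning instructions come before the question, so the model commits to a step-by-step structure before it sees the request itself.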
- Teach students to be the humans in the loop: bringing their own expertise to bear on problems.
- School will continue to add value, even with excellent AI tutors. Classrooms provide opportunities to practice learned skills, collaborate on problem-solving, socialise, and receive support from instructors.
- AI can assist teachers to prepare more engaging, organised lectures and make the traditional passive lecture far more active.
- The lecture is in danger. Lectures are too passive and don’t engage us in active problem-solving or critical thinking. They are one-size-fits-all - they don’t account for individual differences and abilities, leading to some becoming bored and some falling behind.
- Active learning reduces the emphasis on lectures by involving students in activities like problem-solving, group work, and hands-on exercises. While effective, it requires effort to develop strategies and initial instruction is still necessary. How can active and passive learning coexist?
- Flipped classrooms: students learn new concepts at home (through videos, and AI tutors) then apply what they’ve learned in the classroom through collaborative activities, discussions, or problem-solving exercises. Maximises classroom time for active learning and critical thinking, while using at-home learning for content delivery.
- AI systems can help teachers generate better learning experiences and make classes more interesting.
- AI tutors could provide personalised learning at scale, tailoring instructions to each student’s unique needs while continually adjusting content based on performance.
- Students engage with content at home effectively → come to class prepared and ready to dive into hands-on activities or discussions.
- Teachers devote more time to fostering meaningful interactions with their students during class.
- AI tutors let teachers know where students might need extra support - enabling teachers to provide more personalised and effective instruction.
- This change is likely to be worldwide. Education is the key to increasing incomes and even intelligence.
Chapter 8: AI As a Coach
- Amateurs become experts by learning from more experienced experts in a field - who create a safe space for them to fail and learn.
- In a future with AI, more experienced people might favour working with AI over bringing on an apprentice. This could create a major training gap.
- As experts become the only people who can effectively check the work of capable AIs → we are in danger of stopping the pipeline that creates experts.
- The way to be useful in the world of AI is to have high levels of expertise as a human.
- Given LLMs seem to have mastered a lot of collective human knowledge - you might think teaching basic facts has become obsolete, but the exact opposite is true.
- Foundational skills and facts are tedious to learn but aren’t obsolete because the path to expertise still requires a grounding in facts.
- In order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI’s output, we still need subject matter expertise.
- In a world where AI augments our work - the more we need to maintain and nurture human expertise. We need expert humans in the loop.
- Experts become experts through deliberate practice, which requires serious engagement and a continual ratcheting up of difficulty. It also requires a coach, teacher, or mentor who can provide feedback and careful instruction, and push the learner outside their comfort zone.
- AIs could become great coaches, creating a better training system than we have today. They could create plans and provide continuous feedback and mentorship.
- Architect AIs provide instantaneous feedback: they highlight structural inefficiencies, suggest improvements based on sustainable materials, and even predict potential costs. The AI is akin to a mentor watching over the architect’s shoulder at every step, nudging them toward excellence.
- The difference in growth trajectories will become evident.
- We’ll be able to use AI to increase the quality and volume of our deliberate practice. We’ll have consistent, rapid feedback loops, combined with targeted suggestions for improvement.
- Talent will likely still matter. Practice accounts for only some of the edge of elite athletes.
- Programmers in the top quartile can outperform those in the bottom quartile by up to 27x on some tasks.
- Even top workers have weaknesses, requiring them to be part of larger organisations. AI levels the playing field, making previously less skilled workers more competent.
- Humans working with AI outperform almost everyone except the best humans working alone.
- Some predict AI will reduce the need for engineers by 80%, and that high school graduates could replace college graduates in some roles.
- AI won't kill expertise. Jobs involve complex tasks needing human judgment. AI will improve performance in some areas, allowing workers to focus on their expertise.
- An AI future requires us to build our expertise. Students still need to learn basic skills like reading, writing, and history. If AI doesn't change drastically, it will likely become our co-intelligence, filling our knowledge gaps and helping us improve.
Chapter 9: AI As Our Future
- At what rate will AI progress from here? There are a few scenarios:
- Scenario 1: As Good as It Gets
- This scenario is technically unrealistic, as there's no natural limit to AI improvement. A global ban or halting development through regulation is unlikely. Even if AI progress stopped now, its implications are significant. It's already impossible to distinguish AI-generated images from real ones, leading to increased distrust in information sources and more insular information bubbles. There might be a resurgence of trust in mainstream media as arbiters of truth. In the workplace, AI would likely complement human efforts, improving performance and relieving tedious tasks.
- Scenario 2: Slow Growth
- AI growth slows to 10-20% per year due to factors like training costs, regulations, or technical limits. This slower pace allows time to develop usage rules and identity verification systems. AI-generated personas become common in games and personalized media, while AI therapists and chatbots normalize in business settings. Work transforms gradually. AI could boost scientific innovation by addressing the "burden of knowledge" in research. Overall, this scenario presents mixed but largely positive results, with humans remaining in control of AI development and application.
- Scenario 3: Exponential Growth
- AI becomes hundreds of times more capable within a decade. This rapid growth brings increased risks of AI hacking, influence campaigns, and the potential development of dangerous pathogens or chemicals. Society might need AI filtering systems to combat misinformation, risking further information bubbles. There's a danger of "AI-tocracy" with increased surveillance. Socially, new forms of isolation may emerge as AI-powered entertainment and assistants become prevalent. This scenario might necessitate significant policy changes like shortened workweeks or universal basic income to manage the societal impact.
- Scenario 4: The Machine God
- AI reaches and surpasses human-level intelligence, potentially leading to an end of human dominance. The implications are profound and uncertain: AI might solve human problems and improve our lives dramatically, or it might view humanity as irrelevant. This scenario represents both the greatest potential benefits and the most severe risks of AI development.
- Regardless of which scenario unfolds, it's crucial to focus on the more likely outcomes where AI remains under human control. We should consider both the benefits and risks of AI across various domains, prepare for societal and work-related changes, and address potential small-scale catastrophes rather than fixating on one big AI apocalypse. The decisions we make now about AI implementation and regulation will shape its impact on our future.