Book review | Life 3.0 – Being Human in the Age of Artificial Intelligence by Max Tegmark, Knopf (2017).
If there’s one thing poised to change health care in the near future, it’s artificial intelligence (AI). Yet even at this inflection point, our understanding of its boundaries and potential remains limited. Among physicians, for example, we can’t seem to get beyond the analog preoccupation with doctors losing their jobs to robots.
MIT’s Max Tegmark offers a remedy with his new book, Life 3.0 – Being Human in the Age of Artificial Intelligence. I highly recommend it, whether you’re excited or terrified about a future with machines.
Life 3.0 delivers a comprehensive overview of AI for the general reader. The chapters on AI’s impact on government, society, global politics and the law are well constructed. Tegmark defines the core issues surrounding AI and takes on the myths that dominate public conversation on the subject. The book is slim in the health care department, but it is thoughtfully composed enough that the introduction will let you extrapolate to whatever discipline you happen to work in. Life 3.0 finishes with a brilliant overview of consciousness as it relates to machines and people. (It makes me wish I had paid more attention in philosophy.)
A few takeaways for me:
Life 3.0 is about to get real
Our 20th-century lens makes it hard to conceive of a machine actually thinking beyond simple calculations. But getting in and around this amazing book and working through its scenarios makes clear how real and life-altering the future of AI is. Even if not in the details, Life 3.0 gives us a clear sense of what is just ahead of us as humans. Quite literally.
Safety is a major preoccupation
What was most eye-opening for me was the evolving organization happening around the safety of intelligent machines. The top minds in AI and machine learning have come together to begin a discussion and offer guidance on its safe development and application. Check out the Asilomar AI Principles (and the endorsements at the bottom of the page). The link in the previous sentence is to Tegmark’s Future of Life Institute, an organization founded to foster discussion around issues such as artificial intelligence and life in the machine age.
Elon Musk has referred to AI as “potentially more dangerous than nukes,” and Stephen Hawking has suggested AI “would be the biggest event in human history. Unfortunately, it might also be the last.” So it’s clear that AI safety is a concern for even the most forward-thinking leaders in the field.
Tegmark suggests,
…the real risk with artificial general intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
What will happen vs. what should happen
So this alignment of goals, as described by Tegmark, means we have to know what we want from what we create.
Eric Schmidt, in The New Digital Age, said,
For all the possibilities that communication technologies represent, their use for good or ill depends solely on people. Forget all the talk about machines taking over. What happens in the future is up to us.
(And all this time I thought we had to take what was created.)
We need to plan for our future with AI. The goals of the technology we create need to be aligned with our own, or it will evolve goals of its own.
The evolving place of the human
As interesting as the evolution of machines may be, more interesting is the reshaping of what we do and what it means to be human. In a strange way, it is machines that will truly help us understand what it is to be human.
How will near-term AI progress change what it means to be human? We’ve seen that it’s getting progressively harder to argue that AI completely lacks goals, breadth, intuition, creativity or language—traits that many feel are central to being human. This means that even in the near term, long before any AGI can match us at all tasks, AI might have a dramatic impact on how we view ourselves, on what we can do when complemented by AI and on what we can earn money doing when competing against AI.
Most of my peers have strong opinions on artificial intelligence, and most come squarely from a place of misunderstanding. I know, because I’m there. As health professionals charged with creating a future for ourselves and our patients, we have a responsibility to understand how we can and should work with AI.
Tegmark’s Life 3.0 is a great starting point.
Links to Amazon are affiliate links.