DeepMind’s Psychlab Heralds Dawn of Artificial General Intelligence
Conversations about AI tend to oscillate between the wildly optimistic and the obnoxiously dystopian. Some pundits will argue we’re on the verge of a new renaissance, in which self-driving cars will spirit us between locations while robotic housekeepers do our bidding. Others foresee a Terminator-like apocalypse. As is often the case, the truth lies somewhere between these extremes. However, no one should be under the illusion that artificial general intelligence (AGI), meaning an AI that can learn the same tasks a human can, is a vague and distant reality. DeepMind has put any doubt to rest with its recent release of Psychlab, a toolkit for assessing artificial intelligence with the same psychological tools we use for assessing human cognitive abilities.
DeepMind is the company behind the algorithm that defeated Lee Sedol in Go. The company has pioneered work on “reinforcement learning” algorithms, which utilize the same general-purpose recipes that underpin much of human and animal learning. In the paper’s introduction, the authors catalog the various accomplishments chalked up by state-of-the-art deep reinforcement learning, which include “navigating 3D virtual worlds viewed from their own egocentric perspective, binding and using information in short term memory, playing ‘laser tag,’ foraging in naturalistic outdoor environments with trees, shrubbery, and undulating hills and valleys and even responding correctly to natural language commands.”
These are all activities humans and our primate cousins engage in, and if this doesn’t read like a catalog of the qualities belonging to an artificial general intelligence, then I don’t know what does. To see a reinforcement learning algorithm in action, check out this YouTube video demonstrating an AI succeeding at the same kind of learning task psychologists use to assess the cognitive skills of rats, primates, and other animals with general intelligence.
Some pundits may refuse to read the writing on the wall regarding AGI because single-purpose, supervised learning algorithms (which possessed no generalizable skills) previously accomplished these tasks. These were the equivalent of one-trick ponies. This is not the case with state-of-the-art deep reinforcement learning – a single algorithm can learn a wide variety of skills, just as a single human can. The authors of the Psychlab paper believe these deep reinforcement learning algorithms can be measured with the same tests we use to measure ourselves and other creatures possessing generalized intelligence: visual search, change detection, random dot motion discrimination, and multiple object tracking.
While DeepMind has generally been coy regarding the similarity of its work to artificial general intelligence, even playing directly into the hands of naysayers, it will be harder and harder to hide the obvious: A single algorithm can now learn many of the same tasks humans can, and much better in some cases.
Some of the remaining performance gap between ourselves and such algorithms will likely be closed by improved hardware – sensors and control systems that will give these algorithms the physical degrees of freedom and sensing power we possess. But these are problems with tractable engineering solutions. Therefore, we shouldn’t be surprised when robots begin accomplishing jobs previously performed by humans in the workforce.
Whether such AIs will rise up against their human overlords in some epic battle for supremacy remains much in doubt — after all, the reward function used in such reinforcement learning algorithms is not some open-ended mystery, but rather explicitly given by the programmers. But we shouldn’t kid ourselves that the advent of artificial general intelligence with a skill set similar to our own is decades away. It is a reality that is already upon us.
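The point about the reward function can be made concrete with a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning methods. The toy five-state “corridor” environment, the reward values, and all parameter choices below are illustrative assumptions, not DeepMind’s actual code; what matters is that the reward function is a few lines the programmer writes explicitly, not something the agent invents for itself.

```python
# Minimal tabular Q-learning sketch on a hypothetical 5-state corridor.
# The agent starts at state 0 and the episode ends at state 4.
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right

def reward(next_state):
    """Explicitly programmer-defined: +1 only for reaching the goal state."""
    return 1.0 if next_state == N_STATES - 1 else 0.0

def step(state, action):
    """Environment dynamics: move, clipped to the corridor's ends."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    return next_state, reward(next_state), next_state == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, r, done = step(state, action)
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted best value of the next state.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# After training, the greedy policy should move right in every
# non-terminal state, heading for the rewarded goal.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent’s entire motivation lives in `reward`: change that function and the learned behavior changes with it, which is why the reward signal in such systems is transparent to its designers rather than an open-ended mystery.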