AlphaGo, an Artificial Intelligence (AI) system developed by Google’s DeepMind subsidiary, has now won four of the five games of Go it has played against the current world champion.
As impressive as that is – and it is truly impressive – I’m less interested in the triumph of AI at these games than in how AI performs at the very things that people (and most insects, and a great many fish and birds) find easy: being social and useful. There is no reason to believe that boardgames such as Go and chess are universally difficult; it is only that humans find them hard. Machines are not human. The true potential of AI lies in being able to look at the world through different eyes, not in being able to out-think a single human.
Boardgames like Go, which is mathematically harder than chess, serve as the AI developer’s equivalent of the psychologist’s lab: microcosms in which to test theories about how the human mind functions. AlphaGo wasn’t created to play Go; it learned to play it. AlphaGo is a general-purpose AI algorithm, a system of machine learning inspired by the interconnectivity of neurons in our brains that tries to mimic human learning. By comparison, IBM’s Deep Blue, which beat chess world champion Garry Kasparov in 1997, was specifically designed to win at chess. Deep Blue could only play chess; AlphaGo can be taught to do other things.
There is a degree of short-sighted human chauvinism in wanting to build a better human-like brain – the same chauvinism that continues to drive the general perception of human intelligence as the pinnacle of all intelligence. Advances in AI appear to be making us fearful that we are on the verge of building machines more intelligent than ourselves. The reality, however, is that the harder we look, the more we discover organisms possessing specialised intelligence superior to our own generalised capabilities – not creatures whose complex, learnt behaviours can be explained away as stimulus-response associations, but creatures with genuine cognitive intelligence, such as honeybees.
It is specialised intelligence that promises to unlock the potential of the huge volumes of economic, health, environmental and other data that we are collecting en masse: solitary intelligences, ceaselessly toiling for a singular purpose. In contrast to AlphaGo’s single-mindedness, Lee Se-dol, AlphaGo’s human opponent, isn’t playing in isolation; self-efficacy plays a powerful role.
Go players belong to a community with traditions and etiquette similar to those of the martial arts. Mr Lee, who turned professional in 1996, holds a ranking of 9 dan, the highest professional rank; a dan is equivalent to a martial arts black belt. To promote learning, Go has a system of kyu (student) ranks and dan (master) ranks, which work in a similar way to golf handicaps.
The complex social system that has developed around the game over the last two millennia places meaning not just on a player’s win–loss record but also on the opponents against whom they played and the teacher from whom they learnt the game. For Mr Lee there are potential social and economic consequences to playing against an unranked opponent. Losing to AlphaGo has the potential to affect not just him personally but also his family, his protégés and his rivals.
As humans we are genetically predisposed to conform. We need to trust and to fit in. Being social requires conformity: a willingness to acquiesce, not just to those whom we perceive to be more knowledgeable than ourselves but also to those whom we perceive as having a higher social standing; an ability to see that one’s own selfish needs may be best served by serving the needs of the group. Doing so requires an understanding of the concepts of self and of belonging to a group.
AlphaGo is cognisant of none of these concepts – behaviours that served us well on the ancient savannah but are increasingly ill-suited to a fast-moving world of competing, often contradictory ideas. A world in which the real and the virtual blur into one. A world in which the impact of a decision can be fast, far-reaching and potentially irreversible. A world in which, in a few short decades, machines may be teaching us what it means to be human.