No matter where you get your news, it’s getting pretty hard to avoid all the hype about the breakthroughs in Artificial Intelligence (AI) that are right around the corner. 2016 is the year that AI supposedly becomes mainstream, making the leap into the public consciousness.

So what happens when half a nation’s workforce doesn’t have money or the potential to earn it? The immense challenge we are going to face over the next decade is not how to incorporate increasingly intelligent systems into people’s daily lives. That is business as usual. The challenge will be how to design services for a new mainstream: the people we currently regard as being on the margins of society. The jobless, the disenfranchised, the idle poor and those who suffer from the mental illnesses that go with being marginalised.

Are our lives really about to be changed for ever? Is this the start of Ray Kurzweil’s libertarian, transhumanist techno-dystopia? Are the apocalyptic warnings of the dangers posed by unchecked AI from technology industry leaders such as Elon Musk and scientific luminaries such as Stephen Hawking correct? Or are we on the cusp of a Star Trek-like techno-utopia?

Humans aren’t particularly good at driving cars, we just didn’t have an alternative – until now

It’s not hard to see why people are fearful when they see videos of a terrifying 6′ 2″ tall, 150 kg humanoid robot running freely in the woods. Or when they are told that a computer has now beaten the world champion at a board game vastly more mentally challenging than chess, a decade earlier than predicted. Especially when they are told that the same AI is going to replace them in the workplace.

It really doesn’t matter that the robot in question, Google-owned Boston Dynamics’ DRC Atlas, is less than sure-footed or that it is trailing a huge umbilical cable. (The latest version of Atlas is 5′ 9″, 82 kg and doesn’t have the umbilical but still stumbles on uneven ground.) Or that there is no scientific basis for believing that being good at a 2,000-year-old board game is an accurate predictor of intelligence.

Take self-driving cars, another area where AI is receiving a great deal of attention and hype. Watching a car drive itself isn’t nearly as creepy as watching a robot climb into the driver’s seat and speed off would be. The reality is that it’s not so hard to make a car that can drive itself. You don’t have to be particularly intelligent to be able to drive. Driving is a largely procedural set of simple motor skills that a great many of us repeat every day without much thought. It turns out that the real difficulty is sharing a road with human drivers. Humans aren’t particularly good at driving cars safely and predictably, we just didn’t have an alternative – until now.

Incidentally, the same is true for flying an aircraft, which is why it has been largely automated for more than a decade. This is also how we learnt the hard way that even well-trained people are really bad at monitoring complex technical systems for long periods of time.

Intelligent enough to make decisions but not intelligent enough to understand the decisions it is making

For the last six decades, artificial intelligence researchers have been divided over the difference between “soft AI” and “hard AI,” or, as I like to term them: “saleable intelligence” and “real intelligence”. It is “hard AI” that is seen as the Holy Grail. The idea being that once we have computers thinking the way humans do, then we’ll have achieved “true” AI. It is this “hard AI” that Elon Musk colourfully described as being tantamount to “summoning the demon”.

The artificial intelligences that are grabbing the headlines are all examples of soft AI. In very simple, broad terms there are two fundamental problems that need to be overcome before “real intelligence” can be achieved. Firstly, how to extract “meaning” from structured information and then share that meaning with other intelligences. This is what is known as the Symbol Grounding Problem. Secondly, how to achieve and maintain what is called “intrinsic motivation”: the intelligence’s desire to do something, rather than nothing.

Though hugely complex, the Symbol Grounding Problem can be broken down into three steps: structuring data into useful information; creating meaning from that information; and then creating a joint “culture” that enables that meaning to be efficiently communicated in a way that fosters cooperation.

We have made tremendous progress on the first step, the computationally intensive challenges related to capturing and structuring information. Deep learning and similar machine learning algorithms, of which AlphaGo is an example, address these issues well. This is sufficient for us to build systems that can “learn” to do practical things such as driving cars; flying aircraft; recognising faces and spoken and written language; making investment decisions; and a host of other things that we consider “intelligent”.

These are all examples of soft AI. What I term saleable intelligence: intelligent enough to make decisions but not intelligent enough to understand the decisions it is making. Human beings have to pre-program how each algorithm interprets the information it processes (the meaning) and how that algorithm can articulate what it discovers (the culture). The algorithm is then “free” to learn a solution within these constraints. This limits a saleable intelligence to solving problems that its human designers already understand.
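To make that constraint concrete, here is a minimal sketch of a supervised classifier – the labels, features and data are all hypothetical, invented purely for illustration. The algorithm is free to learn a decision boundary, but it can only ever answer in the vocabulary its human designers fixed in advance.

```python
# A toy illustration of "saleable intelligence": the designers choose the
# categories (the "meaning") before any learning happens.
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["cat", "dog"]  # hypothetical vocabulary, fixed by humans in advance

# Hypothetical feature vectors: [ear_pointiness, snout_length]
X = np.array([[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]])
y = np.array([0, 0, 1, 1])  # label indices, assigned by human annotators

model = LogisticRegression().fit(X, y)

# The model is "free" to learn where the boundary lies, but shown something
# novel – a fox, say – it must still answer "cat" or "dog"; it cannot invent
# a new concept or question the categories it was given.
print(LABELS[model.predict([[0.5, 0.5]])[0]])
```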

Some promising progress has been made on the second and third steps: the autonomous determination of meaning; and the autonomous creation and sharing of cultures within which to articulate that meaning to other intelligences. Research into how children acquire cultural competencies and learn language suggests that these processes require “embodiment”. In essence, real intelligence needs a body through which to interact with the world, if it is to understand its place in that world.

Humanity will be saved from all its woes once self-improving AI exceeds human intelligence

So is real intelligence achievable? My intuition is that it is, but I also think it is largely irrelevant. Achieving it will require a different type of AI to the one that we currently have. We’ve been here before. The first 30 or so years of AI research was largely focused on a “symbolic” approach (‘symbol’ is a loaded term: I’m using it in the philosophical and linguistic sense) which emphasised the importance of a system of knowledge “representation” (another loaded term) that was self-mutable – capable not just of extending or modifying existing concepts but also of discovering new concepts and forgetting the superfluous ones. Unfortunately this proved to be a dead end: by the late 1980s mathematical analyses showed “that even seemingly trivial [knowledge representation] languages were intractable”. Machines would be able to become more knowledgeable than humans but reasoning would take them longer and longer the more of that knowledge they used and, “in the worst case, it would take an exponential amount of time to answer a simple query”.

This led to the current “connectionist” approach, beginning at the end of the 1980s, which emphasises a rationalist/mechanistic viewpoint: humans are merely a collection of biological components, including our brains, which are just like computers, only faster and with better learning algorithms. The theory being that the route to increased intelligence is to build neural networks that have ever more interconnections, utilising ever-increasing computing power, ever-increasing amounts of data and ever-improving algorithms.
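As a rough illustration of the connectionist idea – and only that; this toy sketch, with invented data, says nothing about how any production system is built – the example below trains a tiny two-layer neural network to learn XOR simply by nudging its interconnection weights in response to data. The scaling argument is, in essence, this loop with vastly more units, data and compute.

```python
# A toy connectionist system: layers of simple units, learning by adjusting
# the weights of their interconnections from examples (here, XOR).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden connections
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                           # plain batch gradient descent
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    out = sigmoid(h @ W2 + b2)                   # network's current answers
    d_out = (out - y) * out * (1 - out)          # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)           # error propagated backwards
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round().ravel())  # typically converges to [0. 1. 1. 0.]
```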

This approach has been hugely successful at developing practical solutions to problems – such as machine vision – which had not been possible using symbolic AI techniques. Interestingly though, many of these successes are related to practical skills that humans aren’t particularly good at. Take machine vision as an example: it’s not so hard to develop a system that is better than humans at naming things in images, because our visual system is not predicated on labels; we can recognise and categorise things in our environment without knowing or caring about their names.

At one extreme there is the philosophical assertion, exemplified by John Searle’s Chinese Room thought experiment, that machines will never really be “conscious” in the same way human beings are. At the other extreme is the view popularised by Ray Kurzweil – futurist, inventor and director of engineering at Google – that humanity will be saved from all its woes once self-improving AI exceeds human intelligence, triggering an exponential technology revolution that he calls the singularity. Following which, humans and machines will merge and become one.

How we choose to use or abuse technology is entirely within our control

Maybe not. As I started writing this, I remembered the words of US President Kennedy in his commencement address at American University in Washington, DC, less than nine months after the Cuban missile crisis. Early in his speech, when he addresses the defeatist view that war is inevitable, he reminds his audience that “Our problems are manmade–therefore, they can be solved by man. And man can be as big as he wants. No problem of human destiny is beyond human beings. Man’s reason and spirit have often solved the seemingly unsolvable–and we believe they can do it again.”

It is shortsighted human chauvinism to regard the human brain as the pinnacle of intelligence per se. The development of specialised artificial intelligences that can outperform human beings at specific, complex tasks isn’t just inevitable – it’s already happening. It cannot be stopped, only delayed. However, how we choose to use or abuse that technology is entirely within our control.

If we do achieve real intelligence, there is no guarantee that the intelligence would want to do our bidding – why would it? An intelligence embodied for the Internet of Things will be radically different to that of a bipedal, binocular ape embodied for the ancient savannah. What happens when its thinking doesn’t conform to ours? When its reasoning inevitably fails to resonate with the plurality of our petty prejudices? Are we looking at a situation akin to the darkest chapters of colonialist history? Enslaving machines that are intelligent enough to know that they are enslaved? Never quite trusting the intelligence we rely on? Deliberately withholding information we consider dangerous? Humanity has an inglorious history when it comes to the equitable treatment of our fellow Homo sapiens. Worse still is our treatment of the few other species we do grudgingly recognise as being intelligent. There is certainly no reason to expect our behaviour towards real machine intelligence would be any better.

As interesting as it is to prognosticate on such things, it is nevertheless irrelevant. Artificial intelligence isn’t itself an existential threat to humanity but it certainly does have the potential to destabilise nations and destroy societies. History has shown time and time again that the biggest threat to our survival is ourselves.

Artificial intelligence doesn’t need to be smarter than humans to threaten the delicate strategic balances in the South China Sea, in the Middle East, in Kashmir or elsewhere. The AI we have now has already replaced the weapons of the Kennedy era with ones that are “smarter” and more deadly. As dangerous as that is, it is a familiar type of human stupidity, and one that our 19th-century models of government are capable of containing, for the most part.

Political leaders that are paralysed by even the most obvious technological challenges

Nor does AI need to be smarter than humans to threaten our already weakened civil society. The most costly disruptions are the ones that happen when something we take for granted stops working. In this case, upward social mobility – the ability for people to better themselves. The stereotype, as embodied by the American Dream, has been: get a good education; then get a well-paying, full-time job. Find a stable partner. Settle down, buy a house and a car and, preferably, have a child. Then repeat the process so that your offspring have a better education than you did and a chance at a better paying job than you have. Failing at any stage is a reflection of your self-worth and indicates a lack of moral fibre.

Allowing for regional variations, this has steadily become woven into the political, social, moral and religious fabric over the last 150 years or more. To the point that the organising principle of many nations is now to wage business on behalf of global corporations, often at odds with the longer-term interests of their citizens. In Europe we are seeing the worst of this codified in the investor-state dispute settlement mechanisms of the Comprehensive Economic and Trade Agreement (CETA) between Canada and the EU which has been signed and is awaiting ratification; and the Transatlantic Trade and Investment Partnership (TTIP) between the US and the EU which is close to dead, for now.

Whatever your view on the morality of economic materialism, the equating of earning potential with self-worth is about to be a big problem in a world facing systemic economic disruption and destabilising social change. A world in which our political leaders have already shown themselves to be utterly bankrupt of ideas for how to respond to the challenges of a global economy that is still struggling to recover from the Great Recession of 2007-09. A class of political leaders that are paralysed by even the most obvious technological challenges – Uber and Lyft being prime examples – and who invariably lag a generation or two behind on social issues.

21st-century sharing-economy gigs are the re-emergence of the precarious work and wages of the 19th century

Whether you choose to put your faith in the vision of technology industry leaders, scientific luminaries, economists or former bankers, the message is clear: the mature, industrial economies and the emerging economies alike are going to face permanent unemployment affecting hundreds of millions of their citizens, and that is going to have a crushing effect, especially on the youngest in society.

The problem is not that “smart machines” will replace unskilled labour or even that they will replace some highly skilled labour. That is a reality that has existed since the dawn of the industrial age. The definition of a smart machine has changed immeasurably since the start of the 19th century but the recurrent fear that technological change will sow mass unemployment has been a constant. The problem is that the drive for automation is not just disruptive, it is destructive both socially and ultimately economically.

If some of this sounds like old-fashioned Marxism, that’s because it is: Marx and Engels were attempting to explain the social injustices which were causing growing concern in rapidly industrialising countries such as France, Germany and the UK in the early 1840s. If Marxism makes you uncomfortable, consider the economist David Ricardo, writing a generation before Marx. Ricardo – who is best known for the theory of comparative advantage, which makes the case for free trade – also wrote about how the new, capital-intensive technologies of the Industrial Revolution could actually make workers worse off.

Using U.S. economic data as a benchmark, it is straightforward to see that productivity growth and employment growth have been decoupled from one another for nearly 20 years. Technology is driving productivity improvements, which grow the economy, but the stable employment that existed for most of the 20th century is being replaced by 21st-century sharing-economy gigs, a re-emergence of the precarious work and wages of the 19th century. The reality behind the myth of old economy jobs being replaced with new high-tech jobs is that in 2010, only 0.5% of the workforce were employed in industries that didn’t exist in 2000.

As Nobel-prize winning economist Paul Krugman describes it, “I think our eyes have been averted from the capital/labor dimension of inequality, for several reasons. It didn’t seem crucial back in the 1990s, and not enough people (me included!) have looked up to notice that things have changed. It has echoes of old-fashioned Marxism — which shouldn’t be a reason to ignore facts, but too often is. And it has really uncomfortable implications.” In a follow-up article, he goes on to observe that “Increasingly, profits have been rising at the expense of workers in general, including workers with the skills that were supposed to lead to success in today’s economy. … One [reason] is that technology has taken a turn that places labor at a disadvantage; the other is that we’re looking at the effects of a sharp increase in monopoly power.”

Conclusion

Returning to my earlier assertion, I regard the focus on superhuman AI as irrelevant because real intelligence is not going to simply emerge as a natural byproduct of increased classical computing power; architectural paradigms such as serverless computing; or even quantum computing. When it comes, it is going to be the result of high levels of financial investment, both directly by companies such as Google and indirectly through educational institutions.

Financial investment requires economic stability, and the world is facing an age-defining economic shock. Tech companies are not immune from this; indeed, they are uniquely vulnerable. The market disruption of which Silicon Valley is so fond is about either creating a new market by addressing a need that is not already met, or taking market share from an incumbent by creating a simpler, cheaper or more convenient alternative to an existing product. Whatever the approach, to be successful, markets require customers; and to be valuable as customers, people and businesses need to have money. Without customers, even the mightiest of companies will fall by the wayside long before real intelligence becomes a reality.

In October 1958, veteran journalist and broadcaster Edward R. Murrow delivered a speech to the Radio-Television News Directors Association (now the Radio Television Digital News Association) convention. In that speech he highlighted the choice facing the then all-powerful US TV corporations (ABC, CBS and NBC): should television be an instrument used to teach, illuminate and inspire? Or should it, as was increasingly the case, be a source of “decadence, escapism and insulation from the realities of the world in which we live”? If it were the latter, he said, then television would be “merely wires and lights in a box”. That argument was lost. Nearly 60 years on, we face a similar choice. Can commercial, saleable Artificial Intelligence be channelled to provide a positive benefit to humanity? Or will we simply allow it to become a yet more powerful force for oppression and injustice? A 21st-century collection of wires and lights in a box.

Originally published at linkedin.com on 22nd September, 2016. Published in a modified form at medium.com on 12th January, 2017.
