A bit of a jest, but I couldn’t help thinking of these movies during a recent presentation by Greg Cross from New Zealand AI start-up Soul Machines. Seeing a totally human-looking digital AI responding verbally and expressively in conversation, I felt propelled into a future world.
Proposed uses for what Soul Machines call Digital Humans, or Digital Employees, include helpdesk and call centre staffing, Personal Bankers, Digital Celebrities – as online salespeople – and, within five years or so, the ability to create digital versions of ourselves, potentially spawning a digital population in the hundreds of millions.
Unsettling? Greg assures us that this will “better humanity” and that AIs will “learn from our human-like qualities and want to be like us”. Perhaps a little ironic, then, that one of the first of these Digital Employees, Ava – sold to Autodesk – has been given the name of the alluring android in Ex Machina, who leaves Caleb trapped, in all likelihood to die, in the highly secure house in which most of the movie takes place. What was poignant about that scene is that there was no actual malice in the android’s actions; it was just that the moment she no longer needed his help he ceased to be relevant to her, and all the apparent emotive connection she’d previously shown towards him vanished.
The gap between fiction and reality is closing fast. Ex Machina was released in 2015, and by 2017 we see Soul Machines’ early digital humans and, even closer to the Ex Machina androids, Sophia the robot, who, speaking on stage back in October, was granted citizenship by a country that still restricts the rights of human women. Umm, what do we Homo sapiens think of all this?
We’ll come back to that question but first, for some context, let’s review where AI and robotics are heading and what to realistically expect by when.
AI as a cool buzzword
As anyone who’s attended recent AI events, such as AI Happy Hour, knows, AI is a trending buzzword right now, packaging up things like:
- Pattern recognition
- Machine learning
- Natural Language Processing (NLP)
- Computer Vision (CV)
Effectively a range of modern software tools and approaches.
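To make that last point concrete, here’s a minimal sketch of what “pattern recognition” boils down to underneath the buzzword: a nearest-neighbour classifier that labels a new data point by finding the most similar labelled example. The data, labels and function name are invented purely for illustration, using only the Python standard library.

```python
# A toy illustration of pattern recognition: classify a new value
# by the label of its closest labelled example (nearest neighbour).

def nearest_neighbour(train, query):
    """Return the label of the training point whose feature value
    is closest to `query`."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

# Hypothetical training set: (feature value, label) pairs.
train = [(0.0, "cat"), (0.2, "cat"), (9.8, "dog"), (10.0, "dog")]

print(nearest_neighbour(train, 0.5))   # closest examples are "cat"
print(nearest_neighbour(train, 9.0))   # closest examples are "dog"
```

Real systems swap in richer features, distance measures and learned models, but the underlying idea – generalising from labelled examples – is the same.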
Narrow AI
It becomes AI proper when combinations of these technologies are brought together into ‘Narrow AI’ software applications such as:
- Driving a vehicle (Driverless cars)
- Providing helpdesk services (Chatbots or Digital Employees)
- Cognitive assistance analysing big data (e.g. IBM’s Watson)
- Machine translation
And much more coming our way fast.
The technology’s rapidly increasing capability is demonstrated by Narrow AIs that have already surpassed humans at things like Jeopardy, word recognition, cat recognition and many other tasks. However, by definition, Narrow AI is constrained to specific tasks, so it’s unlikely ever to go further than being a powerful set of tools for humans, with the biggest realistic concern being large-scale disruption of the employment market.
On the exponential curve
But where AI is now and where it’ll be in twenty years are two different realities. After decades of AI technology dawdling in the doldrums, increasing computing power and new AI techniques mean that AI is now an exponential technology.
So over the next ten to twenty years, we’ll almost certainly witness an explosion in the number of AI applications we interact with, and we’ll see many AIs far exceed traditionally human abilities, just as machines have for so long now vastly exceeded our physical strength.
The new wave of robots
For those of us who grew up reading books like I, Robot, real-life robots have long been disappointing creatures; often bodiless, one-armed mutants stuck on factory floors doing mindlessly repetitive tasks. But now advances in both hardware and software are giving birth to vastly more capable robots, including:
- General purpose robots for more varied and niche manufacturing or construction
- Planetary exploration robots (like the Curiosity rover)
- Sex robots
- Various scary prototypes of autonomous military robots
And of course robots designed to be human-like (androids), such as Sophia mentioned earlier, which could be used as carers, domestic help, workers, or for anything else humans currently do. While these may still seem stupid and clumsy, Ray Kurzweil, Google’s director of engineering, predicts that robots will reach human intelligence by 2029, meaning we could realistically expect to see convincing human replicants by then.
AI designing AI
One of the freakier AI scenarios is the idea that, if there were an AI that could design and create other AIs, this could give rise to runaway intelligence that would quickly be beyond our understanding, and potentially beyond our control. Google’s AutoML project, an AI that builds AIs, feels like a step in this direction, even if for now it’s limited to a fairly practical scope.
General AI
General AIs are those that are broadly intelligent and capable of interacting in flexible and expanding ways, rather than being restrained to a narrow task. It’s this form of AI that Elon Musk calls our biggest existential threat, and about which Stephen Hawking was speaking when he told the BBC that AI could spell the end of the human race.
To an extent, general AI already exists in projects like Google’s DeepMind, though as Alan Winfield of Bristol Robotics says at the end of this article on DeepMind, “it cannot yet generalise from one learned capability to another, something you and I were able to do effortlessly as children”. Other significant parties working towards General AI include GoodAI, which states its mission as being “to develop general artificial intelligence – as fast as possible – to help humanity and understand the universe”.
Well, I’m keen on the stated objectives, and AI certainly offers some tantalising possibilities, but while many technologies could enhance us, AI has the weird potential to eventually replace us.
Are humans worth preserving?
So, coming back to considering our response: how do we feel about what’s emerging?
Are our squishy human feelings even relevant? One could argue that there’s no reason, other than sentimentality, why humans shouldn’t be superseded by machines. Much as the AI Agent Smith puts it in The Matrix when, talking more personally to Morpheus, he says, “…evolution, like the dinosaur; look out that window, you had your time; the future is our world, Morpheus”.
This is the big issue I struggled with while writing this: finding a widely compelling reason to back up my feeling that humans are special – that they matter. In the end I did have an insight, the title of this article: that ‘AIs feel no pain’. That, whatever they simulate, they won’t genuinely empathise. Despite what they’re programmed to pretend, internally they’ll be cold, dead imitations of humans – soulless machines. To anyone who really appreciates what it is to be human, and connected, and to feel pain and love, it should be self-evident that we humans are, despite all that’s messy about us, worth preserving.
But it’s not just about the risk of AIs taking over one day; the more immediate risks are around how AIs will affect human interactions, society, and who we are. Parents are already struggling with the effects of digital immersion, and there’s arguably an accelerating disruption of deep human connectedness occurring in society. If we haven’t yet grasped how we’re going to hold on to our humanity amid the current wave of technologies, are we ready for the impact of AIs?
Placing limits on AI
I have the honour of living in a country that is, by government decree, nuclear-free; a tribute to the Lange-led Labour government that took power in 1984 and responded to the concerns of the majority of New Zealanders about visits by US nuclear vessels. Imagine if, at the dawn of the nuclear age, say after WWII, the international community had come together through the newly formed UN and, with an equally enlightened mindset, formed a consensus to enforce the restriction of nuclear technology to peaceful purposes globally. Current tensions with North Korea, and rising concerns about nuclear proliferation in general, could have been headed off right back then. At the dawn of the AI age it seems appropriate to learn from the past, and to lobby governments to start putting in place measures to monitor and limit AI. One potential approach is suggested by Jane McCarroll, marketing and membership manager at The Institute of Management NZ, who recently proposed the formation of an AI watchdog in her excellent article, Who is going to teach the robots manners?
Coming back to science fiction, what it has provided us with is a wealth of future possibilities to consider in advance, but it looks as though there’s now limited time to reflect on these insights of art before digital life potentially imitates them. So while the human brain is still the most complex object in the known universe, it seems like the right time to ramp up the conversation about what it means to be human, and where as a society we do and don’t want AIs amidst that.
If you think so too and wish to take the conversation further, please follow one of the links below to get in touch.