Lifesize has announced a partnership with Voicera to introduce the Eva virtual assistant to voice and video meetings. Eva uses Artificial Intelligence (AI) to create a transcript of the meeting and to highlight action points. Billionaire futurist Elon Musk has identified AI as “the greatest risk to the survival of the human race”, while at the same time acknowledging that: “If your competitor is rushing to develop AI and you are not, you will be crushed.” Bryan Denyer attempts to square the circle.
Environmentalist, entrepreneur, and space pioneer Elon Musk has a real bee in his baseball cap about AI. One of his favourite stories concerns the game of Go, an abstract strategy board game for two players in which the object is to surround more territory than your opponent. A few years ago, it was thought that no computer could ever beat a top human Go player. Then, two years ago, a computer beat the human world champion. Today, a computer could take on the world’s top 50 Go players simultaneously and beat them all.
So what, you might ask? But Musk also cites a more serious example: a theoretical AI solution for stock market investment optimisation. The device could be programmed to go long on defence, short on consumer stocks, and then, through a combination of fake news and spoofed email accounts, create conditions in which war was the likely outcome – a great result for the AI technologists and stock market investors, but not so good for the planet.
Alert to the possible consequences of the machines taking over, and the resulting job losses as humans are literally rendered redundant, Musk is using his celebrity profile to advocate regulation of the AI space “for the public good”. But isn’t that a bit draconian? Eva is designed to make notes and to improve post-meeting follow-up, rather than to poison the board and strip their bank accounts.
Ultimately, safeguards will be needed, but surely it is harsh to strangle a new branch of technology before it is even partially formed? There are some things AI is already very good at – interpreting big data in a digital signage application, for example. Or using Eva to distil interminable meetings into a list of action points that might actually be actioned.
But, currently, there are some things that AI is not so good at, and that require much more work. Eric Horvitz, Managing Director at Microsoft Research, explains that it is not currently practical to “encode common sense”. The human-machine interface is still somewhat stilted in terms of conversational flow, although this is largely a consequence of computational limitations. Horvitz says that “conversation is a difficult intellectual tango”, and while coding “compelling and engaging personalities” is a possibility, the user is left with the feeling that the lights are on but there’s nobody at home.
For the foreseeable future, AI applications are likely to be concentrated on enhancing human productivity and the quality of the human experience. In fact, AI is often defined as “the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.”
Nowhere does the definition refer to destroying the human race.
But, just in case, Elon Musk is funding the OpenAI initiative in support of his view that access to AI technology should be democratised. Musk argues that regulation has to be proactive, rather than reactive, as we approach the point of ‘Singularity’ (where the power of computers exceeds the power of the human brain); by the time we are reactive, he argues, it will be too late.
For now, most early adopters’ experience of AI is limited to commercial developments including IBM Watson, Google DeepMind, and conversational assistants such as Apple’s Siri, Google Now and Microsoft’s Cortana. But there is another category of AI, already in common use, that has more short-term relevance for collaboration: AI solutions designed for specific tasks (so-called “narrow AI”). Examples include systems that recommend things based on past behaviour, systems that learn to recognise images from examples, and systems that make decisions based on the synthesis of evidence.
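To make the first of those categories concrete, the minimal Python sketch below shows how a very narrow recommender might suggest items purely from co-occurrence in past behaviour. The purchase history and product names are invented for illustration and have nothing to do with Lifesize, Voicera or Eva; real recommendation engines are far more sophisticated, but the principle is the same.

```python
from collections import Counter

# Illustrative only: a hypothetical purchase history for one shopper.
purchase_history = [
    {"camera", "tripod"},
    {"camera", "memory card"},
    {"camera", "tripod", "memory card"},
    {"headphones"},
]

def recommend(basket, history, top_n=2):
    """Suggest items that most often co-occurred with the current basket."""
    scores = Counter()
    for past_basket in history:
        if basket & past_basket:              # overlaps with what the user has now
            scores.update(past_basket - basket)  # count items not already chosen
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"camera"}, purchase_history))
# e.g. ['tripod', 'memory card']
```

The point of the sketch is how little “intelligence” is involved: the system is simply counting, yet from the user’s side it appears to anticipate what they want next.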
The term Artificial Intelligence was coined as far back as 1956, but development has accelerated sharply in recent years, and it is the ‘narrow AI’ thread that has received most attention – and arguably achieved the most in terms of the human experience. The convenience of a fridge placing an order with a store for the ingredients of a recipe is currently winning the battle with any concerns about the fridge ganging up with the waste processor to devastate the food chain.