Everyday AI: Less Skynet, more Siri
By Benjamin Cher July 22, 2016
- Will be the default way for humans to interact with machines
- AI advancements + cheap hardware = convergence, not singularity
WHENEVER the subject of artificial intelligence (AI) comes up, there is inevitably a pop culture reference to Skynet, the AI from the Terminator franchise, or 2001: A Space Odyssey’s HAL 9000 for the previous generation.
It’s always about an AI that goes rogue and turns on its human creators. However, the reality of AI today is less like Skynet and more like Apple’s digital assistant Siri, Appier cofounder and chief executive officer Yu Chih-Han likes to point out.
“It will take a long time to get to that stage [of a rogue AI] – AI agents and robots do not have the capability to work outside the constraints that humans have confined them to,” he says, speaking to Digital News Asia (DNA) in Singapore recently.
“Usually, AI and robot capabilities are good at solving a problem with a clear objective – the objective and goals are given by humans and not the AI itself.
“It just serves humans and solves tasks with clear goals,” he adds.
Founded in 2012, Appier harnesses AI technology to develop cross-screen marketing solutions for advertisers. The Taipei-headquartered startup has offices in Singapore and San Francisco, and last November secured a US$23-million Series B round, according to Crunchbase.
Far from the Hollywood nightmare, AI today is making great strides in the consumer space, in the form of digital assistants like Apple’s Siri and Microsoft’s Cortana.
These kinds of AI technologies will eventually surround the user in what Yu describes as “everyday AI.”
“Basically everything surrounding the user will be around AI – we are living in a world of information abundance, and there is too much information that is irrelevant to a human being.
“We need to have an intelligent way of building that, and personalisation will be incredibly important,” he says.
As AI becomes more pervasive, it will soon become the new user interface (UI) between humans and machines, replacing more traditional methods such as the keyboard, mouse or even the touchscreen, according to Yu.
“Talking to the machine will become natural,” Yu declares. “It will actually come up very soon – as voice and gesture recognition technology advances, they will have a great impact on the user.”
That’s the sexy part, but just as critical is the use of AI in the business world.
Yu believes AI can be harnessed to solve complex enterprise problems “because in business, we usually have a clear goal in what we want to achieve.
“AI can help in recognising patterns across different variables and find the most relevant ones or most optimal solution for the company to make decisions – whether it’s for marketing or finance, AI will be useful across enterprise functions,” he adds.
Convergence, not singularity
The AI field is now at a convergence point, powered by its current advancements as well as the commoditisation of hardware.
“Because hardware computation power has become so cheap and is now everywhere – even the smartphone is more powerful than our PCs of 10 years ago – this has fundamentally transformed our visibility of AI,” says Yu.
“In the early days the problem with AI was that it was too slow, and there was no computation to empower it to process faster – AI often requires real-time interaction which was not achievable by the machines [back then].
“Now that hardware and computation costs are much cheaper, that has enabled AI fields and has contributed to a big shift of AI into real applications,” he adds.
While there were many theoretical applications for AI 10 years ago, it was only when hardware was able to boost the underlying infrastructure that these applications could be realised.
“Those research and innovations [from years ago] accumulated until we could empower the underlying infrastructure to make it possible, and now you can see the fruits of all kinds of applications come to life,” says Yu.
Advancements such as neural networks, which allow an AI to learn by itself, have also helped advance the field beyond early iterations such as expert systems.
“Early AI was mostly rule-based or logic-based, where you find the experts in a field and tell the computer about the logic behind making decisions and code it in,” says Yu.
“But it is very hard for experts to list down all the scenarios, decisions and how to deal with them – even for something as simple as walking, where there are so many different variables from rough terrain to steps, it is hard to list down every rule.
“That’s why learning is important – you can give the machine a goal, and it will automatically learn by trial and error.
“This also has the added benefit of learning when there is no human present in the background, allowing the machine to improve overnight,” he adds.
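Yu’s point about giving the machine a goal and letting it learn by trial and error describes, in essence, reinforcement learning. As a minimal sketch (not Appier’s technology – the corridor world, rewards and parameters here are illustrative assumptions), tabular Q-learning lets an agent discover by repeated trials that stepping towards a goal state pays off:

```python
import random

def train(n_states=6, goal=5, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a toy 1-D corridor.

    States are 0..n_states-1; actions are 0 (step left) and 1 (step right).
    The only reward is 1.0 for reaching the goal state -- the human supplies
    the goal, and the agent learns the behaviour by trial and error.
    """
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-value per (state, action)
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy: mostly exploit what was learned, sometimes
            # explore a random action (the "trial" in trial and error).
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Q-learning update: adjust the estimate from the trial's outcome.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The greedy policy learned for the non-goal states (1 = step right).
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(5)]
```

No human needs to supervise the loop while it runs, which is Yu’s "improve overnight" point: the reward signal alone shapes the behaviour.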
AI’s bright future
While the two are already entwined, AI will also further boost the next generation of robotics.
“I do see an opportunity for the next level of automation in the real world in a couple of domains – first, in the medical domain, in using robots to assist patients or elders,” says Yu.
“Second in the rescue domain – in dangerous or emergency situations, robots can replace humans to solve more dangerous tasks, so you don’t endanger more people while saving lives.
“Third is in home appliances – not just as a vacuum cleaner that moves around [like the Roomba], but appliances with AI components that make it possible to change other aspects of life,” he adds.