With all the fascination around artificial intelligence (AI) as an immense part of the future, it is easy to underestimate how firmly AI is rooted in the present already. Bright entrepreneurs have brought AI to agriculture, the oil and gas industry, radiology, financial technology, security and more. In fact, the largest and most successful companies in the world strongly believe in AI and invest substantial time and resources to harness its potential.
But the future that bold minds are dreaming about is already here. To demonstrate that, let's look at some everyday technologies most of us already rely on.
Our smartphones have brains
iPhones, which are among the top-selling smartphones in the world, have long been powered by a built-in intelligent assistant called Siri. In 2014, Siri had a brain transplant: in late July of that year, Apple quietly moved Siri's voice recognition to a neural-net-based system for US users, and did the same worldwide in mid-August. Without users knowing it, Apple brought AI right into our smartphones years ago.
As Steven Levy, author of the story "The iBrain Is Here," describes: "If you're an iPhone user, you've come across Apple's AI, and not just in Siri's improved acumen in figuring out what you ask of her. You see it when the phone identifies a caller who isn't in your contact list (but did email you recently). Or when you swipe on your screen to get a short list of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar. Or when a map location pops up for the hotel you've reserved, before you type it in. Or when the phone points you to where you parked your car, even though you never asked it to. These are all techniques either made possible or greatly enhanced by Apple's adoption of deep learning and neural nets.
"Yes, there is an 'Apple brain' – it’s already inside your iPhone.”
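One of the features Levy lists – identifying a caller who isn't in your contacts but did email you recently – can be pictured with a toy sketch. This is purely illustrative (not Apple's implementation): scan recent emails for a phone number matching the incoming call.

```python
import re

# Hypothetical sketch, NOT Apple's actual implementation: guess who an
# unknown caller is by looking for their number in recent emails.
RECENT_EMAILS = [
    {"sender": "Dana Lee", "body": "Call me tomorrow at +1-555-010-7788."},
    {"sender": "Sam Ortiz", "body": "My new number is (555) 010-3344, talk soon!"},
]

def digits(text: str) -> str:
    """Strip everything but digits, so formatting differences don't matter."""
    return re.sub(r"\D", "", text)

def guess_caller(incoming_number: str):
    """Return the sender of a recent email that contains the incoming number."""
    target = digits(incoming_number)[-10:]  # compare the last 10 digits
    for email in RECENT_EMAILS:
        # Find phone-number-like runs of digits and punctuation in the body.
        for candidate in re.findall(r"[\d()+\-. ]{7,}", email["body"]):
            if digits(candidate)[-10:] == target:
                return email["sender"]
    return None  # unknown caller stays unknown

print(guess_caller("555-010-3344"))  # Sam Ortiz
```

The real feature is, of course, far more sophisticated; the point is only that the phone can mine data it already has, locally, to make a useful guess.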
And Apple is not alone in affirming the presence and impact of AI. Android has its own insider – SILVIA. SILVIA for Android is an artificial intelligence system that runs natively on Android phones and tablets, so users can talk to their devices the way they would talk to another person.
AI-powered laptops are already here
With laptops, it's much the same story as with smartphones: AI-powered laptops are not exactly fresh news. Just this Wednesday, for example, Lenovo unveiled its new Yoga Book without a keyboard. The new laptop has no physical keys; instead, the Halo keyboard appears as a solid white outline that you can type against. Lenovo leverages artificial intelligence software that, the company says, gets smarter the more you type, helping predict what comes next. Moreover, users can make the Halo keyboard disappear entirely and use the panel as a digitizer.
Braina (Brain Artificial) is another example. Braina is an intelligent personal assistant, human-language interface and automation software for Windows PCs that lets users control a computer using natural-language commands. Importantly, Braina is not a Siri or Cortana clone for the PC but rather powerful personal and office productivity software; it is not a chatbot either. Users can either type commands or speak them, and Braina will understand what they want to do. Moreover, using the speech recognition built into the Braina app for Android, users can interact with their computers from anywhere in the house over a Wi-Fi network.
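To make the idea of a natural-language command interface concrete, here is a minimal sketch – again purely illustrative, with made-up commands, and in no way Braina's actual engine – of mapping typed phrases to actions via pattern matching:

```python
import re

# Illustrative only: a tiny natural-language command dispatcher.
# Real assistants use speech recognition and far richer language models.
COMMANDS = [
    (re.compile(r"open (?P<app>\w+)", re.I),
     lambda m: f"launching {m['app']}"),
    (re.compile(r"set volume to (?P<level>\d+)", re.I),
     lambda m: f"volume set to {m['level']}%"),
    (re.compile(r"what time is it", re.I),
     lambda m: "reading the clock"),
]

def handle(command: str) -> str:
    """Run the action for the first pattern that matches, else a fallback."""
    for pattern, action in COMMANDS:
        match = pattern.search(command)
        if match:
            return action(match)
    return "sorry, I did not understand that"

print(handle("Please open notepad"))  # launching notepad
print(handle("Set volume to 40"))     # volume set to 40%
```

The leap products like Braina make is from brittle patterns like these to genuinely understanding free-form speech.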
Meet Eyeriss: An independent decision-maker for devices
Have you heard of Eyeriss yet? ICYMI, earlier this year scientists from MIT unveiled an energy-efficient chip that can perform powerful artificial intelligence tasks. At the International Solid-State Circuits Conference in San Francisco, MIT researchers presented a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, rather than uploading data to the Internet for processing.
In describing what the all-mighty Eyeriss could do, MIT touches upon the IoT and connected devices. As stated in the announcement, the new chip is believed to be able to usher in the Internet of Things – the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination.
“With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.” Does that remind you of something? A real-life Skynet maybe?
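What does it mean for a chip to "implement neural networks"? At its core, a network layer is a grid of multiply-accumulate (MAC) operations – exactly what hardware like Eyeriss accelerates. A minimal sketch (toy weights, no libraries, merely to show the arithmetic involved):

```python
# Toy forward pass of one fully connected layer. Every weight contributes
# one multiply-accumulate (MAC) -- the operation a neural-net chip speeds up.
def dense(inputs, weights, biases):
    """out[j] = ReLU( sum_i inputs[i] * weights[i][j] + biases[j] )"""
    out = []
    for j in range(len(biases)):
        acc = biases[j]
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]   # one MAC per weight
        out.append(max(0.0, acc))      # ReLU activation
    return out

features = [0.5, -1.0, 2.0]                          # e.g. sensor readings
w = [[0.1, 0.4], [0.2, -0.3], [-0.5, 0.25]]          # made-up weights
b = [0.05, 0.1]
print(dense(features, w, b))
```

Running billions of these MACs per second at low power is what lets a device make decisions locally instead of shipping raw data to a server.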
In transportation, the term 'driver' no longer refers solely to a human
Moving on to another class of 'devices' – cars. A milestone event happened in February of this year, in a sense putting AI on equal footing with a human. Clearing the runway for driverless cars and their makers, the National Highway Traffic Safety Administration (NHTSA) told Google that the artificial intelligence system controlling its self-driving car can be considered a driver under federal law. Essentially, a driver no longer has to be human – AI software can be considered a driver, provided it is certified to the standards set for human drivers.
Self-driving cars are not a novelty anymore, but the fact that AI software is on the way to being recognized as a legitimate driver is quite interesting.
There are plenty of initiatives from innovative companies, at varying stages of experimentation. The bottom line, however, is that self-driving cars are far from being a mere concept anymore; they are out there, running highly advanced AI software and picking up passengers around the world.
Among the most interesting projects are Google's self-driving car project and Uber's self-driving fleet. Last Thursday, the world's first self-driving taxis (operated by nuTonomy) started picking up passengers in Singapore.
Cars are becoming 'smarter' and more connected
Self-driving vehicles will not be the only brainy ones. Overall, estimates suggest that the installation rate of AI-based systems in new vehicles will jump from 8% in 2015 to 109% in 2025 (a rate above 100% means the average new car will carry more than one such system), with the number of intelligent systems used in infotainment and advanced driver assistance systems (ADAS) growing from 7 million in 2015 to 122 million by 2025. In other words, AI-based systems are expected to become the standard in new vehicles over the next decade.
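A quick back-of-the-envelope check, using only the figures quoted above, shows why a 109% installation rate is plausible rather than a typo:

```python
# Sanity check on the forecast figures quoted in the text.
systems_2025 = 122_000_000  # AI-based systems forecast for 2025
rate_2025 = 1.09            # 109% installation rate = systems per new vehicle

# The two figures together imply roughly this many new vehicles in 2025:
implied_new_vehicles = systems_2025 / rate_2025
print(f"Implied new vehicles in 2025: {implied_new_vehicles:,.0f}")
# A rate above 100% simply means the average new car carries more than one
# AI-based system (e.g. both an infotainment system and an ADAS system).
```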
Moreover, Gartner predicts that by 2020 there will be 250 million cars connected to each other and to the surrounding infrastructure via Wi-Fi systems, allowing vehicles to communicate with one another and with the roadways.
AI can already diagnose a life-threatening disease and prescribe a treatment
Well, at least IBM Watson can. At the beginning of August, Watson, an AI-powered supercomputer, was reported to have successfully diagnosed a rare form of leukemia in a patient within minutes – something doctors had failed to do for months. Watson made its diagnosis after doctors from the University of Tokyo's Institute of Medical Science fed it the patient's genetic data, which it compared against information from 20 million oncological studies.
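The principle behind that comparison can be sketched in a few lines. To be clear, this is an illustration of the general idea only – with invented disease profiles – and not Watson's actual method: match the patient's mutations against mutation profiles drawn from the literature and rank candidate diagnoses by overlap.

```python
# Illustrative sketch only -- NOT IBM Watson's actual method, and the
# disease/mutation profiles below are made up for the example.
STUDY_PROFILES = {
    "acute myeloid leukemia":       {"FLT3", "NPM1", "DNMT3A"},
    "rare secondary leukemia":      {"TP53", "RUNX1", "ASXL1"},
    "chronic lymphocytic leukemia": {"SF3B1", "NOTCH1"},
}

def rank_diagnoses(patient_mutations):
    """Score each disease by the fraction of its profile seen in the patient."""
    scores = {
        disease: len(profile & patient_mutations) / len(profile)
        for disease, profile in STUDY_PROFILES.items()
    }
    # Best-matching candidates first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

patient = {"TP53", "RUNX1", "ASXL1", "FLT3"}
for disease, score in rank_diagnoses(patient):
    print(f"{disease}: {score:.0%} profile match")
```

The real system works over 20 million studies and vastly richer genomic features, but the shape of the task – match, score, rank – is the same.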
If software is already capable of diagnosing diseases better than doctors and prescribing effective treatment, intelligent and independent robot doctors may arrive in no time.