How does all the stuff in the world get connected, to the point where humans live with the equivalent of “an angel on your shoulder,” an artificial intelligence as pervasive as your own thoughts?
And who the heck is going to build all that?
Such are the provocative questions that emerged during a Monday afternoon session on artificial intelligence at the Mobile World Congress trade show in Barcelona.
The auditorium was absolutely packed, an already airless room becoming even more so, a clear sign of how much interest such questions attract.
Since this is a telecom show, the panel of entrepreneurs and academics nimbly threaded the connections among the emergent 5G networking technology, wearables, and something called “edge computing,” in a session dubbed “A.I. Everywhere.”
The panel’s moderator, Robert Marcus, general partner of Quantum Wave Capital, a Silicon Valley firm on the storied Sand Hill Road, talked of “massive” change that will come from networks enabling connected things.
Marcus’s point, as laid out in an initial slide, was that Apple’s (AAPL) first iPhone, in 2007, set off a burst of digital activity, one that apps would fully exploit with 4G networking.
Now, he said, the advent of 5G will make possible edge computing, which will make possible “orders of magnitude” increases in compute, which will in turn make possible A.I. everywhere.
By way of background, it is increasingly clear 5G is more about connecting many devices, perhaps unmanned, such as factory robots, than it is about bringing greater speeds to human users of smartphones.
Sure, speeds will rise for users on Verizon Communications (VZ) and other networks. But the most novel technology enhancement that comes with 5G, something not even discussed with past generations, is a reduction in “latency,” the time it takes the first bit of a transmission to reach its destination.
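To see why latency, rather than raw speed, is the hard constraint, consider the physics: light in fiber covers only about 200 kilometers per millisecond, so distance to the data center puts a floor under response time no matter how fast the radio link gets. Here is a back-of-envelope sketch in Python, with figures of my own choosing rather than anything cited at the session:

```python
# Back-of-envelope propagation delay. The ~200 km/ms figure for light
# in fiber (roughly two-thirds the speed of light in a vacuum) is a
# common rule of thumb; the distances below are illustrative assumptions.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone, there and back, before any processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(round_trip_ms(1000))  # a distant cloud region: ~10 ms
print(round_trip_ms(10))    # a nearby edge facility: ~0.1 ms
```

On those numbers, a millisecond-scale response simply cannot come from a data center a thousand kilometers away, which is the physical case for moving compute to the edge.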
Marcus’s apostle for the technical details was his first speaker, Mahadev Satyanarayanan, a professor at Carnegie Mellon University. His passion is the emerging “tier” of edge computing, which sits between the centralized cloud and all the billions of devices that will be connected in the world, including smartwatches and self-driving cars and on and on.
Satyanarayanan, whom Marcus referred to as “Satya,” informed the audience he had been working on edge computing “since as long as there has been edge computing,” which sounded rather confusing given that the term seems to have popped up only in the last two years.
In any event, Satya’s main point was that, for a variety of reasons, there needs to be something outside the central facilities of Amazon (AMZN), Alphabet’s (GOOGL) Google, and Microsoft’s (MSFT) Azure to interface with all the connected things.
“The ability to process without sending to the cloud is absolutely crucial” to the future of A.I., he said.
One reason is privacy and security, a “notion of a privacy firewall that is under your [direct] control,” he said. Another is the ability to “fall back” to local compute when those central cloud resources are unavailable. He specifically cited the security concerns raised by some edge devices, such as Amazon’s “Echo” home speakers.
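That fall-back idea follows a simple, familiar pattern in distributed systems: try the cloud, but never block on it. A minimal sketch, assuming a hypothetical cloud endpoint and a stand-in local model, neither of which was anything shown at the session:

```python
import socket
import urllib.error
import urllib.request

def local_predict(sample: bytes) -> str:
    """Stand-in for an on-device model: always available, less capable."""
    return "local-guess"

def classify(sample: bytes) -> str:
    """Prefer the richer cloud model, but fall back to local compute."""
    try:
        req = urllib.request.Request(
            "https://cloud.example.com/classify",  # hypothetical endpoint
            data=sample,
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=0.2) as resp:
            return resp.read().decode()
    except (urllib.error.URLError, socket.timeout):
        # Cloud unreachable or too slow: answer locally instead.
        return local_predict(sample)

print(classify(b"sensor-reading"))
```

The timeout is the key design choice: it bounds how long the device will wait before its local, if less capable, model takes over.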
And here’s where it ties in to A.I.: the connected things must not have to constantly poll the central brain to understand what they are doing. An audience member asked about the machine learning phase called “training,” in which a computer is shown many, many examples and learns to detect patterns. Training, said Satya, will have to move to the edge in some fashion, because a connected thing won’t always be able to rely on what it was trained on back in the lab.
“Suppose it’s trying to understand my walking,” proposed Satya, referring to some kind of activity tracker. “It knows how I normally walk. But what if I am now carrying a heavy load, or what if I stub my toe, and my gait is different,” so that the motion of the person with the sensor is unrecognizable. Then, the connected thing needs to learn on the spot, he said, and so it will need local computing to do so.
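Machine learning libraries that support incremental updates make this kind of on-the-spot adaptation easy to sketch. Here is a toy rendering of the gait example, assuming scikit-learn and entirely synthetic sensor data; it stands in for whatever on-device learning a real wearable would actually do:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier  # supports incremental fits

rng = np.random.default_rng(0)
classes = ["normal", "loaded"]  # e.g., walking while carrying a heavy load

# "Lab" training: the model has only ever seen normal walking.
lab_strides = rng.normal(0.0, 1.0, size=(200, 4))  # 4 synthetic features
model = SGDClassifier()
model.partial_fit(lab_strides, ["normal"] * 200, classes=classes)

# On the device: the wearer picks up a load, and stride features shift.
# Each new labeled stride nudges the model locally, no cloud round trip.
for _ in range(50):
    stride = rng.normal(2.0, 1.0, size=(1, 4))
    model.partial_fit(stride, ["loaded"])

print(model.predict(rng.normal(2.0, 1.0, size=(1, 4))))  # now "loaded"
```

In practice the hard part Satya alluded to is getting labels on the spot, but the mechanics of updating a model at the edge look roughly like this.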
Other examples included the multiple sensors in a car. What if cars on the road were programmed to look for a missing child? Or to participate in the Amber Alert system for known offenders in a neighborhood? Or, “every car in the city looking out for your dog,” should your dog get lost?
It would be, said Satya, like Google’s “Waze” app, but without a human being. “Video cameras replace people in a Waze-style application,” he suggested.
Satya proposed the idea of “cloudlets,” little cloud-like machines that will be near the activity.
A cloudlet, said Satya, is “a small data center at the edge of the Internet.” Cloudlets have the benefits of wearability, he said, with the attributes of cloud-like services.
And that’s where latency comes in, the ability to do the learning without transmitting out to the central cloud facility and all the way back. The human cognitive system is incredibly fast, he said. The challenge is akin to building that cognitive neural system across networked computers.
“If you have a human in the loop, or a machine, such as a self-driving car,” said Satya, “you not only have high bandwidth from the edge inwards, you need to send a response fast.”
Cloudlets are nice in that respect, he said, because they are “one hop away from the third tier,” and fewer hops mean lower latency since it’s a shorter trip.
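The arithmetic is crude but telling. Assuming a couple of milliseconds of forwarding delay per hop, figures I am inventing for illustration rather than taking from Satya’s talk, hop count dominates the round trip:

```python
# Illustrative only: per-hop and processing delays are invented
# assumptions to show why hop count matters; real networks vary widely.
def rtt_ms(hops: int, per_hop_ms: float = 2.0, processing_ms: float = 5.0) -> float:
    """Round-trip time: cross each hop out and back, plus processing."""
    return 2 * hops * per_hop_ms + processing_ms

print(rtt_ms(hops=1))   # a cloudlet one hop away: ~9 ms
print(rtt_ms(hops=12))  # a distant cloud data center: ~53 ms
```

On top of the propagation floor sketched earlier, every hop adds queuing and forwarding time, which is why “one hop away” is the whole selling point.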
Satya extended the notion to wearables, specifically smart glasses, where, he said, augmented reality would meet A.I. Smart glasses, whispering information in your ear, would become “like an angel on your shoulder,” and before long, humans will live in a world where “The angel on your shoulder is indistinguishable from the voice in your head.” With his cherubic face and halo of white hair, Satya was an apt messenger for such a prospect.
Satya proposed that the end game, at least for humans, would be the arrival of futurist Ray Kurzweil’s notion of the “singularity”: “The biological limits on human intelligence will be eliminated,” he declared. “This is [the] path toward that vision.”
He also put up a slide of the late Mark Weiser, who postulated that technology would disappear into the background. (I should note the same idea has long been propounded by Internet pioneer Leonard Kleinrock, as he articulated in my 2015 interview with him for Barron’s.)
But, who’s going to build all this?
It won’t be the cloud computing companies, he concluded. “Cloud computing is not going to do it, I’m sorry,” he said. Amazon, Google, and Microsoft are, for the most part, too hung up on centralization. “It will be hard for them to embrace edge computing,” he said, before quickly adding, “except maybe Microsoft. Microsoft came from the world of edge computing,” the world of the traditional PC and workgroup server.
If not the cloud people, it could be the telcos. Or it could be someone else. “It’s wide open,” he assured the audience.
There will be effects for semiconductors, too, as “cloudlets can use lots of custom chips,” he said. And there is already the advent of an “A.I. processing unit,” a “new kind of chip,” he noted.
At the end of the day, though, he conceded, “we don’t really know how to provision the edge, yet.”
That leaves plenty for both entrepreneurs and academics to think about.