Of all emerging technologies, executives believe the IoT will be the most important, ranking it above others such as artificial intelligence and robotics, Forbes reports. Recent estimates suggest that IoT technology revenues across 12 key smart city technologies and verticals will grow from around $25 billion in 2017 to $62 billion in 2026, an average annual growth rate of 11%. But the real-world impact of IoT depends on enabling infrastructure: sensory and tracking technologies, and advanced analytical capabilities running on a new generation of processing power.
Sensory technology is a particularly important and interesting field, and developments in it have significant implications across industries. Sensory and tracking tech create real-time streams of invaluable data with unique predictive potential. The wearables, trackers, and smart home devices that fill our lives generate a constant stream of evolving data, allowing companies to understand moments of our day, make data-based inferences about how our needs evolve, and tailor the services they deliver accordingly.
“Novel sensors and sensing systems will provide previously unimaginable insight into the condition of individuals and the built and natural worlds, positively impacting people, machines, and the environment,” says Brian W. Anthony, a principal research engineer at MIT and Director of the Advanced Manufacturing & Design Program.
Over time, as sensory and tracking technology accumulates detailed information – whether from commercial or personal devices – increasingly accurate patterns and predictions will become possible. Data analytics will become an even more critical function with the evolution of hardware-accelerated machine learning and AI. For businesses, this translates into cost efficiency and new avenues for innovation, among other implications.
As Timothy Swager, the John D. MacArthur Professor in the Department of Chemistry at MIT explains, “If you look at what’s happening with sensors, you’ll see that many different disciplines have to come together. Ubiquitous sensing has so many aspects — chemical, biological, physical, radiological. With all this sensing research going on, we need a place to coordinate our synergies.”
Sensor fusion will bring the output of connected ecosystems closest to what the human brain does when reading the environment: fusing information from the body's various sensory systems into a single picture. Likewise, granular streams of data from advanced sensory and tracking tech will be molded into a single model by the analytical capabilities of artificial intelligence running on a new generation of processors.
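At its simplest, sensor fusion means combining several noisy measurements of the same quantity into one estimate that is more reliable than any single sensor. The sketch below illustrates the idea with an inverse-variance weighted average; the sensor values and variances are illustrative assumptions, not from any real deployment.

```python
# A minimal sketch of sensor fusion: several noisy sensors measuring
# the same quantity (say, room temperature) are combined so that more
# precise sensors (lower variance) contribute more to the estimate.

def fuse(readings):
    """Inverse-variance weighted average of (value, variance) pairs.

    Returns the fused estimate and its variance, which is always
    smaller than the variance of any individual sensor.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    estimate = sum(w * value for w, (value, _) in zip(weights, readings)) / total
    variance = 1.0 / total  # fused estimate is more certain than any input
    return estimate, variance

# Three hypothetical temperature sensors: (reading, variance)
sensors = [(21.3, 0.5), (21.9, 0.2), (20.8, 1.0)]
value, var = fuse(sensors)
print(f"fused estimate: {value:.2f} (variance {var:.3f})")
```

Full fusion systems use richer models (Kalman filters, learned models) and heterogeneous sensors, but the principle is the same: weight each stream by how much it can be trusted.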
The Future Interfaces Group at Carnegie Mellon University emphasizes that the promise of smart environments and the Internet of Things relies on robust sensing of diverse environmental facets. Through what the Group calls Synthetic Sensors, raw sensor data can be virtualized into actionable feeds while mitigating immediate privacy issues. The Group deployed its system across many months and environments, and the results demonstrated the versatility, accuracy, and potential of the approach.
“A single room can have dozens of complex environmental facets worth sensing, ranging from ‘is the coffee brewed’ to ‘is the tap dripping.’ A single home might have hundreds of such facets, and an office building could have thousands. The cost of hundreds of physical sensors is significant, not including the even greater cost of deployment and maintenance. Moreover, extensively instrumenting an environment in this fashion will almost certainly carry an aesthetic and social cost.
A lightweight, general-purpose sensing approach could overcome many of these issues. Ideally, a handful of “super” sensors could blanket an entire environment – one per room or less. To be minimally obtrusive, these sensors should be capable of sensing environmental facets indirectly (i.e., from afar) and be plug-and-play – foregoing batteries by using wall power, while still offering omniscience despite potential sub-optimal placement. Further, such a system should be able to answer questions of interest to users, abstracting raw sensor data (e.g., z-axis acceleration) into actionable feeds, encapsulating human semantics (e.g., a knock on the door), all while preserving occupant privacy.” – Synthetic Sensors: Towards General-Purpose Sensing, by Gierad Laput, Yang Zhang, and Chris Harrison of the Human-Computer Interaction Institute, Carnegie Mellon University.
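To make the abstraction step concrete, the toy sketch below turns a raw z-axis acceleration feed into a semantic "knock" event, in the spirit of the quoted example. The threshold and refractory window are illustrative assumptions; the actual Synthetic Sensors system uses far richer featurization and machine-learned classifiers.

```python
# A minimal sketch of abstracting raw sensor data into a semantic event:
# raw z-axis acceleration samples in, candidate "knock" events out.
# Threshold and refractory window are hypothetical tuning values.

def detect_knocks(z_accel, threshold=2.0, refractory=3):
    """Flag sample indices where z-axis acceleration deviates sharply
    from the signal's mean, suppressing duplicates within a short
    refractory window (a single knock produces a burst of spikes)."""
    baseline = sum(z_accel) / len(z_accel)
    events, last = [], -refractory
    for i, sample in enumerate(z_accel):
        if abs(sample - baseline) > threshold and i - last >= refractory:
            events.append(i)
            last = i
    return events

# Mostly-quiet signal (~9.8 m/s^2, gravity) with two sharp spikes.
signal = [9.8, 9.8, 9.9, 15.0, 9.8, 9.8, 9.7, 9.8, 14.5, 9.8]
print(detect_knocks(signal))  # two candidate knock events
```

The point is the shape of the pipeline, not the detector: a "super" sensor ships event feeds like "knock on the door" rather than raw acceleration traces, which is also what lets it discard privacy-sensitive raw data at the edge.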