Augmented reality at architectural scale: this is what one gets by combining digitization-accelerating technologies in industrial settings with the right formula. Product Lifecycle Management (PLM), augmented reality (AR), software-as-a-service (SaaS), artificial intelligence (AI), the Internet of Things and Industrial Internet of Things (IoT, IIoT), cloud and computer-aided design (CAD) already enhance each other synergistically when used in the best combination for each specific use case.
Now a new field of research and practice is leveraging computer vision and depth sensors, combining them, or, better yet, drawing on their features, to produce spatial computing. Spatial computing is the big leap that takes applications consumers have already widely adopted, such as GPS, Google Maps, Pokémon Go, ride-sharing apps or drone software, into the industrial world.
Spatial computing provides dynamic, real-time 3D visualization of products, industrial spaces and workers, and all their interactions. In other words, it is the ability to virtualize or digitize how machines, objects, people, and environments relate to each other in space.
Factory floor and warehouse managers know the value of studies that show how machines and materials (to name just two of the many elements that cross a shop floor) move in a given time frame, and how crucial these studies are to optimizing workflows and the movements of hundreds or thousands of people, machines and objects. They also know how slow a process it is to derive strategies for real, three-dimensional environments from 2D reports (drawings and spreadsheets)—and this without even beginning to consider the time it takes to feed the analysis back to the managers and workers operating in the plant.
“Depth-sensing cameras can capture a space such as you might experience in an online virtual tour of a home for sale. The same technology works great to capture 3D models of factories and plants,” Jim Heppelmann, the CEO of PTC, explained.
“Spatial Computing allows us to combine and integrate what we know from IoT, AR, CAD and PLM and our heritage in 3D, together with what cameras see happening in that same workspace, so that we can create an amazingly powerful 3D digital twin that understands all of the dimensions of people, product, process and place (…) and we can use VR to go there virtually anytime we want,” said Heppelmann, who first introduced spatial computing at the 2020 LiveWorx event.
> The premiere episode of LiveWorx 2021: The Limited Series will be online Thursday, March 25th at 10:00 AM EDT.
Applying analytics on top of data collected in real time and in 3D makes the difference when it comes to making informed, effective decisions about optimizing processes, products and the way employees perform their work. That is the message researchers want to convey to businesses: “By combining artificial intelligence with IoT and AR in every dimension and combination, that is, spatial computing, we can monitor and optimize entire work sites.”
Among the benefits of having a granular, dynamic and real-time view of each instance of processes are:
- Managing complexity, as in the case of the “Amazon effect” in warehouses, where each day tens of thousands of objects, robots and operators move asynchronously yet must stay in sync
- Simplifying operations by immediately detecting and giving feedback on objects in unfamiliar environments—e.g. while performing remote maintenance—or objects hidden from the view of workers
- Reducing time by making it possible to reprogram robots on the spot, cutting downtime without requiring engineering skills
- Increasing safety—a critical issue as the number of robots in plants keeps growing—by detecting and communicating in real time changes in robot paths, sudden movements of loads, sudden halts of machinery or lines, or changes in emergency evacuation routes
- Equally important, focusing designers’ attention on the ergonomics and efficacy of workers’ movements to improve their well-being and safety
In other words, spatial computing provides the tools for producing a digital twin of virtually every process, a digital twin that engineers and operators can revisit and experiment with at any time in virtual reality (VR) to modify the process as needed in the shortest possible time—all the more so once 5G networks become widely available.
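To make the safety benefit above concrete, here is a minimal sketch of how streamed 3D positions could drive proximity alerts between workers and robots. The object schema, the `kind` labels, and the 2-meter safety gap are illustrative assumptions, not part of any PTC product:

```python
import math
from dataclasses import dataclass

@dataclass
class TrackedObject:
    """A worker, robot, or load tracked in 3D (hypothetical schema)."""
    object_id: str
    kind: str   # e.g. "worker", "robot", "load"
    x: float    # position in meters, plant coordinate frame
    y: float
    z: float

def distance(a: TrackedObject, b: TrackedObject) -> float:
    """Euclidean distance between two tracked objects."""
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def safety_alerts(objects, min_gap_m=2.0):
    """Flag every worker/robot pair closer than the allowed safety gap."""
    workers = [o for o in objects if o.kind == "worker"]
    robots = [o for o in objects if o.kind == "robot"]
    alerts = []
    for w in workers:
        for r in robots:
            d = distance(w, r)
            if d < min_gap_m:
                alerts.append((w.object_id, r.object_id, round(d, 2)))
    return alerts

# One frame of (hypothetical) camera tracking data
snapshot = [
    TrackedObject("w-01", "worker", 0.0, 0.0, 0.0),
    TrackedObject("rob-07", "robot", 1.0, 1.0, 0.0),
    TrackedObject("rob-09", "robot", 10.0, 0.0, 0.0),
]
print(safety_alerts(snapshot))  # → [('w-01', 'rob-07', 1.41)]
```

A real system would run this check on every frame of tracking data and push the alerts to the AR interfaces of nearby operators.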
The term spatial computing was coined by Simon Greenwold at the MIT Media Lab in his forward-looking 2003 thesis. His ideas have translated into reality over the last few years as depth-sensing cameras, IoT, AR and computer vision technologies have matured.
Valentin Heun is also an MIT Media Lab alumnus. Vice President of Innovative Engineering at the PTC Reality Lab, he oversees the 17th floor of a building overlooking the Boston bay. Over a career spanning the Bauhaus Universität, the MIT Media Lab and PTC’s Reality Lab, his research has won many prizes.
Spatial computing is forecast to boom thanks also to a growing number of applications emerging in non-industrial fields, including gaming with mixed, virtual or augmented reality and immersive, hyper-realistic experiences.
In the video, Anna Fusté, a PTC engineer, and her Lego robot both know the physical space of her apartment well. Anna controls the robot’s movements with her smartphone by pressing a virtual button in space. To reprogram the path, Anna just needs to change it in the AR interface, without any traditional engineering reprogramming or downtime. The Vuforia Spatial Toolbox communicates with Anna through her sight and hearing, and with the robot through an IoT wireless connection that, in essence, works with X, Y, Z coordinates.
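The article does not specify the Toolbox’s wire protocol, but the idea of reprogramming a path by sending new X, Y, Z coordinates over a wireless link can be sketched in a few lines. The JSON message schema, the UDP transport and the address below are purely illustrative assumptions, not the Vuforia Spatial Toolbox’s actual protocol:

```python
import json
import socket

# Hypothetical robot endpoint on the local network
ROBOT_ADDR = ("127.0.0.1", 9000)

def send_path(waypoints, addr=ROBOT_ADDR):
    """Serialize a list of (x, y, z) waypoints in meters and send it
    to the robot as a single JSON datagram over UDP."""
    payload = json.dumps({
        "command": "set_path",
        "waypoints": [{"x": x, "y": y, "z": z} for (x, y, z) in waypoints],
    }).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, addr)
    finally:
        sock.close()
    return payload  # returned so the caller can inspect what was sent

# Reprogramming the path is just sending new coordinates:
# no downtime, no traditional robot-programming toolchain.
new_path = [(0.0, 0.0, 0.0), (1.2, 0.5, 0.0), (2.4, 0.5, 0.0)]
send_path(new_path)
```

The point of the sketch is the workflow, not the transport: the AR interface edits a list of spatial coordinates, and pushing that list to the robot is the entire “reprogramming” step.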
A few other big companies, like Siemens and Dassault Systèmes, are working on industrial spatial computing solutions too, but PTC is ahead with the first open-source industrial spatial computing toolbox. The goal is to help industry researchers start experimenting and creating applications for their own use cases.
On the hardware side, smartphones’ increasingly sophisticated ability to perceive space and depth is helping propel spatial computing. Such is the case of the iPhone 12 Pro’s LiDAR, in essence the same technology autonomous cars use, and of other devices gaining traction among consumers, such as AR-capable glasses like Bose Frames or North Focals—their creators predict that, in combination with smartphones, they will disrupt the architecture of computing in the not-too-distant future.
The capabilities of some smartphone cameras have already spurred, over the last couple of years, a number of AR remote training and maintenance apps across industries, from health to education to heavy machinery and mining—in many cases used in combination with hands-free AR devices like Microsoft’s HoloLens 2, as frontline workers in hospitals have done. This in turn drove demand for cameras with better performance and quality (depth, wide angles, etc.).
In the industrial world, the optimization of work has come a long way, from Taylorism to a context of non-negotiable requirements that include the safety and well-being of operators. In relation to the latter, spatial computing can now, for instance, detect movements that are too burdensome or ineffective because of weight, size, path, etc., enabling a more rational orchestration of work based on a scientific analysis of processes—rather like Iron Man’s J.A.R.V.I.S., which collected data to help Tony Stark decide how to act.
As for efficiency, there are many examples of the difference a warehouse, plant or process without surprise bottlenecks makes. Among the big companies already leveraging AR in their processes are BMW, Volvo, Mercedes-Benz and Mitsubishi in the automotive industry, and Airbus, NASA, Lockheed and the US Air Force in aerospace.
“With our open-source offering we wanted developers, innovators and researchers to begin experimenting with spatial computing within their own companies, because this is clearly a field with nearly unlimited potential in a situation where a whole ecosystem of innovation will be needed to fully capture the possibilities this incredible innovation represents,” Heppelmann said. “We need to learn from 2020 for a better 2021. Many departments went remote overnight. Others, like product development, could not, because crucial software applications such as CAD or PLM sat on office servers that had become inaccessible due to Covid,” Heppelmann added. His company had acquired Onshape and Arena Solutions before 2020, and was therefore ready at the onset of the pandemic to offer software-as-a-service (SaaS).
In the journey towards spatial computing, a first, less challenging step is AR. According to Stacey Soohoo, research manager at IDC’s Customer Insights & Analysis, “2020 has become a major turning point where enterprises and organizations across all verticals are embracing the unarticulated need for augmented, mixed, and virtual reality.”
IDC forecasts that by 2024 manufacturing companies will spend about $11 billion on these technologies.
The commercial use cases that are forecast to receive the largest investments in 2024 are training ($4.1 billion), industrial maintenance ($4.1 billion), and retail showcasing ($2.7 billion). In comparison, the three consumer use cases for AR/VR (VR gaming, VR video/feature viewing, and AR gaming) are expected to see combined spending of $17.6 billion by 2024—or, according to ReportLinker.com, growth from $15.3 billion in 2020 to $77 billion in 2025.
“We’ve seen a huge uptick in commercial interest in both virtual and augmented reality driven by the pandemic,” Tom Mainelli, vice president of IDC’s Devices and Consumer Research Group, said. “Organizations of all sizes are leveraging the technologies to capture and transfer knowledge between experienced and new employees, enhance and streamline field operations, and increase collaboration among frontline workers.”
While many still think of spatial computing as something out in the future, Heun believes it won’t take long before it becomes just another type of computing.
To help operators, developers and engineers get familiar with the Reality Lab’s research, the group created a two-volume comic book: Vision Comic Book Volume I and Volume II: Spatial Computing.