In their November 2014 Harvard Business Review cover story, “How Smart, Connected Products Are Transforming Competition,” PTC CEO Jim Heppelmann and Professor Michael Porter make a critical strategic point about the Internet of Things that’s obscured by focusing only on IoT technology: “…What makes smart, connected products fundamentally different is not the internet, but the changing nature of the ‘things.’”
In the past, “things” were largely inscrutable. We couldn’t peer inside massive assembly-line machinery, or inside cars once they left the factory, so companies had to base much of their strategy and daily operations on inferences drawn from limited data about these things and their behavior, data that was often gathered only after the fact.
Now that lack of information is being removed. The Internet of Things creates two unprecedented opportunities regarding data about things:
- data will be available instantly, as it is generated by the things
- that data can also be shared instantly with everyone who needs it.
This real-time knowledge of things presents both real opportunities and significant management challenges.
Each opportunity carries with it the challenge of crafting new policies on how to manage access to the vast new amounts of data and the forms in which it can be accessed.
For example: with the Internet of Things we will be able to achieve optimal manufacturing efficiency, along with unprecedented integration of supply chains and distribution networks. Why? Because we will now be able to “see” inside assembly-line machinery. The various parts of the assembly line will be able to regulate one another automatically, without human intervention (machine-to-machine, or M2M), to optimize each other’s efficiency, and workers will be able to fine-tune operations based on the same data.
Equally important, because of the second new opportunity, the exact same assembly line data can also be shared in real time with supply chain and distribution network partners. Each of them can use the data to trigger their own processes to optimize their efficiency and integration with the factory and its production schedule.
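The fan-out described above can be sketched as a toy publish/subscribe bus: one machine reading is delivered simultaneously to an in-factory M2M controller and a supply-chain partner. The machine name, the backlog field, and the threshold are all illustrative assumptions, not a real factory protocol:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SensorBus:
    """Minimal in-memory publish/subscribe bus for machine telemetry."""
    subscribers: list = field(default_factory=list)

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, reading: dict) -> None:
        # Every subscriber sees the same reading at the same moment,
        # rather than waiting for it to trickle down a reporting chain.
        for handler in self.subscribers:
            handler(reading)

log = []
bus = SensorBus()
# The factory's own M2M controller throttles the next station when a backlog appears.
bus.subscribe(lambda r: log.append(("controller", "throttle" if r["backlog"] > 10 else "ok")))
# A supply-chain partner watches the same stream to adjust its own schedule.
bus.subscribe(lambda r: log.append(("partner", "expedite" if r["backlog"] > 10 else "hold")))

bus.publish({"machine": "press-4", "backlog": 14})
```

The point of the sketch is the shape, not the plumbing: both consumers act on the identical raw event, with no intermediary deciding who gets what.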
But that possibility also creates a challenge for management.
When data was hard to get, limited in scope, and largely gathered historically rather than in the moment, what data was available flowed in a linear, top-down fashion. Senior management had first access, then passed on to individual departments only what they decided was relevant. Departments never had the chance to examine the raw data simultaneously, hold round-table discussions of its significance, and improve decision-making together. Everything was sequential. Relevant real-time data that workers on the factory floor could have used to do their jobs better almost never reached them.
That all potentially changes with the IoT – but will it, or will the old tight control of data remain?
Managers must learn to ask a new question, one that runs contrary to the old top-down control of information: who else can use this data?
To answer that question they will have to consider the concept of a “data lake” created by the IoT.
“In broad terms, data lakes are marketed as enterprise wide data management platforms for analyzing disparate sources of data in its native format,” Nick Heudecker, research director at Gartner, says. “The idea is simple: instead of placing data in a purpose-built data store, you move it into a data lake in its original format. This eliminates the upfront costs of data ingestion, like transformation. Once data is placed into the lake, it’s available for analysis by everyone in the organization.”
Essentially, data collected and stored in a data lake remains in the state in which it was gathered and is available to anyone, rather than being structured, tagged with metadata, and restricted to limited access.
That is a critical distinction and can make the data far more valuable, because the volume and variety will allow more cross-fertilization and serendipitous discovery.
At the same time, it’s also possible to “drown” in so much data, so C-level management must craft new, deft policies to serve as lifeguards, as it were. These policies must govern data lake access so that, on one hand, we avoid drowning in the sheer volume of data and, on the other, we capitalize on its full value:
- Senior management must resist the temptation to analyze the data first and then pass on only what they deem of value. They too will have a crack at the analysis, but the value of real-time data lies in getting it while it can still be acted on in the moment, rather than only in retrospect. (That’s not to say historical analysis won’t matter going forward: it will still provide valuable perspective.)
- There will need to be limits to data access, but they must be commonsense ones. For example, production line workers won’t need access to marketing data, just real-time data from the factory floor.
- Perhaps most important, access shouldn’t be limited based on preconceptions of what might be relevant to a given function or department. For example, a prototype vending machine uses Near Field Communication to learn customers’ preferences over time, then offers them special deals based on those choices. By thinking inclusively about the machine’s data rather than limiting access to the marketing department, the company shared the real-time information with its distribution network, so trucks were automatically rerouted to resupply machines that were running low due to factors such as summer heat.
- Similarly, they will have to relax arbitrary boundaries between departments to encourage mutually beneficial collaboration. When multiple departments not only share but also get to discuss the same data set, synergies will undoubtedly emerge among them (such as the vending machine rerouting) that no one department could have discovered on its own.
- They will need to challenge their analytics software suppliers to create new software and dashboards specifically designed to make such a wide range of data easily digested and actionable.
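The “commonsense limits” from the list above can be sketched as a simple role-to-stream policy table; the roles and stream names here are hypothetical. Note that the same stream can appear under several roles, which is how the vending machine data reached both marketing and logistics:

```python
# Hypothetical policy mapping roles to the data streams they may read.
# Limits are commonsense (a floor worker doesn't need marketing data),
# not preconceived silos (logistics shares the floor stream too).
POLICY = {
    "floor_worker": {"factory_floor"},
    "marketing": {"sales", "customer_prefs"},
    "logistics": {"factory_floor", "inventory"},
}

def can_access(role: str, stream: str) -> bool:
    """Grant access when the stream is in the role's allowed set."""
    return stream in POLICY.get(role, set())
```

Governance then becomes a question of editing the table, not of gatekeeping the data itself.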
Make no mistake about it: simply creating vast data lakes won’t automatically cure companies’ varied problems. But C-level managers who realize that giving up control over data flow lets real-time sharing create possibilities that were impossible to visualize in the past will make data lakes safe, navigable, and profitable.