In pondering decisions about IoT-based service strategies and the investments they require, OEMs and other manufacturers should weigh the downside of losing touch with their data and installed base more heavily than the still uncertain upside of revenues from new business models. It helps focus the mind.
Most of the buzz over the past couple of years has been about the Internet of Things, data analytics and other emerging technologies that will open the way to many new services. These technologies can be understood as enablers of a business model shift in which OEMs (and most product manufacturers) move away from "transactions" (selling a machine and then servicing it for a fee) toward "relationships" (selling attributes and performance and tying compensation to outcomes via long-term contracts), a process we call "servitization". The new technology makes it possible to meet required performance standards at lower cost by reducing so-called deadweight costs (for example, assets break down less often) and should therefore be in high demand.
For OEMs, taking the IoT and "fee-for-outcome" route seems inevitable in the medium term. If assets fail less, service and parts revenues will decline. The right strategy therefore seems to be to turn a problem into an opportunity and ride the performance bandwagon. Before that can happen en masse, however, a number of hurdles need to be overcome:
- the bulk of the installed base of productive assets is not digitized in a way that is useful for IoT or that yields the (big) data necessary for meaningful analytics. Upgrading the asset base will take time and money, but monetizing that upgrade is a huge opportunity in itself, particularly for the suppliers of the necessary kit (sensors, communication and processing equipment, software…)
- most OEMs and manufacturers have only rudimentary capabilities in IoT, computing and analytics. These capabilities will have to be upgraded as well, and the available resources (people) are quite scarce. This is therefore an expensive undertaking, and costs will rise before increased supply catches up with demand
- the science of prognostics and health management (PHM) is still in its infancy. Models that predict failure and remaining useful life (RUL) at high confidence levels are still being researched or piloted and are not yet mature. Data-driven techniques will provide the horsepower, but the data histories first need to be collected or created and then verified, which will take time (a minimal sketch of such a data-driven RUL estimate follows this list). It will take additional time to automate diagnostics, failure analysis and interventions; we are by no means at the "plug and play" stage yet. The technologies may, however, help mitigate the effects of generational change and the fact that the "deep smarts" (the people who actually built the installed base and know most about it) are exiting the workforce in ever larger numbers
- the issues of connectivity and data ownership will have to be resolved, taking into account confidentiality, security, and cultural, regulatory and other constraints
- technical configurations will need to be tried out and optimized as the technology matures
- the evolution of the business value system will need to play out. In the early stages of what is essentially a new industrial relationship model, nobody really knows who the best supplier or the best owner of anything is. Beyond the OEMs and their customers, many companies are staking claims: sensor manufacturers, connectivity and communication providers, network companies, computing and automation companies, as well as many different service providers offering analytics, modeling and other techniques.
- finally, for the real benefits to be realized, a high degree of standardization must develop so that machines can talk to other machines and to computing engines regardless of vendor.
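To make the PHM hurdle concrete, here is a minimal sketch of what a data-driven RUL estimate can look like. Everything in it is an assumption for illustration: the run-to-failure histories are synthetic, the two sensor features are invented, and the gradient-boosting model is just one plausible choice; production PHM requires verified data histories and rigorous confidence estimation.

```python
# Minimal sketch of a data-driven RUL (remaining useful life) estimate.
# All data here is synthetic; a real PHM model would be trained on
# verified run-to-failure histories from the installed base.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fabricated training data: each row is one machine at one point in its
# life, with two sensor readings that drift as the asset wears out.
n = 5000
age = rng.uniform(0, 300, n)                      # cycles already run
vibration = 0.02 * age + rng.normal(0, 1.0, n)    # rises with wear
temperature = 70 + 0.05 * age + rng.normal(0, 2.0, n)
rul = 300 - age + rng.normal(0, 5.0, n)           # cycles left to failure

X = np.column_stack([vibration, temperature])
X_train, X_test, y_train, y_test = train_test_split(X, rul, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
errors = np.abs(model.predict(X_test) - y_test)
print(f"mean absolute error: {errors.mean():.1f} cycles")
```

The point of the sketch is the dependency it exposes: the model is only as good as the labeled failure histories behind it, which is exactly the data that has yet to be collected and verified for most of the installed base.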
Given this state of early and creative confusion, what is the best way forward for OEMs? Should they try to get into the game as early as possible despite the lack of clarity, or should they wait until things settle down? One significant risk of waiting concerns the claim to the data: digitization of assets is vendor neutral. If third parties (customers, competitors, independent service providers) digitize asset data without the OEM, the OEM not only endangers its position as the key influencer of asset performance (jeopardizing a potential service revenue stream) but also runs an increased risk of product commoditization and limits its differentiation potential. So access to, even "ownership" of, the data (through an agreement with customers) is something OEMs can ill afford to lose. And the best way of keeping in touch with the data is not letting go of it in the first place, that is, selling not the asset but outcomes, through business models such as "power-by-the-hour".
However, in the absence of maturity in the IoT technical and competitive landscape, it is difficult for OEMs to develop coherent strategies. Defining this business meaningfully, with sufficient market impact, requires very significant bulk. This is where GE stepped in and firmly seized the opportunity. With its Predix platform (which it has bolstered through partnerships, including with Cisco, Intel, Amazon Web Services and diverse telecommunication providers), GE is trying to dominate the IoT world the way Microsoft dominated PC operating systems, Facebook social media and Apple mobile computing. Given the size of GE's installed base (access to data), its technology (it ranks, surprisingly, among the top software businesses in the world, though its software is integrated into its industrial solutions and therefore goes unnoticed) and its financial muscle, it could well succeed.
Predix is a (mainly) software-based platform that allows vintage- and vendor-neutral machine-to-machine (M2M) communication and connects to the cloud. On top of that, utilizing the generated data, GE has both built its own Apps for asset management (having made large investments in computation and analytics capabilities) and opened the system to App development by third parties, including customers, vendors and service providers. The value of a platform increases when it becomes the de facto standard that everyone must use because everyone else is already using it. GE has been the first mover here and therefore has a significant advantage in establishing Predix as that standard, supported by its own huge data streams (installed base) and investment capacity. Competitive responses have been muted or partial. Some large companies are trying to provide their own alternatives or are focusing on applications rather than platforms. Others are concentrating on "cognitive engines", i.e. smart software incorporating analytics and other tools that will power diagnostics, prognostics or interventions (e.g. IBM's Watson platform).
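What such a third-party App might look like at its simplest is sketched below. The JSON message schema, asset names and alert threshold are all invented for illustration; this is not Predix's actual API, only the vendor-neutral pattern the platform model implies: any App can consume telemetry from any machine, whoever built it.

```python
# Sketch of a third-party asset-management "App" consuming
# vendor-neutral machine telemetry. The message schema and alert rule
# are hypothetical; real platforms expose their own ingestion APIs.
import json

# A few fabricated telemetry messages, as they might arrive from
# machines of different vendors over a shared M2M channel.
TELEMETRY_STREAM = [
    '{"asset_id": "pump-017", "vendor": "A", "vibration_mm_s": 2.1}',
    '{"asset_id": "pump-017", "vendor": "A", "vibration_mm_s": 7.8}',
    '{"asset_id": "fan-203", "vendor": "B", "vibration_mm_s": 1.4}',
]

VIBRATION_ALERT = 4.5  # mm/s, hypothetical alarm threshold

def handle_message(raw: str) -> None:
    """Parse one telemetry message and flag assets needing attention."""
    msg = json.loads(raw)
    if msg["vibration_mm_s"] > VIBRATION_ALERT:
        print(f"ALERT {msg['asset_id']} (vendor {msg['vendor']}): "
              f"vibration {msg['vibration_mm_s']} mm/s exceeds threshold")

for raw in TELEMETRY_STREAM:
    handle_message(raw)
```

The interesting property is that the App never needs to know who built the machine; that is precisely why platform-level standardization, and who owns the platform, matter so much.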
In assessing strategies and responses at this stage, therefore, OEMs must be guided primarily by the potential downside of being left out and disconnected from the data, rather than by the concrete upside of new services, the timing and value of which are still uncertain. The best way forward, at least for the time being, is to invest in analytics, computing and data capability and to develop end-to-end Apps (from prognostics to remote intervention) for their installed base, probably on GE's platform or possibly on another. This requires non-trivial upfront investment (process and industrial automation vendors are of course best placed here), and some of the revenue eventually generated will stick with GE; but the risk of losing touch with the installed base is reduced, and an opportunity opens up in the new market for industrial and asset management Apps. One certainty is that IT is moving from running supporting processes to sitting squarely at the heart of industrial businesses.
And this is only the beginning…