Technology creates the conditions to achieve previously impossible outcomes at vastly reduced costs. It also changes not only the rules of the game, but the game itself. It affects all businesses, all industries and all economies. How it impacts service and how to strategize in this emerging environment is the focus of this article.

Note: This post is in three parts. The first part contextualizes and explains technology development, in terms of pivotal products and cost reduction, up to today. The second part provides a historical framework of service development and the impact of the technology of the time, up to the early 1990s. The third part will start in the 1990s and examine how the service business evolved and moved to the center of strategic action, and how this was and is being driven by technology.

PART 1: Contextualizing Technology

For reasons we don’t yet fully understand, in the second half of the 18th century a process now known as the Industrial Revolution began to fundamentally alter the world. It led to an unprecedented, continuous expansion of economic output, rising living standards across the board and, eventually, across the world, and rapid population growth. It was fueled by mutually reinforcing, almost simultaneous, industrial innovations in textile machinery, energy (the steam engine) and iron making, which, in turn, led to advances in machine tools, transportation, medicine and myriad other areas, and which over a century and a half led to the world of approximately 50 or 60 years ago.

For example, average global life expectancy grew from 26 years in 1820 to 31 years in 1900 and 49 years in 1950. Having barely budged since the dawn of humans, it rose at compound annual rates of 0.22% (1820–1900) and 0.92% (1900–1950). And while GDP in industrialized economies grew by 300% over the same period, hours worked per capita declined by over 60%. Following the Industrial Revolution, we live longer, consume more and work less. Prior to the Industrial Revolution, however, societies were stuck in a so-called “Malthusian trap” of unending stagnation in living standards and life expectancy.
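These rates follow from the standard compound-growth formula, where x0 is the starting value, xt the ending value and t the number of years; applied to the life-expectancy figures above:

```latex
r = \left(\frac{x_t}{x_0}\right)^{1/t} - 1,
\qquad
\left(\frac{31}{26}\right)^{1/80} - 1 \approx 0.22\%,
\qquad
\left(\frac{49}{31}\right)^{1/50} - 1 \approx 0.92\%
```

The same formula underlies the compound growth rates quoted throughout this article.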
While we cannot yet fully explain how and why the Industrial Revolution came about, or the reasons for its timing and original location (Britain), we do know that it was preceded by a pivotal product: the printing press, invented by Johannes Gutenberg around 1439, which set off an early explosion of what today we call data. (Of course, other fundamental inventions preceded the printing press: the plough, which ushered in agriculture and transformed civilization; the wheel; even writing. But their effects did not unfold on a short enough timescale to reach critical mass and unleash the chain reaction and explosive growth that came with the Industrial Revolution.)

The chart below shows actual numbers of manuscripts and books produced per century in Britain, France, Germany, Belgium, Netherlands, Italy and Spain (log scale). Manuscript production grew from 130,000 in the 10th century to 4.5 million in the 15th, a compound centennial growth rate of approximately 80%. But the production of books was orders of magnitude greater, starting at 12.1 million in the 15th century and rising to 900 million in the 18th century, a compound centennial growth rate of 200%. In fact, this under-represents the growth in data and information, as books contained far more pages and vastly greater content variety.

Based on: Eltjo Buringh and Jan Luiten van Zanden, “Charting ‘the Rise of the West’: Manuscripts and Printed Books in Europe, a Long-Term Perspective from the Sixth through Eighteenth Centuries”, Journal of Economic History, June 2009.
The explosion in the demand for and production of books was caused by the dramatic fall in costs. The impact of technology is not only the achievement of an outcome that was previously impossible, but also the drastic reduction, in this case near evaporation, of the costs (resources) required to produce it. The cost of producing a text in 1640, compared with 1380, i.e. just before the printing press, was a staggering 1,200 times lower: from 208 days’ worth of average wages to 0.17 days. Books enabled data storage and retrieval at a previously unimaginable scale, and therefore the permanent capture and dissemination of knowledge. They enabled the growth of social movements and the flourishing of science, and ushered in the Industrial Revolution. And they did not only profoundly change the world; they also set the foundations for further acceleration in the development of technology and further radical change.
Another pivotal enabling innovation, electricity, had a similar effect at the close of the 19th century. Artificial light in 1880, at the time of Edison’s light bulb, cost roughly 400 times as much as it does today. People may have had books, but outside of daylight it was almost prohibitively expensive to read them. The falling cost of light not only made more reading possible, thereby accelerating the accumulation and dissemination of knowledge; it also allowed, among many other things, the development of large buildings (offices, factories) and extended the working day. And the costs of both books and light continue to drop: the marginal cost of a book in digital form is now essentially zero, and the cost of light (electricity) is trending in the same direction.
The cost of solar PV, which relies on semiconductor technology, has not only dropped drastically; the decline has consistently outpaced expert expectations, as has the deployment growth rate. For example, the price of US$0.018 (1.8 cents) per kWh set at an auction in Saudi Arabia in 2017 was not supposed to be reached until the 2030s, when volumes were expected to be far greater. A similar cost trajectory is underway in battery technology.
Electricity, of course, also made possible the electric motor, which helped revolutionize manufacturing, and it set the stage for the development of computation as we understand it today.

The early story of computation played out from roughly the early 20th century to the late 1970s. The years up to World War II saw mainly the formulation of fundamental theoretical principles, while the later years focused primarily on practical applications, both hardware and software: computers, compilers, switches and storage, with a consistent trend toward electronic, digital and miniaturized systems. The era culminated, perhaps, with the Apple II, an advanced desktop computer; Intel’s 8086 microprocessor; and VisiCalc by VisiCorp, the first PC-based spreadsheet. These products were highly successful commercially because they enabled outcomes at fractions of the previous costs and at high levels of reliability, thereby disrupting and simultaneously expanding their markets and industries.

Development was rapid in the following decades as well: the IBM PC and Microsoft’s MS-DOS were launched in 1981 and enabled significant industry standardization and economies of scale, supporting extraordinary growth rates. Global shipped PC units grew from 48,000 in 1977 to 125 million in 2001; by 2002, over 1 billion units had been shipped and over 500 million were in use. And growth continued: by 2008, over 1 billion PCs were in use.

In the meantime, however, between 1989 and 1991, Tim Berners-Lee (and his associates) developed HTML, invented the World Wide Web and wrote the first web browser. These innovations, which for most users came to define the internet, gave rise to another extraordinary explosion, this time in connectivity, data creation and exchange. The number of websites grew to over 100 million by 2007 (a CAGR of 199%!) and the number of internet users surpassed 1 billion by 2005 (a CAGR of 39%).

And we should not forget the “back-end” technology that made this possible, including continuous innovation and miniaturization in storage (flash memory cards: 1994; DVDs: 1995; CD-RWs: 1995; Microdrives, USB flash drives: 1999; HD-DVD: 2004; the first 1 TB (terabyte) HDD, introduced by Hitachi in 2007), displays (LCDs, touchscreens, e-paper), connectivity (Bluetooth: 1994; 56K modem: 1996; first Nokia smartphone: 1996; Wi-Fi 802.11 standard: 1997; RIM BlackBerry smartphone: 2002; broadband overtaking dial-up: 2003; 3G: 2003) and many other areas. Most of all, of course, in microprocessors: transistor count in CPUs went from 3.1 million on an area of 294 mm², achieving 100 MIPS, in the state-of-the-art Intel Pentium of 1993, to 731 million on an area of 263 mm², achieving 82,400 MIPS, in the Intel Core i7 (quad) of 2008. And, following innovations in graphics capabilities by NEC, Texas Instruments, Sharp and IBM, Nvidia introduced the “first Graphics Processing Unit” (GPU: a specialized electronic circuit designed to manipulate and render graphics and images) for the consumer (gaming) market in 1999. Presented as the “dawn of cinematic computing”, the GeForce 256 GPU had a transistor count of 22 million. By 2007, GPU transistor counts had risen to approximately 250 million and, combined with design and software advances, had turned GPUs into general-purpose computing systems, taking over numerous tasks from CPUs.

Advances were rapid in software as well, in particular in operating systems (Windows, Linux, Apple); browsers (Mosaic, Netscape Navigator, Internet Explorer); languages (4GL languages and SQL; high-level languages); office applications (spreadsheets and word processing); games; graphics and graphic-art applications; computer-aided design; e-commerce and EDI; and, after 2000, wikis, content management systems, social media platforms and VoIP (e.g. Skype in 2003). And, of course, directories and search engines (Excite, Yahoo, Lycos, AltaVista) began to transform the internet into a vast repository and source of information, which took a quantum leap following the introduction of the Google search engine. In 1996, Lycos indexed 60 million documents (the largest number of any engine at the time), and Google started in 1997 with around 10,000 searches per day. By 2008, Cuil (a company founded by former Google employees) had indexed over 127 billion web pages, while Google was handling those 10,000 searches in around half a second (1.75 billion per day; the figure has since surpassed 3.5 billion per day). There is, in fact, significant evidence from neuroscience research suggesting that Google is altering the human brain through the outsourcing of memory functions and processes to the search engine.

The explosive growth in computing was mirrored in imploding costs driven by innovation and scale. “Moore’s law”, the empirical statement that the number of transistors on a single chip would double every 18 months, became shorthand for semiconductor manufacturing innovation. The pacing technology has been the photolithographic processes used to pattern chips. From the early 1970s through the mid-to-late 1990s, a new “technology node”, i.e. a new generation of photolithographic and related equipment and materials, was introduced at intervals of 2-3 years. As this also happened to be the interval between introductions of next-generation DRAM memory chips, each storing four times the volume of the previous generation, and DRAMs were the highest-volume standardized commodity chip produced, the scale and growth of the computer market drove investment, technology development and falling manufacturing costs. Chips became vastly denser as transistors shrank (hence the rising transistor counts!) and the cost per transistor fell. And this measures only part of the economic benefits of the innovation dynamic: with smaller transistors also came faster switching times and lower power requirements, highly significant for computer makers and their customers. For example, while Intel’s Pentium microprocessor, introduced in 1993, packed 3.1 million transistors at 0.8 μm and a 66 MHz clock speed, its Xeon processor, introduced in 2007, packed 820 million transistors at 45 nm and clock speeds above 3 GHz: a reduction in feature size of almost 18 times.
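As a rough, illustrative check of what such doubling rates imply, the sketch below (Python) projects transistor counts forward from the 1993 Pentium under the two commonly quoted doubling periods and compares them with the 2007 Xeon figure cited above; the counts are those from the text, the rest is simple arithmetic.

```python
# Illustrative sketch of what Moore's-law-style doubling implies, using the
# Pentium (1993) and Xeon (2007) transistor counts cited in the text.

def projected_transistors(start_count: float, years: float, doubling_months: float) -> float:
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = years * 12 / doubling_months
    return start_count * 2 ** doublings

pentium_1993 = 3.1e6   # transistors, Intel Pentium (1993)
xeon_2007 = 820e6      # transistors, Intel Xeon (2007)
years = 2007 - 1993

for months in (18, 24):
    projected = projected_transistors(pentium_1993, years, months)
    print(f"{months}-month doubling: {projected / 1e6:,.0f}M projected vs "
          f"{xeon_2007 / 1e6:,.0f}M actual")
```

The actual 2007 count falls between the 24-month and 18-month projections, broadly consistent with the 2-3 year node cadence described above.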

The self-reinforcing effect of continuous innovation and cost reduction fueled the growth of the broader computer industry and expanded outwards as costs declined, initially for products and services with high information (digital) content, and then for others as they were digitized in order to become cheaper. Communications, television, photography, books, entertainment and banking are prime early examples. Other industries got “disintermediated” with the removal of… intermediaries, such as the travel industry and the gradual disappearance of travel agents. In medicine and the biosciences, the cost of sequencing a genome, a fundamental enabling technology, started to drop rapidly, opening up entirely new horizons.
The implications were significant in manufacturing as well: many products started to incorporate “smart” (i.e. digital) features. At the operational level, the number of robots used in the US and western Europe grew fourfold from 1993 to 2007, mainly in the automotive and electronics industries. In the same period, the amount of electronics in factories as a percentage of installation value doubled (the real impact was considerably larger, considering the falling prices of electronics). The across-the-board, large-scale deployment of key technologies in manufacturing automation and control, such as the Programmable Logic Controller (PLC), the Distributed Control System (DCS), SCADA and the Variable Speed Drive (VSD), was made possible by the shrinking size, improved performance and falling cost of computing. Arguably, the impact was greatest in supply chain and logistics: the emergence of ERPs allowed the integration of multiple databases, large improvements in data availability and accuracy, and better planning and control of product flow. The ubiquity of barcodes and RFIDs, together with new algorithms, enabled deeper warehouse automation and improved inventory management, while telematics helped make distribution more cost-effective. In all likelihood, Just-in-Time or Lean Manufacturing would not have been possible without the available technological capability, and supply chains would not have globalized as they did.

From spare parts logistics to field service, warranty and asset management, technology also transformed service operations. The enablers were similar to the technological advances in manufacturing and supply chain management: service and asset management systems were integrated into ERPs and, in addition, the foundations for semi-automated customer support were laid by the internet and faster, cheaper telecommunications.

Arguably, one of the biggest impacts in service was on equipment and machinery diagnostics through condition monitoring, and particularly vibration analysis for rotating machinery, which came of age through the spread of computing. While the theoretical background for vibration analysis was laid between the mid-19th and mid-20th centuries, its practical use was made possible by the introduction of Fast Fourier Transform (FFT) analyzers, PCs and expanded data collection capabilities.
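To make the point concrete, here is a minimal sketch of the core operation an FFT analyzer performs, in Python with NumPy; all frequencies and amplitudes are invented for illustration, not taken from any real machine. A weak “fault” tone that is invisible in the raw waveform stands out clearly in the spectrum.

```python
# Minimal illustration of FFT-based vibration analysis: a weak "fault" tone
# buried in noise is hard to see in the raw waveform but obvious in the spectrum.
# All signal parameters below are invented for illustration.
import numpy as np

fs = 2048                          # sampling rate (Hz), one second of data
t = np.arange(fs) / fs
shaft_hz, fault_hz = 25.0, 160.0   # shaft rotation and hypothetical fault tone

rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * shaft_hz * t)           # dominant shaft vibration
          + 0.2 * np.sin(2 * np.pi * fault_hz * t)   # weak fault component
          + 0.5 * rng.standard_normal(fs))           # measurement noise

amplitude = np.abs(np.fft.rfft(signal)) * 2 / fs     # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# Find the strongest spectral line above the shaft frequency.
mask = freqs > shaft_hz + 5
peak_hz = freqs[mask][np.argmax(amplitude[mask])]
print(f"Strongest non-shaft line: {peak_hz:.0f} Hz "
      f"(fault tone injected at {fault_hz:.0f} Hz)")
```

Before cheap FFT hardware and PCs, extracting such a spectrum required dedicated, expensive instrumentation; as computation got cheap, so did the analysis.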

Diagnostics became more timely and accurate, enabling a shift in approaches to asset management: from “break-and-fix” or time/use-based preventive maintenance to condition-based/predictive maintenance. Concepts such as Reliability Centered Maintenance, Risk-Based Maintenance or the Total Productive Maintenance and Overall Equipment Effectiveness (OEE) approaches, which sprang up in the 1990s, all had a computational pillar. For many companies, particularly those operating large critical assets, such as power generators or process plants, this enabled large cost savings and improved uptime and reliability. For example, it has been reported that the first condition monitoring project at the US Navy, in the late 1980s, produced a 15:1 return on investment. Condition monitoring equipment (hardware and software) and services created a significant market following their introduction, estimated at around $10 billion p.a., while reducing maintenance costs by a multiple of that, mainly through the avoidance of both costly failures and unnecessary interventions. Nevertheless, the high variable costs (mainly experienced engineers to analyze and interpret the data) meant that it could not scale significantly at the time.

Three decades of rapid innovation in miniaturization and the falling costs of transistors, semiconductors, computer chips, sensors and displays combined with the internet, low-cost connectivity and the availability of innovative algorithms and applications to set the stage for the emergence of the pivotal product of the current era, the smartphone: a process that started in 1996 with the first Nokia smartphone, ran through the RIM BlackBerry and culminated in the original Apple iPhone in 2007.

The original iPhone launched on June 29th, 2007, and quickly became a runaway success, transforming Apple, the industry and the world in the process. Its price was US$499 (4 GB version). Sales rose from 270 thousand units in Q3 2007 (its first quarter on sale) to 1.12 million in Q4 and 2.32 million in Q1 2008. Compound annual sales growth through 2017 has been 35%. The device created an explosion in data and in demand for services deliverable through the phone, and thus fueled revolutions in connectivity technology, sensing and actuation, as well as in algorithms to analyze and interpret the data. It enabled new business models and induced the disruption of numerous, if not all, industries, many at first glance completely unrelated. It ushered in the digitization era.

As the iPhone evolved, performance improved and new features were incorporated. By the time of the iPhone 5S in 2013, the computing and communications capabilities incorporated in the device would have cost over $3.5 million in 1991. This amounted to significant “dematerialization”: the process whereby the resources (costs) required to achieve an outcome fall towards zero. In addition, by 2011 the device provided free access to over $900,000 worth of applications, valued at their recommended retail prices at the time of market introduction.
Currently, there are almost 1 billion 4G LTE mobile phones in use globally, a number that has grown at a CAGR of almost 40% over the past four years. And global data growth has mirrored and exceeded the growth rate in devices and will continue to do so.
The iPhone also induced technology leaps in Micro-Electro-Mechanical Systems (MEMS), particularly in sensing and actuation, in terms of performance, size and cost, both through the requirements of the device itself and through the applications the device enabled. For example, the average cost of an IoT sensor dropped from approximately $1.30 in 2004 to $0.48 in 2017.
In turn, this has created demand for connectivity and for the incorporation of intelligence into all manner of devices (Smart Connected Products) and beyond. Ericsson expects over 20 billion smart products/devices to be connected to the internet by 2023. The Internet of Things, and with it the Internet of Everything, is rapidly emerging.
The gigantic increase in data created a need for analysis and interpretation and brought about the development of Data Science, and of a branch of Artificial Intelligence called Machine Learning, as disciplines. This has a profound effect: just as semiconductors and computing caused the cost of arithmetic to drop, so that everything which could be expressed in numbers became cheaper, inducing the shift to digital (for example, from chemical to digital photography), so Machine Learning, combined with huge data volumes, reduces the cost of prediction. Every outcome that can be expressed as the solution to a prediction problem should therefore become drastically cheaper: demand forecasting or inventory management, for example. It also induces problems not traditionally associated with prediction to be reframed as prediction problems. This is driving a shift in algorithm design from “rules-based” approaches to pattern recognition and Machine Learning. In autonomous vehicles, for example, devising rule-based algorithms would have been essentially impossible due to the huge number of variables involved; with Machine Learning, the system instead predicts what a human driver would do at every instant of driving. This provides the basis for enhanced automation and autonomy (in products and processes) to permeate all aspects of manufacturing, services and the economy as a whole. In broader terms, work, business and life will be affected in ways we cannot at present foresee. For example, while the impact of automation on jobs and the distribution of wealth is currently much debated, it remains essentially unknown.
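As a minimal sketch of what “cheap prediction” looks like in practice, the example below poses demand forecasting as a standard supervised-learning problem (Python with scikit-learn; the demand series is synthetic and purely illustrative):

```python
# Minimal sketch: demand forecasting posed as a prediction problem.
# The demand data is invented for illustration; scikit-learn's standard API is used.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = np.arange(104)                              # two years of weekly data
demand = (200 + 1.5 * weeks                         # underlying trend
          + 30 * np.sin(2 * np.pi * weeks / 52)     # annual seasonality
          + rng.normal(0, 10, weeks.size))          # noise

# Encode trend and seasonality as features; the model learns the pattern
# from data instead of us writing explicit forecasting rules.
X = np.column_stack([weeks,
                     np.sin(2 * np.pi * weeks / 52),
                     np.cos(2 * np.pi * weeks / 52)])
model = LinearRegression().fit(X, demand)

next_weeks = np.arange(104, 108)
X_next = np.column_stack([next_weeks,
                          np.sin(2 * np.pi * next_weeks / 52),
                          np.cos(2 * np.pi * next_weeks / 52)])
print("4-week demand forecast:", model.predict(X_next).round(1))
```

The point is less the specific model than that the pattern is learned from data; no hand-written forecasting rules are required, and the same few lines scale to thousands of products.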

Be that as it may, we do know that digital technologies, data and Machine Learning algorithms, by providing better control and de-risking of processes, open the door to the full-blown servitization of economies, whereby the use of products, rather than the products themselves, is the focal point of exchange and economic activity. In addition, we know that digital technologies tend to drastically drive down (even evaporate) costs, i.e. what we have called dematerialization, and we also know that in many cases platforms tend to beat products; the iPhone itself was a triumph of a platform over product competitors. Platforms tend to come with different economics: significant “network effects” and “winner take most or all” markets (see further reading below and the stylized note at the end of this section). The future (of service) will include self-correcting, self-optimizing and even self-maintaining products/machines and processes based on predictive algorithms; large numbers of 3D-printed components and parts; and humans interacting with machines at deep levels through Augmented Reality, while many physical and customer support tasks will be carried out by robots. The nature of service will therefore shift from primarily logistics to knowledge creation and exchange. This will alter the way business is conducted and revenue is generated, as well as the sources of cost. How to strategize for competitive advantage, and particularly how to position businesses in value chains (or platforms) in this emerging environment, is the key question to answer.
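A common, stylized way to see why network effects tend to produce “winner take most” markets: if each of n users can potentially interact with every other user, a crude proxy for platform value grows quadratically while costs grow roughly linearly,

```latex
V(n) \;\propto\; \frac{n(n-1)}{2} \;\sim\; n^2,
\qquad
C(n) \;\sim\; n
```

so the platform with the larger user base becomes disproportionately more valuable, pulling the market toward concentration. This heuristic (a form of Metcalfe’s law) is a stylization rather than a law of nature, but it captures the platform economics referenced above.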

End of Part 1

Further reading

This site:

Technology is servitizing the value paradigm: It’s time to change management thinking

Platforms in industry: Disrupting the world of products

Digitization: How to know whether an industry is being disrupted

Other:

The simple economics of machine intelligence (Harvard Business Review)

A manager’s guide to Augmented Reality (Harvard Business Review)

Scale effects, network effects and Investment strategy (Harvard Business Review)

Multi-sided platforms: Foundations and strategy (Harvard Business Review)