Technology creates the conditions to achieve previously impossible outcomes at vastly reduced costs. It changes not only the rules of the game but the game itself. It affects all businesses, all industries and all economies. How it impacts service, and how to strategize in this emerging environment, is the focus of this article.

Note: This post is in three parts. The first part contextualizes and explains technology development, in terms of pivotal products and cost reduction, up to the present day. The second part provides a historical framework of service development and the impact of the technology of the time, up to the early 1990s. The third part starts in the 1990s and examines how the service business evolved, how it came to the center of strategic action, and how this was, and still is, being driven by technology.

Long Read

Part 2: Contextualizing service

 

DTE Energy is a medium-sized utility and energy services company in Detroit, Michigan. Each year it recognizes employees for outstanding customer service with the “Sarah Sheridan Award”. And Sarah Sheridan was remarkable, not only because in 1905 she was the company’s head of sales, but because she devised an offering, much imitated by others later, whereby DTE provided electric motors to manufacturers, with engineering, installation, and maintenance free of charge, as part of its effort to increase sales of electricity in its market area. That sales effort spearheaded the transition towards the electrification of manufacturing, which in turn enabled mass production and the vast increases in productivity it generated. So a utility, with a woman as sales chief in 1905, was probably one of the first vendors of product-related service contracts (called product-service bundles today) for electrical machinery. Motor manufacturers were too far away, communications and travel were cumbersome, and it is doubtful that they would have been interested in providing the service at the time.

The early history: Four things were necessary for mass production and industrialization: electric motors, new machine tools, interchangeable parts, and the ability to rapidly repair breakdowns

Prior to the electric motor (1880), and even for 30 years after its introduction, steam was the prevalent energy source in industry, with steam engines providing about 80% of mechanical drive capacity in US and European factories. By 1920, however, electricity had surpassed steam, and by 1929 electric motors represented almost 80% of total capacity for driving machinery. The efforts of utilities bore fruit, and the electrical transformation enabled a dramatic decline in energy use per unit of GDP and a surge in productivity through new forms of technical and managerial organization.

In the age of steam (and water) power, production machines were directly linked mechanically to centrally located prime movers (steam engines), which turned iron or steel line shafts (running the entire length of the factory, often even into adjacent buildings) via pulleys or leather belts. The line shafts turned secondary (or counter-) shafts, again via pulleys and belts, to which the machine tools (hammers, punches, presses, looms, etc.) were connected. The entire network of main and secondary shafts operated continuously, regardless of how many machines were actually in use at any given time. If a line shaft or the steam engine broke down, production ceased in the whole factory until repairs could be made. Expanding a factory or modifying the production line was very difficult, in many cases impossible.

Eventually electric motors replaced the in-house steam engines, eliminating the need for factories to be close to coal stores or water. Electric power could be produced at appropriate sites and transmitted to factories a significant distance away. Initially, motors turned the same line shafts as steam engines (manufacturers were familiar with their behavior and problems, and it was easy to replace the upstream system without making downstream changes), but within a few years it became apparent that electricity could transform the way power was distributed within the factory and eliminate the problems associated with line shafts: large friction losses, the need to turn all the shafting regardless of the number of machines in operation, and the stoppage of all production in the event of a failure in the shaft or the motor.

So production machinery started to be arranged in groups, each with its own motor and a relatively short shaft, and each group could be operated independently. “Group drive”, as it came to be called, also allowed machinery to be positioned in more efficient ways and consolidated in specialized shops or departments, further improving productivity. As manufacturers gained experience, machinery was belted directly onto line shafts, eliminating counter-shafts, reducing power losses, and leading to even greater consolidation and specialization.

As shafts became shorter, some forward-thinking engineers entertained the radical idea of a motor for each individual machine. “Unit drive” eventually came to pass (with AC motors, but that’s another story), because it radically improved energy efficiency by eliminating friction losses and the need to turn shafts for machines that were out of service. And although capital costs were higher (the motor of each machine had to be sized for that machine’s maximum demand, whereas group motors could be sized to take advantage of load diversity, so total installed capacity rose substantially), the additional production that could be achieved outweighed the additional costs by an order of magnitude, through better flow of production, improved machine control, improved working conditions and, primarily, ease of plant expansion at a time of rapidly growing demand.

And, as we saw above with Sarah Sheridan, utilities did a good job of persuading manufacturers, particularly smaller ones, to buy their electricity, as rapid expansion brought about higher efficiencies and economies of scale. At the time of DTE’s offering, 70% of power used in factories was generated on-site. By 1919, 57% was generated by electric utilities.

During most of this time, production machinery and machine tools were designed by engineers-craftsmen hired for the job. They were highly customized, built for purpose, and there was no standardization. When they broke down, which they often did, parts had to be fabricated (at great cost and over a long time) and maintenance was provided either by the operating company or by local independent machine shops, which had started to appear in the US and Europe in the early 1800s. Service support from established machine manufacturers hardly existed at all.

This started to slowly change with the emergence of the idea of interchangeable parts, which eventually led to both standardization and mass production.

In the 1760s, in an effort to rapidly rebuild French army capacity following defeat in the Seven Years’ War (the French and Indian War, 1754-63) in North America, General Jean-Baptiste de Gribeauval proposed manufacturing arms with interchangeable parts. The idea reached the US in the 1790s, when the Federal Government, fearing war with France, awarded a contract to American manufacturer Eli Whitney (the inventor of the cotton gin) for the supply of 10,000 “stands of arms” to be delivered within 28 months at a cost of US$134,000. This was the largest government contract ever awarded until then, and for an unprecedented number of muskets in a very short period of time: roughly 4,300 muskets per year, at a time when muskets had been handcrafted and the combined capacity of the federal armories was no more than 1,500 firearms per year. But Whitney proposed a different, mechanized approach: “… Machines for forging, rolling, floating, boring, grinding, polishing, etc. may all be made use of to advantage.” And later, after constructing his factory: “One of my primary objectives is to form the tools so that the tools themselves shall fashion work and give to every part its just proportions, which once accomplished, will give exceptional uniformity to the whole.” Making the guns was a formidable task. The difficulty was that “A good musket is a complicated engine and difficult to make, difficult of execution because the conformation of most of its parts correspond with no regular geometrical figure.” In addition, such machines as existed at the time were constructed of wood and iron; they flexed and could not produce parts to very tight tolerances.

As it turned out, Whitney delivered the guns 8 years late. But his methods improved, and in 1812 he delivered 15,000 guns, on time, under another contract. His production system was adopted by the US government, not only for its own armories but for later contracts with other manufacturers as well: in 1813, in another contract for pistols, the government specified that “the component parts are to correspond so exactly that any limb or part of any one Pistol may be fitted to any other Pistol of the twenty thousand.” In the following years, government and private manufacturers agreed on a strategy to standardize arms manufacture, and the latter subsequently invented a variety of machine tools for the purpose (milling and grinding machines, the turret lathe, barrel-turning lathes, metal cutting machines and others), in an early modern example of defense and government procurement driving technology and standardization. By the 1830s, “Armory Practice” was being extended to other types of manufacturing by trained workers who ventured into other fields.

The demand for armaments during the American Civil War caused a great expansion of America’s machine tool industry. To facilitate growth in the following years, a uniform set of measurements and gauges was established in both America and Europe, which was instrumental to the machine tool industry’s ability to produce “absolutely interchangeable parts”.

Throughout the early history of the American machine tool industry, flexibility was key. The same machines could be used to make different kinds of products (which was not always the case in Europe). So companies which had produced arms during the Civil War had no trouble converting to making sewing machines, typewriters, or bicycles, and later automobiles and airplanes. It is worth noting that both the Wright Brothers and Henry Ford constructed their first products from bicycle parts, using principles of bicycle design.

During this period, up to the 1920s (and the electrification of machine tools), machine shops founded by independent engineer-entrepreneurs, originally trained in the armories and the shops of major contractors, appeared, mainly in the vicinity of major manufacturing works. They served both as machine tool designers and builders and as maintenance and service providers, making and selling parts and providing engineering and technical labor, in a way that is still familiar today.

Ford’s assembly line, the advent of high-productivity, low-cost automobile manufacturing, the leaps in demand induced by low car prices and high worker wages, the very industrialization of America, and the reconstruction of Europe following WWII were all made possible by the breakthroughs described above in interchangeable parts and machine tool innovation, and by the ensuing ability of in-house or independent engineers and machine shops to quickly diagnose problems and fix breakdowns using interchangeable parts. A motor race cannot be won without the pit crew and the necessary parts on hand, and mass production would not have been possible without the ability to repair machines quickly.

From the 1950s through the 1980s, both mechanical and electrical equipment manufacturers were fully focused on product sales: rebuilding Europe and supporting the growth of the US economy. Service was mainly an afterthought, sometimes a necessary evil. Nevertheless, some manufacturers in the 1980s started separating the product from the service business, not necessarily to prioritize services, but rather to avoid inherent conflicts and disruptions to the product business. For example, Asea (now ABB), one of Europe’s largest motor manufacturers, started building a dedicated repair workshop network in Sweden to avoid the servicing of motors in its factories, which was expensive in terms of both overhead and opportunity cost. The focus on minimizing service cost was widespread, as service was mostly not seen as a revenue generator. But customers increasingly saw things differently. The advent of very expensive “Flexible Manufacturing Systems” (FMS), which required very high utilization rates to pay for themselves, and studies showing that the cost-effectiveness of machine tools had declined by 40% in the 20 years to 1975, caused customers to put significant pressure on vendors to reduce machine downtime (it also caused the demise of many machine tool manufacturers).

By the late 1970s, we see the first integration of (computer) technology into the service business. As technology advanced, and following the lead of the computer industry, some industrial manufacturers, particularly in the US and Germany, started experimenting with Remote Repair Diagnostics and Maintenance (RRDM), or “tele-maintenance” as it was then called, to save time and travel cost while providing timely and better support to customers. Already at the time, companies like Xerox, Digital Equipment Corp., and IBM had developed technologies that sent data from their equipment at customer sites over telephone lines to remote service centers, which monitored the equipment and alerted users to suboptimal performance or major problems. They could also send technicians and replacement parts before major failures arose. Among industrial equipment companies, one of the first to follow suit was Honeywell, with its Equipment Health Management and Abnormal Situation Management systems. Both were sophisticated software packages that constantly monitored important equipment in chemical plants. Plant operators could monitor their entire facilities from a central control room, and the systems also allowed Honeywell to monitor customers’ operations centrally from its facilities in Colorado. However, progress at the time was hampered by the lack of common standards, the high cost of communications, data transfer and storage, the inadequacy of diagnostic software, and the insufficiency of electronics (sensors and, hence, data and controllers) in equipment. Nevertheless, efforts continued, driven also by increased exports to remote customer locations.

By the 1980s we see the second integration of technology into services: eddy current probes and PC-based FFT analyzers jumpstarted the vibration analysis/condition monitoring industry, mainly for rotating machinery. By the early 1990s, the technology was being incorporated into industrial and process automation platforms. And in the late 1980s/early ’90s, machine tool builders in the US, Germany, and Japan, backed also by government programs, redoubled their efforts to provide RRDM. American companies like Giddings and Lewis, Bridgeport, and Cincinnati Milacron developed technology for remote monitoring and diagnosis of their products, but progress and customer uptake were again slow.

However, major progress had been made since the ’70s: on standards, on reduced costs for sensing and, most importantly, on the adoption of the internet. The internet provided not only an inexpensive means of data transfer; HTTP’s text-based transmission also provided a universal communications medium and protocol. Any standard software could now be used to access and process data. For example, remote monitoring systems with standard Ethernet connectivity allowed the controller to connect directly to the internet or access it via direct dial-up, a major shift from proprietary to standard controller interfaces. And Java expanded the possibilities of using standard Web technology for more complicated applications. With Java-enabled servlets located on remote systems, engineers could not only monitor and diagnose devices remotely; they could also upgrade, repair, and replace code in the embedded application.
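To make the mechanism concrete, here is a minimal sketch, in the spirit of those mid-1990s systems, of a Java servlet that receives readings a machine controller sends over plain HTTP and flags values beyond an alarm limit. It is not any vendor’s actual code: the class name, parameter names, and threshold are hypothetical, and running it requires a servlet container.

    // A minimal sketch, not any vendor's actual code: the class name, the
    // parameter names, and the alarm threshold are all hypothetical.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class MachineMonitorServlet extends HttpServlet {
        // Hypothetical alarm limit for overall vibration, in mm/s RMS.
        private static final double VIBRATION_ALARM_MM_S = 7.1;

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // The controller reports its identity and a reading as ordinary
            // form parameters: plain text over standard HTTP, which is what
            // made the approach vendor-neutral.
            String machineId = req.getParameter("machineId");
            String reading = req.getParameter("vibrationRms");
            if (machineId == null || reading == null) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST,
                        "machineId and vibrationRms are required");
                return;
            }
            double vibration = Double.parseDouble(reading);

            resp.setContentType("text/plain");
            PrintWriter out = resp.getWriter();
            if (vibration > VIBRATION_ALARM_MM_S) {
                // A real system would notify a service engineer; the sketch
                // simply echoes an alert back to the controller.
                out.println("ALERT " + machineId + " vibration=" + vibration);
            } else {
                out.println("OK " + machineId);
            }
        }
    }

Even in this toy the design point is visible: because the exchange is plain text over standard HTTP, any controller with an Ethernet port and any standard client software could take part, with no proprietary interface in between.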

Still, at the time, all data interpretation, diagnostics, and interventions were performed by humans. And there were problems: in a 1995 study of machine tool makers and machine builders, the Fraunhofer Institute in Germany found that only 20% of respondents to a survey covering 1,300 companies used RRDM; two thirds did not consider it meaningful or of value to their business, and 18% were not interested in the technology at all. Furthermore, those that did use it applied it to only between 1% and 5% of their customer base, and only 25% said they wanted to expand usage and functionality. And this was in spite of significant support from the German government, which viewed RRDM as a way for German companies to capture and sustain market share in remote markets without the great expense of building service networks. Respondents countered with legal problems, difficulties in closing RRDM contracts, customers’ unwillingness to pay, the difficulty of upgrading or equipping legacy equipment with sensors and, last but not least, the demands on the qualifications and know-how of service staff, including knowledge of basic internet technologies, and on their willingness and motivation to engage. It was apparent that, for many companies, the importance of service was not great enough to drive management to overcome these problems.

In a few short years, this finally started to change.

End of Part 2