One of the most consistent patterns in business is the failure of leading companies to stay at the top of their industries when technologies or markets change. Goodyear and Firestone entered the radial-tire market quite late. Xerox let Canon create the small-copier market. Bucyrus-Erie allowed Caterpillar and Deere to take over the mechanical excavator market. Sears gave way to Wal-Mart.
The pattern of failure has been especially striking in the computer industry. IBM dominated the mainframe market but missed by years the emergence of minicomputers, which were technologically much simpler than mainframes. Digital Equipment dominated the minicomputer market with innovations like its VAX architecture but missed the personal-computer market almost completely. Apple Computer led the world of personal computing and established the standard for user-friendly computing but lagged five years behind the leaders in bringing its portable computer to market.
Why is it that companies like these invest aggressively—and successfully—in the technologies necessary to retain their current customers but then fail to make certain other technological investments that customers of the future will demand? Undoubtedly, bureaucracy, arrogance, tired executive blood, poor planning, and short-term investment horizons have all played a role. But a more fundamental reason lies at the heart of the paradox: leading companies succumb to one of the most popular, and valuable, management dogmas. They stay close to their customers.
Although most managers like to think they are in control, customers wield extraordinary power in directing a company’s investments. Before managers decide to launch a technology, develop a product, build a plant, or establish new channels of distribution, they must look to their customers first: Do their customers want it? How big will the market be? Will the investment be profitable? The more astutely managers ask and answer these questions, the more completely their investments will be aligned with the needs of their customers.
This is the way a well-managed company should operate. Right? But what happens when customers reject a new technology, product concept, or way of doing business because it does not address their needs as effectively as a company’s current approach? The large photocopying centers that represented the core of Xerox’s customer base at first had no use for small, slow tabletop copiers. The excavation contractors that had relied on Bucyrus-Erie’s big-bucket steam- and diesel-powered cable shovels didn’t want hydraulic excavators because initially they were small and weak. IBM’s large commercial, government, and industrial customers saw no immediate use for minicomputers. In each instance, companies listened to their customers, gave them the product performance they were looking for, and, in the end, were hurt by the very technologies their customers led them to ignore.
We have seen this pattern repeatedly in an ongoing study of leading companies in a variety of industries that have confronted technological change. The research shows that most well-managed, established companies are consistently ahead of their industries in developing and commercializing new technologies—from incremental improvements to radically new approaches—as long as those technologies address the next-generation performance needs of their customers. However, these same companies are rarely in the forefront of commercializing new technologies that don’t initially meet the needs of mainstream customers and appeal only to small or emerging markets.
Managers must beware of ignoring new technologies that don’t initially meet the needs of their mainstream customers.
Using the rational, analytical investment processes that most well-managed companies have developed, it is nearly impossible to build a cogent case for diverting resources from known customer needs in established markets to markets and customers that seem insignificant or do not yet exist. After all, meeting the needs of established customers and fending off competitors takes all the resources a company has, and then some. In well-managed companies, the processes used to identify customers’ needs, forecast technological trends, assess profitability, allocate resources across competing proposals for investment, and take new products to market are focused—for all the right reasons—on current customers and markets. These processes are designed to weed out proposed products and technologies that do not address customers’ needs.
In fact, the processes and incentives that companies use to keep focused on their main customers work so well that they blind those companies to important new technologies in emerging markets. Many companies have learned the hard way the perils of ignoring new technologies that do not initially meet the needs of mainstream customers. For example, although personal computers did not meet the requirements of mainstream minicomputer users in the early 1980s, the computing power of the desktop machines improved at a much faster rate than minicomputer users’ demands for computing power did. As a result, personal computers caught up with the computing needs of many of the customers of Wang, Prime, Nixdorf, Data General, and Digital Equipment. Today they are performance-competitive with minicomputers in many applications. For the minicomputer makers, keeping close to mainstream customers and ignoring what were initially low-performance desktop technologies used by seemingly insignificant customers in emerging markets was a rational decision—but one that proved disastrous.
The technological changes that damage established companies are usually not radically new or difficult from a technological point of view. They do, however, have two important characteristics: First, they typically present a different package of performance attributes—ones that, at least at the outset, are not valued by existing customers. Second, the performance attributes that existing customers do value improve at such a rapid rate that the new technology can later invade those established markets. Only at this point will mainstream customers want the technology. Unfortunately for the established suppliers, by then it is often too late: the pioneers of the new technology dominate the market.
It follows, then, that senior executives must first be able to spot the technologies that seem to fall into this category. Next, to commercialize and develop the new technologies, managers must protect them from the processes and incentives that are geared to serving established customers. And the only way to protect them is to create organizations that are completely independent from the mainstream business.
No industry demonstrates the danger of staying too close to customers more dramatically than the hard-disk-drive industry. Between 1976 and 1992, disk-drive performance improved at a stunning rate: the physical size of a 100-megabyte (MB) system shrank from 5,400 to 8 cubic inches, and the cost per MB fell from $560 to $5. Technological change, of course, drove these breathtaking achievements. About half of the improvement came from a host of radical advances that were critical to continued improvements in disk-drive performance; the other half came from incremental advances.
The pattern in the disk-drive industry has been repeated in many other industries: the leading, established companies have consistently led the industry in developing and adopting new technologies that their customers demanded—even when those technologies required completely different technological competencies and manufacturing capabilities from the ones the companies had. In spite of this aggressive technological posture, no single disk-drive manufacturer has been able to dominate the industry for more than a few years. A series of companies have entered the business and risen to prominence, only to be toppled by newcomers who pursued technologies that at first did not meet the needs of mainstream customers. As a result, not one of the independent disk-drive companies that existed in 1976 survives today.
To explain the differences in the impact of certain kinds of technological innovations on a given industry, the concept of performance trajectories—the rate at which the performance of a product has improved, and is expected to improve, over time—can be helpful. Almost every industry has a critical performance trajectory. In mechanical excavators, the critical trajectory is the annual improvement in cubic yards of earth moved per minute. In photocopiers, an important performance trajectory is improvement in number of copies per minute. In disk drives, one crucial measure of performance is storage capacity, which has advanced 50% each year on average for a given size of drive.
Different types of technological innovations affect performance trajectories in different ways. On the one hand, sustaining technologies tend to maintain a rate of improvement; that is, they give customers something more or better in the attributes they already value. For example, thin-film components in disk drives, which replaced conventional ferrite heads and oxide disks between 1982 and 1990, enabled information to be recorded more densely on disks. Engineers had been pushing the limits of the performance they could wring from ferrite heads and oxide disks, but the drives employing these technologies seemed to have reached the natural limits of an S curve. At that point, new thin-film technologies emerged that restored—or sustained—the historical trajectory of performance improvement.
On the other hand, disruptive technologies introduce a very different package of attributes from the one mainstream customers historically value, and they often perform far worse along one or two dimensions that are particularly important to those customers. As a rule, mainstream customers are unwilling to use a disruptive product in applications they know and understand. At first, then, disruptive technologies tend to be used and valued only in new markets or new applications; in fact, they generally make possible the emergence of new markets. For example, Sony’s early transistor radios sacrificed sound fidelity but created a market for portable radios by offering a new and different package of attributes—small size, light weight, and portability.
In the history of the hard-disk-drive industry, the leaders stumbled at each point of disruptive technological change: when the diameter of disk drives shrank from the original 14 inches to 8 inches, then to 5.25 inches, and finally to 3.5 inches. Each of these new architectures initially offered the market substantially less storage capacity than the typical user in the established market required. For example, the 8-inch drive offered 20 MB when it was introduced, while the primary market for disk drives at that time—mainframes—required 200 MB on average. Not surprisingly, the leading computer manufacturers rejected the 8-inch architecture at first. As a result, their suppliers, whose mainstream products consisted of 14-inch drives with more than 200 MB of capacity, did not pursue the disruptive products aggressively. The pattern was repeated when the 5.25-inch and 3.5-inch drives emerged: established computer makers rejected the drives as inadequate, and, in turn, their disk-drive suppliers ignored them as well.
But while they offered less storage capacity, the disruptive architectures created other important attributes—internal power supplies and smaller size (8-inch drives); still smaller size and low-cost stepper motors (5.25-inch drives); and ruggedness, light weight, and low-power consumption (3.5-inch drives). From the late 1970s to the mid-1980s, the availability of the three drives made possible the development of new markets for minicomputers, desktop PCs, and portable computers, respectively.
Although the smaller drives represented disruptive technological change, each was technologically straightforward. In fact, there were engineers at many leading companies who championed the new technologies and built working prototypes with bootlegged resources before management gave a formal go-ahead. Still, the leading companies could not move the products through their organizations and into the market in a timely way. Each time a disruptive technology emerged, between one-half and two-thirds of the established manufacturers failed to introduce models employing the new architecture—in stark contrast to their timely launches of critical sustaining technologies. Those companies that finally did launch new models typically lagged behind entrant companies by two years—eons in an industry whose products’ life cycles are often two years. Three waves of entrant companies led these revolutions; they first captured the new markets and then dethroned the leading companies in the mainstream markets.
How could technologies that were initially inferior and useful only to new markets eventually threaten leading companies in established markets? Once the disruptive architectures became established in their new markets, sustaining innovations raised each architecture’s performance along steep trajectories—so steep that the performance available from each architecture soon satisfied the needs of customers in the established markets. For example, the 5.25-inch drive, whose initial 5 MB of capacity in 1980 was only a fraction of the capacity that the minicomputer market needed, became fully performance-competitive in the minicomputer market by 1986 and in the mainframe market by 1991. (See the graph “How Disk-Drive Performance Met Market Needs.”)
A company’s revenue and cost structures play a critical role in the way it evaluates proposed technological innovations. Generally, disruptive technologies look financially unattractive to established companies. The potential revenues from the discernible markets are small, and it is often difficult to project how big the markets for the technology will be over the long term. As a result, managers typically conclude that the technology cannot make a meaningful contribution to corporate growth and, therefore, that it is not worth the management effort required to develop it. In addition, established companies have often installed higher cost structures to serve sustaining technologies than those required by disruptive technologies. As a result, managers typically see themselves as having two choices when deciding whether to pursue disruptive technologies. One is to go downmarket and accept the lower profit margins of the emerging markets that the disruptive technologies will initially serve. The other is to go upmarket with sustaining technologies and enter market segments whose profit margins are alluringly high. (For example, the margins of IBM’s mainframes are still higher than those of PCs.) Any rational resource-allocation process in companies serving established markets will choose going upmarket rather than going down.
Managers of companies that have championed disruptive technologies in emerging markets look at the world quite differently. Without the high cost structures of their established counterparts, these companies find the emerging markets appealing. Once the companies have secured a foothold in the markets and improved the performance of their technologies, the established markets above them, served by high-cost suppliers, look appetizing. When they do attack, the entrant companies find the established players to be easy and unprepared opponents because the opponents have been looking upmarket themselves, discounting the threat from below.
It is tempting to stop at this point and conclude that a valuable lesson has been learned: managers can avoid missing the next wave by paying careful attention to potentially disruptive technologies that do not meet current customers’ needs. But recognizing the pattern and figuring out how to break it are two different things. Although entrants invaded established markets with new technologies three times in succession, none of the established leaders in the disk-drive industry seemed to learn from the experiences of those that fell before them. Management myopia or lack of foresight cannot explain these failures. The problem is that managers keep doing what has worked in the past: serving the rapidly growing needs of their current customers. The processes that successful, well-managed companies have developed to allocate resources among proposed investments are incapable of funneling resources into programs that current customers explicitly don’t want and whose profit margins seem unattractive.
None of the established leaders in the disk-drive industry learned from the experiences of those that fell before them.
Managing the development of new technology is tightly linked to a company’s investment processes. Most strategic proposals—to add capacity or to develop new products or processes—take shape at the lower levels of organizations in engineering groups or project teams. Companies then use analytical planning and budgeting systems to select from among the candidates competing for funds. Proposals to create new businesses in emerging markets are particularly challenging to assess because they depend on notoriously unreliable estimates of market size. Because managers are evaluated on their ability to place the right bets, it is not surprising that in well-managed companies, mid- and top-level managers back projects in which the market seems assured. By staying close to lead customers, as they have been trained to do, managers focus resources on fulfilling the requirements of those reliable customers that can be served profitably. Risk is reduced—and careers are safeguarded—by giving known customers what they want.
Seagate Technology’s experience illustrates the consequences of relying on such resource-allocation processes to evaluate disruptive technologies. By almost any measure, Seagate, based in Scotts Valley, California, was one of the most successful and aggressively managed companies in the history of the microelectronics industry: from its inception in 1980, Seagate’s revenues had grown to more than $700 million by 1986. It had pioneered 5.25-inch hard-disk drives and was the main supplier of them to IBM and IBM-compatible personal-computer manufacturers. The company was the leading manufacturer of 5.25-inch drives at the time the disruptive 3.5-inch drives emerged in the mid-1980s.
Engineers at Seagate were the second in the industry to develop working prototypes of 3.5-inch drives. By early 1985, they had made more than 80 such models with a low level of company funding. The engineers forwarded the new models to key marketing executives, and the trade press reported that Seagate was actively developing 3.5-inch drives. But Seagate’s principal customers—IBM and other manufacturers of AT-class personal computers—showed no interest in the new drives. They wanted to incorporate 40-MB and 60-MB drives in their next-generation models, and Seagate’s early 3.5-inch prototypes packed only 10 MB. In response, Seagate’s marketing executives lowered their sales forecasts for the new disk drives.
Manufacturing and financial executives at the company pointed out another drawback to the 3.5-inch drives. According to their analysis, the new drives would never be competitive with the 5.25-inch architecture on a cost-per-megabyte basis—an important metric that Seagate’s customers used to evaluate disk drives. Given Seagate’s cost structure, margins on the higher-capacity 5.25-inch models therefore promised to be much higher than those on the smaller products.
Senior managers quite rationally decided that the 3.5-inch drive would not provide the sales volume and profit margins that Seagate needed from a new product. A former Seagate marketing executive recalled, “We needed a new model that could become the next ST412 [a 5.25-inch drive generating more than $300 million in annual sales, which was nearing the end of its life cycle]. At the time, the entire market for 3.5-inch drives was less than $50 million. The 3.5-inch drive just didn’t fit the bill—for sales or profits.”
The shelving of the 3.5-inch drive was not a signal that Seagate was complacent about innovation. Seagate subsequently introduced new models of 5.25-inch drives at an accelerated rate and, in so doing, introduced an impressive array of sustaining technological improvements, even though introducing them rendered a significant portion of its manufacturing capacity obsolete.
While Seagate’s attention was glued to the personal-computer market, former employees of Seagate and other 5.25-inch drive makers, who had become frustrated by their employers’ delays in launching 3.5-inch drives, founded a new company, Conner Peripherals. Conner focused on selling its 3.5-inch drives to companies in emerging markets for portable computers and small-footprint desktop products (PCs that take up a smaller amount of space on a desk). Conner’s primary customer was Compaq Computer, a customer that Seagate had never served. Seagate’s own prosperity, coupled with Conner’s focus on customers who valued different disk-drive attributes (ruggedness, physical volume, and weight), minimized the threat Seagate saw in Conner and its 3.5-inch drives.
From its beachhead in the emerging market for portable computers, however, Conner improved the storage capacity of its drives by 50% per year. By the end of 1987, 3.5-inch drives packed the capacity demanded in the mainstream personal-computer market. At this point, Seagate executives took their company’s 3.5-inch drive off the shelf, introducing it to the market as a defensive response to the attack of entrant companies like Conner and Quantum Corporation, the other pioneer of 3.5-inch drives. But it was too late.
By then, Seagate faced strong competition. For a while, the company was able to defend its existing market by selling 3.5-inch drives to its established customer base—manufacturers and resellers of full-size personal computers. In fact, a large proportion of its 3.5-inch products continued to be shipped in frames that enabled its customers to mount the drives in computers designed to accommodate 5.25-inch drives. But, in the end, Seagate could only struggle to become a second-tier supplier in the new portable-computer market.
In contrast, Conner and Quantum built a dominant position in the new portable-computer market and then used their scale and experience base in designing and manufacturing 3.5-inch products to drive Seagate from the personal-computer market. In their 1994 fiscal years, the combined revenues of Conner and Quantum exceeded $5 billion.
Seagate’s poor timing typifies the responses of many established companies to the emergence of disruptive technologies. Seagate was willing to enter the market for 3.5-inch drives only when it had become large enough to satisfy the company’s financial requirements—that is, only when existing customers wanted the new technology. Seagate has survived through its savvy acquisition of Control Data Corporation’s disk-drive business in 1990. With CDC’s technology base and Seagate’s volume-manufacturing expertise, the company has become a powerful player in the business of supplying large-capacity drives for high-end computers. Nonetheless, Seagate has been reduced to a shadow of its former self in the personal-computer market.
It should come as no surprise that few companies, when confronted with disruptive technologies, have been able to overcome the handicaps of size or success. But it can be done. There is a method to spotting and cultivating disruptive technologies.
Determine whether the technology is disruptive or sustaining. The first step is to decide which of the myriad technologies on the horizon are disruptive and, of those, which are real threats. Most companies have well-conceived processes for identifying and tracking the progress of potentially sustaining technologies, because they are important to serving and protecting current customers. But few have systematic processes in place to identify and track potentially disruptive technologies.
One approach to identifying disruptive technologies is to examine internal disagreements over the development of new products or technologies. Who supports the project and who doesn’t? Marketing and financial managers, because of their managerial and financial incentives, will rarely support a disruptive technology. On the other hand, technical personnel with outstanding track records will often persist in arguing that a new market for the technology will emerge—even in the face of opposition from key customers and marketing and financial staff. Disagreement between the two groups often signals a disruptive technology that top-level managers should explore.
Define the strategic significance of the disruptive technology. The next step is to ask the right people the right questions about the strategic importance of the disruptive technology. Disruptive technologies tend to stall early in strategic reviews because managers either ask the wrong questions or ask the wrong people the right questions. For example, established companies have regular procedures for asking mainstream customers—especially the important accounts where new ideas are actually tested—to assess the value of innovative products. Generally, these customers are selected because they are the ones striving the hardest to stay ahead of their competitors in pushing the performance of their products. Hence these customers are most likely to demand the highest performance from their suppliers. For this reason, lead customers are reliably accurate when it comes to assessing the potential of sustaining technologies, but they are reliably inaccurate when it comes to assessing the potential of disruptive technologies. They are the wrong people to ask.
A simple graph plotting product performance as it is defined in mainstream markets on the vertical axis and time on the horizontal axis can help managers identify both the right questions and the right people to ask. First, draw a line depicting the level of performance and the trajectory of performance improvement that customers have historically enjoyed and are likely to expect in the future. Then locate the estimated initial performance level of the new technology. If the technology is disruptive, the point will lie far below the performance demanded by current customers. (See the graph “How to Assess Disruptive Technologies.”)
What is the likely slope of performance improvement of the disruptive technology compared with the slope of performance improvement demanded by existing markets? If knowledgeable technologists believe the new technology might progress faster than the market’s demand for performance improvement, then that technology, which does not meet customers’ needs today, may very well address them tomorrow. The new technology, therefore, is strategically critical.
Instead of taking this approach, most managers ask the wrong questions. They compare the anticipated rate of performance improvement of the new technology with that of the established technology. If the new technology has the potential to surpass the established one, the reasoning goes, they should get busy developing it.
Pretty simple. But this sort of comparison, while valid for sustaining technologies, misses the central strategic issue in assessing potentially disruptive technologies. Many of the disruptive technologies we studied never surpassed the capability of the old technology. It is the trajectory of the disruptive technology compared with that of the market that is significant. For example, the mainframe-computer market is shrinking not because personal computers outperform mainframes but because personal computers networked with a file server meet the computing and data-storage needs of many organizations effectively. Mainframe-computer makers are reeling not because the performance of personal-computing technology surpassed the performance of mainframe technology but because it intersected with the performance demanded by the established market.
Consider the graph again. If technologists believe that the new technology will progress at the same rate as the market’s demand for performance improvement, the disruptive technology may be slower to invade established markets. Recall that Seagate had targeted personal computing, where demand for hard-disk capacity per computer was growing at 30% per year. Because the capacity of 3.5-inch drives improved at a much faster rate, leading 3.5-inch-drive makers were able to force Seagate out of the market. However, two other 5.25-inch-drive makers, Maxtor and Micropolis, had targeted the engineering-workstation market, in which demand for hard-disk capacity was insatiable. In that market, the trajectory of capacity demanded was essentially parallel to the trajectory of capacity improvement that technologists could supply in the 3.5-inch architecture. As a result, entering the 3.5-inch-drive business was strategically less critical for those companies than it was for Seagate.
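The trajectory comparison described above reduces to a simple compounding calculation: given a technology’s current performance and rate of improvement, and the market’s current demand and rate of growth, one can estimate when (or whether) the two lines intersect. The sketch below is purely illustrative; the figures are hypothetical, loosely echoing the 3.5-inch-drive story, and are not drawn from the article’s data.

```python
import math

def years_to_intersect(tech_level, tech_growth, demand_level, demand_growth):
    """Estimate years until a technology improving at tech_growth per year
    catches up with market demand growing at demand_growth per year.
    Returns 0.0 if it already meets demand, None if it never will."""
    if tech_level >= demand_level:
        return 0.0
    if tech_growth <= demand_growth:
        return None  # trajectories parallel or diverging: no invasion
    # Solve tech_level * (1 + g_t)**t = demand_level * (1 + g_d)**t for t.
    return math.log(demand_level / tech_level) / math.log(
        (1 + tech_growth) / (1 + demand_growth)
    )

# Hypothetical numbers: a drive whose capacity grows 50% per year,
# chasing a market whose demand starts at 4x its level and grows 30% per year.
catch_up = years_to_intersect(10, 0.50, 40, 0.30)  # finite: market will be invaded
# If demand grows just as fast as the technology improves, the lines stay parallel.
never = years_to_intersect(10, 0.50, 40, 0.50)     # None: strategically less urgent
```

The two cases mirror the contrast in the passage above: Seagate faced the first situation (the 3.5-inch trajectory was steeper than its market’s demand), while Maxtor and Micropolis, serving a market with essentially parallel trajectories, faced the second.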
Locate the initial market for the disruptive technology. Once managers have determined that a new technology is disruptive and strategically critical, the next step is to locate the initial markets for that technology. Market research, the tool that managers have traditionally relied on, is seldom helpful: at the point a company needs to make a strategic commitment to a disruptive technology, no concrete market exists. When Edwin Land asked Polaroid’s market researchers to assess the potential sales of his new camera, they concluded that Polaroid would sell a mere 100,000 cameras over the product’s lifetime; few people they interviewed could imagine the uses of instant photography.
Because disruptive technologies frequently signal the emergence of new markets or market segments, managers must create information about such markets—who the customers will be, which dimensions of product performance will matter most to which customers, what the right price points will be. Managers can create this kind of information only by experimenting rapidly, iteratively, and inexpensively with both the product and the market.
For established companies to undertake such experiments is very difficult. The resource-allocation processes that are critical to profitability and competitiveness will not—and should not—direct resources to markets in which sales will be relatively small. How, then, can an established company probe a market for a disruptive technology? Let start-ups—either ones the company funds or others with no connection to the company—conduct the experiments. Small, hungry organizations are good at placing economical bets, rolling with the punches, and agilely changing product and market strategies in response to feedback from initial forays into the market.
Small, hungry organizations are good at agilely changing product and market strategies.
Consider Apple Computer in its start-up days. The company’s original product, the Apple I, was a flop when it was launched in 1977. But Apple had not placed a huge bet on the product and had gotten at least something into the hands of early users quickly. The company learned a lot from the Apple I about the new technology and about what customers wanted and did not want. Just as important, a group of customers learned about what they did and did not want from personal computers. Armed with this information, Apple launched the Apple II quite successfully.
Many companies could have learned the same valuable lessons by watching Apple closely. In fact, some companies pursue an explicit strategy of being second to invent—allowing small pioneers to lead the way into uncharted market territory. For instance, IBM let Apple, Commodore, and Tandy define the personal computer. It then aggressively entered the market and built a considerable personal-computer business.
But IBM’s relative success in entering a new market late is the exception, not the rule. All too often, successful companies hold the performance of small-market pioneers to the financial standards they apply to their own performance. In an attempt to ensure that they are using their resources well, companies explicitly or implicitly set relatively high thresholds for the size of the markets they should consider entering. This approach sentences them to making late entries into markets already filled with powerful players.
For example, when the 3.5-inch drive emerged, Seagate needed a $300-million-a-year product to replace its mature flagship 5.25-inch model, the ST412, and the 3.5-inch market wasn’t large enough. Over the next two years, when the trade press asked when Seagate would introduce its 3.5-inch drive, company executives consistently responded that there was no market yet. There actually was a market, and it was growing rapidly. The signals that Seagate was picking up about the market, influenced as they were by customers who didn’t want 3.5-inch drives, were misleading. When Seagate finally introduced its 3.5-inch drive in 1987, more than $750 million in 3.5-inch drives had already been sold. Information about the market’s size had been widely available throughout the industry. But it wasn’t compelling enough to shift the focus of Seagate’s managers. They continued to look at the new market through the eyes of their current customers and in the context of their current financial structure.
The posture of today’s leading disk-drive makers toward the newest disruptive technology, 1.8-inch drives, is eerily familiar. Each of the industry leaders has designed one or more models of the tiny drives, and the models are sitting on shelves. Their capacity is too low to be used in notebook computers, and no one yet knows where the initial market for 1.8-inch drives will be. Fax machines, printers, and automobile dashboard mapping systems are all candidates. “There just isn’t a market,” complained one industry executive. “We’ve got the product, and the sales force can take orders for it. But there are no orders because nobody needs it. It just sits there.” This executive has not considered the fact that his sales force has no incentive to sell the 1.8-inch drives instead of the higher-margin products it sells to higher-volume customers. And while the 1.8-inch drive is sitting on the shelf at his company and others, last year more than $50 million worth of 1.8-inch drives were sold, almost all by start-ups. This year, the market will be an estimated $150 million.
To avoid allowing small, pioneering companies to dominate new markets, executives must personally monitor the available intelligence on the progress of pioneering companies through monthly meetings with technologists, academics, venture capitalists, and other nontraditional sources of information. They cannot rely on the company’s traditional channels for gauging markets because those channels were not designed for that purpose.
Place responsibility for building a disruptive-technology business in an independent organization. The strategy of forming small teams into skunk-works projects to isolate them from the stifling demands of mainstream organizations is widely known but poorly understood. For example, isolating a team of engineers so that it can develop a radically new sustaining technology just because that technology is radically different is a fundamental misapplication of the skunk-works approach. Managing out of context is also unnecessary in the unusual event that a disruptive technology is more financially attractive than existing products. Consider Intel’s transition from dynamic random access memory (DRAM) chips to microprocessors. Intel’s early microprocessor business had a higher gross margin than that of its DRAM business; in other words, Intel’s normal resource-allocation process naturally provided the new business with the resources it needed.1
Creating a separate organization is necessary only when the disruptive technology has a lower profit margin than the mainstream business and must serve the unique needs of a new set of customers. CDC, for example, successfully created a remote organization to commercialize its 5.25-inch drive. Through 1980, CDC was the dominant independent disk-drive supplier due to its expertise in making 14-inch drives for mainframe-computer makers. When the 8-inch drive emerged, CDC launched a late development effort, but its engineers were repeatedly pulled off the project to solve problems for the more profitable, higher-priority 14-inch projects targeted at the company’s most important customers. As a result, CDC was three years late in launching its first 8-inch product and never captured more than 5% of that market.
When the 5.25-inch generation arrived, CDC decided that it would face the new challenge more strategically. The company assigned a small group of engineers and marketers in Oklahoma City, Oklahoma, far from the mainstream organization’s customers, the task of developing and commercializing a competitive 5.25-inch product. “We needed to launch it in an environment in which everybody got excited about a $50,000 order,” one executive recalled. “In Minneapolis, you needed a $1 million order to turn anyone’s head.” CDC never regained the 70% share it had once enjoyed in the market for mainframe disk drives, but its Oklahoma City operation secured a profitable 20% of the high-performance 5.25-inch market.
Had Apple created a similar organization to develop its Newton personal digital assistant (PDA), those who have pronounced it a flop might have deemed it a success. In launching the product, Apple made the mistake of acting as if it were dealing with an established market. Apple managers went into the PDA project assuming that it had to make a significant contribution to corporate growth. Accordingly, they researched customer desires exhaustively and then bet huge sums launching the Newton. Had Apple made a more modest technological and financial bet and entrusted the Newton to an organization the size that Apple itself was when it launched the Apple I, the outcome might have been different. The Newton might have been seen more broadly as a solid step forward in the quest to discover what customers really want. In fact, many more Newtons than Apple I models were sold within a year of their introduction.
Keep the disruptive organization independent. Established companies can dominate emerging markets only by creating small organizations of the sort CDC created in Oklahoma City. But what should they do when the emerging market becomes large and established?
Most managers assume that once a spin-off has become commercially viable in a new market, it should be integrated into the mainstream organization. They reason that the fixed costs associated with engineering, manufacturing, sales, and distribution activities can be shared across a broader group of customers and products.
This approach might work with sustaining technologies; however, with disruptive technologies, folding the spin-off into the mainstream organization can be disastrous. When the independent and mainstream organizations are folded together in order to share resources, debilitating arguments inevitably arise over which groups get what resources and whether or when to cannibalize established products. In the history of the disk-drive industry, every company that has tried to manage mainstream and disruptive businesses within a single organization failed.
No matter the industry, a corporation consists of business units with finite life spans: the technological and market bases of any business will eventually disappear. Disruptive technologies are part of that cycle. Companies that understand this process can create new businesses to replace the ones that must inevitably die. To do so, companies must give managers of disruptive innovation free rein to realize the technology’s full potential—even if it means ultimately killing the mainstream business. For the corporation to live, it must be willing to see business units die. If the corporation doesn’t kill them off itself, competitors will.
The key to prospering at points of disruptive change is not simply to take more risks, invest for the long term, or fight bureaucracy. The key is to manage strategically important disruptive technologies in an organizational context where small orders create energy, where fast low-cost forays into ill-defined markets are possible, and where overhead is low enough to permit profit even in emerging markets.
Managers of established companies can master disruptive technologies with extraordinary success. But when they seek to develop and launch a disruptive technology that is rejected by important customers within the context of the mainstream business’s financial demands, they fail—not because they make the wrong decisions, but because they make the right decisions for circumstances that are about to become history.
1. Robert A. Burgelman, “Fading Memories: A Process Theory of Strategic Business Exit in Dynamic Environments,” Administrative Science Quarterly 39 (1994), pp. 24–56.