Precisely Wrong: Why Conventional Planning Systems Fail - Carol Ptak

CHAPTER 1

The Objective and History of Planning

Why Do We “Plan”?

First, what do we mean by “plan” or the action of “planning,” and who are the people involved? Let’s start with some definitions from the fourteenth edition of the American Production and Inventory Control Society (APICS) Dictionary:

Plan: a predetermined course of action over a specified period of time that represents a projected response to an anticipated environment to accomplish a specific set of adaptive objectives. (p. 126)

Planning: The process of setting goals for the organization and choosing various ways to use the organization’s resources to achieve the goals. (p. 127)

Planner: 1) The person normally responsible for managing inventory levels, schedules, and availability of selected items, either manufactured or purchased. 2) In an MRP system, the person responsible for reviewing and acting on order release, action, and exception messages from the system. (p. 102)

Thus the reason we plan is to orchestrate, coordinate, and synchronize an organization’s assets to a purpose, most often to sell an item or service. What do we make? When do we make it? What do we buy? When do we buy it? What do we deliver? When do we deliver it? What do we move? Where do we move it? The more complex the products, services, and supply chain scenarios, the more apparent the need for effective orchestration, coordination, and synchronization. What then should we use as tools in order to accomplish this objective?

A Brief History of Planning

To truly understand the state of planning today, it is necessary to discuss the history behind the conventional approach. Where did it come from? What did it replace?

Today most midrange and large manufacturing enterprises throughout the world use a planning method and tool called material requirements planning (MRP). This method and tool were conceived in the 1950s with the increasing availability, promise, and power of computers. Computers allowed rapid, complex calculations of what had to be bought and made, and in what quantities, given a specific demand input. The nature of that demand input will be of particular importance later in this book. Industry was plagued with shortages, mismatched inventory, a lack of proper priorities, and a need for matched sets of parts. Practitioners were hopeful that the mathematical precision now possible with computers would solve these problems.

By 1965 the modern acronym “MRP” was in existence. The year 1972 saw the incorporation of capacity reconciliation into MRP, called closed-loop MRP. In 1975, Joe Orlicky wrote the first book on MRP, and this started a whole new software industry. Manufacturing planning and control systems continued to evolve with the development of master scheduling, modular bills of materials, planning bills of materials, and production planning in an attempt to connect the business plan to the operational plan in a meaningful way. The overall planning process was defined by a linear series of planning processes that were disaggregated into detailed plans (Figure 1-1).


FIGURE 1-1 1970s linear planning process

Ollie Wight wrote a thought leadership article in 1979 about his vision to incorporate accounting into MRP to enable better information for managers.1 In 1981 he followed up the article with a book.2 This transformed MRP into manufacturing resources planning (MRP II). The APICS Dictionary defines MRP II as:

A method for the effective planning of all resources of a manufacturing company. Ideally, it addresses operational planning in units, financial planning in dollars, and has a simulation capability to answer what-if questions. It is made up of a variety of processes, each linked together: business planning, production planning (sales and operations planning), master production scheduling, material requirements planning, capacity requirements planning, and the execution support systems for capacity and material. Output from these systems is integrated with financial reports such as the business plan, purchase commitment report, shipping budget, and inventory projections in dollars. Manufacturing resource planning is a direct outgrowth and extension of closed-loop MRP.

The capabilities and input requirements of MRP II forced planning processes to evolve from the previous linear processes into departmentalized business planning processes. In 1984 Richard (Dick) Ling began developing a process to integrate the various functions of a business and thereby produce a more effective master schedule input for MRP II. In 1988, Dick and Walter Goddard wrote a groundbreaking book introducing this concept, called sales and operations planning (S&OP).3 At that time S&OP was defined as:

The process with which we bring together all the plans for the business (customers, sales, marketing, development, manufacturing, sourcing and financial) into an integrated set of plans. It is done at least once a month and is reviewed by senior management at an aggregate (product family) level. The process must reconcile all supply, demand and new product planning at both the detail and aggregate level and over a horizon sufficient to develop and reconcile financially the Annual Business Plan. A typical Sales and Operations plan will therefore project at least 18 months into the future. It may be longer in order to adequately plan new product launch, long lead time material planning and manufacturing capacity planning.

Figure 1-2 describes this new integrated schema connecting sales and operations planning with the planning and scheduling of resources. By 1990, as client server architecture became available and made it possible for data to be accessible to the desktop for analysis, MRP II evolved into enterprise resources planning (ERP). ERP is defined by the APICS Dictionary as:


FIGURE 1-2 1980s MRP II planning schema with sales and operations planning

[The] framework for organizing, defining and standardizing the business processes necessary to effectively plan and control an organization so the organization can use its internal knowledge to seek external advantage. An ERP system provides extensive databanks of information including master file records, repositories of cost and sales, financial detail, analysis of product and customer hierarchies and historic and current transactional data.

The promise of ERP was the ability to promote visibility across an enterprise in order to make faster and better business decisions to leverage business assets.

Throughout this entire evolution the core MRP calculation kernel stayed the same. MRP is fundamentally a very big calculator: it takes data about what is needed and what is available and calculates what needs to be ordered, and when. This grew out of the computer’s first industrial use, tracking inventory. At its core, even the most sophisticated ERP system relies on these basic calculations. Typically, they are implemented as a backward schedule from a forecast or master schedule, assuming that all the input data are accurate and that there is sufficient time to accomplish the plan. What comes out of MRP is a plan with date and quantity requirements for all components needed to support the “high-level” or end-item demand.
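The netting and backward-scheduling logic just described can be sketched in a few lines of Python. This is only an illustrative sketch under simplifying assumptions: the function names and sample numbers are invented, the treatment is single-level and lot-for-lot, and a real MRP engine would also handle lot sizing, multilevel bill-of-material explosion, scrap factors, and working calendars.

```python
# Illustrative sketch of the core MRP calculation: net each period's
# gross requirement against available supply, then offset the resulting
# planned orders backward by the lead time. Names and data are invented.

def net_requirements(gross_requirements, on_hand, scheduled_receipts):
    """Net gross requirements against on-hand stock and scheduled receipts,
    period by period; return planned order quantities per period."""
    available = on_hand
    planned = []
    for gross, receipt in zip(gross_requirements, scheduled_receipts):
        available += receipt
        net = max(0, gross - available)          # shortfall becomes a planned order
        available = max(0, available - gross)    # remaining stock carries forward
        planned.append(net)
    return planned

def backward_schedule(planned_orders, lead_time):
    """Offset each planned order's due period by the lead time to get
    the period in which the order must be released."""
    return [(period - lead_time, qty)
            for period, qty in enumerate(planned_orders) if qty > 0]

gross = [0, 40, 0, 60, 50]       # end-item demand per period
receipts = [0, 20, 0, 0, 0]      # supply already on order
orders = net_requirements(gross, on_hand=30, scheduled_receipts=receipts)
releases = backward_schedule(orders, lead_time=2)
# orders   -> [0, 0, 0, 50, 50]
# releases -> [(1, 50), (2, 50)]
```

Note how completely the output depends on the demand input and on the assumption that the lead-time offset leaves enough time to execute: shift the forecast and every planned release shifts with it, which is exactly the sensitivity explored later in this book.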

Perhaps the most recognized leader of the MRP charge was Joe Orlicky. His 1975 seminal work, Material Requirements Planning: The New Way of Life in Production and Inventory Management,4 provided the blueprint and codification of MRP that is still the standard today. Consider that when the book was written, only 700 companies or plants in the world had implemented MRP, almost all located in the United States. As Orlicky wrote:

As this book goes into print, there are some 700 manufacturing companies or plants that have implemented, or are committed to implementing, MRP systems. Material requirements planning has become a new way of life in production and inventory management, displacing older methods in general and statistical inventory control in particular. I, for one, have no doubt whatever that it will be the way of life in the future. (p. ix)

MRP did become the way of life in manufacturing. The codification and subsequent commercialization of MRP fundamentally changed the industrial world, and it did so relatively quickly. Orlicky, along with others at the time, recognized the opportunity presented by changing manufacturing circumstances and by the invention of the computer, which enabled a planning approach never before possible. Before the advent of the computer, planning was relatively simple but error prone, and replanning was arduous as changes occurred.

Traditional inventory management approaches, in pre-computer days, could obviously not go beyond the limits imposed by the information processing tools available at the time. Because of this almost all of those approaches and techniques suffered from imperfection. They simply represented the best that could be done under the circumstances. They acted as a crutch and incorporated summary, shortcut and approximation methods, often based on tenuous or quite unrealistic assumptions, sometimes force-fitting concepts to reality so as to permit the use of a technique.

The breakthrough, in this area, lies in the simple fact that once a computer becomes available, the use of such methods and systems is no longer obligatory. It becomes feasible to sort out, revise, or discard previously used techniques and to institute new ones that heretofore it would have been impractical or impossible to implement. It is now a matter of record that among manufacturing companies that pioneered inventory management computer applications in the 1960s, the most significant results were achieved not by those who chose to improve, refine, and speed up existing procedures, but by those who undertook a fundamental overhaul of their systems. (p. 4, emphasis added)

In his book, Orlicky made the case for a fundamental reexamination of how companies planned and managed inventory and resources. This case was so compelling that the concepts that he brought to the table proliferated throughout the industrial world within two decades. That proliferation remains largely unchanged to the present. Today we know that nearly 80% of manufacturing companies that buy an ERP system also buy and implement the MRP module associated with that system.

Perhaps the most interesting and compelling part of the passage from the original Orlicky book is the sentence that is italicized. This was simply common sense that was easily demonstrable with the results of pre-computer inventory management systems. Yet could this same description be applied to the widespread use of MRP today? Could it be that conventional planning approaches and tools are:

• Acting as a crutch?

• Incorporating summary, shortcut, and approximation methods based on tenuous assumptions?

• Force-fitting concepts to reality so as to permit the use of a technique?

In the authors’ 60+ years of combined manufacturing experience across a wide array of industries, the answer is a resounding yes to these three points. By the end of this book, the reader will also be able to understand why the answer is yes to these three points. Indeed, if the answer is yes, then there should be evidence to support the assertion that MRP systems are not living up to their expectations, that MRP systems are in fact guilty as charged in the previous three bullet points, and that they are fundamentally incapable of really supporting the true purpose of planning—to orchestrate, coordinate, and synchronize an organization’s assets to a purpose.

The Need for Flow—The True Purpose of Planning

Are we missing something fundamentally important about planning? The required orchestration, coordination, and synchronization are simply a means to an end. That much is quite easy to grasp. What is more difficult for many organizations to grasp is what fundamental principle should underlie orchestration, coordination, and synchronization.

Manufacturing and supply chain management comprise a bewildering and distracting variety of products, materials, technology, machines, and people skills that obscures the underlying elegance and simplicity of both as an integrated process. The essence of manufacturing (and of supply chain in general) is (1) the flow of materials from suppliers, through plants, through distribution channels, to customers; (2) the flow of information to all parties about what is planned and required, what is happening, what has happened, and what should happen; and, of course, (3) the flow of cash from the market back to the supplier.

Plossl’s Law

An appreciation of this elegance and simplicity brings us to what George Plossl (another founding father of MRP and author of the second edition of Orlicky’s Material Requirements Planning) articulated as the first law of manufacturing:

All benefits will be directly related to the speed of flow of information and materials.5

“All benefits” is quite an encompassing term. It can be broken down into components that most companies measure and emphasize. These benefits encompass:

• Service. A system that has good informational and material flow produces consistent and reliable results. This has implications for meeting customer expectations, not only for delivery performance but also for quality. This is especially true for industries that have shelf-life issues.

• Revenue. When service is consistently high, market share tends to grow, or at a minimum it doesn’t erode.

• Quality. When things are flowing well, fewer mistakes are made due to confusion and expediting.

• Inventories. Purchased, work-in-process (WIP), and finished goods inventories will be minimized and directly proportional to the amount of time it takes products to flow between stages and through the total system. The less time it takes products to move through the system, the less the total inventory investment. The simple equation is

Throughput * lead time = WIP

where:

Throughput is the rate at which material exits the system.

Lead time is the time it takes material to move through the system.

WIP is the amount of inventory contained between entry and exit.

A key assumption is that the material entering the system is proportionate to the amount exiting the system. The basis for this equation is the queuing theory known as Little’s law. More is available on the relationship between queuing and lead time in Appendix B.

• Expenses. When flow is poor, additional activities and expenses are incurred to close the gaps in flow. Examples would be expedited freight, overtime, rework, cross-shipping, and unplanned partial ships. Most of these activities are indicative of an inefficient overall system and directly cause cash to leave the organization. These types of expenses are described later in this chapter (see Figure 1-8) in relation to the bimodal distribution.

• Cash. When flow is maximized, the material that a company paid for is converted to cash at a relatively quick and consistent rate. This makes cash flow much easier to manage and predict. Additionally, the expedite-related expenses previously mentioned are minimized, limiting cash leaving the organization.
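The Little’s law relationship from the Inventories bullet above can be checked with simple arithmetic. The throughput and lead-time figures here are invented purely for illustration; the point is the proportionality between lead time and inventory investment at a steady-state flow rate.

```python
# Little's law: WIP = throughput * lead time, assuming material enters
# the system at the same rate it exits (steady state). Numbers invented.

throughput = 120.0   # units exiting the system per day
lead_time = 2.5      # days for a unit to traverse the system

wip = throughput * lead_time   # average inventory between entry and exit
# wip -> 300.0 units

# The relationship works in both directions: halving the lead time at
# the same throughput halves the inventory investment.
lead_time_improved = lead_time / 2
wip_improved = throughput * lead_time_improved
# wip_improved -> 150.0 units
```

This is why the text stresses that the less time it takes products to move through the system, the less the total inventory investment: at a given throughput, inventory is a direct linear function of flow time.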

What happens when revenue is growing, inventory is minimized, and additional or unnecessary ancillary expenses are eliminated? Return on investment (ROI) moves in a favorable direction! In fact this relationship between flow and ROI can be easily depicted by the equation in Figure 1-3. This depiction first appeared in the book Demand Driven Performance: Using Smart Metrics.6


FIGURE 1-3 Connecting flow to return on investment (from Smith and Smith7)

Explaining this equation requires us to first define the elements and then show how they relate to each other:

• Flow is the rate at which a system converts material to product required by a customer. If the customer does not want or take the product, then that output does not count as flow. It is retained in the system as investment (captured money).

• Cash velocity is the rate of net cash generation: sales dollars minus truly variable costs (aka contribution margin) minus period operating expenses.

• Net profit/investment (captured money) is the equation for ROI.

The deltas and yield arrows in the equation explain the relationships between the components of the equation. Changes to flow directly yield changes to cash velocity in the same direction. As flow increases, so does cash velocity. Conversely as flow decreases, so does cash velocity. As cash velocity increases, so does return on investment, as the system is converting materials to cash in a quicker fashion.

When cash velocity slows down, the conversion of materials to cash slows down. The organization is simply accomplishing less with more. This scenario typically results in additional cash velocity issues related to expediting expenses. Period expenses rise (over time) or variable costs increase (fast freight, additional freight, and expedite fees). This directly reduces the net profit potential within the period and thus further erodes return on investment performance.
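The chain of relationships just described (flow drives cash velocity, which drives ROI) can be illustrated with a toy numerical model. This is not the equation from Figure 1-3 itself; the function names and all figures are invented for illustration, using the definitions of cash velocity and ROI given above.

```python
# Toy illustration of the flow -> cash velocity -> ROI chain.
# Cash velocity = sales - truly variable costs - period operating expenses.
# ROI = net profit / investment (captured money). All figures invented.

def cash_velocity(sales, truly_variable_costs, operating_expenses):
    """Net cash generation for a period, per the definition in the text."""
    return sales - truly_variable_costs - operating_expenses

def roi(net_profit, investment):
    """Return on investment: net profit over money captured in the system."""
    return net_profit / investment

# Baseline period.
base = cash_velocity(sales=1_000_000, truly_variable_costs=600_000,
                     operating_expenses=250_000)           # 150,000
base_roi = roi(net_profit=base, investment=2_000_000)      # 0.075

# Flow improves: more material converts to sold product in the period,
# and less inventory (captured money) sits in the system.
better = cash_velocity(sales=1_100_000, truly_variable_costs=660_000,
                       operating_expenses=250_000)         # 190,000
better_roi = roi(net_profit=better, investment=1_800_000)  # ~0.106
```

Running the comparison shows ROI improving from both directions at once, which is the double effect the equation in Figure 1-3 captures: better flow raises the numerator (cash velocity) while shrinking the denominator (captured money).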

A “River” to ROI

We can make a simple analogy to this equation using the manner in which a river works. Water flows in a river as an autonomous response to gravity. The steeper the slope of the riverbed, the faster the water flow. The fewer number of obstructions in the river, the faster the water runs.

In supply chain management, materials flow through the supply chain like water through a river. They are combined, converted, and then moved to points of consumption. This flow is an autonomous response to demand; what else could or should it be? The stronger the demand, the faster the rate of flow. And like rivers, supply chains have obstructions or blockages created by variability and by limitations in the “riverbed.” Machines break down, critical components are often unavailable, yield problems occur, choke points exist, and so on. All these issues are simply impediments to flow, and they result in “pools” of inventory, called queues, of varying depth.

With this analogy we begin to realize that flow is the very essence of why the Operations component of manufacturing and supply chain companies even exists. Operations is typically divided into functions, each of which has a primary objective that it is responsible for and held accountable to. Figure 1-4 is a simple table showing typical Operations functions and their primary objective.


FIGURE 1-4 Typical functions in Operations

All the objectives in Figure 1-4 are protected and promoted by encouraging flow. In fact if flow is impeded, so are these primary objectives. Thus, if Operations and its functions want to succeed in being truly effective, there is really only one thing to focus on—flow.

Let’s expand this view to the organization as a whole. An organization is typically divided into functions, each with its own primary objective. Figure 1-5 is another simple table showing the typical functions of a manufacturing and/or supply chain–centric company.


FIGURE 1-5 Typical organizational functions

All the functional objectives in Figure 1-5 require flow to be promoted and protected. When things are flowing well, shareholder equity, sales performance, market awareness, asset utilization, and innovation are promoted and protected.

Additionally, flow is a unifying theme within most major process improvement disciplines and their respective primary objectives:

• Theory of Constraints (Goldratt) and its objective to drive system throughput

• Lean (Ohno) and its objective to reduce waste

• Six Sigma (Deming) and its objective to reduce variability

All these objectives are advanced by focusing on flow. When considered with Plossl’s first law of manufacturing, the convergence around flow is really quite staggering. There should be little patience for ideological battles and turf wars between disciplines. It is a waste of time and counterproductive to bicker—they all need the same thing to achieve their desired goal. Among these disciplines, flow becomes a common objective from a common strategy based on simple common sense grounded in basic physics and economic principles.

The concept and power of flow is not new. It powered the rise of industrial giants and gave us the corporate management structure in use today. Leaders such as Henry Ford, F. Donaldson Brown, and Frederick Taylor made it the basis for strategy and management.

And yet what about “cost”? The calculation of standard unit cost attempts to assign a cost to an individual product or resource based on volume and rate over a particular time period. When things flow well over a given period, cost performance will be favorable. Thus, emphasizing flow should even work for the cost accountants’ objectives! This critical realization will be explored in more depth in Chapter 6 as well as in Appendix D.

Returning to the specific function of planning, Plossl’s law, while incredibly simple, should not be taken lightly. This one little statement has always defined the way to drive return on shareholder equity, and it was articulated by one of the main architects of conventional planning systems. Thus, the real reason why we plan can and should be simply stated as to ensure the protection and promotion of flow.

But Something Is Very Wrong

If the purpose of planning is to ensure the protection and promotion of flow, then planning systems should do just that. However, in reality, do conventional planning systems really ensure the protection and promotion of flow in today’s environments? The evidence overwhelmingly suggests that they do not.

Let’s examine this issue from four distinct perspectives:

• The macroeconomic level. Has the proliferation of conventional planning systems driven better return on investment performance for one of the world’s largest economies?

• The user level. Do the people that interact with conventional planning systems believe in those computer systems’ abilities to enable flow?

• The organizational level. Do the companies that use conventional planning systems exhibit the characteristics of good flow performance as evidenced by sustainable ROI?

• The supply chain level. Do supply chains featuring a collection of conventional planning systems exhibit the characteristics of good flow performance?

The Macroeconomic Level

It is no secret that the United States led the adoption of manufacturing information systems starting with MRP in the 1960s. The vast majority of those “700 manufacturing companies or plants that have implemented, or are committed to implementing, MRP systems” were located in the United States. The roots of MRP continued to run deep in the United States through the time in which this book was written; it is simply how planning occurs in most U.S. manufacturing companies of any size and scale. One would think this should provide an incredible advantage for the U.S. economy.

While these systems are expensive to purchase, implement, and maintain, the value of these formal planning systems has always been sold on the basis of the ability to better leverage the assets of a business. So, did the widespread adoption of MRP and its subsequent derivative information systems enable the U.S. economy to better manage assets?

In late 2013 Deloitte University Press released a report written by John Hagel III, John Seely Brown, Tamara Samoylova, and Michael Lui that is quite eye-opening when considered against the progression and adoption rates of information systems.8 Figure 1-6 is a chart from that report depicting the return on asset performance of the U.S. economy since 1965.

The graphic clearly depicts a steady decrease in return on assets for the U.S. economy from 1965 to 2012. But this is not the whole story. Over the same time period the report shows that labor productivity (as measured by the Törnqvist aggregation) more than doubled! What is most interesting about this graphic in relation to information systems is that by 1965 the modern acronym MRP existed, but massive proliferation of information systems did not occur until after 1975 and, in particular, after 1980 with MRP II.


FIGURE 1-6 Return on asset performance for the U.S. economy. U.S. firms’ ROA fell to a quarter of its 1965 level by 2012; to increase, or even maintain, asset profitability, firms must find new ways to create value from their assets. (Graphic: Deloitte University Press, DUPress.com; source: Compustat, Deloitte analysis)

Obviously, there are many factors at play in this return on asset decrease, but the report would certainly lead one to conclude that the widespread adoption of MRP, MRP II, and ERP systems (at least in the United States) has not significantly helped companies manage themselves to better return on asset performance. Indeed, when this decline is taken in combination with the increase in labor productivity, it suggests that companies may actually be accelerating in exactly the wrong direction.

But, admittedly, this is just one point of data; it is a high-level view with many unrelated factors contributing to these effects. What additional evidence do we have that conventional planning approaches are not protecting and promoting the flow that we need to drive better return on investment?

The User Level

Rather than examining the performance of an entire economy over a period of time, let’s examine a much more granular level—the day-to-day actions of the people charged with making decisions about how to utilize assets: the planners themselves. One hallmark of supply chains is the presence of supply orders. Supply orders are the purchase orders, stock transfer orders, and manufacturing orders that dictate and authorize the flow and activities of any supply chain. They are the signals to produce, buy, and move.

The very purpose of a planning system is to ultimately determine the timing, quantity, and collective synchronization of the supply orders up, down, and across the levels of the network. Inside most manufacturers there are tiers within the planning system where stock transfer orders could prompt manufacturing orders, which in turn could prompt purchase orders. Additionally, within most supply chains there are tiers of different planning systems at each organization linked together by these authorized orders communicating through these supply order signals. For example, purchase orders from a customer can prompt stock transfer or manufacturing orders at suppliers.

Perhaps the biggest indictment of just how inappropriate modern planning rules and tools are can be observed in how frequently people feel compelled to work around them. The typical work-around involves the use of spreadsheets. Data are extracted out of the planning system and put into a spreadsheet. The data are then organized and manipulated within the spreadsheet until a personal comfort level is established. Recommendations and orders are then put back into the planning system, essentially overriding many of the original recommendations from the formal computer system.

Consider polling on this subject by the Demand Driven Institute from 2011 to 2014. With more than 500 companies responding, 95% claimed to be augmenting their planning systems with spreadsheets, and nearly 70% claimed these spreadsheets are used to a significant or moderate degree. The results of this polling are consistent with other surveys by analyst firms such as Aberdeen Group. This reliance on spreadsheets has often been referred to as “Excel Hell.” Validation for this proliferation can easily be provided by simply asking the members of a planning and purchasing team what would happen to their ability to do their jobs if their access to spreadsheets were taken away.

But why have planners and buyers become so reliant on spreadsheets? Because they know that staying completely within the rules of the formal planning system, approving all its recommendations, would be very career-limiting. Tomorrow they would undo or reverse half the things they did today, because MRP is constantly and dramatically changing the picture. This phenomenon, known as “nervousness,” is explained in Chapter 3 and was the primary reason for the development of the master production scheduling process in the 1980s.

So instead of blindly following the system, individual planners have developed their own ways of working with tools that they have crafted and honed through their years of experience. These ways of working and spreadsheet tools are highly individualized with extremely limited ability to be transferred between individuals. This is a different, informal, and highly customized set of rules as compared with the formal computer planning system.

Worse yet, there is no oversight or auditing of these side “systems.” There is no vice president of spreadsheets in any company. Everyone simply assumes that the people who created these spreadsheets built and maintain them properly. Consider an article in the Wall Street Journal’s MarketWatch in 2013:

Close to 90% of spreadsheet documents contain errors, a 2008 analysis of multiple studies suggests. “Spreadsheets, even after careful development, contain errors in 1% or more of all formula cells,” writes Ray Panko, a professor of IT management at the University of Hawaii and an authority on bad spreadsheet practices. “In large spreadsheets with thousands of formulas, there will be dozens of undetected errors.”9

Perhaps a more interesting question is why these personnel are allowed to work around a system that the company has spent significant resources to implement. From a data integrity and security perspective, this is a nightmare. It also means that the fate of the company’s purchasing and planning effectiveness rests in the hands of a few irreplaceable personnel; these people can’t be promoted, get sick, or leave without dire consequences for the company. And given the error-prone nature of spreadsheets, it means that every day, across the globe, many incorrect signals are being generated throughout supply chains. Wouldn’t it be so much easier to just work in the system? The answer seems so obvious. The fact that reality is just the opposite shows just how big the problem is with conventional systems.

To be fair, many executives are simply not aware of how much work is occurring outside the system. Once they become aware, they are placed in an instant dilemma. Let it continue, thus endorsing it by default, or force compliance to a system that your subject-matter experts are saying is at best suspect or at worst useless? The choice is only easy the first time an executive encounters it. The authors of this book have seen countless examples of executives attempting to end the ad hoc systems only to quickly retreat when inventories balloon and, simultaneously, service levels fall dramatically. These executives may not understand what’s behind the need for the work-arounds, but they now know enough to simply look the other way. So they make the appropriate noises about how the entire company is on the new ERP system and downplay just how much ad hoc work is really occurring.

The Organizational Level

Another piece of evidence to suggest the shortcomings of conventional MRP systems has to do with the inventory performance of the companies that use these systems. In order to understand this particular challenge, consider the simple graphical depiction in Figure 1-7. In this figure you see a solid horizontal line running in both directions. This line represents the quantity of inventory. As you move from left to right, the quantity of inventory increases; right to left the quantity decreases.


FIGURE 1-7 Taguchi inventory loss function

In the figure, a curved dotted line representing return on investment bisects the inventory quantity line at two points:

• Point A—the point where a company has too little inventory. This point would be a quantity of zero, or “stocked out.” Shortages, expedites, and missed sales are experienced at this point. Point A is the point at which the part position and supply chain have become too brittle and are unable to supply required inventory. Planners or buyers who have part numbers past this point to the left typically have sales and/or operations managers screaming at them for additional supply.

• Point B—the point where a company has too much inventory. Excessive cash, capacity, and space are tied up in working capital. Point B is the point at which inventory is deemed waste. Planners or buyers who have part numbers past this point to the right typically have Finance screaming at them for misuse of financial resources.

If we know that these two points exist, then we can also conclude that for each part number, as well as the aggregate inventory level, there is an optimal range somewhere between those two points. This optimal zone (range) is in the middle. When inventory moves out of the optimal zone in either direction, it is deemed increasingly problematic. The benefit to the company of the center position is maximum return on the inventory investment.

This depiction is consistent with the graphical depiction of the loss function developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. It made clear that quality does not suddenly plummet when, for instance, a machinist slightly exceeds a rigid blueprint tolerance. Instead, loss in value increases progressively as variation grows from the intended nominal target, until the specification limit is crossed and the result is a total loss.

The same is true for inventory. As the inventory quantity expands out of the optimal zone and moves toward point B, the return on the working capital captured in the inventory becomes less and less as the flow of working capital slows down. The converse is also true; as inventory shrinks out of the optimal zone and approaches zero (or a backordered position), revenue flow is impeded due to shortages.
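This loss-function view of inventory can be sketched in a few lines of code. The quadratic form, the optimal quantity, and the locations of points A and B below are illustrative assumptions rather than values from the text:

```python
def inventory_loss(qty, optimal, point_a=0, point_b=300, k=1.0):
    """Taguchi-style loss for an inventory position: zero at the
    optimal quantity, growing quadratically toward either extreme,
    and total once point A (stock-out) or point B (waste) is reached."""
    if qty <= point_a or qty >= point_b:
        return float("inf")
    return k * (qty - optimal) ** 2

# Hypothetical part with an optimal on-hand quantity of 100 units.
for qty in (0, 60, 100, 140, 300):
    print(qty, inventory_loss(qty, optimal=100))
```

The choice of a quadratic curve simply mirrors Taguchi’s original formulation; any function that is flat near the target and steep toward the extremes conveys the same idea.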

When the aggregate inventory position is considered in an environment using traditional MRP, a bimodal distribution is frequently noted. A bimodal distribution exhibits two distinct lumps:

• A bimodal distribution can occur at the single-part level over a period of time, as a part will oscillate back and forth between excess and shortage positions. In each position, flow is threatened or directly inhibited. The bimodal position can be weighted toward one side or the other, but what makes it bimodal is a clear separation between the two groups—the lack of any significant number of occurrences in the optimal range.

• The bimodal distribution also occurs across a group of parts at any point in time. At any one point many parts will be in excess while other parts are in a shortage position. Shortages of any parts are particularly devastating in environments with assemblies and shared components because the lack of one part can block the delivery of many parent parts.

Figure 1-8 is a conceptual depiction of a bimodal distribution across a group of parts. The bimodal distribution shows a large number of parts that are in the too-little range, while still another large number of parts are in the too-much range. The Y axis represents the number of parts at any particular point on the loss function spectrum.


FIGURE 1-8 Bimodal inventory distribution

In Figure 1-8, not only is the smallest population in the optimal zone, but the time any individual part spends in the optimal zone tends to be short-lived. In fact, most parts tend to oscillate between the two extremes. The oscillation is depicted with the solid curved line connecting the two disparate distributions. That oscillation will occur every time MRP is run. At any one time, any planner or buyer can have many parts in both extremes simultaneously.

This bimodal distribution is rampant throughout industry. It can be very simply described as “too much of the wrong and too little of the right” at any point in time and “too much in total” over time. In a survey by the Demand Driven Institute between 2011 and 2014, 88% of companies reported that they experienced this bimodal inventory pattern. The sample set was over 500 organizations around the world.
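A bimodal part population of this kind is easy to sketch numerically. The lump locations, spreads, and mix below are invented for illustration; the point is only that the smallest population sits in the optimal zone:

```python
import random

random.seed(42)

def simulate_positions(n_parts):
    """Sketch of a bimodal part population: a shortage lump near zero,
    an excess lump well past the optimal zone, and only a thin
    population inside an assumed optimal range of 80-120 units."""
    positions = []
    for _ in range(n_parts):
        r = random.random()
        if r < 0.45:                    # shortage lump
            positions.append(random.gauss(20, 15))
        elif r < 0.90:                  # excess lump
            positions.append(random.gauss(220, 40))
        else:                           # the rare part in the optimal zone
            positions.append(random.gauss(100, 10))
    return positions

parts = simulate_positions(500)
in_zone = sum(80 <= p <= 120 for p in parts)
print(f"{in_zone} of {len(parts)} parts are in the optimal zone")
```

Plotting a histogram of `positions` would reproduce the two-lump shape of Figure 1-8, with the valley falling over the optimal range.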

There are three primary effects of the bimodal distribution evident in most companies:

• High inventories. The distribution can be disproportionate on the excess side, as many planners and buyers will tend to err on the side of too much. This results in slow-moving or obsolete inventory, additional space requirements, squandered capacity and materials, and even lower-margin performance, as discounts are frequently required to clear out the obsolete and slow-moving items.

• Chronic and frequent shortages. The lack of availability of just a few parts can be devastating to many manufacturing environments, especially those that have assembly operations and common material or components. The lack of any one part will block an assembly. The lack of common material or components will block the manufacture of all parent items calling for that common item. This means an accumulation of delays in manufacturing, late deliveries, and missed sales.

• High bimodal-related expenses. This effect tends to be undermeasured and underappreciated. It is the additional amount of money that an organization must spend in order to compensate for the bimodal distribution. When inventory is too high, third-party storage space may be required. When inventory is too low, premium and fast freight are frequently used to expedite material. Overtime is then used to push late orders through the plant. Partial shipments are made to get customers some of what they ordered, but with significantly increased freight expenses.

These three effects are indicative of major flow problems in most organizations. Furthermore, these effects are directly tied to conventional planning activities and efforts in these organizations.

The Supply Chain Level—The Bullwhip Effect

There is a phenomenon that dominates most supply chains. This phenomenon is called the bullwhip effect. The fourteenth edition of the APICS Dictionary defines the bullwhip effect as:

An extreme change in the supply position upstream in a supply chain generated by a small change in demand downstream in the supply chain. Inventory can quickly move from being backordered to being excess. This is caused by the serial nature of communicating orders up the chain with the inherent transportation delays of moving product down the chain. The bullwhip can be eliminated by synchronizing the supply chain. (p. 19)

This definition clearly deals with important points discussed earlier in this chapter. “Inventory can quickly move from being backordered to being excess” is descriptive of the oscillation effect with the bimodal distribution. Additionally, this definition deals with both information and materials. “Communicating orders up the chain” is the information component, while “moving product down the chain” is the materials component.

The bullwhip is really the systematic and bidirectional breakdown of information and materials in a supply chain. Figure 1-9 is a graphical depiction of the bullwhip effect. The wavy arrow moving from right to left is the distortion to relevant information in the supply chain. The arrow wave grows in amplitude in order to depict that the farther up the chain you go, the more disconnected the information becomes from the origin of the signal as signal distortion is transferred and amplified at each connection point.


FIGURE 1-9 Illustrating the bidirectional bullwhip effect

A massive amount of research and literature has been devoted to the bullwhip effect. However, very little, if any, of that body of knowledge has been focused specifically on its bidirectional nature. Most of the research has been dedicated to understanding how and why demand signal distortion occurs and how to potentially fix it by synchronizing the supply chain. Remember that this is one of the basic objectives for planning.
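The amplification that Figure 1-9 depicts can be sketched with a toy model in which each upstream tier overreacts to the change it observes downstream. The `overreaction` factor and the consumer demand series are illustrative assumptions, not a model from the text:

```python
def bullwhip(demand, echelons=4, overreaction=0.5):
    """Toy bullwhip model: each tier observes its downstream orders
    and, when placing its own orders, amplifies every observed change
    by a fixed fraction. Returns the order series at each tier,
    consumer demand first."""
    signals = [list(demand)]
    for _ in range(echelons):
        downstream = signals[-1]
        orders = [downstream[0]]
        for prev, cur in zip(downstream, downstream[1:]):
            # amplify the observed change when placing the next order
            orders.append(cur + overreaction * (cur - prev))
        signals.append(orders)
    return signals

def spread(series):
    return max(series) - min(series)

consumer = [100, 102, 98, 105, 95, 110, 90, 100]
for tier, orders in enumerate(bullwhip(consumer)):
    print(f"tier {tier}: order swing {spread(orders):.1f}")
```

Even with consumer demand varying by only about 10 percent, the order swing roughly doubles at each tier in this sketch, which is the essence of signal distortion being transferred and amplified at each connection point.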

Yet understanding the bidirectional nature of the bullwhip effect allows us to see how this single phenomenon connects directly to the other three previous perspectives. Could the bullwhip effect explain the existence of the other three perspectives?

• The user level. The user experiences conflicting and constantly changing messages and material shortages driven by updated requirements from customers. To sift out appropriate data, personnel attempt to clarify the picture by using ancillary nonintegrated tools.

• The organizational level. Materials and components quickly move from excess to back order or vice versa as updated requirements from customers appear. Company personnel often attempt to compensate by inflating inventories through an increase in safety stocks.

• The macroeconomic level. Overall supply chain performance erodes, requiring more resources and spend in order to compensate for the erosion and keep up with increasing demands from customers.

Variability and Flow

What can the bullwhip effect really teach us? What can explain conventional planning’s failure to live up to the potential that was envisioned and to protect and promote flow? In order to answer these questions, we need to expand the previous equation to include the biggest determinant in managing flow—managing variability.

In Figure 1-10 we see an expanded form of the equation previously introduced. Variability is defined as the summation of the differences between our plan and what happens. As variability rises in an environment, flow is directly impeded. Conversely, as variability decreases, flow improves. And as evidenced by the bullwhip effect, variability can be and often is bidirectional.


FIGURE 1-10 Connecting variability to the flow equation

The impact of variability must be better understood at the systemic rather than the discrete detailed process level. The war on variability that has been waged for decades has most often been focused at a discrete process level with little focus or identified impact on the total system. Variability at a local level in and of itself does not necessarily impede system flow. What impedes system flow is the accumulation and amplification of variability. Accumulation and amplification happen due to the nature of the system, the manner in which the discrete areas and environment interact (or fail to interact) with each other. “The more that variability is passed between discrete areas, steps or processes in a system, the less productive that system will be; the more areas, steps or processes and connections between them, the more erosive the effect to system productivity.”10
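The accumulation effect can be sketched with a toy simulation of dependent steps working to a schedule. The step times and scheduling rule below are illustrative assumptions: a late step delays everything downstream, while an early finish merely waits, so deviations from plan accumulate rather than cancel:

```python
import random

random.seed(7)

def average_lateness(n_steps, mean=10.0, spread=8.0, trials=2000):
    """Average lateness of one job against its plan. Each step is
    scheduled to start at s * mean, but can actually start only when
    the previous step has finished. Early finishes are absorbed as
    waiting; late finishes pass downstream, so variability accumulates."""
    total_late = 0.0
    for _ in range(trials):
        finish = 0.0
        for s in range(n_steps):
            # cannot start before the schedule or before the predecessor
            start = max(s * mean, finish)
            finish = start + random.uniform(mean - spread, mean + spread)
        total_late += finish - n_steps * mean   # lateness versus the plan
    return total_late / trials

# Average lateness grows as more dependent steps are chained together.
for steps in (2, 5, 10):
    print(steps, round(average_lateness(steps), 1))
```

Even though each step is on time on average, the chain as a whole is reliably late, and more so the more connected steps there are, which is the systemic erosion the quotation above describes.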

Quite simply, Figure 1-10 says that when things don’t go according to plan, flow is directly impacted. Is this really surprising? Methods like Six Sigma, Lean, and Theory of Constraints have recognized the need to control variability for decades. Unfortunately, many of those methods point to or become focused on limited components of an organization or supply chain. Most of them attempt to compensate for variability after the plan has been developed. This leads to two critical questions:

1. Was the plan even realistic to start with?

2. If yes, can we ever really expect everything to go according to plan?

The Rise of Complexity and Volatility

Answering these two questions becomes exceedingly difficult given the dramatic changes that have occurred across the global supply chain landscape. The world is a much different place today from what it was 50 years ago, when the conventional planning rules were developed. Figure 1-11 lists some dramatic changes in supply chain-related circumstances that have occurred since 1965.


FIGURE 1-11 Supply chain circumstances, 1965 versus today

A 2014 poll of over 1,000 Certified Management Accountants across 41 countries conducted by the Institute of Management Accountants shows the overwhelming recognition that supply chain complexity has increased. One of the survey questions was:

How would you describe the complexity of your company’s supply chain in the last decade?

a. Stayed the same 15.4%
b. Complexity has increased 78.2%
c. Complexity has decreased 6.3%

The circumstances under which Orlicky and his cadre developed the rules behind MRP and surrounding techniques have dramatically changed. Customer tolerance times have shrunk dramatically, driven by low informational and transactional friction largely due to the Internet. Customers can now easily find what they want at a price they are willing to pay and get it in a short period of time.

Ironically, planning complexity is largely self-induced in the face of these shorter customer tolerance times. Most companies have made strategic decisions that have directly made it much harder for them to effectively do business. Product variety has risen dramatically. Supply chains have extended around the world driven by low-cost sourcing. Product complexity has risen. Outsourcing is more prevalent. Product life and development cycles have been reduced.

This has served to create a huge gap between customer expectations and the reality of what it takes to fulfill those expectations reliably. This will not get better anytime soon. The proliferation of quicker delivery methods such as drones will simply serve to widen this disparity between customer tolerance time and the procurement, manufacturing, and distribution cycle times.

Add to this an increased amount of regulatory requirements for consumer safety and environmental protection, and there are simply more complex planning and supply scenarios than ever before. The complexity comes from multiple directions: ownership, the market, engineering and sales, and the supply base. Ultimately, this complexity manifests itself with a high degree of variability and volatility. This variability is making it much more difficult to generate realistic plans and maintain the expectation that things will go according to plan.

But are conventional planning systems just not implemented correctly, or are they having difficulty keeping up with these circumstances, or are they, in fact, making things worse? Could these systems actually be exacerbating the inherent increased levels of variability in this “new normal”? Could they be producing plans that are both exceedingly unrealistic to start with and exceedingly susceptible to the increased variability at the execution level? If so, they are causing organizations to expend massive amounts of resources to compensate operationally while limiting the potential for improvement through methods like Six Sigma, Lean, and Theory of Constraints.

In order to answer these questions, we will need to expose another key component of our flow equation—the component that eludes most companies in today’s complex and volatile supply chain environments.

The Case for Relevance

There is an important factor in managing variability that must be recognized; without it the quest to reduce or manage variability at the systemic level is a quixotic one at best. This missing element is labeled “Visibility” in Figure 1-12.


FIGURE 1-12 Adding visibility to the equation

Visibility is defined simply as relevant information for decision making.11 A company cannot just indiscriminately move data and materials quickly through a system and expect to be successful. Today organizations are frequently drowning in oceans of data with little relevant information and large stocks of irrelevant materials (too much of the wrong stuff) and not enough relevant materials (too little of the right stuff). When this occurs, there is a direct and adverse effect to return on investment. Sophisticated analytics of bigger and bigger databases does not solve the problem but rather deepens the ocean of data.

Finding a Core Problem

Thus the flow of information and materials must be relevant to the required output or market expectation of the system. To be relevant, both the information and materials must synchronize the assets of a business to what the market really wants; no more, no less. Having the right information is a prerequisite to having the right materials at the right time. With this in mind, Plossl’s law can be amended to:

All benefits will be directly related to the speed of flow of relevant information and materials.

Note that this formula starts not at flow but at what makes information relevant. If we don’t fundamentally grasp how to generate and use relevant information, then we cannot operate to flow. Moreover, if we are actively blocked from generating or using relevant information, then even if people understood there was a problem, they would be powerless to do anything about it. Thus, we have reached the core problem plaguing most manufacturers today: the inability to generate and use relevant information to drive ROI. Without addressing this core problem, there can be no systemic solution for flow.12 Figure 1-13 shows the core problem area of the equation versus the area associated with Plossl’s law.


FIGURE 1-13 Core problem area of the equation (from Smith and Smith13)

Intuitively, people in organizations know that they must find and use relevant information for decision making. Yet many of those people recognize that their systems are not giving them the visibility that they need. Another question from the 2014 poll of over 1,000 Certified Management Accountants across 41 countries conducted by the Institute of Management Accountants clearly shows a problem. The question was:

How would you rate your ERP system’s ability to focus on the relevant information?

a. Poor 22.5%
b. Moderate 60.8%
c. Good 16.7%

More than 83% rated their systems’ ability to provide relevant information either poor or moderate.

A Deeper Understanding of the Bullwhip Effect

With this expansion and segmentation of the equation, we can see the bullwhip effect in a slightly different light. The bullwhip effect is the manifestation of the core problem area in Figure 1-13. Figure 1-14 provides an amended view of the bullwhip effect.


FIGURE 1-14 Restatement of the bullwhip effect

As you can see in Figure 1-14, distortions to relevant information go up the chain, growing in size and causing wider and wider oscillations in terms of both quantity and timing requirements. Distortions to relevant materials come down the chain as delays and shortages accumulate. These distortions directly contribute to the bimodal inventory distribution and its related effects seen at the organizational level, to the need for users to employ alternative methods to make sense of it all, and, finally, to why return on investment performance is lagging.

Summary

With the restatement of Plossl’s law and the deeper understanding of the bullwhip effect, we must consider a restatement of the true objective of planning:

To ensure the protection and promotion of the flow of relevant information and materials

Is it possible to meet this objective with conventional planning systems? Before we can answer that question, we must thoroughly understand how those systems actually work. Chapter 2 will provide that framework. After we have a better understanding of how conventional planning works, we can begin to understand whether those conventional planning systems can meet the challenge, or whether there is, in fact, a critical flaw that directly produces less relevant information and materials and exacerbates the bullwhip effect. This will be a key point of emphasis in Chapter 3.
