The Demand Driven Adaptive Enterprise, by Carol Ptak


CHAPTER 1

Don’t Be a Dodo

“It is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself.”

The above quote is often falsely attributed to Charles Darwin. While undoubtedly inspired by Darwin’s work, the quote actually originated with Leon Megginson, Professor Emeritus at Louisiana State University. Megginson wrote several books on small business management, published over 100 articles, and won numerous awards for teaching and research.1 Regardless of the source of the quote, the message for business leadership should be powerful: adapt or die.

The dodo bird is an extinct flightless bird that was native to the island of Mauritius, first recorded by Dutch sailors in 1598. When the Dutch colonized the island, they brought with them dogs and pigs. This resulted in an immediate and profound change to the dodo’s native environment, one to which it could simply not adapt. The last credible sighting of a dodo was in 1662. In less than 100 years the dodo was gone. It disappeared so quickly that many thought it was a mythical creature until researchers in the middle of the nineteenth century thoroughly studied remains of the bird.2

What can be learned from the dodo? The dodo had no say in the changes to its environment; they were imposed upon it. In today’s world of volatility, uncertainty, complexity, and ambiguity (VUCA) there is a high probability that organizations will have environmental changes imposed on them that will profoundly or dramatically affect their ability to compete and/or survive. This means that organizations must find a way to quickly sense and adapt to changes in the environment. What stands in the way?

Today’s conventional management practices have tremendous amounts of inertia driven by software, consulting, accounting, and academic experts. Many of these practices trace their origins back to the 1930s and 1950s. Yet the world today looks nothing like it did at that time. Companies must adapt and innovate or their very existence is threatened. Consider this astonishing research from the Harvard Business Review in an article titled “The Biology of Corporate Survival”:

“We investigated the longevity of more than 30,000 public firms in the United States over a 50-year span. The results are stark: businesses are disappearing faster than ever before. Public companies have a one in three chance of being delisted in the next five years, whether because of bankruptcy, liquidation, M&A, or other causes. That’s six times the delisting rate of companies 40 years ago. Although we may perceive corporations as enduring institutions, they now die, on average, at a younger age than their employees. And the rise in mortality applies regardless of size, age, or sector. Neither scale nor experience guards against an early demise.

We believe that companies are dying younger because they are failing to adapt to the growing complexity of their environment. Many misread the environment, select the wrong approach to strategy, or fail to support a viable approach with the right behaviors and capabilities.”3

But what to change to? How to change and drive adaptation? Is there a safe and effective path to transform a company from a basic planning and operational model developed in the 1950s and measured by financial accounting principles developed in the 1970s and ’80s to an agile and adaptive enterprise capable of staying ahead of today’s hypercompetitive markets and highly complex supply chains? This has been the focus of the authors and their organization, The Demand Driven Institute, since 2011—to articulate a comprehensive methodology that enables a company to sense changes from the market, adapt planning, production and distribution, and drive innovation in real time, resulting in sustainable and dramatic improvements to ROI.

First, the fundamentals. There are three basic necessities that management must always be carefully considering and managing in order to avoid organizational collapse and to sustain and drive better performance:

Working capital is the capital of a business that is used in its day-to-day trading operations. It is typically calculated as the current assets minus the current liabilities. Important considerations include inventory levels, available cash, accounts receivable, the level of available credit, and accounts payable. It is an effective way to measure the immediate overall company health.

Organizational contribution margin is the rate at which the company generates cash within certain periods. It is total revenue minus variable costs and period operating expenses.

Customer base is the base of business that provides the sales volume of the organization. This includes market share, sales volume, product and/or services innovation, service levels, and quality.
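For readers who like to see the arithmetic, the first two necessities reduce to simple formulas. The sketch below is purely illustrative; the function names and figures are invented for this example, not drawn from the text:

```python
# Illustrative only: account names and amounts are hypothetical.

def working_capital(current_assets: float, current_liabilities: float) -> float:
    """Working capital = current assets - current liabilities."""
    return current_assets - current_liabilities

def contribution_margin(revenue: float, variable_costs: float,
                        period_operating_expenses: float) -> float:
    """Organizational contribution margin for a period:
    revenue - variable costs - period operating expenses."""
    return revenue - variable_costs - period_operating_expenses

# Hypothetical period figures
wc = working_capital(current_assets=1_200_000, current_liabilities=800_000)
cm = contribution_margin(revenue=2_500_000, variable_costs=1_400_000,
                         period_operating_expenses=700_000)
print(wc)  # 400000
print(cm)  # 400000
```

Customer base, by contrast, is a composite of measures (market share, volume, service, quality) rather than a single formula.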

Figure 1-1 depicts these three critical considerations in a strategic target chart.4 The figure has concentric circles with a green middle, followed by a yellow ring, then a red ring, and finally a dark red ring. The area is equally divided into three sections: working capital, contribution margin, and customer base. The outer circles are the biggest cause of concern. As the measure moves farther away from the center, the threat to the organization grows as performance pushes closer to and through the “edge of collapse.” The very outer ring is system collapse or failure. Any one of these three crucial necessities pushed too far outward will cause an organization to fail.


FIGURE 1-1 The strategic target chart

The black dots in Figure 1-1 represent the organization’s position for each critical consideration. Conceptually, these dots will never touch. If they are consistently green, the organization and its expectations grow; the circle (and its rings) simply expands outward. Additionally, there are lines connecting the black dots depicting the tension and connections between the three areas. For example, if the customer base erodes, contribution margin and working capital will be adversely affected. If contribution margin erodes, working capital will be adversely affected.

When any one of these considerations moves into the edge of collapse ring, signal strength will intensify, and the organization will call, “All hands on deck!” to deal with that specific threat. However, there is a strong connection and tension among the three. Organizations must be careful not to overcompensate in any one area in a manner or for a duration that might drive another over the edge instead. For example, to recover an acceptable customer base, the decision could be made to dramatically reduce price, which in turn erodes the contribution margin. Management must constantly fight this battle in the current highly complex and volatile environment, now and in the future—that is their primary job!

A non-business analogy is the human body, a highly complex system that must be able to perform three basic functions. The human body must be able to perform respiration (draw breath and pass oxygen to the blood) at a sufficient rate. The body must also be able to circulate oxygenated blood throughout the body in a constant loop (pulse and blood pressure). Finally, the body must be able to maintain a fairly tight control zone of temperature or risk vital organ failure. These three basic functions essentially define what are known as the “vital signs.”

The green zone centers for each of these vital signs are well known throughout the medical community and depend on patient characteristics such as age and sex. The green zones below are defined by the Cleveland Clinic.5

Respiratory rate. A person’s respiratory rate is the number of breaths taken per minute. The normal respiration rate for an adult at rest is 12 to 20 breaths per minute. A respiration rate under 12 or over 25 breaths per minute while resting is considered abnormal.

Pulse. Your pulse is the number of times your heart beats per minute. Pulse rates vary from person to person. Your pulse is lower when you are at rest and increases when you exercise (because more oxygen-rich blood is needed by the body when you exercise). A normal pulse rate for a healthy adult at rest ranges from 60 to 80 beats per minute.

Blood pressure. Blood pressure is the measurement of the pressure or force of blood against the walls of your arteries. Healthy blood pressure for an adult, relaxed at rest, is considered to be a reading less than 120/80 mm Hg. A systolic pressure of 120–139 or a diastolic pressure of 80–89 is considered “prehypertension” and should be closely monitored. Hypertension (high blood pressure) is considered to be a reading of 140/90 mm Hg or higher.

Body temperature. The average body temperature is 98.6 degrees Fahrenheit, but normal temperature for a healthy person can range from 97.8 to 99.1 degrees Fahrenheit or slightly higher. Any temperature that is higher than a person’s average body temperature is considered a fever. A drop in body temperature below 95 degrees Fahrenheit is defined as hypothermia.
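The green zone ranges above can be expressed as a simple classifier, mirroring the strategic target chart idea of zones around a healthy center. This is a hedged sketch for illustration only: the function names and zone labels are assumptions, and the thresholds simplify the ranges quoted above (for instance, 21 to 25 breaths per minute is borderline in the text but is lumped with "abnormal" here).

```python
# Simplified zone classifiers for adult resting vitals, based on the
# ranges quoted above. Names and labels are illustrative assumptions.

def respiratory_zone(breaths_per_min: float) -> str:
    # Text: 12-20 normal; under 12 or over 25 abnormal (21-25 borderline,
    # treated here as outside the green zone for simplicity).
    return "green" if 12 <= breaths_per_min <= 20 else "abnormal"

def blood_pressure_zone(systolic: float, diastolic: float) -> str:
    if systolic >= 140 or diastolic >= 90:
        return "hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "prehypertension"
    return "green"

def temperature_zone(deg_f: float) -> str:
    # Text: below 95 F is hypothermia; above the normal range is a fever.
    if deg_f < 95.0:
        return "hypothermia"
    if deg_f <= 99.1:
        return "green"
    return "fever"

print(respiratory_zone(16))          # green
print(blood_pressure_zone(135, 85))  # prehypertension
print(temperature_zone(98.6))        # green
```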

Any person who exhibits increasing difficulty with any one or a combination of these vital functions will be at increasing risk of expiring. If that person were at a hospital during that difficulty, there would be an escalation of monitoring, attention, and resources devoted to their care as they progressed through admittance, a medical unit, a Definitive Observation Unit (DOU), and finally an Intensive Care Unit (ICU).


FIGURE 1-2 Alternate strategic target chart scenario

Figure 1-2 depicts three different scenarios. In each case, the relative positions of the three considerations are plotted. It should be pointed out that all three of these scenarios are simply point-in-time references; they could be the same company at different points in time, or three different companies at the same or different points in time. The position of and tension between these three important considerations is constantly shifting.

Which scenario is healthiest? The scenario on the left may represent a company that is performing relatively well with regard to contribution margin and the market but is suffering from a working capital crisis. The middle scenario depicts a company that is failing to generate cash and is suffering from a working capital crunch. The scenario on the right is a company that is generating a high amount of cash, has abundant working capital, and has a well-defended and growing base of customers.

But how can leadership best hope to manage these basic elements in a VUCA environment? The key to both the short-term and long-term management of these elements can be found in concepts called “coherence” and “resiliency.”

Striving for Coherence and Resiliency

Coherence and resiliency are key terms in the emerging science of complex adaptive systems. What is a complex adaptive system (CAS)? First, let’s understand that any complex system is governed by three important principles:

Nonlinearity. Complex systems are best described as web connections, not linear connections. They loop and feed back on themselves interactively. The degree of complexity resulting from dynamic interactions can reach an enormous level. Dynamic interactions are characterized by high degrees of interdependency, nonlinear interactions, short-range interactions, and positive and negative feedback loops.

Extreme sensitivity to small initiating events. Many such initiating events occurring in a short time frame can produce significant nonlinear outcomes that may become extreme events. These events are often referred to as “lever point phenomena” or “butterfly events.”

Cause and effect are not proportional; a part that costs 10 cents can halt the assembly of multimillion-dollar end items as quickly as a $10,000 part.6

The word “adaptive” introduces the element of how a complex system changes or reconfigures itself through a process known as “emergence.” Once emergence has occurred, then feedback and selection occur over a period of time resulting in further reconfiguration to the system. When complex systems are co-mingled or intertwined (such as highly integrated supply chains) these events and steps tend to cascade across systems, making a highly complex and evolving picture. Figure 1-3 is a modified version of Figure 10.1 from the book Demand Driven Performance—Using Smart Metrics. It lists nine characteristics of a CAS.


FIGURE 1-3 Complex adaptive system characteristics7

At this time we will focus on only two of these characteristics. The first is coherence. A complex adaptive system’s “success” depends on the coherence of all of its parts. A subsystem’s purpose has to be in alignment with the purpose of the greater system in order for there to be coherence. Without that alignment, the subsystem acts in a way that endangers the greater system it depends on. Coherence must be at the forefront of determining the signal set components, triggers, and action priorities. To maintain coherence, adaptive agents must ensure their signal sets contain the relevant information to direct their actions and are not at cross-purposes with the goals of the systems they depend on.8

The concept of coherence is consistent with the systemic approaches of thought leaders such as Ohno, Goldratt, and Deming and their respective disciplines of lean, Theory of Constraints, and Total Quality Management. All of these disciplines urge management to organize and operate in a manner that carefully aligns local actions to the global objective. Deming and Goldratt in particular were outspoken about the failure of management and executives to understand and effectively embrace this concept, one that seems rooted in basic common sense.

The second characteristic of a CAS to be explored is resiliency. Resiliency allows a system to respond to a disturbance while maintaining equilibrium within its system boundary. In supply chain terms, resilience is how well a system can return to stability when it experiences random or self-imposed variation. Resilience arises from a subsystem’s ability to respond to the feedback loops that regulate equilibrium. The ability to adapt and the diversity or flexibility of options/actions determine how quickly the system can recover and/or improve. The opposite of resiliency is rigidity.9

Obviously, if a system is insufficiently resilient relative to the level of disturbance, it is at risk of collapse. Reeves, Levin, and Ueda identified six basic risks to the resiliency of a complex system.10 Any organization wishing to avoid the threat of collapse must mitigate these risks. In the VUCA world these risks are more prevalent than ever.

The COLLAPSE risk. A change from within or outside the industry renders the firm’s business model obsolete. An obvious example is the impact that the emergence of online retailing giants such as Amazon had on the retail industry. At one time Sears, Roebuck and Co. was the largest retailer on the planet, headquartered in the tallest building in the Western Hemisphere, the Sears Tower. Sears initially built its empire through catalog sales, selling hardware, appliances, tools, and even plans for homes (The Craftsman). By 1980 the vast majority of the United States population had a Sears store or outlet within an hour’s drive. At the time of this writing Sears is but a shell of what it had been, struggling to meet cash commitments and desperately trying to find a way to survive. Sears was bought by Kmart to form Sears Holdings, and the Sears Tower is now the Willis Tower. The building is no longer the tallest in the Western Hemisphere, and United Airlines now occupies much of it. Sears Holdings continues to sell brands and close stores from its headquarters in Hoffman Estates, IL, a suburb of Chicago.

The CONTAGION risk. Shocks in one part of the business spread rapidly to other parts of the business. In 2018 Ford had to close its profitable F150 assembly plant due to a fire at a Chinese-owned supplier located in Michigan. The fire affected many auto suppliers, but the hardest hit was Ford, specifically its F150 truck. The F150 is a multibillion-dollar brand for Ford and substantially drives Ford’s profits.

The FAT-TAIL risk. Rare but large shocks, such as natural disasters, terrorism, and political turmoil. Examples include the tsunami in Japan that affected the automotive and electronics industries. The September 11, 2001, attacks stopped industry across the United States, and many companies never recovered, especially in the airline industry. Massive consolidation and cutbacks in flights and service redefined the airline industry.

The DISCONTINUITY risk. The business environment evolves abruptly in ways that are difficult to predict. The financial crisis of 2008 created the biggest disruption to the U.S. housing market since the Great Depression. Increased foreclosure rates in 2006 and 2007 led to a banking crisis in 2008. Concerns about the impact of the collapsing housing and credit markets on the general U.S. economy caused the U.S. President to announce a limited bailout of the country’s housing market for homeowners who were unable to pay their mortgage debt. This spilled over into other markets. People simply did not have money to spend. The automotive industry was shocked by the bankruptcy of General Motors. GM was the world’s largest car maker and now it faced collapse because it no longer had sufficient cash to continue operation. It took the U.S. government stepping in to save a national icon and the jobs associated with it.

The OBSOLESCENCE risk. The enterprise fails to adapt to changing consumer needs, competitive innovations, or altered circumstances. BlackBerry was the first “smart” phone on the market. Market acceptance of this innovative device that could handle email, phone calls, the Internet, and a variety of other tasks, however, was leapfrogged by Apple’s iPhone innovation. BlackBerry quickly came to be viewed as obsolete because it lacked a new, visually intuitive user interface as well as access to thousands of specific “apps.”

The REJECTION risk. Participants in the business’s ecosystem reject the business as a partner. The impact of social media has dramatically increased this type of risk. In 2017 a passenger filmed United Airlines personnel forcibly dragging a bloodied passenger off one of its planes. The video went viral, prompting an outcry in both the United States and China (the passenger was of Chinese descent). The airline posted a profit plunge of almost 70 percent in that quarter.

With the exploration of these two characteristics (coherence and resiliency) of a CAS, consider two pivotal questions:

With regard to coherence: what is the goal of a for-profit company, and how can the subsystems’ purposes best be aligned to that goal? The three vital metrics of contribution margin, working capital, and customer base are far too remote from subsystem decisions to serve as the focus of measurement at the subsystem level. Is there some concept or principle that protects those three basic necessities at the higher level yet translates effectively all the way down to and through the subsystems and their respective operational levels? Without this answer, maintaining coherence is under constant threat. This question will be answered in the next section of this chapter.

With regard to resilience: where is the starting point for an organization to create a framework to best mitigate these six risks? The exploration of an answer will begin in Chapter 2.

Authors’ note: This is an extremely abbreviated description of complex adaptive systems. Readers seeking a deeper dive into this science should consider reading Chapter 10 of Demand Driven Performance—Using Smart Metrics (Smith and Smith, McGraw-Hill, 2014) and the additional resources listed in that chapter.

Flow as an Objective and Purpose for Systems and Subsystems

What is the objective for every organization and a purpose for its subsystems to effectively tie to and drive that objective? Is there a basic fundamental principle to focus every business?

Now more than ever, business presents a bewildering and distracting variety of products, services, materials, technologies, machines, and people skills, obscuring the underlying elegance and simplicity of business as a process. The required orchestration, coordination, and synchronization is simply a means to an end. That much is quite easy to grasp. What is more difficult for many organizations to grasp is what fundamental principle should underlie that orchestration, coordination, and synchronization.

The essence of any business is about flow. The flow of materials and/or services from suppliers, perhaps through multiple manufacturing plants and then through delivery channels to customers. The flow of information to all parties about what is planned and required, what is happening, what has happened, and what should happen. The flow of cash back from the market to and through the suppliers.

Is this some sort of inspired revelation? No. Flow has always been the primary purpose of most services and supply chains. Very simply put, you must take things or concepts, convert or assemble them into different things or offerings and then get those new things or offerings to a point where others are willing to pay you for them. The faster you can make, move, and deliver all things and offerings, the better the performance of the organization tends to be. This incredibly simple concept is best described in what is known as Plossl’s Law.

Plossl’s Law

George Plossl was an instrumental figure in the formation and proliferation of Material Requirements Planning (MRP), the original planning and information system that would eventually evolve into modern-day Enterprise Resource Planning (ERP) products. He is commonly referred to as one of three founding fathers of these manufacturing systems; Joe Orlicky and Oliver Wight are the other two.

In 1975 Joe Orlicky wrote the book Material Requirements Planning—The New Way of Life in Production and Inventory Management. This book became the blueprint for commercial software products and practitioners alike. By 1981, led by Oliver Wight, Material Requirements Planning had evolved into Manufacturing Resources Planning (commonly referred to as MRP II). MRP II went well beyond the simple planning calculations used in MRP. It incorporated and used a Master Production Schedule, capacity planning and scheduling, and accounting (costing) data. Oliver Wight passed away in 1983 and Orlicky passed away in 1986. In 1994, George Plossl took up the torch to paint a vision for the future in the second edition of Orlicky’s seminal work, Orlicky’s Material Requirements Planning.

Nineteen ninety-four, however, was an interesting and difficult time to write a book about the vision and implementation of technology. Mainframes had given way to client-server configurations and the birth of the Internet had people puzzled about its industrial application. But Plossl successfully navigated these difficulties by sticking to a key transcendent principle, one that had not been well articulated previously in manufacturing systems literature. He called it the First Law of Manufacturing. We now simply know it as Plossl’s Law.

All benefits will be directly related to the speed of flow of information and materials.

It should be noted that Plossl was focused on manufacturing-centric entities. As such, there is an obvious omission in the law: services. But providing services is also all about flow. For example, the process of obtaining a mortgage flows through a series of steps. The longer it takes, the greater the risk to the sale and the more dissatisfied the customer. Having a surgical procedure at a hospital is about flow. The longer the process, the higher the cost and risk to the patient. Repairing a downed piece of equipment is about flow. The longer the repair takes, the less revenue that piece of equipment can generate. As such, consider an amended Plossl’s Law.

All benefits will be directly related to the speed of flow of information, materials, and services.

This statement is not just simple; it is elegant. It also requires a critical caveat in the modern services and supply chain landscape. We will get to that critical caveat in due time. First, let’s further explore the substance of Plossl’s Law.

“All benefits” is quite an encompassing statement. It can be broken down into components that most companies measure and emphasize. All benefits encompass:

Service. A system that has good information and material and/or services flow produces consistent and reliable results. Most markets and customers have an appreciation for consistency and reliability. Consistency and reliability are key for meeting customer expectations, not only for delivery performance, but also for things like quality. This is especially true for industries that have shelf-life issues and erratic or volatile demand patterns.

Quality. When things are flowing well, fewer mistakes are made due to less confusion and expediting. This is not to say that quality issues will never happen, but quality issues related to poor flow will most likely be minimized. This is especially important in industries with large assemblies, deep and complex bills of material, and complicated routings to be scheduled. Frequent and chronic shortages cause work to be set aside to wait for parts, creating large work-in-process queues and then the inevitable expediting to get the work through the system.

Revenue. When service and quality are consistently high, a company is afforded the opportunity to better exploit the total market potential. This means higher revenue volume from both the protection and growth of margin and market share.

Inventories. With good flow, purchased, work-in-process (WIP), and end-item inventories will be minimized and directly proportional to the amount of time it takes to flow between stages and through the total system. The less time it takes products to move through the system, the lower the total inventory investment. The simple equation is Throughput × Lead Time = WIP, where throughput is the rate at which material exits the system, lead time is the time it takes to move through the system, and WIP is the amount of inventory contained between entry and exit. A key assumption is that the material entering the system is proportionate to the amount exiting the system. The basis for this equation is the queuing-theory result known as Little’s Law.

It is also worth noting that to maintain flow, inventories cannot be eliminated. Flow requires at least a minimal amount of inventory. Too little inventory disrupts flow and too much inventory also disrupts flow. Thus, when a system is flowing well, inventories will be “right-sized” for that level of flow. What we will find out later is that the placement and composition of inventory queues will be a critical determinant in how well flow is protected and promoted and what “right-sized” really means in terms of quantity and working capital commitment.

Expenses. When flow is poor, additional activities and expenses are incurred to correct or augment flow problems. In the short term this could mean expedited freight, overtime, rework, cross-shipping, and unplanned partial shipments. In the longer term it could mean additional and redundant resources and third-party capacity and/or storage. These additional short- and long-term efforts and activities to supplement flow are indicative of an inefficient overall system and directly lead to cash exiting the organization.

Cash. When flow is maximized, the material that a company paid for is converted to cash at a relatively quick and consistent rate. Additionally, the expedite-related expenses previously mentioned are minimized, reducing cash unnecessarily leaving the organization. This makes cash flow much easier to manage and predict and also leads to lower borrowing-related expenses.

Furthermore, the concept of flow is also crucial for project management. R&D and innovation efforts that flow well can impact and amplify all of the above benefits as the company exploits these efforts.
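The Little's Law relationship cited under Inventories (Throughput × Lead Time = WIP) can be checked with a few lines of code. The numbers and units below are illustrative assumptions, not figures from the text:

```python
# A minimal sketch of Little's Law: average WIP equals throughput
# multiplied by lead time, assuming input and output rates are balanced.

def wip(throughput_per_day: float, lead_time_days: float) -> float:
    """Average work-in-process implied by Little's Law."""
    return throughput_per_day * lead_time_days

# A line shipping 50 units/day with a 10-day flow time carries
# 500 units of WIP on average; halve the lead time and WIP halves too.
print(wip(50, 10))  # 500
print(wip(50, 5))   # 250
```

This is why compressing lead time directly reduces the inventory investment at a given rate of flow.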

What critical business equation is defined with these six basic benefits? It is an equation that defines and measures the very purpose of every for-profit organization: to protect and grow shareholder equity. This is and always has been the basic responsibility and duty of every executive.

In its simplest form the equation to quantify this purpose is:

Net Profit ÷ Investment = Return on Investment

Net profit is the total sales dollars a company collects through a particular period minus the operating expenses, cost of goods sold (COGS), and interest and taxes within that same period. Investment is simply the money captured in the system needed to produce the output. The simplicity of this formula can make it easy to manipulate depending on how one defines time periods, but essentially it is a measure of the money that can be returned by a system versus the money it takes to start and maintain that system. Thus, the output of the equation is called return on investment. The higher the rate of return on investment (both in the short run and the anticipated long run), the more valuable the shareholder equity.

Of course, a full DuPont analysis would provide a more detailed perspective incorporating profit margin, asset turnover, and financial leverage. The above equation is simply a conceptual shortcut that can be used to make a crucial connection between flow and return on investment via Plossl’s Law.
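As a worked illustration of the shorthand ROI equation, here is a short sketch in which every figure is hypothetical:

```python
# Conceptual sketch of the shorthand ROI equation; all amounts invented.

def net_profit(revenue: float, operating_expenses: float,
               cogs: float, interest_and_taxes: float) -> float:
    """Net profit for a period: revenue minus operating expenses,
    cost of goods sold, and interest and taxes."""
    return revenue - operating_expenses - cogs - interest_and_taxes

def roi(net_profit_amount: float, investment: float) -> float:
    """Return on investment = net profit / investment (captured money)."""
    return net_profit_amount / investment

profit = net_profit(revenue=2_500_000, operating_expenses=700_000,
                    cogs=1_400_000, interest_and_taxes=100_000)
print(profit)                   # 300000
print(roi(profit, 1_500_000))   # 0.2  (i.e., 20% ROI)
```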

Hopefully, that connection is now readily apparent. The previously mentioned benefits of flow (perhaps with the exclusion of taxes) are all direct inputs into the ROI equation. This makes flow the single biggest lever in determining the objective of a for-profit organization. This can be expressed as the equation in Figure 1-4. This depiction first appeared in the book Demand Driven Performance—Using Smart Metrics (Smith and Smith, McGraw-Hill, 2014, p. 71).


FIGURE 1-4 Connecting flow to return on investment (ROI)

Explaining this equation requires first defining the elements and then describing how they relate to each other.

Flow. The rate at which a system converts material to product required by a customer. If the customer does not want or take the product, then that output does not count as flow. It is retained in the system as investment (captured money).

Cash velocity. The rate of net cash generation; sales dollars minus truly variable costs (aka contribution margin) minus period operating expenses.

Net profit/investment. Net profit divided by investment (captured money) is the equation for ROI.

The delta and yield arrows in the equation explain the relationships between the components of the equation. Changes to flow directly yield changes to cash velocity in the same direction. As flow increases so does cash velocity. Conversely, as flow decreases so does cash velocity. As cash velocity increases so does return on investment as the system is converting materials to cash more quickly.

When cash velocity slows down, the conversion of materials to cash slows down. The organization is simply accomplishing less with more. This scenario typically results in additional cash velocity issues related to expediting expenses. Period expenses rise (overtime) or variable costs increase (fast freight, additional freight, and expedite fees). This directly reduces the net profit potential within the period and thus further erodes return on investment performance.
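The chain described above, where poor flow inflates variable costs and period expenses and thereby erodes cash velocity and ROI, can be sketched with invented numbers:

```python
# Toy comparison of good vs. poor flow for one period. All figures are
# hypothetical; "poor flow" adds fast freight (variable cost) and
# overtime (period expense) at the same sales level.

def cash_velocity(sales: float, variable_costs: float,
                  period_operating_expenses: float) -> float:
    """Sales minus truly variable costs minus period operating expenses."""
    return sales - variable_costs - period_operating_expenses

investment = 1_000_000  # assumed captured money in the system

good_flow = cash_velocity(sales=900_000, variable_costs=500_000,
                          period_operating_expenses=250_000)
poor_flow = cash_velocity(sales=900_000, variable_costs=540_000,
                          period_operating_expenses=290_000)

print(good_flow, good_flow / investment)  # 150000 0.15
print(poor_flow, poor_flow / investment)  # 70000 0.07
```

Identical sales, but the expedite-related costs of poor flow cut the period's cash generation, and therefore ROI, by more than half in this example.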

The River Analogy

The simple analogy for this equation is the manner in which a river works. Water flows in a river as an autonomous response to gravity. The steeper the slope of the riverbed, the faster the water flows. Additionally, the fewer the obstructions in the river, the faster the water runs.

In service and supply chain management, materials and/or services flow through the network like water through a river. Materials are combined, converted, and then moved to points of consumption. Services are offered, scheduled, and delivered to customers. The autonomous response of these flows is demand. What else could it or should it be? Ideally, the stronger the demand, the faster the rate of flow of materials and services. And like rivers, service and supply chains have obstructions or blockages created by variability, volatility, and limitations in the “river bed.” Machines break down, critical components are often unavailable, yield problems occur, choke points exist, capacity bottlenecks exist, etc. All of these issues are simply impediments to flow and result in “pools” of inventory with varying depth. A river without flow is not a river, it is a lake. Operations without flow is a disaster.

With this analogy we begin to realize that flow is the very essence of why the Operations subsystem of manufacturing and supply chain companies even exists. The Operations subsystem is typically divided into functions, each of which has a primary objective for which it is responsible and accountable. Figure 1-5 is a simple table showing typical Operations functions and their primary objectives.


FIGURE 1-5 Typical functions in operations

All of these objectives are protected and promoted by encouraging flow. Under what scenario does a cost-based focus enable you to synchronize supply and demand or sequence activity to meet commitments? In fact, a cost-based focus most often leads to the exact opposite of these objectives. Thus, if Operations and its functions want to succeed in being truly effective, there is really only one focus: flow. Flow must become the common framework for communications, metrics, and decision making in Operations.

Let’s expand this view to the organization as a whole. An organization is typically divided into many subsystems, not just Operations. Each subsystem is typically tasked with its own primary objective. Figure 1-6 is another simple table showing the typical subsystems of a manufacturing and/or supply chain-centric company.


FIGURE 1-6 Typical organizational functions

All of these functional objectives require flow to be promoted and protected to drive maximum effectiveness. When things are flowing well, shareholder equity, sales performance, market awareness, asset utilization, and innovation are promoted and protected and costs are under control. This was discussed extensively earlier in this chapter. Thus, flow must become the common framework for communications, metrics, and decision making across the organization.

Additionally, flow is also a unifying theme within most major process improvement disciplines and their respective primary objectives.

Theory of Constraints (Goldratt) and its objective of driving system throughput

Lean (Ohno) and its objective to reduce waste

Six Sigma (Deming) and its objective to reduce variability

All of these objectives are advanced by focusing on flow first and foremost. This should not be surprising since we have already mentioned these thought leaders regarding systemic coherence. When considered with Plossl’s First Law of Manufacturing, the convergence of ideas around flow is quite staggering. There should be little patience for ideological battles and turf wars between these improvement disciplines; it is a complete waste of time. All need the same thing to achieve their desired goal: flow. Among these disciplines, flow becomes the common objective from a common strategy based on simple common sense grounded in basic physics, economic principles, and complex systems science.

The concept and power of flow is not new, but today it seems almost an inconvenient afterthought that managers must, if pressed, acknowledge as important. It powered the rise of industrial giants and gave us much of the corporate structure in use today. Leaders such as Henry Ford, F. Donaldson Brown, and Frederick Taylor made it the basis for strategy and management. The authors believe these leaders would be astonished at how off the mark modern companies are when it comes to flow; they are surviving in spite of themselves.

Thus, Plossl’s Law, while incredibly simple, should not be taken lightly. This one little statement has always defined the way to drive shareholder equity, and it was articulated by one of the main architects of conventional planning systems! Embracing flow is the key not just to surviving but to adapting, to taking a leadership position, and to being a fierce and dangerous competitor. It is the first step to becoming a Demand Driven Adaptive Enterprise. In order for that first step to be taken, however, a huge obstacle must be overcome: the universal fixation, emphasis, and obsession over cost.

Cost and Flow

Executives of corporations around the world obsess over cost performance, most particularly unit cost. It dominates discussions on a daily basis and constantly influences a majority of strategic, tactical, and operational decisions throughout the organization. We need to understand what unit cost is (and is not) and why it became so prominent.

Any unitized cost calculation has always been based on past activity within a certain period. The calculation of standard unit cost attempts to assign a cost to an individual product and/or resource based on volume and rate over a particular time period. Essentially, fixed and variable expenses within a period are accumulated and divided by volume within that period to produce a cost per unit. This cost per unit can also be calculated by resource and location. This cost then becomes a foundation for many metrics and decisions at the operational, tactical, and strategic levels in the present and for the future.

Where did unit cost come from? In short, unit cost can trace its origin to the 1934 adoption of Generally Accepted Accounting Principles (GAAP) by the United States government as an answer to the U.S. stock market crash of 1929. Many industrialized countries have since followed this example. GAAP is an imposed requirement for the fair and consistent presentation of financial statements to external users (typically shareholders, regulatory agencies, and tax collection entities). GAAP reporting and the unitized cost calculations based on it were then incorporated into information systems circa 1980 with the advent of manufacturing resource planning (MRP II), the precursor to enterprise resource planning (ERP). That incorporation continues today in every major ERP system.

Why was GAAP incorporated into information systems? It was not driven by the need to manage cost today, or to make management decisions or develop strategy in the future; it was driven by the need to fulfill the financial reporting requirements of GAAP in a much easier and quicker fashion. Even today most ERP implementations begin with the financial module. In the United States the Sarbanes-Oxley Act of 2002 drove ERP software companies to provide technology that allowed even faster financial reporting using these rules.

GAAP, however, does not and should not care about providing internal management reporting. Why? Because GAAP’s entire purpose is to provide a consistent reporting picture about what happened over a past period, not what should be done today or suggestions or predictions for the future. GAAP is simply a forensic snapshot of past performance within a certain defined time period, meaning that if it is done as required by the law, it is always 100 percent accurate in determining past cost performance information only.

The incorporation of these cost data quickly led to their numbers and equations becoming the default way to judge performance and make future decisions. Why? The higher-level metrics like return on investment, contribution margin, working capital, etc. are much too remote to drive through the organization. Management needed something to drive down through the organization to measure performance. Cost numbers were readily available and constantly updated in the system.

Now, unfortunately, most of management actually believes or accepts that these numbers are a true representation of cash or potential. The assigned standard fixed cost rate, coupled with a failure to understand the basic aspects of a complex system, leads managers to believe that every resource minute saved anywhere is computed as a dollar cost savings to the company. GAAP unit costs are used to estimate both cost improvement opportunities and cost savings for batching decisions, improvement initiatives, and capital acquisition justifications. In reality the “cost” being saved has no relationship to cash expended or generated and will not result in an ROI gain of the magnitude expected. Cost savings are being grossly overstated.11

In 2018 a joint study released by APICS and the Institute of Management Accountants named three significant issues regarding costing information that supply chain professionals receive from costing systems.

“An overreliance on external financial reporting systems: many organizations rely on externally-oriented financial accounting systems that employ oversimplified methods of costing products and services to produce information supporting internal business decision making.

Using outdated costing models: traditional cost accounting practices can no longer meet the challenges of today’s business environment but are still used by many accountants.

Accounting and finance’s resistance to change: With little pressure from managers who use accounting information to improve data accuracy and relevance, accountants are reluctant to promote new, more appropriate practices within their organizations”.12

It is the job of management accounting, which is a different profession with an entirely different body of knowledge than financial cost accounting, to provide meaningful and relevant information for decision making. While the body of knowledge still exists, there are few with real deep expertise in it. What happened to all the management accountants? They were essentially stripped out of mid-management in the 1980s, deemed largely irrelevant because executives believed that the system could now effectively tell the organization what to do via the automated cost data.

Two important points must be made about cost at this point. First, any measurement based on past activity is guaranteed to be wrong in the future. Assuming that past cost performance will be indicative of future cost performance in the VUCA environment is simply nonsensical.

Second and most importantly, good flow control actually yields the best cost control. If things flow well within a period, the previously described benefits of flow occur during that period. If those previously described benefits happen, then fixed (depending on the length of the period) and variable expenses are effectively controlled in combination with better volume performance. This will be reflected in the GAAP statements produced for that entire past period. Thus, emphasizing flow will actually be more effective even for cost accountants!

This leads us to an interesting yet simple rule about cost and flow, a corollary to Plossl’s Law:

When a business focuses on flow performance, better cost performance will follow. The opposite, however, is not the case.

It should be noted that embracing a flow-based focus is not a license or a strategy to overspend on massive amounts of capacity, constantly employ overtime, and expedite everything. That is not a flow-based focus. Those tactics become necessary mainly because a company is not primarily focused on flow.

In Chapter 2 we will dive into this corollary in more depth. At this point, however, the conclusion should be relatively solid: if a business wants to manage cost performance it must first and foremost design and manage to flow performance. The mistake of using GAAP-generated cost numbers and metrics as operational tools is actually a self-imposed limitation by an organization’s management. But what can we use in its place? It has to be something that emphasizes flow now and in the future.

With this flow and systems perspective there are at least two additional corollaries to Plossl’s Law that are worth mentioning at this point:

Something is productive if and only if it leads to better promotion and protection of system flow.

Something is deemed efficient if and only if it leads to better promotion and protection of system flow.

Variability and Flow

To fundamentally understand how to emphasize flow now and in the future, we must first understand the biggest determinant in the management of flow: the effective management of variability. In Figure 1-7 we see an expanded form of the equation previously introduced. Variability is defined as the summation of the differences between our plan and what actually happens. As variability rises in an environment, flow is directly impeded. Conversely, as variability decreases, flow improves.


FIGURE 1-7 Connecting variability to the flow equation

The impact of variability must be better understood at the systemic rather than the discrete detailed process level. The war on variability that has been waged for decades has most often been focused at a discrete process level with little focus or identified impact on the total system; Deming would not be pleased. Variability at a local level in and of itself does not necessarily impede system flow. What impedes system flow is the accumulation and amplification of variability across a system. Accumulation and amplification happen due to the nature of the system, the manner in which the discrete areas and environment interact (or fail to interact) with each other. Remember the three characteristics of complex systems: nonlinearity, extreme sensitivity to small initiating events, and a disproportion between cause and effect. Smith and Smith proposed the Law of System Variability:

The more that variability is passed between discrete areas, steps or processes in a system, the less productive that system will be; the more areas, steps or processes and connections between them, the more erosive the effect to system productivity.13

Quite simply, Figure 1-7 says that when things don’t go according to plan, flow is directly impacted. Is this really surprising? Methods like Six Sigma, Lean, and Theory of Constraints have recognized the need to control variability for decades. Unfortunately, many of those methods point to or get focused on limited components or subsystems of an organization or supply chain. Most of them attempt to compensate for variability after a plan has been developed and implemented (a plan that is typically built on a design that assumes everything will go according to plan, an extremely poor assumption).
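The erosive effect described by the Law of System Variability can be made concrete with a small simulation, a minimal sketch of our own construction (not from the book). Imagine a perfectly balanced serial line in which every step can process a random one to six units per period and no buffers absorb the variability, so each period’s system output is gated by the slowest step that period. Every step averages 3.5 units, yet system output falls as steps and connections are added.

```python
# A minimal illustrative sketch: a balanced serial line with no buffers.
# Each step processes a random 1-6 units per period; the system's output
# each period is limited by its slowest step, so passed variability
# erodes throughput as the number of steps grows.
import random

def average_throughput(steps, periods=100_000, seed=42):
    rng = random.Random(seed)
    total = 0
    for _ in range(periods):
        total += min(rng.randint(1, 6) for _ in range(steps))
    return total / periods

print(round(average_throughput(1), 2))  # close to 3.5, the single-step average
print(round(average_throughput(5), 2))  # well below 3.5 for the same "capacity"
```

No individual step got worse; only the number of connections passing variability grew, and system productivity fell, exactly as the law predicts.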

The Rise of VUCA

The world is a much different place today than it was 50 years ago, when the conventional operational rules and systems were developed. Figure 1-8 is a list of some dramatic changes in supply chain–related circumstances that have occurred since 1965.


FIGURE 1-8 Supply chain circumstances in 1965 versus today

The circumstances under which Orlicky and his cadre developed the rules behind MRP and surrounding techniques have dramatically changed. Customer tolerance times have shrunk dramatically, driven by low informational and transactional friction largely due to the Internet. Customers can now easily find what they want at a price they are willing to pay for it and get it in a short period of time.

Ironically, much of this complexity is largely self-induced in the face of these shorter customer tolerance times. Most companies have made strategic decisions that have directly made it much harder for them to effectively do business. Product variety has risen dramatically. Supply chains have extended around the world, driven by low-cost sourcing. Product complexity has risen. Outsourcing is more prevalent. Product life and development cycles have shortened.

This has served to create a huge disconnect between customer expectations and the reality of what it takes to fulfill those expectations reliably and consistently. This will not get better any time soon. The proliferation of quicker delivery methods such as drones will simply serve to widen the disparity between customer tolerance time and the procurement, manufacturing, and distribution cycle times. Many supply chains are ill prepared for this intensifying storm.

Add to this an increased amount of regulatory requirements for consumer safety and environmental protection, and there are simply more complex planning and supply scenarios than ever before. The complexity comes from multiple directions: ownership, the market, engineering, sales, and the supply base. Ultimately, this complexity manifests itself in a high degree of volatility, uncertainty, and ambiguity. It is making it much more difficult to generate realistic plans and maintain the expectation that things will go according to plan, especially when those plans are based on GAAP-derived drivers.

The key to protecting and promoting flow at the system level is to understand and manage variability at the system level. What then is the key to managing variability? In order to answer that question we will need to expose another key component of the flow equation, the component that eludes most companies in today’s complex and volatile supply chain environments.

Relevance Found

There is an important factor in managing variability that must be recognized; without it, the quest to reduce or manage variability at the systemic level is a quixotic one at best. This missing element is labeled as “Visibility” in Figure 1-9.


FIGURE 1-9 Adding visibility to the equation

Visibility is defined simply as access to relevant information for decision making.14 This provides an extremely important caveat to Plossl’s Law. A company cannot just indiscriminately move data and materials quickly through a system and expect to be successful. Today organizations are frequently drowning in oceans of data with little relevant information and large stocks of irrelevant materials (too much of the wrong stuff) and not enough relevant materials (too little of the right stuff). Furthermore, sophisticated real time analytics of bigger and bigger databases will not solve the problem but instead will create a deeper, wider, and stormier ocean of data and materials unless we understand how to better sift through that ocean to determine what is truly relevant for decision making now and in the future.

Note that this formula now starts not with flow but with what makes information relevant. If we don’t fundamentally grasp how to generate and use relevant information, then we cannot hope to manage variability and consequently facilitate flow. Moreover, if we are actively blocked from generating or using relevant information by systems, then even if people (adaptive agents) understood there was a problem, they would be essentially powerless to do much about it.

What makes the flow of information, materials, and services relevant is its relationship to the required output or market expectation of the system now and in the future, not what was accomplished (or not accomplished) in the past. To be relevant, the information, materials, and services must synchronize the assets of a business to what the market really wants now and in the future; no more, no less.

Thus, we have reached the core problem plaguing most organizations today: the inability to generate and use relevant information to effectively manage variability to then protect and promote flow and drive ROI performance. Without addressing this core problem, there can be no systemic solution for flow.15 Figure 1-10 shows the core problem area of the equation versus the area associated with Plossl’s Law as first stated.


FIGURE 1-10 Core problem area of the equation16

Having visibility to the right information is a prerequisite to effectively managing variability and ensuring the flow of the right materials at the right time. With this in mind, Plossl’s Law now must be amended to:

All benefits will be directly related to the speed of flow of relevant information, materials, and services.

A New Appreciation for the Bullwhip Effect

But this core problem is not confined to individual organizations. As discussed previously, complex systems (organizations) interact collectively with other complex systems to create an even larger complex system. What is happening at this higher level?

There is a phenomenon involving the stated core problem that dominates most supply chains and complex systems. This phenomenon is called the “bullwhip effect.” The fourteenth edition of the APICS dictionary defines the bullwhip effect as:

“An extreme change in the supply position upstream in a supply chain generated by a small change in demand downstream in the supply chain. Inventory can quickly move from being backordered to being excess. This is caused by the serial nature of communicating orders up the chain with the inherent transportation delays of moving product down the chain. The bullwhip can be eliminated by synchronizing the supply chain.”17

A massive amount of research and literature has been devoted to the phenomenon known as the bullwhip effect, starting with Jay Forrester in 1961. However, very little, if any, of that body of knowledge has been devoted specifically to its bidirectional nature. Most of the research has been dedicated to understanding how and why demand signal distortion occurs and how to potentially fix it through better forecasting algorithms and tighter synchronization of the supply chain.

The bullwhip is really the systematic and bidirectional breakdown of information and materials in a supply chain. Figure 1-11 is a graphical depiction of the bullwhip effect. Distortions to relevant information go up the chain, growing in size and causing wider and wider oscillations both in terms of quantity and timing requirements. The wavy arrow moving from right to left represents that distortion to relevant information in the supply chain. The arrow wave grows in amplitude in order to depict that the farther up the chain you go, the more disconnected the information becomes from the origin of the signal as signal distortion is transferred and amplified at each connection point.


FIGURE 1-11 Illustrating the bidirectional bullwhip effect

Distortions to relevant materials come down the chain as delays and shortages accumulate. The more connections that exist, the more pronounced the delays and shortages. The wavy arrow from left to right depicts that distortion. Lead times expand, shortages are more frequent, expedites are common, and flow breaks down.
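The upstream half of the bullwhip can be illustrated with a toy model, a sketch of our own construction (not from the book, and far simpler than a real supply chain). Each tier sees only the orders placed by the tier below it, naively forecasts demand as the latest order received, and orders enough to cover that demand plus the shift in its restocking target. A single modest step change in consumer demand then amplifies at every connection point up the chain, and one tier even swings from a spike to a zero-order period, the backorder-to-excess whipsaw the dictionary definition describes.

```python
# A toy illustrative model of upstream demand-signal amplification.
def propagate(orders_seen):
    """Orders a tier places, given the stream of orders it receives from below."""
    placed, prev_forecast = [], orders_seen[0]
    for demand in orders_seen:
        forecast = demand                             # naive forecast: latest order
        order = demand + (forecast - prev_forecast)   # cover demand + restock shift
        placed.append(max(order, 0))                  # cannot place negative orders
        prev_forecast = forecast
    return placed

consumer = [10, 10, 10, 14, 14, 14]   # one modest step up in end-customer demand
retailer = propagate(consumer)        # peak order: 18
wholesaler = propagate(retailer)      # peak order: 26
factory = propagate(wholesaler)       # peak order: 42, then a zero-order period

print(max(retailer), max(wholesaler), max(factory))  # 18 26 42
```

A 40 percent bump in end demand becomes a more than fourfold spike three connections upstream, followed by a collapse to nothing, even though every tier behaved “rationally” with the only information it had.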

Summary

Creating visibility to relevant information and managing the risks to coherence and resiliency is no trivial task, but it is the only path to sustainable organizational success as measured by ROI. The more relevant information our organization has, the more immediate and enduring success it will have—it is really that elementary. Intuitively, people in organizations know that they must find and use relevant information for decision making. Yet many of those people recognize that their information systems are not giving them the visibility that they need. What will it take to get more relevant information throughout the organization? This question is explored in Chapter 2.
