The Essentials of Modern Software Engineering, by Ivar Jacobson


2 Software Engineering Methods and Practices

In this chapter we present how the way of working to develop software is organized, and to some extent what additional means are needed (e.g., notations for specifications). In particular, we

• describe the challenges in software engineering covering a wide range of aspects like how to proceed step by step, involve people, methods and practices;

• outline various key concepts of some commonly used software engineering methods created during the last four decades (i.e., waterfall methods, iterative lifecycle methods, structured methods, component methods, agile methods); and

• describe the motivation behind the initiative to create the Essence standard as a basic and extendable foundation for software engineering.

This will also take the reader briefly through the development of software engineering.

2.1 Software Engineering Challenges

From Smith’s specific single-person view of software engineering, we move to a larger worldview in this chapter and the next. We will return to Smith’s journey in Chapter 4. From 2012 to 2014, IEEE Spectrum published a series of blog posts on IT hiccups.1 There are all kinds of bloopers and blunders occurring in all kinds of industries, a few of which we outline here.

• According to the New Zealand Herald, the country’s police force in February 2014 apologized for mailing over 20,000 traffic citations to the wrong drivers. Apparently, the NZ Transport Agency, which is responsible for automatically updating drivers’ details and sending them to the police force, failed to do so from October 22 to December 16, 2013. As a result, “people who had sold their vehicles during the two-month period … were then incorrectly ticketed for offenses incurred by the new owners or others driving the vehicles.” In New Zealand, unlike the U.S., license plates generally stay on a vehicle for its life.2

• The Wisconsin State Journal reported in February 2013 that “glitches” with the University of Wisconsin’s controversial payroll and benefits system had resulted in US $1.1 million in improper payments which the university would likely end up having to absorb. This was after a news report in the previous month indicated that problems with the University of Wisconsin’s payroll system had resulted in $33 million in improper payments being made over the past two years.3

Problems like these may seem amusing, but they are no laughing matter if you happen to be one of the victims. What is more surprising is that such failures can be prevented, yet they occur almost inevitably.

2.2 The Rise of Software Engineering Methods and Practices

Just as we have compressed Smith’s journey from a young student to a seasoned software engineer in a few paragraphs, we will attempt to compress some 50 years of software engineering into a few paragraphs. We will do that with a particular perspective in mind: what resulted in the development of a common ground in software engineering—the Essence standard. A more general description of the history is available in Appendix A.

However, the complexity of software programs did not seem to be the only root cause of the so-called “software crisis.” Software endeavors and product development are not just about programming; they are also about many other things, such as understanding what to program, how to plan the work, and how to lead the people and get them to communicate and collaborate effectively.

For the purpose of this introductory discussion, we define a method as providing guidance for all the things you need to do when developing and sustaining software. For commercial products “all the things” are a lot. You need to work with clients and users to come up with “the what” the system is going to do for its users—the requirements. Further, you need to design, code, and test. However, you also need to set up a team and get them up to speed, they need to be assigned work, and they need a way of working.

These things are in themselves “mini-methods” or what many people today would call practices. There are solution-related “practices,” such as work with requirements, work with code, and conduct testing. There are endeavor-related practices, such as setting up a collaborative team and an efficient endeavor as well as improving capability of the people and collecting metrics. There are of course customer-related practices, such as making sure that what is built is what the customers really want.

The interesting discovery we made more than a decade ago was that even if the number of methods in the world was huge, it seemed that all these methods were just compositions of a much smaller collection of practices, maybe a few hundred of such practices in total. Practices are what we call reusable because they can be used over and over again to build different methods.

To understand how we as a software engineering community have improved our knowledge in software engineering, we provide a description of historical developments. Our purpose with this brief history is to make it easier for you to understand why Essence was developed.

2.2.1 There Are Lifecycles

From the ad hoc approach used in the early years of computing came the waterfall method in the 1960s; actually, it was not just one single method—it was a whole class of methods. The waterfall methods describe a software engineering project as going through a number of phases such as Requirements, Design, Implementation (Coding), and Verification (i.e., testing and bug-fixing) (see Figure 2.1).

While the waterfall methods helped to bring some discipline to software engineering, many people tried to follow the model literally, which caused serious problems especially on large complex efforts. This was because software engineering is not as simple as this linear representation indicates.

A way to describe the waterfall methods is this: What do you have once you think you have completed the requirements? Something written on “paper.” (You may have used a tool and created an electronic version of the “paper,” but the point is that it is just text and pictures.) But since the requirements have not been used, do you know for sure at this point whether they are the right ones? No, you don’t. As soon as people start to use the product developed from your requirements, they almost always want to change it.


Figure 2.1 Waterfall lifecycle.

Similarly, what do you have after you have completed your design? More “paper” describing what you think needs to be programmed. But are you certain that it is what your customer really intended? No, you are not. However, you can easily claim you are on schedule, because you can simply write less, and at lower quality.

Even after you have programmed according to the design, you still don’t know for sure. None of the activities you have conducted prove that what you did is correct.

Now you may feel you have done 80%. The only thing you have left is to test. At this point the endeavor almost always falls apart, because what you have to test is just too big to deal with as one piece of work. It is the code coming from all the requirements. You thought you had 20% left, but now you feel you may have 80% left. This is a common, well-known problem with waterfall methods.

There are some lessons learned. Believing you can specify all requirements upfront is just a myth in the vast majority of situations today. This lesson has led to the popularity of more iterative lifecycle methods. Iterating means you can specify some requirements and build something meeting those requirements, but as soon as you start to use what you have built, you will know how to make it a bit better. Then you can specify some more requirements, build and test these, until you have something that you feel can be released. But to gain confidence you need to involve your users in each iteration to make sure what you have provides value. These lessons gave rise at the end of the 1980s to a new lifecycle approach called iterative development, a lifecycle adopted by the agile paradigm now in fashion (see Figure 2.2).


Figure 2.2 Iterative lifecycle.
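The iterative lifecycle of Figure 2.2 can be illustrated as a simple loop. The Python sketch below is our own, not from the book; `build`, `test`, and `get_feedback` are hypothetical stand-ins for whatever a real team does in an iteration.

```python
# A minimal sketch of an iterative lifecycle. Each iteration picks a few
# requirements, builds them, tests them, and gathers user feedback that
# may add or reshape future requirements.

def run_iterations(backlog, build, test, get_feedback, max_iterations=10):
    """Run build/test/feedback cycles until the backlog is empty."""
    product = []  # the growing set of implemented features
    for _ in range(max_iterations):
        if not backlog:
            break
        batch = backlog[:2]          # specify *some* requirements, not all
        backlog = backlog[2:]
        increment = [build(req) for req in batch]
        assert all(test(item) for item in increment)
        product.extend(increment)
        # Users try the product; their feedback feeds the next iteration.
        backlog.extend(get_feedback(product))
    return product
```

The point of the loop is the feedback edge: unlike the waterfall, each pass produces something users can react to before the next batch of requirements is fixed.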

New practices came into fashion. The old project management practices fell out of fashion and practices relying on the iterative metaphor became popular. The most prominent practice was Scrum, which started to become popular at the end of the 1990s and still is very popular. We will discuss this more deeply in Part III of the book.

2.2.2 There Are Technical Practices

Since the early days of software development, we have struggled with how to do the right things in our projects. Originally, we struggled with programming because writing code was what we obviously had to do. The other things we needed to do were ad hoc. We had no real guidelines for how to do requirements, testing, configuration management, project management, and many of these other important things.

Later new trends became popular.

2.2.2.1 The Structured Methods Era

In the late 1960s to mid-1980s, the most popular methods separated the software to be developed into the functions to be executed and the data that the functions would operate upon: the functions living in a program store and the data living in a data store. These methods were not farfetched because computers at that time had a program store, for the functions translated to code, and a data store. We will just mention two of the most popular methods at that time: SADT (Structured Analysis and Design Technique) and SA/SD (Structured Analysis/Structured Design). As a student, you really don’t need to learn anything more about these methods. They were used for all kinds of software engineering. They were not the only methods in existence. There were a large number of published methods available and around each method there were people strongly defending it. It was at this time in the history of software engineering that the methods war started. And, unfortunately, it has not yet finished!


Figure 2.3 SADT basis element.

Every method brought with it a large number of practices such as requirements, design, test, defect management, and the list goes on.

Each had its own blueprint notation or diagrams to describe the software from different viewpoints and at different levels of abstraction (for example, see Figure 2.3 on SADT). Tools were built to help people use the notation and to keep track of what they were doing. Some of these practices and tools were quite sophisticated. The value of these approaches was, of course, that what was designed was close to the realization—to the machine: you wrote the program separate from the way you designed your data. The problems were that programs and data are very interconnected, and many programs could access and change the same data. Although many successful systems were developed applying this approach, there were far more failures. The systems were hard to develop and even harder to change safely, and that became the Achilles’ heel for this generation of methods.
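To make that Achilles’ heel concrete, here is a toy illustration (ours, not from the book) of the structured-era separation: functions live apart from the data they operate on, and every function reaches into the same shared data store.

```python
# One shared data store, many functions reading and writing it directly,
# in the spirit of the structured methods described above.
data_store = {"customers": [{"name": "Ada", "balance": 100}]}

def charge(name, amount):
    # Reaches directly into the shared store; any change to the record
    # layout (e.g., renaming "balance") silently breaks this function.
    for record in data_store["customers"]:
        if record["name"] == name:
            record["balance"] -= amount

def report():
    # Another function coupled to the very same layout.
    return {r["name"]: r["balance"] for r in data_store["customers"]}
```

Renaming a field for one function’s benefit breaks every other function that touches the store, which is exactly why such systems were hard to change safely.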

2.2.2.2 The Component Methods Era

The next method paradigm shift4 came in the early 1980s and had its high season until the beginning of the 2000s.

In simple terms, a software system was no longer seen as having two major parts: functions and data. Instead, a system was a set of interacting elements—components (see also Sidebar 2.1). Each component had an interface connecting it with other components, and over this interface messages were communicated. Systems were developed by breaking them down into components, which collaborated with one another to provide for implementation of the requirements of the system. What was inside a component was less important as long as it provided the interfaces needed to its surrounding components. Inside a component could be program and data, or classes and objects, scripts, or old code (often called legacy code) developed many years ago. Components are still the dominating metaphor behind most modern methods. An interesting development of components that has become very popular is microservices, which we will discuss in Part III.
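The component idea can be sketched in a few lines of Python (our illustration; all names are hypothetical): what is inside each component is hidden behind a shared interface, so new code and wrapped legacy code are interchangeable.

```python
from abc import ABC, abstractmethod

# Hypothetical component interface: what matters to the rest of the system
# is the interface, not what sits behind it.
class PaymentComponent(ABC):
    @abstractmethod
    def pay(self, amount: int) -> str: ...

class ModernPayment(PaymentComponent):
    def pay(self, amount: int) -> str:
        return f"paid {amount} via new service"

class LegacyPaymentAdapter(PaymentComponent):
    def pay(self, amount: int) -> str:
        # Imagine this wrapping decades-old code behind the same interface.
        return f"paid {amount} via legacy code"

def checkout(component: PaymentComponent, amount: int) -> str:
    # The caller depends only on the interface, never on what is inside.
    return component.pay(amount)
```

Because `checkout` sees only `PaymentComponent`, either implementation can be swapped in without touching the caller, which is the essence of the component metaphor.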

Sidebar 2.1 Paradigm Shift in Detail

In more detail, this paradigm shift was inspired by a new programming metaphor—object-oriented programming—and the trigger was the new programming language Smalltalk. However, the key ideas behind Smalltalk were derived from an earlier programming language, Simula 67, that was released in 1967. Smalltalk and Simula 67 were fundamentally different from previous generations of programming languages in that the whole software system was a set of classes embracing its own data, instead of programs (subroutines, procedures, etc.) addressing data types in some data store. Execution of the system was carried out through the creation of objects using the classes as templates, and these objects interacted with one another through exchanging messages. This was in sharp contrast to the previous model in which a process was created when the system was triggered, and this process executed the code line by line, accessing and manipulating the concrete data in the data store. A decade later, around 1990, a complement to the idea of objects received widespread acceptance inspired, in particular, by Microsoft. We got components.
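The contrast the sidebar draws can also be shown in miniature. In the object metaphor, data lives inside the object and callers interact only by sending messages (method calls); the example below is our own illustration, not code from Smalltalk or Simula 67.

```python
# Sketch of the object metaphor: the class embraces its own data, and the
# only way in is through the messages (methods) it understands.
class Account:
    def __init__(self, owner):
        self._owner = owner     # data held inside the object
        self._balance = 0

    def deposit(self, amount):  # a "message" the object responds to
        self._balance += amount

    def balance(self):
        return self._balance
```

Compare this with the procedural model, where a process would walk through code line by line, manipulating balance records sitting in a separate data store.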

With components, a completely new family of methods evolved. The old methods with their practices were considered to be out of fashion and were discarded. What started to evolve were in many cases similar practices with some significant differences but with new terminology. In the early 1990s, about 30 different component methods were published. They had a lot in common, but it was almost impossible to find the commonalities since each method author created his/her own terminology.

In the second half of the 1990s, OMG (a standards body called Object Management Group) felt that it was time to at least standardize how to represent software drawings, namely the notations used to develop software. This led to a task force being created to drive the development of a new standard. The work resulted in the Unified Modeling Language (UML; see Figure 2.4), which will be used later in the book. This development basically killed all methods other than the Unified Process (marketed under the name Rational Unified Process (RUP)). The Unified Process dominated the software engineering world around the year 2000. Again, a sad step, because many of the other methods had very interesting and valuable practices that could have been made available in addition to some of the Unified Process practices. However, the Unified Process came into fashion, and everything else was considered out of fashion and more or less thrown out.


Figure 2.4 A diagram (in fact a Use-Case diagram) from the Unified Modeling Language standard.

Over the years, many more technical practices than the ones supported by the 30 component methods arrived. More advanced architectural practices, or sets of practices, evolved, e.g., for enterprise architecture (EA), service-oriented architecture (SOA), product-line architecture (PLA), and recently architecture practices for big data, the cloud, mobile internet, and the internet of things (IoT). At the moment, it is useful to see these practices as pointers to areas of software engineering interest at a high level of abstraction: suffice it to say that EA was about large information systems for, e.g., the finance industry; SOA organized the software as a set of possibly optional service packages; and PLA was the counterpart of EA but for product companies, e.g., in the telecom or defense industry. More important is to know that, again, new methodologies sprang up like mushrooms around each of these technology trends. With each new trend, method authors started over again and reinvented the wheel. Instead of “standing on the shoulders of giants,”5 they preferred to stand on other authors’ toes. They redefined already adopted terminology, and the methods war just continued.

2.2.2.3 The Agile Methods Era

The agile movement—often referred to just as agile—is now the most popular trend, embraced by the whole world. Throughout the history of software engineering, experts have always been trying to improve the way software is developed. The goal has been to compress timescales to meet ever-changing business demands and realities. If agile were to have a starting date, one could pinpoint it to the time when 17 renowned industry experts came together and penned the words of the agile manifesto. We will present the manifesto, and how Essence contributes to agile, in Part IV. But for now, it suffices to say that agile involves a set of technical and people-related practices. Most important is that agile emphasizes an innovative mindset, such that the agile movement continuously evolves its practices.

Agile has evolved the technical practices utilized with components. However, its success did not come from introducing many new technical practices, even if some new practices, such as continuous integration, backlog-driven development, and refactoring, became popular with agile. Continuous integration means that developers integrate their new code with the existing code base, and verify it, several times a day. Backlog-driven development means that the team keeps a backlog of requirement items to work with in coming iterations. We will discuss this practice in more detail when we discuss Scrum in Part III. Refactoring means continuously improving the structure of existing code, iteration by iteration.

Rather, agile simplified what was already in use, supporting an iterative style of working and delivering releasable software over many smaller iterations, or sprints as Scrum calls them.
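Of the practices just mentioned, refactoring is the easiest to show in code. The sketch below (ours, with invented names) restructures a small function without changing its observable behavior: the formatting logic is extracted so each piece is easier to change.

```python
def report_before(items):
    # Original version: filtering and formatting tangled in one loop.
    lines = []
    for name, qty in items:
        if qty > 0:
            lines.append(name.strip().title() + ": " + str(qty))
    return "\n".join(lines)

def _format_line(name, qty):
    # Extracted helper: one place to change the line format.
    return f"{name.strip().title()}: {qty}"

def report_after(items):
    # Refactored version: same observable behavior, smaller pieces.
    return "\n".join(_format_line(n, q) for n, q in items if q > 0)
```

A test asserting that the two versions agree is the safety net that makes refactoring iteration by iteration practical.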

2.2.3 There Are People Practices

As strange as it may sound, the methods we employed in the early days did not pay much attention to the human factors. Everyone understood of course that software was developed by people, but very few books or papers were written about how to get people motivated and empowered in developing great software. The most successful method books were quite silent on the topic. It was basically assumed that in one way or the other this was the task of management.

However, this assumption changed dramatically with agile methods. Before, there was a high reliance on tools, so that code could be automatically generated from design documents such as UML diagrams. Accordingly, the role of programmers was downgraded, and other roles, such as project managers, analysts, and architects, were more prestigious. With agile methods, programming was reevaluated as a creative job. The programmers, the people who actually created working software, were “promoted,” and coding again became a prestigious task.

With agile many new practices evolved, for instance self-organizing teams, pair programming, and daily standups.

A self-organizing team includes members who are more generalists than specialists—most know how to code even if some are experts. It is like a soccer team—everyone knows how to kick the ball even if some are better at scoring goals and someone else is better at keeping the ball out of the goal.

Pair programming means that two programmers work side-by-side developing the same piece of code. The expectation is that code quality improves and that total cost is reduced. Usually one of the two is more senior than the other, so this is also a way to improve team competency.

Daily standup is a practice intended to reduce impediments that team members have, as well as to retain motivation. Every morning the team meets for 15 minutes to go through each member’s situation: what he/she has done and what he/she will be doing. Any impediments are brought up but not addressed during the meeting; the issues are discussed in separate meetings. This practice is part of Scrum, discussed in Part III.

Given the impact agile has had on the empowerment of programmers, it is easy to understand that agile has become very popular. Moreover, given the positive impact agile has had on our development of software, there is no doubt it has deserved to become the latest paradigm.

2.2.4 Consequences

There is a methods war going on out there. It started 50 years ago, and it still goes on. Jokingly, we can call it the Fifty Years’ War, and there is still no truce. In fact, there are no signs that this will stop by itself.

• With every major paradigm shift, such as the shift from structured methods to component methods and from the latter to agile methods, the industry basically throws out everything it knows about software engineering and starts over with new terminology bearing little relation to the old. Old practices are viewed as irrelevant and new practices are hyped. This transition from the old to the new is extremely costly to the software industry in the form of training, coaching, and change of tooling.

• With every major technical innovation, for instance cloud computing, requiring a new set of practices, the method authors also “reinvent the wheel.” Although the costs are not as huge as in the previous point, since some of the changes are not fundamental across everything we do (it is no paradigm shift) and thus the impact is limited to, for instance, cloud development, there is still foolish waste.

• Within every software engineering trend there are many competing methods. For instance, back as early as 1990 there were about 30 competing object-oriented methods. When this book was written, there were about 10 competing methods on scaling agile to large organizations; some of the most famous ones are Scaled Agile Framework (SAFe), Disciplined Agile Delivery (DAD), Large Scale Scrum (LeSS), and Scaled Professional Scrum (SPS). They typically include some basic widely used practices such as Scrum, user stories or alternatively use cases, and continuous integration, but the method author has “improved” them—sarcastically stated. There is reuse of ideas, but not reuse of original text, so the original inventor of the practice feels he or she has been robbed of his/her work; there is no collaboration between method authors, but instead they are “at war” as competing brands.

Within these famous methods, there are some often useful practices that are specific to each one. The problem is that all these methods are monolithic, not modular, which means that you cannot easily mix and match practices from different methods. If you select one, you are more or less stuck with it. This is not what teams want, and certainly not their companies. It is, of course, what the method authors whose method is selected like, even if such lock-in was never what they intended.

Typically, every recognized method has a founding parent, sometimes more than one. If successful, this parent is raised to guru status. The guru more or less dictates what goes into his/her method. Thus, once you have adopted a method, you get the feeling you are in a method prison controlled by the guru of that method. Ivar Jacobson, together with Philippe Kruchten, was once such a guru governing the Unified Process prison. Jacobson realized that this was “the craziest thing in the world,” a situation unworthy of any industry, particularly one as huge as the software industry. To eradicate such unnecessary method wars and method prisons, the SEMAT (Software Engineering Method and Theory) initiative was founded.

2.3 The SEMAT Initiative

As of the writing of this book there are about 20 million software developers6 in the world and the number is growing year by year. It can be guesstimated that there are over 100,000 different methods to develop software, since basically every team has developed their own way of working even if they didn’t describe it explicitly.

Over time, the number of methods has grown much faster than the number of reusable practices. There is no problem with this. In fact, this is what we want to happen, because we want every team or organization to be able to set up its own method. The problem is that until now we have not had any means to really do that. Until now, creating your own method has meant that method authors reinvent everything they wish to change. This has occurred because we haven’t had a solid common ground that we all agreed upon to express our ideas. We didn’t have a common vocabulary to communicate with one another, and we didn’t have a solid set of reusable practices from which we could start creating our own method.

In 2009, several leaders of the software engineering community came together, initiated by Ivar Jacobson, to discuss the future of software engineering. Through that, the SEMAT (Software Engineering Method and Theory) initiative commenced, with two other leaders as cofounders: Bertrand Meyer and Richard Soley.

The SEMAT call for action in 2009 read as follows.

Software engineering is gravely hampered today by immature practices. Specific problems include:

• The prevalence of fads more typical of fashion industry than of an engineering discipline.

• The lack of a sound, widely accepted theoretical basis.

• The huge number of methods and method variants, with differences little understood and artificially magnified.

• The lack of credible experimental evaluation and validation.

• The split between industry practice and academic research.

We support a process to re-found software engineering based on a solid theory, proven principles, and best practices that:

• Include a kernel of widely agreed elements, extensible for specific uses

• Address both technology and people issues

• Are supported by industry, academia, researchers and users

• Support extension in the face of changing requirements and technology.

This call for action was signed by around 40 thought leaders from most areas of software engineering and computer science; some 20 companies and about 20 universities also signed it, and more than 2,000 individuals have supported it. You should see the “specific problems” identified earlier as evidence that the software world has severe problems. When it comes to the solution “to re-found software engineering,” the key words here are “a kernel of widely agreed elements,” which is the focus of this book.

It was no easy task to get professionals around the world to agree on what software engineering is about, let alone how to do it. It led, of course, to significant controversy. However, the supporters of SEMAT persevered. Even though the world is getting more complex and there is no single answer, there ought to be some common ground—a kernel.

2.4 Essence: The OMG Standard

After several years of hard work, the underlying language and kernel of software engineering were accepted in June 2014 as a standard by the OMG, and the standard was given the name Essence. As is evident from the call for action, the SEMAT leaders realized at the very start that a common ground of software engineering (a kernel) needed to be widely accepted. In 2011, after working together for two years and reaching part of a proposal for a common ground, we evaluated where we were and understood that the best way to get this common ground widely accepted was to establish it as a formal standard from an accredited standards body. The choice fell on OMG. However, it took three more years to get it through the process of standardization. Based upon experience from the users of Essence, it continues to be improved by OMG through a task force assigned to this work.

In the remainder of this part of the book, we will introduce Essence, the key concepts and principles behind it, and its value and use cases. This material is useful for students and professionals alike. Readers interested in learning more should see Jacobson et al. [2012, 2013a, 2013b] and Ng [2014].

What Should You Now Be Able to Accomplish?

After studying this chapter, you should be able to:

• explain the meaning of a method (as providing guidance for all the things you need to do when developing and sustaining software);

• explain the meaning of a practice and its types (i.e., solution-related practices, endeavor-related practices, customer-related practices);

• explain the meaning of waterfall methods and their role in the history of software engineering;

• explain the iterative lifecycle methods, structured methods, component methods, and agile methods, as well as their characteristics;

• give examples of some practices (e.g., self-organizing teams, pair programming, and daily standups as examples of agile practices);

• explain the “method prison” issue discussed in the chapter; and

• explain the SEMAT initiative and the motivation behind the Essence standard.

Again we point to additional reading, exercises, and further material at www.software-engineering-essentialized.com.

1. http://spectrum.ieee.org/riskfactor/computing/it/it-hiccups-of-the-week

2. http://spectrum.ieee.org/riskfactor/computing/it/new-zealand-police-admits-sending-20-000-traffic-tickets-to-the-wrong-motorists

3. http://spectrum.ieee.org/riskfactor/computing/it/it-hiccups-of-the-week-university-of-wisconsin-loses-another-11-million-in-payroll-glitches

4. Wikipedia: “A paradigm shift, as identified by American physicist and philosopher Thomas Kuhn, is a fundamental change in the basic concepts and experimental practices of a scientific discipline.”

5. From Wikipedia: “The metaphor of dwarfs standing on the shoulders of giants … expresses the meaning of ‘discovering truth by building on previous discoveries’.”

6. https://www.infoq.com/news/2014/01/IDC-software-developers
