
Systems Analysis and Design in a Changing World
Chapter 17: Current Trends in System Development

Learning Objectives

After reading this chapter, you should be able to:
* Describe rapid application development and the prototyping, spiral, eXtreme Programming, and Unified Process development approaches
* Compare the prototyping, spiral, eXtreme Programming, and Unified Process development approaches with more traditional development approaches
* Choose an appropriate development approach to match project characteristics
* Implement a risk management process
* Describe rapid development techniques, including joint application design, tool-based development, and code reuse
* Describe components, the process by which they are developed and deployed, and their impact on the systems development life cycle

Chapter Outline

* Rapid Application Development
* The Prototyping Approach to Development
* The Spiral Approach to Development
* Extreme Programming
* The Unified Process
* Rapid Development Techniques
* Components

EUROBANC: FASTER, BETTER, AND CHEAPER SYSTEM DEVELOPMENT?

Erik Maastricht waited nervously, lost in thought, outside the office of Andrea Ramos, CEO of EuroBanc. He was thinking, “I wonder if today's the day I'll be cleaning out my desk?” Just then, a voice startled him back to reality. “Ms. Ramos will see you now.”

As he entered the office, Andrea rose and walked over to shake Erik's hand and said, “Come in. Make yourself comfortable.” The serious look in Andrea's eyes only heightened Erik's apprehension, so he sat quickly and quietly.

“I'll get right to the point,” Andrea said. “You were at the last board meeting, and you know the importance of our next few IT projects. The board wants them all deployed within 12 months. The pace of our business gets faster every month, and those projects are critical for catching up to our competitors and …, ” Andrea paused and looked Erik squarely in the eyes, “completing our rather late start into 21st century e-business.”

Andrea continued, “What you're probably not aware of is how close you came to losing your job during the closed part of the meeting. You had exactly one supporter in the room—me! And it was difficult for me to defend your recent performance in delivering new systems and upgrades on time and within budget.”

Erik waited a moment, then said, “I know that you've been unhappy with the delays in some of our recent projects, and I suspected that a few board members had me lined up in their sights.”

Andrea replied, “The board members quiz me about IT project progress at nearly every meeting. Two of the members presented an analysis of our last six big projects that showed average budget overruns of 25 percent and late deliveries ranging from 4 to 12 months. How can I defend a record like that?”

Erik swallowed hard before speaking. “I'm a firm believer in the old engineer's saying—‘faster, better, cheaper, pick two.' You and the board have always been generous with resources, so money isn't the problem. But with the last few projects, I've found myself squarely faced with the trade-off between system quality and development speed. I think you'll agree that the systems I've delivered have consistently been of good quality.”

“I know that the pyramids weren't built in a day, but I also know that construction methods have advanced considerably since then,” replied Andrea. “I know very little about building systems, but I do know that other companies, including our biggest competitors, regularly introduce new systems faster than we do. They've been gaining market share and improving their stock price at our expense. I'm worried that a few more late deliveries will put us so far behind them we'll never catch up.”

Erik replied, “I've been experimenting with rapid development and deployment techniques on some of our smaller projects. I guess it's time to stop experimenting and use those approaches in all of our projects.”

With an exasperated wave of her hand, Andrea said, “I don't care if you change your development tools, hire Martian consultants, or figure out a way to alter the space-time continuum. Just speed up those projects, or you'll have one less friend at the next board meeting.”
Overview

The pace of change in business and other organizations accelerated throughout the 20th century, and no one expects that pace to slow during the 21st century. Rapid change in both business practices and information technology has created a significant management problem: how to quickly develop and deploy information systems that implement the latest business practices and employ cutting-edge technologies. Yet changing technology is a two-edged sword. One edge provides system developers with new tools and techniques to improve their own productivity. The other increases users' expectations for system functionality. Organizations that can quickly develop systems to exploit new technologies thrive and prosper; those that can't do so stagnate and flounder.

The tools and techniques described in earlier chapters provide a foundation for rapid system development, but additional tools and techniques can be applied to speed up the process. This chapter presents a small but important subset of those tools and techniques. The first half of the chapter concentrates on rapid development techniques. The chapter begins with a discussion of what rapid application development is and isn't. We then describe alternative approaches to system development, followed by a set of techniques that can be used with the spiral and other life cycles to increase development speed or reduce the risk of schedule overruns.

The latter half of the chapter focuses on two technologies—object frameworks and components. Both technologies can significantly increase development speed through code reuse and flexible software construction. We will describe the technologies and ways they can be successfully integrated into the systems development life cycle (SDLC) to speed software development.

Think of the concepts covered in this chapter as a set of ingredients for fine cuisine at a five-star restaurant. Every menu item is a unique combination of ingredients, skillfully combined by the chef. As the accompanying RMO progress memo illustrates, alternative development approaches, tools, and techniques are a set of ingredients that may or may not be applicable to a particular project. A project leader must choose those that best match the project's characteristics and skillfully apply them to rapidly produce a high-quality system.

Rapid Application Development

Rapid application development (RAD) is an overused and poorly understood term. RAD is something that most software developers claim they do but cannot precisely define. Often, RAD has been equated with tools and techniques such as prototyping, fourth-generation programming languages, CASE tools, and object-oriented (OO) analysis, design, and development. Methodologies have also been developed that are either called RAD or claim to result in RAD. Tool vendors frequently include the term RAD in their product descriptions. Given the blizzard of competing and confusing claims, it's little wonder that few people can precisely define what RAD is.

For the moment, we'll avoid defining the term. Instead, we'll first describe some of the factors that influence development speed. After we have provided that background, we'll return to a discussion of RAD.
Reasons for Slow Development

The reasons for slow development are far too numerous to list or describe exhaustively in this text. Steve McConnell, in his book Rapid Development: Taming Wild Software Schedules, provides many pages of reasons. This chapter focuses on three broad categories that have delayed many a project: rework, shifting requirements, and inadequate or inappropriate tools and techniques. The following sections briefly describe the problem areas. Methods of addressing the problems are discussed later in the chapter.
Rework

Think of the process of building a house. A foundation is laid, the house is framed, the roof and exterior walls are covered, windows and doors are installed, interior mechanical systems are installed, interior fixtures are installed, and finally finishing touches such as paint and flooring are added. Now imagine that after each stage is initially completed, the builder decides that the construction is poor quality. So the work is ripped out and done over. The result is that each construction stage is performed twice and the project schedule doubles.

This scenario may sound contrived, but similar scenarios are relatively common in system development. It is not uncommon for software construction tasks to be performed multiple times to correct earlier problems. Clearly, one way to avoid lengthening the development schedule is to ensure that each construction activity is performed only once.

You can avoid rework by ensuring that:

* The right software (and only the right software) is constructed or procured.
* The development process always produces software that meets minimum quality standards.

The only way to ensure that the right software is built is to be certain that requirements and overall design constraints are fully known before design and construction begin. There are many ways to ensure that construction always produces high-quality results, such as fully specifying important design parameters before beginning construction, assigning well-trained and motivated people to construction tasks, and providing system builders with the right tools for the job.
Shifting Requirements

One reason that many projects fall behind schedule is that requirements change during the project. Changes to requirements necessitate corresponding changes to design and construction. The later in the project a change occurs, the more costly it is to incorporate it into the system. For example, changes to a house design (such as enlarging a room and putting in additional windows) before details are added require relatively little effort. Changes after detailed design require drawing up new blueprints and modifying the materials list and construction schedule. Changes during construction usually require undoing some already completed construction (that is, rework).

Changes to software requirements are also subject to increasing costs as development progresses. A commonly used estimate is that a change that could be implemented for $1 during analysis costs $10 during architectural design, $100 during detailed design, $1,000 during coding and testing, and $10,000 after the system is operational (Figure 17-1).

Figure 17-1 The cost of implementing a requirements change increases in each life cycle phase.

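To see how this rule of thumb compounds, consider the following sketch. It simply encodes the chapter's $1-to-$10,000 estimates as multipliers; the function and dollar figures are illustrative, not an industry-standard model:

```python
# Rough cost-of-change estimates from the text: each later phase
# multiplies the cost of incorporating a change by roughly ten.
COST_BY_PHASE = {
    "analysis": 1,
    "architectural design": 10,
    "detailed design": 100,
    "coding and testing": 1_000,
    "operation": 10_000,
}

def change_cost(phase: str, analysis_cost: float = 1.0) -> float:
    """Estimated cost of a change caught in `phase`, given what it
    would have cost if it had been caught during analysis."""
    return analysis_cost * COST_BY_PHASE[phase]

# A change that would have cost $50 to make during analysis:
for phase in COST_BY_PHASE:
    print(f"{phase:>20}: ${change_cost(phase, 50):,.0f}")
```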

Some shift in requirements is to be expected in most software development projects. Users aren't always sure what they want at the beginning of the project. Also, a rapidly changing environment may necessitate changes before the system is fully developed. Ignoring important requirements changes only guarantees that the system will not meet the users' needs when it's delivered. To avoid schedule extensions while satisfying user requirements, a developer should anticipate and incorporate changes into the development process.
Tools and Techniques

No one set of tools and techniques is best suited to all system development projects. Different system requirements, development methodologies, and target operating environments may require different tools for analysis, design, and construction. For example, programming languages such as assembly and C are typically used for systems that require maximum execution speed. Object-oriented development and programming techniques are generally best for systems that will evolve over relatively long periods of time (such as many business applications).

Using the wrong tool or technique for a given project can reduce quality, increase development time, or both. Building a house with power tools is fast, and the result is generally high quality. Building a house with nothing more than hammers and hand saws will take much longer and probably result in a lower-quality product. Finishing a 100-story building on schedule requires not only the right tools and skills but also efficient construction and project management techniques. The parallels to software development are obvious. A development team can avoid slow development and poor quality by matching tools and techniques to the project at hand.
What Is RAD?

The time required to develop a system depends on a number of factors, including user requirements, budget, development approach, development tools and techniques, the developer's experience with similar projects, and management of the development process. Assume that both user requirements and budget are fixed (they usually are). Assume further that the budget is sufficient to buy the right tools and hire a well-trained and experienced staff. Given these assumptions, the only variables that affect development speed are the development approach, techniques, and management. Choosing the right approach and techniques and managing the development process efficiently and effectively will result in the shortest possible schedule.

Rapid application development (RAD) is a collection of development approaches, techniques, tools, and technologies, each of which has been proven to shorten development schedules under some conditions. RAD is not a silver bullet: no universal RAD approach shortens every project's schedule, and no single technique, tool, or technology fits every project or is sufficient by itself.
rapid application development (RAD)

a collection of development approaches, techniques, tools, and technologies, each of which has been proven to shorten development schedules under some conditions

The key to shortening development schedules is to identify the overall development approach and the set of techniques, tools, and technologies most suitable to that approach and the specific project. For some projects, RAD may require an unconventional development approach and a host of newer techniques. For other projects, a conventional development approach supplemented by a few specific techniques and tools will yield the shortest development schedule.

The remainder of this chapter explores a diverse set of RAD concepts. As you read the material, keep in mind that the approaches, tools, techniques, and technologies need to be mixed and matched to specific projects. They are all related by their ability to speed development under some circumstances, but they are a set of alternatives that must be individually evaluated for their effectiveness with a given development project.
RAD in Perspective

Previous chapters presented a relatively conventional approach to system development. We divided the SDLC into phases, and each phase into a set of activities. We described a linear flow of activities and a series of models, moving from requirements, to architectural design, and then to detailed design. Presenting development activities in sequential order is a logical way to teach systems analysis and design. But it isn't necessarily the best way to do analysis and design for a particular project.

The conventional approach to system development tends to be sequential. In particular, it stresses completely defining requirements before design and making major design decisions before implementation begins. The conventional approach does provide some opportunities for parallel activities, particularly within major life cycle phases, but significant use of parallelism and iteration is not the norm in conventional development.

The oldest approaches to software development were purely sequential. They had their roots in an era when:

* Systems were relatively simple and independent of one another.
* Computer hardware resources were very expensive.
* Software development tools were relatively primitive.

A sequential approach to system development is well suited to such an environment.

Underlying older development approaches are these assumptions: Simple systems are simple to analyze and model, and it is reasonable to assume that requirements can be captured fully and accurately before any design or construction activity begins. Complete analysis prior to design and construction is not only reasonable but an economic necessity. The high cost of computer resources and the labor-intensive nature of programming make building the “wrong” system a very expensive and time-consuming mistake. Complete design prior to construction is also necessary. Issues of technical feasibility and operational efficiency are paramount when computing resources are expensive, and those issues are best addressed by designing the entire system all at once.

Now consider the current validity of the assumptions just listed. Information systems today often consist of millions of lines of code and are interconnected with many other systems. Computer power has doubled every two to three years at no increase in cost. Labor costs have risen, and there is a shortage of skilled personnel in most areas of software development. Software development tools leverage greater hardware power to allow programs to be developed quickly and cheaply. Clearly, none of the original assumptions that motivated sequential system development apply today. But does that mean that sequential development should never be used? The answer is no, but the reasons are complex and interdependent.

As described earlier, shifting requirements are common because of changes in the external environment and uncertainties about requirements at the beginning of a project. For example, consider the following scenario. A competitor has successfully implemented a new customer support system based on new technology. Another firm responds by initiating a project to develop a similar system, but it has no experience with that type of system or with the underlying technology.

In this scenario, it isn't reasonable to assume that all system requirements can be specified completely and accurately before design and construction begin. Also, technical, operational, and economic feasibility cannot be completely determined at the beginning of the project. Thus, a sequential development approach is poorly suited to the project. An evolutionary approach—with experimentation and learning—is more appropriate. Such an approach allows requirements to be discovered and gradually refined. It also allows both users and developers to work their way up the learning curve of the new technology gradually.

Now let's revisit an earlier premise—the shortest possible schedule cannot be achieved if software is constructed multiple times. To ensure that you don't build software more than once, you must ensure that you build the right software and that your efforts produce quality products. The first of these requirements is essentially a restatement of the need for complete and accurate analysis and design prior to construction. But how is it possible if requirements are complex and shifting or feasibility is uncertain?

Obviously, you can't avoid all rework. Thus, you must accept some as a consequence of uncertainty about requirements or feasibility, and you must restructure your approach to system development to accommodate these uncertainties. The exact way in which you restructure your approach will depend on:

* Project size
* The degree of (un)certainty of requirements or feasibility at the start of the project
* The expected rate of change in user requirements during the life of the project
* The experience and confidence that developers have in the proposed implementation technology

A large project size, uncertain or shifting requirements, and new technologies are all indicators of a need to depart from sequential development. Projects combining several of these characteristics require the most significant SDLC modifications, as summarized in Figure 17-2.
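The mapping from project characteristics to a development approach can be sketched as simple decision logic. The categories and thresholds below are illustrative assumptions, not values taken from Figure 17-2:

```python
def choose_approach(size: str, requirements_uncertainty: str,
                    technology_experience: str) -> str:
    """Illustrative mapping from project characteristics to a
    development approach; the rules are assumptions for this sketch."""
    risky = (requirements_uncertainty == "high"
             or technology_experience == "low")
    if not risky and size in ("small", "medium"):
        # Low uncertainty: sequential development is still fastest.
        return "sequential (conventional) development"
    if risky and size == "small":
        return "prototyping approach"
    # Large size combined with uncertainty or new technology calls
    # for the most significant SDLC modifications.
    return "spiral or other iterative approach"

print(choose_approach("large", "high", "low"))
```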

This is not to suggest that you completely abandon sequential or conventional development. Sequential or conventional development is still the fastest way to develop software when uncertainty about requirements and feasibility is low. We're also not suggesting that you abandon the techniques described in earlier chapters. All of the techniques described in earlier chapters are useful and efficient for many types of projects (otherwise, we wouldn't have spent hundreds of pages describing them!). But when technology is new or requirements can't be completely specified, then techniques must be reorganized within an iterative development approach.

Figure 17-2 The development approach as a function of project characteristics.

The Prototyping Approach to Development

Prototyping was described earlier in Chapter 4 in the section “Build Prototypes.” Here we briefly review that material and then describe a complete life cycle based on prototyping. Prototyping is the process of building a partially or fully functioning system model that looks and acts as much as possible like a real system. A prototype is a system in its own right, but it may be incomplete or may not exactly match the final set of user requirements.

Two types of prototypes are commonly used in software development. A discovery prototype, used during analysis and occasionally during design, is built to discover or refine system requirements or design parameters and is usually discarded once it has served that purpose. A developmental prototype is a prototype system that is not intended to be thrown away. Developmental prototypes become all or part of the final system. Developmental prototypes are primarily used in iterative software development. Typically, a simple prototype is developed, and more features are added in subsequent development phases. Or, a series of independent prototypes may be developed and later combined to form a complete system.
discovery prototype

a prototype system used to discover or refine system requirements or design parameters
developmental prototype

a prototype system that is iteratively developed until it becomes the final system
Steps in the Prototyping Development Approach

Figure 17-3 shows a prototyping development approach. The planning phase is as described in Chapter 2. The analysis phase can be implemented with traditional or OO techniques, provided that the techniques are compatible with those used for design and implementation. Analysis models may be less detailed than with conventional development approaches, under the assumption that some requirements will be discovered or fully specified by prototyping.
prototyping development approach

a development approach based on iterative refinement of a developmental prototype

Design and implementation start out similarly to conventional design phases but differ substantially after architectural design is completed. Architectural design establishes the overall top-level design parameters and implementation environment. The analyst plans the remaining design and implementation activities by defining a series of prototypes or modifications to them. Typically, the requirements are divided into subsets based on specific functions (for example, order entry, order fulfillment, and purchasing) and architectural system boundaries (such as client and server). These subsets may be further divided into a series of implementation steps, starting with a core set and adding functions in each step.

Figure 17-3 A system development approach based on developmental prototypes.


Once the prototype series has been defined, detailed design and implementation proceed in an iterative fashion. Each cycle begins by defining the requirements and design of a single prototype or prototype version, including aspects of analysis that were left unfinished earlier (such as detailed user-interface specifications). Once the developer determines the requirements and detailed design of a prototype, it is developed, tested, and evaluated. Testing and evaluation determine whether the prototype meets the objectives for the current cycle and whether any additional cycles are required. Information uncovered during testing and evaluation may require the developer to modify the development plan or, in unusual cases, the architectural design.
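The cycle just described amounts to a loop over the planned prototype series. The following sketch is a schematic of that loop; the class, function, and prototype names are placeholders for illustration, not a real development tool:

```python
from dataclasses import dataclass

@dataclass
class Prototype:
    name: str
    objectives_met: bool = False

def design_build_and_test(p: Prototype) -> None:
    # Stands in for detailed design, construction, testing, and
    # evaluation of one prototype in the series.
    print(f"designing, building, and testing: {p.name}")
    p.objectives_met = True  # assume evaluation succeeds here

# Requirements divided into functional subsets, each implemented
# as one prototype in the series (names are illustrative).
plan = [Prototype("order-entry core"),
        Prototype("order fulfillment"),
        Prototype("purchasing")]

for proto in plan:
    design_build_and_test(proto)
    if not proto.objectives_met:
        # Evaluation results may force changes to the development
        # plan or, in unusual cases, to the architectural design.
        print(f"revising development plan after {proto.name}")
```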

Some requirements of the system may be left out of the prototype development cycle and completed after the final cycle. Examples of such requirements include backup and recovery operations and systemwide security measures. Requirements with no interactive interfaces and those with a systemwide impact are generally good candidates for postprototype implementation.
When to Use the Prototyping Approach

The prototyping development approach usually results in faster system development than a conventional development approach under some or all of the following conditions:

* Some portion of the requirements cannot be fully specified independently of architectural or detailed design.
* Technical feasibility for some system functions is unknown or uncertain.
* Prototype development tools are powerful enough to create a fully functional system.

As discussed in Chapter 4, prototyping is a good means of solidifying uncertain requirements. Users can generally provide better feedback about requirements when examining a prototype than when examining graphical or textual models. A prototyping development approach allows the full specification of some requirements to be deferred and combined with design and implementation. Requirements that take longer to specify than others can thus be prototyped so that they don't delay the entire project. However, a prototyping approach does work best when most of the requirements are understood in advance. If significant uncertainty exists, other techniques and development approaches may be better for the project.

A prototyping development approach also works well to ascertain the technical feasibility of a design. Development and testing of early prototypes help sort out problems in design or implementation. Unsuccessful attempts are simply discarded, and the cycle is retried with a different set of parameters. Once success is achieved, the developer can revisit the architectural design and revise the development plan of subsequent prototypes. As with requirements, successful prototyping requires that most aspects of architectural design and technical feasibility be specified in advance. Otherwise, the initial prototype is simply too big and cumbersome for rapid development and testing.

Some functions or requirements may not be suited to prototyping. Examples include systems or portions of a system that are:

* Noninteractive (such as a program that automatically generates orders to suppliers)
* Internally complex (such as a module that schedules deliveries for fastest delivery time or minimal cost using a complex algorithm)
* Subject to stringent performance or security requirements (such as a program that generates thousands of electronic payments per hour)

Noninteractive programs and systems exhibit little observable behavior to test directly or to validate. Thus, a developer can derive little value from prototyping their requirements. Internally complex programs or systems often have relatively well-defined processing requirements that can be formally stated (for example, mathematically). Such requirements are more easily implemented by using a conventional development approach based on structured or OO requirements and design models. Iterative development actually wastes development time for those systems because the repeated design, testing, and evaluation are unnecessary. Finally, software with stringent performance requirements must be constructed with tools that are optimized to produce efficient executable code. Unfortunately, such tools are often not optimal for development speed.
Prototyping Tool Requirements

Successful prototyping requires system developers to employ tools with power, flexibility, and developmental efficiency. Many of the modern “visual” programming tools (such as Microsoft Visual Studio .NET and PowerBuilder) satisfy these requirements. Many more specialized tools (such as Oracle Forms) satisfy the requirements for specialized applications and technical environments.

The key requirement for a successful prototyping tool is development speed. The ability to build and examine many prototypes is crucial to elicit complete and accurate user requirements and to determine technical feasibility through experimentation. The development tools must make it possible to construct, modify, or augment a prototype in a matter of minutes, hours, or at most a few days. Prototyping tool requirements are sometimes summarized as the FPF principle: make it Functional, make it Pretty, and make it Fast.

Flexibility and power are necessary for rapid development. Successful prototyping tools employ a highly interactive approach to application development. Other typical techniques and capabilities include:

* WYSIWYG (what you see is what you get) development of user-interface components
* Generation of complete programs, program skeletons, and database schema from graphical models or application templates
* Rapid customization of software libraries or components
* Sophisticated error-checking and debugging capabilities

Figure 17-4 shows an example of WYSIWYG development of a user-interface prototype for the RMO customer support system. The dialog box appears exactly as it would when displayed by an application program. To add user-interface components to the dialog box, you drag templates from the tool set on the left. You can customize features by clicking on a component and altering values in its properties list. To edit the underlying application program code, you can double-click the component and edit the statements in the pop-up code editor window that appears.
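A similar prototype dialog can also be assembled quickly in code. The following sketch uses Python's standard Tkinter toolkit (an assumption for illustration; it is not one of the tools named above) to mock up a minimal customer lookup dialog for a system like RMO's:

```python
import tkinter as tk
from tkinter import ttk

# A throwaway user-interface prototype: layout and labels only, with
# a stub event handler standing in for real application code.
root = tk.Tk()
root.title("Customer Lookup (prototype)")

ttk.Label(root, text="Customer ID:").grid(row=0, column=0, padx=5, pady=5)
customer_id = ttk.Entry(root)
customer_id.grid(row=0, column=1, padx=5, pady=5)

def on_search():
    # Real lookup logic would go here; the prototype just echoes input.
    print("searching for", customer_id.get())

ttk.Button(root, text="Search", command=on_search).grid(row=1, column=1, pady=5)
root.mainloop()
```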

Different prototyping tools provide varying capabilities. No one tool is best in all categories, and many tools are specialized for specific technical environments (such as operating or database management systems) or application types (such as data entry and query to or from a DBMS and Web site development). Thus, it is important to choose the tool that best matches the characteristics of the project at hand.

Prototyping tools must also be chosen according to specific project conditions, including:

* Suitability to the technical environment in which the system will be deployed
* Ability to implement all system requirements
* Ability to interface with software developed with other tools

For example, a developer may use a simple PC software package such as Microsoft Office to develop the prototype, but if the deployment environment is an IBM mainframe supporting thousands of interactive users, then the prototype is unlikely to function reliably in that environment. When prototyping and deployment environments differ substantially, it is typically more feasible to redevelop the application from scratch, using the prototype only as a tangible representation of system requirements.

Some prototyping tools are designed to develop software components that can be easily incorporated into (or extended by) other software modules. Others are designed to produce stand-alone software, which is fine as long as the tool is capable of building software for all of the required system features. If there are any gaps in the tool's capabilities, then that tool is infeasible because the prototype will not be able to function with software developed by other tools.

Figure 17-4 Development of a user-interface prototype using a WYSIWYG dialog box editor.


Fortunately, in recent years, incompatibility of tools has become less of a problem. In particular, object-oriented, component-based, and Web service technologies and standards have made software interoperability more achievable and someday perhaps will make it the norm. For the present, incompatibility is a potential problem that must be recognized and avoided.
The Spiral Approach to Development

The spiral development approach is an iterative development approach in which each iteration may include a combination of planning, analysis, design, or development steps. First described by Barry Boehm (see “Further Resources”), the approach has since become widely used for software development. There are many different ways of implementing the spiral development approach. Figure 17-5 illustrates one version that uses developmental prototypes (for other versions, see the Boehm and McConnell references in the “Further Resources” section). It is a more radical departure from traditional development than the prototyping development approach described earlier.
spiral development approach

an iterative development approach in which each iteration may include a combination of planning, analysis, design, or development steps

Figure 17-5 The spiral life cycle.

Steps in the Spiral Development Approach

The spiral development approach begins with an initial planning phase, as shown in the center of Figure 17-5. The purpose of the initial planning phase is to gather just enough information to begin developing an initial prototype. Planning phase activities include a feasibility study, a high-level user requirements survey, generation of implementation alternatives, and choice of an overall design and implementation strategy. A key difference between the spiral approach and the prototyping approach is the level of detail in user requirements and software design: the spiral approach develops less of that detail during initial planning than the prototyping approach does.

As in the prototyping approach, you can also develop preliminary plans for a series of prototypes. But these plans must be very flexible because analysis and design activities are very limited. Because this approach bypasses many analysis and design details, you often must alter plans later in the project. Thus, you should avoid detailed planning for future prototypes.

After the initial planning is completed, work begins in earnest on the first prototype (Figure 17-5). For each prototype, the development process follows a sequential path through analysis, design, construction, testing, integration with previous prototype components, and planning for the next prototype. When planning for the next prototype is completed, the cycle of activities begins again. Although the figure shows four prototypes, you can adapt the spiral development approach for any number of prototypes.

Which features to implement in each cycle's prototype is an important decision. Developers should choose features based on a number of criteria, including:

* User priorities
* Uncertain requirements
* Function reuse
* Implementation risk

How these criteria are evaluated varies widely from one project to another.

System requirements are typically prioritized into categories such as “must have,” “should have,” and “nice to have.” One way to minimize schedule length is to include the “must have” and “should have” requirements in the earliest prototypes. Doing so greatly increases the probability that you can deliver a usable system to the customer while minimizing development time. If work takes longer than expected, lower-priority requirements can be delayed until a future system upgrade, added after installation, or ignored entirely.
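One simple way to act on these priorities is to sort candidate requirements so that “must have” and “should have” items land in the earliest prototypes. A minimal sketch, with invented feature names:

```python
PRIORITY_ORDER = {"must have": 0, "should have": 1, "nice to have": 2}

features = [
    ("gift-wrap option", "nice to have"),
    ("order entry", "must have"),
    ("sales reporting", "should have"),
    ("customer lookup", "must have"),
]

# Earliest prototypes get the highest-priority requirements, so a
# usable system can still be delivered if later cycles are cut.
schedule = sorted(features, key=lambda f: PRIORITY_ORDER[f[1]])
for name, priority in schedule:
    print(f"{priority:>11}: {name}")
```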

As described earlier, prototyping is an excellent tool for firming up uncertain or poorly defined user requirements, and including them in early prototypes allows you to explore them and specify them fully as soon as possible. That knowledge can then guide subsequent prototype development. Delaying the development of poorly defined portions of the system may result in unanticipated rework if they are incompatible with portions that are already implemented.

Functions that will be used many times are excellent candidates for inclusion in early prototypes. Typical examples include data-entry screens, data lookup and retrieval functions, and problem domain classes. High-risk functions are also excellent candidates for inclusion in early prototypes. Risk analysis is discussed further in the next section.
Benefits and Risks of Spiral Development

The spiral development approach has many advantages over traditional and prototyping approaches, including:

* High parallelism. Many opportunities arise to overlap activities both within and among prototyping cycles. For example, planning of the next prototype can usually overlap testing and integration of the previous prototype.
* High user involvement. Users can be involved at each planning, analysis, and testing stage. Frequent and continual user participation produces a better system and higher user satisfaction.
* Gradual resource commitment. Resource consumption is much more evenly spread out over a spiral life cycle, which may lead to more efficient utilization of some resources (such as personnel). However, the total development cost is generally higher. Figure 17-6 compares typical expenditures for traditional and spiral development approaches.
* Frequent product delivery. Every prototype is a working system in its own right. With sufficient planning, you can put these prototypes to work immediately. Frequent product delivery also leads to more testing, thus improving the product by catching more bugs.

So why isn't every system developed with the spiral development approach? The primary drawbacks of the spiral approach are management and design complexity. Projects that use spiral development are more complex to manage than traditional projects because more activities occur in parallel and more people are working on the project at earlier stages. Also, because not all analysis and design occurs before construction, some rework is more likely to be necessary.

In a sequential development approach, most or all design activities are completed before construction begins. When a design is specified completely for an entire system, the result is generally of higher quality than if the same system were designed one piece at a time. In the spiral approach, fewer high-level design activities are performed before construction begins, but design decisions that may seem optimal when looking at a portion of the system may be less desirable when looking at the entire system. In mathematics and management science, this situation is sometimes described as “locally optimal but globally suboptimal.”

Figure 17-6 Cumulative cost plotted against time for spiral and sequential development.


A designer using the spiral development approach runs a much greater risk of making decisions that are globally suboptimal (possibly resulting in lower performance, a greater number of bugs, or more difficult maintenance). Whether or not these problems are significant depends on several factors, including:

* Interdependence among system functions
* Expected life of the system
* Design team experience and skill
* Luck!

Systems with highly interdependent functions run a greater risk of design problems when the spiral approach is employed because design decisions for one function also affect many other parts of the system. Parts of the system that are constructed in later iterations may inherit a large number of design constraints that prove difficult or impossible to resolve.

Systems with a long expected lifetime (for example, more than five years) need more careful and all-encompassing design than those with short lives. Because almost all information systems are modified during their lifetimes, the longer the life, the greater the number of modifications. Modifications have a cumulative effect on software quality. As you add more features to the system, it becomes less efficient, less reliable, and more difficult to maintain. Eventually, these problems build to the point that you must scrap or reimplement the system.

Although no one can anticipate all future needs, a designer can allow for future modifications and enhancements. Systems can be designed for rapid development, maximum ease of modification, or both. But designing for future upgrades and flexibility is a task best performed on an entire system, not iteratively on its subsystems. Traditional development approaches provide greater flexibility in the design because more design activities are completed earlier in the project life cycle.

It is difficult to decide how much analysis and design to do in the initial planning phase and what can be left for later phases. The experience and skill of the analysis and design team are critical to this decision. Luck is also a factor. Even the most skilled and experienced software developers can misidentify a critical analysis or design area. The eventual discovery of its importance may require redoing or throwing away some already completed work.
Extreme Programming

eXtreme Programming (XP) is a system development approach created by Kent Beck in the mid-1990s. XP borrows heavily from earlier development approaches and techniques such as prototyping, object-oriented development, and pair programming, but it combines those borrowed elements and techniques in a unique way.
eXtreme Programming (XP)

a rapid development approach focused on creating user stories, delivering releases of a system, and quickly testing
XP Activities

Figure 17-7 shows an overview of the XP system development approach. The XP development approach is divided into three levels—system (the green ring), release (the red ring), and iteration (the orange ring). System-level activities occur once during each development project. A system is delivered to users in multiple stages called releases. Each release is a fully functional system that performs a subset of the full system requirements. A release is developed and tested within a period of no more than a few weeks or months. Releases are divided into multiple iterations. During each iteration, developers code and test a specific functional subset of a release. Iterations are coded and tested in a few days or weeks.

Figure 17-7 The eXtreme Programming system development approach.

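The three levels in Figure 17-7 can be pictured as nested loops: the system level spans the whole project, each release is acceptance-tested and delivered, and each iteration codes and tests one small functional subset. The release and iteration contents below are invented for illustration:

```python
# System-level activities (user stories, system metaphor, acceptance
# tests, release plan) happen once, before this loop begins.
releases = {
    "release 1": ["login", "catalog browsing"],
    "release 2": ["shopping cart", "checkout"],
}

for release, iterations in releases.items():      # release level
    for feature in iterations:                    # iteration level
        print(f"{release}: code and unit-test '{feature}' (days/weeks)")
    print(f"{release}: acceptance-test and deliver to users")

print("final release accepted: project finished")
```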

The first XP development activity is creating user stories, which are similar to use cases in OO analysis. A team of developers and users quickly documents all of the user stories that will be supported by the system. Developers then create a class diagram to represent objects of interest within the user stories. In XP, the class diagram is called a system metaphor.
user story

a use case; used in eXtreme Programming
system metaphor

a class diagram that represents a system design in eXtreme Programming

Developers and users then create a set of acceptance tests for each user story. Releases that pass the acceptance tests for the user stories that they support are considered finished. The final system-level activity is to create a development plan for a series of releases. The first release supports a subset of the user stories, and subsequent releases add support for additional stories. Each release is delivered to users and performs real work, thus providing an additional level of testing and feedback.

The first release-level activity is planning a series of iterations. Each iteration focuses on a small (possibly just one) system function or user story. The iterations' small size allows developers to code and test them within a few days. A typical release is developed using a few to a few dozen iterations.

Once the iteration plan is complete, work begins on the first iteration-level activity. Code units are divided among multiple programming teams, and each team develops and tests its own code. XP recommends a test-first approach to coding. Test code is written before system code. As code modules pass unit testing, they are combined into larger units for integration testing. (Testing was covered in more detail in Chapter 16.) When an iteration passes integration testing, work begins on the next iteration.
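Test-first coding means a test exists, and initially fails, before the code it exercises is written. Here is a minimal illustration using Python's built-in unittest module; the discount function is an invented example, not from the text:

```python
import unittest

# Step 1: write the test first; it fails until the code below exists.
class TestVolumeDiscount(unittest.TestCase):
    def test_discount_applies_over_threshold(self):
        self.assertEqual(volume_discount(order_total=200.0), 20.0)

    def test_no_discount_under_threshold(self):
        self.assertEqual(volume_discount(order_total=50.0), 0.0)

# Step 2: write just enough code to make the tests pass.
def volume_discount(order_total: float) -> float:
    """10% discount on orders of $100 or more (illustrative rule)."""
    return order_total * 0.10 if order_total >= 100.0 else 0.0

if __name__ == "__main__":
    unittest.main()  # in XP, the whole suite runs on every build
```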

When all iterations of a release have been completed, the release undergoes acceptance testing. If a release fails acceptance testing, the team returns it to the iteration level for repair. Releases that pass acceptance testing are delivered to end users, and work begins on the next release. When acceptance testing of the final release is completed, the development project is finished.
XP Principles and Techniques

Describing the activities and iteration levels of XP doesn't paint a complete picture of the approach. XP embodies a number of principles and techniques that are woven throughout the approach, including:

* Continuous automated testing. Testing activities occur every day in the XP approach. When work begins on a software module or system function, tests related to that module or function are added to the permanent test suite, and all iterations and releases are tested against that suite. The intensive level of testing requires automated testing software.
* Continuous integration. As soon as a software module passes unit testing, it is tested in concert with other software modules. That way, errors are discovered quickly. Development of additional modules, iterations, and releases does not proceed until problems are resolved.
* Heavy user involvement. One or more users are permanently assigned to the development team. If the organization is unwilling to do so, then perhaps the project's importance should be questioned. Users participate in all important decisions, including technical ones.
* Team programming. All software is developed by two programmers sitting at one workstation, sharing a single display and keyboard. This method is called pair programming. Also, programmers regularly review each other's work. Any programmer is allowed to change any program at any time.
* Specific attention to human interactions and limitations. The entire development team works within a common room—as large as needed. Programming workstations are placed in the center, and cubicle work spaces are placed around the perimeter. Every team member can see and communicate with every other member. Forty-hour work weeks are the norm. Overtime is prohibited to avoid burnout and excessive mistakes.

pair programming

a programming method in which software is developed by two programmers sitting in front of one workstation sharing a single display and keyboard
XP Compared with Other Development Approaches

XP techniques are inherently object-oriented, although the terminology for those activities differs from that used in the Unified Modeling Language and Unified Process. Other aspects of XP are borrowed from other development approaches and techniques, including the spiral development approach and joint application development.

Figure 17-8 compares the number of iterations and the relative effort expended on planning, analysis, design, and implementation for traditional development, spiral development, and XP. Moving from traditional to spiral to XP, the number of iterations increases and the total effort expended on planning, analysis, and design decreases. Within implementation, XP also differs from traditional and spiral development in that coding is more tightly interwoven with testing.

Figure 17-8 Comparison of traditional development, spiral development, and eXtreme Programming.


The merits and limitations of XP have been debated in many articles, books, and Web sites over the last several years. A clear consensus has not emerged yet, but XP does enjoy a substantial following in the software development community.

Arguments in favor of XP include:

* It borrows many proven techniques and principles from other development approaches.
* It cuts out the “fat” of long analysis and design phases. The potential negative implications of less stringent analysis and design are mitigated by continuous testing and periodic restructuring (refactoring) of existing code.
* It builds group ownership of and enthusiasm about the project.
* It has been proven successful in published case studies.

Arguments against XP include:

* Its team-based approach works well for smaller projects but scales up poorly to larger projects.
* It places too little emphasis on analysis and architectural design, creating the possibility that later iterations and releases will be suboptimal implementations.

When to Use XP

The published case studies and commentaries to date paint a glowing picture of XP's success under some very specific conditions:

* Small development teams of a dozen or fewer members
* Talented development personnel with a broad range of modeling, technical, and communication capabilities
* Scope limited to stand-alone systems, new systems, and systems with minimal interfaces to legacy hardware and software
* Extensive use of high-quality OO development and testing tools

For development projects with these resources and parameters, XP is a proven way to shorten the development life cycle.

XP is least successful in larger projects with many assigned staff. The close-knit teams and instant communication required by XP are simply not possible in large projects. As an analogy, compare a dozen people building a research prototype in a lab with the personnel and effort required to manufacture thousands of copies of that product per week. Large-scale projects, whether for manufacturing or software development, require specialization of personnel, hierarchical organization, formal methods of communication, and written documentation—all of which XP purposely avoids.

Systems that extend the capabilities of or interface with legacy systems may or may not be candidates for development with XP. Because of XP's reliance on OO methodologies and tools, it is not ideal for extending a legacy system built with other technology. Also, systems involving legacy hardware and software leave developers little freedom. In general, such systems require more thorough analysis and documentation than XP provides.
The Unified Process

The Unified Process (UP) is a comprehensive development approach that combines practices from other development approaches and adapts them to OO models, tools, and techniques. It was originally developed by Jacobson, Booch, and Rumbaugh in the late 1990s. The UP is currently the dominant approach for developing software with OO models and tools.
Unified Process

an OO development approach that emphasizes frequent iteration, risk management, testing, and user feedback
The UP Compared with Other Development Approaches

The UP adopts its most important principle—iteration—from the prototyping and spiral development approaches. Development proceeds through a series of iterations, each of which can incorporate planning, analysis, design, and construction activities. Each iteration produces a working but incomplete subset of the final system, with each iteration adding to or modifying the outputs of the previous iteration so that models and working software grow to become the finished product.

The primary differences between the UP and earlier iterative approaches are its exclusive reliance on OO models and tools and specific restrictions on iteration length and activities. Use cases are the primary means of documenting requirements, and other OO analysis and design models fill a necessary but supporting role. The UP is not dogmatic about the exact number and type of models that are developed. Rather, the mix of models is chosen based on specific project characteristics.

Compared with XP, the UP is a more formal process and generally includes more upfront planning, analysis, and design activities, which are always carried out within the context of well-defined iterations. As in XP, models are developed only as needed to build working software—the models themselves are not an end product. Even though the UP is similar to XP in its spare use of models, XP avoids modeling to a greater extent than does the UP. In essence, XP and the UP are moderately different descendants of the prototyping and spiral development approaches, one specialized for speed with tight-knit development groups and the other formalized for larger projects.
How the UP Organizes Software Development

To make a clean break with older, sequential development approaches, the UP adopts new terms to describe SDLC activities. The UP's SDLC includes four high-level activities:

* Inception
* Elaboration
* Construction
* Transition

Inception is a high-level activity similar to project planning as described in Chapter 3. During inception, key project parameters such as scope, participants, business purpose, and initial budget and schedule estimates are defined. Scope is defined primarily through use cases, though they are not necessarily complete or exhaustive.
inception

earliest activity of the UP's systems development life cycle, encompassing strategic planning

Note that the definitions of inception and other high-level activities avoid using the word phase. The UP avoids that term because many people believe that it implies a strict sequence of activity types. In the UP, there are no phases per se, only iterations. But different iterations have different emphases, as shown in Figure 17-9, and incorporate different mixes of activities. The term inception describes the emphasis of one or more early project iterations.

Figure 17-9 A UP development project is a series of iterations in which emphasis shifts from inception, to elaboration, to construction, to transition.


Elaboration is a high-level activity that embodies aspects of planning, analysis, design, and construction. The purpose of elaboration is to move beyond inception by defining the requirements and scope in more detail, estimating the budget and schedule with greater precision, and designing and constructing key architectural aspects of the final system. Elaboration concentrates on the highest-risk portions of the system. By dealing with these aspects in early iterations, the UP moves quickly to minimize overall project risk and reduce the uncertainty associated with later iterations.
elaboration

second activity of the UP's systems development life cycle, encompassing planning, analysis, design, and construction of the highest-risk portions of a system

The common interpretation of the term construction implies a large and pervasive set of activities such as programming, installation, and testing. Within the UP, construction iterations do include such activities, but only for lower-risk and simpler portions of the system that were not addressed in earlier iterations. In addition, construction iterations may incorporate analysis and design activities that were not performed earlier. All iteration types include planning activities for subsequent iterations. The key difference between elaboration and construction is the amount of requirements discovery and the degree of risk. Elaboration iterations focus on higher-risk aspects of the system and typically embody more discovery activities than construction iterations.
construction

third activity of the UP's systems development life cycle, encompassing programming, installation, and testing of lower-risk and simpler portions of a system

Transition is a high-level activity that moves a system from development into production. Transition iterations shift the focus from adding and testing incremental features to testing the system as a whole and deploying it within its operational environment.
transition

fourth activity of the UP's systems development life cycle, which moves a system from development to production

As shown in Figure 17-10, different projects may have very different arrangements of high-level activities. For example, the first arrangement depicted in Figure 17-10 might be appropriate for a project that develops cutting-edge software to support an entirely new business paradigm. Inception and elaboration cover many iterations due to the significant number of risks and discovery activities. Construction is relatively short because most software is developed and tested during elaboration.

Figure 17-10 Distribution of high-level activities in three different projects.


In the second project in Figure 17-10, inception is relatively short, as might occur when the technology and business purpose are well understood before the project begins, and construction and transition overlap through many iterations, as might occur when a system is tested and placed into production in several stages. In the third project, construction covers the majority of iterations, as might occur when rewriting an existing application in a new programming language.
Iterations and Disciplines

Note that the dashed boxes representing iterations in Figure 17-9 and Figure 17-10 are all the same size. This reflects a key UP concept called timeboxing, which simply means that all iterations are the same length. Development work is packaged to fit the time boxes, not vice versa. If it appears that an iteration will exceed its schedule, then the scope of the work is reduced to fit the remaining allotted time.
timeboxing

organizing a complex task or project into a series of equal-length time intervals
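
To make the scope-cutting rule concrete, here is a minimal sketch in Java. The task names and duration estimates are invented for illustration; the point is that the iteration length is fixed, and prioritized work items that do not fit are deferred rather than the schedule being extended:

```java
import java.util.ArrayList;
import java.util.List;

public class Timebox {
    public static void main(String[] args) {
        double boxWeeks = 4.0;  // the fixed iteration length never changes
        String[] tasks = {"order entry", "invoicing", "reporting", "help text"};
        double[] estimates = {1.5, 1.0, 2.0, 0.5};  // estimates, in priority order

        List<String> scope = new ArrayList<String>();
        double used = 0.0;
        for (int i = 0; i < tasks.length; i++) {
            if (used + estimates[i] <= boxWeeks) {
                scope.add(tasks[i]);     // this task fits in the remaining time
                used += estimates[i];
            }
            // tasks that do not fit are deferred to a later iteration
        }
        System.out.println("Iteration scope: " + scope);  // "reporting" is deferred
    }
}
```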

The benefits of iterative development are best realized when iterations are relatively short and when each iteration produces a concrete result. The UP recommends short iterations—from several weeks to a few months. Short iterations result in more frequent testing, more frequent and immediate feedback from users, and higher motivation and a greater sense of accomplishment among all project participants.

So far, we have focused primarily on iterations and the higher-level organization of the UP's SDLC. But what about the detailed activities that occur within iterations? To describe those activities, the UP defines more terms, the most important of which is discipline. A discipline is a set of functionally related activities that can occur in many different iterations. Jacobson, Booch, and Rumbaugh define a core set of UP disciplines including:

* Business modeling
* Requirements
* Design
* Implementation
* Testing
* Deployment
* Configuration and change management
* Project management
* Environment

discipline

a set of functionally related lower-level UP activities

Like the high-level activities of inception, elaboration, construction, and transition, the disciplines are actually categories of lower-level activities, organized by functional specialty instead of by the time in the SDLC at which they are emphasized. Because the entire SDLC is based on iterations, activities from many different disciplines are performed within each iteration. Some disciplines, such as business modeling, tend to occur in early iterations, and others, such as deployment, tend to occur in later iterations. Other disciplines, such as project management and testing, occur in nearly every iteration although their relative emphasis and specific details may vary.

Figure 17-11 shows the distribution of effort for five disciplines across inception, elaboration, construction, and transition for a sample project. Color intensity increases as the relative effort for a discipline increases. In this project, activities related to the business modeling discipline occur primarily during inception and decrease after the project moves into elaboration. The exact mix of disciplines within each iteration varies from project to project and among iterations within a project.

Figure 17-11 Distribution of disciplines across high-level activities in a sample project.

When to Use the Unified Process

The benefits and risks of the UP mirror those described earlier for spiral development. Since the UP was formalized in the late 1990s, numerous studies have examined its effectiveness, and the consensus is that it delivers the theoretical benefits of iterative development in practice. The major obstacles to its adoption include more complex project management (compared with sequential development) and the need to adopt OO models, tools, and techniques throughout the project. Some early projects were hampered by the development staff's lack of experience with OO analysis and design methods. But many developers are now trained in those methods, so that problem has diminished.

In deciding whether to use the UP or its primary competitor, XP, the critical trade-off is between speed and formality. XP avoids process formality and model development and directs the saved effort toward developing and testing software. As described earlier, this approach has demonstrated success with small and talented developer teams, systems with limited external interfaces, and projects that use high-quality development and deployment tools. Considerable debate still rages over whether and how XP practices can be adapted to larger projects.

The formality embedded within the UP is a reflection of its bias toward larger projects, particularly those with big, geographically dispersed development teams and projects that are completed under contract by external developers. In such projects, the UP's formal steps, well-defined roles, and significant attention to model building and validation address issues of project control and communication, which are handled much less formally under XP. Many developers and most managers of large-scale development projects believe that they need the formality of the UP in their development environment.
Rapid Development Techniques

Chapter 2 defined a technique as a collection of guidelines that help an analyst complete a system development activity or task. Many techniques have been developed over the years to speed development. Some live up to their claims, although none is suited to all projects and development scenarios.

This section presents a small group of techniques proven to shorten project schedules. None is unique to a particular system development approach, although some work better with (or are required by) a particular approach.
Risk Management

Within the context of software development, a risk is a condition or event that can cause a project to exceed its shortest possible schedule. Examples include changing user requirements; failure of hardware, support software, or tools; and loss of needed development resources such as funding or personnel. Risks also arise from dependency on others, including clients, suppliers, and other organizational units. Risks are present regardless of what approach is taken to system development.
risk

a condition or event that can cause a project to exceed its shortest possible schedule

Conditions are states that exist but are unknown or poorly understood at the present time. For example, when upgrading an existing system, a developer may become aware of bugs in critical software components only after a new feature is implemented. Events are things that may occur but haven't yet. Examples include user requirement changes, reassignment of key personnel, and failure of a hardware vendor to meet a delivery deadline.

Figure 17-12 describes several important categories of software development risks. A longer list with more specific examples can be found in McConnell (see the “Further Resources” section). All software development projects face risks that can result in schedule delays.

Figure 17-12 Major categories of development schedule risk.

Steps in Risk Management

Risk management is a systematic process of identifying and mitigating software development risks. The underlying principles of risk management are based on the following ideas:

* Most risks can be identified if specific attention is directed to them.
* Risks appear, disappear, increase, and decrease as the development process proceeds.
* Small risks should be monitored, whereas large risks should be actively mitigated.

risk management

a systematic process of identifying and mitigating software development risks

Figure 17-13 contains a flowchart for risk management. The process begins at the start of the development project and continues until the project is completed. The first step is to identify all risks that are likely to affect the project. This task is relatively difficult but critically important since risks that aren't identified can't be managed. A group of project participants (including users) usually identifies risks, since a group can generate and evaluate a far larger number of ideas than any individual can.

For each risk, the next step is to estimate its probability, determine possible outcomes, and estimate the probability and schedule delay associated with each outcome. For example, the risk of failing a performance test might be assigned a probability of 0.1 (10 percent) and determined to have two possible outcomes:

* Reprogram key software modules to improve performance.
* Purchase and install faster hardware.

Reprogramming might be assigned an estimated probability of 0.8 (80 percent) and a schedule delay of four weeks. Purchasing and installing faster hardware might be assigned an estimated probability of 0.2 (20 percent) and a schedule delay of two weeks. Note that the probability assigned to an outcome is a conditional probability for a specific risk. For example, the 20 percent probability that faster hardware will need to be acquired is relevant only if the system actually fails a performance test.

Figure 17-13 Steps in risk management.

Developers can compute an expected schedule delay for each outcome by multiplying the risk probability, outcome probability, and outcome schedule delay. For example, the expected schedule delay for reprogramming to improve performance is 0.1 × 0.8 × 4 weeks = 0.32 weeks. If all outcomes for a risk are mutually exclusive, developers can add the expected schedule delays for all risk outcomes to produce an expected schedule delay for the risk.
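
The arithmetic is simple enough to capture in a short program. The following Java sketch (class and method names are invented for illustration) reproduces the chapter's example figures: 0.1 × 0.8 × 4 weeks = 0.32 weeks for reprogramming, 0.1 × 0.2 × 2 weeks = 0.04 weeks for faster hardware, for a total expected delay of 0.36 weeks:

```java
public class RiskOutcome {
    double riskProbability;     // P(risk occurs), e.g., 0.1 for the failed test
    double outcomeProbability;  // P(this outcome, given the risk occurred)
    double delayWeeks;          // schedule delay if this outcome occurs

    RiskOutcome(double riskP, double outcomeP, double weeks) {
        riskProbability = riskP;
        outcomeProbability = outcomeP;
        delayWeeks = weeks;
    }

    double expectedDelay() {
        // expected delay = risk probability x outcome probability x delay
        return riskProbability * outcomeProbability * delayWeeks;
    }

    public static void main(String[] args) {
        RiskOutcome reprogram = new RiskOutcome(0.1, 0.8, 4.0);      // 0.32 weeks
        RiskOutcome fasterHardware = new RiskOutcome(0.1, 0.2, 2.0); // 0.04 weeks
        // The outcomes are mutually exclusive, so their expected delays add.
        double total = reprogram.expectedDelay() + fasterHardware.expectedDelay();
        System.out.printf("Expected schedule delay: %.2f weeks%n", total); // 0.36
    }
}
```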

Once outcomes have been analyzed, the risks must be prioritized. Highest priority should go to risks with long expected schedule delays. Risks with long outcome delays and low probabilities are also given high priority since probability estimates are inherently inaccurate. The goal of prioritizing risks is to generate a list of “high” risks and a list of “low” risks. Low-priority risks are simply monitored to ensure that they don't become higher-priority risks. High-priority risks call for more active management approaches.

Risk mitigation is any steps taken to minimize an expected schedule delay. Specific mitigation steps can vary widely. For example, the risk of a customer's adding significant requirements late in the project might be mitigated by developing an early throwaway prototype to validate requirements, actively involving the customer in all phases of project planning, or requiring the customer to sign a contract rigidly specifying the system features. Risk mitigation for a failed performance test might include detailed benchmarking of the tool and hardware, development of a test prototype, or parallel development with multiple tools.
risk mitigation

any step(s) taken to minimize the expected schedule delay of a risk

As a project proceeds, more information about risks becomes available. The additional information may arise as a natural by-product of development activities, or it may arise from specific risk mitigation efforts. In either case, new and revised information should be evaluated periodically. Thus, the entire risk management process is actually a loop whose frequency varies from project to project. In projects with few risks and moderately long schedules, developers may reevaluate risks infrequently (for example, after completion of each SDLC phase or after one or more iterations of the spiral, XP, or UP development approaches). Projects with many risks or tightly constrained schedules may require formal reexamination of risks more frequently (for example, at the end of every week).
When to Implement Risk Management

Risk management should always be used—regardless of development approach. But inherently riskier projects call for more thorough and active risk management. Unknown requirements and feasibility are principal elements of overall project risk. Since such unknowns normally motivate developers to use iterative development, active and thorough risk management should be key elements of those approaches. When possible, developers should incorporate mitigation measures for the highest-priority risks into the earliest development activities.
Joint Application Design

Joint application design (JAD) was described in Chapter 4 in the section “Conduct Joint Application Design Sessions.” JAD is an effective technique for quickly defining system requirements. It is primarily used as a systems analysis technique and occasionally to specify some higher-level design parameters. JAD shortens the time needed to specify system requirements by including all key decision makers in one or more intensive sessions. It shortens the project schedule by concentrating the efforts of many people in a short space of time. Participation by all key decision makers ensures that the project gets off to a speedy start with solid planning and clearly defined objectives.

Any development approach can incorporate JAD, but it is especially well suited to iterative development because both emphasize prototypes and development speed. JAD can be used to implement most or all of the initial planning phase of the spiral development approach. The user-interface prototypes developed during the JAD session can form the starting point for the first full-fledged developmental prototype in the spiral development approach. JAD is not a formal part of XP or UP, although many JAD principles underlie both approaches.

When used with a conventional development approach, a JAD session does not normally address all of the design issues that the initial planning phase of the spiral development approach does, such as choice of implementation tools, overall architectural design, and initial definition of the evolutionary prototypes. However, these decisions are next in line after those normally considered in a JAD session. The JAD session can be extended to include these decisions, or the results of the JAD session can be the input to a later phase.
Tool-Based Development

Chapter 2 defined the term tool as software that helps create models or other deliverables required in a project. The intervening chapters have discussed a large number of tools, including CASE tools, project management tools, database management systems, and integrated development environments. This chapter focuses on tools that are used directly to build software. Such tools include compilers, code generators, and object frameworks but exclude tools such as project management and model development software, unless they are included within a CASE tool or integrated development environment.

No one tool is best for producing all types of software. Tools have different strengths and weaknesses. Some parts of a system can be built very quickly because they match development tool capabilities very well. Other parts are much more difficult to build because of limitations or gaps in tool capabilities. Building these parts of the system requires much more time because human ingenuity and experimentation are required to get the tool to do something it does poorly or was never designed to do.

The premise of tool-based development is simple: Choose a tool or tools that best match the requirements and don't develop any requirements that aren't easily implemented with the tool(s). In essence, tool-based development applies the generic 80/20 or 90/10 rule: Resources are best used to construct a system that satisfies the 80 to 90 percent of the requirements most easily implemented, and the 10 to 20 percent of the requirements that are difficult to implement with the tool(s) are discarded.
tool-based development

a development approach that chooses a tool or tools that best match system requirements and doesn't develop any requirements that aren't easily implemented with the tool(s)

Although tool-based development is easy to describe, it is very difficult to implement in an organization because it requires a developer to say no early and often. The developer and the customer must agree on what requirements the system will or won't include. The developer must be willing and able to say no to future requests to add difficult requirements. Saying no so often may not be possible because of user needs or project politics.

User satisfaction may be much lower than with other development methods because some desired functions are left out of the system. The developer can mitigate dissatisfaction by clearly describing the schedule and cost impacts of all requirements that don't directly match tool capabilities. On the plus side, design and development can proceed much more quickly, and reductions in coding and testing time are dramatic. Thus, tool-based implementation forces a clear and direct trade-off between development speed and system functionality.

Figure 17-14 shows a simple process for tool-based development in the context of a sequential development approach. New activities are added to the end of the analysis phase, including:

* Investigating development tools that support the system requirements
* Selecting tools and modifying system requirements to match the tool set

Note that requirement priorities are assumed to drive the tool selection, not the other way around. The underlying capability and methodology of the chosen tool set drive subsequent development phases. The developer can also incorporate tool-based development into the prototyping or spiral development approaches by embedding a tool selection activity within the analysis or initial planning phase of those approaches.

If a new tool is chosen, then project costs will increase because the new tool must be purchased and developers must be trained. Project managers must plan carefully to avoid having a new tool (or tools) lengthen the project schedule. Tool acquisition and installation must be completed before implementation activities begin. Training should be concurrent with systems analysis, and it should be completed prior to architectural or detailed design.

A developer must take great care when choosing multiple tools to build a new system. Tools that aren't designed to work together may have hidden incompatibilities that don't become apparent until late in the implementation phase. For example, development tools may use incompatible methods for representing data. Such incompatibilities may not become apparent until integration testing, when data stored in a program developed with one tool are passed via a function call or method invocation to software developed with another tool. Smoothing over these incompatibilities can add as much work and time to a project as adding system features that aren't well matched to a specific tool.

Figure 17-14 Tool-based development applied within a traditional sequential life cycle.

Another danger of tool-based development is the tendency to choose a different tool set for every project. This is particularly problematic when users hear success stories about a new tool set and want a system similar to the one described in the success story. The problem is that each new tool adopted by a development team initially decreases productivity dramatically. Every tool has a learning curve, and productivity gains can't be realized until the development team has worked its way up the curve. The first project that uses a new tool typically does not achieve high productivity. A tool less well suited to the project's requirements may actually deliver a better system on a shorter schedule if the development team has extensive experience with that tool.
Software Reuse

Software reuse (or code reuse) is any mechanism that allows software used for one purpose to be reused for another purpose. Software reuse can significantly shorten a development schedule by reducing the effort, time, and money required to develop or modify information systems. Whether software reuse actually reduces development effort depends on many specifics, including:

* The effort required to identify potentially reusable software
* The extent to which existing software functions require modification to suit a new purpose
* The extent to which existing software must be repackaged into a form that can be plugged into a new system

software reuse, or code reuse

any mechanism that allows software used for one purpose to be reused for another

There are many different ways of reusing software. At one extreme, an existing system (for example, an accounting software package) can be purchased, thus avoiding the vast majority of implementation phase activities. At the other extreme, small portions of code from previously written programs can be reused in another program. Of course, there are many kinds of software reuse between these extremes, and many different approaches, techniques, tools, and technologies can be used.

OO design and programming techniques have provided new tools to address software reuse. One of the reasons that OO development methods have become so popular is the premise that classes and objects are easier to reuse than programs built with traditional programming tools. Note the use of the word premise in the previous sentence. There is still too little hard evidence to say definitively that OO development is faster than traditional development. But most software developers do believe that OO development is faster, and ongoing research should soon close the gap between belief and knowledge.

Figure 17-15 shows two particular methods of OO software reuse: source code and executable code reuse. A project team can reuse OO source code by deriving new classes from existing classes or entire object frameworks. Executable objects are sometimes called components. A component can be a single object, an entire system, or anything in between. The last two sections of this chapter describe object frameworks and components in detail.

Figure 17-15 A comparison of various object-oriented code reuse methods.

Software reuse is a technique applicable to any system development approach. Analysts and developers must actively search for reusable software. For large-scale reuse, the search must begin before architectural design; however, it can be time-consuming and expensive. The effort needed to search, evaluate, adapt, and integrate existing software for a new purpose may be greater than the effort to build a new system from scratch.

The search for smaller-scale reusable software can occur during systems design or development and is usually less time intensive. In many cases, developers need look no further than their programming toolkit. Most modern programming toolkits reuse software in one or more forms. Examples include object frameworks, program templates, code generators, and component libraries. Modern programming toolkits also provide tools for adapting and integrating reusable code.
Object Frameworks

Similar functions are embedded in many different types of systems. For example, the graphical user interface (GUI) is nearly ubiquitous in modern software. Many features of a GUI—such as drop-down menus, help screens, and drag-and-drop manipulation of on-screen objects—are used in many or most GUI applications. Other functions—such as searching, sorting, and simple text editing—are also common to many applications. Reusing software to implement such common functions is a decades-old development practice. But such reuse was awkward and cumbersome with older programming languages. Object-oriented programming languages provide a simpler method of software reuse.

An object framework is a set of classes that are specifically designed to be reused in a wide variety of programs. The object framework is supplied to a developer as a precompiled library or as program source code that can be included or modified in new programs. The classes within an object framework are sometimes called foundation classes. Foundation classes are organized into one or more inheritance hierarchies. Programmers develop application-specific classes by deriving them from existing foundation classes. Programmers then add or modify class attributes and methods to adapt a “generic” foundation class to the requirements of a specific application.
object framework

a set of classes that are specifically designed to be reused in a wide variety of programs or systems
foundation classes

the classes within an object framework
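
The following minimal sketch shows the adaptation mechanism in Java. The foundation class and its methods are invented for illustration; real frameworks differ in their details, but the pattern of deriving an application class and overriding generic behavior is the same:

```java
// A generic foundation class supplied by the (invented) framework.
class FoundationWindow {
    void draw() { /* generic window-drawing behavior */ }
    void onClick(int x, int y) { /* default: do nothing */ }
}

// An application-specific class derived from the foundation class.
class AccountSummaryWindow extends FoundationWindow {
    @Override
    void onClick(int x, int y) {
        displayAccountDetail(x, y);  // replace generic behavior with application logic
    }

    private void displayAccountDetail(int x, int y) {
        System.out.println("Showing account detail near (" + x + ", " + y + ")");
    }
}
```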
Object Framework Types

Object frameworks have been developed for a variety of programming needs. Examples include:

* User-interface classes. Classes for commonly used objects within a graphical user interface, such as windows, menus, toolbars, and file open and save dialog boxes.
* Generic data structure classes. Classes for commonly used data structures such as linked lists, indices, and binary trees and related processing operations such as searching, sorting, and inserting and deleting elements.
* Relational database interface classes. Classes that allow OO programs to create database tables, add data to a table, delete data from a table, or query the data content of one or more tables.
* Classes specific to an application area. Classes specifically designed for use in application areas such as banking, payroll, inventory control, and shipping.

General-purpose object frameworks typically contain classes from the first three categories. Classes in these categories can be reused in a wide variety of application areas. Application-specific object frameworks provide a set of classes for use in a specific industry or type of application. Third parties usually design application-specific frameworks as extensions to a general-purpose object framework. Many large organizations have moved aggressively to develop their own application frameworks. Smaller firms usually do not do so because the resource requirements are substantial. An application- or company-specific framework requires a significant development effort, typically lasting several years. The effort is repaid over time through continuing reuse of the framework in newly developed systems and through simplified maintenance of existing systems. But the payoffs occur far in the future, so their discounted value may not justify the funds that must be spent now to build the framework.
The Impact of Object Frameworks on Design and Implementation Tasks

Developers need to consider several issues when determining whether to use object frameworks. Object frameworks affect the process of systems design and development in several different ways:

* Frameworks must be chosen before detailed design begins.
* Systems design must conform to specific assumptions about application program structure and operation that the framework imposes.
* Design and development personnel must be trained to use a framework effectively.
* Multiple frameworks may be required, necessitating early compatibility and integration testing.

The process of developing a system using one or more object frameworks is essentially one of adaptation. The frameworks supply a template for program construction and a set of classes that provide generic capabilities. Systems designers adapt the generic classes to the specific requirements of the new system. Frameworks must be chosen early so that designers know the application structure imposed by the frameworks, the extent to which needed classes can be adapted from generic foundation classes, and the classes that cannot be adapted from foundation classes and thus must be built from scratch.

Of the three object layers typically used in OO system development (view, business logic, and data), the view and data layers most commonly derive from foundation classes. User interfaces and database access tend to be the areas of greatest strength in object frameworks, and they are typically the most tedious classes to develop from scratch. It is not unusual for 80 percent of a system's code to be devoted to view and data classes. Thus, constructing view and data classes from foundation classes provides significant and easily obtainable code reuse benefits. Adapting view classes from foundation classes has the additional benefit of ensuring a similar look and feel of the user interface across systems and across application programs within systems.

Successful use of an object framework requires a great deal of up-front knowledge about its class hierarchies and program structure. That is, designers and programmers must be familiar with a framework before they can successfully use it. Thus, a framework should be selected as early as possible in the SDLC, and developers must be trained in use of the framework before they begin to implement the new system.
Components

In addition to using object frameworks, developers often use components to speed system development. A component is a software module that is fully assembled, is ready to use, and has well-defined interfaces to connect it to clients or other components. Components may be single executable objects or groups of interacting objects. A component may also be a non-OO program or system “wrapped” in an OO interface. Components implemented with non-OO technologies must still implement objectlike behavior. In other words, they must implement a public interface, respond to messages, and hide their implementation details.
component

a standardized and interchangeable software module that is fully assembled and ready to use and that has well-defined interfaces to connect it to clients or other components

Components are standardized and interchangeable software parts. They differ from objects or classes because they are binary (executable) programs, not symbolic (source code) programs. This distinction is important because it makes components much easier to reuse and reimplement than source code programs.

For example, consider the grammar-checking function in most word processing programs. A grammar-checking function can be developed as an object or as a subroutine. Other parts of the word processing program can call the subroutine or object methods via appropriate source code constructs (for example, a C++ method invocation or a BASIC subroutine call). The grammar-checking function source code is integrated with the rest of the word processor source code during program compilation and linking. The executable program is then delivered to users.

Now consider two possible changes to the original grammar-checking function:

* The developers of another word processing program want to incorporate the existing grammar-checking function into their product.
* The developers of the grammar-checking function discover new ways to implement the function that result in greater accuracy and faster execution.

To integrate the existing function into a new word processor, the source code of the grammar-checking function must be provided to the word processor developers. They then code appropriate calls to the grammar checker into their word processor source code. The combined program is then compiled, linked, and distributed to users.

When the developers of the grammar checker revise their source code to implement the faster and more accurate function, they deliver the source code to the developers of both word processors. Both development teams integrate the new grammar-checking source code into their word processors, recompile and relink the programs, and deliver a revised word processor to their users.

So what's wrong with this scenario? Nothing in theory, but a great deal in practice. The grammar-checker developers can provide their function to other developers only as source code, which opens up a host of potential problems concerning intellectual property rights and software piracy. Of greater importance, the word processor developers must recompile and relink their entire word processing programs to update the embedded grammar checker. The revised binary program must then be delivered to users and installed on their computers. This is an expensive and time-consuming process. Delivering the grammar-checking program in binary form would eliminate or minimize most of these problems.

A component-based approach to software design and construction solves both of these problems. Component developers, such as the developers of the grammar checker, can deliver their product as a ready-to-use binary component. Users, such as the developers of the word processing programs, can then simply plug in the component. Updating a single component doesn't require recompiling, relinking, and redistributing the entire application. Perhaps applications already installed on user machines could query an update site via the Internet each time they started and automatically download and install updated components.

At this point, you may be thinking that component-based development is just another form of code reuse. But structured design, object frameworks, and client-server architecture all address code reuse in different ways. What makes component-based design and construction different are the following:

* Components are reusable packages of executable code. Structured design and object frameworks are methods of reusing source code.
* Components are executable objects that advertise a public interface (that is, a set of methods and messages) and hide (encapsulate) the implementation of their methods from other components. Client-server architecture is not necessarily based on OO principles. Component-based design and construction are an evolution of client-server architecture into a purely OO form. (A sketch of such an interface follows this list.)
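
Here is a minimal Java sketch of the second point. The interface and class names are invented; the essential idea is that clients depend only on the advertised interface, never on the implementation behind it:

```java
import java.util.ArrayList;
import java.util.List;

// The advertised interface: the only thing a client ever sees.
interface GrammarChecker {
    List<String> check(String text);  // returns suggested corrections
}

// The hidden implementation; it can be replaced without affecting clients.
class BasicGrammarChecker implements GrammarChecker {
    public List<String> check(String text) {
        List<String> suggestions = new ArrayList<String>();
        // ... the actual checking logic is encapsulated here ...
        return suggestions;
    }
}
```

Because clients are written against the interface alone, the implementation can be swapped (for example, for a faster grammar checker) without recompiling or relinking the client.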

Components provide an inherently flexible approach to systems design and construction. Developers can design and construct many parts of a new system simply by acquiring and plugging in an appropriate set of components. They can also make newly developed functions, programs, and systems more flexible by designing and implementing them as collections of components. Component-based design and construction has been the norm in the manufacturing of physical goods (such as cars, televisions, and computer hardware) for decades. However, it has only recently become a viable approach to designing and implementing information systems.
Component Standards and Infrastructure

Interoperability of components requires standards to be developed and readily available. For example, consider the video display of a typical IBM-compatible personal computer. The plug on the end of the video signal cable follows an interface standard. The plug has a specific form, and each connector in the plug carries a well-defined electrical signal. Years ago, a group of computer and video display manufacturers defined a standard that describes the physical form of the plug and the type of signals carried through each connector. Adherence to this standard guarantees that any video display unit will work with any compatible personal computer and vice versa.

Components may also require standard support infrastructure. For example, video display units are not internally powered. Thus, they require not only a standard power plug but also an infrastructure to supply power to the plug. A component may also require specific services from an infrastructure. For example, a cellular telephone requires the cellular service provider to assign a transmission frequency for communicating with the nearest cellular radio tower, to transfer the connection from one tower to another as the user moves among telephone cells, to establish a connection to another person's telephone, and to relay all voice data to and from the other person's telephone via the public telephone grid. All cellular telephones require these services.

Software components have a similar need for standards. Components could be hard-wired together, but this reduces their flexibility. Flexibility is enhanced when components can rely on standard infrastructure services to find other components and establish connections with them.

In the simplest systems, all components execute on a single computer under the control of a single operating system. Connection is more complex when components are located on different machines running different operating systems and when components can be moved from one location to another. In this case, a network protocol independent of the hardware platform and operating system is required. In fact, a network protocol is desirable even when components all execute on the same machine because such a protocol guarantees that systems can be used in different environments—from a single machine to a network of computers.

Modern networking standards have largely addressed the issue of common hardware and communication software to connect distributed software components. Internet protocols are a nearly universal standard and thus provide a ready means of transmitting messages among components. Internet standards can also be used to exchange information between two processes executing on the same machine. However, Internet standards alone do not fully supply a component connection standard. The missing pieces are:

* Definition of the format and content of valid messages and responses
* A means of uniquely identifying each component on the Internet and routing messages to and from that component

To address these issues, some organizations have developed and continue to modify standards for component development and reuse.
CORBA

The Common Object Request Broker Architecture (CORBA) was developed by the Object Management Group (OMG), a consortium of computer software and hardware vendors. CORBA was designed as a platform- and language-independent standard. The standard is currently in its third revision and is widely used.
Common Object Request Broker Architecture (CORBA)

a standard for software component connection and interaction developed by the Object Management Group (OMG)

The core elements of the CORBA standard are the Object Request Broker (ORB) service and the Internet Inter-ORB Protocol (IIOP) for component communication. A component user contacts an ORB server to locate a component and determine its capabilities and interface requirements. Messages sent between a component and its user are routed through the ORB, which performs any necessary translation services.
Object Request Broker (ORB)

a CORBA service that provides component directory and communication services
Internet Inter-ORB Protocol (IIOP)

a CORBA protocol for communication among objects and object request brokers
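
As a hedged illustration, the following sketch uses the org.omg.* CORBA bindings that shipped with older Java platforms to locate a component through a name service. The service name "GrammarService" is invented, and the code assumes a running ORB name service is available:

```java
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;

public class OrbLookup {
    public static void main(String[] args) throws Exception {
        // Initialize a connection to the ORB (configured via command-line args).
        ORB orb = ORB.init(args, null);

        // Ask the ORB for the standard naming service, then look up the
        // component by name. "GrammarService" is an invented example.
        org.omg.CORBA.Object ref = orb.resolve_initial_references("NameService");
        NamingContext naming = NamingContextHelper.narrow(ref);
        NameComponent[] name = { new NameComponent("GrammarService", "") };
        org.omg.CORBA.Object component = naming.resolve(name);

        // Messages sent to 'component' are now routed through the ORB
        // using IIOP, regardless of where the component actually runs.
    }
}
```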
COM+

The Component Object Model Plus (COM+) is a Microsoft-developed standard for component interoperability. It is widely implemented in Windows-based application software, and it is often used in three-tier distributed applications based on Microsoft Internet Information Server and Transaction Server. Most Windows office suites, such as Microsoft Office, are constructed as a cooperating set of COM+ components.
Component Object Model Plus (COM+)

a standard for software component connection and interaction developed by Microsoft

COM+ components are registered by individual computer systems within the Windows registry, which limits COM+ components to computer systems running Windows operating systems. Once components locate one another through the registry, they communicate directly using a network protocol or Windows interprocess communication facilities.
Enterprise JavaBeans

Java is an OO programming language developed by Sun Microsystems. Most people have heard of Java in connection with applets that execute on Web pages. Java differs from other OO programming languages in several important ways, including the following:

* Java programs are compiled into object code files that can execute on many hardware platforms under many operating systems.
* The Java language standard includes an extensive object framework, called the Java Development Kit (JDK), which includes classes for GUIs, database manipulation, and internetworking.

The JDK defines a number of classes, interfaces, and naming conventions that support component development. One interface enables a Java object to convert its internal state into a sequence of bytes that can be stored or transmitted across a network. Other classes allow components to enumerate a Java object's internal variables. Naming conventions allow components to deduce the names of the methods that manipulate those variables. An object of a class that implements all of the required component methods and follows the required naming conventions is called a JavaBean.
JavaBean

an object that implements the required component methods and follows the required naming conventions of the JavaBean standard
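
The following minimal sketch shows what those conventions look like in practice. The class name and property are invented; what matters is the Serializable marker interface (which lets the bean's state be converted to bytes), the public no-argument constructor, and the get/set naming pattern that lets tools discover the property by reflection:

```java
import java.io.Serializable;

public class AccountBean implements Serializable {
    private double balance;  // the bean's internal state stays private

    public AccountBean() { }  // beans need a public no-argument constructor

    // The getBalance/setBalance naming convention lets tools and other
    // components discover a read-write "balance" property by reflection.
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}
```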

An enterprise JavaBean (EJB) is a JavaBean that can execute on a server and communicate with clients and other components using CORBA. EJBs provide additional capabilities beyond JavaBeans including:

* Multicomponent transaction management
* Packaging of multiple components into larger run-time units
* Sophisticated object storage and retrieval in relational or object DBMSs
* Component and object access controls

enterprise JavaBean (EJB)

a JavaBean that can execute on a server and communicate with clients and other components using CORBA

The JavaBean and EJB standards have created new opportunities for software developers to use component-based technologies. Applications built on these standards can be easily deployed across a wide range of software and hardware platforms. The platform independence of JavaBean components makes applications built from them easier to port and to scale.
SOAP and .NET

Both CORBA and COM+ have some significant disadvantages for building distributed component-based software. For CORBA, the primary problem is the complexity of the standard and the need for ORB servers. For COM+, the primary problem is dependence on proprietary technology and limited support outside Microsoft products.

Simple Object Access Protocol (SOAP) is a standard for distributed object interaction that attempts to address the shortcomings of both CORBA and COM+. Unlike CORBA, SOAP has few infrastructure requirements, and its programming interface is relatively simple. SOAP is an open standard developed by the World Wide Web Consortium (W3C). Perhaps the best evidence of SOAP's long-term potential for success is that Microsoft has adopted it as the basis of its .NET distributed software platform.
Simple Object Access Protocol (SOAP)

a standard for component communication over the Internet using HTTP and XML

SOAP is based on existing Internet protocols, including the Hypertext Transfer Protocol (HTTP) and the eXtensible Markup Language (XML). Messages among objects are encoded in XML and transmitted using HTTP, which enables the objects to be located anywhere on the Internet. Figure 17-16 shows two components communicating with SOAP messages. The same transmission method supports server-to-client and component-to-component communication. The SOAP encoder/decoder and HTTP connection manager are standard components of a SOAP programmers' toolkit. Applications can also be implemented as embedded scripts that rely on a Web server to provide SOAP message-passing services.

Figure 17-16 Component communication using SOAP.

Although SOAP is a promising component-communication standard, it is still in its formative period. Current standards development activity is addressing many missing or underdeveloped parts of the initial standard, including security, message delivery guarantees, and specific rules for converting programming language and CPU data types to and from XML. SOAP is often considered a “lighter-weight” version of CORBA because it is simpler and requires little supporting infrastructure to enable component communication. However, the standard may “gain weight” as it evolves to include capabilities needed by high-availability, mission-critical applications.
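
As a hedged sketch of the mechanics, the following Java code uses the SOAP with Attachments API for Java (SAAJ) to build a SOAP request and send it over HTTP. The endpoint URL and the "CheckGrammar" operation are invented for illustration:

```java
import java.net.URL;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.Name;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPMessage;

public class SoapClient {
    public static void main(String[] args) throws Exception {
        // Build an empty SOAP message; SAAJ supplies the standard envelope.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPEnvelope envelope = request.getSOAPPart().getEnvelope();
        SOAPBody body = envelope.getBody();

        // The request content is ordinary XML inside the envelope's body.
        Name operation = envelope.createName("CheckGrammar", "g",
                "http://example.com/grammar");
        body.addBodyElement(operation).addTextNode("Text to check");

        // Send the message over HTTP and wait for the XML response.
        SOAPConnection connection =
                SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response =
                connection.call(request, new URL("http://example.com/grammar-service"));
        connection.close();
    }
}
```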

SOAP and XML have enabled a new era of component-based applications, commonly described by the phrase Web services. Simply put, a Web service is a component or entire application that communicates using SOAP. Because SOAP components communicate using XML, they can be easily incorporated into applications that use a Web-browser interface. Complex applications can be constructed using multiple SOAP components that communicate via the Internet. We are only now beginning to see the potential of such applications.
Web services

a component or entire application that communicates using SOAP
Components and the Development Life Cycle

Component purchase and reuse is a viable approach to speed completion of a system. Two development scenarios involve components:

* Purchased components can form all or part of a newly developed or reimplemented system.
* Components can be designed in-house and deployed in a newly developed or reimplemented system.

Each scenario has different implications for the SDLC, as explored in the following sections.
Purchased Components

Figure 17-17 shows activities added to SDLC phases when purchased components form part of a new system. Components change the project planning phase because they may alter the analyst's estimate of the project schedule and his or her evaluation of the project's financial and technical feasibility. Purchasing and using components is generally cheaper and takes much less time than building equivalent software. Purchased components may also solve technical problems that developers could not easily or inexpensively solve themselves.

Figure 17-17 Activities added to SDLC phases when components are purchased.

The search for suitable components must begin during the analysis phase, but it cannot begin until user requirements are understood well enough to evaluate their match to component capabilities. When developers purchase entire software packages, the match between component capabilities and user requirements is seldom exact. Thus, developers may need to refine user requirements based on the capabilities of available components, particularly if the development project has a short schedule.

Components operate within an extensive infrastructure based on standards such as CORBA or SOAP. Many system software packages implement key parts of each standard. Thus, choosing a component isn't simply a matter of choosing an application software module. Developers must also choose compatible hardware and system software to support components.

The reliance of purchased components on a particular infrastructure has several implications for SDLC activities, including:

* The standards and support software required by purchased components must become part of the technical requirements defined during the analysis phase.
* A component's technical support requirements restrict the options considered during architectural design.
* Hardware and system software that provide component services must be acquired, installed, and configured before testing begins during the implementation phase.
* The components and their support infrastructure must be maintained after system delivery.

Many development projects, particularly large ones, may use components from many different vendors, which raises compatibility issues. The component search and selection process must carefully consider compatibility—often eliminating some choices and altering the desirability of others. Preliminary testing activities may have to be added to the end of the analysis phase to verify component performance and compatibility before the architectural design is structured around those components and their support infrastructure. Support and maintenance are also more complicated because significant portions of the system are not under the direct control of the system owner or the in-house IS staff.
Internally Developed Components

System developers can also choose to develop their own components for systems that will be developed internally. Although building components is more costly and time-consuming than purchasing them, savings are realized later during the support phase, system upgrades, and other development projects that reuse the components. Component-based development of a new system also makes it easier to incorporate purchased components later.

In-house component development has far fewer impacts on systems analysis and architectural design than purchasing components from outside. Developers do not need to search for external components during the analysis phase, and their infrastructure requirements are not carried into architectural design. However, they still must choose a suitable component infrastructure, and that choice may influence other aspects of the system design, including the choice of hardware, operating systems, and database management systems.
Components and Object-Oriented Techniques

It is possible to develop component-based applications without using OO analysis, design, or development techniques, but it isn't recommended. A component is a distributed object that passes messages to (or calls methods of) other components. Since objects are the basis of component construction and interaction, the proper analysis and design tools and techniques are object-oriented. Traditional structured analysis and design techniques are poorly matched to component-based systems.

From the users' viewpoint, the requirements for a component-based system are no different from the requirements of a system implemented with more traditional technology. That is, the behavioral aspects of the system—such as data inputs, the user interface, and basic processing functions—are the same. Thus, analyzing and documenting user requirements for a component-based system are the same as for any other OO system.

Design phase issues for a newly developed component-based system are similar to those of any system developed with object-oriented techniques. A suitable class hierarchy must be defined, and the messages and methods must be fully specified. The techniques and models used to do this are the same as for any other OO analysis and design effort.
Designing Components for Reuse

Most of the advantages of component-based systems arise from their reusability. Thus, object (component) reuse deserves extra attention in the analysis and design phases of the SDLC. Component reuse also has significant implications for maintaining information systems.

OO analysis and design concentrate on developing generally useful objects by modeling problem domain objects and their behavior. To the extent the problem domain is accurately modeled, the developed objects can be reused in other systems and across programs within a system. A component-based system exploits object reuse by sharing executable objects among many applications.

When a system reuses objects by sharing source code, a perfect fit between an old object and a new system is not required. The designers of the new system are free to modify existing objects as needed to suit new requirements. By using inheritance appropriately and overriding existing methods, designers can make such modifications relatively quickly, with maximum reuse of existing code.

In contrast, modifying an existing component to accommodate the requirements of a new system is much more complex. An existing component has already been installed and is in use as part of one or more information systems. Any change to the component for use in a new system may require:

* Modifying the existing component
* Developing a new component with many redundant functions
* Developing a new component that encapsulates the existing component and adds new functions

Modifying an existing component can be risky because it may break existing applications. The modified component may have subtle differences that cause errors in older applications. Thorough testing can determine whether such problems exist. But the effort required to test older systems can be substantial.

Both CORBA and COM+ support multiple component versions. Thus, two components can use the same name but a different version number. Older applications can use the older version, and new applications can use the newer version. This ensures that each application uses the component it was originally designed to use. It also eliminates the possibility of introducing new bugs into old applications.

But component versioning has its own set of problems. Developers have more components to keep track of, and redundancy among their functions can create maintenance problems. If a redundant function has a bug, then a developer must modify multiple components to fix the problem. One way around this problem is to design a new component version to call methods in the older version. The new component thus uses (wraps) the old one and adds new or modified functions. However, this approach creates a complex dependency chain among component versions, which can be difficult to manage if there are many versions. Long chains can also be inefficient because of excessive message passing among versions.
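
The following minimal sketch shows the wrapping approach in Java; the class and method names are invented. Note that every call to the unchanged operation pays an extra delegation hop, which is the source of the message-passing overhead just mentioned:

```java
// Version 1 is already deployed and used by existing applications.
class GrammarCheckerV1 {
    String[] check(String text) {
        return new String[0];  // the original checking logic
    }
}

// Version 2 wraps version 1: unchanged operations are delegated,
// so the checking logic is maintained in only one place.
class GrammarCheckerV2 {
    private final GrammarCheckerV1 oldVersion = new GrammarCheckerV1();

    String[] check(String text) {
        return oldVersion.check(text);  // extra delegation hop per call
    }

    String[] checkStyle(String text) {
        return new String[0];  // a new function added only in version 2
    }
}
```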

The best long-term strategy is usually to modify or wrap existing components, thus avoiding the problem of redundant maintenance. Old systems can be tested with the new components and converted to use them if testing succeeds. Applications that do not function can be forced to use older components until they (or the new components) are repaired. Once all applications using the old component have been repaired, the old component can be deleted or merged with the newer version.

Designers are sometimes tempted to bypass future maintenance problems by designing components with forward-looking features. They try to guess what features not currently needed might be useful in future applications. They then add these features in the hope that future applications will be able to use existing components without modification.

Designing for the future may sound like a good strategy, but it introduces several problems, including:

* Inaccuracy and waste. It's difficult to guess what component behavior will be needed in future applications. If the designer guesses wrong, then the extra design and implementation effort is wasted.
* Cost. The extra effort adds cost to the current project and lengthens its schedule.

Component designers must walk a fine line between developing components that are general enough to be reused in the future but specific enough to be useful and efficient in a current application.
System Performance

Component-based software is usually deployed in a distributed environment. Components typically are scattered among client and server machines and among local area network (LAN) and wide area network (WAN) locations. Distributing components across machines and networks raises the issue of performance. System performance depends on the location of the components (that is, component topology), the hardware capacity of the computers on which they reside, and the communication capacity of the networks that connect computers. Performance also depends on demands on network and server capacity made by other applications and communication traffic, such as telephone, video, and interactions among traditional clients and servers.

The details of analyzing and fine-tuning computer and network performance are well beyond the scope of this text. But anyone planning to build and deploy a distributed component-based system should be aware of the performance issues. These issues must be carefully considered during systems design, implementation, and support.

Steps developers should take to ensure adequate performance include the following:

* Examine component-based designs to estimate network traffic patterns and demands on computer hardware.
* Examine existing server capacity and network infrastructure to determine their ability to accommodate communication among components.
* Upgrade network and server capacity prior to development and testing.
* Test system performance during development and make any necessary adjustments.
* Continuously monitor system performance after installation to detect emerging problems.
* Redeploy components, upgrade server capacity, and upgrade network capacity to reflect changing conditions.

Implementing these steps requires a thorough understanding of computer and network technology as well as detailed knowledge of existing applications, communications needs, and infrastructure capability and configuration. Applying this knowledge to real-world problems is a complex task typically performed by highly trained specialists.
Summary

Rapid application development (RAD) is a broad term that covers a variety of tools, techniques, and development approaches with the goal of speeding application development. RAD techniques include risk management, joint application design, tool-based design, and software reuse. RAD approaches include prototyping, eXtreme Programming, spiral development, the Unified Process, and (under the right circumstances) traditional development life cycles. RAD tools include object frameworks and components and their supporting infrastructure.

No RAD tool, technique, or development approach speeds development for all projects. Developers must carefully examine project characteristics to determine which RAD concepts are most likely to speed development. Some parts of RAD (such as risk management and software reuse) will speed development for most projects. Other parts (such as tool-based design and spiral development) are applicable to far fewer projects.

Software reuse is a fundamental approach to rapid development. It has a long history, although it has been applied with greater success since the advent of object-oriented programming, object frameworks, and component-based design and development. Object frameworks provide a means of reusing existing software through inheritance. They provide a library of reusable source code, and inheritance provides a means of quickly adapting that code to new application requirements and operating environments.
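
To make the inheritance mechanism concrete, here is a minimal sketch of framework-style reuse. The ReportGenerator foundation class and its subclass are hypothetical names invented for this example; the pattern they show is the general one, in which the framework supplies the overall algorithm and the application developer overrides only the steps the current application needs.

```java
// A minimal sketch of framework reuse through inheritance.
// ReportGenerator stands in for a hypothetical foundation class;
// the subclass adapts it to one application's requirements.
abstract class ReportGenerator {
    public final void run() {                // fixed overall algorithm supplied by the framework
        openOutput();
        writeBody();                         // application-specific step
        closeOutput();
    }
    protected void openOutput()  { System.out.println("--- report start ---"); }
    protected void closeOutput() { System.out.println("--- report end ---"); }
    protected abstract void writeBody();     // each application must supply this
}

class OverdueAccountsReport extends ReportGenerator {
    @Override
    protected void writeBody() {             // the only new code the developer writes
        System.out.println("Account 1042 is 30 days overdue.");
    }
}
// usage: new OverdueAccountsReport().run();
```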

Components are units of reusable executable code that behave as distributed objects. They are plugged into existing applications or combined to make new applications. Like the concept of software reuse, component-based design and implementation are not new, but the standards and infrastructure required to support component-based applications have only recently emerged. Thus, components are only now entering the mainstream of software development techniques.
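
The plug-in character of components can be suggested with a small sketch. The names below are hypothetical, and a real component would be packaged and invoked through one of the interoperability standards discussed earlier (CORBA, COM+, EJB, or SOAP-based Web services) rather than through a plain Java interface; the point here is only that clients bind to a published interface, so one component can replace another without any change to client code.

```java
// A minimal sketch of component-style substitution: the client depends
// only on a published interface, so any implementation that honors the
// interface can be plugged in. All names are hypothetical.
interface TaxCalculator {
    double taxOn(double amount);             // the published interface
}

class StandardTaxComponent implements TaxCalculator {
    public double taxOn(double amount) { return amount * 0.07; }
}

class CheckoutClient {
    private final TaxCalculator tax;         // client holds only the interface type
    CheckoutClient(TaxCalculator tax) { this.tax = tax; }
    double totalFor(double subtotal) { return subtotal + tax.taxOn(subtotal); }
}

public class ComponentDemo {
    public static void main(String[] args) {
        // Swapping in a different TaxCalculator implementation requires
        // no change to CheckoutClient.
        CheckoutClient client = new CheckoutClient(new StandardTaxComponent());
        System.out.println(client.totalFor(100.0));  // prints 107.0
    }
}
```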
Key Terms

* code reuse
* Common Object Request Broker Architecture (CORBA)
* component
* Component Object Model Plus (COM+)
* construction
* developmental prototype
* discipline
* discovery prototype
* elaboration
* Enterprise JavaBean (EJB)
* eXtreme Programming (XP)
* foundation classes
* inception
* Internet Inter-ORB Protocol (IIOP)
* JavaBean
* object framework
* Object Request Broker (ORB)
* pair programming
* prototyping development approach
* rapid application development (RAD)
* risk
* risk management
* risk mitigation
* Simple Object Access Protocol (SOAP)
* software reuse
* spiral development approach
* system metaphor
* timeboxing
* tool-based development
* transition
* Unified Process (UP)
* user stories
* Web services

Review Questions

1. During what life cycle phase is it least expensive to implement a requirements change? During which phase is it most expensive?
2. Is RAD a single approach or technique? Why or why not?
3. What factors determine the fastest development approach for a specific project?
4. Under what condition(s) is the sequential development approach likely to be faster than alternative development approaches?
5. Under what condition(s) is the spiral development approach likely to be faster than alternative development approaches?
6. Under what condition(s) is the UP likely to be faster than alternative development approaches?
7. How does the process (composition and order of phases and activities) of prototype-based system development differ from a sequential development approach? How does it differ from a spiral development approach? How does it differ from the UP?
8. How does the process (composition and order of phases and activities) of spiral system development differ from a sequential development approach?
9. What are the common characteristics of the prototyping, spiral, UP, and XP development approaches? What are their differences?
10. Which approach to system development (conventional, prototyping, spiral, UP, or XP) is likely to result in the shortest possible development time when user requirements or technical feasibility are poorly understood at the start of the project? Why?
11. What development tool characteristics are required for successful use of the prototyping, spiral, UP, and XP development approaches?
12. Define the following terms and phrases: risk, risk management, and risk mitigation.
13. With which development approaches should risk management be used?
14. How should JAD be incorporated into the prototyping or spiral approaches to software development?
15. Describe tool-based development. With which development approaches (conventional, prototyping, spiral, XP, or UP) can it be used?
16. What is an object framework? How is it different from a library of components?
17. What are the differences between general-purpose and application-specific foundation classes?
18. For which layers of an OO program are off-the-shelf components most likely to be available?
19. What is a software component?
20. Why have software components only recently come into widespread use?
21. In what ways do components make software construction and maintenance faster?
22. Describe four interoperability standards for software components. Compare and contrast the standards.

Thinking Critically

1. Consider the capabilities of the programming language and development tools used in your most recent programming or software development class. Are they powerful enough to implement developmental prototypes for single-user software on a personal computer? Are they sufficiently powerful to implement developmental prototypes in a multiuser, distributed, database-oriented, and high-security operating environment? If they were being used with a tool-based development approach, what types of user requirements might be sacrificed because they didn't fit language or tool capabilities?
2. Consider XP's team-based programming approach in general and its principle of allowing any programmer to modify any code at any time in particular. No other development approach or programming management technique follows this particular principle. Why not? In other words, what are the possible negative implications of this principle? How does XP minimize these negative implications?
3. The Object Data Management Group (ODMG) was briefly described in Chapter 10. Visit the Web sites of the OMG (www.omg.org) and ODMG (www.odmg.org) and gather information to answer the following questions. What is the goal or purpose of each standard? What overlap, if any, exists between the two standards? Are the standards complementary?
4. Read the article by Scott Lewandowski listed in the “Further Resources” section. Compare and contrast the CORBA and Microsoft COM+ approaches to component-based development. (Note: COM+ is called DCOM in the article.) Which approach appears better positioned to deliver on the promises of component-based design and development? Which approach is a true implementation of distributed objects? Which approach is likely to dominate the market in the near future?
5. Visit the Web site of the World Wide Web Consortium (www.w3.org) and review recent developments related to the SOAP standard. What new capabilities have been added, and what is the effect of those capabilities on the standard's complexity and infrastructure requirements?
6. Compare and contrast object frameworks and components in terms of ease of modification before system installation, ease of modification after system installation, and overall cost savings from code reuse. Which approach is likely to yield greater benefits for a unique application system (such as a distribution management system that is highly specialized to a particular company)? Which approach is likely to yield greater benefits for general-purpose application software (such as a spreadsheet or virus protection program)?
7. Assume that a project development team has identified risks, outcomes, and probabilities for a new Web-based insurance-pricing system, as summarized in the table below. Compute the expected schedule delay for each risk and for the entire project. Which risks should be actively managed? What mitigation measures might be appropriate? Why do the outcome probabilities for some risks sum to more than 100 percent?
8. Consider the similarities and differences between component-based design and the construction of computer hardware (such as personal computers) and the design and construction of computer software. Can the “plug-compatible” nature of computer hardware ever be achieved with computer software? Does your answer depend on the type of software (for example, system or application software)? Do differences in the expected lifetime of computer hardware and software affect the applicability or desirability of component-based techniques?

[Table: identified risks, outcomes, and probabilities for the Web-based insurance-pricing system referenced in Thinking Critically question 7]
Experiential Exercises

1. Talk with someone at your school or place of employment about a recent development project that was canceled because of slow development. What development approach was employed for the project? Would a different development approach have resulted in faster development?
2. Talk with someone at your school or place of employment about a recent development project that failed to satisfy user or technical requirements. Review the reasons for failure and the risk management processes that were used. What changes (if any) in the risk management process might have prevented the failure or mitigated its effects?
3. Consider a project to replace the student advising system at your school with one that employs modern features (for example, Web-based interfaces, instant reports of degree program progress, and automatic course registration based on a long-term degree plan). Now consider how such a project would be implemented using tool-based development. Investigate alternative tools such as Visual Studio, PowerBuilder, and Oracle Forms, and determine (for each tool) what requirements would need to be compromised for the sake of development speed if the tool were chosen.
4. Examine the capabilities of a modern programming environment such as Microsoft Visual Studio .NET, IBM WebSphere Studio, or Borland Enterprise Studio. Is an object framework or component library provided? Does successful use of the programming environment require a specific development approach? Does successful use require a specific development methodology?
5. Examine the technical description of a complex end-user software package such as Microsoft Office. In what ways was component-based software development used to build the software?

Case Studies
Midwestern Power Services

Midwestern Power Services (MPS) provides natural gas and electricity to customers in four Midwestern states. Like most power utilities, MPS is facing the prospect of significant federal and state deregulation over the next several years. Federal deregulation has opened the floodgates of change but provided little guidance or restriction on the future shape of the industry. State legislatures in two of the states MPS serves have already begun deliberations about deregulation, and the other two states are expected to follow shortly.

The primary features of the proposed state-level deregulation are:

* Separating the supply and distribution portions of the current regulated power utility business
* Allowing customers to choose alternate suppliers regardless of what company actually distributes electricity or natural gas

The deregulation proposals seek to increase competition in electricity and natural gas by regulating only distribution. Natural gas extraction and electricity generation would be unregulated, and consumers would have a choice of wholesale suppliers. The final form of deregulation is unknown, and its exact details will probably vary from state to state.

MPS wants to get a head start on preparing its systems for deregulation. Three systems are most directly affected—one for purchasing wholesale natural gas, one for purchasing wholesale electricity, and one for billing customers for combined gas and electric services. The billing system is not currently structured to separate supply and distribution charges, and it has no direct ties to the natural gas and electricity purchasing systems. MPS's general ledger accounting system is also affected because it is used to account for MPS's own electricity-generating operations.

MPS plans to restructure its accounting, purchasing, and billing systems to match the proposed deregulation framework:

* Customer billing statements will clearly distinguish between charges for the supply and distribution of both gas and electricity. The wholesale suppliers of each power commodity will determine prices for supply. Revenues will be allocated to appropriate companies (such as distribution charges to MPS and supply charges to wholesale providers).
* MPS will create a new payment system for wholesale suppliers to capture per-customer revenues and to generate payments from MPS to wholesale suppliers. Daily payments will be made electronically based on actual payments by customers.
* MPS will restructure its own electricity-generating operations into a separate profit center, similar to other wholesale power providers. Revenues from customers who choose MPS as their electricity supplier will be matched to generation costs.

MPS's current systems were all developed internally. The general ledger accounting and natural-gas purchasing systems are mainframe based. They were developed in the mid-1980s, and incremental changes have been made ever since. All programs are written in COBOL, and DB2 (a relational DBMS) is used for data storage and management. There are approximately 50,000 lines of COBOL code.

The billing system was rewritten from the ground up in the mid-1990s and has been slightly modified since that time. The system runs on a cluster of servers using the UNIX operating system. The latest version of Oracle (a relational DBMS) is used for data storage and management. Most of the programs are written in C++, although some are written in C and others use Oracle Forms. There are approximately 80,000 lines of C and C++ code.

MPS has a network that is used primarily to support terminal-to-host communications, Internet access, and printer and file sharing for microcomputers. The billing system relies on the network for communication among servers in the cluster. The mainframe that supports the accounting and purchasing systems is connected to the network, although that connection is primarily used to back up data files and software to a remote location. The company has experimented with Web-browser interfaces for telephone customer support and on-line statements. However, no functioning Web-based systems have been completed or installed.

MPS is currently in the early stages of planning the system upgrades. It has not yet committed to specific technologies or development approaches. MPS has also not yet decided whether to upgrade individual systems or replace them entirely. The target date for completing all system modifications is three years from now, but the company is actively seeking ways to shorten that schedule.

1. Describe the pros and cons of the traditional, prototyping, spiral, UP, and XP development approaches to upgrading the existing systems or developing new ones. Do the pros and cons change if the systems are replaced instead of upgraded? Do the pros and cons vary by system? If so, should different development approaches be used for each system?
2. Is tool-based development a viable development approach for any of the systems? If so, identify the system(s) and suggest tools that might be appropriate. For each tool suggested, identify the types of requirements likely to be sacrificed because of a poor match to tool capabilities.
3. Assume that all systems will be replaced with custom-developed software. Will an object framework be valuable for implementing the replacements? Is an application-specific framework likely to be available from a third party? Why or why not?
4. Assume that all systems will be replaced with custom-developed software. Should MPS actively pursue component-based design and development? Why or why not? Does MPS have sufficient skills and infrastructure to implement a component-based system? If not, what skills and infrastructure are lacking?
5. List the risks and risk outcomes that may affect the planned upgrade or replacement. Which risks require active management? What mitigation measures should be pursued?

Rethinking Rocky Mountain Outfitters

Now that you've studied most of the material in this textbook, you'll be able to make more informed and in-depth choices regarding development approach and techniques for the RMO customer support system (CSS). Review the CSS development project charter in Figure 3-4, the detailed scope description in Figure 3-7, the RMO memo at the beginning of Chapter 2, and the “Rethinking RMO” case at the end of Chapter 2. You may also need to look at other RMO material from Chapters 2 and 3 to answer the following questions:

1. Consider the criteria discussed in this chapter for choosing among the traditional, prototyping, and spiral approaches to system development. Which CSS project characteristics favor traditional development? Which favor spiral development? Which approach is best suited to the CSS development project?
2. RMO had no experience developing Web-based systems prior to undertaking the CSS development project. What are the implications of RMO's inexperience for risk management within the CSS development project? What types of risks should system developers actively mitigate during design and implementation?
3. Should tool-based development or joint application design be used in the CSS development project? Why or why not?
4. Should RMO consider using purchased components within the new CSS? If so, when should it begin looking for components? How will a decision to use components affect the analysis, design, and implementation phases? If purchased components are used, should the portions of the system developed in-house also be structured as components? Will a decision to pursue component-based design and development make it necessary to adopt OO analysis and design methods?

Focusing on Reliable Pharmaceutical Service

Reread the Reliable Pharmaceutical Service cases in Chapters 8 and 9. Armed with the new knowledge that you've gained from reading this chapter, revisit the questions posed in the last paragraph of the Chapter 9 case:

1. Which of the development approaches described in this chapter seem best suited to the project? Why? Plan the first six weeks of the project under your chosen development approach.
2. What role will components play in the system being developed for Reliable? Does it matter on which component-related standards they're based? Why or why not?

Further Resources

* Kent Beck, Extreme Programming Explained: Embrace Change. Addison-Wesley, 1999.
* Barry W. Boehm, “A Spiral Model of Software Development and Enhancement,” Computer, May 1988, 61-72.
* Frederick P. Brooks, Jr., “No Silver Bullet: Essence and Accidents of Software Engineering,” Computer, April 1987, 10-19.
* Cetus Team, “Links on Objects & Components,” http://www.cetus-links.org.
* Craig Larman, Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and the Unified Process (2nd ed.). Prentice-Hall, 2002.
* Scott M. Lewandowski, “Frameworks for Component-Based Client/Server Computing,” ACM Computing Surveys, Volume 30: 1, March 1998, 3-27.
* Steve McConnell, Rapid Development: Taming Wild Software Schedules. Microsoft Press, 1996.
* Object Data Management Group (developer of an object database management standard) home page, http://www.odmg.org.
* Robert Orfali, Dan Harkey, and Jeri Edwards, Client/Server Survival Guide (3rd ed.). John Wiley & Sons, 1999.
* Jawed Siddiqi, “An Exposition of XP But No Position on XP,” IEEE Computer Society, http://computer.org/seweb/dynabook/Index.htm.
* Software Productivity Consortium (www.software.org), Component Evaluation Process. SPC-98091-CMC, May 1999.
* Steve Sparks, Kevin Benner, and Chris Faris, “Managing Object-Oriented Framework Reuse,” Computer, Volume 29: 9, September 1996, 53-61.
* Jane Wood and Denise Silver, Joint Application Development (2nd ed.). John Wiley & Sons, 1995.
* XProgramming.com, http://www.xprogramming.com.
