
Software Development Process

A software development process, also known as a software development life cycle (SDLC), is a structure imposed on the development of a software product. It is often considered a subset of the systems development life cycle. There are several models for such processes, each describing approaches to the variety of tasks or activities that take place during the process. Some people consider a lifecycle model the more general term and a software development process the more specific term. For example, there are many specific software development processes that 'fit' the spiral lifecycle model. ISO 12207 is an ISO standard for software lifecycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

A large and growing number of software development organizations implement process methodologies. A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking. Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.

Software development activities

The activities of the software development process represented in the waterfall model. There are several other models to represent this process.

Planning

An important task in creating a software product is extracting the requirements, or requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not what the software should do. Incomplete, ambiguous, or even contradictory requirements are recognized by skilled and experienced software engineers at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect. Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document, so that if there are ever disputes, any ambiguity about what was promised to the client can be clarified.

Implementation, testing and documenting

Implementation is the part of the process where software engineers actually program the code for the project. Software testing is an integral and important phase of the software development process. This part of the process ensures that defects are recognized as soon as possible. Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the writing of an API, be it external or internal. It is very important to document everything in the project.

Deployment and maintenance

Deployment starts after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment. Software training and support are important, and many developers fail to realize that. It would not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into unfamiliar areas, so as part of the deployment phase it is very important to have training classes for new clients of your software. Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. It may be necessary to add code that does not fit the original design in order to correct an unforeseen problem, or a customer may request more functionality, and code can be added to accommodate their requests. If the labor cost of the maintenance phase exceeds 25% of the prior phases' labor cost, then it is likely that the overall quality of at least one prior phase is poor.[citation needed] In that case, management should consider the option of rebuilding the system (or portions of it) before maintenance costs get out of control.
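As a minimal sketch of this maintenance-cost heuristic (the 25% threshold comes from the text above; the phase breakdown and labor figures are invented for illustration):

```python
# Sketch of the maintenance-cost heuristic described above.
# The labor figures are invented for illustration.

prior_phase_cost = {"planning": 40, "implementation": 120, "testing": 60}  # person-days
maintenance_cost = 70  # person-days spent on maintenance so far

ratio = maintenance_cost / sum(prior_phase_cost.values())
print(f"maintenance/prior-phase labor ratio: {ratio:.0%}")

if ratio > 0.25:
    # Per the heuristic, the quality of at least one prior phase is likely
    # poor; management should consider rebuilding the affected parts.
    print("Heuristic exceeded: consider rebuilding parts of the system.")
```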

Waterfall Model
The waterfall model is a sequential design process, often used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation and Maintenance. The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development. The first known presentation describing the use of similar phases in software engineering was given by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956.[1] This presentation was about the development of software for SAGE. In 1983 the paper was republished[2] with a foreword by Benington pointing out that the process was not in fact performed in a strict top-down fashion, but depended on a prototype. The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce,[3] though Royce did not use the term "waterfall" in that article. Royce presented this model as an example of a flawed, non-working model (Royce 1970). This, in fact, is how the term is generally used in writing about software development: to describe a critical view of a commonly used software practice.

The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.

Requirements analysis is the first stage in the systems engineering process and software development process. Requirements analysis, in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. Requirements analysis is critical to the success of a development project.[2] Requirements must be documented, actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Requirements can be architectural, structural, behavioral, functional, and non-functional.

Software design is a process of problem solving and planning for a software solution. After the purpose and specifications of software are determined, software developers will design or employ designers to develop a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view. The software requirements analysis (SRA) step of a software development process yields specifications that are used in software engineering. If the software is "semiautomated" or user centered, software design may involve user experience design, yielding a storyboard to help determine those specifications. If the software is completely automated (meaning no user or user interface), a software design may be as simple as a flow chart or text describing a planned sequence of events. There are also semi-standard methods like the Unified Modeling Language and Fundamental Modeling Concepts. In either case, some documentation of the plan is usually the product of the design. A software design may be platform-independent or platform-specific, depending on the availability of the technology called for by the design. Software design can be considered as formulating a solution to the problem at hand using the available capabilities. Hence the main difference between software analysis and design is that the output of analysis consists of smaller problems to solve, and that output should not vary much even if the analysis is conducted by different team members or entirely different groups. But since design depends on capabilities, there can be different designs for the same problem depending on the capabilities of the environment that will host the solution (whether it is some OS, web, mobile, or even the new cloud computing paradigm). The solution also depends on the development environment used (whether you build a solution from scratch, use reliable frameworks, or at least implement some suitable design patterns).

Implementation: Computer programming (often shortened to programming or coding) is the process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in a programming language. The purpose of programming is to create a program that exhibits a certain desired behavior. The process of writing source code often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects). Software testing can be stated as the process of validating and verifying that a software program/application/product:

1. meets the requirements that guided its design and development;
2. works as expected; and
3. can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed. Testing can never completely identify all the defects within software.[2] Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria. Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment.
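As a hedged illustration of checking against an oracle (a sketch only; the discount functions and the tolerance are invented, with a trusted previous release serving as the oracle):

```python
# Minimal sketch of oracle-based testing: the behavior of the product under
# test is compared against an oracle, here a trusted previous version of the
# same function. Both implementations are hypothetical stand-ins.

def discount_v1(price: float, percent: float) -> float:
    # Previous release: serves as the oracle.
    return price - price * percent / 100.0

def discount_v2(price: float, percent: float) -> float:
    # New implementation under test.
    return price * (1.0 - percent / 100.0)

def test_against_oracle() -> None:
    for price in (0.0, 9.99, 100.0):
        for percent in (0.0, 15.0, 100.0):
            expected = discount_v1(price, percent)  # the oracle's verdict
            actual = discount_v2(price, percent)
            assert abs(actual - expected) < 1e-9, (price, percent)

test_against_oracle()
print("new version agrees with the previous-version oracle")
```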
Software deployment is all of the activities that make a software system available for use. The general deployment process consists of several interrelated activities with possible transitions between them. These activities can occur at the producer site, at the consumer site, or both. Because every software system is unique, the precise processes or procedures within each activity can hardly be defined. Therefore, "deployment" should be interpreted as a general process that has to be customized according to specific requirements or characteristics. A brief description of each activity is presented below.

Deployment activities
Release
The release activity follows from the completed development process. It includes all the operations needed to prepare a system for assembly and transfer to the customer site. Therefore, it must determine the resources required to operate at the customer site and collect information for carrying out subsequent activities of the deployment process.

Install and activate


Installation makes the software ready for use at the customer site, typically by putting the files in place and establishing some form of command, shortcut, or service for executing them. Activation is the activity of starting up the executable component of the software. For a simple system, it involves establishing some form of command for execution. For complex systems, it should make all the supporting systems ready to use. In larger software deployments, the working copy of the software might be installed on a production server in a production environment. Other versions of the deployed software may be installed in a test environment, a development environment, and a disaster recovery environment.

Deactivate
Deactivation is the inverse of activation, and refers to shutting down any executing components of a system. Deactivation is often required to perform other deployment activities, e.g., a software system may need to be deactivated before an update can be performed. The practice of removing infrequently used or obsolete systems from service is often referred to as application retirement or application decommissioning.

Adapt
The adaptation activity is also a process to modify a software system that has been previously installed. It differs from updating in that adaptations are initiated by local events, such as a change in the environment at the customer site, while updates are mostly initiated by the remote software producer.

Update
The update process replaces an earlier version of all or part of a software system with a newer release.

Built-In
Mechanisms for installing updates are built into some software systems. Automation of these update processes ranges from fully automatic to user initiated and controlled. Norton Internet Security is an example of a system with a semi-automatic method for retrieving and installing updates to both the antivirus definitions and other components of the system. Other software products provide query mechanisms for determining when updates are available.
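A hedged sketch of such a query mechanism follows (the installed version, the fetch stub, and the prompt are all invented for illustration; no specific product's interface is implied):

```python
# Sketch of a semi-automatic update mechanism: query a (hypothetical)
# server for the latest version, compare it with the installed one, and
# let the user initiate the install.

INSTALLED_VERSION = (2, 4, 1)

def fetch_latest_version() -> tuple:
    # Stand-in for a network query, e.g. GET https://updates.example.com/latest
    return (2, 5, 0)

def install(version: tuple) -> None:
    print(f"Downloading and installing {'.'.join(map(str, version))} ...")

def check_for_updates() -> None:
    latest = fetch_latest_version()
    if latest > INSTALLED_VERSION:  # tuples compare element-wise
        answer = input(f"Version {'.'.join(map(str, latest))} is available. Install? [y/N] ")
        if answer.strip().lower() == "y":
            install(latest)  # user-initiated, not fully automatic
    else:
        print("Already up to date.")

check_for_updates()
```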

Version tracking
Version tracking systems help the user find and install updates to software systems installed on PCs and local networks.

Web-based version tracking systems notify the user when updates are available for software systems installed on a local system. For example, VersionTracker Pro checks software versions on a user's computer and then queries its database to see if any updates are available. Local version tracking systems notify the user when updates are available for software systems installed on a local system. For example, Software Catalog stores version and other information for each software package installed on a local system; one click of a button launches a browser window to the upgrade web page for the application, including auto-filling of the user name and password for sites that require a login. Browser-based version tracking systems notify the user when updates are available for software packages installed on a local system. For example, wfx-Versions is a Firefox extension which helps the user find the current version number of any program listed on the web.

Uninstall
Uninstallation is the inverse of installation. It is the removal of a system that is no longer required. It also involves some reconfiguration of other software systems in order to remove the uninstalled system's files and dependencies.

Retire
Ultimately, a software system is marked as obsolete and support by the producer is withdrawn. This is the end of the life cycle of a software product.

Prototype
A prototype is an early sample or model built to test a concept or process or to act as a thing to be replicated or learned from. In semantics, prototypes or proto-instances combine the most representative attributes of a category. Prototypes are typical instances of a category that serve as benchmarks against which surrounding, less representative instances are compared.

In many fields, there is great uncertainty as to whether a new design will actually do what is desired. New designs often have unexpected problems. A prototype is often used as part of the product design process to allow engineers and designers the ability to explore design alternatives, test theories, and confirm performance prior to starting production of a new product. Engineers use their experience to tailor the prototype according to the specific unknowns still present in the intended design. For example, some prototypes are used to confirm and verify consumer interest in a proposed design, whereas other prototypes will attempt to verify the performance or suitability of a specific design approach.

In general, an iterative series of prototypes will be designed, constructed, and tested as the final design emerges and is prepared for production. With rare exceptions, multiple iterations of prototypes are used to progressively refine the design. A common strategy is to design, test, evaluate, and then modify the design based on analysis of the prototype. In many products it is common to assign the prototype iterations Greek letters. For example, a first-iteration prototype may be called an "Alpha" prototype. Often this iteration is not expected to perform as intended and some amount of failures or issues are anticipated. Subsequent prototyping iterations (Beta, Gamma, etc.) will be expected to resolve issues and perform closer to the final production intent. In many product development organizations, prototyping specialists are employed: individuals with specialized skills and training in general fabrication techniques who can help bridge between theoretical designs and the fabrication of prototypes.

Basic prototype categories

There is no general agreement on what constitutes a "prototype", and the word is often used interchangeably with the word "model", which can cause confusion. In general, "prototypes" fall into five basic categories:

1. Proof-of-Principle Prototype (Model) (in electronics sometimes built on a breadboard). A proof-of-concept prototype is used to test some aspect of the intended design without attempting to exactly simulate the visual appearance, choice of materials, or intended manufacturing process. Such prototypes can be used to "prove" out a potential design approach such as range of motion, mechanics, sensors, architecture, etc. These types of models are often used to identify which design options will not work, or where further development and testing is necessary.

2. Form Study Prototype (Model). This type of prototype allows designers to explore the basic size, look, and feel of a product without simulating the actual function or exact visual appearance of the product. It can help assess ergonomic factors and provide insight into visual aspects of the product's final form. Form study prototypes are often hand-carved or machined models made from easily sculpted, inexpensive materials (e.g., urethane foam), without representing the intended color, finish, or texture. Due to the materials used, these models are intended for internal decision making and are generally not durable enough or suitable for use by representative users or consumers.

3. User Experience Prototype (Model). A user experience model invites active human interaction and is primarily used to support user-focused research. While intentionally not addressing possible aesthetic treatments, this type of model does more accurately represent the overall size, proportions, interfaces, and articulation of a promising concept. It allows early assessment of how a potential user interacts with the various elements, motions, and actions of a concept, which define the initial use scenario and overall user experience. As these models are fully intended to be used and handled, more robust construction is key. Materials typically include plywood, REN shape, RP processes, and CNC-machined components. Construction of user experience models is typically driven by preliminary CAID/CAD, which may be constructed from scratch or with methods such as industrial CT scanning.

4. Visual Prototype (Model). This type captures the intended design aesthetic and simulates the appearance, color, and surface textures of the intended product but does not actually embody the function(s) of the final product. These models are suitable for use in market research, executive reviews and approval, packaging mock-ups, and photo shoots for sales literature.

5. Functional Prototype (Model) (also called a working prototype). This type attempts, to the greatest extent practical, to simulate the final design, aesthetics, materials, and functionality of the intended design. The functional prototype may be reduced in size (scaled down) in order to reduce costs. The construction of a fully working full-scale prototype, the ultimate test of concept, is the engineers' final check for design flaws and allows last-minute improvements to be made before larger production runs are ordered.

Characteristics and limitations of prototypes

Engineers and prototyping specialists seek to understand the limitations of prototypes in exactly simulating the characteristics of their intended design. A degree of skill and experience is necessary to effectively use prototyping as a design verification tool. It is important to realize that, by their very definition, prototypes represent some compromise from the final production design. Due to differences in materials, processes, and design fidelity, it is possible that a prototype may fail to perform acceptably where the production design would have been sound. A counter-intuitive idea is that a prototype may actually perform acceptably whereas the production design is flawed, since prototyping materials and processes may occasionally outperform their production counterparts. In general, it can be expected that individual prototype costs will be substantially greater than the final production costs due to inefficiencies in materials and processes. Prototypes are also used to revise the design for the purposes of reducing costs through optimization and refinement.

It is possible to use prototype testing to reduce the risk that a design may not perform acceptably; however, prototypes generally cannot eliminate all risk. There are pragmatic and practical limitations to the ability of a prototype to match the intended final performance of the product, and some allowances and engineering judgement are often required before moving forward with a production design. Building the full design is often expensive and can be time-consuming, especially when repeated several times: building the full design, figuring out what the problems are and how to solve them, then building another full design. As an alternative, "rapid prototyping" or "rapid application development" techniques are used for initial prototypes, which implement part, but not all, of the complete design. This allows designers and manufacturers to rapidly and inexpensively test the parts of the design that are most likely to have problems, solve those problems, and then build the full design.

Spiral Model
The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.

Spiral model (Boehm, 1986). This should not be confused with the helical model of modern systems architecture, which uses a dynamic programming approach (in the mathematical, not the software, sense of "programming") to optimise a system's architecture before coders make design decisions that would cause problems.

History

The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement".[1] This model was not the first to discuss iterative development. As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.

The model

The spiral model combines the idea of iterative development (prototyping) with the systematic, controlled aspects of the waterfall model. It allows for incremental releases of the product, or incremental refinement, through each pass around the spiral. The spiral model also explicitly includes risk management within software development. Identifying major risks, both technical and managerial, and determining how to lessen them helps keep the software development process under control.[2] The spiral model is based on continuous refinement of key products for requirements definition and analysis, system and software design, and implementation (the code). At each iteration around the cycle, the products are extensions of an earlier product. This model uses many of the same phases as the waterfall model, in essentially the same order, separated by planning, risk assessment, and the building of prototypes and simulations.[2] Documents are produced when they are required, and their content reflects the information necessary at that point in the process. Not all documents are created at the beginning of the process, nor all at the end (hopefully). Like the product they define, the documents are works in progress. The idea is to have a continuous stream of products produced and available for user review.[2] The spiral lifecycle model allows for elements of the product to be added in when they become available or known. This assures that there is no conflict with previous requirements and design. This method is consistent with approaches that have multiple software builds and releases and allows for an orderly transition to a maintenance activity. Another positive aspect is that the spiral model forces early user involvement in the system development effort. For projects with heavy user interfacing, such as user application programs or instrument interface applications, such involvement is helpful.[2] Starting at the center, each turn around the spiral goes through several task regions:[2]

1. Determine the objectives, alternatives, and constraints of the new iteration.
2. Evaluate alternatives and identify and resolve risk issues.
3. Develop and verify the product for this iteration.
4. Plan the next iteration.
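One way to picture these task regions is as a loop, sketched below (the phase functions are hypothetical placeholders, not part of any spiral-model toolkit):

```python
# Sketch of one way to express the spiral model's task regions as a loop.
# The phase functions are invented placeholders for illustration.

def determine_objectives(iteration: int) -> dict:
    return {"iteration": iteration, "objectives": f"objectives for build {iteration}"}

def evaluate_alternatives_and_resolve_risks(plan: dict) -> dict:
    plan["risks_resolved"] = True  # e.g. via prototyping or simulation
    return plan

def develop_and_verify(plan: dict) -> str:
    return f"verified product increment {plan['iteration']}"

def plan_next_iteration(plan: dict) -> None:
    print(f"planning iteration {plan['iteration'] + 1}")

# Each turn around the spiral visits the four task regions in order.
for iteration in range(1, 4):
    plan = determine_objectives(iteration)
    plan = evaluate_alternatives_and_resolve_risks(plan)
    increment = develop_and_verify(plan)
    print(increment)
    plan_next_iteration(plan)
```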

Note that the requirements activity takes place in multiple sections and in multiple iterations, just as planning and risk analysis occur in multiple places. Final design, implementation, integration, and test occur in iteration 4. The spiral can be repeated multiple times for multiple builds. Using this method of development, some functionality can be delivered to the user faster than with the waterfall method. The spiral method also helps manage risk and uncertainty by allowing multiple decision points and by explicitly admitting that not everything can be known before the subsequent activity starts.[2]

Applications

The spiral model is mostly used in large projects. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military adopted the spiral model for its Future Combat Systems (FCS) program. The FCS project was canceled after six years (2003-2009); it had a two-year iteration (spiral) and should have resulted in three consecutive prototypes (one prototype per spiral, every two years). It was canceled in May 2009. The spiral model thus may suit small (up to $3 million) software applications but not a complicated ($3 billion) distributed, interoperable system of systems. It is also reasonable to use the spiral model in projects where business goals are unstable but the architecture must be realized well enough to provide high loading and stress capability. For example, Spiral Architecture Driven Development is a spiral-based software development life cycle (SDLC) which shows one possible way to reduce the risk of an ineffective architecture with the help of the spiral model in conjunction with best practices from other models.

RAD Model
Rapid application development (RAD) is a software development methodology that uses minimal planning in favor of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements.

Rapid Application Development (RAD) Model

Rapid application development is a software development methodology that involves methods like iterative development and software prototyping. According to Whitten (2004), it is a merger of various structured techniques, especially data-driven information engineering, with prototyping techniques to accelerate software systems development. In rapid application development, structured techniques and prototyping are especially used to define users' requirements and to design the final system. The development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems".[1] RAD approaches may entail compromises in functionality and performance in exchange for enabling faster development and facilitating application maintenance.

Rapid application development is a term originally used to describe a software development process introduced by James Martin in 1991. Martin's methodology involves iterative development and the construction of prototypes. More recently, the term and its acronym have come to be used in a broader, general sense that encompasses a variety of methods aimed at speeding application development, such as the use of software frameworks of varied types, e.g. web application frameworks. Rapid application development was a response to non-agile processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method and other waterfall models. One problem with previous methodologies was that applications took so long to build that requirements had changed before the system was complete, resulting in inadequate or even unusable systems. Another problem was the assumption that a methodical requirements analysis phase alone would identify all the critical requirements. Ample evidence[citation needed] attests to the fact that this is seldom the case, even for projects with highly experienced professionals at all levels. Starting with the ideas of Brian Gallagher, Alex Balchin, Barry Boehm and Scott Shultz, James Martin developed the rapid application development approach during the 1980s at IBM and finally formalized it by publishing a book in 1991, Rapid Application Development.

Four phases of RAD

1. Requirements planning phase: combines elements of the system planning and systems analysis phases of the system development life cycle (SDLC). Users, managers, and IT staff members discuss and agree on business needs, project scope, constraints, and system requirements. It ends when the team agrees on the key issues and obtains management authorization to continue.

2. User design phase: during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes, inputs, and outputs. The RAD groups or subgroups typically use a combination of Joint Application Development (JAD) techniques and CASE tools to translate user needs into working models. User design is a continuous, interactive process that allows users to understand, modify, and eventually approve a working model of the system that meets their needs.

3. Construction phase: focuses on program and application development tasks similar to the SDLC. In RAD, however, users continue to participate and can still suggest changes or improvements as actual screens or reports are developed. Its tasks are programming and application development, coding, unit and integration testing, and system testing.

4. Cutover phase: resembles the final tasks in the SDLC implementation phase, including data conversion, testing, changeover to the new system, and user training. Compared with traditional methods, the entire process is compressed. As a result, the new system is built, delivered, and placed in operation much sooner. Its tasks are data conversion, full-scale testing, system changeover, and user training.

Iterative and Incremental development


Iterative and incremental development is at the heart of a cyclic software development process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic interactions in between.

An iterative development model.

Iterative and incremental development are essential parts of the Rational Unified Process, Extreme Programming, and, generally, the various agile software development frameworks. The approach follows a process similar to the plan-do-check-act cycle of business process improvement.

The basic idea

A common mistake is to consider "iterative" and "incremental" as synonyms, which they are not. In software and systems development, however, they typically go hand in hand. The basic idea is to develop a system through repeated cycles (iterative) and in smaller portions at a time (incremental), allowing software developers to take advantage of what was learned during development of earlier parts or versions of the system. Learning comes from both the development and use of the system. Where possible, key steps in the process start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.

The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal of this initial implementation is to create a product to which the user can react. It should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed. It includes such items as new features to be implemented and areas of redesign of the existing solution. The control list is constantly revised as a result of the analysis phase.

The iteration step involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal for the design and implementation of any iteration is to be simple, straightforward, and modular, supporting redesign at that stage or as a task added to the project control list. The level of design detail is not dictated by the iterative approach. In a lightweight iterative project the code may represent the major source of documentation of the system; in a critical iterative project, however, a formal software design document may be used. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency, and achievement of goals. The project control list is modified in light of the analysis results.
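This procedure can be sketched as a simple loop over a project control list (a minimal illustration; the task names and the analysis rule are invented):

```python
# Sketch of the procedure described above: an initialization step creates a
# base version, and each iteration takes a task from the project control
# list, implements it, and revises the list based on analysis.

project_control_list = [
    "implement core data model",
    "add user-facing report",
    "redesign storage layer",
]

def initialize() -> list:
    # Base version: a simple product the user can react to.
    return ["skeleton application"]

def analyse(system: list) -> list:
    # Analysis of the current version may add new tasks to the control list.
    return ["improve usability"] if len(system) == 3 else []

system = initialize()
while project_control_list:
    task = project_control_list.pop(0)            # redesign/implement one task
    system.append(task)
    project_control_list.extend(analyse(system))  # revise the control list
    print(f"iteration complete: {task}; remaining tasks: {len(project_control_list)}")
```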

Iterative development.

Phases

Incremental development slices the system functionality into increments (portions). In each increment, a slice of functionality is delivered through cross-discipline work, from requirements through to deployment. The unified process groups increments/iterations into phases: inception, elaboration, construction, and transition.

1. Inception identifies project scope, risks, and requirements (functional and non-functional) at a high level, but in enough detail that work can be estimated.
2. Elaboration delivers a working architecture that mitigates the top risks and fulfills the non-functional requirements.
3. Construction incrementally fills in the architecture with production-ready code produced from analysis, design, implementation, and testing of the functional requirements.
4. Transition delivers the system into the production operating environment.

Each of the phases may be divided into one or more iterations, which are usually time-boxed rather than feature-boxed. Architects and analysts work one iteration ahead of developers and testers to keep their work-product backlog full.

The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall. Contrast with Waterfall development Waterfall development completes the project-wide work-products of each discipline in one step before moving on to the next discipline in the next step. Business value is delivered all at once, and only at the very end of the project. Backtracking is possible in an iterative approach.

Implementation guidelines

Guidelines that drive the implementation and analysis include:

1. Any difficulty in designing, coding, or testing a modification should signal the need for redesign or re-coding.
2. Modifications should fit easily into isolated and easy-to-find modules. If they do not, some redesign is possibly needed.
3. Modifications to tables should be especially easy to make. If any table modification is not quickly and easily done, redesign is indicated.
4. Modifications should become easier to make as the iterations progress. If they are not, there is a basic problem such as a design flaw or a proliferation of patches.
5. Patches should normally be allowed to exist for only one or two iterations. Patches may be necessary to avoid redesigning during an implementation phase.
6. The existing implementation should be analysed frequently to determine how well it measures up to project goals.
7. Program analysis facilities should be used whenever available to aid in the analysis of partial implementations.
8. User reaction should be solicited and analysed for indications of deficiencies in the current implementation.

Agile Software Development


Agile Software Development is a group of software development methodologies based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. The Agile Manifesto[1] introduced the term in 2001.

Agile software development poster

Incremental software development methods have been traced back to 1957. In 1974, a paper by E. A. Edmonds introduced an adaptive software development process. So-called lightweight software development methods evolved in the mid-1990s as a reaction against heavyweight methods, which were characterized by their critics as heavily regulated, regimented, micromanaged, waterfall-style development. Proponents of lightweight methods (and now agile methods) contend that they are a return to development practices from early in the history of software development. Early implementations of lightweight methods include Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven Development, and Dynamic Systems Development Method (DSDM) (1995). These are now typically referred to as agile methodologies, after the Agile Manifesto published in 2001.

Agile Manifesto

In February 2001, 17 software developers[5] met at the Snowbird, Utah resort to discuss lightweight development methods. They published the Manifesto for Agile Software Development[1] to define the approach now known as agile software development. Some of the manifesto's authors formed the Agile Alliance, a nonprofit organization that promotes software development according to the manifesto's principles. The Agile Manifesto reads, in its entirety, as follows:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

The meanings of the manifesto items on the left, within the agile software development context, are described below.

Individuals and interactions: in agile development, self-organization and motivation are important, as are interactions like co-location and pair programming.
Working software: working software is more useful and welcome than just presenting documents to clients in meetings.
Customer collaboration: requirements cannot be fully collected at the beginning of the software development cycle, therefore continuous customer or stakeholder involvement is very important.
Responding to change: agile development is focused on quick responses to change and continuous development.[6]

Twelve principles underlie the Agile Manifesto, including:

1. Customer satisfaction by rapid delivery of useful software
2. Welcome changing requirements, even late in development
3. Working software is delivered frequently (weeks rather than months)
4. Working software is the principal measure of progress
5. Sustainable development, able to maintain a constant pace
6. Close, daily co-operation between business people and developers
7. Face-to-face conversation is the best form of communication (co-location)
8. Projects are built around motivated individuals, who should be trusted
9. Continuous attention to technical excellence and good design
10. Simplicity
11. Self-organizing teams
12. Regular adaptation to changing circumstances

In 2005, a group headed by Alistair Cockburn and Jim Highsmith wrote an addendum of project management principles, the Declaration of Interdependence,[8] to guide software project management according to agile development methods.

There are many specific agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project. Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames (timeboxes) that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle, including planning, requirements analysis, design, coding, unit testing, and acceptance testing when a working product is demonstrated to stakeholders. This minimizes overall risk and allows the project to adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration.[9] Multiple iterations may be required to release a product or new features.

Team composition in an agile project is usually cross-functional and self-organizing, without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration's requirements. Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. Most agile teams work in a single open office (called a bullpen), which facilitates such communication. Team size is typically small (5-9 people) to simplify team communication and collaboration. Larger development efforts may be delivered by multiple teams working toward a common goal or on different parts of an effort. This may require coordination of priorities across teams. When a team works in different locations, members maintain daily contact through videoconferencing, voice, e-mail, etc.

No matter which development disciplines are required, each agile team will contain a customer representative. This person is appointed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment (ROI) and ensuring alignment with customer needs and company goals.

Most agile implementations use a routine and formal daily face-to-face communication among team members. This specifically includes the customer representative and any interested stakeholders as observers. In a brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This face-to-face communication exposes problems as they arise.

Agile development emphasizes working software as the primary measure of progress. This, combined with the preference for face-to-face communication, produces less written documentation than other methods. The agile method encourages stakeholders to prioritize wants with other iteration outcomes based exclusively on business value perceived at the beginning of the iteration (also known as value-driven).[10] Specific tools and techniques such as continuous integration, automated or xUnit tests, pair programming, test-driven development, design patterns, domain-driven design, code refactoring, and other techniques are often used to improve quality and enhance project agility.

The Four Phases of Traditional Software Development

What follows is a description of the traditional/waterfall lifecycle, which agile is not:

1. Requirements. The first step in the traditional software development process is to identify requirements as well as the scope of the release. It encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.

2. Architecture and Design. The goal of the architecture and design phase is to try to identify an architecture that has a good chance of working. The architecture is often defined using free-form diagrams which explore the technical infrastructure and the major business entities and their relationships. The design is derived in a modeling session, in which issues are explored until the team is satisfied that they understand what needs to be delivered.

3. Development. The agile development phase, by contrast, uses an evolutionary method, that is, an iterative and incremental approach to software development. Instead of creating a comprehensive prerequisite such as a requirements specification that you review and accept before creating a complete design model, the critical development piece evolves over time in an iterative manner. The system is delivered incrementally over time, in small modules that have immediate business value, rather than built and then delivered in a single big-bang release. By focusing development on smaller modules, agile projects are able to control costs despite the seeming lack of planning.

4. Test and Feedback. One of the key principles of the agile methodology is to conduct testing of the software as it is being developed. The software development is test-driven. Unit testing is done from the developer's perspective and acceptance testing is conducted from the customer's perspective.

Extreme Programming
Extreme Programming (XP) is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles (timeboxing), which is intended to improve productivity and introduce checkpoints where new customer requirements can be adopted. Other elements of extreme programming include: programming in pairs or doing extensive code review, unit testing of all code, avoiding programming of features until they are actually needed, a flat management structure, simplicity and clarity in code, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers.[2][3][4] The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels, on the theory that if some is good, more is better. Critics have noted several potential drawbacks, including problems with unstable requirements, no documented compromises of user conflicts, and a lack of an overall design specification or document.

Planning and feedback loops in Extreme Programming.

Extreme Programming was created by Kent Beck during his work on the Chrysler Comprehensive Compensation System (C3) payroll project. Beck became the C3 project leader in March 1996, began to refine the development method used in the project, and wrote a book on the method (Extreme Programming Explained, published in October 1999). Chrysler cancelled the C3 project in February 2000, after the company was acquired by Daimler-Benz.

Although extreme programming itself is relatively new, many of its practices have been around for some time; the methodology, after all, takes "best practices" to extreme levels. For example, the "practice of test-first development, planning and writing tests before each micro-increment" was used as early as NASA's Project Mercury, in the early 1960s (Larman 2003). To shorten the total development time, some formal test documents (such as those for acceptance testing) have been developed in parallel with (or shortly before) the software being ready for testing. A NASA independent test group can write the test procedures, based on formal requirements and logical limits, before the software has been written and integrated with the hardware. XP takes this concept to the extreme level, writing automated tests (perhaps inside of software modules) which validate the operation of even small sections of software coding, rather than only testing the larger features. Some other XP practices, such as refactoring, modularity, bottom-up design, and incremental design, were described by Leo Brodie in his book published in 1984.

Origins

Software development in the 1990s was shaped by two major influences: internally, object-oriented programming replaced procedural programming as the programming paradigm favored by some in the industry; externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company growth as competitive business factors. Rapidly changing requirements demanded shorter product life-cycles, and were often incompatible with traditional methods of software development.

Goals

Extreme Programming Explained describes Extreme Programming as a software development discipline that organizes people to produce higher-quality software more productively. XP attempts to reduce the cost of changes in requirements by having multiple short development cycles rather than one long one. In this doctrine, changes are a natural, inescapable, and desirable aspect of software development projects, and should be planned for instead of attempting to define a stable set of requirements. Extreme programming also introduces a number of basic values, principles, and practices on top of the agile programming framework.

Activities

XP describes four basic activities that are performed within the software development process: coding, testing, listening, and designing. Each of these activities is described below.

Coding

The advocates of XP argue that the only truly important product of the system development process is code: software instructions a computer can interpret. Without code, there is no working product.

Coding can also be used to figure out the most suitable solution, and to communicate thoughts about programming problems. A programmer dealing with a complex programming problem, finding it hard to explain the solution to fellow programmers, might code it and use the code to demonstrate what he or she means. Code, say the proponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.

Testing

Extreme programming's approach is that if a little testing can eliminate a few flaws, a lot of testing can eliminate many more flaws.

Unit tests determine whether a given feature works as intended. A programmer writes as many automated tests as they can think of that might "break" the code; if all tests run successfully, then the coding is complete. Every piece of code that is written is tested before moving on to the next feature.

Acceptance tests verify that the requirements as understood by the programmers satisfy the customer's actual requirements. These occur in the exploration phase of release planning.
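A minimal sketch of these two levels of testing, using Python's standard unittest module (the cart_total feature and its requirements are invented for illustration; in XP the tests would be written before the code):

```python
# Sketch of XP's two test levels. The feature and its requirements are
# hypothetical stand-ins.

import unittest

def cart_total(prices, tax_rate=0.0):
    """Feature under test: total of item prices plus tax."""
    subtotal = sum(prices)
    return round(subtotal * (1.0 + tax_rate), 2)

class UnitTests(unittest.TestCase):
    # Programmer-written tests that try to "break" the code.
    def test_empty_cart(self):
        self.assertEqual(cart_total([]), 0.0)

    def test_rounding(self):
        self.assertEqual(cart_total([0.10, 0.20], tax_rate=0.07), 0.32)

class AcceptanceTests(unittest.TestCase):
    # Customer-facing check of an (invented) story:
    # "a $100 cart with 5% tax costs $105".
    def test_customer_story(self):
        self.assertEqual(cart_total([100.0], tax_rate=0.05), 105.0)

if __name__ == "__main__":
    unittest.main()
```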

A "testathon" is an event when programmers meet to do collaborative test writing, a kind of brainstorming relative to software testing. Listening Programmers must listen to what the customers need the system to do, what "business logic" is needed. They must understand these needs well enough to give the customer feedback about the technical aspects of how the problem might be solved, or cannot be solved. Communication between the customer and programmer is further addressed in the Planning Game. Designing From the point of view of simplicity, of course one could say that system development doesn't need more than coding, testing and listening. If those activities are performed well, the result should always be a system that works. In practice, this will not work. One can come a long way without designing but at a given time one will get stuck. The system becomes too complex and the dependencies within the system cease to be clear. One can avoid this by creating a design structure that organizes the logic in the system. Good design will avoid lots of dependencies within a system; this means that changing one part of the system will not affect other parts of the system.[citation needed] Values Extreme Programming initially recognized four values in 1999. A new value was added in the second edition of Extreme Programming Explained. The five values are:
Values

Extreme programming initially recognized four values in 1999; a fifth was added in the second edition of Extreme Programming Explained. The five values are:
Communication

Building software systems requires communicating system requirements to the developers of the system. In formal software development methodologies, this task is accomplished through documentation. Extreme programming techniques can be viewed as methods for rapidly building and disseminating institutional knowledge among the members of a development team. The goal is to give all developers a shared view of the system that matches the view held by the users of the system. To this end, extreme programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback.

Simplicity

Extreme programming encourages starting with the simplest solution; extra functionality can be added later. The difference between this approach and more conventional system development methods is the focus on designing and coding for the needs of today instead of those of tomorrow, next week, or next month. This is sometimes summed up as the "you ain't gonna need it" (YAGNI) approach.[9] Proponents of XP acknowledge the disadvantage that this can sometimes entail more effort tomorrow to change the system; their claim is that this is more than compensated for by the advantage of not investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements implies the risk of spending resources on something that might not be needed. Related to the "communication" value, simplicity in design and coding should improve the quality of communication: a simple design with very simple code can be easily understood by most programmers in the team.
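As an illustrative contrast (hypothetical code, not from the source), both versions below satisfy the same current requirement, "look up a price by product code"; the YAGNI version does only what today needs, while the speculative version pre-builds configurability nobody has asked for:

    # Speculative version: indirection for imagined future needs.
    class PriceRepositoryFactory:
        def __init__(self, backend="memory", cache_strategy=None, locale="en_US"):
            self.backend = backend                # only "memory" exists today
            self.cache_strategy = cache_strategy  # never used
            self.locale = locale                  # never used

        def create(self, prices):
            return {"memory": lambda: prices}[self.backend]()

    # YAGNI version: exactly what the current requirement calls for.
    PRICES = {"A100": 19.99, "B205": 5.00}

    def price_of(code: str) -> float:
        return PRICES[code]

    print(price_of("A100"))  # 19.99

If a second backend ever becomes a real requirement, the simple version can be extended then, with the requirement in hand rather than guessed at.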
Feedback

Within extreme programming, feedback relates to different dimensions of the system development:
Feedback from the system: by writing unit tests, or running periodic integration tests, the programmers get direct feedback on the state of the system after implementing changes.
Feedback from the customer: the functional tests (also known as acceptance tests) are written by the customer and the testers, who get concrete feedback about the current state of the system. These reviews are planned once every two or three weeks so the customer can easily steer the development.
Feedback from the team: when customers come up with new requirements in the planning game, the team gives a direct estimate of the time it will take to implement them.

Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break. The direct feedback from the system tells programmers to recode this part. A customer is able to test the system periodically according to the functional requirements, known as user stories. To quote Kent Beck, "Optimism is an occupational hazard of programming. Feedback is the treatment."
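A minimal sketch of communicating a flaw as a test (Python, hypothetical names): suppose a customer reports that an empty cart crashes the price summary; the flaw is first pinned down as a failing unit test, and the fix counts as done once the test passes:

    import unittest

    def average_item_price(prices):
        """Average price in the cart; originally raised ZeroDivisionError on an empty cart."""
        if not prices:  # the fix: define the average of an empty cart as 0.0
            return 0.0
        return sum(prices) / len(prices)

    class EmptyCartRegressionTest(unittest.TestCase):
        def test_empty_cart_average_is_zero(self):
            # Encodes the customer's bug report; this failed before the guard existed.
            self.assertEqual(average_item_price([]), 0.0)

        def test_nonempty_cart(self):
            self.assertEqual(average_item_price([10.0, 20.0]), 15.0)

    if __name__ == "__main__":
        unittest.main()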

Courage

Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow; this is an effort to avoid getting bogged down in design and needing a lot of effort to implement anything else. Courage enables developers to feel comfortable refactoring their code when necessary,[5] which means reviewing the existing system and modifying it so that future changes can be implemented more easily. Another example of courage is knowing when to throw code away: the courage to remove obsolete source code, no matter how much effort went into creating it. Courage also means persistence: a programmer might be stuck on a complex problem for an entire day, then solve it quickly the next day, if only they persist.

Respect

The respect value includes respect for others as well as self-respect. Programmers should never commit changes that break compilation, make existing unit tests fail, or otherwise delay the work of their peers. Members respect their own work by always striving for high quality and seeking the best design for the solution at hand through refactoring. Adopting the four earlier values leads to respect gained from others in the team; nobody on the team should feel unappreciated or ignored. This ensures a high level of motivation and encourages loyalty toward the team and toward the goal of the project. This value is very dependent upon the other values, and is very much oriented toward people in a team.
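One plausible way to support "never delay the work of your peers" is a small pre-commit check that refuses a commit when the test suite fails; the script below is a hypothetical sketch (Python, standard library only), not a practice prescribed by XP itself:

    #!/usr/bin/env python3
    # Hypothetical Git pre-commit hook: save as .git/hooks/pre-commit (executable),
    # run the unit-test suite, and abort the commit if anything fails.
    import subprocess
    import sys

    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
    )
    if result.returncode != 0:
        print("Tests failed; commit aborted out of respect for your teammates.")
        sys.exit(1)  # a non-zero exit code makes Git abort the commit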
Rules

The first version of rules for XP was published in 1999 by Don Wells[11] on the XP website. Twenty-nine rules are given in the categories of planning, managing, designing, coding, and testing. Planning, managing and designing are called out explicitly to counter claims that XP doesn't support those activities. Another version of XP rules was proposed by Ken Auer[12] at XP/Agile Universe 2003. He felt XP was defined by its rules, not its practices (which are subject to more variation and ambiguity). He defined two categories: "Rules of Engagement", which dictate the environment in which software development can take place effectively, and "Rules of Play", which define the minute-by-minute activities and rules within the framework of the Rules of Engagement.

Principles

The principles that form the basis of XP are based on the values just described and are intended to foster decisions in a system development project. The principles are intended to be more concrete than the values and more easily translated into guidance in a practical situation.
Feedback

Extreme programming sees feedback as most useful when it is rapid, and holds that the time between an action and its feedback is critical to learning and making changes. Unlike in traditional system development methods, contact with the customer occurs in frequent iterations. The customer has clear insight into the system that is being developed and can give feedback and steer the development as needed. Unit tests also contribute to the rapid-feedback principle: when writing code, running the unit tests provides direct feedback on how the system reacts to the changes one has made. If, for instance, the changes affect a part of the system that is not in the scope of the programmer who made them, that programmer may not notice the flaw without such tests, and the bug is then likely to surface only once the system is in production.

Assuming simplicity

This is about treating every problem as if its solution were "extremely simple". Traditional system development methods say to plan for the future and to code for reusability; extreme programming rejects these ideas. The advocates of extreme programming say that making big changes all at once does not work. Extreme programming applies incremental changes: for example, a system might have small releases every three weeks. When many little steps are made, the customer has more control over the development process and the system that is being developed.

Embracing change

The principle of embracing change is about not working against changes but embracing them. For instance, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration.

Incremental build model

The incremental build model is a method of software development in which the product is designed, implemented and tested incrementally (a little more is added each time) until it is finished. It involves both development and maintenance, and the product is defined as finished when it satisfies all of its requirements. This model combines the elements of the waterfall model with the iterative philosophy of prototyping. The product is decomposed into a number of components, each of which is designed and built separately (these are termed builds). Each component is delivered to the client when it is complete. This allows partial utilisation of the product and avoids a long development time; it also avoids a large initial capital outlay and the subsequent long wait. This model of development also helps ease the traumatic effect of introducing a completely new system all at once. There are, overall, few problems with this model.