
DBA 1656

QUALITY MANAGEMENT


UNIT I INTRODUCTION TO QUALITY MANAGEMENT


INTRODUCTION
Today's organizations can gain prominence through products and services embedded with quality. The influence of quality can be seen everywhere in such an organization, and managing quality in this all-pervasive way is what is called Total Quality Management. The customer is at the forefront of the Total Quality Management process, because the entire exercise is focused on customer satisfaction, ultimately ending in customer delight. Total Quality Management helps in shaping the future. This unit deals with quality management definitions; the TQM framework, benefits, awareness and obstacles; quality vision, mission and policy statements; customer focus and customer perception of quality; translating needs into requirements; customer retention; dimensions of product and service quality; and the cost of quality.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:
- Have a feel for quality
- Get a framework on TQM
- Get an introduction to vision
- Understand the indispensability of the customer
- Identify the dimensions of product and service quality

1.1 QUALITY MANAGEMENT DEFINITIONS

Principles and Philosophies of Quality Management

Quality management is becoming increasingly important to the leadership and management of all organizations. It is necessary to identify quality management as a distinct discipline of management and lay down universally understood and accepted rules for this discipline. The ISO technical committee working on the ISO 9000 standards published a document detailing the quality management principles and application guidelines. The latest revision (version 2000) of the ISO 9000 standards is based on these principles.

Definition of a Quality Management Principle

A quality management principle is a comprehensive and fundamental rule or belief for leading and operating an organization, aimed at continually improving performance over the long term by focusing on customers while addressing the needs of all other stakeholders.

The eight principles are:
1. Customer-Focused Organization
2. Leadership
3. Involvement of People
4. Process Approach
5. System Approach to Management
6. Continual Improvement
7. Factual Approach to Decision-Making
8. Mutually Beneficial Supplier Relationships

Now let us examine the principles in detail.

Principle 1 - Customer-Focused Organization

Organizations depend on their customers and therefore should understand current and future customer needs, meet customer requirements and strive to exceed customer expectations. Steps in the application of this principle are:
1. Understand customer needs and expectations for products, delivery, price, dependability, etc.
2. Ensure a balanced approach between the needs and expectations of customers and those of other stakeholders (owners, people, suppliers, local communities and society at large).
3. Communicate these needs and expectations throughout the organization.
4. Measure customer satisfaction and act on the results.
5. Manage customer relationships.

Principle 2 - Leadership

Leaders establish unity of purpose and direction for the organization. They should create and maintain the internal environment in which people can become fully involved in achieving the organization's objectives. Steps in the application of this principle are:
1. Be proactive and lead by example.
2. Understand and respond to changes in the external environment.
3. Consider the needs of all stakeholders, including customers, owners, people, suppliers, local communities and society at large.
4. Establish a clear vision of the organization's future.
5. Establish shared values and ethical role models at all levels of the organization.
6. Build trust and eliminate fear.
7. Provide people with the required resources and the freedom to act with responsibility and accountability.
8. Inspire, encourage and recognize people's contributions.
9. Promote open and honest communication.
10. Educate, train and coach people.
11. Set challenging goals and targets.
12. Implement a strategy to achieve these goals and targets.

Principle 3 - Involvement of People

People at all levels are the essence of an organization, and their full involvement enables their abilities to be used for the organization's benefit. Steps in the application of this principle are:
1. Accept ownership and responsibility for solving problems.
2. Actively seek opportunities to make improvements and to enhance competencies, knowledge and experience.
3. Freely share knowledge and experience in teams.
4. Focus on the creation of value for customers.
5. Be innovative in furthering the organization's objectives.
6. Improve the way of representing the organization to customers, local communities and society at large.
7. Help people derive satisfaction from their work.
8. Make people enthusiastic and proud to be part of the organization.

Principle 4 - Process Approach

A desired result is achieved more efficiently when related resources and activities are managed as a process. Steps in the application of this principle are:
1. Define the process needed to achieve the desired result.
2. Identify and measure the inputs and outputs of the process.
3. Identify the interfaces of the process with the functions of the organization.
4. Evaluate possible risks, consequences and impacts of processes on customers, suppliers and other stakeholders of the process.
5. Establish clear responsibility, authority and accountability for managing the process.
6. Identify internal and external customers, suppliers and other stakeholders of the process.
7. When designing processes, consider process steps, activities, flows, control measures, training needs, equipment, methods, information, materials and other resources needed to achieve the desired result.

Principle 5 - System Approach to Management

Identifying, understanding and managing a system of interrelated processes for a given objective improves the organization's effectiveness and efficiency. Steps in the application of this principle are:
1. Define the system by identifying or developing the processes that affect a given objective.
2. Structure the system to achieve the objective in the most efficient way.
3. Understand the interdependencies among the processes of the system.
4. Continually improve the system through measurement and evaluation.
5. Estimate the resource requirements and establish resource constraints prior to action.

Principle 6 - Continual Improvement

Continual improvement should be a permanent objective of the organization. Steps in the application of this principle are:
1. Make continual improvement of products, processes and systems an objective for every individual in the organisation.
2. Apply the basic improvement concepts of incremental improvement and breakthrough improvement.
3. Use periodic assessments against established criteria of excellence to identify areas for potential improvement.
4. Continually improve the efficiency and effectiveness of all processes.
5. Promote prevention-based activities.
6. Provide every member of the organization with appropriate education and training on the methods and tools of continual improvement, such as the Plan-Do-Check-Act cycle, problem solving, process re-engineering and process innovation.
7. Establish measures and goals to guide and track improvements.
8. Recognize improvements.


Principle 7 - Factual Approach to Decision Making

Effective decisions are based on the analysis of data and information. Steps in the application of this principle are:
1. Take measurements and collect data and information relevant to the objective.
2. Ensure that the data and information are sufficiently accurate, reliable and accessible.
3. Analyze the data and information using valid methods.
4. Understand the value of appropriate statistical techniques.
5. Make decisions and take action based on the results of logical analysis, balanced with experience and intuition.

Principle 8 - Mutually Beneficial Supplier Relationships

An organization and its suppliers are interdependent, and a mutually beneficial relationship enhances the ability of both to create value. Steps in the application of this principle are:
1. Identify and select key suppliers.
2. Establish supplier relationships that balance short-term gains with long-term considerations for the organization and society at large.
3. Create clear and open communications.
4. Initiate joint development and improvement of products and processes.
5. Jointly establish a clear understanding of customers' needs.
6. Share information and future plans.
7. Recognize supplier improvements and achievements.


1.2 TQM FRAMEWORK, BENEFITS, AWARENESS AND OBSTACLES

FIGURE 1.1 Total Quality Management


For organizations to survive and grow in today's challenging marketplace, they need true commitment to meeting customer needs through communication, planning and continuous process improvement activities. Creating this culture change can improve the products and services of your organization as well as improve employee attitudes and enthusiasm. All of these contribute to the ultimate goal of improved quality, productivity and customer satisfaction, which is an important competitive advantage in today's marketplace. Total Quality Management (TQM) is a management strategy aimed at embedding awareness of quality in all organizational processes. TQM has been widely used in manufacturing, education, government and service industries, as well as in NASA space and science programs. Total Quality provides an umbrella under which everyone in the organization can strive to create customer satisfaction. TQ is a people-focused management system that aims at continual increases in customer satisfaction at continually lower real cost.

Definition

TQM is composed of three paradigms:

Total: organization-wide.
Quality: with its usual definitions, with all its complexities (external definition).
Management: the system of managing, with steps like Plan, Organize, Control, Lead, Staff, etc.


As defined by the International Organization for Standardization (ISO): "TQM is a management approach for an organization, centered on quality, based on the participation of all its members and aiming at long-term success through customer satisfaction, and benefits to all members of the organization and to society."

In Japan, TQM comprises four process steps, namely:
1. Kaizen - focuses on continuous process improvement, to make processes visible, repeatable and measurable.
2. Atarimae Hinshitsu - the idea that things will work as they are supposed to (e.g., a pen will write).
3. Kansei - examining the way the user applies the product leads to improvement in the product itself.
4. Miryokuteki Hinshitsu - the idea that things should have an aesthetic quality (e.g., a pen will write in a way that is pleasing to the writer).

TQM requires that the company maintain this quality standard in all aspects of its business. This requires ensuring that things are done right the first time and that defects and waste are eliminated from operations.

Origins

Total Quality Control was the key concept of Armand Feigenbaum's 1951 book, Quality Control: Principles, Practice, and Administration, a book that was subsequently released in 1961 under the title Total Quality Control (ISBN 0-07-020353-9). Joseph Juran, Philip B. Crosby and Kaoru Ishikawa also contributed to the body of knowledge now known as TQM.

The American Society for Quality says that the term Total Quality Management was first used by the U.S. Naval Air Systems Command to describe its Japanese-style management approach to quality improvement. This is consistent with the story that the United States Navy Personnel Research and Development Center began researching the use of statistical process control (SPC), the work of Juran, Crosby and Ishikawa, and the philosophy of W. Edwards Deming to make performance improvements in 1984. This approach was first tested at the North Island Naval Aviation Depot.

In his 1994 paper, The Making of TQM: History and Margins of the Hi(gh)-Story, Xu claims that "Total Quality Control" is translated incorrectly from Japanese, since there is no difference between the words "control" and "management" in Japanese. William Golimski refers to Koji Kobayashi, former CEO of NEC, as the first person to use TQM, which he did during a speech when he received the Deming Prize in 1974.

TQM in Manufacturing

Quality assurance through statistical methods is a key component in a manufacturing organization, where TQM generally starts by sampling a random selection of the product. The sample can then be tested for the things that matter most to the end users. The causes of any failures are isolated, secondary measures of the production process are designed, and then the causes of the failure are corrected. The statistical distributions of important measurements are tracked. When part measurements drift into a defined error band, the process is fixed. The error band is usually a tighter distribution than the failure band, so that the production process is fixed before failing parts can be produced.

It is important to record not just the measurement ranges, but also which failures caused them to be chosen. In that way, cheaper fixes can be substituted later (say, when the product is redesigned) with no loss of quality. After TQM has been in use, it is very common for parts to be redesigned so that critical measurements either cease to exist or become much wider.

It took people a while to develop tests to find emergent problems. One popular test is a life test, in which the sample product is operated until a part fails. Another popular test is called shake and bake, in which the product is mounted on a vibrator in an environmental oven and operated at progressively more extreme vibration and temperatures until something fails. The failure is then isolated and engineers design an improvement.

A commonly discovered failure is for the product to disintegrate. If fasteners fail, the improvements might be to use measured-tension nutdrivers to ensure that screws don't come off, or improved adhesives to ensure that parts remain glued. If a gearbox wears out first, a typical engineering design improvement might be to substitute a brushless stepper motor for a DC motor with a gearbox. The improvement is that a stepper motor has no brushes or gears to wear out, so it lasts ten or more times longer. The stepper motor is more expensive than a DC motor, but cheaper than a DC motor combined with a gearbox. The electronics are radically different, but equally expensive. One disadvantage might be that a stepper motor can hum or whine, and usually needs noise-isolating mounts.

Often, a product that has been through TQM is cheaper to produce because of efficiency and performance improvements and because there is no need to repair dead-on-arrival products, which makes for an immensely more desirable product.
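The error-band monitoring described above can be sketched in a few lines of code. The following is a minimal illustration only; the nominal value, band widths and measurements are hypothetical, not taken from the text. A measurement triggers a process adjustment when it drifts into an action (error) band that is deliberately tighter than the failure band, so the process is corrected before defective parts are made.

    def classify_measurement(value, nominal, action_tol, failure_tol):
        """Classify a part measurement against two bands centred on the nominal value.

        action_tol  - half-width of the tighter 'action' (error) band
        failure_tol - half-width of the wider 'failure' band (failure_tol > action_tol)
        """
        deviation = abs(value - nominal)
        if deviation > failure_tol:
            return "failed part"          # a defect has already been produced
        if deviation > action_tol:
            return "adjust process"       # drifting: fix the process before failures occur
        return "in band"                  # no action needed

    # Hypothetical shaft diameters (mm): nominal 10.00, failure band +/-0.10, action band +/-0.06
    for d in [10.01, 10.05, 10.07, 10.12]:
        print(d, classify_measurement(d, nominal=10.00, action_tol=0.06, failure_tol=0.10))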

TQM and contingency-based research

TQM has not been independent of its environment. In the context of management control systems (MCSs), Sim and Killough (1998) show that incentive pay enhanced the positive effects of TQM on customer and quality performance. Ittner and Larcker (1995) demonstrated that product-focused TQM was linked to timely problem-solving information and flexible revisions to reward systems. Chenhall (2003) summarizes the findings from contingency-based research concerning management control systems and TQM by noting that TQM is associated with broadly based MCSs, including timely, flexible, externally focused information; close interactions between advanced technologies and strategy; and non-financial performance measurement.

TQM - just another management fad?

Abrahamson (1996) argued that fashionable management discourse, such as Quality Circles, tends to follow a lifecycle in the form of a bell curve. Ponzi and Koenig (2002) showed that the same can be said about TQM, which peaked between 1992 and 1996, while rapidly losing popularity in terms of citations after those years. Dubois (2002) argued that the use of the term TQM in management discourse created a positive utility regardless of what managers meant by it (which showed a large variation), while in the late 1990s the usage of the term TQM in the implementation of reforms lost the positive utility attached to the mere fact of using the term, and associations with TQM sometimes even became negative. Nevertheless, management concepts such as TQM leave their traces, as their core ideas can be very valuable. For example, Dubois (2002) showed that the core ideas behind the two management fads Reengineering and TQM, without explicit usage of their names, can even work in a synergistic way.

Benefits of TQM

TQM as a slogan has been around since 1985. TQM provides a management system, using various combinations of tools that have been in existence for much longer. Each tool has its own use, with benefits that follow. TQM leads to a synergy of benefits.


Through the application of TQM, senior management will empower all levels of management, including self-management at worker level, to manage quality systems. Outlined below are some advantages to be gained by a hotel from the use of TQM. These are split into the five key areas of TQM.

Continuous Improvement. People wish to improve themselves and get a better lifestyle. If the desire for individual improvement is transferred to systems within the workplace, then these systems will improve.

Management can, at times, be a restraint on innovation through relying on historical systems. This results in "always do what you have always done and you will always get what you have always got." A good chef will know how best to prepare and present food. If given the freedom to innovate, the standard of food will improve. When mistakes are made by staff, it is rarely through a desire to make a mistake; the system used is at fault. With departments constantly striving for improvement, hotel systems will improve, leading to reduced internal costs and a better service for customers.

Multifunctional Teams. Within the hotel, departments are customers and suppliers for each other. A waiter is the chef's supplier, giving information on what has been ordered from the menu, including any special requests about how the food should be prepared (medium, well done, etc.). When the food is ready, the roles are reversed: the chef becomes the waiter's supplier, providing the food. If the hotel's organization is structured in such a way that people in different departments work with each other to solve problems as a team, traditional inter-departmental barriers will be removed. Inter-departmental communication on a day-to-day basis is essential for effective management. Multifunctional teamwork allows the problems and requirements of each department to be passed on at worker level throughout the hotel. This will lead to a better understanding, by all employees, of how the hotel systems work. Individuals will work with each other, identifying causes of problems rather than blaming each other for the results of a problem. This will remove the blame culture.

Reduction in Variation. Systems are influenced by many elements which cause variation. For example, as new staff are employed, they will do their jobs in a different way from previous staff. This can lead to a change in the service provided by the hotel. If such influences can be minimized, then standards of service can be maintained. This can be achieved by documenting systems. Giving workers the opportunity to own their processes by letting them do this documentation boosts morale. Also, the documentation will accurately reflect what is actually done rather than what management thinks is done. Initially this will lead to guests receiving a known standard of service, which will result in repeat business. Once the standard is stabilized, changes made to improve the service can be measured and directly linked to the improvement made.

Supplier Integration. By involving your suppliers directly with your staff, two-way communication can be established. Your chef can explain the standard of food required. Your supplier can explain the difficulties in supplying such food. Any problems that arise can be solved jointly, using your knowledge of hotel systems, the chef's knowledge of food and the supplier's expertise in supply systems. This prevents: waste through returned goods for the supplier; expense through obtaining replacements from another supplier; and a drop in service by not being able to provide what was intended.

Education and Training. Through education, management and staff are given the tools to achieve all the above. Education provides for guided innovation from all levels. Training, which is a cost, shows a commitment by management to individual staff self-improvement, which is a motivator, and to TQM.

Summary. Staff will collectively provide continual improvement of hotel systems. By working together, communication and departmental barriers will be broken down. The standard of service can be set, maintained and then improved. Suppliers will be working with, rather than for, the hotel. The standard of staff and management will improve through education. Everyone adopts a new attitude to work by embracing the ideas of TQM.


The primary, long-term benefits of TQM in the public sector include better services, reduced costs and satisfied customers. Progressive improvement in the management systems and in the quality of services offered results in increasingly satisfied customers. In addition, a number of other benefits are observable, including improved skills, morale and confidence among public service staff; enhanced relationships between government and its constituents; increased government accountability and transparency; and improved productivity and efficiency of public services.

Other benefits of TQM include:
- a philosophy that improves business from top to bottom
- a focused, systematic and structured approach to enhancing customer satisfaction
- process improvement methods that reduce or eliminate problems, i.e., non-conformance costs
- tools and techniques for improvement - a quality operating system
- delivering what the customer wants in terms of service, product and the whole experience
- intrinsic motivation and improved attitudes throughout the workforce
- a proactive, prevention-oriented workforce
- enhanced communication
- reduction in waste and re-work
- increase in process ownership - employee involvement and empowerment
- everyone, from top to bottom, educated
- improved customer/supplier relationships (internally and externally)
- market competitiveness
- a quality-based management system for ISO 9001:2000 certification

1.3 QUALITY VISION, MISSION AND POLICY STATEMENTS

SAP : Vision

SAP's vision for quality management is to consistently deliver high-quality solutions focused on improving customer satisfaction.

Mission

The mission of quality management at SAP is to:

- Research and develop new methods and standards
- Proactively communicate and share knowledge
- Apply the knowledge to enhance our products, processes, and services
- Continually monitor and improve our performance against set targets
- Strive for prevention of failure, defect reduction, and increased customer satisfaction

Policy

Quality is the basic requirement for the satisfaction of our customers and for the resulting competitiveness and economic success of SAP. The Executive Board dedicates itself to implementing and monitoring the following global quality policy principles:

- SAP strives to further intensify the close cooperation with its internal and external customers and partners, and the performance-oriented communication with its suppliers.

- The continual improvement of our products, processes, and services, combined with innovation, is at the center of our endeavors. For this purpose, we strive to further optimize our organizational, operational, and technical processes.
- Quality management supports the business-oriented behavior of all parties involved.
- Promoting employee satisfaction and quality awareness are major managerial functions in the entire company.
- Commitment, professional competence, and personal responsibility are required from all SAP employees to achieve the goals based on the global quality policy principles. Employees know the input requirements needed to comply with quality in their area. Internal education is provided to help SAP employees fulfill their tasks.
- The quality goals based on this policy are regularly defined, implemented, and monitored by the responsible parties within the framework of quality management at SAP.


Honeywell Technology Solutions Lab (HTSL)

HTSL Vision

Be the premier growth company delivering unsurpassed value to Honeywell customers by providing Innovative Total Solutions and Services enhancing the safety, security, comfort, energy efficiency and productivity of the environment where they live, work and travel.

HTSL Mission

Maximize the value and impact on Honeywell businesses and customers by providing Technology, Product and Business Solutions and Services setting standards of world-class performance.

HTSL Quality Policy

To delight our customers by providing six sigma quality total solutions, demonstrating value and continuous improvement through competent and disciplined professionals.

K-Tron : Quality Policy

Quality is the basis for the long-term profitability and growth of K-Tron. In the industries we serve, we strive to be every customer's first-choice quality supplier.


Our quality policy is based on:

Customer Satisfaction. Our organization is focused on our customers. We are committed to satisfying the needs and expectations of our customers and other interested parties, including their economic, social and environmental concerns.

Continual Improvements. We are committed to implementing continual improvements to every K-Tron process, product, service and quality management system.

Quality Involvement. Top management reviews our quality policies and objectives to ensure their continual suitability and the framework of the quality management system. We believe that they complement our mission, vision and strategy. The quality policies, objectives, resource needs and effectiveness of the quality management system are communicated and understood within the organization. Employee awareness, development, involvement and self-responsibility are essential to the effectiveness of our quality management system. All employees are challenged to "do it right the first time." Adherence to our quality management system is a permanent commitment of all employees, suppliers and partners of K-Tron. We obligate ourselves to maintain a quality management system in accordance with ISO 9001:2000.

DoD : Mission and Vision

Vision

Lead Defense Acquisition to meet DoD's needs with excellence every time

Mission

- Identify and develop best acquisition policies and practices to promote flexibility and take advantage of the global marketplace
- Integrate policy creation, training, and communication to quickly and effectively deliver new policies and practices to the community
- Provide timely and sound acquisition advice to Federal leadership and DoD personnel as the DoD Acquisition Ombudsman


- Assist the acquisition community to obtain the best quality weapons, equipment and services for war fighters
- Lead the DoD acquisition, technology, and logistics community in recruiting, retaining, and training the right workforce with the right skills in the right place at the right time with the right pay
- Leverage the use of technology to provide the best possible tools to the acquisition workforce
- Continuously assess the results of our efforts and make improvements


1.4 CUSTOMER FOCUS - CUSTOMER PERCEPTION OF QUALITY, TRANSLATING NEEDS INTO REQUIREMENTS, CUSTOMER RETENTION

Customer Perception of Quality

There are three key elements of quality: customer, process and employee. Everything we do to remain a world-class quality company focuses on these three essential elements.

...the Customer: Delighting Customers

Customers are the center of GE's universe: they define quality. They expect performance, reliability, competitive prices, on-time delivery, service, clear and correct transaction processing and more. In every attribute that influences customer perception, we know that just being good is not enough. Delighting our customers is a necessity, because if we don't do it, someone else will!

...the Process: Outside-In Thinking

Quality requires us to look at our business from the customer's perspective, not ours. In other words, we must look at our processes from the outside in. By understanding the transaction lifecycle from the customer's needs and processes, we can discover what they are seeing and feeling. With this knowledge, we can identify areas where we can add significant value or improvement from their perspective.


FIGURE 1.2

...the Employee: Leadership Commitment

People create results. Involving all employees is essential to GE's quality approach. GE is committed to providing opportunities and incentives for employees to focus their talents and energies on satisfying customers. All GE employees are trained in the strategy, statistical tools and techniques of Six Sigma quality. Training courses are offered at various levels:
- Quality Overview Seminars: basic Six Sigma awareness.
- Team Training: basic tool introduction to equip employees to participate on Six Sigma teams.
- Master Black Belt, Black Belt and Green Belt Training: in-depth quality training that includes high-level statistical tools, basic quality control tools, Change Acceleration Process and Flow technology tools.
- Design for Six Sigma (DFSS) Training: prepares teams for the use of statistical tools to design it right the first time.

Quality is the responsibility of every employee. Every employee must be involved, motivated and knowledgeable if we are to succeed.

Our Customers Feel the Variance, Not the Mean

Often, our inside-out view of the business is based on average or mean-based measures of our recent past. Customers don't judge us on averages; they feel the variance in each transaction, in each product we ship. Six Sigma focuses first on reducing process variation and then on improving process capability. Customers value consistent, predictable business processes that deliver world-class levels of quality. This is what Six Sigma strives to produce.

GE's success with Six Sigma has exceeded our most optimistic predictions. Across the company, GE associates embrace Six Sigma's customer-focused, data-driven philosophy and apply it to everything we do. We are building on these successes by sharing best practices across all of our businesses, putting the full power of GE behind our quest for better, faster customer solutions.
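The point that customers feel the variance, not the mean, can be made concrete with a small numeric sketch. The figures below are invented for illustration: two delivery processes have the same average lead time, yet expose customers to very different proportions of late deliveries.

    import statistics

    def share_late(lead_times, promise):
        """Fraction of deliveries later than the promised lead time."""
        return sum(t > promise for t in lead_times) / len(lead_times)

    # Two hypothetical processes, both averaging 5 days against a 7-day promise
    steady  = [5, 5, 4, 6, 5, 5, 6, 4, 5, 5]
    erratic = [2, 9, 3, 8, 1, 10, 4, 8, 2, 3]

    for name, data in [("steady", steady), ("erratic", erratic)]:
        print(name,
              "mean:", statistics.mean(data),
              "stdev:", round(statistics.stdev(data), 2),
              "late share:", share_late(data, promise=7))

Reducing the spread of the second process, even without changing its mean, is what lowers the share of bad customer experiences.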

Defining Relationships

Why define Relationships between Lists?

Relationships between lists indicate how two lists are related to each other. They are generally used to prioritize one list based upon the priorities of another list. Relationships can be defined by answering a particular question for each cell in a Matrix. For example, the Relationships between Customer Requirements and Design Measures might be defined by asking "To what degree does this Measure predict the customer's satisfaction with this Requirement?" By asking this same question consistently for each Measure and Requirement combination, a set of relationships will be defined in the Matrix which will help to determine which Measures are most important to control in order to achieve a desired level of customer satisfaction. Another question which can be asked in order to define relationships is "What percent of this Requirement is handled by this Design Measure?" The relationships defined using this question would result in the highest priority being assigned to the Measures which control most of the functionality. These may not be the same as the Measures defined in order to predict customer satisfaction. Given these examples, you can see that it is critical that a team understands what question it is trying to answer before it starts defining Relationships. It is also critical that the team uses the same question consistently. By doing so, the team will be able to prioritize the Output List accurately.

How can Relationships be defined?

Relationships are defined within the Matrix Window of QFD/CAPTURE. There are two different methods of defining the Relationships. The Spreadsheet View provides a familiar format for entering the relationship values. The Tree View provides a snapshot of the relationship values for each Input row, and makes it easier to assess each relationship's value relative to the other Relationships. The Spreadsheet View presents the Lists being related in a spreadsheet format. The entries in the Input Lists form the row headings and the entries in the Output Lists form the column headings. The cells within the spreadsheet contain the relationships between the Input and Output entries. The Tree View presents all of the Output List Entries which are related to the currently selected Input List Entry using a tree structure. This allows the team to consider the relative strength of the relationships. It is equivalent to defining an entire row of relationships across a Matrix.

What scales should we use?

The scales used to define the relationships can have a significant impact on the prioritization of the Output List Entries. The main consideration is the trade-off between the number of levels in a scale, the speed of relationship definition, and the relative accuracy of the resulting prioritization. In general, the more levels in the scale, the more accurate the relative prioritization. The different values of the scale allow the team to indicate the levels of relationship that the Output List Entries have with the Input List Entries. For example, the team may choose to use relationships with values of 1 through 10. Using this scale, a value of 6 would indicate that one Output List Entry is twice as important to the satisfaction of an Input List Entry as an Output List Entry with a value of 3. Relationship definition usually goes much faster if the team limits its choices of relationship values to just a few. Standard QFD practice usually supports the values 1, 3, and 9. This standard QFD scale accentuates the strong relationships (value of 9). Output List Entries with several strong relationships to Input List Entries will tend to be given a higher level of priority than Output List Entries related to many Input List Entries with either moderate or weak values. Other common scales include 1, 2, 3 and 1, 3, 5. With greater frequency, teams are defining relationships using advanced methods such as the Analytic Hierarchy Process to establish scales with an infinite number of levels. The resulting relationship values usually represent the percent contribution of each Output List Entry to the selected Input List Entry. QFD/CAPTURE supports all of these different scales. Each time a new matrix is created, the user is given the opportunity to specify which of the standard scales will be used for the relationships. You may define your own unique scales as well. The software will also allow the team to define relationship values as real numbers that represent percentages. (A small worked sketch of scale-based prioritization appears at the end of this section.)

Identifying Tradeoffs

Why evaluate Tradeoffs?

The tradeoffs, located in the roof of the House of Quality, indicate the synergistic or detrimental impacts of changes in the Design Measures. They are used to identify critical compromises in the design. Since these compromises are likely to be encountered sooner or later, they may as well be examined as part of the QFD effort so that any required design changes are as inexpensive as possible.

How should we evaluate Tradeoffs?

As with other matrices, the team should agree upon the question that it will ask in order to define the Relationships of this Matrix. A common question used is "If we improve our performance against this Measure, what is the impact on this other Measure?" The team will determine whether improving performance on one Measure helps or hurts the product's performance against another Measure. Generally, positive and negative values are used to indicate the positive or negative impact. The Tradeoffs Scale provided by QFD/CAPTURE can be used as a scale for Relationships defined within this Matrix. QFD/CAPTURE allows a team to capture the tradeoffs it identifies. A matrix is created to capture the tradeoff information. One list forms both the input and the output of the matrix.
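To make the scale-based prioritization concrete, here is a minimal sketch using the standard 1-3-9 relationship scale described above. The customer requirements, design measures, importance weights and relationship values are all hypothetical; the calculation simply multiplies each relationship value by the importance of its customer requirement and sums down each design-measure column of the matrix.

    # Hypothetical customer requirements with importance weights (e.g. on a 1-5 scale)
    requirements = {"keeps food hot": 5, "arrives on time": 4, "easy to order": 2}

    # Hypothetical relationship matrix using the standard QFD 1/3/9 scale
    # (blank cells, i.e. no relationship, are simply omitted)
    relationships = {
        "keeps food hot":  {"insulated packaging": 9, "delivery radius": 3},
        "arrives on time": {"delivery radius": 9, "order tracking": 3},
        "easy to order":   {"order tracking": 9},
    }

    # Priority of each design measure = sum over requirements of (importance x relationship)
    priorities = {}
    for req, weight in requirements.items():
        for measure, strength in relationships[req].items():
            priorities[measure] = priorities.get(measure, 0) + weight * strength

    for measure, score in sorted(priorities.items(), key=lambda kv: kv[1], reverse=True):
        print(measure, score)

The measure with the largest weighted sum is the one whose control most strongly drives customer satisfaction under the assumed relationships.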


How should we document Actions?

If a trade-off is identified, there is usually some action required in order to reduce the impact or work around the potential compromise. These actions can be documented in several ways. One approach is to create a document within QFD/CAPTURE and record each action as a paragraph within the document. Another approach would be to define a list of actions. This would give the team the opportunity to relate actions to the Customer Requirements, Design Measures, or any other List defined in the project. This approach would support prioritization of the actions based upon their effect on the satisfaction of the related Input List.

1.5 DIMENSIONS OF PRODUCT AND SERVICE QUALITY

When it comes to measuring the quality of your services, it helps to understand the concepts of product and service dimensions. Users may want a keyboard that is durable and flexible for use on wireless carts. Customers may want a service desk assistant who is empathetic and resourceful when they report issues. Quality is multidimensional. Product and service quality are composed of a number of dimensions, which determine how customer requirements are achieved. It is therefore essential that you consider all of the dimensions that may be important to your customers.

Product quality has two dimensions:


- Physical dimension - A product's physical dimension measures the tangible product itself and includes such things as length, weight, and temperature.
- Performance dimension - A product's performance dimension measures how well a product works and includes such things as speed and capacity.

While performance dimensions are more difficult to measure and obtain than physical dimensions, the effort provides more insight into how the product satisfies the customer. Like product quality, service quality has several dimensions:

- Responsiveness - Responsiveness refers to the reaction time of the service.
- Assurance - Assurance refers to the level of certainty a customer has regarding the quality of the service provided.
- Tangibles - Tangibles refer to a service's look or feel.
- Empathy - Empathy is when a service employee shows that she understands and sympathizes with the customer's situation. The greater the level of this understanding, the better. Some situations require more empathy than others.

- Reliability - Reliability refers to the dependability of the service providers and their ability to keep their promises.

The quality of products and services can be measured by their dimensions. Evaluating all dimensions of a product or service helps to determine how well it stacks up against meeting the customer requirements.

Quality Framework

Garvin proposes eight critical dimensions or categories of quality that can serve as a framework for strategic analysis: performance, features, reliability, conformance, durability, serviceability, aesthetics, and perceived quality.

1. Performance

Performance refers to a product's primary operating characteristics. For an automobile, performance would include traits like acceleration, handling, cruising speed, and comfort. Because this dimension of quality involves measurable attributes, brands can usually be ranked objectively on individual aspects of performance. Overall performance rankings, however, are more difficult to develop, especially when they involve benefits that not every customer needs.

2. Features

Features are usually the secondary aspects of performance, the "bells and whistles" of products and services, those characteristics that supplement their basic functioning. The line separating primary performance characteristics from secondary features is often difficult to draw. What is crucial is that features involve objective and measurable attributes; objective individual needs, not prejudices, affect their translation into quality differences.

3. Reliability

This dimension reflects the probability of a product malfunctioning or failing within a specified time period. Among the most common measures of reliability are the mean time to first failure, the mean time between failures, and the failure rate per unit time. Because these measures require a product to be in use for a specified period, they are more relevant to durable goods than to products or services that are consumed instantly.

4. Conformance

Conformance is the degree to which a product's design and operating characteristics meet established standards. The two most common measures of failure in conformance are defect rates in the factory and, once a product is in the hands of the customer, the incidence of service calls. These measures neglect other deviations from
standard, like misspelled labels or shoddy construction, that do not lead to service or repair.

5. Durability

A measure of product life, durability has both economic and technical dimensions. Technically, durability can be defined as the amount of use one gets from a product before it deteriorates. Alternatively, it may be defined as the amount of use one gets from a product before it breaks down and replacement is preferable to continued repair.

6. Serviceability

Serviceability is the speed, courtesy, competence, and ease of repair. Consumers are concerned not only about a product breaking down but also about the time before service is restored, the timeliness with which service appointments are kept, the nature of dealings with service personnel, and the frequency with which service calls or repairs fail to correct outstanding problems. In those cases where problems are not immediately resolved and complaints are filed, a company's complaint-handling procedures are also likely to affect customers' ultimate evaluation of product and service quality.

7. Aesthetics

Aesthetics is a subjective dimension of quality. How a product looks, feels, sounds, tastes, or smells is a matter of personal judgment and a reflection of individual preference. On this dimension of quality, it may be difficult to please everyone.

8. Perceived Quality

Consumers do not always have complete information about a product's or service's attributes; indirect measures may be their only basis for comparing brands. A product's durability, for example, can seldom be observed directly; it must usually be inferred from various tangible and intangible aspects of the product. In such circumstances, images, advertising, and brand names - inferences about quality rather than the reality itself - can be critical.

1.6 COST OF QUALITY

Called the "price of nonconformance" (Philip Crosby) or the "cost of poor quality" (Joseph Juran), the term Cost of Quality refers to the costs associated with providing a poor-quality product or service.

Why is it important?

Quality processes cannot be justified simply because "everyone else is doing them," but return on quality (ROQ) has dramatic impacts as companies mature.

Research shows that the costs of poor quality can range from 15% to 40% of business costs (e.g., rework, returns or complaints, reduced service levels, lost revenue). Most businesses do not know what their quality costs are because they do not keep reliable statistics. Finding and correcting mistakes consumes an inordinately large portion of resources. Typically, the cost to eliminate a failure in the customer phase is five times greater than it is at the development or manufacturing phase. Effective quality management decreases production costs because the sooner an error is found and corrected, the less costly it will be.

When to use it?

Cost of quality comprises four parts:

External Failure Cost: cost associated with defects found after the customer receives the product or service ex: processing customer complaints, customer returns, warranty claims, product recalls.

Internal Failure Cost : cost associated with defects found before the customer receives the product or service ex: scrap, rework, re-inspection, re-testing, material review, material downgrades.

Inspection (appraisal) Cost: cost incurred to determine the degree of conformance to quality requirements (measuring, evaluating or auditing) ex: inspection, testing, process or service audits, calibration of measuring and test equipment.

Prevention Cost: Cost incurred to prevent (keep failure and appraisal cost to a minimum) poor quality ex: new product review, quality planning, supplier surveys, process reviews, quality improvement teams, education and training.
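As a minimal illustration of how these four categories might be quantified and compared, the sketch below groups hypothetical cost items (invented figures, not from the text) into the four buckets and reports each bucket's share of the total cost of quality.

    # Hypothetical monthly quality-cost items, tagged with one of the four categories
    cost_items = [
        ("warranty claims",     "external failure", 40_000),
        ("customer returns",    "external failure", 15_000),
        ("scrap",               "internal failure", 25_000),
        ("rework",              "internal failure", 20_000),
        ("incoming inspection", "appraisal",        10_000),
        ("calibration",         "appraisal",         5_000),
        ("quality planning",    "prevention",        8_000),
        ("training",            "prevention",        7_000),
    ]

    totals = {}
    for _, category, amount in cost_items:
        totals[category] = totals.get(category, 0) + amount

    grand_total = sum(totals.values())
    for category, amount in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{category:17s} {amount:8,d}  {amount / grand_total:5.1%}")

Repeating such a breakdown over time shows whether spending is shifting from failure costs towards prevention and appraisal, which is the pattern the following discussion describes.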

How to use it?

Gather some basic information about the number of failures in the system; apply some basic assumptions to that data in order to quantify it; chart the data based on the four elements listed above and study it; allocate resources to combat the weak spots; and repeat this study on a regular basis to evaluate your performance.

The Cost of Quality has another version too:

1. Like all things, there is a price to pay for quality. This total cost can be split into two fundamental areas:

a. Nonconformance. This area covers the price paid by not having quality systems or a quality product. Examples of this are:
(1) Rework. Doing the job over again because it wasn't right the first time.
(2) Scrap. Throwing away the results of your work because it is not up to the required standard.
(3) Waiting. Time wasted whilst waiting for other people.
(4) Down time. Not being able to do your job because a machine is broken.

b. Conformance. Conformance is an aim of quality assurance. This aim is achieved at a price. Examples of this are:
(1) Documentation. Writing work instructions, technical instructions and producing paperwork.
(2) Training. On-the-job training, quality training, etc.
(3) Auditing. Internal, external and extrinsic.
(4) Planning. Prevention, doing the right thing first time and poka yoke.
(5) Inspection. Vehicles, equipment, buildings and people.

2. These two main areas can be split further, as shown below:

FIGURE 1.3

This shows the four segments of quality costs:

a. Prevention. This area covers avoiding defects (poka yoke), planning, preparation, training, preventative maintenance and evaluation.
b. Appraisal. This area covers finding defects by inspection (poka yoke), audit, calibration, test and measurement.
c. Internal Failure. This area covers the costs that are borne by the organization itself, such as scrap, rework, redesign, modifications, corrective action, down time, concessions and overtime.
d. External Failure. This area covers the costs that are borne by the customer, such as equipment failure, down time, warranty, administrative cost in dealing with failure and the loss of goodwill.

3. Whilst aiming to reduce failure through appraisal and prevention, it must be clear that these activities also have a cost, as shown below:

FIGURE 1.4

4. The graph shows that there is a minimum total quality cost, which is a combination of prevention, appraisal and failure costs. Reducing any of these reduces the total. The key to minimum cost is striking the correct balance between the three.

5. Clearly, prevention reduces both appraisal and failure costs; however, eventually the cost of prevention itself starts to increase the total cost, so it must be controlled and set at an effective level.

6. The next graph shows that when Total Quality is initially introduced into an organisation, there are huge savings that can be made:

FIGURE 1.5

7. However, when Total Quality is introduced into a well-organised system that already uses inspection as its major standard setter, the benefits are not so dramatic. The main benefits to be achieved are within management: the fat of the organisation can then be cut.

FIGURE 1.6

8. The graph below shows the four stages of Total Quality acceptance and implementation and what happens, theoretically, to the four segments of the cost of quality:

FIGURE 1.7

9. The minimum total cost, shown in Figure 1.8, is achieved at about 98% perfection. This percentage is also known as "best practice": beyond this point, the cost of achieving a further improvement outweighs the benefits of that improvement.
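The balance described in paragraphs 4 to 9 can be sketched numerically. The cost model below is purely illustrative: the coefficients are invented and chosen so that the minimum falls near 98% conformance, mirroring the best-practice figure quoted above. Prevention and appraisal costs rise as conformance approaches 100%, failure costs fall, and the total is minimised short of perfection.

    # Illustrative cost model per conformance level q (fraction of output that is defect-free)
    def prevention_appraisal(q):
        return 0.8 / (1 - q)      # rises steeply as q approaches 100% conformance

    def failure(q):
        return 2000 * (1 - q)     # falls as fewer defects reach the customer

    levels = [0.90, 0.95, 0.98, 0.99, 0.999]
    costs = [(q, prevention_appraisal(q) + failure(q)) for q in levels]

    for q, total in costs:
        print(f"{q:.3f}  total cost = {total:8.1f}")

    best_q, best_cost = min(costs, key=lambda pair: pair[1])
    print("minimum of these levels:", best_q)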


FIGURE 1.8

SUMMARY

The eight principles that act as a base for defining quality management are presented in detail. Total Quality Management is presented as a framework comprising three paradigms, each complementing the others. Process quality, the involvement of leaders and employees, and the culture of the organization all lead towards the establishment of TQM in an organization. The benefits are many, and they are illustrated through the example of a hotel. Vision, mission and policy statements on quality are presented using the examples of SAP, Honeywell Technology Solutions Lab and the Department of Defense, USA. The indispensability of the customer, and the focus required on the customer, are emphasized and presented in detail. The quality dimensions of product and service are deliberated in detail, and the cost of quality, its importance and its application areas are elaborated for the benefit of learners.

REVIEW QUESTIONS

1. Enumerate the various principles of quality management and explain their usefulness to managers.
2. "The TQM framework is dynamic and fast changing." Critically examine this statement.
3. Describe the benefits of TQM in streamlining the operations of an organization.
4. Discuss the importance of vision, mission and policy statements on quality in organizations, and describe the process of their evolution.
5. Explain the impact on quality due to non-retention of customers.
6. Explain the various cost elements involved in achieving quality in an organization and demonstrate how they interact with each other.

UNIT-II PRINCIPLES AND PHILOSOPHIES OF QUALITY MANAGEMENT


INTRODUCTION
Different schools of thought on management dominate the minds of industrialists and practitioners. The western school of thought accelerates the decision-making process, but when it comes to the stages of implementation there is a comparative slow-down. On the other side, Japanese management spends much more time arriving at a particular decision, because of which the implementation is faster. Many quality management principles and philosophies have emerged from Japanese soil. The notable contributions and the profiles of the contributors are presented in this part. This unit deals with overviews of the contributions of Walter Shewhart, Deming, Juran, Crosby, Masaaki Imai, Feigenbaum, Ishikawa, Taguchi and Shigeo Shingo, along with the concept of quality circles, the Japanese 5S principles and the 8D methodology.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:
- Have an understanding of western and Japanese thinking on quality
- Appreciate the evolution of various dominant techniques in quality
- Understand the process of evolution of various techniques
- Know the contributors' profiles and their particular contributions


2.1 OVERVIEW OF THE CONTRIBUTIONS OF WALTER SHEWHART

Walter A. Shewhart

Walter Andrew Shewhart (pronounced like "Shoe-heart"; March 18, 1891 - March 11, 1967) was an American physicist, engineer and statistician, sometimes known as the father of statistical quality control. W. Edwards Deming said of him: "As a statistician, he was, like so many of the rest of us, self-taught, on a good background of physics and mathematics."

Born in New Canton, Illinois to Anton and Esta Barney Shewhart, he attended the University of Illinois before being awarded his doctorate in physics from the University of California, Berkeley in 1917.

Work on industrial quality

Bell Telephone's engineers had been working to improve the reliability of their transmission systems. Because amplifiers and other equipment had to be buried
underground, there was a business need to reduce the frequency of failures and repairs. When Dr. Shewhart joined the Western Electric Company Inspection Engineering Department at the Hawthorne Works in 1918, industrial quality was limited to inspecting finished products and removing defective items. That all changed on May 16, 1924. Dr. Shewhart's boss, George D. Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it, set forth all of the essential principles and considerations which are involved in what we know today as process quality control."

Shewhart's work pointed out the importance of reducing variation in a manufacturing process and the understanding that continual process adjustment in reaction to nonconformance actually increased variation and degraded quality. Shewhart framed the problem in terms of assignable-cause and chance-cause variation and introduced the control chart as a tool for distinguishing between the two. Shewhart stressed that bringing a production process into a state of statistical control, where there is only chance-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes never produce a normal distribution curve (a Gaussian distribution, also commonly referred to as a bell curve). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.

Shewhart worked to advance the thinking at Bell Telephone Laboratories from their foundation in 1925 until his retirement in 1956, publishing a series of papers in the Bell System Technical Journal. His work was summarised in his book Economic Control of Quality of Manufactured Product (1931). Shewhart's charts were adopted by the American Society for Testing and Materials (ASTM) in 1933 and advocated to improve production during World War II in American War Standards Z1.1-1941, Z1.2-1941 and Z1.3-1942.
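A minimal sketch of one common form of Shewhart's control chart, the individuals and moving-range (XmR) chart, is given below. The data are hypothetical; the 2.66 factor is the conventional constant for converting the average moving range into approximate three-sigma limits for individual values. Points outside the limits suggest assignable-cause variation worth investigating; points inside are consistent with chance-cause variation only.

    def xmr_limits(values):
        """Centre line and approximate three-sigma limits for an individuals (XmR) chart."""
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        centre = sum(values) / len(values)
        avg_mr = sum(moving_ranges) / len(moving_ranges)
        return centre, centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

    # Hypothetical daily measurements of some process characteristic
    data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 11.9]

    centre, lcl, ucl = xmr_limits(data)
    print(f"centre line {centre:.2f}, limits [{lcl:.2f}, {ucl:.2f}]")
    for i, x in enumerate(data, start=1):
        if not lcl <= x <= ucl:
            print(f"point {i} ({x}) is outside the limits: look for an assignable cause")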


Later work
From the late 1930s onwards, Shewhart's interests expanded out from industrial quality to wider concerns in science and statistical inference. The title of his second
book, Statistical Method from the Viewpoint of Quality Control (1939), asks the audacious question: what can statistical practice, and science in general, learn from the experience of industrial quality control?

Shewhart's approach to statistics was radically different from that of many of his contemporaries. He possessed a strong operationalist outlook, largely absorbed from the writings of pragmatist philosopher C. I. Lewis, and this influenced his statistical practice. In particular, he had read Lewis's Mind and the World Order many times. Though he lectured in England in 1932 under the sponsorship of Karl Pearson (another committed operationalist), his ideas attracted little enthusiasm within the English statistical tradition. The British standards nominally based on his work, in fact, diverge on serious philosophical and methodological issues from his practice.

His more conventional work led him to formulate the statistical idea of tolerance intervals and to propose his data presentation rules, which are listed below:
1. Data has no meaning apart from its context.
2. Data contains both signal and noise. To be able to extract information, one must separate the signal from the noise within the data.

Walter Shewhart visited India in 1947-48 under the sponsorship of P. C. Mahalanobis of the Indian Statistical Institute. Shewhart toured the country, held conferences and stimulated interest in statistical quality control among Indian industrialists. He died at Troy Hills, New Jersey in 1967.

Walter Shewhart's invention of statistical control charts pioneered industrial quality control methods. As a pioneer of modern quality control, he:
- Recognized the need to separate variation into assignable and unassignable causes (which define when a process is in control).
- Founded the control chart (e.g., the X-bar and R chart).
- Originated the plan-do-check-act cycle.
- Was perhaps the first to successfully integrate statistics, engineering and economics.
- Defined quality in terms of objective and subjective quality: objective quality is the quality of a thing independent of people; subjective quality is relative to how people perceive it (value).
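As a minimal illustration of the kind of chart Shewhart founded (a sketch, not drawn from this text; the function, sample data and constants shown are illustrative), the 3-sigma control limits of the X-bar and R charts can be computed from rational subgroups using the standard tabulated constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114):

# Illustrative sketch: X-bar and R chart limits from subgroup data.
def xbar_r_limits(subgroups):
    """subgroups: list of equal-sized samples, e.g. five measurements per hour."""
    A2, D3, D4 = 0.577, 0.0, 2.114                    # tabulated constants, n = 5
    xbars = [sum(s) / len(s) for s in subgroups]      # subgroup means
    ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)                 # grand mean (centre line)
    rbar = sum(ranges) / len(ranges)                  # mean range (centre line)
    return {
        "xbar_chart": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r_chart": (D3 * rbar, rbar, D4 * rbar),
    }

# Example: ten hypothetical subgroups of five measured diameters
data = [[9.9, 10.1, 10.0, 10.2, 9.8]] * 10
print(xbar_r_limits(data))

Points falling outside these limits signal assignable-cause variation worth investigating; points inside them reflect the chance-cause variation that the process produces even when it is in control.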
Influence

In 1938 his work came to the attention of physicists W. Edwards Deming and Raymond T. Birge. The two had been deeply intrigued by the issue of measurement error in science and had published a landmark paper in Reviews of Modern Physics in 1934. On reading of Shewhart's insights, they wrote to the journal to wholly recast their approach in the terms that Shewhart advocated. The encounter began a long collaboration between Shewhart and Deming that involved work on productivity during World War II and Deming's championing of Shewhart's ideas in Japan from 1950 onwards. Deming developed some of Shewhart's methodological proposals around scientific inference and named his synthesis the Shewhart cycle.

Achievements and honours

In his obituary for the American Statistical Association, Deming wrote of Shewhart: "As a man, he was gentle, genteel, never ruffled, never off his dignity. He knew disappointment and frustration, through failure of many writers in mathematical statistics to understand his point of view." He was the founding editor of the Wiley Series in Mathematical Statistics, a role that he maintained for twenty years, always championing freedom of speech and confident to publish views at variance with his own.

His honours included:
- Founding member, fellow and president of the Institute of Mathematical Statistics;
- Founding member, first honorary member and first Shewhart Medalist of the American Society for Quality Control;
- Fellow and president of the American Statistical Association;
- Fellow of the International Statistical Institute;
- Honorary fellow of the Royal Statistical Society;
- Holley Medal of the American Society of Mechanical Engineers;
- Honorary Doctor of Science, Indian Statistical Institute, Calcutta.

Three postulates from his writings summarise his view of chance and control:
1. All chance systems of causes are not alike in the sense that they enable us to predict the future in terms of the past.
2. Constant systems of chance causes do exist in nature.
3. Assignable causes of variation may be found and eliminated.
Based upon evidence such as already presented, it appears feasible to set up criteria by which to determine when assignable causes of variation in quality have been eliminated so that the product may then be considered to be controlled within limits. This state of control appears to be, in general, a kind of limit to which we may expect to go economically in finding and removing causes of variability without changing a major portion of the manufacturing process as, for example, would be involved in the substitution of new materials or designs.

The definition of random in terms of a physical operation is notoriously without effect on the mathematical operations of statistical theory because, so far as these mathematical operations are concerned, random is purely and simply an undefined term. The formal and abstract mathematical theory has an independent and sometimes lonely existence of its own. But when an undefined mathematical term such as random is given a definite operational meaning in physical terms, it takes on empirical and practical significance. Every mathematical theorem involving this mathematically undefined concept can then be given the following predictive form: if you do so and so, then such and such will happen.

Every sentence, in order to have definite scientific meaning, must be practically or at least theoretically verifiable as either true or false upon the basis of experimental measurements either practically or theoretically obtainable by carrying out a definite and previously specified operation in the future. The meaning of such a sentence is the method of its verification. In other words, the fact that the criterion we happen to use has a fine ancestry of highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works.

Presentation of data depends on the intended actions:
Rule 1. Original data should be presented in a way that will preserve the evidence in the original data for all the predictions assumed to be useful.
Rule 2. Any summary of a distribution of numbers in terms of symmetric functions should not give an objective degree of belief in any one of the inferences or predictions to be made therefrom that would cause human action significantly different from what this action would be if the original distribution had been taken as evidence.

The original notions of Total Quality Management and continuous improvement trace back to a former Bell Telephone employee named Walter Shewhart. One of W. Edwards Deming's teachers, he preached the importance of adapting management processes to create profitable situations for both businesses and consumers, promoting the utilization of his own creation, the SPC control chart.

Dr. Shewhart believed that lack of information greatly hampered the efforts of control and management processes in a production environment. In order to aid a manager in making scientific, efficient, economical decisions, he developed statistical process control methods. Many of the modern ideas regarding quality owe their inspiration to Dr. Shewhart. He also developed the Shewhart cycle, a learning and improvement cycle that combines creative management thinking with statistical analysis. This cycle contains four continuous steps: Plan, Do, Study and Act. These steps (commonly referred to as the PDSA cycle), Shewhart believed, ultimately lead to total quality improvement. The cycle draws its structure from the notion that constant evaluation of management practices, as well as the willingness of management to adopt and disregard unsupported ideas, are keys to the evolution of a successful enterprise.

2.2 OVERVIEW OF THE CONTRIBUTIONS OF DEMING

Understanding the Deming Management Philosophy

FIGURE 2.1

W. Edwards Deming called it the Shewhart cycle, giving credit to its inventor, Walter A. Shewhart. The Japanese have always called it the Deming cycle in honor of the contributions Deming made to Japan's quality improvement efforts over many years. Some people simply call it the PDCA (plan, do, check, act) cycle. Regardless of its name, the idea is well known to process improvement engineers, quality professionals, quality improvement teams and others involved in continuous improvement efforts. The model can be used for the ongoing improvement of almost anything, and it contains the following four continuous steps: Plan, Do, Study and Act. Students, facilitated by their teacher, should be able to complete the following steps using information from a classroom data center. Building staffs, facilitated by their principal or district
quality facilitators, should be able to complete the steps of PDSA using information from a building data center. Similarly, district-level strategic planning committees can use the same process.

In the first step (PLAN), based on data, identify a problem worthy of study to effect improvement. Define the specific changes you want. Look at the data (numerical information) related to the current status. List a numerical measure for the future target.

In the second step (DO), the plan is carried out, preferably on a small scale. Identify the process owners who are on the team. Within the action plan and on the storyboard, explain why new steps are necessary. Collect and chart the baseline data. Form a hypothesis of possible causes that are related to the current performance results. A quality tool such as a fish-bone diagram or an affinity diagram would be useful at this stage. Implement the strategy to bring about the change.

In the third step (STUDY), the effects of the plan are observed. Monitor the data. On the baseline data chart, continue with the next data points. Explain when and how data analysis with appropriate people takes place. Explain what is being learned through the improvement process. Identify trends, if any can be discerned. Conduct a gap analysis. Use comparisons and benchmark (best practice) data. If the data result is negative, undergo another cycle of PDSA.

In the last step (ACT), the results are studied to determine what was learned and what can be predicted. If the data result is positive, standardize the process or strategy and keep the new process going. Repeat the cycle, starting with PLAN, to define a new change you want.

William Edwards Deming was an American statistician, college professor, author, lecturer, and consultant. Deming is widely credited with improving production in the United States during World War II, although he is perhaps best known for his work in Japan. There, from 1950 onward, he taught top management how to improve design (and thus service), product quality, testing and sales (the latter through global markets). Deming made a significant contribution to Japan becoming renowned for producing innovative high-quality products. Deming is regarded as having had more impact upon Japanese manufacturing and business than any other individual not of Japanese heritage.

After World War II (1947), Deming was involved in early planning for the 1951 Japanese Census. He was asked by the Department of the Army to assist in this census. While he was there, his expertise in quality control techniques, combined with his involvement in Japanese society, led to his receiving an invitation from the Japanese Union of Scientists and Engineers (JUSE). JUSE members had studied Shewhart's techniques, and as part of Japan's reconstruction efforts they sought an expert to teach statistical control. During June-

August 1950, Deming trained hundreds of engineers, managers, and scholars in statistical process control (SPC) and concepts of quality. He also conducted at least one session for top management. Deming's message to Japan's chief executives: improving quality will reduce expenses while increasing productivity and market share. Perhaps the best known of these management lectures was delivered at the Mt. Hakone Conference Center in August of 1950. A number of Japanese manufacturers applied his techniques widely and experienced theretofore unheard-of levels of quality and productivity. The improved quality combined with the lowered cost created new international demand for Japanese products.

Deming declined to receive royalties from the transcripts of his 1950 lectures, so JUSE's board of directors established the Deming Prize (December 1950) to repay him for his friendship and kindness. The Deming Prize, especially the Deming Application Prize that is given to companies, has exerted an immeasurable influence, directly or indirectly, on the development of quality control and quality management in Japan.

In 1960, the Prime Minister of Japan (Nobusuke Kishi), acting on behalf of Emperor Hirohito, awarded Dr. Deming Japan's Order of the Sacred Treasures, Second Class. The citation on the medal recognizes Deming's contributions to Japan's industrial rebirth and its worldwide success. The first section of the meritorious service record describes his work in Japan:
1. 1947: member of the Rice Statistics Mission
2. 1950: assistant to the Supreme Commander of the Allied Powers
3. Instructor in sample survey methods in government statistics

The second half of the record lists his service to private enterprise through the introduction of epochal ideas, such as quality control and market survey techniques.

Contributions of Deming

The recovery of Japan after World War II has many explanations: Japan was forbidden to be involved in military industries, so the Japanese concentrated on consumer products; powerful conglomerates of industry and banks (zaibatsus) poured money into selected companies; the Japanese people consented to great sacrifices in order to support the recovery. The Japanese themselves point to the American W. Edwards Deming as one factor of their success. During the war, Deming was one of many who helped apply statistical quality control methods developed by Walter Shewhart at Bell Labs to help with the industrial mobilization. After the war, Deming was disappointed by American industry's
rejection of these methods. Deming visited Japan after the war as a representative of the US government, to help the Japanese set up a census. He met with Japanese engineers interested in applying Shewhart's methods. In 1950, the Japanese Union of Scientists and Engineers invited Deming to give a series of lectures on quality control, which were attended by top Japanese industrialists. Within months, they found an amazing increase in productivity, and statistical quality control took off in Japan. The top people came to Deming with a desire to learn that bordered on obsession. The Japanese integrated the statistical methods into their companies, involving all the workers in the movement to improve quality.

American industry flourished in the postwar boom in the US, but found itself getting hints and finally clear indications of Japanese competition in the 1970s. Hal Sperlich, a Ford executive, visited a Japanese auto factory in the early seventies and was amazed to find that the factory had no area dedicated to repairing shoddy work; in fact the plant had no inspectors. Sperlich left that factory somewhat shaken: in America, he thought, we have repair bins the size of football fields. William Ouchi wrote that when he began to study Japanese practices in 1973, there was little interest in the US in his findings. When his book, Theory Z, was published in 1981, interest had grown tremendously and the book was a best seller. However, even in 1981, a top officer in Motorola warned American manufacturers of computer chips that they were complacent and not paying enough attention to Japanese quality. In 1981, Ford engineers compared automatic transmissions, some built by Mazda for the Ford Escorts and some built by Ford. The ones made in Japan were well liked by our customers; many of those from Ohio were not. Ours were more erratic; many shifted poorly through the gears, and customers said they did not like the way they performed. The difference was due to the tighter tolerances in the Japanese-made transmissions.

NBC aired a documentary, "If Japan Can, Why Can't We?". The documentary explained what Japan was doing and especially stressed the contributions of Deming. Donald Peterson, then president of Ford, was one of many CEOs motivated to call Deming; Deming said his phone rang off the hook. Deming began with statistical quality control, but he recognized that success depended on involving everyone. His 14 points are a manifesto for worker involvement and worker pride. Peterson sent teams from Ford to visit Japanese companies: "Before those visits, many of the people at Ford believed that the Japanese were succeeding because they used highly sophisticated machinery. Others thought their industry was orchestrated by Japan's government. The value of our visits, however, lay in Ford people's discovery that the real secret was how the people worked together - how the Japanese companies organized their people into teams, trained their workers with the skills they needed, and gave them the power to do their jobs properly. Somehow or other, they had managed to hold on to a fundamental simplicity of human enterprise, while we built layers of bureaucracy."

In a return to using the brain, not just the brawn, of the worker, the Japanese methods, building on Deming, actually de-Taylorize work. The classical Taylor model of scientific management, which favored the separation of mental from physical labor and the retention of all decision-making in the hands of management, is abandoned in favor of a cooperative team approach designed to harness the full mental capabilities and work experience of everyone involved in the process. While Deming's principles, as filtered through the Japanese methods, argue for reskilling work and reject Taylor's belief that workers should just do what they are told, Taylorism lives on, only now it is called McDonaldization. Taiichi Ohno's Toyota Production System, for example, was developed between 1945 and 1970.

While the US had much to learn from Japanese methods, careful observers realized that the differences between Japanese and American societies were so great that not all ideas could be imported (some of the cooperation among Japanese companies would violate US antitrust laws), that the Japanese methods were not always what they seemed (for example, lifetime employment was limited to a minority), and that American companies, unheralded, were already using many of the new Japanese methods. Ouchi, in Theory Z, examined Japanese practices in their treatment of workers, distilled them to the central ideas which he called Theory Z, and discovered that the best examples of Theory Z management were American companies.

2.3 OVERVIEW OF THE CONTRIBUTIONS OF JURAN

Joseph Juran

Juran expressed his approach to quality in the form of the Quality Trilogy. Managing for quality involves three basic processes:

Quality Planning: This involves identifying the customer (both internal and external), determining their needs, and designing goods and services to meet those needs at the established quality and cost goals. Then design the process and transfer it to the operators.

Quality Control: Establish standards or critical elements of performance, identify measures and methods of measurement, compare actual to standard and take action if necessary.

Quality Improvement: Identify appropriate improvement projects, organize the team, discover the causes and provide remedies, and finally develop mechanisms to control the new process and hold the gains.

The relationship among the three processes is shown in the Quality Trilogy figure below:
FIGURE 2.2

The errors made during the initial planning result in a higher cost, which Juran labeled chronic waste: at the beginning, the process stays within control limits; a quality improvement project is then initiated and succeeds in reducing the chronic waste.

Juran also created the concept of Cost of Quality. There are four elements comprising the cost of quality:

Prevention Costs: Initial design quality and actions during product creation (e.g., marketing research, establishing product specifications, determining consumer needs, training workers, vendor evaluation, quality audit, preventive maintenance, etc.)

Appraisal Costs: Inspection and testing of raw materials, work-in-progress and finished goods, procedures for testing, training manuals, laboratories.

External Costs: Returned merchandise, making warranty repairs or refunds, credibility loss, lawsuits.

Internal Costs: Scrap, rework, redesign, downtime, broken equipment, reduced yield, selling products at a discount, etc.

The graph in the following exhibit shows that the costs of conformance (appraisal and prevention) increase as the defect rate declines, while the costs of nonconformance (internal and external failures) decrease. The trade-off leads to an optimal conformance level.
FIGURE 2.3

The Quality Trilogy

Quality Planning: Determine quality goals; implementation planning; resource planning; express goals in quality terms; create the quality plan.
Quality Control: Monitor performance; compare objectives with achievements; act to reduce the gap.
Quality Improvement: Reduce waste; enhance logistics; improve employee morale; improve profitability; satisfy customers.

Philosophy
- Management is largely responsible for quality.
- Quality can only be improved through planning.
- Plans and objectives must be specific and measurable.
- Training is essential and starts at the top.
- A three-step process of planning, control and action.

The Quality Planning Roadmap
Step 1 : Identify who are the customers
Step 2 : Determine the needs of those customers
Step 3 : Translate those needs into our language (the language of the organization)
Step 4 : Develop a product that can respond to those needs
Step 5 : Optimize the product features so as to meet the company's needs as well as the customers' needs
Step 6 : Develop a process which is able to produce the product
Step 7 : Optimize the process
Step 8 : Prove that the process can produce the product under operating conditions
Step 9 : Transfer the process to operations

Ten Steps to Continuous Quality Improvement
Step 1 : Create awareness of the need and opportunity for quality improvement
Step 2 : Set goals for continuous improvement
Step 3 : Build an organization to achieve the goals by establishing a quality council, identifying problems, selecting a project, appointing teams and choosing facilitators
Step 4 : Give everyone training
Step 5 : Carry out projects to solve problems
Step 6 : Report progress
Step 7 : Show recognition
Step 8 : Communicate results
Step 9 : Keep a record of successes
Step 10 : Incorporate annual improvements into the company's regular systems and processes and thereby maintain momentum

2.4 OVERVIEW OF THE CONTRIBUTIONS OF CROSBY

Philip Crosby

Crosby described quality as free and argued that zero defects was a desirable and achievable goal. He articulated his view of quality as the four absolutes of quality management:
1. Quality means conformance to requirements. Requirements need to be clearly specified so that everyone knows what is expected of them.
2. Quality comes from prevention, and prevention is a result of training, discipline, example, leadership, and more.
3. The quality performance standard is zero defects. Errors should not be tolerated.
4. Quality measurement is the price of nonconformance.

In addition to the above, Crosby developed a quality management maturity grid in which he listed five stages of management's maturity with quality issues. These five are Uncertainty, Awakening, Enlightenment, Wisdom and Certainty. In the first stage, management fails to see quality as a tool; problems are handled by firefighting and are rarely resolved; there are no organized quality improvement activities. By the last stage, the company is convinced that quality is essential to the company's success; problems are generally prevented; and quality improvement activities are regular and continuing.

Five absolutes of quality management: Philip B. Crosby
- Quality is defined as conformance to requirements, not as goodness nor elegance.
- There is no such thing as a quality problem.
- It is always cheaper to do it right the first time.
- The only performance measurement is the cost of quality.
- The only performance standard is zero defects.

Crosby's Quality Vaccine
- Integrity
- Dedication to communication and customer satisfaction
- Company-wide policies and operations which support the quality thrust

FIGURE 2.4

Fourteen Step Quality Programme: Philip B. Crosby

Step 1  Establish management commitment - it is seen as vital that the whole management team participates in the programme; a half-hearted effort will fail.

Step 2  Form quality improvement teams - the emphasis here is on multidisciplinary team effort. An initiative from the quality department alone will not be successful. It is considered essential to build team work across arbitrary, and often artificial, organizational boundaries.

Step 3  Establish quality measurements - these must apply to every activity throughout the company. A way must be found to capture every aspect: design, manufacturing, delivery and so on. These measurements provide a platform for the next step.

Step 4  Evaluate the cost of quality - this evaluation must highlight, using the measures established in the previous step, where quality improvement will be profitable.

Step 5  Raise quality awareness - this is normally undertaken through the training of managers and supervisors, through communications such as videos and books, and by displays of posters, etc.

Step 6  Take action to correct problems - this involves encouraging staff to identify and rectify defects, or pass them on to higher supervisory levels where they can be addressed.

Step 7  Zero defects planning - establish a committee or working group to develop ways to initiate and implement a zero defects programme.

Step 8  Train supervisors and managers - this step is focused on achieving understanding by all managers and supervisors of the steps in the quality improvement programme in order that they can explain it in turn.

Step 9  Hold a Zero Defects day - to establish the attitude and expectation within the company. Crosby sees this as being achieved in a celebratory atmosphere accompanied by badges, buttons and balloons.

Step 10  Encourage the setting of goals for improvement - goals are of course of no value unless they are related to appropriate time-scales for their achievement.

Step 11  Obstacle reporting - this is encouragement to employees to advise management of the factors which prevent them from achieving error-free work. This might cover defective or inadequate equipment, poor quality components, etc.

Step 12  Recognition for contributors - Crosby considers that those who contribute to the programme should be rewarded through a formal, although non-monetary, reward scheme. Readers may be aware of the Gold Banana award given by Foxboro for scientific achievement (Peters and Waterman, 1982).

Step 13  Establish quality councils - these are essentially forums composed of quality professionals and team leaders, allowing them to communicate and determine action plans for further quality improvement.

Step 14  Do it all over again - the message here is very simple: achievement of quality is an ongoing process. However far you have got, there is always further to go!

2.5 OVERVIEW OF THE CONTRIBUTIONS OF MASAAKI IMAI

Masaaki Imai, a quality management consultant, was born in Tokyo in 1930. In 1955, he received his bachelor's degree from the University of Tokyo, where he also did graduate work in international relations. In the 1950s he worked for five years in Washington, D.C. at the Japanese Productivity Center, where his principal duty was escorting groups of Japanese business people through major U.S. plants. In 1962, he founded Cambridge Corp., an international management and executive recruiting firm based in Tokyo. As a consultant, he assisted more than 200 foreign and joint-venture companies in Japan in fields including recruiting, executive development, personnel management and organizational studies. From 1976 to 1986, Imai served as president of the Japan Federation of Recruiting and Employment Agency Associations.

In 1986, Imai established the Kaizen Institute to help Western companies introduce kaizen concepts, systems and tools. That same year, he published his book on Japanese management, Kaizen: The Key to Japan's Competitive Success. This best-selling book has since been translated into 14 languages. Other books by Imai include 16 Ways to Avoid Saying No, Never Take Yes for an Answer and Gemba Kaizen, published in 1997. To date, the Kaizen Institute operates in over 22 countries and continues to act as an enabler for companies to accomplish their manufacturing, process and service goals.

For Masaaki Imai, true total quality depends on recognising the importance of the common-sense approach of gemba (shop floor) kaizen to quality improvement, as against the technology-only approach to quality practised in the West. The production system (batch production) employed by over 90% of all the companies in the world is one of the biggest obstacles to quality improvement. A conversion from a batch to a JIT (just-in-time)/lean production system should be the most urgent task for any manufacturing company today in order to survive in the next millennium.


2.6 OVERVIEW OF THE CONTRIBUTIONS OF FEIGENBAUM

Mitchell Jay Feigenbaum (born December 19, 1944; Philadelphia, USA) is a mathematical physicist whose pioneering studies in chaos theory led to the discovery of the Feigenbaum constant. The son of a Polish and a Ukrainian Jewish immigrants, Feigenbaums education was not a happy one. Despite excelling in examinations, his early schooling at Tilden High School, Brooklyn, New York, and the City College of New York seemed unable to stimulate his appetite to learn. However, in 1964 he began his graduate studies at the Massachusetts Institute of Technology (MIT). Enrolling for graduate study in electrical engineering, he changed his area to physics. He completed his doctorate in 1970 for a thesis on dispersion relations, under the supervision of Professor Francis Low. After short positions at Cornell University and the Virginia Polytechnic Institute, he was offered a long-term post at the Los Alamos National Laboratory in New Mexico to study turbulence in fluids. Although that group of researchers was ultimately unable to unravel the currently intractable theory of turbulent fluids, his research led him to study chaotic mappings. Some mathematical mappings involving a single linear parameter exhibit the apparently random behavior, known as chaos, when the parameter lies within certain ranges. As the parameter is increased towards this region, the mapping undergoes bifurcations at precise values of the parameter. At first there is one stable point, then bifurcating to an oscillation between two values, then bifurcating again to oscillate between four values and so on. In 1975, Dr. Feigenbaum, using the small HP-65 computer he had been issued, discovered that the ratio of the difference between the values at which such successive period-doubling bifurcations occur tends to a constant of around 4.6692... He was then able to provide a mathematical proof of that fact, and he then showed that the same behavior, with the same mathematical constant, would occur within a wide class of mathematical functions, prior to the onset of chaos. For the first time, this universal result enabled mathematicians to take their first step to unravelling the apparently intractable random behavior of chaotic systems. This ratio of convergence is now known as the Feigenbaum constant. The Logistic map is a prominent example of the mappings that Feigenbaum studied in his noted 1978 article: Quantitative Universality for a Class of Nonlinear Transformations. During Dr Feigenbaums duty at the Los Alamos Lab, he acquired a unique position which led to many scientists being sorry to see him leave. When anyone in any of the many fields of work going on at the Los Alamos Lab was stuck on a problem, it eventually became a common practice to seek out Feigenbaum, and then go for a walk to discuss the problem. Dr. Feigenbaum frequently helped others to understand the
problem they were dealing with better, and he often turned out to have read a paper that would help them; he was usually able to tell them the title, authors, and publication date to make things easier, and he did so straight off the top of his head most of the time. The amount of reading he was doing must have been formidable, and that would have left many without time to do any of their assigned research. Yet, his appetite for work was such that he continued to make a significant contribution to the work that he was assigned to do. It should be noted that the people who found him helpful in this manner were working in a very wide range of different kinds of scientific work. Few would have stood a chance of being able to understand all of these in enough depth to help out; "not my field" would have been the response of most of them if they had discussed these matters with one another instead.

Feigenbaum's other contributions include important new fractal methods in cartography, starting when he was hired by Hammond to develop techniques to allow computers to assist in drawing maps. The introduction to the Hammond Atlas (1992) states: "Using fractal geometry to describe natural forms such as coastlines, mathematical physicist Mitchell Feigenbaum developed software capable of reconfiguring coastlines, borders, and mountain ranges to fit a multitude of map scales and projections. Dr. Feigenbaum also created a new computerized type placement program which places thousands of map labels in minutes, a task which previously required days of tedious labor." In 1983 he was awarded a MacArthur Fellowship, and in 1986 he was awarded the Wolf Prize in Physics. He has been Toyota Professor at Rockefeller University since 1986.

2.7 OVERVIEW OF THE CONTRIBUTIONS OF ISHIKAWA

ISHIKAWA DIAGRAM

An Ishikawa diagram, also known as a fishbone diagram or cause-and-effect diagram, is a diagram that shows the causes of a certain event. It was first used by Kaoru Ishikawa in the 1960s and is considered one of the seven basic tools of quality management: the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. Because of its shape, an Ishikawa diagram is often called a fishbone diagram; it is also known as a cause-and-effect diagram.

A common use of the Ishikawa diagram is in product design, to identify desirable factors leading to an overall effect. Mazda Motors famously used an Ishikawa diagram in the development of the Miata sports car, where the required result was "Jinba Ittai", or "Horse and Rider as One". The main causes included such aspects as touch and braking, with the lesser causes including highly granular factors such as 50/50 weight distribution and the ability to rest an elbow on top of the driver's door. Every factor identified in the diagram was included in the final design.
FIGURE 2.5 A generic Ishikawa diagram showing general and more refined causes for an event. People sometimes call Ishikawa diagrams fishbone diagrams because of their fish-like appearance. Most Ishikawa diagrams have a box at the right hand side in which is written the effect that is to be examined. The main body of the diagram is a horizontal line from which stem the general causes, represented as bones. These are drawn towards the left hand corners of the paper, and they are each labeled with the causes to be investigated. Off each of the large bones there may be smaller bones highlighting more specific aspects of a certain cause. When the most probable causes have been identified, they are written in the box along with the original effect. Definition: A graphic tool used to explore and display opinion about sources of variation in a process. (Also called a Cause-and-Effect or Fishbone Diagram.) Purpose: To arrive at a few key sources that contributes most significantly to the problem being examined. These sources are then targeted for improvement. The diagram also illustrates the relationships among the wide variety of possible contributors to the effect. The figure below shows a simple Ishikawa diagram. Note that this tool is referred to by several different names: Ishikawa diagram, Cause-and-Effect diagram, Fishbone diagram, and Root Cause Analysis. The first name is after the inventor of the tool, Kaoru Ishikawa (1969) who first used the technique in the 1960s. The basic concept in the Cause-and-Effect diagram is that the name of a basic problem of interest is entered at the right of the diagram at the end of the main bone. The main possible causes of the problem (the effect) are drawn as bones off of the main backbone. The Four-M categories are typically used as a starting point: Materials, Machines, Manpower, and Methods. Different names can be chosen to suit the problem at hand, or these general categories can be revised. The key is to have three to six main categories that encompass all possible influences. Brainstorming is typically done to add possible causes to the main bones and more specific causes to the bones on the main bones. This subdivision into ever increasing specificity
continues as long as the problem areas can be further subdivided. The practical maximum depth of this tree is usually about four or five levels. When the fishbone is complete, one has a rather complete picture of all the possibilities about what could be the root cause for the designated problem.

FIGURE 2.6 The Cause-and-Effect diagram can be used by individuals or teams; probably most effectively by a group. A typical utilization is the drawing of a diagram on a blackboard by a team leader who first presents the main problem and asks for assistance from the group to determine the main causes which are subsequently drawn on the board as the main bones of the diagram. The team assists by making suggestions and, eventually, the entire cause and effect diagram is filled out. Once the entire fishbone is complete, team discussion takes place to decide what are all the most likely root causes of the problem. These causes are circled to indicate items that should be acted upon, and the use of the tool is complete. The Ishikawa diagram, like most quality tools, is a visualization and knowledge organization tool. Simply collecting the ideas of a group in a systematic way facilitates the understanding and ultimate diagnosis of the problem. Several computer tools have been created for assisting in creating Ishikawa diagrams. A tool created by the Japanese Union of Scientists and Engineers (JUSE) provides a rather rigid tool with a limited number of bones. Other similar tools can be created using various commercial tools.
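As a minimal sketch of how such a tool might hold a fishbone internally (a hypothetical illustration, not any of the JUSE or commercial tools mentioned above), the diagram can be represented as a mapping from the effect to its major bones and their causes, here using the Four-M categories described earlier:

# Hypothetical representation of an Ishikawa diagram as a nested structure.
fishbone = {
    "effect": "High defect rate in final assembly",
    "bones": {
        "Materials": ["Inconsistent supplier batches", "Storage humidity"],
        "Machines":  ["Worn fixtures", "Uncalibrated torque tools"],
        "Manpower":  ["New operators not yet trained"],
        "Methods":   ["Work instructions out of date"],
    },
}

def print_fishbone(d):
    print("Effect:", d["effect"])
    for bone, causes in d["bones"].items():   # one major bone per category
        print(f"  {bone}:")
        for cause in causes:                  # more specific causes on each bone
            print(f"    - {cause}")

print_fishbone(fishbone)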
Only one tool has been created that adds computer analysis to the fishbone. Bourne et al. (1991) reported using Dempster-Shafer theory (Shafer and Logan, 1987) to systematically organize the beliefs about the various causes that contribute to the main problem. Based on the idea that the main problem has a total belief of one, each remaining bone has a belief assigned to it based on several factors; these include the history of problems of a given bone, events and their causal relationship to the bone, and the belief of the user of the tool about the likelihood that any particular bone is the cause of the problem.

How to Construct:
1. Place the main problem under investigation in a box on the right.
2. Have the team generate and clarify all the potential sources of variation.
3. Use an affinity diagram to sort the process variables into naturally related groups. The labels of these groups are the names for the major bones on the Ishikawa diagram.
4. Place the process variables on the appropriate bones of the Ishikawa diagram.
5. Combine each bone in turn, ensuring that the process variables are specific, measurable, and controllable. If they are not, branch or "explode" the process variables until the ends of the branches are specific, measurable, and controllable.

Tips:
- Take care to identify causes rather than symptoms.
- Post diagrams to stimulate thinking and get input from other staff.
- Self-adhesive notes can be used to construct Ishikawa diagrams; sources of variation can then be rearranged to reflect appropriate categories with minimal rework.
- Ensure that the ideas placed on the Ishikawa diagram are process variables, not special causes, other problems, tampering, etc.
- Review the quick fixes and rephrase them, if possible, so that they are process variables.

2.8 OVERVIEW OF THE CONTRIBUTIONS OF TAGUCHI

Taguchi Methods: Introduction

Dr. Genichi Taguchi has played an important role in popularising Design of Experiments (DOE). However, it would be wrong to think that the Taguchi methods are just another way of performing DOE. He has developed a complete philosophy and the associated methods for quality engineering. His most important ideas are:

- A quality product is a product that causes a minimal loss (expressed in money!) to society during its entire life. The relation between this loss and the technical characteristics is expressed by the loss function.
- Quality must be built into products and processes. There has to be much more attention to off-line quality control in order to prevent problems from occurring in production.
- Different types of noise (variation within tolerance, external conditions, dissipation from neighbouring systems, and so on) have an influence on our system and lead to deviations from the optimal condition. To avoid the influence of these noises we need to develop robust products and processes. The robustness of a system is its ability to function optimally even under changing noise conditions.

FIGURE 2.7

FIGURE 2.8
Taguchi methods

FIGURE 2.9

Taguchi methods are statistical methods developed by Genichi Taguchi to improve the quality of manufactured goods and, more recently, applied to biotechnology, marketing and advertising. Taguchi methods are controversial among many conventional Western statisticians unfamiliar with the Taguchi methodology. Taguchi's principal contributions to statistics are:
1. The Taguchi loss function;
2. The philosophy of off-line quality control; and
3. Innovations in the design of experiments.

Loss functions

Taguchi's reaction to the classical design of experiments methodology of R. A. Fisher was that it was perfectly adapted to seeking to improve the mean outcome of a process. As Fisher's work had been largely motivated by programmes to increase agricultural production, this was hardly surprising. However, Taguchi realised that in much industrial production there is a need to produce an outcome on target, for example, to machine a hole to a specified diameter or to manufacture a cell to produce a given voltage. He also realised, as had Walter A. Shewhart and others before him, that excessive variation lay at the root of poor manufactured quality and that reacting to individual items inside and outside specification was counter-productive. He therefore argued that quality engineering should start with an understanding of the cost of poor quality in various situations.

In much conventional industrial engineering, the cost of poor quality is simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to consider cost to society. Though the short-term costs may simply be those of nonconformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses are externalities and are usually ignored by manufacturers. In the wider economy, the Coase Theorem predicts that they prevent markets from operating efficiently. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to the tragedy of the commons) and that by working to minimise them, manufacturers would enhance brand reputation, win markets and generate profits.

Such losses are, of course, very small when an item is near to nominal. Donald J. Wheeler characterised the region within specification limits as the region where we deny that losses exist. As we diverge from nominal, losses grow until the point where they are too great to deny, and the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them within statistics.

Taguchi specified three situations:
1. Larger the better (for example, agricultural yield);
2. Smaller the better (for example, carbon dioxide emissions); and
3. On-target, minimum variation (for example, a mating part in an assembly).

The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function on the grounds that:
- it is the first symmetric term in the Taylor series expansion of any reasonable, real-life loss function, and so is a first-order approximation;
- total loss is measured by the variance, and as variance is additive it is an attractive model of cost; and
- there was an established body of statistical theory around the use of the least squares principle.
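For reference, the loss functions commonly quoted for these three situations (standard textbook forms, not taken verbatim from this passage) are, with y the measured value, m the target and k a cost constant:

\[ L(y) = k\,y^{2} \qquad \text{(smaller the better)} \]
\[ L(y) = k\,/\,y^{2} \qquad \text{(larger the better)} \]
\[ L(y) = k\,(y - m)^{2} \qquad \text{(on-target, minimum variation)} \]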

The squared-error loss function had been used by John von Neumann and Oskar Morgenstern in the 1930s. Though much of this thinking is endorsed by statisticians and economists in general, Taguchi extended the argument to insist that industrial experiments seek to maximize an appropriate signal-to-noise ratio representing the magnitude of the mean of a process compared to its variation. Most statisticians believe Taguchi's signal-to-noise ratios to be effective over too narrow a range of applications, and they are generally deprecated.

Off-line quality control

Taguchi realised that the best opportunity to eliminate variation is during the design of a product and its manufacturing process (Taguchi's rule for manufacturing). Consequently, he developed a strategy for quality engineering that can be used in both contexts. The process has three stages:
1. System design;
2. Parameter design; and
3. Tolerance design.

System design
This is design at the conceptual level, involving creativity and innovation.

Parameter design
Once the concept is established, the nominal values of the various dimensions and design parameters need to be set - the detailed design phase of conventional engineering. In 1802, philosopher William Paley had observed that the inverse-square law of gravitation was the only law that resulted in stable orbits if the planets were
perturbed in their motion. Paley's understanding that engineering should aim at designs robust against variation led him to use the phenomenon of gravitation as an argument for the existence of God. William Sealy Gosset, in his work at the Guinness brewery, suggested as early as the beginning of the 20th century that the company might breed strains of barley that not only yielded and malted well but whose characteristics were robust against variation in the different soils and climates in which they were grown. Taguchi's radical insight was that the exact choice of values required is under-specified by the performance requirements of the system. In many circumstances, this allows the parameters to be chosen so as to minimize the effects on performance arising from variation in manufacture, environment and cumulative damage. This approach is often known as robust design or robustification.

Tolerance design
With a successfully completed parameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.

Design of experiments
Taguchi developed much of his thinking in isolation from the school of R. A. Fisher, only coming into direct contact in 1954. His framework for design of experiments is idiosyncratic and often flawed, but contains much that is of enormous value. He made a number of innovations.

Outer arrays
In his later work, R. A. Fisher started to consider the prospect of using design of experiments to understand variation on a wider inductive basis. Taguchi sought to understand the influence that parameters had on variation, not just on the mean. He contended, as had W. Edwards Deming in his discussion of analytic studies, that conventional sampling is inadequate here, as there is no way of obtaining a random sample of future conditions. In conventional design of experiments, variation between experimental replications is a nuisance that the experimenter would like to eliminate, whereas in Taguchi's thinking it is a central object of investigation. Taguchi's innovation was to replicate each experiment by means of an outer array, itself an orthogonal array that seeks deliberately to emulate the sources of variation that a product would encounter in reality. This is an example of judgement sampling. Though statisticians following in the Shewhart-Deming tradition have embraced outer arrays, many academics are still skeptical. An alternative approach proposed by Ellis R. Ott is to use a chunk variable.

Management of interactions
Many of the orthogonal arrays that Taguchi has advocated are saturated, allowing no scope for estimation of interactions between control factors, or inner array factors. This is a continuing topic of controversy. However, by combining orthogonal arrays with an outer array consisting of noise factors, Taguchi's method provides complete

information on interactions between control factors and noise factors. The strategy is that these are the interactions of most interest in creating a system that is least sensitive to noise factor variation. Followers of Taguchi argue that the designs offer rapid results and that control factor interactions can be eliminated by proper choice of quality characteristic (ideal function) and by transforming the data. Notwithstanding, a confirmation experiment offers protection against any residual interactions. In his later teachings, Taguchi emphasizes the need to use an ideal function that is related to the energy transformation in the system. This is an effective way to minimize control factor interactions. Western statisticians argue that interactions are part of the real world and that Taguchis arrays have complicated alias structures that leave interactions difficult to disentangle. George Box, and others, have argued that a more effective and efficient approach is to use sequential assembly.
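To make the idea of an orthogonal (inner) array concrete, a standard textbook example - not taken from this passage - is the smallest two-level array, L4(2^3), in which three control factors are varied over four runs so that every pair of columns contains each combination of levels exactly once:

# Illustrative sketch: the standard L4(2^3) orthogonal array. Rows are
# experimental runs; columns are control factors A, B, C at levels 1 or 2.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
for run, (a, b, c) in enumerate(L4, start=1):
    print(f"Run {run}: A={a}  B={b}  C={c}")

In Taguchi's scheme, each of these inner-array runs would be repeated over an outer array of noise conditions, and the results summarised before analysis.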

Analysis of experiments
Taguchi introduced many methods for analysing experimental results, including novel applications of the analysis of variance and minute analysis. Little of this work has been validated by Western statisticians.

Assessment
Genichi Taguchi has made seminal and valuable methodological innovations in statistics and engineering, within the Shewhart-Deming tradition. His emphasis on loss to society, his techniques for investigating variation in experiments, and his overall strategy of system, parameter and tolerance design have been massively influential in improving manufactured quality worldwide.

Cost of Quality
I Assessing the cost of Quality The quality of a product is one of the most important factors that determine a companys sales and profit. Quality is measured in relation with the characteristics of the products that customers expect to find on it, so the quality level of the products is ultimately determined by the customers. The customers expectations about a products performance, reliability and attributes are translated into Critical-To-Quality (CTQ) characteristics and integrated in the products design by the design engineers. While designing the products, they must also take into account the resources capabilities (machines, people, materials), i.e., their ability to produce products that meet the customers expectations. They specify with exactitude the quality targets for every aspect of the products. But quality comes with a cost. The definition of the Cost Of Quality is contentious. Some authors define it as the cost of nonconformance, i.e., how much producing nonconforming products would cost a company. This is a one-sided approach, since it
does not consider the cost incurred to prevent nonconformance and, above all in a competitive market, the cost of improving the quality targets. For instance, in the case of an LCD (Liquid Crystal Display) manufacturer, if the market standard for a 15-inch LCD with a resolution of 1024x768 is 786,432 pixels, and a higher resolution requires more pixels, then improving the quality of the 15-inch LCDs by pushing the company's specifications beyond the market standard would require the engineering of LCDs with more pixels, which would require extra cost.

The cost of quality is traditionally measured in terms of the cost of conformance and the cost of nonconformance, to which we will add the cost of innovation. The cost of conformance includes the appraisal and preventive costs, while the cost of nonconformance includes the costs of internal and external defects.

Cost of conformance

Preventive Costs
The costs incurred by the company to prevent nonconformance. They include the costs of:
- Process capability assessment and improvement
- The planning of new quality initiatives (process changes, quality improvement projects)
- Employee training

Appraisal Costs
The costs incurred while assessing, auditing and inspecting products and procedures to conform products and services to specifications. They are intended to detect quality-related failures. They include:
- Cost of process audits
- Inspection of products received from suppliers
- Final inspection audit
- Design review
- Pre-release testing

Cost of nonconformance

The cost of nonconformance is in fact the cost of having to rework products and the loss of customers that results from selling poor quality products.

Internal Failure
- Cost of reworking products that failed audit
- Cost of bad marketing
- Scrap

External Failure
- Cost of customer support
- Cost of shipping returned products
- Cost of reworking products returned from customers
- Cost of refunds
- Loss of customer goodwill
- Cost of discounts to recapture customers
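A minimal bookkeeping sketch (the figures and category totals below are invented for illustration) of how these four cost pools roll up into the cost of conformance, the cost of nonconformance and the total cost of quality:

# Hypothetical annual figures for the four cost-of-quality pools.
costs = {
    "prevention": 40_000,   # training, capability studies, quality planning
    "appraisal":  25_000,   # audits, incoming and final inspection
    "internal":   30_000,   # scrap, rework, re-test before shipment
    "external":   55_000,   # returns, warranty work, lost goodwill
}
conformance = costs["prevention"] + costs["appraisal"]
nonconformance = costs["internal"] + costs["external"]
total_cost_of_quality = conformance + nonconformance
print(conformance, nonconformance, total_cost_of_quality)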

In the short term, there is a positive correlation between quality improvement and the cost of conformance and a negative correlation between quality improvement and the cost of nonconformance. In other words, an improvement in the quality of the products will lead to an increase in the cost of conformance that generated it. This is because an improvement in the quality level of a product might require extra investment in R&D, more spending in appraisal cost, more investment in failure prevention and so on. But a quality improvement will lead to a decrease in the cost of nonconformance because fewer products will be returned from the customers, therefore less operating cost of customer support and there will be less internal rework. For instance, one of the CTQs (Critical-To-Quality) for an LCD (Liquid Crystal Display) is the number of pixels it contains. The brightness of each pixel is controlled by individual transistors that switch the backlights on and off. The manufacturing of LCDs is very complex and very expensive and it is very hard to determine the number of dead pixels on an LCD before the end of the manufacturing process. So, in order to reduce the number of scrapped units, if the number of dead pixels is infinitesimal or the dead pixels are almost invisible, the manufacturer would consider the LCDs as good enough to be sold. Otherwise, the cost of scrap or internal rework would be so prohibitive that it would jeopardize the cost of production. Improving the quality level of the LCDs to zero dead pixels would therefore increase the cost of conformance. On the other hand, not improving the quality level of the LCDs will lead to an increase in the probability of having returned products from customers and internal rework, therefore increasing the cost of nonconformance. The following graph plots the relationship between quality improvement and the cost of conformance on one hand and the cost of nonconformance on the other hand.
FIGURE 2.9
If the manufacturer sets the quality level at Q2, the cost of conformance would be low (C1) but the cost of nonconformance would be high (C2), because the probability of customer dissatisfaction will be high and more products will be returned for rework, increasing the cost of rework, customer service, and shipping and handling. The total cost of quality would be the sum of the cost of conformance and the cost of nonconformance; that cost would be C3 for a quality level of Q2.
C3 = C1 + C2

FIGURE 2.10
Should the manufacturer decide instead to set the quality level at Q1, the cost of conformance (C2) would be higher than the cost of nonconformance (C1) and the total cost of quality would again be at C3.
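The trade-off sketched in these two figures can be made concrete with a few lines of code. The cost functions below are invented purely for illustration (the text gives no formulas for the two curves); the sketch simply scans quality levels and reports where the total cost of quality is lowest.

    # Illustrative sketch of the total cost of quality curve.
    # The two cost functions are assumptions for illustration only.

    def cost_of_conformance(q):
        """Prevention + appraisal cost, assumed to rise with the quality level q (0..1)."""
        return 100 * q ** 2

    def cost_of_nonconformance(q):
        """Internal + external failure cost, assumed to fall as quality improves."""
        return 100 * (1 - q) ** 2

    def total_cost_of_quality(q):
        return cost_of_conformance(q) + cost_of_nonconformance(q)

    # Scan quality levels and locate the level with the lowest total cost.
    levels = [i / 100 for i in range(101)]
    best_q = min(levels, key=total_cost_of_quality)
    print(f"minimum total cost at q = {best_q:.2f}, "
          f"conformance = {cost_of_conformance(best_q):.1f}, "
          f"nonconformance = {cost_of_nonconformance(best_q):.1f}")

With these assumed curves the minimum falls at the quality level where the two costs are equal, which is exactly the point made next.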
The total cost of quality is minimized only when the cost of conformance and the cost of nonconformance are equal. It is worth noting that the graph most frequently used to represent throughput yield in manufacturing is the normal curve. For a given target and specified limits, the normal curve helps estimate the volume of defects that should be expected. So while the normal curve estimates the volume of defects, the U-shaped curve estimates the cost incurred as a result of producing parts that do not match the target. The following graph represents both the volume of expected conforming and nonconforming parts and the costs associated with them at every level.


FIGURE 2.11

II Taguchi's Loss Function

In the now traditional quality management approach, engineers integrate all the CTQs into the design of new products and clearly specify the target for their production processes as they define the characteristics of the products to be sent to customers. But because of unavoidable common causes of variation (variations that are inherent to the production process and hard to eliminate) and the high cost of conformance, they are obliged to allow some variation or tolerance around the target. Any product that falls within the specified tolerance is considered to meet the customers' expectations, and any product outside the specified limits is considered nonconforming. According to Taguchi, however, products that do not match the target, even if they are within the specified limits, do not operate as intended; any deviation from the target, whether inside the specified limits or not, generates a financial loss to the customer, the company and society, and the loss grows with the deviation from the target. Suppose that a design engineer specifies the length and diameter of a certain bolt that needs to fit a given part of a machine. Even if the customers do not notice it, any deviation from the specified target will cause the machine to wear out faster, causing the company financial loss in the form of repairs of products under warranty, or a loss of customers if the warranty has expired.
Taguchi constructed a loss function equation to determine how much society loses every time the parts produced do not match the specified target. The loss function determines the financial loss that occurs every time a CTQ of a product deviates from its target. The loss is the square of the deviation multiplied by a constant k, with k being the ratio of the cost of a defective product to the square of the tolerance. The loss function quantifies the deviation from the target and assigns a financial value to the deviation:

L(y) = k (y - T)^2, with k = A / m^2

where A = cost of a defective product, T = the target, and m = T - LSL or m = USL - T (the tolerance on either side of the target).

According to Taguchi, the cost of quality in relation to the deviation from the target is not linear, because the customer's frustration increases at a faster rate as more defects are found on a product. That is why the loss function is quadratic.

FIGURE 2.12
The graph depicting the financial loss to society that results from a deviation from the target resembles the total-cost-of-quality U graph that we built earlier, but the premises used to build them are not the same. While the total cost curve was built from the costs of conformance and nonconformance, Taguchi's loss function is based primarily on the deviation from the target and measures the loss from the perspective of the customer's expectations.
Example: Suppose a machine manufacturer specifies the target for the diameter of a given rivet to be 6 inches, with lower and upper limits of 5.98 and 6.02 inches respectively.
A rivet measuring 5.99 inches is inserted into its intended hole in a machine. Five months after the machine was sold, it breaks down as a result of loose parts. The cost of repair is estimated at $95. Find the loss to society incurred as a result of the part not matching its target.
Solution: We must first determine the value of the constant k.


T = 6, USL = 6.02, so m = USL - T = 6.02 - 6 = 0.02
A = 95 (the cost of a defective unit)
k = A / m^2 = 95 / 0.0004 = 237,500
Therefore, for a rivet measuring y = 5.99:
L(5.99) = 237,500 x (5.99 - 6)^2 = 237,500 x 0.0001 = $23.75
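The same calculation can be scripted directly. The sketch below is a minimal illustration of the quadratic loss formula using the figures from the example; the function and parameter names are ours, not from the text.

    def taguchi_loss(y, target, spec_half_width, cost_at_limit):
        """Quadratic loss L(y) = k * (y - target)^2 with k = cost_at_limit / spec_half_width^2."""
        k = cost_at_limit / spec_half_width ** 2
        return k * (y - target) ** 2

    # Figures from the rivet example: target 6 inches, limits 6 +/- 0.02, $95 repair cost.
    loss = taguchi_loss(y=5.99, target=6.0, spec_half_width=0.02, cost_at_limit=95.0)
    print(f"loss per unit: ${loss:.2f}")   # prints: loss per unit: $23.75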

Not producing a rivet that matches the target would therefore result in a financial loss to society of $23.75.

Taguchi Method: Variability Reduction
Since the deviation from the target is the source of financial loss to society, what needs to be done to prevent any deviation from the set target? The first thought might be to reduce the specification range and tighten online quality control, that is, to bring the specified limits closer to the target and inspect more samples during the production process in order to find defective products before they reach the customers. But this would not be a good option, since it addresses only the symptoms and not the root cause of the problem. It would also be an expensive alternative, because it would require more inspection, which at best helps detect nonconforming parts early enough to prevent them from reaching the customers. The root of the problem is in fact the variation within the production process, i.e., the value of sigma, the standard deviation from the mean. Let us illustrate this assertion with an example. Suppose that the length of a screw is a Critical-To-Quality (CTQ) characteristic and the target is determined to be 15 with an LCL of 14.96 and a UCL of 15.04. The following sample was taken for testing:
15.02, 14.99, 14.96, 15.03, 14.98, 14.99, 15.03, 15.01, 14.99

All the observed items in this sample fall within the control limits, even though not all of them match the target. The mean is 15 and the standard deviation is 0.023979. Should the manufacturer decide to improve the quality of the output by reducing the range of the control limits to 14.98 and 15.02, three of the items in the sample would have failed audit and would have to be reworked or discarded. Suppose instead that the manufacturer decides to reduce the variability (the standard deviation) around the target and leave the control limits untouched. After process improvement, the following sample is taken:

15.01, 15.00, 14.99, 15.01, 14.99, 14.99, 15.00, 15.01, 15.00

The mean is still 15, but the standard deviation has been reduced to 0.00866 and all the observed items are closer to the target. Reducing the variability around the target has improved quality in the production process at a lower cost. This is not to suggest that the tolerance around the target should never be reduced; addressing the tolerance limits should be done under specific conditions and only after the variability around the target has been reduced.
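The two sample statistics quoted above can be reproduced with a few lines of code. This is a minimal check of the figures, assuming the sample (n - 1) standard deviation is the one intended.

    import statistics

    before = [15.02, 14.99, 14.96, 15.03, 14.98, 14.99, 15.03, 15.01, 14.99]
    after  = [15.01, 15.00, 14.99, 15.01, 14.99, 14.99, 15.00, 15.01, 15.00]

    for label, sample in (("before improvement", before), ("after improvement", after)):
        mean = statistics.mean(sample)
        stdev = statistics.stdev(sample)  # sample standard deviation (n - 1)
        print(f"{label}: mean = {mean:.3f}, standard deviation = {stdev:.6f}")

    # Expected output, matching the text: about 0.023979 before and 0.008660 after.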

Since variability is a source of financial loss to producers, customers and society at large, it is necessary to determine what the sources of variation are so that action can be taken to reduce them. According to Taguchi, these sources of variation, which he calls noise factors, can be reduced to three:

Inner noise: deterioration due to time. Product wear, metal rust, fading colours, material shrinkage and product waning are among the inner noise factors.

Outer noise: environmental effects on the products, such as heat, humidity, operating conditions or pressure. These factors have negative effects on products or processes. In the case of my notebook, at first the LCD would not display until it heated up, so humidity was the noise factor preventing it from operating properly. The manufacturer has no control over these factors.

Product noise (manufacturing imperfections): noise due to production malfunctions; it can come from bad materials, an inexperienced operator or bad machine settings.


But if online quality control is not the appropriate way to reduce production variation, what needs to be done to prevent deviations from the target? According to Taguchi, a pre-emptive approach must be taken to thwart variation in the production processes. That pre-emptive approach, which he calls offline quality control, consists in creating a robust design, in other words designing products that are insensitive to the noise factors.

Concept Design
The production of a product starts with the concept design, which consists in choosing the product or service to be produced and defining its structural design and the production process that will be used to generate it. These choices are contingent upon, among other factors, the cost of production, the company's strategy, the current technology and the market demand. So the concept design will consist in:
o Determining the intended use of the product and its basic functions
o Determining the materials needed to produce the selected product
o Determining the production process needed to produce it

Parameter Design
The next step in the production process is the parameter design. After the design architecture has been selected, the producer needs to set the parameter design. Parameter design consists in selecting the best combination of control factors, the combination that optimizes the quality level of the product by reducing the product's sensitivity to noise factors. Control factors are parameters over which the designer has control.
When an engineer designs a computer, he has control over factors such as the CPU, system board, LCD, memory and LCD cables. He determines which CPU best fits a motherboard, which memory stick and which wireless network card to use, and how to design the system board so that the parts fit together easily. The way he combines those factors will determine the quality level of the computer. The producer wants to design products at the lowest possible cost and, at the same time, obtain the best quality achievable under current technology. To do so, the combination of the control factors must be optimal, while the effect of the noise factors must be so small that they have no negative impact on the functionality of the products. So the experiment that leads to the optimal result requires the identification of the noise factors, because they are part of the process and their effects need to be controlled.

Signal-to-Noise Ratio
One of the first steps the designer will take is to determine what the optimal quality level is. He will need to determine the functional requirements, assess the Critical-To-Quality characteristics of the product and specify their targets. The determination of the CTQs and their targets depends, among other criteria, on the customer requirements, the cost of production and current technology. The engineer is seeking to produce the optimal design, a product that is insensitive to noise factors. The quality level of the CTQ characteristics of the product under optimal conditions depends on whether the response of the experiment is static or dynamic. The response (or output) of the experiment is said to be dynamic when the product has a signal factor that steers the output. For instance, when I press the power button on my computer, I am sending a signal to the computer to load my operating system. It should power up and display within 5 seconds, and it should do so exactly the same way every time I switch it on. If, as in the case of my computer, it fails to display because of humidity, I conclude that the computer is sensitive to humidity and that humidity is a noise factor that negatively impacts its performance.

FIGURE 2.13
The response of the experiment is said to be static when the quality level of the CTQ characteristic is fixed. In that case, the optimization process will seek to determine the
optimal combination of factors that enables the targeted value to be reached. This happens in the absence of a signal factor; the only input factors are the control factors and the noise factors. When we build a table, we determine all the CTQ targets and we want to produce a balanced table with all the parts matching the targets. The optimal quality level of a product depends on the nature of the product itself. In some cases, the more of a CTQ characteristic a product has, the happier the customers are; in other cases, the less the CTQ is present, the better. Some products require the CTQs to match their specified targets. According to Taguchi, to optimize the quality level of his products, the producer must seek to minimize the noise factors and maximize the Signal-to-Noise (S/N) ratio. Taguchi uses log functions to determine the Signal-to-Noise ratios that optimize the desired output.

The Bigger-The-Better
If the number of minutes per dollar customers get from their cellular phone service provider is critical to quality, the customers will want to get the maximum number of minutes for every dollar they spend on their phone bills. If the lifetime of a battery is critical to quality, the customers will want their batteries to last as long as possible. The longer the battery lasts, the better it is. The Signal-to-Noise ratio for the bigger-the-better case is:
S/N = -10 * log (mean square of the inverse of the response)


The Smaller-The-Better
Impurity in drinking water is critical to quality: the fewer impurities customers find in their drinking water, the better. Vibration is critical to quality for a car: the less vibration the customers feel while driving, the more attractive the car is. The Signal-to-Noise ratio for the smaller-the-better case is:
S/N = -10 * log (mean square of the response)

The Nominal-The-Best
When a manufacturer is building matching parts, he wants every part to match a predetermined target. For instance, when he is creating pistons that need to be anchored to a given part of a machine, failure of the piston length to match the predetermined size will result in it being either too short or too long, lowering the quality of the machine. In that case, the manufacturer wants all the parts to match their target. When a customer buys ceramic tiles to decorate his bathroom, the size of the tiles is critical to quality; tiles that do not match the predetermined target will not line up correctly against the bathroom walls. The S/N equation for the nominal-the-best case is:
S/N = 10 * log (the square of the mean divided by the variance)
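The three ratios can be written out as short functions. This is a minimal sketch of the formulas as described above, assuming base-10 logarithms and the sample variance; the data in the usage line are invented for illustration and reuse the 15.00 screw-length target from the earlier example.

    import math
    import statistics

    def sn_bigger_the_better(y):
        """S/N = -10 * log10(mean of 1/y_i^2); larger responses give a higher ratio."""
        return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

    def sn_smaller_the_better(y):
        """S/N = -10 * log10(mean of y_i^2); smaller responses give a higher ratio."""
        return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

    def sn_nominal_the_best(y):
        """S/N = 10 * log10(mean^2 / variance); less scatter around the mean gives a higher ratio."""
        mean = statistics.mean(y)
        var = statistics.variance(y)  # sample variance
        return 10 * math.log10(mean ** 2 / var)

    lengths = [15.01, 15.00, 14.99, 15.01, 14.99, 14.99, 15.00, 15.01, 15.00]
    print(round(sn_nominal_the_best(lengths), 2))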

Tolerance Design
Parameter design may not completely eliminate variation from the target. That is why tolerance design must be used for all parts of a product to limit the possibility of producing defective products. The tolerance around the target is usually set by the design engineers; it is defined as the range within which variation may take place. The tolerance limits are set after testing and experimentation. The setting of the tolerance is determined by criteria such as the set target, the safety factors, the functional limits, the expected quality level and the financial cost of any deviation from the target. The safety factor reflects the loss incurred when products outside the specified limits are produced.

With A0 being the loss incurred when the functional limits are exceeded, A being the loss when the tolerance limits are exceeded, and the functional limit being the deviation from the target at which the product ceases to function, the tolerance specification for the response factor will be:

Tolerance = Functional limit x sqrt(A / A0)

Example: The functional limits of a conveyor motor are +/- 0.05 of the response in RPM. The adjustment made at the audit station before a motor leaves the company costs $2.5, and the cost associated with a defective motor once it has been sold is on average $180.
Find the tolerance specification for a 2500 RPM motor.
Solution: We first find the economical factor, which is determined by the losses incurred when the functional limits and/or the tolerance limits are exceeded:
sqrt(A / A0) = sqrt(2.5 / 180) = 0.1179


Now we can determine the tolerance specification. The tolerance specification is the value of the response factor plus or minus the allowed variation from the target. The tolerance for the response factor is:
Tolerance = 0.05 x sqrt(2.5 / 180) = 0.05 x 0.1179 = 0.0059

The variation from the target: 2500 x 0.0059 = 14.73 RPM. The tolerance specification will therefore be 2500 +/- 14.73 RPM.

2.9 OVERVIEW OF THE CONTRIBUTIONS OF SHIGEO SHINGO
Shigeo Shingo

Mistake proofing, or Poka-Yoke, was pioneered by Shigeo Shingo and detailed in his Zero Defects model. Poka-Yoke is defined as a simple, inexpensive device that is not operator dependent, built into the production process at the source of the operation, for the purpose of preventing safety hazards and quality defects 100% of the time. It has many applications, from a production environment to a lean office or paper-trail process, and it is used as a method for introducing a mistake-proofing idea into a process to eliminate defects in that process.
Examples in everyday life include:
o Bathroom sinks have a mistake-proofing device: the little hole near the top of the sink that helps prevent overflows.
o An iron turns off automatically when it is left unattended or when it is returned to its holder.
o The window in an envelope is not only a labour-saving device; it prevents the contents of an envelope intended for one person from being inserted into an envelope addressed to another.

Shigeo Shingo's life-long work has contributed to the well-being of everyone in the world. Shigeo Shingo, along with Taiichi Ohno, Kaoru Ishikawa and others, has helped to revolutionise the way we manufacture goods. His improvement principles vastly reduce the cost of manufacturing, which means more products for more people. They make the manufacturing process more responsive while opening the way to new and innovative products with fewer defects and better quality. He was among the first to create strategies for the continuous and total involvement of all employees. Shingo's never-ending spirit of inquiry challenged the status quo at every level; he proposed that everything could be improved. Shingo believed that inventory is not just a necessary evil: all inventory is absolute evil. He is one of the pioneers of change management. He brought about many new concepts such as ZD (Zero Defects), shifting the use of statistics for acceptance or rejection from SQC (Statistical Quality Control) to SPC (Statistical Process Control), SMED (Single Minute Exchange of Dies), Poka-Yoke (mistake proofing), and defining processes and operations in the two dimensions of VA (value addition) and non-VA.

2.10 CONCEPTS OF QUALITY CIRCLE
Quality Circle
Quality is conformance to the claims made. A quality circle is a volunteer group composed of workers who meet together to discuss workplace improvement and make presentations to management with their ideas. Typical topics are improving safety, improving product design, and improvement of the manufacturing process. Quality circles have the advantage of continuity: the circle remains intact from project to project. Quality circles were started in Japan in 1962 (Kaoru Ishikawa has been credited with creating them) as another method of improving quality. The movement in Japan was coordinated by the Japanese Union of Scientists and Engineers (JUSE). Prof. Ishikawa, who believed in tapping the creative potential of workers,
innovated the Quality Circle movement to give Japanese industry that extra creative edge. A Quality Circle is a small group of employees from the same work area who voluntarily meet at regular intervals to identify, analyze and resolve work-related problems. This can not only improve the performance of any organization, but also motivate and enrich the work life of employees. The use of quality circles in many highly innovative companies in the Scandinavian countries has been proven, and the practice is recommended by many economists and business scholars.

The dictionary meaning of quality circle is: a group of employees who perform similar duties and meet at periodic intervals, often with management, to discuss work-related issues and to offer suggestions and ideas for improvements, as in production methods or quality control.

The Business Dictionary defines quality circles as: small groups of employees meeting on a regular basis within an organization for the purpose of discussing and developing management issues and procedures. Quality circles are established with management approval and can be important in implementing new procedures. While results can be mixed, on the whole, management has accepted quality circles as an important organizational methodology.

As per the Small Business Encyclopedia, a quality circle is a participatory management technique that enlists the help of employees in solving problems related to their own jobs. In their volume Japanese Quality Circles and Productivity, Joel E. Ross and William C. Ross define a quality circle as a small group of employees doing similar or related work who meet regularly to identify, analyze, and solve product-quality and production problems and to improve general operations. The circle is a relatively autonomous unit (ideally about ten workers), usually led by a supervisor or a senior worker and organized as a work unit. Employees who participate in quality circles usually receive training in formal problem-solving methods, such as brainstorming, Pareto analysis, and cause-and-effect diagrams, and are then encouraged to apply these methods to either specific or general company problems. After completing an analysis, they often present their findings to management and then handle implementation of approved solutions.

Although most commonly found in manufacturing environments, quality circles are applicable to a wide variety of business situations and problems. They are based on two ideas: that employees can often make better suggestions for improving work processes than management, and that employees are motivated by their participation in

making such improvements. Thus, implemented correctly, quality circles can help a small business reduce costs, increase productivity, and improve employee morale. Other potential benefits that may be realized by a small business include greater operational efficiency, reduced absenteeism, improved employee health and safety, and an overall better working climate. In their book Production and Operations Management, Howard J. Weiss and Mark E. Gershon called quality circles the best means today for meeting the goal of designing quality into a product. The interest of U.S. manufacturers in quality circles was sparked by dramatic improvements in the quality and economic competitiveness of Japanese goods in the post-World War II years. The emphasis of Japanese quality circles was on preventing defects from occurring rather than inspecting products for defects following a manufacturing process. Japanese quality circles also attempted to minimize the scrap and downtime that resulted from part and product defects. In the United States, the quality circle movement evolved to encompass the broader goals of cost reduction, productivity improvement, employee involvement, and problem-solving activities.

Background
Quality circles were originally associated with Japanese management and manufacturing techniques. The introduction of quality circles in Japan in the postwar years was inspired by the lectures of W. Edwards Deming (1900-1993), a statistician for the U.S. government. Deming based his proposals on the experience of U.S. firms operating under wartime industrial standards. Noting that American management had typically given line managers and engineers about 85 percent of the responsibility for quality control and line workers only about 15 percent, Deming argued that these shares should be reversed. He suggested redesigning production processes to account more fully for quality control, and continuously educating all employees in a firm, from the top down, in quality control techniques and statistical control technologies. Quality circles were the means by which this continuous education was to take place for production workers. Deming predicted that if Japanese firms adopted the system of quality controls he advocated, nations around the world would be imposing import quotas on Japanese products within five years. His prediction was vindicated. Deming's ideas became very influential in Japan, and he received several prestigious awards for his contributions to the Japanese economy.

The principles of Deming's quality circles simply moved quality control to an earlier position in the production process. Rather than relying upon post-production


inspections to catch errors and defects, quality circles attempted to prevent defects from occurring in the first place. As an added bonus, machine downtime and scrap materials that formerly occurred due to product defects were minimized. Deming's idea that improving quality could increase productivity led to the development in Japan of the Total Quality Control (TQC) concept, in which quality and productivity are viewed as two sides of a coin. TQC also required that a manufacturer's suppliers make use of quality circles.

Quality circles in Japan were part of a system of relatively cooperative labor-management relations, involving company unions and lifetime employment guarantees for many full-time permanent employees. Consistent with this decentralized, enterprise-oriented system, quality circles provided a means by which production workers were encouraged to participate in company matters and by which management could benefit from production workers' intimate knowledge of the production process. In 1980 alone, changes resulting from employee suggestions resulted in savings of $10 billion for Japanese firms and bonuses of $4 billion for Japanese employees.

Active American interest in Japanese quality control began in the early 1970s, when the U.S. aerospace manufacturer Lockheed organized a tour of Japanese industrial plants. This trip marked a turning point in the previously established pattern, in which Japanese managers had made educational tours of industrial plants in the United States. Lockheed's visit resulted in the gradual establishment of quality circles in its factories beginning in 1974. Within two years, Lockheed estimated that its fifteen quality circles had saved nearly $3 million, with a ratio of savings to cost of six to one. As Lockheed's successes became known, other firms in the aerospace industry began adopting quality circles. Thereafter, quality circles spread rapidly throughout the U.S. economy; by 1980, over one-half of the firms in the Fortune 500 had implemented or were planning to implement quality circles.

In the early 1990s, the U.S. National Labor Relations Board (NLRB) made several important rulings regarding the legality of certain forms of quality circles. These rulings were based on the 1935 Wagner Act, which prohibited company unions and management-dominated labor organizations. One NLRB ruling found quality programs unlawful that were established by the firm, that featured agendas dominated by the firm, and that addressed the conditions of employment within the firm. Another ruling held that a company's labor-management committees were in effect labor organizations used to bypass negotiations with a labor union. As a result of these rulings, a number of employer representatives expressed their concern that quality circles, as well as other kinds of labor-management cooperation programs, would be hindered. However, the NLRB stated that these rulings were not general indictments against quality circles and labor-management cooperation programs, but were aimed specifically at the practices of the companies in question.

Requirements for Successful Quality Circles
In his book Productivity Improvement: A Guide for Small Business, Ira B. Gregerman outlined a number of requirements for a small business contemplating the use of quality circles. First, the small business owner should be comfortable with a participative management approach. It is also important that the small business have good, cooperative labor-management relations, as well as the support of middle managers for the quality circle program. The small business owner must be willing and able to commit the time and resources needed to train the employees who will participate in the program, particularly the quality circle leaders and facilitators. It may even be necessary to hire outside facilitators if the time and expertise do not exist in-house. Some small businesses may find it helpful to establish a steering committee to provide direction and guidance for quality circle activities. Even if all these requirements are met, the small business will only benefit from quality circles if employee participation is voluntary, and if employees are allowed some input into the selection of problems to be addressed. Finally, the small business owner must allow time for the quality circles to begin achieving desired results; in some cases, it can take more than a year for expectations to be met.

But successful quality circles offer a wide variety of benefits for small businesses. For example, they serve to increase management's awareness of employee ideas, as well as employee awareness of the need for innovation within the company. Quality circles also serve to facilitate communication and increase commitment among both labor and management. In enhancing employee satisfaction through participation in decision-making, such initiatives may also improve a small business's ability to recruit and retain qualified employees. In addition, many companies find that quality circles further teamwork and reduce employee resistance to change. Finally, quality circles can improve a small business's overall competitiveness by reducing costs, improving quality, and promoting innovation.

2.11 JAPANESE 5S PRINCIPLES
5S
The 5Ss referred to in Lean are:
o Sort
o Straighten
o Shine
o Standardize
o Sustain

In fact, these 5S principles are loose translations of five Japanese words:
o Seiri - Put things in order (remove what is not needed and keep what is needed)
o Seiton - Proper arrangement (place things in such a way that they can be easily reached whenever they are needed)
o Seiso - Clean (keep things clean and polished; no trash or dirt in the workplace)
o Seiketsu - Purity (maintain cleanliness after cleaning - perpetual cleaning)
o Shitsuke - Commitment (a typical teaching attitude towards any undertaking, to inspire pride and adherence to the standards established for the four components)


Another way to summarize 5S is: a place for everything (the first three Ss) and everything in its place (the last two Ss).

2.12 8D METHODOLOGY
8 Disciplines
The 8D (8 Disciplines) process is another problem-solving method that is often required, specifically in the automotive industry. One of the distinguishing characteristics of the 8D methodology is its emphasis on teams. The steps of an 8D analysis are:
1. Use a team approach
2. Describe the problem
3. Implement and verify interim actions (containment)
4. Identify potential causes
5. Choose/verify corrective actions
6. Implement permanent corrective actions
7. Prevent recurrence
8. Congratulate your team

SUMMARY
The principles and philosophies behind the evolution of various quality management techniques are elaborated in this unit. Walter A. Shewhart focused his work on ensuring control in industrial quality processes; he laid the foundation for evolutionary thinking on quality and its management. William Edwards Deming, a contemporary of Shewhart, developed the PDSA cycle and named it the Shewhart cycle; later it became known as the PDCA cycle. Deming attempted to balance standardized change and the continuous improvement of things in the organization.

Joseph Juran's contribution is the Quality Trilogy. While emphasizing quality planning during the design of a product, he laid more stress on quality control during operations. The famous cost-of-quality curve used to identify the optimum conformance level was developed by Juran. The road map for quality planning and the steps for continuous quality improvement are among Juran's contributions. Philip B. Crosby identified five absolutes of quality management and prescribed a quality vaccine. To achieve zero defects in the organization he spelt out a fourteen-step quality programme, and he firmly believed that zero defects is an achievable goal. Masaaki Imai introduced Kaizen to the world; he established the Kaizen Institute, which propagates his ideas throughout the world. Mitchell Jay Feigenbaum pioneered studies in chaos theory; through his publications he was able to disseminate the logistic maps he developed. Kaoru Ishikawa suggested the diagram used to identify the root cause of a problem; the fishbone diagram, also known as the cause-and-effect diagram, is predominantly used in fixing quality-related problems, and its construction methodology is also presented. Dr. Genichi Taguchi popularized the concept of design of experiments; his complete philosophy of off-line quality control and his innovations in the design of experiments are deliberated in detail. Shigeo Shingo introduced the Zero Defects model to the production community and advocated Poka-Yoke. The concept of the quality circle, credited to Kaoru Ishikawa, its evolution and the requirements for successful quality circles are presented. The Japanese 5S principles, namely Seiri, Seiton, Seiso, Seiketsu and Shitsuke, and the 8 disciplines to be focused on in the new era are deliberated.

REVIEW QUESTIONS
1. Explain the influence of Walter A. Shewhart on ensuring quality in an organization.
2. Explain the Shewhart Cycle and elaborate on Deming's contributions to it.
3. Illustrate the Juran Trilogy and demonstrate how quality is ensured through it.
4. Enumerate the 14-step quality programme advocated by Crosby.
5. Explain how the logistic map was developed by Feigenbaum.
6. Illustrate the Ishikawa diagram and demonstrate its usefulness in problem solving.
7. What is the Taguchi loss function? Explain its principles of operation.
8. Highlight the concept of the quality circle and explain the requirements for carrying it out successfully.
9. Explain the Japanese 5S principles.
10. Discuss the 8D methodology with examples.


UNIT-III STATISTICAL PROCESS CONTROL AND PROCESS CAPABILITY


INTRODUCTION
The application of the principles and philosophies of quality management calls for capability assessment. This has to be undertaken by professionals through the process of re-engineering. There are specific control tools, such as SPC, to be adopted in various industries at various stages. To achieve TQM, every aspect of it, whether reliability, maintenance or micro-level technology assimilation, has to be re-examined. This comprehensive exercise will bring out the best of an organization's capabilities. This unit deals with the meaning and significance of Statistical Process Control (SPC), construction of control charts for variables and attributes, process capability (meaning, significance and measurement), six sigma concepts of process capability, reliability concepts and definitions, reliability in series and parallel, the product life characteristics curve, Total Productive Maintenance (TPM) and its relevance to TQM, terotechnology, and Business Process Re-engineering (BPR): principles, applications, the re-engineering process, benefits and limitations.

LEARNING OBJECTIVES
Upon completion of this unit, you will be able to:
o Assess the importance of and need for process control
o Develop and use various charts and techniques for SPC
o Analyze the evolution of six sigma
o Understand the reliability variants and their applications
o Get a scenario about BPR and its usage
o Apply various process control techniques in real-life situations

3.1 MEANING AND SIGNIFICANCE OF STATISTICAL PROCESS CONTROL (SPC)
Statistical process control (SPC) is a method for achieving quality control in manufacturing processes. It employs control charts to detect whether the process observed is under control. Classical quality control was achieved by inspecting 100% of the finished product and accepting or rejecting each item based on how well the item met specifications. In contrast, statistical process control uses statistical tools to observe the performance of the production line and to predict significant deviations that may result in rejected products.

The underlying assumption is that there is variability in any production process: the process produces products whose properties vary slightly from their designed values, even when the production line is running normally, and these variances can be analyzed statistically to control the process. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear), this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of free product for the consumer, typically waste consists of rework or scrap. By using statistical tools, the quality engineer responsible for the production line can troubleshoot the root cause of the variation that has crept into the process and correct the problem.

Focus Area: Quality - Measurement
Definition and Summary: Applying statistical process control (the use of control charts) to the management of software development efforts, to effect software process improvement. Statistical Process Control (SPC) can be applied to software development processes. A process has one or more outputs, as depicted in the figure below. These outputs, in turn, have measurable attributes. SPC is based on the idea that these attributes have two sources of variation: natural (also known as common) and assignable (also


known as special) causes. If the observed variability of the attributes of a process is within the range of variability from natural causes, the process is said to be under statistical control. The practitioner of SPC tracks the variability of the process to be controlled. When that variability exceeds the range to be expected from natural causes, the practitioner then identifies and corrects assignable causes.


FIGURE 3.1
SPC is a powerful tool to optimize the amount of information needed for use in making management decisions. Statistical techniques provide an understanding of the business baselines, insights for process improvements, communication of value and results of processes, and active and visible involvement. SPC provides real-time analysis to establish controllable process baselines; learn, set, and dynamically improve process capabilities; and focus business on areas needing improvement. SPC moves away from opinion-based decision-making. These benefits of SPC cannot be obtained immediately by all organizations. SPC requires defined processes and a discipline of following them. It requires a climate in which personnel are not punished when problems are detected, and strong management commitment.

The key steps for implementing Statistical Process Control are:
o Identify defined processes
o Identify measurable attributes of the process
o Characterize the natural variation of the attributes
o Track process variation
o If the process is in control, continue to track


If the process is not in control:
o Identify the assignable cause
o Remove the assignable cause
o Return to tracking process variation

FIGURE 3.2 Statistical Process Control

How To Perform SPC
In practice, reports of SPC in software development and maintenance tend to concentrate on a few software processes. Specifically, SPC has been used to control software (formal) inspections, testing, maintenance, and personal process improvement. Control charts are the most common tools for determining whether a software process is under statistical control. A variety of types of control charts are used in SPC. Table 3.1, based on a survey [Radice 2000] of SPC usage in organizations attaining Level 4 or higher on the SEI CMM metric of process maturity, shows which types are most commonly used in applying SPC to software. The combination of an Upper Control Limit (UCL) and a Lower Control Limit (LCL) specifies, on control charts, the variability due to natural causes. Table 3.2 shows the levels commonly used in setting control limits for software SPC. Table 3.3 shows the most common statistical techniques, other than control charts, used in software SPC. Some of these techniques are used in trial applications of SPC to explore the natural variability of processes. Some are used in techniques for eliminating assignable causes. Analysis of defects is the most common technique for eliminating assignable causes. Causal Analysis-related techniques, such

as Pareto analysis, Ishikawa diagrams, the Nominal Group Technique (NGT), and brainstorming, are also frequently used for eliminating assignable causes.

Table 3.1: Usage of Control Charts
Type of Control/Attribute Chart    Percentage
Xbar-mR                            33.3%
u-Chart                            23.3%
Xbar                               13.3%
c-Chart                            6.7%
z-Chart                            6.7%
Not clearly stated                 16.7%


From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Table 3.2: Location of UCL-LCL in Control Charts
Location          Percentage
Three-sigma       16%
Two-sigma         4%
One-sigma         8%
Combination       16%
None/Not clear    24%

From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Table 3.3: Usage of Other Statistical Techniques
Statistical Technique    Percentage
Run Charts               22.8%
Histograms               21.1%
Pareto Analysis          21.1%
Scatter Diagrams         10.5%
Regression Analysis      7.0%
Pie Charts               3.5%
Radar/Kiviat Charts      3.5%
Other                    10.5%



From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Control charts are a central technology for SPC. Figure 3.3 below shows a sample control chart constructed from simulated data. This is an X-chart, in which the value of the attribute is graphed together with the control limits. In this case, the control limits are based on a priori knowledge of the distribution of the attribute when the process is under control. The control limits are at three sigma; for a normal distribution, only about 0.3% of samples would fall outside these limits by chance. This control chart indicates that the process is out of control. If this control chart were for real data, the next step would be to investigate the process to identify assignable causes and to correct them, thereby bringing the process under control.
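The chart logic just described can be sketched in a few lines. The example below is an illustrative individuals (XmR-style) chart with three-sigma limits on invented data; it is not the chart or data behind Figure 3.3, and the data values are assumptions for the sake of the demonstration.

    import statistics

    # Invented attribute measurements for illustration (e.g., review rates per build).
    data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 12.9, 10.0, 9.9]

    # Individuals (X) chart with limits estimated from the average moving range,
    # the usual approach for an XmR chart (d2 = 1.128 for ranges of two points).
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    centre = statistics.mean(data)
    sigma_hat = statistics.mean(moving_ranges) / 1.128
    ucl = centre + 3 * sigma_hat
    lcl = centre - 3 * sigma_hat

    print(f"centre = {centre:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    for i, x in enumerate(data, start=1):
        if x > ucl or x < lcl:
            print(f"sample {i} ({x}) falls outside the limits -> investigate for an assignable cause")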

FIGURE 3.3 Some have extended the focus of SPC in applying it to software processes. In manufacturing, the primary focus of control charts is to bring the process back into control. In software, the product is also a focus. When a software process exceeds the control limits, rework is typically performed on the product. In manufacturing, the cost of stopping a process is high. In software, the cost of stopping is lower, and few shutdown and startup activities are needed [Jalote and Saxena 2002]. SPC is one way of applying statistics to software engineering. Other opportunities for applying statistics exist in software engineering. Table 4 shows, by lifecycle phase, some of these uses of statistics. The National Research Council recently sponsored the Panel on Statistical Methods in Software Engineering [NRC 1996]. The panel recommended a wide range of areas for applying statistics, from visualizing test and metric data to conducting controlled experiments to demonstrate new methodologies.

Table 3.4: Some Applications of Statistics in Software Engineering


Phase          Use of Statistics
Requirements   Specify performance goals that can be measured statistically, e.g., no more than 50 total field faults and zero critical faults with 90% confidence.
Design         Pareto analysis to identify fault-prone modules. Use of design of experiments in making design decisions empirically.
Coding         Statistical control charts applied to inspections.
Testing        Coverage metrics provide attributes. Design of experiments is useful in creating test suites. Statistical usage testing is based on a specified operational profile. Reliability models can be applied.

Based on [Dalal, et. al. 1993] Those applying SPC to industrial organizations, in general, have built process improvements on top of SPC. The focus of SPC is on removing variation caused by assignable causes. As defined here, SPC is not intended to lower process variation resulting from natural causes. Many corporations, however, have extended their SPC efforts with Six Sigma programs. Six Sigma provides continuous process improvement and attempts to reduce the natural variation in processes. Typically, Six Sigma programs use the Seven Tools of Quality (Table 5). The Shewhart Cycle (Figure 4) is a fundamental idea for continuous process improvement. Table 3.5: The Seven Tools of Quality
Tool                        Example of Use
Check Sheet                 To count occurrences of problems.
Histogram                   To identify central tendencies and any skewing to one side or the other.
Pareto Chart                To identify the 20% of the modules which yield 80% of the issues.
Cause and Effect Diagram    For identifying assignable causes.
Scatter Diagram             For identifying correlation and suggesting causation.
Control Chart               For identifying processes that are out of control.
Graph                       For visually displaying data, e.g., in a pie chart.


FIGURE 3.4

Anticipated Benefits of Implementation
SPC is a powerful tool to optimize the amount of information needed for use in making management decisions [Eickelmann and Anant 2003]. Statistical techniques provide an understanding of the business baselines, insights for process improvements, communication of value and results of processes, and active and visible involvement. SPC provides real-time analysis to establish controllable process baselines; learn, set, and dynamically improve process capabilities; and focus business on areas needing improvement. SPC moves away from opinion-based decision making [Radice 2000]. These benefits of SPC cannot be obtained immediately by all organizations. SPC requires defined processes and a discipline of following them. It requires a climate in which personnel are not punished when problems are detected. It requires management commitment [Demmy 1989].

Detailed Characteristics
The processes controllable by SPC are unlimited in application domain, lifecycle phase, and design methodology. Processes need to exhibit certain characteristics to be suitable for SPC (as shown in the table below). In addition, a process to which SPC is applied should be homogeneous. For example, applications of SPC to software inspections have found that inspections must often be decomposed to apply SPC effectively. Florence [1999] found, for instance, that the inspection of human machine interface specifications should

be treated as a process separate from the inspection of application specifications in the project he examined. Weller [2000] found that inspections of new and revised code should be treated as separate processes in the project he examined. A trial application of SPC is useful in identifying homogeneous sub-processes. Issues other than identifying defined homogeneous processes arise in implementing SPC. A second table presents such implementation issues.

Criteria of Processes Suitable for SPC
o Well-defined
o Have attributes with observable measures
o Repetitive
o Sufficiently critical to justify the monitoring effort
(Based on [Demmy 1989])

TABLE 3.6 SPC Implementation Issues
Define Process: Consistent measurements cannot be expected from software processes that are not documented and generally followed.
Choose Appropriate Measures: Measures need not be exhaustive. One or two measures that provide insight into the performance of a process or activity are adequate, especially if the measures are related to the process or activity goal. Measures that can be tracked inexpensively are preferable.
Focus on Process Trends: Control charts should be constructed so as to detect process trends, not individual nonconforming events.
Calculate Control Limits Correctly: Straightforward formulas exist for calculating control limits and analyzing distributions. Introductory college courses in statistics usually do not address process-control techniques in detail.
Investigate and Act: SPC only signals the possible existence of a problem. Without detailed investigations, as in an audit, and instituting corrective action, SPC will not provide any benefit.
Provide Training: Problems in following the above recommendations for implementing SPC can be decreased with effective training. SPC training based on examples of software processes is to be preferred.
(Based on [Card 1994])


Relationships to Other Practices: The figure below represents a high-level process architecture for the subject practice, depicting relationships among this practice and the nature of the influences on the practice (describing how other practices might relate to this practice). These relationship statements are based on definitions of specific best practices found in the literature and the notion that the successful implementation of practices may influence (or be influenced by) the ability to successfully implement other practices. A brief description of these influences is included in the table below.

FIGURE 3.5 Process Architecture for the Statistical Process Control Gold Practice


TABLE 3.7 Summary of Relationship Factors

INPUTS TO THE PRACTICE
Determine which attributes, at what levels, should be controlled: Statistical Process Control can only be effective if the most critical processes are identified and addressed using the technique. Practices that help establish clear goals and decision points, and are based on meaningful metrics and attributes tied to specific program or technical goals, stand to gain the most payback from using SPC. SPC techniques need not be restricted to the present, i.e., planning for the insertion of new technology later in the life cycle should also plan for the use of SPC to ensure that processes are controlled and the reliability of the resulting software artifacts is optimized.
Define whether the environment is appropriate for process control: Practices such as Performance-Based Specifications and Commercial Specifications/Open Systems can imply the generation and collection of data. Such data may serve as appropriate input to SPC techniques such as control charts. Therefore, an environment that is data-rich provides an excellent opportunity to leverage the benefit of statistical process control techniques.
Provide data upon which decisions can be based: An initial step in applying SPC is often to discover controllable and homogeneous processes. Past performance data can be used for this purpose. Formal inspection processes and processes for leveraging COTS/NDI explicitly call for metrics to be collected. These metrics can be used as the basis for SPC.




OUTPUTS FROM THE PRACTICE
Assess progress towards process control: SPC can be used not only to control processes, but also to determine if quantitative requirements on software processes are being met. The results of SPC, then, provide valuable data and information that can be used to manage progress towards achievement of software requirements. Part of this ability to manage progress is supported by the quantitative progress measurements that are inherent in the SPC process, primarily in the form of



Communicate progress towards controlling processes

defect tracking and correction against specific, quantitative quality targets. Management go/no-go decisions can be based on whether development processes are under control. SPC presents graphical displays supporting such decisions. The number and types of available graphical formats that can be used as part of the SPC process provide accessible visibility into progress being made to all program stakeholders. Demonstration-based reviews provide an excellent vehicle for communicating the progress being made in controlling processes through the use of SPC. By providing control over software development processes, SPC will result in more predictable and more reliable software. Rigorous testing that is guided by specifications and supported by well-documented and accurate operational and usage-based models will be much more effective under the controlled processes resulting from SPC.

Improve Testing Efficiency and Effectiveness

Definitions
An application of statistics to controlling industrial processes, including processes in software development and maintenance. Statistical Process Control (SPC) is used to identify and remove variations in processes that exceed the variation to be expected from natural causes. The purpose of process control is to detect any abnormality in the process. [Ishikawa 1982]

Sources (Origins of the Practice)
Walter Shewhart developed Statistical Process Control (SPC) in the 1920s. Shewhart sought methods for applying statistics to industrial practice. Acceptance testing and SPC grew out of this work. Shewhart proposed the use of control charts, a core technique for SPC, in a historic internal memorandum of 16 May 1924 at Bell Telephone Laboratories. For a long time, SPC was most widely adopted in Japan, not the United States. Shewhart mentored W. Edwards Deming, and Deming went on to introduce quality technologies into Japanese industry. The Guide to Quality Control by Ishikawa [1982], first published in 1968 in Japanese, is a guide to quality control techniques that became prevalent in Japan after World War II. Corporations in the United States began adopting



quality technology, including SPC, more widely in the 1980s. Recently, many United States corporations have instituted Six Sigma programs. These programs, through continual process improvement, attempt to reduce the natural variation in industrial processes. Some began applying SPC techniques to software in the 1980s. Gardiner and Montgomery [1987] report an example. Software inspections seem to provide the processes that are most commonly monitored with SPC in software. Some recently proposed software process models include opportunities for SPC. The spiral lifecycle provides a natural time for tuning software processes, namely before the start of the development of each increment. SPC yields analyzed data that managers can use in selecting processes to tune. Cleanroom software engineering combines incremental development and software inspections with other technologies, such as reliability modeling. The Software Engineering Institute (SEI) Capability Maturity Model (CMM) mandates that SPC be used in Level 4 organizations.

3.2 CONSTRUCTION OF CONTROL CHARTS FOR VARIABLES AND ATTRIBUTES
Common Types of Charts
The types of charts are often classified according to the type of quality characteristic that they are supposed to monitor: there are quality control charts for variables and control charts for attributes. Specifically, the following charts are commonly constructed for controlling variables:

X-bar chart. In this chart the sample means are plotted in order to control the mean value of a variable (e.g., size of piston rings, strength of materials, etc.).

R chart. In this chart, the sample ranges are plotted in order to control the variability of a variable.

S chart. In this chart, the sample standard deviations are plotted in order to control the variability of a variable.

S**2 chart. In this chart, the sample variances are plotted in order to control the variability of a variable.

For controlling quality characteristics that represent attributes of the product, the following charts are commonly constructed:
C chart. In this chart (see example below), we plot the number of defectives (per batch, per day, per machine, per 100 feet of pipe, etc.). This chart assumes that defects of the quality attribute are rare, and the control limits in this chart are computed based on the Poisson distribution (distribution of rare events).

FIGURE 3.6

U chart. In this chart we plot the rate of defectives, that is, the number of defectives divided by the number of units inspected (the n; e.g., feet of pipe, number of batches). Unlike the C chart, this chart does not require a constant number of units, and it can be used, for example, when the batches (samples) are of different sizes.

Np chart. In this chart, we plot the number of defectives (per batch, per day, per machine) as in the C chart. However, the control limits in this chart are not based on the distribution of rare events, but rather on the binomial distribution. Therefore, this chart should be used if the occurrence of defectives is not rare (e.g., they occur in more than 5% of the units inspected). For example, we may use this chart to control the number of units produced with minor flaws.

P chart. In this chart, we plot the percent of defectives (per batch, per day, per machine, etc.) as in the U chart. However, the control limits in this chart are not based on the distribution of rare events but rather on the binomial distribution (of proportions). Therefore, this chart is most applicable to situations where the occurrence of defectives is not rare (e.g., we expect the percent of defectives to be more than 5% of the total number of units produced).
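As a quick illustration of how control limits for the attribute charts above are typically computed, the minimal Python sketch below calculates p chart and c chart limits from hypothetical inspection counts (the data are invented purely for illustration). The limits follow the usual centre line plus or minus three standard errors, with the binomial model behind the p chart and the Poisson model behind the c chart.

    import math

    # --- p chart: fraction defective per lot (binomial basis) ---
    # Hypothetical inspection data: units inspected and units defective per lot
    inspected = [200, 200, 200, 200, 200]
    defective = [  8,  12,   9,  15,  10]

    p_bar = sum(defective) / sum(inspected)        # centre line: average fraction defective
    n = inspected[0]                               # constant lot size assumed here
    sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error of a proportion
    p_ucl = p_bar + 3 * sigma_p
    p_lcl = max(0.0, p_bar - 3 * sigma_p)          # a proportion cannot go below zero
    print(f"p chart: CL={p_bar:.4f}  UCL={p_ucl:.4f}  LCL={p_lcl:.4f}")

    # --- c chart: count of defects per inspection unit (Poisson basis) ---
    defects_per_unit = [4, 7, 3, 6, 5, 4, 8]
    c_bar = sum(defects_per_unit) / len(defects_per_unit)   # centre line: average count
    c_ucl = c_bar + 3 * math.sqrt(c_bar)
    c_lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))
    print(f"c chart: CL={c_bar:.2f}  UCL={c_ucl:.2f}  LCL={c_lcl:.2f}")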

Control Charts for Variables vs. Charts for Attributes

Sometimes the quality control engineer has a choice between variable control charts and attribute control charts.

Advantages of attribute control charts. Attribute control charts have the advantage of allowing for quick summaries of various aspects of the quality of a product; that is, the engineer may simply classify products as acceptable or unacceptable, based on various quality criteria. Thus, attribute charts sometimes bypass the need for expensive, precise devices and time-consuming measurement procedures. Also, this type of chart tends to be more easily understood by managers unfamiliar with quality control procedures; therefore, it may provide more persuasive (to management) evidence of quality problems.

Advantages of variable control charts. Variable control charts are more sensitive than attribute control charts (see Montgomery, 1985, p. 203). Therefore, variable control charts may alert us to quality problems before any actual "unacceptables" (as detected by the attribute chart) occur. Montgomery (1985) calls the variable control charts "leading indicators of trouble" that will sound an alarm before the number of rejects (scrap) increases in the production process.

3.3 PROCESS CAPABILITY - MEANING, SIGNIFICANCE AND MEASUREMENT

Process Capability

1. Select a candidate for the study. This step should be institutionalized. A goal of any organization should be ongoing process improvement. However, because a company has only a limited resource base and cannot solve all problems simultaneously, it must set priorities for its efforts. The tools for this include Pareto analysis and fishbone diagrams.

2. Define the process. It is all too easy to slip into the trap of solving the wrong problem. Once the candidate area has been selected in step 1, define the scope of the study. A process is a unique combination of machines, tools, methods, and personnel engaged in adding value by providing a product or service. Each element of the process should be identified at this stage. This is not a trivial exercise; the input of many people may be required, and there are likely to be a number of conflicting opinions about what the process actually involves.

3. Procure resources for the study. Process capability studies disrupt normal operations and require significant expenditures of both material and human resources. Since it is a project of major importance, it should be managed as such. All of the usual project management techniques should be brought to bear, including planning, scheduling, and management status reporting.
4. Evaluate the measurement system. Using the techniques described in Chapter V, evaluate the measurement system's ability to do the job. Again, be prepared to spend the time necessary to get a valid means of measuring the process before going ahead.

5. Prepare a control plan. The purpose of the control plan is twofold: 1) to isolate and control as many important variables as possible, and 2) to provide a mechanism for tracking variables that cannot be completely controlled. The object of the capability analysis is to determine what the process can do if it is operated the way it is designed to be operated. This means that obvious sources of potential variation, such as operators and vendors, will be controlled while the study is conducted. In other words, a single well-trained operator will be used and the material will be from a single vendor. There are usually some variables that are important but not controllable; one example is the ambient environment, including temperature, barometric pressure, or humidity. Certain process variables may degrade as part of the normal operation; for example, tools wear and chemicals are used up. These variables should still be tracked using log sheets and similar tools.

6. Select a method for the analysis. The SPC method will depend on the decisions made up to this point. If the performance measure is an attribute, one of the attribute charts will be used. Variables charts will be used for process performance measures assessed on a continuous scale. Also considered will be the skill level of the personnel involved, the need for sensitivity, and the other resources required to collect, record, and analyze the data.

7. Gather and analyze the data. Use one of the control charts described in this chapter, plus common sense. It is usually advisable to have at least two people go over the data analysis to catch inadvertent errors in transcribing data or performing the analysis.

8. Track down and remove special causes. A special cause of variation may be obvious, or it may take months of investigation to find it. The effect of the special cause may be good or bad. Removing a special cause that has a bad effect usually involves eliminating the cause itself. For example, if poorly trained operators are causing variability, the special cause is the training system (not the operator), and it is eliminated by developing an improved training system or a process that requires less training. However, the removal of a beneficial special cause may actually involve incorporating the special cause into the normal operating procedure.
For example, if it is discovered that materials with a particular chemistry produce a better product, the special cause is the newly discovered material, and it can be made a common cause simply by changing the specification to assure that the new chemistry is always used.

9. Estimate the process capability. One point cannot be overemphasized: the process capability cannot be estimated until a state of statistical control has been achieved! After this stage has been reached, the methods described later in this chapter may be used (a short numerical sketch follows this list). After the numerical estimate of process capability has been arrived at, it must be compared to management's goals for the process, or it can be used as an input into economic models. Deming's all-or-none rules provide a simple model that can be used to determine whether the output from a process should be sorted 100% or shipped as-is.

10. Establish a plan for continuous process improvement. Once a stable process state has been attained, steps should be taken to maintain it and improve upon it. SPC is just one means of doing this. Far more important than the particular approach taken is a company environment that makes continuous improvement a normal part of everyone's daily routine.
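To make steps 7 and 9 concrete, the following minimal Python sketch uses made-up subgroup measurements and illustrative specification limits (both are assumptions, not data from the text). It builds X-bar and R chart limits with the standard constants for subgroups of five, and only if no point falls outside those limits does it estimate capability as Cp and Cpk. It is an outline of the arithmetic, not a substitute for the full study described above.

    import statistics

    # Step 7: hypothetical data - subgroups of five measurements each
    subgroups = [
        [10.2, 10.1,  9.9, 10.0, 10.3],
        [10.1, 10.0, 10.2,  9.8, 10.1],
        [ 9.9, 10.2, 10.0, 10.1, 10.0],
        [10.0,  9.8, 10.1, 10.2, 10.0],
    ]
    A2, D3, D4, d2 = 0.577, 0.0, 2.114, 2.326   # published constants for subgroup size n = 5

    xbars  = [statistics.mean(s) for s in subgroups]
    ranges = [max(s) - min(s) for s in subgroups]
    xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)

    # X-bar and R chart control limits
    x_ucl, x_lcl = xbarbar + A2 * rbar, xbarbar - A2 * rbar
    r_ucl, r_lcl = D4 * rbar, D3 * rbar
    in_control = (all(x_lcl <= x <= x_ucl for x in xbars)
                  and all(r_lcl <= r <= r_ucl for r in ranges))
    print("In statistical control:", in_control)

    # Step 9: estimate capability only once the process is stable
    if in_control:
        usl, lsl = 10.6, 9.4                    # illustrative specification limits
        sigma_hat = rbar / d2                   # within-subgroup estimate of sigma
        cp  = (usl - lsl) / (6 * sigma_hat)
        cpk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma_hat)
        print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")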

3.4 SIX SIGMA CONCEPTS OF PROCESS CAPABILITY

Six Sigma

Six Sigma is a system of practices originally developed by Motorola to systematically improve processes by eliminating defects, where defects are defined as units that are not members of the intended population. Since it was originally developed, Six Sigma has become an element of many Total Quality Management (TQM) initiatives. The approach was pioneered by Bill Smith at Motorola in 1986 and was originally defined as a metric for measuring defects and improving quality, together with a methodology to reduce defect levels below 3.4 Defects Per Million Opportunities (DPMO). Six Sigma is a registered service mark and trademark of Motorola, Inc., and Motorola has reported over US$17 billion in savings from Six Sigma as of 2006. In addition to Motorola, companies which adopted Six Sigma methodologies early on and continue to practice them today include Bank of America, Caterpillar, Honeywell International (previously known as AlliedSignal), Raytheon and General Electric (where it was introduced by Jack Welch).
Recently Six Sigma has been integrated with the TRIZ methodology for problem solving and product design.

Key concepts of Six Sigma

At its core, Six Sigma revolves around a few key concepts:
- Critical to Quality: attributes most important to the customer
- Defect: failing to deliver what the customer wants
- Process Capability: what your process can deliver
- Variation: what the customer sees and feels
- Stable Operations: ensuring consistent, predictable processes to improve what the customer sees and feels
- Design for Six Sigma: designing to meet customer needs and process capability

Methodology

Six Sigma has two key methodologies: DMAIC and DMADV. DMAIC is used to improve an existing business process. DMADV is used to create new product or process designs in such a way that the result is more predictable, mature and defect-free performance.

DMAIC

The basic methodology consists of the following five steps:
- Define the process improvement goals that are consistent with customer demands and enterprise strategy.
- Measure the current process and collect relevant data for future comparison.
- Analyze to verify the relationships and causality of factors. Determine what the relationships are, and attempt to ensure that all factors have been considered.
- Improve or optimize the process based upon the analysis, using techniques like design of experiments.
- Control to ensure that any variances are corrected before they result in defects. Set up pilot runs to establish process capability, transition to production, and thereafter continuously measure the process and institute control mechanisms.

DMADV

The basic methodology consists of the following five steps:
- Define the goals of the design activity that are consistent with customer demands and enterprise strategy.
- Measure and identify CTQs (critical-to-quality characteristics), product capabilities, production process capability, and risk assessments.
- Analyze to develop and design alternatives, create a high-level design and evaluate design capability to select the best design.
- Design details, optimize the design, and plan for design verification. This phase may require simulations.
- Verify the design, set up pilot runs, implement the production process and hand over to the process owners.

Some people have used DMAICR (Realize). Others contend that focusing on the financial gains realized through Six Sigma is counter-productive and that such financial gains are simply by-products of a good process improvement. Another flavor of Design for Six Sigma is the DMEDI method. This process is almost exactly like the DMADV process, utilizing the same toolkit, but with a different acronym: DMEDI stands for Define, Measure, Explore, Develop, Implement.

Quality approaches and models

DFSS (Design for Six Sigma) - A systematic methodology utilizing tools, training and measurements to enable us to design products and processes that meet customer expectations and can be produced at Six Sigma quality levels.

DMAIC (Define, Measure, Analyze, Improve and Control) - A process for continued improvement. It is systematic, scientific and fact based. This closed-loop process eliminates unproductive steps, often focuses on new measurements, and applies technology for improvement.

Six Sigma - A vision of quality which equates with only 3.4 defects per million opportunities for each product or service transaction; it strives for perfection.

Quality Tools

Associates are exposed to various tools and terms related to quality. Below are just a few of them.

Control Chart - Monitors variance in a process over time and alerts the business to unexpected variance which may cause defects.
Defect Measurement - Accounting for the number or frequency of defects that cause lapses in product or service quality.

Pareto Diagram - Focuses our efforts on the problems that have the greatest potential for improvement by showing relative frequency and/or size in a descending bar graph. Based on the proven Pareto principle: 20% of the sources cause 80% of any problem.

Process Mapping - Illustrated description of how things get done, which enables participants to visualize an entire process and identify areas of strength and weakness. It helps reduce cycle time and defects while recognizing the value of individual contributions.

Root Cause Analysis - Study of the original reason for nonconformance with a process. When the root cause is removed or corrected, the nonconformance will be eliminated.

Statistical Process Control - The application of statistical methods to analyze data, and to study and monitor process capability and performance.

Tree Diagram - Graphically shows any broad goal broken into different levels of detailed actions. It encourages team members to expand their thinking when creating solutions.

Quality Terms

Black Belt - Leaders of teams responsible for measuring, analyzing, improving and controlling key processes that influence customer satisfaction and/or productivity growth. Black Belts are full-time positions.

Control - The state of stability, normal variation and predictability; the process of regulating and guiding operations and processes using quantitative data.

CTQ: Critical to Quality (Critical Y) - Element of a process or practice which has a direct impact on its perceived quality.

Customer Needs, Expectations - Needs, as defined by customers, which meet their basic requirements and standards.

Defects - Sources of customer irritation. Defects are costly to both customers and to manufacturers or service providers. Eliminating defects provides cost benefits.

Green Belt - Similar to Black Belt but not a full-time position.

Master Black Belt - First and foremost teachers. They also review and mentor Black Belts. Selection criteria for Master Black Belts are quantitative skills and the ability to teach and mentor. Master Black Belts are full-time positions.

Variance - A change in a process or business practice that may alter its expected outcome.

Statistics and robustness

The core of the Six Sigma methodology is a data-driven, systematic approach to problem solving, with a focus on customer impact. Statistical tools and analysis are often useful in the process. However, it is a mistake to view the core of the Six Sigma methodology as statistics; an acceptable Six Sigma project can be started with only rudimentary statistical tools. Still, some professional statisticians criticize Six Sigma because practitioners have highly varied levels of understanding of the statistics involved. Six Sigma as a problem-solving approach has traditionally been used in fields such as business, engineering, and production processes.

Roles required for implementation

Six Sigma identifies five key roles for its successful implementation:
- Executive Leadership includes the CEO and other key top management team members. They are responsible for setting up a vision for Six Sigma implementation. They also empower the other role holders with the freedom and resources to explore new ideas for breakthrough improvements.
- Champions are responsible for the Six Sigma implementation across the organization in an integrated manner. The Executive Leadership draws them from the upper management. Champions also act as mentors to Black Belts. At GE this level of certification is now called "Quality Leader".
- Master Black Belts, identified by Champions, act as in-house expert coaches for the organization on Six Sigma. They devote 100% of their time to Six Sigma. They assist Champions and guide Black Belts and Green Belts. Apart from the usual rigor of statistics, their time is spent on ensuring integrated deployment of Six Sigma across various functions and departments.
- Experts: this level of skill is used primarily within the aerospace and defense business sectors. Experts work across company boundaries, improving services, processes, and products for their suppliers, their entire campuses, and their customers. Raytheon was one of the first companies to introduce Experts to its organization. At Raytheon, Experts work not only across multiple sites, but across business divisions, incorporating lessons learned throughout the company.
- Black Belts operate under Master Black Belts to apply the Six Sigma methodology to specific projects. They devote 100% of their time to Six Sigma. They primarily focus on Six Sigma project execution, whereas Champions and Master Black Belts focus on identifying projects/functions for Six Sigma.
- Green Belts are the employees who take up Six Sigma implementation along with their other job responsibilities. They operate under the guidance of Black Belts and support them in achieving the overall results.

In many successful modern programs, Green Belts and Black Belts are empowered to initiate, expand, and lead projects in their area of responsibility. The roles as defined above, therefore, conform to the antiquated Mikel Harry/Richard Schroeder model, which is far from being universally accepted. The terms "black belt" and "green belt" are borrowed from the ranking systems in various martial arts.

The term Six Sigma

Sigma (the lower-case Greek letter σ) is used to represent the standard deviation (a measure of variation) of a population (lower-case s is an estimate based on a sample). The term "six sigma process" comes from the notion that if one has six standard deviations between the mean of a process and the nearest specification limit, one will make practically no items that exceed the specifications. This is the basis of the Process Capability Study, often used by quality professionals. The term Six Sigma has its roots in this tool, rather than in simple process standard deviation, which is also measured in sigmas. Criticism of the tool itself, and of the way that the term was derived from the tool, often sparks criticism of Six Sigma.

The widely accepted definition of a six sigma process is one that produces 3.4 defective parts per million opportunities (DPMO). A process that is normally distributed will have 3.4 parts per million beyond a point that is 4.5 standard deviations above or below the mean (one-sided Capability Study). This implies that 3.4 DPMO corresponds to 4.5 sigmas, not six as the process name would imply. This can be confirmed by running a Capability Study in QuikSigma or Minitab on data with a mean of 0, a standard deviation of 1, and an upper specification limit of 4.5. The 1.5 sigmas added to the name Six Sigma are arbitrary, and they are called the "1.5 sigma shift" (SBTI Black Belt material, ca. 1998). Dr. Donald Wheeler dismisses the 1.5 sigma shift as "goofy".

In a Capability Study, sigma refers to the number of standard deviations between the process mean and the nearest specification limit, rather than the standard deviation of the process, which is also measured in sigmas.
As the process standard deviation goes up, or the mean of the process moves away from the center of the tolerance, the Process Capability sigma number goes down, because fewer standard deviations will then fit between the mean and the nearest specification limit (see Cpk Index). The notion that, in the long term, processes usually do not perform as well as they do in the short term is correct. That requires that the Process Capability sigma based on long-term data be less than or equal to an estimate based on short-term sigma. However, the original use of the 1.5 sigma shift is as shown above, and it implicitly assumes the opposite.

As sample size increases, the error in the estimate of standard deviation converges much more slowly than the estimate of the mean (see confidence interval). Even with a few dozen samples, the estimate of standard deviation often drags an alarming amount of uncertainty into the Capability Study calculations. It follows that estimates of defect rates can be greatly influenced by uncertainty in the estimate of standard deviation, and that the defective-parts-per-million estimates produced by Capability Studies often ought not to be taken too literally.

Estimates for the number of defective parts per million produced also depend on knowing something about the shape of the distribution from which the samples are drawn. Unfortunately, there are no means for proving that data belong to any particular distribution. One can only assume normality, based on finding no evidence to the contrary. Estimating defective parts per million down into the hundreds or tens of units based on such an assumption is wishful thinking, since actual defects are often deviations from normality, which have been assumed not to exist.

The 1.5 Sigma Drift

The 1.5 sigma drift is the drift of a process mean, which occurs in all processes in a six sigma program. If a product being manufactured measures 100 +/- 3 cm (97 - 103 cm), over time the 1.5 sigma drift may cause the average to range up to 98.5 - 104.5 cm or down to 95.5 - 101.5 cm. This could be of significance to customers.

The 1.5 shift was introduced by Mikel Harry. Harry referred to a paper about tolerancing (how the overall error in an assembly is affected by the errors in its components) written in 1975 by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts." Evans refers to a paper by Bender in 1962, "Benderizing Tolerances - A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups." Bender looked at the classical situation with a stack of disks and how the overall error in the size of the stack relates to errors in the individual disks. Based on probability, approximations and experience, Bender suggested inflating the classical root-sum-of-squares estimate of the overall tolerance by a factor of 1.5.

Harry then took this a step further. Supposing that there is a process in which 5 samples are taken every half hour and plotted on a control chart, Harry considered the initial 5 samples as being "short term" (Harry's n=5) and the samples throughout the day as being "long term" (Harry's g=50 points). Due to the random variation in the first 5 points, the mean of the initial sample is different from the overall mean. Harry derived a relationship between the short-term and long-term capability, using the equation above, to produce a capability shift, or "Z shift", of 1.5. Over time, the original meaning of "short term" and "long term" has changed to refer to long-term drifting means.

Harry has clung tenaciously to the 1.5, but over the years its derivation has been modified. In a recent note from Harry: "We employed the value of 1.5 since no other empirical information was available at the time of reporting." In other words, 1.5 has now become an empirical rather than a theoretical value. A further softening from Harry: "... the 1.5 constant would not be needed as an approximation." Despite this, industry has fixed on the idea that it is impossible to keep processes on target. No matter what is done, process means will drift by 1.5 sigma. In other words, if a process has a target value of 10.0, and control limits work out to be 13.0 and 7.0, over the long term the mean will drift to 11.5 (or 8.5), with control limits changing to 14.5 and 8.5.

In truth, any process where the mean changes by 1.5 sigma, or any other amount, is not in statistical control. Such a change can often be detected by a trend on a control chart. A process that is not in control is not predictable. It may begin to produce defects, no matter where specification limits have been set.

Digital Six Sigma

In an effort to permanently minimize variation, Motorola has evolved the Six Sigma methodology to use information systems tools to make business improvements absolutely permanent. Motorola calls this effort Digital Six Sigma.

Criticism

Some companies that have embraced it have done poorly. The cartoonist Scott Adams featured Six Sigma in a Dilbert cartoon published on November 26, 2006. When the process is introduced to his company, Dilbert asks, "Why don't we jump on a fad that hasn't already been widely discredited?" The Dilbert character states, "Fortune magazine says... blah blah... most companies that used Six Sigma have trailed the S&P 500."

Dilbert was referring to an article in Fortune which stated that of 58 large companies that have announced Six Sigma programs, 91 percent have trailed the S&P 500 since. The statement is attributed to an analysis by Charles Holland of consulting firm Qualpro (which espouses a competing quality-improvement process). The gist of the article is that Six Sigma is effective at what it is intended to do, but that it is narrowly designed to fix an existing process and does not help in coming up with new products or disruptive technologies.

Based on arbitrary standards

While 3.4 defects per million might work well for certain products/processes, it might not be ideal for others. A pacemaker might need higher standards, for example, whereas a direct mail advertising campaign might need less. The basis and justification for choosing 6 as the number of standard deviations is not clearly explained.

What is Six Sigma?

Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating defects in manufacturing and service-related processes. Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and philosophy. The goal of Six Sigma is to increase profits by eliminating variability, defects and waste that undermine customer loyalty.

Six Sigma can be understood/perceived at three levels:
1. Metric: 3.4 Defects Per Million Opportunities. DPMO allows you to take the complexity of the product/process into account. A rule of thumb is to consider at least three opportunities for a physical part/component - one for form, one for fit and one for function - in the absence of better considerations. Also, you want to be Six Sigma in the Critical to Quality characteristics and not the whole unit/characteristics.
2. Methodology: DMAIC/DFSS structured problem-solving roadmap and tools.
3. Philosophy: Reduce variation in your business and take customer-focused, data-driven decisions.

Six Sigma is a methodology that provides businesses with the tools to improve the capability of their business processes. This increase in performance and decrease in process variation leads to defect reduction and vast improvement in profits, employee morale and quality of product.

The History of Six Sigma

The roots of Six Sigma as a measurement standard can be traced back to Carl Friedrich Gauss (1777-1855), who introduced the concept of the normal curve. Six Sigma as a measurement standard in product variation can be traced back to the 1920s, when Walter Shewhart showed that three sigma from the mean is the point where a process requires correction. Many measurement standards (Cpk, Zero Defects, etc.) later came on the scene, but credit for coining the term "Six Sigma" goes to a Motorola engineer named Bill Smith. (Incidentally, Six Sigma is a federally registered trademark of Motorola.)

In the early and mid-1980s, with Chairman Bob Galvin at the helm, Motorola engineers decided that the traditional quality levels, which measured defects in thousands of opportunities, did not provide enough granularity. Instead, they wanted to measure the defects per million opportunities. Motorola developed this new standard and created the methodology and the cultural change needed to go with it. Six Sigma helped Motorola realize powerful bottom-line results in their organization; in fact, they documented more than $16 billion in savings as a result of their Six Sigma efforts.

Since then, hundreds of companies around the world have adopted Six Sigma as a way of doing business. This is a direct result of many of America's leaders openly praising the benefits of Six Sigma, leaders such as Larry Bossidy of Allied Signal (now Honeywell) and Jack Welch of General Electric Company. Rumor has it that Larry and Jack were playing golf one day and Jack bet Larry that he could implement Six Sigma faster and with greater results at GE than Larry did at Allied Signal. The results speak for themselves.

Six Sigma has evolved over time. It is more than just a quality system like TQM or ISO; it is a way of doing business. As Geoff Tennant describes in his book Six Sigma: SPC and TQM in Manufacturing and Services: "Six Sigma is many things, and it would perhaps be easier to list all the things that Six Sigma quality is not. Six Sigma can be seen as: a vision; a philosophy; a symbol; a metric; a goal; a methodology."
Software used for Six Sigma

There are generally two classes of software used to support Six Sigma: analysis tools, which are used to perform statistical or process analysis, and program management tools, used to manage and track a corporation's entire Six Sigma program. Analysis tools include statistical software such as Minitab, JMP, SigmaXL, RapAnalyst or Statgraphics, as well as process analysis tools such as iGrafx. Some alternatives include Microsoft Visio, Telelogic System Architect, IBM WebSphere Business Modeler, and Proforma Corp. ProVision.
For program management, tracking and reporting, the most popular tools are Instantis, PowerSteering, iNexus and SixNet. Other Six Sigma tools for IT management include Proxima Technology Centauri, HP Mercury, and BMC Remedy.

Some commonly noted limitations are:
1. Six Sigma was industry specific.
2. The "average" was very subjective in nature; it was very difficult to define the average.
3. There were problems in finding whether Six Sigma had been achieved or not.

Where did the name Six Sigma come from?

In my recollection, two recurring questions have dominated the field of Six Sigma. The first inquiry can be described by the global question: why 6σ and not some other level of capability? The second inquiry is more molecular. It can be summarized by the question: where does the 1.5σ shift factor come from, and why 1.5 versus some other magnitude? For details on this subject, refer to: Harry, M. J., Resolving the Mysteries of Six Sigma: Statistical Constructs and Engineering Rationale, First Edition, 2003, Palladyne Publishing, Phoenix, Arizona. (Note: this particular publication will be available by October 2003.) But until then, we will consider the following thumbnail sketch.

At the onset of Six Sigma in 1985, this writer was working as an engineer for the Government Electronics Group of Motorola. By chance connection, I linked up with another engineer by the name of Bill Smith (originator of the Six Sigma concept in 1984). At that time, he suggested Motorola should require 50 percent design margins for all of its key product performance specifications. Statistically speaking, such a safety margin is equivalent to a 6 sigma level of capability. When considering the performance tolerance of a critical design feature, he believed a 25 percent cushion was not sufficient for absorbing a sudden shift in process centering. Bill believed the typical shift was on the order of 1.5σ (relative to the target value). In other words, a four sigma level of capability would normally be considered sufficient, if centered. However, if the process center was somehow knocked off its central location (on the order of 1.5σ), the initial capability of 4σ would be degraded to 4.0σ - 1.5σ = 2.5σ. Of course, this would have a consequential impact on defects. In turn, a sudden increase in defects would have an adverse effect on reliability. As should be apparent, such a domino effect would continue straight up the value chain.

Regardless of the shift magnitude, those of us working this issue fully recognized that the initial estimate of capability will often erode over time in a very natural way, thereby increasing the expected rate of product defects (when considering a protracted period of production).
Extending beyond this, we concluded that the product defect rate was highly correlated to the long-term process capability, not the short-term capability. Of course, such conclusions were predicated on the statistical analysis of empirical data gathered on a wide array of electronic devices.

Thus, we come to understand three things. Firstly, we recognized that the instantaneous reproducibility of a critical-to-quality characteristic is fully dependent on the goodness of fit between the operating bandwidth of the process and the corresponding bandwidth of the performance specification. Secondly, the quality of that interface can be substantively and consequentially disturbed by process centering error. Of course, both of these factors profoundly impact long-term capability. Thirdly, we must seek to qualify our critical processes at a 6σ level of short-term capability if we are to enjoy a long-term capability of 4.5σ. By further developing these insights through applied research, we were able to greatly extend our understanding of the many statistical connections between such things as design margin, process capability, defects, field reliability, customer satisfaction, and economic success.

Statistical Six Sigma Definition

Six Sigma at many organizations simply means a measure of quality that strives for near perfection. But the statistical implications of a Six Sigma program go well beyond the qualitative eradication of customer-perceptible defects. It is a methodology that is well rooted in mathematics and statistics.

The objective of Six Sigma quality is to reduce process output variation so that, on a long-term basis (which is the customer's aggregate experience with our process over time), this will result in no more than 3.4 defective Parts Per Million (PPM) opportunities (or 3.4 Defects Per Million Opportunities, DPMO). For a process with only one specification limit (upper or lower), this results in six process standard deviations between the mean of the process and the customer's specification limit (hence, 6 Sigma). For a process with two specification limits (upper and lower), this translates to slightly more than six process standard deviations between the mean and each specification limit, such that the total defect rate corresponds to the equivalent of six process standard deviations.

FIGURE 3.7

Many processes are prone to being influenced by special and/or assignable causes that impact the overall performance of the process relative to the customer's specification. That is, the overall performance of our process as the customer views it might be 3.4 DPMO (corresponding to long-term performance of 4.5 sigma). However, our process could indeed be capable of producing a near-perfect output (short-term capability, also known as process entitlement, of 6 sigma). The difference between the best a process can be, measured by short-term process capability, and the customer's aggregate experience (long-term capability) is known as the shift, depicted as Z-shift or σ-shift. For a typical process, the value of the shift is 1.5; therefore, when one hears about "6 Sigma", inherent in that statement is that the short-term capability of the process is 6, the long-term capability is 4.5 (3.4 DPMO, what the customer sees), with an assumed shift of 1.5. Typically, when reference is given using DPMO, it denotes the long-term capability of the process, which is the customer's experience. The role of the Six Sigma professional is to quantify the process performance (short-term and long-term capability) and, based on the true process entitlement and process shift, establish the right strategy to reach the established performance objective.

As the process sigma value increases from zero to six, the variation of the process around the mean value decreases. With a high enough value of process sigma, the process approaches zero variation and is known as "zero defects".
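The relationship between short-term sigma, the assumed 1.5 sigma shift, and the DPMO figures quoted above can be checked with a few lines of Python using only the standard library; the 3.4 DPMO value is simply the one-sided normal tail beyond 4.5 standard deviations. This is a minimal sketch, not part of the original text.

    import math

    def tail_ppm(z):
        # One-sided upper-tail area beyond z standard deviations, in parts per million
        return 0.5 * math.erfc(z / math.sqrt(2)) * 1_000_000

    short_term_sigma = 6.0
    shift = 1.5
    long_term_sigma = short_term_sigma - shift        # 4.5

    centred_ppm = 2 * tail_ppm(short_term_sigma)      # both tails of a centred 6-sigma process
    shifted_ppm = tail_ppm(long_term_sigma)           # near tail once the mean drifts 1.5 sigma

    print(f"centred 6-sigma defect rate  : {centred_ppm:.4f} ppm")   # about 0.002 ppm
    print(f"1.5-sigma-shifted defect rate: {shifted_ppm:.1f} ppm")   # about 3.4 ppm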

Remembering Bill Smith, Father of Six Sigma

Born in Brooklyn, New York, in 1929, Bill Smith graduated from the U.S. Naval Academy in 1952 and studied at the University of Minnesota School of Business. In 1987, after working for nearly 35 years in engineering and quality assurance, he joined Motorola, serving as vice president and senior quality assurance manager for the Land Mobile Products Sector.

In honor of Smith's talents and dedication, Northwestern University's Kellogg Graduate School of Management established an endowed scholarship in Smith's name. Dean Donald P. Jacobs of the Kellogg School notified Motorola's Robert Galvin of the school's intention less than a month after Smith died. "Bill was an extremely effective and inspiring communicator," Jacobs wrote in his July 27, 1993, letter. "He never failed to impress his audience by the depth of his knowledge, the extent of his personal commitment, and the level of his intellectual powers." The school created the scholarship fund in recognition of Smith's contributions to Kellogg and his dedication to the teaching and practice of quality. It was a fitting tribute to a man who influenced business students and corporate leaders worldwide with his innovative Six Sigma strategy.

As the one who followed most closely in his footsteps, Marjorie Hook is well positioned to speculate about Bill Smith's take on the 2003 version of Six Sigma. "Today I think people sometimes try to make Six Sigma seem complicated and overly technical," she said. "His approach was, 'If you want to improve something, involve the people who are doing the job.' He always wanted to make it simple so people would use it."

Six Sigma Costs and Savings

The financial benefits of implementing Six Sigma at your company can be significant. Many people say that it takes money to make money. In the world of Six Sigma quality, the saying also holds true: it takes money to save money using the Six Sigma quality methodology. You can't expect to significantly reduce costs and increase sales using Six Sigma without investing in training, organizational infrastructure and culture evolution.

Sure, you can reduce costs and increase sales in a localized area of a business using the Six Sigma quality methodology, and you can probably do it inexpensively by hiring an ex-Motorola or GE Black Belt. I like to think of that scenario as a "get rich quick" application of Six Sigma.

Companies of all types and sizes are in the midst of a quality revolution:
- GE saved $12 billion over five years and added $1 to its earnings per share.
- Honeywell (AlliedSignal) recorded more than $800 million in savings.
- GE produces annual benefits of over $2.5 billion across the organization from Six Sigma.
- Motorola reduced manufacturing costs by $1.4 billion from 1987-1994.
- Six Sigma reportedly saved Motorola $15 billion over the last 11 years.

The above quotations may in fact be true, but pulling the numbers out of the context of the organizations' revenues does nothing to help a company figure out whether Six Sigma is right for it. For example, how much can a $10 million or $100 million company expect to save?

I investigated what the companies themselves had to say about their Six Sigma costs and savings. I didn't believe anything that was written on third-party websites, was estimated by "experts", or was written in books on the topic. I reviewed the literature and only captured facts found in annual reports, website pages and presentations found on company websites.

I investigated Motorola, Allied Signal, GE and Honeywell. I chose these four companies because they are the companies that invented and refined Six Sigma: they are the most mature in their deployments and culture changes. As the Motorola website says, they invented it in 1986. Allied Signal deployed Six Sigma in 1994, GE in 1995. Honeywell was included because Allied Signal merged with Honeywell in 1999 (they launched their own initiative in 1998). Many companies have deployed Six Sigma between the years of GE and Honeywell; we'll leave those companies for another article.

Table 3.8: Companies And The Year They Implemented Six Sigma

Company Name                                      Year Began Six Sigma
Motorola (NYSE:MOT)                               1986
Allied Signal (merged with Honeywell in 1999)     1994
GE (NYSE:GE)                                      1995
Honeywell (NYSE:HON)                              1998
Ford (NYSE:F)                                     2000

Table 3.9 identifies, by company, the yearly revenues, the Six Sigma costs (investment) per year where available, and the financial benefits (savings).
There are many blanks, especially where the investment is concerned. I've presented as much information as the companies have publicly disclosed.

Table 3.9: Six Sigma Cost And Savings By Company

Company / Year        Revenue ($B)  Invested ($B)  % Revenue Invested  Savings ($B)  % Revenue Savings
Motorola 1986-2001    356.9(e)      ND             -                   16            4.5
Allied Signal 1998    15.1          ND             -                   0.5           3.3
GE 1996               79.2          0.2            0.3                 0.2           0.2
GE 1997               90.8          0.4            0.4                 1             1.1
GE 1998               100.5         0.5            0.4                 1.3           1.2
GE 1999               111.6         0.6            0.5                 2             1.8
GE 1996-1999          382.1         1.6            0.4                 4.4           1.2
Honeywell 1998        23.6          ND             -                   0.5           2.2
Honeywell 1999        23.7          ND             -                   0.6           2.5
Honeywell 2000        25.0          ND             -                   0.7           2.6
Honeywell 1998-2000   72.3          ND             -                   1.8           2.4
Ford 2000-2002        43.9          ND             -                   1             2.3

Key: $B = $ billions, United States; (e) = estimated (yearly revenue for 1986-1992 could not be found); ND = not disclosed. Note: numbers are rounded to the nearest tenth.

Although the complete picture of investment and savings by year is not present, Six Sigma savings can clearly be significant to a company. The savings as a percentage of revenue vary from 1.2% to 4.5%. And what we can see from the GE deployment is that a company shouldn't expect more than a break-even in the first year of implementation. Six Sigma is not a "get rich quick" methodology.
I like to think of it like my retirement savings plan: Six Sigma is a "get rich slow" methodology, the take-away point being that you will get rich if you plan properly and execute consistently.

As GE's 1996 annual report states, "It has been estimated that less than Six Sigma quality, i.e., the three-to-four Sigma levels that are average for most U.S. companies, can cost a company as much as 10-15% of its revenues. For GE, that would mean $8-12 billion." With GE's 2001 revenue of $111.6 billion, this would translate into $11.2-16.7 billion of savings. Although $2 billion worth of savings in 1999 is impressive, it appears that even GE hasn't yet been able to capture the losses due to poor quality - or maybe they're above the three-to-four Sigma levels that are the average for most U.S. companies? In either case, 1.2-4.5% of revenue is significant and should catch the eye of any CEO or CFO. For a $30 million a year company, that can translate into between $360,000 and $1,350,000 in bottom-line-impacting savings per year. It takes money to make money.

Complementary Technologies

It is difficult to concisely describe the ways in which Six Sigma may be interwoven with other initiatives (or vice versa). The following paragraphs broadly capture some of the possible interrelationships between initiatives.

Six Sigma and improvement approaches such as CMM, CMMI, and PSP/TSP are complementary and mutually supportive. Depending on current organizational, project or individual circumstances, Six Sigma could be an enabler to launch CMM, CMMI, PSP, or TSP. Or, it could be a refinement toolkit/methodology within these initiatives. For instance, it might be used to select the highest priority Process Areas within CMMI or to select the highest leverage metrics within PSP.

Examination of the Goal-Question-Metric (GQM), Initiating-Diagnosing-Establishing-Acting-Leveraging (IDEAL), and Practical Software Measurement (PSM) paradigms likewise shows compatibility and consistency with Six Sigma. GQ(I)M meshes well with the Define-Measure steps of Six Sigma. IDEAL and Six Sigma share many common features, with IDEAL being slightly more focused on change management and organizational issues and Six Sigma being more focused on tactical, data-driven analysis and decision making. PSM provides a software-tailored approach to measurement that may well serve the Six Sigma improvement framework.

Six Sigma Process Capability

So how do you know your processes cut the mustard? With Six Sigma, it all depends on the process capability. Process capability is a measure of how much variation or deviation occurs between what normally happens and what is expected to happen. For example, normally after you upgrade a system, you have a working system. If you have one piece working and a couple of other pieces broken, you now have a variation from what was expected to happen.

To elaborate a little further, there are three main characteristics of process capability:
- The requirements are frequently a range of acceptable values.
- The process is capable when its variation consistently falls within that range.
- The process sigma level is an indicator of its capability and likelihood of meeting expectations.

Expanding on this concept a little, to be sure we all have a clearer understanding of how this relates to us: the first characteristic states that the requirements are frequently a range of acceptable values. For example, the billing system is always to be available weekdays between the hours of 6:00 am and 6:00 pm, so our customers will be satisfied when the system is available during these timeframes. The problem really arises with clinical systems, where the system availability needs to be 100% of the time; that is a difficult range of acceptable values.

The next characteristic of process capability is that a process is considered capable when its variation consistently falls within that range of acceptable values. Keep in mind, variation means the process doesn't behave as anticipated. Consider the above: if the billing system is down at 1:00 pm weekly, this is outside the range of acceptable times. It is a variation, and it means the process is not considered capable.

The final characteristic of process capability is the process sigma level. Remember, the lower the sigma level, the greater the variation. A 1.0 sigma level indicates that the process has a good deal of variation and is not meeting requirements. In Six Sigma, a 6.0 sigma level is the goal.

What Is Six Sigma and the 1.5 Shift?

To quote a Motorola handout from about 1987:

The performance of a product is determined by how much margin exists between the design requirement of its characteristics (and those of its parts/steps), and the actual value of those characteristics. These characteristics are produced by processes in the factory, and at the suppliers.
Each process attempts to reproduce its characteristics identically from unit to unit, but within each process some variation occurs. For some processes, such as those which use real-time feedback to control the outcome, the variation is quite small, and for others it may be quite large. Variation of the process is measured in standard deviations (sigma) from the mean. The normal variation, defined as the process width, is +/-3 sigma about the mean.

Approximately 2700 parts per million parts/steps will fall outside the normal variation of +/-3 sigma. This, by itself, does not appear disconcerting. However, when we build a product containing 1200 parts/steps, we can expect 3.24 defects per unit (1200 x 0.0027), on average. This would result in a rolled yield of less than 4%, which means fewer than 4 units out of every 100 would go through the entire manufacturing process without a defect.

Thus, we can see that for a product to be built virtually defect-free, it must be designed to accept characteristics which are significantly more than +/-3 sigma away from the mean. It can be shown that a design which can accept twice the normal variation of the process, or +/-6 sigma, can be expected to have no more than 3.4 parts per million defective for each characteristic, even if the process mean were to shift by as much as +/-1.5 sigma. In the same case of a product containing 1200 parts/steps, we would now expect only 0.0041 defects per unit (1200 x 0.0000034). This would mean that 996 units out of 1000 would go through the entire manufacturing process without a defect.

To quantify this, the Capability Index (Cp) is used, where:

    Cp = Design Specification Width / Process Width

A design specification width of +/-6 sigma and a process width of +/-3 sigma yields a Cp of 12/6 = 2. However, the process mean can shift. When the process mean is shifted with respect to the design mean, the Capability Index is adjusted with a factor k and becomes Cpk, where Cpk = Cp(1 - k) and:

    k = Process Shift / (Design Specification Width / 2)

The k factor for a +/-6 sigma design with a 1.5 sigma process shift is 1.5/(12/2) = 1.5/6 = 0.25, and Cpk = 2(1 - 0.25) = 1.5.
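The arithmetic in the Motorola hand-out can be reproduced directly. The minimal Python sketch below (standard library only, with illustrative variable names) computes Cp and Cpk for a +/-6 sigma design with a 1.5 sigma mean shift, plus the per-characteristic defect rate, defects per unit and rolled yield for the 1200 part/step product used in the example, alongside the +/-3 sigma baseline.

    import math

    def upper_tail(z):
        # P(Z > z) for a standard normal variable
        return 0.5 * math.erfc(z / math.sqrt(2))

    design_sigma = 6.0     # specification placed at +/- 6 sigma of the process
    shift = 1.5            # assumed long-term shift of the process mean
    parts = 1200           # parts/steps in the product example above

    # Capability indices as defined in the text
    cp = (2 * design_sigma) / 6.0      # design spec width / process width = 12/6 = 2
    k = shift / design_sigma           # process shift / (half the design spec width) = 0.25
    cpk = cp * (1 - k)                 # = 1.5

    # Per-characteristic defect probability for the shifted process
    p_defect = upper_tail(design_sigma - shift) + upper_tail(design_sigma + shift)

    dpu = parts * p_defect                  # expected defects per unit
    rolled_yield = (1 - p_defect) ** parts  # chance a unit passes every part/step defect-free

    # The +/- 3 sigma baseline from the hand-out, for comparison
    p3 = 2 * upper_tail(3.0)                # about 0.0027 per characteristic
    print(f"Cp = {cp:.1f}, Cpk = {cpk:.2f}")
    print(f"6-sigma design: {p_defect * 1e6:.1f} ppm, DPU = {dpu:.4f}, rolled yield = {rolled_yield:.1%}")
    print(f"3-sigma design: DPU = {parts * p3:.2f}, rolled yield = {(1 - p3) ** parts:.1%}")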

Six Sigma (6σ) is a business-driven, multi-faceted approach to process improvement, reduced costs, and increased profits. With a fundamental principle of improving customer satisfaction by reducing defects, its ultimate performance target is virtually defect-free processes and products (3.4 or fewer defective parts per million (ppm)). The Six Sigma methodology, consisting of the steps Define - Measure - Analyze - Improve - Control, is the roadmap to achieving this goal. Within this improvement framework, it is the responsibility of the improvement team to identify the process, the definition of defect, and the corresponding measurements. This degree of flexibility enables the Six Sigma method, along with its toolkit, to easily integrate with existing models of software process implementation.

Six Sigma originated at Motorola in the early 1980s in response to a CEO-driven challenge to achieve a tenfold reduction in product-failure levels in five years. Meeting this challenge required swift and accurate root-cause analysis and correction. In the mid-1990s, Motorola divulged the details of their quality improvement framework, which has since been adopted by several large manufacturing companies.

Technical Detail

The primary goal of Six Sigma is to improve customer satisfaction, and thereby profitability, by reducing and eliminating defects. Defects may be related to any aspect of customer satisfaction: high product quality, schedule adherence, cost minimization. Underlying this goal is the Taguchi Loss Function, which shows that increasing defects leads to increased customer dissatisfaction and financial loss. Common Six Sigma metrics include defect rate (parts per million or ppm), sigma level, process capability indices, defects per unit, and yield. Many Six Sigma metrics can be mathematically related to the others.

The Six Sigma drive for defect reduction, process improvement and customer satisfaction is based on the "statistical thinking" paradigm [ASQ 00], [ASA 01]:
- Everything is a process.
- All processes have inherent variability.
- Data is used to understand the variability and drive process improvement decisions.
As the roadmap for actualizing the statistical thinking paradigm, the key steps in the Six Sigma improvement framework are Define - Measure - Analyze - Improve - Control. Six Sigma distinguishes itself from other quality improvement programs immediately in the Define step. When a specific Six Sigma project is launched, the customer satisfaction goals have likely been established and decomposed into sub-goals such as cycle time reduction, cost reduction, or defect reduction. (This may have been done using the Six Sigma methodology at a business/organizational level.) The Define stage for the specific project calls for baselining and benchmarking the process to be improved, decomposing the process into manageable sub-processes, further specifying goals/sub-goals and establishing the infrastructure to accomplish the goals. It also includes an assessment of the cultural/organizational change that might be needed for success.

Once an effort or project is defined, the team methodically proceeds through the Measure, Analyze, Improve, and Control steps. A Six Sigma improvement team is responsible for identifying relevant metrics based on engineering principles and models. With data/information in hand, the team then proceeds to evaluate the data/information for trends, patterns, causal relationships, root cause, etc. If needed, special experiments and modeling may be done to confirm hypothesized relationships or to understand the extent of leverage of factors; but many improvement projects may be accomplished with the most basic statistical and non-statistical tools. It is often necessary to iterate through the Measure-Analyze-Improve steps. When the target level of performance is achieved, control measures are then established to sustain performance. A partial list of specific tools to support each of these steps is shown in Figure 3.8.

FIGURE 3.8 : A partial list of tools supporting the Define - Measure - Analyze - Improve - Control steps
An important consideration throughout all the Six Sigma steps is to distinguish which process substeps significantly contribute to the end result. The defect rate of the process, service or final product is likely more sensitive to some factors than others. The analysis phase of Six Sigma can help identify the extent of improvement needed in each substep in order to achieve the target in the final product. It is important to remain mindful that six sigma performance (in terms of the ppm metric) is not required for every aspect of every process, product and service. It is the goal only where it quantitatively drives (i.e., is a significant control knob for) the end result of customer satisfaction and profitability. Industry currently runs at an average of about four sigma, which corresponds to 6210 defects per million opportunities. Depending on the exact definition of defect in payroll processing, for example, this sigma level could be interpreted as 6 out of every 1000 paychecks having an error. As four sigma is the average current performance, there are industry sectors running above and below this value. Internal Revenue Service (IRS) phone-in tax advice, for instance, runs at roughly two sigma, which corresponds to 308,537 errors per million opportunities. Again, depending on the exact definition of defect, this could be interpreted as 30 out of 100 phone calls resulting in erroneous tax advice. (Two sigma performance is where many noncompetitive companies run.) On the other extreme, domestic (U.S.) airline flight fatality rates run at better than six sigma, which could be interpreted as fewer than 3.4 fatalities per million passengers - that is, fewer than 0.00034 fatalities per 100 passengers. As just noted, flight fatality rates are better than six sigma, where six sigma denotes the actual performance level rather than a reference to the overall combination of philosophy, metric, and improvement framework. Because customer demands will likely drive different performance expectations, it is useful to understand the mathematical origin of the measure and the term six-sigma process. Conceptually, the sigma level of a process or product is where its customer-driven specifications intersect with its distribution. A centered six-sigma process has a normal distribution with mean = target and specifications placed 6 standard deviations to either side of the mean. At this point, the portions of the distribution that are beyond the specifications contain 0.002 ppm of the data (0.001 on each side). Practice has shown that most manufacturing processes experience a shift (due to drift over time) of 1.5 standard deviations, so that the mean no longer equals the target. When this happens in a six-sigma process, a larger portion of the distribution now extends beyond the specification limits: 3.4 ppm. Figure 3.9 depicts a 1.5-sigma-shifted distribution with 6-sigma annotations. In manufacturing, this shift results from things such as mechanical wear over time and causes the six-sigma defect rate to become 3.4 ppm. The magnitude of the shift may
vary, but empirical evidence indicates that 1.5 sigma is about average. Does this shift exist in the software process? While it will take time to build sufficient data repositories to verify this assumption within the software and systems sector, it is reasonable to presume that there are factors that would contribute to such a shift. Possible examples are declining procedural adherence over time, the learning curve, and constantly changing tools and technologies (hardware and software).
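The ppm figures quoted above (6210 at four sigma, 308,537 at two sigma, 0.002 ppm for a centered six-sigma process and 3.4 ppm after the 1.5-sigma shift) can be reproduced from the normal distribution. The short Python sketch below is illustrative and not part of the source text.

# Illustrative sketch: parts per million falling outside +/- spec_sigma specification
# limits for a process whose mean has drifted by `shift` standard deviations.
import math

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ppm_outside_specs(spec_sigma, shift=0.0):
    upper = 1.0 - norm_cdf(spec_sigma - shift)   # tail beyond the upper limit
    lower = norm_cdf(-spec_sigma - shift)        # tail beyond the lower limit
    return (upper + lower) * 1_000_000

print(f"2 sigma, 1.5 sigma shift : {ppm_outside_specs(2.0, 1.5):>12,.0f} ppm")
print(f"4 sigma, 1.5 sigma shift : {ppm_outside_specs(4.0, 1.5):>12,.0f} ppm")   # ~6,210
print(f"6 sigma, centered        : {ppm_outside_specs(6.0, 0.0):>12,.4f} ppm")   # ~0.002
print(f"6 sigma, 1.5 sigma shift : {ppm_outside_specs(6.0, 1.5):>12,.1f} ppm")   # ~3.4

Note that the commonly quoted 308,537 ppm at two sigma counts only the tail on the shifted side of the distribution; including the opposite tail, as this sketch does, gives roughly 308,770 ppm.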

FIGURE 3.9
Assumptions:
1. Normal distribution.
2. A process mean shift of 1.5 sigma from nominal is likely.
3. Process mean and standard deviation are known.
4. Defects are randomly distributed throughout units.
5. Parts and process steps are independent.
6. For this discussion, the original nominal value = target.

Key: sigma = standard deviation; mu = center of the distribution (shifted 1.5 sigma from its original, on-target location); the +/-3 sigma and +/-6 sigma marks show the specifications relative to the original target.
Figure 3.9 : Six Sigma Process with Mean Shifted from Nominal by 1.5 sigma
Usage Considerations
In the software and systems field, Six Sigma may be leveraged differently based on the state of the business. In an organization needing process consistency, Six Sigma
can help promote the establishment of a process. For an organization striving to streamline their existing processes, Six Sigma can be used as a refinement mechanism. In organizations at CMM level 1-3, defect free may seem an overwhelming stretch. Accordingly, an effective approach would be to use the improvement framework (Define-Measure-Analyze-Improve-Control) as a roadmap toward intermediate defect reduction goals. Level 1 and 2 organizations may find that adopting the Six Sigma philosophy and framework reinforces their efforts to launch measurement practices; whereas Level 3 organizations may be able to begin immediate use of the framework. As organizations mature to Level 4 and 5, which implies an ability to leverage established measurement practices, accomplishment of true six sigma performance (as defined by ppm defect rates) becomes a relevant goal. Many techniques in the Six Sigma toolkit are directly applicable to software and are already in use in the software industry. For instance, Voice of the Client and Quality Function Deployment are useful for developing customer requirements (and are relevant measures). There are numerous charting/calculation techniques that can be used to scrutinize cost, schedule, and quality (project-level and personal-level) data as a project proceeds. And, for technical development, there are quantitative methods for risk analysis and concept/design selection. The strength of Six Sigma comes from consciously and methodically deploying these tools in a way that achieves (directly or indirectly) customer satisfaction. As with manufacturing, it is likely that Six Sigma applications in software will reach beyond improvement of current processes/products and extend to design of new processes/products. Named Design for Six Sigma (DFSS), this extension heavily utilizes tools for customer requirements, risk analysis, design decision-making and inventive problem solving. In the software world, it would also heavily leverage re-use libraries that consist of robustly designed software. Maturity Six Sigma is rooted in fundamental statistical and business theory; consequently, the concepts and philosophy are very mature. Applications of Six Sigma methods in manufacturing, following on the heels of many quality improvement programs, are likewise mature. Applications of Six Sigma methods in software development and other upstream (from manufacturing) processes are emerging.
Costs and Limitations
Institutionalizing Six Sigma into the fabric of a corporate culture can require significant investment in training and infrastructure. There are typically three levels of expertise cited by companies: Green Belt, Black Belt Practitioner and Master Black Belt. Each level has increasingly greater mastery of the skill set. Roles and responsibilities also grow from each level to the next, with Black Belt Practitioners often in team/project leadership roles and Master Black Belts often in mentoring/teaching roles. The infrastructure needed to support the Six Sigma environment varies. Some companies organize their trained Green/Black Belts into a central support organization. Others deploy Green/Black Belts into organizations based on project needs and rely on communities of practice to maintain cohesion.
Alternatives
Over the years there have been many instances and evolutions of quality improvement programs. Scrutiny of these programs shows much similarity, but also clear distinctions between such programs and Six Sigma. Similarities include common tools and methods, concepts of continuous improvement, and even analogous steps in the improvement framework. Differences have been articulated as follows:

Six Sigma speaks the language of business. It specifically addresses the concept of making the business as profitable as possible.

In Six Sigma, quality is not pursued independently from business goals. Time and resources are not spent improving something that is not a lever for improving customer satisfaction.

Six Sigma focuses on achieving tangible results. Six Sigma does not include specific integration of ISO 9000 or Malcolm Baldrige National Quality Award criteria.

Six Sigma uses an infrastructure of highly trained employees from many sectors of the company (not just the Quality Department). These employees are typically viewed as internal change agents.

Six Sigma raises the expectation from 3-sigma performance to 6-sigma. Yet it does not promote Zero Defects, which many people dismiss as impossible.

3.5 RELIABILITY CONCEPTS - DEFINITIONS, RELIABILITY IN SERIES AND PARALLEL
Definition
In general, reliability (systemic definition) is the ability of a system to perform and maintain its functions in routine circumstances, as well as in hostile or unexpected circumstances. The IEEE defines it as ". . . the ability of a system or component to perform its required functions under stated conditions for a specified period of time." In natural language it may also denote persons who act efficiently in the proper moments/circumstances (who are infallible).
Importance of Reliability
What is Reliability?
Reliability is a broad term that focuses on the ability of a product to perform its intended function. Mathematically speaking, assuming that an item is performing its intended function at time equals zero, reliability can be defined as the probability that an item will continue to perform its intended function without failure for a specified period of time under stated conditions. Note that the product defined here could be an electronic or mechanical hardware product, a software product, a manufacturing process, or even a service.
Why is Reliability Important?
There are a number of reasons why product reliability is an important product attribute, including:


Reputation. A company's reputation is very closely related to the reliability of its products. The more reliable a product is, the more likely the company is to have a favorable reputation.
Customer Satisfaction. While a reliable product may not dramatically affect customer satisfaction in a positive manner, an unreliable product will negatively affect customer satisfaction severely. Thus high reliability is a mandatory requirement for customer satisfaction.
Warranty Costs. If a product fails to perform its function within the warranty period, the replacement and repair costs will negatively affect profits, as well as attract unwanted negative attention. Introducing reliability analyses is an important step in taking corrective action, ultimately leading to a product that is more reliable.
Repeat Business. A concentrated effort towards improved reliability shows existing customers that a manufacturer is serious about its products and committed to customer satisfaction. This type of attitude has a positive impact on future business.

Cost Analysis. Manufacturers may take reliability data and combine it with other cost information to illustrate the cost-effectiveness of their products. This life cycle cost analysis can prove that although the initial cost of their product might be higher, the overall lifetime cost is lower than a competitor's because their product requires fewer repairs or less maintenance.

Customer Requirements. Many customers in today's market demand that their suppliers have an effective reliability program. These customers have learned the benefits of reliability analysis from experience.

Competitive Advantage. Many companies will publish their predicted reliability numbers to help gain an advantage over competitors who either do not publish their numbers or have lower numbers.

Difference Between Quality and Reliability
Even though a product has a reliable design, when the product is manufactured and used in the field, its reliability may be unsatisfactory. The reason for this low reliability may be that the product was poorly manufactured. So, even though the product has a reliable design, it is effectively unreliable when fielded, which is actually the result of a substandard manufacturing process. As an example, cold solder joints could pass initial testing at the manufacturer, but fail in the field as the result of thermal cycling or vibration. This type of failure did not occur because of an improper design; rather, it is the result of an inferior manufacturing process. So while this product may have a reliable design, its quality is unacceptable because of the manufacturing process. Just as a chain is only as strong as its weakest link, a highly reliable product is only as good as the inherent reliability of the product and the quality of the manufacturing process.
Improving Product Reliability
Evaluating and finding ways to attain high product reliability are all aspects of reliability engineering. There are a number of types of reliability analyses typically performed as part of this discipline.
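Since the section title also refers to reliability in series and parallel, the standard combination rules for independent components, together with the mathematical definition of reliability given above, can be sketched as follows. The Python code below is illustrative only and is not part of the source text; the constant-failure-rate model, the failure rates and the mission time are assumed values.

# Illustrative sketch: reliability of series and parallel systems of independent
# components, using the constant-failure-rate model R(t) = exp(-lambda * t),
# for which MTBF = 1 / lambda.
import math

def reliability(failure_rate, t):
    """Probability that a component survives to time t (constant failure rate)."""
    return math.exp(-failure_rate * t)

def series(reliabilities):
    """A series system works only if every component works: R = R1 * R2 * ... * Rn."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel(reliabilities):
    """A parallel (redundant) system fails only if every component fails:
    R = 1 - (1 - R1)(1 - R2)...(1 - Rn)."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

# Assumed example: three components over a 1000-hour mission.
t = 1000.0
rates = [1e-4, 2e-4, 5e-5]                       # failures per hour (assumed)
rs = [reliability(lmbda, t) for lmbda in rates]
print("Component reliabilities:", [round(r, 4) for r in rs])
print("Series system  :", round(series(rs), 4))   # lower than the weakest component
print("Parallel system:", round(parallel(rs), 4)) # higher than the best component

The series result shows why a product is "only as good as its weakest link", while the parallel result shows how redundancy raises system reliability above that of any single component.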
Quality and Reliability Requirements
Quality is associated with the degree of conformance of the product to customer requirements, and thus, in a sense, with the degree of customer satisfaction. Implicit in Japanese quality products is an acceptable amount of reliability; that is, the product performs its intended function over its intended life under normal environmental and operating conditions. Reliability assessments are incorporated through simulation and qualification functions at the design and prototyping stages. With basic reliability designed in, quality control functions are then incorporated into the production line using in-line process controls and reduced human intervention through automation. Since the mid-1980s, Japanese firms have found that automation leads to improved quality in manufacturing. They have high reliability because they control their manufacturing lines. Reliability assurance tasks such as qualification are conducted (1) during the product design phase using analytical simulation methods and design-for-assembly software, and (2) during development using prototype or pilot hardware. Once again, it is the role of quality assurance to ensure reliability. Qualification includes activities to ensure that the nominal design and manufacturing specifications meet the reliability targets. In some cases, such as the Yamagata Fujitsu hard disk drive plant, qualification of the manufacturing processes and of the pilot lots are conducted together. Quality conformance for qualified products is accomplished through monitoring and control of critical parameters within the acceptable variations already established, perhaps during qualification. Quality conformance, therefore, helps to increase product yield and consequently to lower product cost.
Quality Assurance in Electronic Packaging
Japan has a long history of taking lower-yield technology and improving it. In the United States, companies change technology if yields are considered too low. The continuous improvement of quad flat packs (QFPs) in contrast to the introduction of ball-grid arrays (BGAs) is an example of this difference. Both countries are concerned with the quality and reliability limits of fine-pitch surface mount products. The Japanese continue to excel at surface mount technologies (SMT) as they push fine-pitch development to its limits. Many Japanese companies are now producing QFP with 0.5 mm pitch and expect to introduce 0.3 mm pitch packages within the next several years. As long as current QFP technology can be utilized in the latest product introductions, the cost of manufacturing is kept low and current SMT production lines can be utilized with minimal investment and with predictable quality and reliability results.
Japan's leaders in SMT have introduced equipment for placing very small and fine-pitch devices, for accurate screen printing, and for soldering. They have developed highly automated manufacturing and assembly lines with a high degree of in-line quality assurance. Thus, in terms of high-volume, rapid-turn-around, low-cost products, it is in their best interests to push the limits of surface mount devices. Furthermore, QFPs do not require new assembly methods and are inspectable, a factor critical to ensuring quality products. The United States is aggressively pursuing BGA technology; Hitachi, however, appears to be applying an on-demand approach. It has introduced BGA in its recent supercomputer without any quality problems and feels comfortable in moving to new technology when it becomes necessary. Since Hitachi's U.S. customers are demanding BGA for computer applications, Hitachi plans to provide BGA products. However, Dr. Otsuka of Meisei University, formerly with Hitachi, believes that for Japanese customers that are still cost-driven, QFP packages will reach 0.15 mm pin pitch to be competitive with BGA in high-pin-count, low-cost applications. Dr. Otsuka believes that Japan's ability to continue using QFP will allow Japan to remain the low-cost electronic packaging leader for the remainder of this decade. Like the United States, Japan is pursuing BGA, but unlike the United States, Japan is continuing to improve SMT with confidence that it will meet most cost and functional requirements through the year 2000. Matsushita and Fujitsu are also developing bumped bare-chip technologies to provide for continued miniaturization demands. Similar differences in technical concerns exist for wire bonding and known good die (KGD) technologies. The U.S. Semiconductor Industry Association's roadmap suggests a technology capability limit to wire bonding that is not shared by the Japanese component industry. The Japanese industry continues to develop new wire materials and attachment techniques that push the limits of wire bonding technologies. The Japanese consider concerns with KGD to be a U.S. problem caused by the lack of known good assembly (KGA); that is, U.S. firms lack the handling and assembly capability to assemble multiple-die packages in an automated, and thus high-quality, manner. With productivity and cost reduction being the primary manufacturing goals, increased factory automation and reduced testing are essential strategies. As TDK officials explained to the JTEC panelists during their visit, inspection is a critical cost issue:

It is TDK's QA goal to produce only quality products which need no inspection. At TDK, it is our goal to have no inspection at all, either human or machine. Our lowest labor cost in TDK is 32 yen per minute, or one yen every two seconds. If one multilayer semi-capacitor takes roughly one second to produce, then it costs about 0.6 yen in direct cost. If someone inspects it for two seconds, then we add 1.2 yen in inspection cost. That means we have to eliminate inspection to stay competitive. If we can reduce human and machine inspection, we can improve profits. Inspection does not add any value to the product. Quality control is implemented in the manufacturing lines to ensure that the processes stay within specified tolerances. Monitoring, verification, and assessment of assembly and process parameters are integral parts of the manufacturing line. Quality control ensures that all process variabilities beyond a specified tolerance are identified and controlled. The key focus of parameter variability appears to be on manufacturing process parameters and human errors and inadequacies, rather than on materials or parts. Incoming inspection is negligible because of the view that the quality of suppliers' products can be trusted, and perhaps more importantly because the inspection process is not considered cost-effective. The global move to ISO 9000 certification helps guarantee supplier quality to further reduce inspection costs. Selection of specific quality control methods is dictated by product cost. Hidden costs associated with scheduling, handling, testing, and production yields become critical with increasing global competition. As more components are sourced from outside of Japan, these cost factors become increasingly crucial in maintaining competitive costs. Automation and its impact on quality. The Japanese have determined that manual labor leads to poor-quality output and that automation leads to higher-quality output. Sony's automation activities have reduced defect rates from 2000 to 20 parts per million. Quality has, therefore, become a key driver for factory automation in Japan. In addition, factory automation also adds the benefits of improving productivity and improving flexibility in scheduling the production or changeover of product types. Thus, whenever automation is cost-effective, it is used to replace manual assembly, handling, testing, and human inspection activities. This approach is applied to each new product and corresponding production line that is installed. For example, the old printed wiring board assembly line at Fujitsu's Yamagata plant used extensive manual inspection, while the new line is in a clean room and is totally automated, including final inspection and testing. All of Nippondenso's plants have now implemented factory-wide CIM systems. The system at Kota uses factory-level data to meet quality standards and delivery
times. Boards are inserted into metal enclosures, sealed and marked, then burned-in and tested before shipping. Out of several hundred thousand units produced each month, only a couple of modules failed testing each month, according to JTEC's hosts. Inspection and screening. As noted above, incoming inspection was negligible at most of the companies that the JTEC panel visited, because of the view that the quality of suppliers' products could be trusted. Since the 1950s, the Japanese government has set quality requirements for any company that exports products from Japan. Suppliers have progressed in status from being fully inspected by their customers to being fully accepted. Qualified suppliers are now the standard for Japan, and most problems come from non-Japanese suppliers. Akio Morita of Sony lamented that finding quality U.S. suppliers was a major challenge for the company. Japanese suppliers were part of the virtual company, with strong customer ties and a commitment to help customers succeed. Components were not being screened at any of the printed wiring board (PWB) assembly, hard disk drive, or product assembly plants visited by the JTEC panel. Defects are seldom found in well-controlled and highly automated assembly lines. Where specific problems are found, tailored screens are implemented to address specific failure mechanisms at the board or product assembly level. For example, Fujitsu noted that today's components do not require burn-in, although at the board level it conducts some burn-in to detect solder bridges that occur during board assembly. But with the increasing cost of Japanese labor, the greatest pressure is to avoid unnecessary testing activities. Suppliers simply have to meet quality conformance standards to keep customers satisfied. Lack of conformance to requirements would be considered noncompetitive. With reliable components, assemblers must concentrate their efforts on the assembly process. Within a company's own production lines, automated inspection is central to factory automation activities. Older lines, like the 3 1/2-inch disk drive line the panel saw at Fujitsu, have extensive 100% manual inspection of PWBs. Fujitsu's new line has fully automated inspection and testing. At Ibiden, automated inspection is part of the automated manufacturing process as a technique for alignment and assembly as well as for tolerance assessment and defect detection. Microscopic mechanical dimensioning is conducted on a sample basis. The newer the line, the greater the automation of inspection and testing.
Reliability in Electronic Packaging
In terms of reliability, the Japanese proactively develop good design, using simulation and prototype qualification, that is based on advanced materials and packaging technologies. Instead of using military standards, most companies use internal commercial best practices. Most reliability problems are treated as materials or process problems.

Reliability prediction methods using models such as Mil-Hdbk-217 are not used. Instead, Japanese firms focus on the physics of failure by finding alternative materials or improved processes to eliminate the source of the reliability problem. The factories visited by the JTEC panel are well equipped to address these types of problems. Assessment methods. Japanese firms identify the areas that need improvement for competitive reasons and target those areas for improvement. They don't try to fix everything; they are very specific. They continuously design products for reduced size and cost and use new technologies only when performance problems arise. As a result, most known technologies have predictable reliability characteristics. Infrastructure. The incorporation of suppliers and customers early in the product development cycle has given Japanese companies an advantage in rapid development of components and in effective design of products. This is the Japanese approach to concurrent engineering and is a standard approach used by the companies the JTEC panel visited. The utilization of software tools like design for assembly allows for rapid design and is an integral part of the design team's activities. At the time of the panel's visit, design for disassembly was becoming a requirement for markets such as Germany. Suppliers are expected to make required investments to provide the needed components for new product designs. Advanced factory automation is included in the design of new factories. Training. The Japanese view of training is best exemplified by Nippondenso. The company runs its own two-year college to train production workers. Managers tend to hold four-year degrees from university engineering programs. Practical training in areas such as equipment design takes place almost entirely within the company. During the first six years of employment, engineers each receive 100 hours per year of formal technical training. In the sixth year, about 10% of the engineers are selected for extended education and receive 200 hours per year of technical training. After ten years about 1% are selected to become future executives and receive additional education. By this time, employees have earned the equivalent of a Ph.D. degree within the company. Management and business training is also provided for technical managers. In nonengineering fields, the fraction that become managers is perhaps 10%. Ibiden uses one-minute and safety training sessions in every manufacturing sector. One-minute discussions are held by section leaders and workers using visual aids that are available in each section. The subjects are specific to facets of the job, like the correct way to use a tool or details about a specific step in the process. The daily events are intended to expose workers to additional knowledge and continuous training.
As a consequence, workers assure that production criteria are met. Ibiden also employs a quality patrol that finds and displays examples of poor quality on large bulletin boards throughout the plant. Exhibits the panel saw included anything from pictures of components or board lots sitting around in corners, to damaged walls and floors, to ziplock bags full of dust and dirt. The factory. Japanese factories pay attention to running equipment well, to continuous improvement, to cost reduction, and to waste elimination. Total preventive maintenance (TPM) is a methodology to ensure that equipment operates at its most efficient level and that facilities are kept clean so as not to contribute to reliability problems. In fact, the Japan Management Association gives annual TPM awards with prestige similar to the Deming Prize, and receipt of those awards is considered a required step for companies that wish to attain the Japan Quality Prize. No structured quality or reliability techniques are used - just detailed studies of operations, and automated, smooth-running, efficient production. Safety concerns appeared to the JTEC panel to be secondary to efficiency considerations. While floor markings and signs direct workers to stay away from equipment, few barriers keep individuals away from equipment. In the newest production lines, sensors are used to warn individuals who penetrate into machine space, and the sensors even stop machines if individuals approach too close. Factories provide workers with masks and hats rather than safety protection like eye wear. In most Japanese factories, street shoes are not allowed. Most electronic firms the panel visited were in the process of meeting new environmental guidelines. Fujitsu removed CFCs from its cleaning processes in October 1993. CFCs were replaced by a deionized-water cleaning process. In the old assembly process, the amount of handling required for inspection reduced the impact of cleaning. The new line had no such problems. To provide high reliability, Japanese firms create new products using fewer components, more automation, and flexible manufacturing technologies. For example, TDK is striving for 24-hour, nonassisted, flexible circuit card manufacturing using stateof-the-art high-density surface mounting techniques and integrated multifunction composite chips. It has developed true microcircuit miniaturization technologies that integrate 33 active and passive components on one chip. This will reduce the number of components required by customers during board assembly, thereby reducing potential assembly defects. In addition, the application of materials and process know-how provides a fundamental competitive advantage in manufacturing products with improved quality


characteristics. Nitto Denko, for example, has developed low-dust pellets for use in molding compounds. Ibiden has developed an epoxy hardener to enhance peel strength, thus improving reliability of its plating technology. The new process reduces cracking in the high-stress areas of small vias. Ibiden also uses epoxy dielectric for cost reduction and enhanced thermal conductivity of its MCM-D substrate. At the time of the JTEC visit, the company was also attempting to reduce solder resist height in an effort to improve the quality and ease of additive board assembly. It believes that a product with a resist 20 mils higher than the copper trace can eliminate solder bridging. Sony developed adhesive bonding technologies in order to improve the reliability and automation of its optical pickup head assembly. It set the parameters for surface preparation, bonding agents, and process controls. Sony used light ray cleaning to improve surface wetability and selected nine different bonding agents for joining various components in the pickup head. It now produces some 60% of the world's optical pickup assemblies. The continuous move to miniaturization will keep the pressure on Japanese firms to further develop both their materials and process capabilities.
3.6 TOTAL PRODUCTIVE MAINTENANCE (TPM), RELEVANCE TO TQM
What is Total Productive Maintenance (TPM)?
It can be considered as the medical science of machines. Total Productive Maintenance (TPM) is a maintenance program which involves a newly defined concept for maintaining plants and equipment. The goal of the TPM program is to markedly increase production while, at the same time, increasing employee morale and job satisfaction. TPM brings maintenance into focus as a necessary and vitally important part of the business. It is no longer regarded as a non-profit activity. Down time for maintenance is scheduled as a part of the manufacturing day and, in some cases, as an integral part of the manufacturing process. The goal is to hold emergency and unscheduled maintenance to a minimum.
Why TPM?
TPM was introduced to achieve the following objectives. The important ones are listed below.


1. Avoid wastage in a quickly changing economic environment.
2. Produce goods without reducing product quality.
3. Reduce cost.
4. Produce a low batch quantity at the earliest possible time.
5. Ensure that goods sent to the customers are non-defective.

Similarities and differences between TQM and TPM
The TPM program closely resembles the popular Total Quality Management (TQM) program. Many of the tools, such as employee empowerment, benchmarking and documentation, used in TQM are also used to implement and optimize TPM. The similarities between the two are:
1. Total commitment to the program by upper level management is required in both programmes.
2. Employees must be empowered to initiate corrective action, and
3. A long range outlook must be accepted, as TPM may take a year or more to implement and is an on-going process. Changes in employee mind-set toward their job responsibilities must take place as well.
The differences between TQM and TPM are summarized below.
TABLE 3.10
Object - TQM: Quality (output and effects). TPM: Equipment (input and cause).
Means of attaining goal - TQM: Systematize the management; it is software oriented. TPM: Employees' participation; it is hardware oriented.
Target - TQM: Quality for PPM. TPM: Elimination of losses and wastes.

Types of maintenance
1. Breakdown maintenance
Here, people wait until equipment fails and then repair it. This approach can be acceptable when the equipment failure does not significantly affect operation or production and does not generate any significant loss other than the repair cost.
2. Preventive maintenance (1951)
This is daily maintenance (cleaning, inspection, oiling and re-tightening) designed to retain the healthy condition of equipment and prevent failure through the prevention of deterioration, periodic inspection or equipment condition diagnosis to measure deterioration. It is further divided into periodic maintenance and predictive maintenance. Just as human life is extended by preventive medicine, the equipment service life can be prolonged by doing preventive maintenance.
2a. Periodic maintenance (Time based maintenance - TBM)
Time based maintenance consists of periodically inspecting, servicing and cleaning equipment and replacing parts to prevent sudden failure and process problems.
2b. Predictive maintenance
This is a method in which the service life of an important part is predicted based on inspection or diagnosis, in order to use the part to the limit of its service life. Compared to periodic maintenance, predictive maintenance is condition based maintenance. It manages trend values by measuring and analyzing data about deterioration, and employs a surveillance system designed to monitor conditions through an on-line system.
3. Corrective maintenance (1957)
It improves equipment and its components so that preventive maintenance can be carried out reliably. Equipment with design weaknesses must be redesigned to improve reliability or maintainability.
4. Maintenance prevention (1960)
It refers to designing new equipment. Weaknesses of current machines are sufficiently studied (on-site information leading to failure prevention, easier maintenance, prevention of defects, safety and ease of manufacturing) and the findings are incorporated before commissioning new equipment.
TPM - History
TPM is an innovative Japanese concept. The origin of TPM can be traced back to 1951, when preventive maintenance was introduced in Japan. However, the concept of preventive maintenance was taken from the USA. Nippondenso was the first company to introduce plant-wide preventive maintenance in 1960. Under preventive maintenance, operators produced goods using machines and the maintenance group was dedicated to the work of maintaining those machines; however, with the automation of Nippondenso, maintenance became a problem, as more maintenance personnel were
required. So the management decided that the routine maintenance of equipment would be carried out by the operators. (This is autonomous maintenance, one of the features of TPM.) The maintenance group took up only essential maintenance work. Thus, Nippondenso, which already followed preventive maintenance, also added autonomous maintenance done by production operators. The maintenance crew concentrated on equipment modifications for improving reliability. The modifications were made or incorporated in new equipment. This led to maintenance prevention. Thus preventive maintenance, along with maintenance prevention and maintainability improvement, gave birth to productive maintenance. The aim of productive maintenance was to maximize plant and equipment effectiveness to achieve optimum life cycle cost of production equipment. By then, Nippondenso had formed quality circles involving the employees' participation. Thus, all employees took part in implementing productive maintenance. Based on these developments, Nippondenso was awarded the distinguished plant prize for developing and implementing TPM by the Japanese Institute of Plant Engineers (JIPE). Thus, Nippondenso of the Toyota group became the first company to obtain the TPM certification.
TPM Targets:
P - Obtain a minimum of 80% OPE (Overall Plant Efficiency). Obtain a minimum of 90% OEE (Overall Equipment Effectiveness). Run the machines even during lunch (lunch is for operators and not for machines!).
Q - Operate in a manner such that there are no customer complaints.
C - Reduce the manufacturing cost by 30%.
D - Achieve 100% success in delivering the goods as required by the customer.
S - Maintain an accident-free environment.
M - Increase the suggestions by 3 times. Develop multi-skilled and flexible workers.

TABLE 3.11 - Motives of TPM
1. Adoption of a life cycle approach for improving the overall performance of production equipment.
2. Improving productivity by highly motivated workers, which is achieved by job enlargement.
3. The use of voluntary small group activities for identifying the causes of failure and possible plant and equipment modifications.
Uniqueness of TPM
The major difference between TPM and other concepts is that the operators are also involved in the maintenance process. The concept of "I (production operators) operate, you (maintenance department) fix" is not followed.
TPM Objectives
1. Achieve zero defects, zero breakdowns and zero accidents in all functional areas of the organization.
2. Involve people at all levels of the organization.
3. Form different teams to reduce defects and carry out self-maintenance.
Direct benefits of TPM
1. Increase productivity and OPE (Overall Plant Efficiency) by 1.5 or 2 times.
2. Rectify customer complaints.
3. Reduce the manufacturing cost by 30%.
4. Satisfy the customers' needs by 100% (delivering the right quantity at the right time, in the required quality).
5. Reduce accidents.
6. Follow pollution control measures.
Indirect benefits of TPM
1. Higher confidence level among the employees.
2. Keep the work place clean, neat and attractive.
3. Favourable change in the attitude of the operators.


4. Achieve goals by working as a team.
5. Horizontal deployment of a new concept in all areas of the organization.
6. Share knowledge and experience.
7. The workers get a feeling of owning the machine.
OEE (Overall Equipment Efficiency):
OEE = A x PE x Q
A - Availability of the machine: the proportion of time the machine is actually available out of the time it should be available. A = (MTBF - MTTR) / MTBF, where MTBF (Mean Time Between Failures) = (Total Running Time) / (Number of Failures) and MTTR is the Mean Time To Repair.
PE - Performance Efficiency, given by PE = RE x SE. Rate efficiency (RE): the actual average cycle time is slower than the design cycle time because of jams etc., so output is reduced. Speed efficiency (SE): the actual cycle time is slower than the design cycle time because the machine is running at reduced speed, so output is reduced.
Q - Quality rate: the percentage of good parts out of the total produced, sometimes called yield. (A numerical sketch of this calculation is given after Step 2 below.)
Steps in introduction of TPM in an organization:
STEP A - PREPARATORY STAGE
STEP 1 - Announcement by management to all about TPM introduction in the organization: Proper understanding, commitment and active involvement of the top management are needed for this step. Senior management should hold awareness programmes, after which the announcement is made to all. Publish it in the house magazine and put it on the notice board. Send a letter to all concerned individuals if required.
STEP 2 - Initial education and propaganda for TPM: Training is to be done based on the need. Some need intensive training and some just an awareness. Take people who matter to places where TPM has already been successfully implemented.
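The following Python sketch works through the OEE decomposition defined above. It is illustrative only and not from the text; all of the operating figures (running time, number of failures, repair time, cycle times, part counts and the rate-efficiency factor) are assumed values.

# Illustrative OEE calculation following OEE = A x PE x Q, with
# A = (MTBF - MTTR) / MTBF and PE = RE x SE, as defined in the text above.
def availability(total_running_time, num_failures, mttr):
    mtbf = total_running_time / num_failures      # Mean Time Between Failures
    return (mtbf - mttr) / mtbf                   # availability formula used in this text

def performance_efficiency(design_cycle_time, actual_cycle_time, rate_efficiency):
    speed_efficiency = design_cycle_time / actual_cycle_time
    return rate_efficiency * speed_efficiency     # PE = RE x SE

def quality_rate(good_parts, total_parts):
    return good_parts / total_parts

a = availability(total_running_time=400.0, num_failures=5, mttr=2.0)        # hours (assumed)
pe = performance_efficiency(design_cycle_time=0.5, actual_cycle_time=0.6,
                            rate_efficiency=0.95)                           # minutes/part (assumed)
q = quality_rate(good_parts=9_800, total_parts=10_000)
oee = a * pe * q
print(f"Availability           : {a:.3f}")
print(f"Performance efficiency : {pe:.3f}")
print(f"Quality rate           : {q:.3f}")
print(f"OEE                    : {oee:.3f}")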


STEP 3 - Setting up TPM and departmental committees: TPM includes improvement, autonomous maintenance, quality maintenance etc. as a part of it. When committees are set up, they should take care of all these needs.
STEP 4 - Establishing the TPM working system and target: Now each area is benchmarked and a target is fixed for achievement.
STEP 5 - A master plan for institutionalizing: The next step is implementation leading to institutionalization, wherein TPM becomes an organizational culture. Achieving the PM award is proof of reaching a satisfactory level.
STEP B - INTRODUCTION STAGE
This is a ceremony and we should invite everyone, including suppliers (they should know that we want quality supplies from them), related companies and affiliated companies who can be our customers, sister concerns etc. Some may learn from us, some can help us, and customers will get the message from us that we care for quality output.
STAGE C - IMPLEMENTATION
In this stage eight activities, called the eight pillars of TPM development, are carried out. Of these, four activities are for establishing the system for production efficiency, one is for the initial control system of new products and equipment, one is for improving the efficiency of administration and the remaining are for control of safety, sanitation and the working environment.
STAGE D - INSTITUTIONALIZING STAGE
With all these activities, one would have reached the maturity stage. Now is the time for applying for the PM award. Also think of a challenging level to which you can take this movement.


DBA 1656

QUALITY MANAGEMENT

NOTES

Organization Structure for TPM Implementation :

FIGURE 3.10 Pillars of TPM

FIGURE 3.11

PILLAR 1 - 5S : TPM starts with 5S. Problems cannot be clearly seen when the work place is unorganized. Cleaning and organizing the workplace helps the team to uncover problems. Making problems visible is the first step of improvement.
Anna University Chennai 130

DBA 1656

QUALITY MANAGEMENT

TABLE 3.12
Seiri - English translation: Organization; equivalent 'S' term: Sort.
Seiton - English translation: Tidiness; equivalent 'S' term: Systematize.
Seiso - English translation: Cleaning; equivalent 'S' term: Sweep.
Seiketsu - English translation: Standardization; equivalent 'S' term: Standardize.
Shitsuke - English translation: Discipline; equivalent 'S' term: Self-Discipline.
SEIRI - Sort out : This means sorting and organizing the items as critical, important, frequently used items, useless, or items that are not needed as of now. Unwanted items can be salvaged. Critical items should be kept nearby for use, and items that are not to be used in the near future should be stored elsewhere. For this step, the worth of an item should be decided based on utility and not cost. As a result of this step, the search time is reduced.
TABLE 3.13
Priority: Low - Frequency of use: less than once per year, once per year. How to use: throw away, or store away from the workplace.
Priority: Average - Frequency of use: at least once in 2-6 months, once per month, once per week. How to use: store together but off-line.
Priority: High - Frequency of use: once per day. How to use: locate at the workplace.

SEITON - Organise : The concept here is that each item has a place, and only one place. Items should be placed back in the same place after usage. To identify items easily, name plates and colored tags have to be used. Vertical racks can be used for this purpose, and heavy items occupy the bottom positions in the racks.
SEISO - Shine the workplace : This involves cleaning the work place free of burrs, grease, oil, waste, scrap etc. There should be no loosely hanging wires or oil leakage from machines.
SEIKETSU - Standardization : Employees have to discuss together and decide on standards for keeping the work place / machines / pathways neat and clean. These standards are implemented for the whole organization and are tested / inspected randomly.
SHITSUKE - Self discipline : Consider 5S as a way of life and bring about self-discipline among the employees of the organization. This includes wearing badges, following work procedures, punctuality, dedication to the organization etc.
PILLAR 2 - JISHU HOZEN (Autonomous maintenance) : This pillar is geared towards developing operators to be able to take care of small maintenance tasks, thus freeing up the skilled maintenance people to spend time on more value-added activity and technical repairs. The operators are responsible for the upkeep of their equipment to prevent it from deteriorating.
Policy :
1. Uninterrupted operation of equipment.
2. Flexible operators able to operate and maintain other equipment.
3. Eliminating defects at source through active employee participation.
4. Stepwise implementation of JH activities.
JISHU HOZEN Targets:
1. Prevent the occurrence of 1A / 1B because of JH.
2. Reduce oil consumption by 50%.
3. Reduce process time by 50%.
4. Increase use of JH by 50%.
Steps in JISHU HOZEN :
1. Preparation of employees.
2. Initial cleanup of machines.
3. Take counter measures.
4. Fix tentative JH standards.
5. General inspection.
6. Autonomous inspection.
7. Standardization and
8. Autonomous management.


Each of the above-mentioned steps is discussed in detail below.
1. Train the employees : Educate the employees about TPM, its advantages, JH advantages and the steps in JH. Educate the employees about abnormalities in equipment.
2. Initial cleanup of machines :
o The supervisor and technician should discuss and set a date for implementing step 1.
o Arrange all items needed for cleaning.
o On the arranged date, employees should clean the equipment completely with the help of the maintenance department.
o Dust, stains, oils and grease have to be removed.
o The following things have to be taken care of while cleaning: oil leakage, loose wires, unfastened nuts and bolts and worn-out parts.
o After clean-up, problems are categorized and suitably tagged. White tags are placed where problems can be solved by operators. Pink tags are placed where the aid of the maintenance department is needed.
o The contents of the tags are transferred to a register.
o Make a note of areas which were inaccessible.
o Finally close the open parts of the machine and run the machine.


3. Counter Measures :

o Inaccessible regions should be made easy to reach. For example, if many screws have to be removed to open a flywheel door, a hinged door can be used.
o Instead of opening a door for inspecting the machine, acrylic sheets can be used.
o To prevent wear-out of machine parts, necessary action must be taken.
o Machine parts should be modified to prevent accumulation of dirt and dust.


4. Tentative Standard :

o A JH schedule has to be made and followed strictly.



o The schedule should be made regarding cleaning, inspection and lubrication, and it should also include details like when, what and how.

5. General Inspection :

The employees are trained in disciplines like pneumatics, electrical, hydraulics, lubricants and coolants, drives, bolts, nuts and safety. This is necessary to improve the technical skills of employees and to use inspection manuals correctly. After acquiring this new knowledge the employees should share this with others. By acquiring this new technical knowledge, the operators are now well aware of machine parts.

6. Autonomous Inspection :

o New methods of cleaning and lubricating are used.
o Each employee prepares his own autonomous chart / schedule in consultation with the supervisor.
o Parts which have never given any problem, or parts which don't need any inspection, are removed from the list permanently based on experience, including good quality machine parts. This avoids defects due to poor JH.
o Inspection that is made in preventive maintenance is included in JH.
o The frequency of cleanup and inspection is reduced based on experience.


7. Standardization :

o Up to the previous step, only the machinery / equipment was the focus of attention. In this step, the surroundings of the machinery are organized.
o Necessary items should be organized such that there is no searching and search time is reduced.
o The work environment is modified such that there is no difficulty in getting any item.
o Everybody should follow the work instructions strictly.
o Necessary spares for equipment are planned and procured.

8. Autonomous Management :

o OEE, OPE and other TPM targets must be achieved by continuous improvement through Kaizen. The PDCA (Plan, Do, Check, Act) cycle must be implemented for Kaizen.

PILLAR 3 - KAIZEN : "Kai" means change, and "Zen" means good (for the better). Basically, kaizen is for small improvements, carried out on a continual basis and involving all people in the organization. Kaizen is the opposite of big, spectacular innovations. Kaizen requires no or little investment. The principle behind it is that a very large number of small improvements are more effective in an organizational environment than a few improvements of large value. This pillar is aimed at reducing losses in the workplace that affect our efficiencies. By using a detailed and thorough procedure, we eliminate losses in a systematic method using various Kaizen tools. These activities are not limited to production areas and can be implemented in administrative areas as well.
Kaizen Policy :
1. Practice concepts of zero losses in every sphere of activity.
2. Relentless pursuit to achieve cost reduction targets in all resources.
3. Relentless pursuit to improve overall plant equipment effectiveness.
4. Extensive use of PM analysis as a tool for eliminating losses.
5. Focus on easy handling of operators.

Kaizen Target : Achieve and sustain zero losses with respect to minor stops, measurement and adjustments, defects and unavoidable downtimes. It also aims to achieve 30% manufacturing cost reduction.
Tools used in Kaizen :
1. PM analysis
2. Why-Why analysis
3. Summary of losses
4. Kaizen register
5. Kaizen summary sheet

The objective of TPM is maximization of equipment effectiveness. TPM aims at maximization of machine utilization and not merely maximization of machine availability. As one of the pillars of TPM activities, Kaizen pursues efficient equipment, operator, material and energy utilization - that is, the extremes of productivity - and aims at achieving substantial effects. Kaizen activities try to thoroughly eliminate 16 major losses.

16 major losses in an organization:


TABLE 3.14
Losses that impede equipment efficiency:
1. Failure losses - breakdown loss
2. Setup / adjustment losses
3. Cutting blade loss
4. Start up loss
5. Minor stoppage / idling loss
6. Speed loss - operating at low speeds
7. Defect / rework loss
8. Scheduled downtime loss
Losses that impede human work efficiency:
9. Management loss
10. Operating motion loss
11. Line organization loss
12. Logistic loss
13. Measurement and adjustment loss
Losses that impede effective use of production resources:
14. Energy loss
15. Die, jig and tool breakage loss
16. Yield loss

Classification of losses :
TABLE 3.15
Causation - Sporadic loss: causes for the failure can be easily traced; the cause-effect relationship is simple to trace. Chronic loss: caused by hidden defects in machines, equipment and methods; a single cause is rare - a combination of causes tends to be the rule.
Remedy - Sporadic loss: easy to establish a remedial measure. Chronic loss: cannot be easily identified and solved, even if various counter measures are applied.
Impact / Loss - Sporadic loss: a single loss can be costly. Chronic loss: the frequency of loss is more.
Frequency of occurrence - Sporadic loss: low and occasional. Chronic loss: high.
Corrective action - Sporadic loss: usually the line personnel in production can attend to the problem. Chronic loss: specialists in process engineering, quality assurance and maintenance are required.

PILLAR 4 - PLANNED MAINTENANCE : It is aimed at having trouble-free machines and equipment producing defect-free products for total customer satisfaction. It breaks maintenance down into four families or groups, which were defined earlier:
1. Preventive Maintenance
2. Breakdown Maintenance
3. Corrective Maintenance
4. Maintenance Prevention

With Planned Maintenance we evolve our efforts from a reactive to a proactive method and use trained maintenance staff to help train the operators to better maintain their equipment.
Policy :
1. Achieve and sustain availability of machines.
2. Optimum maintenance cost.
3. Reduced spares inventory.
4. Improved reliability and maintainability of machines.

Target :
1. Zero equipment failure and breakdown.
2. Improve reliability and maintainability by 50%.

3. Reduce maintenance cost by 20%.
4. Ensure availability of spares all the time.
Six steps in planned maintenance :
1. Equipment evaluation and recording of present status.
2. Restore deterioration and improve weaknesses.
3. Build up an information management system.
4. Prepare a time based information system, select equipment, parts and members and map out the plan.
5. Prepare a predictive maintenance system by introducing equipment diagnostic techniques, and
6. Evaluation of planned maintenance.
PILLAR 5 - QUALITY MAINTENANCE : It is aimed at customer delight through the highest quality, achieved by defect-free manufacturing. The focus is on eliminating nonconformances in a systematic manner, much like Focused Improvement. We gain an understanding of what parts of the equipment affect product quality and begin to eliminate current quality concerns, then move to potential quality concerns. The transition is from reactive to proactive (Quality Control to Quality Assurance). QM activities set equipment conditions that preclude quality defects, based on the basic concept of maintaining perfect equipment to maintain perfect quality of products. The conditions are checked and measured in a time series to verify that the measured values are within standard values, to prevent defects. The transition of measured values is watched to predict the possibility of defects occurring and to take counter measures beforehand.
Policy :
1. Defect-free conditions and control of equipment.
2. QM activities to support quality assurance.
3. Focus on prevention of defects at source.
4. Focus on poka-yoke (fool-proof system).
5. In-line detection and segregation of defects.
6. Effective implementation of operator quality assurance.

Target :
1. Achieve and sustain customer complaints at zero.
2. Reduce in-process defects by 50%.
3. Reduce cost of quality by 50%.

Data requirements : Quality defects are classified as customer-end defects and in-house defects. For customer-end data, we have to get data on:
1. Customer-end line rejection
2. Field complaints.
In-house data include data related to products and data related to processes.
Data related to product :
1. Product-wise defects
2. Severity of the defect and its contribution - major/minor
3. Location of the defect with reference to the layout
4. Magnitude and frequency of its occurrence at each stage of measurement
5. Occurrence trend at the beginning and the end of each production/process change (like pattern change, ladle/furnace lining etc.)
6. Occurrence trend with respect to restoration of breakdowns/modifications/periodical replacement of quality components.
Data related to processes:
1. The operating condition for each individual sub-process related to men, method, material and machine.
2. The standard settings/conditions of the sub-process
3. The actual record of the settings/conditions during the defect occurrence.
PILLAR 6 - TRAINING : It is aimed at having multi-skilled, revitalized employees whose morale is high and who are eager to come to work and perform all the required functions effectively and independently. Education is given to operators to upgrade their skill. It is not sufficient to know only "know-how"; they should also learn "know-why". Through experience they gain know-how to overcome a problem - what is to be done - but they do this without knowing the root cause of the problem and why they are doing so. Hence, it becomes necessary to train them on "know-why". The employees should be trained to achieve the four phases of skill. The goal is to create a factory full of experts. The different phases of skills are

Phase 1 : Do not know.
Phase 2 : Know the theory but cannot do.
Phase 3 : Can do but cannot teach.
Phase 4 : Can do and also teach.

Policy :
1. Focus on improvement of knowledge, skills and techniques.
2. Creating a training environment for self-learning based on felt needs.
3. Training curriculum / tools / assessment etc. conducive to employee revitalization.
4. Training to remove employee fatigue and make work enjoyable.

Target :
1. Achieve and sustain zero downtime due to want of men on critical machines.
2. Achieve and sustain zero losses due to lack of knowledge / skills / techniques.
3. Aim for 100 % participation in the suggestion scheme.

Steps in educating and training activities :
1. Setting policies and priorities and checking the present status of education and training.
2. Establish a training system for operation and maintenance skill upgradation.
3. Training the employees for upgrading the operation and maintenance skills.
4. Preparation of a training calendar.
5. Kick-off of the system for training.
6. Evaluation of activities and study of the future approach.

PILLAR 7 - OFFICE TPM :

Office TPM should be started after activating the four other pillars of TPM (JH, KK, QM, PM). Office TPM must be followed to improve productivity and efficiency in the administrative functions and to identify and eliminate losses. This includes analyzing processes and procedures with a view to increased office automation. Office TPM addresses twelve major losses. They are:


1. Processing loss
2. Cost loss, in areas such as procurement, accounts, marketing and sales, leading to high inventories
3. Communication loss
4. Idle loss
5. Set-up loss
6. Accuracy loss
7. Office equipment breakdown
8. Communication channel breakdown (telephone and fax lines)
9. Time spent on retrieval of information
10. Non-availability of correct online stock status
11. Customer complaints due to logistics
12. Expenses on emergency dispatches/purchases.

How to start office TPM ?

A senior person from one of the support functions, e.g. Head of Finance, MIS or Purchase, should head the sub-committee. Members representing all support functions and people from Production and Quality should be included in the sub-committee. The TPM co-ordinator plans and guides the sub-committee in:
1. Providing awareness about office TPM to all support departments
2. Helping them to identify P, Q, C, D, S, M in each function in relation to plant performance
3. Identifying the scope for improvement in each function
4. Collecting relevant data
5. Helping them to solve problems in their circles
6. Making up an activity board where progress is monitored on both sides - results and actions - along with Kaizens
7. Fanning out to cover all employees and circles in all functions.

Kobetsu Kaizen topics for Office TPM :


Inventory reduction
Lead time reduction of critical processes
Motion and space losses
Retrieval time reduction
Equalizing the work load
Improving office efficiency by eliminating the time lost on retrieval of information and by achieving zero breakdowns of office equipment like telephone and fax lines.

Office TPM and its benefits :
1. Involvement of all people in support functions in focusing on better plant performance
2. Better utilized work area
3. Reduced repetitive work
4. Reduced inventory levels in all parts of the supply chain
5. Reduced administrative costs
6. Reduced inventory carrying cost
7. Reduction in the number of files
8. Reduction of overhead costs (including the cost of non-production/non-capital equipment)
9. Improved productivity of people in support functions
10. Reduction in breakdowns of office equipment
11. Reduction of customer complaints due to logistics
12. Reduction in expenses due to emergency dispatches/purchases
13. Reduced manpower
14. Clean and pleasant work environment.

P Q C D S M in Office TPM :

P - Production output lost for want of material, loss of manpower productivity, production output lost for want of tools.
Q - Mistakes in preparation of cheques, bills, invoices and payroll; customer returns/warranty attributable to BOPs; rejection/rework in BOPs/job work; office area rework.
C - Buying cost per unit produced, cost of logistics (inbound/outbound), cost of carrying inventory, cost of communication, demurrage costs.
D - Logistics losses (delay in loading/unloading)
Delay in delivery due to any of the support functions
Delay in payments to suppliers
Delay in information


S - Safety in material handling/stores/logistics, safety of soft and hard data.
M - Number of kaizens in office areas.

How office TPM supports plant TPM :

Office TPM supports the plant, initially in doing Jishu Hozen of the machines (after getting training in Jishu Hozen), because:
1. In the initial stages there are more machines and less manpower, so the help of the commercial departments can be taken.
2. Office TPM can eliminate on-line losses due to non-availability of material and logistics.

Extension of office TPM to suppliers and distributors :

This is essential, but only after we have done as much as possible internally. With suppliers it will lead to on-time delivery, improved incoming quality and cost reduction. With distributors, it will lead to accurate demand generation, improved secondary distribution and reduction in damages during storage and handling. In any case, we will have to teach them based on our experience and practice and highlight gaps in the system which affect both sides. Some of the larger companies have started to support clusters of suppliers in this way.

PILLAR 8 - SAFETY, HEALTH AND ENVIRONMENT :

Target :
1. Zero accidents
2. Zero health damage
3. Zero fires.

In this area the focus is on creating a safe workplace and a surrounding area that is not damaged by our processes or procedures. This pillar will play an active role in each of the other pillars on a regular basis. A committee is constituted for this pillar which comprises representatives of officers as well as workers. The committee is headed by the Senior Vice-President (Technical). Utmost importance is given to safety in the plant. The Manager (Safety) looks after functions related to safety. To create awareness among employees, various competitions
like safety slogans, quizzes, dramas and posters related to safety can be organized at regular intervals.

Conclusion:

Today, with competition in industry at an all-time high, TPM may be the only thing that stands between success and total failure for some companies. It has been proven to be a program that works. It can be adapted to work not only in industrial plants, but also in construction, building maintenance, transportation and a variety of other situations. Employees must be educated and convinced that TPM is not just another program of the month, that management is totally committed to the program, and that an extended time frame is necessary for full implementation. If everyone involved in a TPM program does his or her part, an unusually high rate of return compared to the resources invested may be expected.

TPM achievements

Many TPM sites have made excellent progress in a number of areas. These include:

better understanding of the performance of their equipment, i.e. what they are achieving in OEE (Overall Equipment Effectiveness) terms and what the reasons for non-achievement are (see the OEE sketch after this list)
better understanding of equipment criticality, where it is worth deploying improvement effort, and the potential benefits
improved teamwork and a less adversarial approach between production and maintenance
improved procedures for changeovers and set-ups, for carrying out frequent maintenance tasks, and better training of operators and maintainers, all of which lead to reduced costs and better service
general increased enthusiasm from involvement of the workforce.
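Since the achievements above are expressed in OEE terms, a short sketch of the standard OEE calculation may help; the production figures used here are purely hypothetical.

```python
# Minimal OEE sketch with hypothetical figures.
# OEE = Availability x Performance x Quality (standard TPM definition).
planned_time_min = 480     # one shift
downtime_min     = 60      # breakdowns and changeovers
ideal_cycle_min  = 1.0     # ideal minutes per piece
total_pieces     = 350
defective_pieces = 10

availability = (planned_time_min - downtime_min) / planned_time_min                   # 0.875
performance  = (ideal_cycle_min * total_pieces) / (planned_time_min - downtime_min)   # 0.833
quality      = (total_pieces - defective_pieces) / total_pieces                        # 0.971

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")   # roughly 70.8%
```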

However, the central paradox of the whole TPM process is that, given that TPM is supposed to be about doing better maintenance, proponents end up with (largely) the same discredited schedules that they had already (albeit now being carried out by different people). Yes, the organization is more empowered and re-shaped to allow maintenance to be carried out in the modern arena, but we are still left with the problem of what maintenance should be done. The Reliability Centered Maintenance (RCM) process was evolved within the civil aviation industry to fulfil this precise need. In fact, the definition of RCM is a process
used to determine the maintenance requirements of physical assets in their present operating context. In essence, we have two objectives: determine the maintenance requirements of the physical assets within their current operating context, and then ensure that these requirements are met as cheaply and effectively as possible. RCM is better at delivering objective one; TPM focuses on objective two.

Total Productive Maintenance in India

Indian industry is facing severe global competition and hence many companies are finding it very difficult to protect the bottom line. The past decade has transformed the definition of market price, which was based on a simple assumption under monopolistic conditions, as given below:

Production cost + Profit = Market price

However, under the present scenario where all firms face domestic and global competition, the above definition no longer holds good and has simply been transformed into:

Market price - Production cost = Profit

Although the two equations look mathematically the same, the difference is significant in the present scenario: the customer, who has become quite demanding with respect to cost, quality and variety, determines the market price. The current economic environment therefore brings tremendous pressure to optimize production cost for the very survival of the unit (a small numeric illustration of this shift follows the implementation requirements below). TPM meets this challenge and provides an effective program in terms of increased plant efficiency and productivity.

TPM is a means of creating a safe and participative work environment, in which all employees target the elimination of all kinds of waste generated by equipment failures, frequent breakdowns and defective products, including rejections and rework. This leads to higher employee morale and greater organizational profitability. TPM implementation is not a difficult task. However, it requires:
1. Total commitment to the program by top management, as it has to be TOP DRIVEN to succeed.
2. Total involvement and participation of all the employees.
3. Attitudinal changes and a paradigm shift towards job responsibilities.
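To make the contrast between the two pricing equations concrete, here is a short numeric sketch; all figures are hypothetical. With a cost-plus price the producer passes cost on to the customer, whereas with a market-dictated price the profit is only what remains, so cost reduction (for example through TPM) becomes the main lever.

```python
# Hypothetical illustration of the two pricing equations discussed above.
production_cost = 80.0
desired_profit  = 20.0

# Monopolistic era: Production cost + Profit = Market price
cost_plus_price = production_cost + desired_profit   # 100.0, set by the producer

# Competitive era: Market price - Production cost = Profit
market_price = 95.0                                   # dictated by the customer/market
profit = market_price - production_cost               # 15.0

# With the price fixed by the market, only cost reduction can restore profit:
reduced_cost = production_cost * 0.90                 # e.g. a 10% cost reduction via TPM
print(profit, market_price - reduced_cost)            # 15.0 23.0
```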

Steps involved in effective implementation of TPM are given briefly as follows:
1. Appoint a committed and responsible co-ordinator who is well-versed in the TPM concept and able to convince the entire work force through an educational program.
2. Select a model cell or machine which has the maximum potential for improvement in respect of optimizing profits. Such a model cell or machine will have the maximum problems in respect of:
Product quality
Equipment failure and frequent breakdowns
Unsafe conditions causing safety hazards

3. The TPM co-ordinator will head a small team comprising the persons concerned with the selected model cell or machine, and would initiate the TPM program by first meticulously recording, with full transparency, the problematic areas in respect of product defects, equipment breakdown data and the number of accidents from past data, if available. However, if reliable data are not available, they will have to be built up for the purpose of benchmarking, keeping records of progress and scheduling the improvements (targets) within the time period.
4. The TPM co-ordinator would encourage the team members to initiate TPM through initial cleaning of the machines taken as a model for improvement. Initial cleaning is done to remove shortcomings or defects developed over the years, which have remained unnoticed.
5. Initial cleaning activities include removing dust, dirt, fluff, etc. Through this process, improper conditions of the machine get detected in the form of:

Inaccuracies leading to defects in regard to quality
Defective parts/components leading to the development of defects in the machine
Detection of excessive wear and tear in the moving parts of the machine leading to production of defective parts
Motion resistance observed due to foreign matter found in moving parts of the machine
Detection of defects like loose fasteners, scratches, deformation and leakage, etc., remaining invisible in dirty equipment.

The initial cleaning leads to inspection, which in turn helps in detecting the abnormalities gathered in the equipment over the years that cause quality defects, equipment failures and safety hazards. The action team would be responsible for ascertaining the problem areas and determining the sources of generation of abnormalities and those causing forced deterioration. Through deliberation on various techniques, the team would detail a course of corrective action and implement the corrective process. It is quite possible that in the beginning the team members may not find it easy to recognize problem areas and determine the countermeasures that eliminate the sources of error. They have to keep thinking creatively that things could be done better.

Pressure die casting company: For this company, one 660 T HMT m/c was selected for development as a model cell. The m/c was studied and evaluated critically and in depth by the team. The team decided to set the first objective as achieving a Delightful Working Environment (DWE), which is a pre-requisite to the introduction of TPM practices. The team identified 14 cause factors responsible for the present condition of the m/c and standing in the way of achieving the DWE, the first objective. These cause factors were analyzed and improvement themes were developed, such as countermeasures to dust and dirt, to difficult-to-clean and non-accessible areas, and to leakages, flash and water spray, die coat spray, etc. Implementation of the solutions resulted in the following major achievements:


Delightful working environment
Productivity improvement by 20%
Metal saving through elimination of flash and coil formation during each stroke.

Similar achievements resulted in the automobile company also, in the form of:
Breakdown reduction
Defects reduction
Space saving
Productivity improvement

To make these achievements sustainable, a very important aspect of TPM is the establishment of Autonomous Maintenance. The purpose of autonomous inspection is to teach operators how to maintain their equipment by performing the following daily, in not more than 15 minutes, thus developing ownership of the machine:
Daily checks
Lubrication management
Tightening and checking schedule of fasteners
Cleaning schedule
Early detection of abnormal conditions of the machine through sound, temperature and vibration.

In the current industrial scenario, TPM may be one of the only concepts that stand between success and total failure for some organizations. It is a program that works if it is implemented effectively, with all sincerity and the dedicated efforts of a participative team.

3.7 TEROTECHNOLOGY

Terotechnology is a term of Greek origin referring to the study of the costs associated with an asset throughout its life cycle, from acquisition to disposal. The goals of this approach are to reduce the different costs incurred at the various stages of the asset's life and to derive methods that will help extend the asset's life span. It is also known as life-cycle costing.

Terotechnology uses tools such as net present value, internal rate of return and discounted cash flow in an attempt to minimize the costs associated with the asset in the future (a small discounted-cost sketch follows at the end of this passage). These costs can include engineering, maintenance, wages payable to operate the equipment, operating costs and even disposal costs. For example, let us say an oil company is attempting to map out the costs of an off-shore oil platform. It would use terotechnology to map out the exact costs associated with assembly, transportation, maintenance and dismantling of the platform, and finally a calculation of salvage value. As you can imagine, this study is not an exact science: there are many variables that need to be estimated and approximated. However, a company that does not use this kind of study, and instead approaches an asset's life cycle in a more ad hoc manner, may well be worse off than one that does.

Terotechnology: A word derived from the Greek, meaning the study and management of an asset's life from its very start (acquisition) to its very end (final disposal, perhaps involving dismantling and specialised treatment prior to scrap). One of the most dramatic examples of the full consideration of terotechnology is the construction, use and final decommissioning of an oil platform at sea.
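As a minimal sketch of such a discounted life-cycle calculation, the cash flows, asset life and 8 % discount rate below are purely hypothetical assumptions, not figures from the text.

```python
# Minimal life-cycle cost sketch: discount every cost (and the salvage credit)
# of an asset back to present value, as terotechnology / life-cycle costing does.
acquisition_cost = 1_000_000   # year 0
annual_operating = 120_000     # years 1..10 (operation + maintenance + wages)
disposal_cost    = 150_000     # year 10 (dismantling)
salvage_value    = 50_000      # year 10 credit
life_years       = 10
rate             = 0.08        # assumed discount rate

def pv(amount, year, r=rate):
    """Present value of a single cash flow."""
    return amount / (1 + r) ** year

life_cycle_cost = acquisition_cost
life_cycle_cost += sum(pv(annual_operating, y) for y in range(1, life_years + 1))
life_cycle_cost += pv(disposal_cost - salvage_value, life_years)

print(f"Discounted life-cycle cost: {life_cycle_cost:,.0f}")
```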


The word is derived from the Greek root tero, meaning I care, which is now used with the term technology to refer to the study of the costs associated with an asset throughout its life cycle, from acquisition to disposal.

Life-Cycle Costing

Life-cycle costing estimates a product's revenues and expenses over its expected life cycle. The result is to highlight upstream and downstream costs in the cost planning process that often receive insufficient attention. The emphasis is on the need to price products to cover all costs, not just production costs.

History of life cycle cost analysis

Life cycle cost analysis became popular in the 1960s when the concept was taken up by U.S. government agencies as an instrument to improve the cost effectiveness of equipment procurement. From that point, the concept spread to the business sector, and is used there in new product development studies, project evaluations and management accounting. As there is high interest in life cycle cost analysis in maintenance, the International Electrotechnical Commission published a standard (IEC 60300) in 1996, which lies in the field of dependability management and gives recommendations on how to carry out life cycle costing. This standard was renewed in July 2004.

Realization of a life cycle cost analysis

A life cycle cost analysis calculates the cost of a system or product over its entire life span. This also involves the process of Product Life Cycle Management, so that the life cycle profits are maximized. The analysis of a typical system could include costs for:


planning, research and development, production, operation, maintenance, cost of replacement, disposal or salvage.

This cost analysis depends on values calculated from other reliability analyses, such as failure rate, cost of spares, repair times and component costs. Such a study is sometimes called a cradle-to-grave or womb-to-tomb analysis.
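As a minimal sketch of how such reliability figures feed the cost estimate, the failure rate, repair time and cost values below are hypothetical assumptions.

```python
# Hypothetical sketch: expected annual maintenance cost of one component,
# driven by its failure rate, repair time and spares cost.
failures_per_year  = 2.5      # from reliability analysis (failure rate)
repair_hours       = 6.0      # mean time to repair
labour_rate_per_hr = 40.0
spares_cost        = 300.0    # per failure
downtime_cost_hr   = 500.0    # lost production per hour of downtime

cost_per_failure = repair_hours * (labour_rate_per_hr + downtime_cost_hr) + spares_cost
annual_maintenance_cost = failures_per_year * cost_per_failure

print(f"Expected annual maintenance cost: {annual_maintenance_cost:,.0f}")  # about 8,850
```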


A life cycle cost analysis is important for cost accounting purposes. In deciding whether to produce or purchase a product or service, a timetable of life cycle costs helps show what costs need to be allocated to a product so that an organization can recover its costs. If all costs cannot be recovered, it would not be wise to produce the product or service. It also reinforces the importance of locked-in costs, such as R&D. It offers three important benefits:
- All costs associated with a project/product become visible, especially upstream costs (R&D) and downstream costs (customer service).
- It allows an analysis of business function interrelationships; low R&D costs may lead to high customer service costs in the future.
- Differences in early-stage expenditure are highlighted, enabling managers to develop accurate revenue predictions.

A typical quantitative analysis would involve a statement in which the costs of the different products a company produces are set next to each other so that they can be compared easily.

Disambiguation: Life cycle cost analysis or life cycle costing should not be confused with

life cycle analysis, which is a part of the ISO 14000 series concerned with environmental issues
product life cycle analysis, which is a time-dependent marketing construct.

3.8 BUSINESS PROCESS RE-ENGINEERING (BPR), PRINCIPLES, APPLICATIONS

Business Process Re-engineering is a management approach aiming at improvements by elevating the efficiency and effectiveness of the processes that exist within and across organizations. Business process re-engineering is also known as BPR, Business Process Redesign, Business Transformation, or Business Process Change Management. In 1990, Michael Hammer, a former professor of computer science at the Massachusetts Institute of Technology (MIT), published an article in the Harvard Business Review in which he claimed that the major challenge for managers is to obliterate non-value adding work, rather than use technology to automate it. This statement implicitly accused managers of having focused on the wrong issues, namely that technology in general, and information technology more specifically, had been used primarily for automating existing work rather than as an enabler for making non-value adding work obsolete.

Hammer's claim was simple: most of the work being done does not add any value for customers, and this work should be removed, not accelerated through automation. Instead, companies should reconsider their processes in order to maximize customer value while minimizing the consumption of the resources required to deliver their product or service. A similar idea was advocated by Thomas H. Davenport and J. Short (1990), at that time a member of the Ernst & Young research center, in a paper published in the Sloan Management Review the same year as Hammer published his paper. This idea, to review a company's business processes without bias, was rapidly adopted by a huge number of firms which were striving for renewed competitiveness, which they had lost due to the market entrance of foreign competitors, their inability to satisfy customer needs, and their insufficient cost structure. Even well-established management thinkers, such as Peter Drucker and Tom Peters, accepted and advocated BPR as a new tool for (re-)achieving success in a dynamic world. During the following years, a fast-growing number of publications, books as well as journal articles, was dedicated to BPR, and many consulting firms embarked on this trend and developed BPR methods. However, critics were quick to claim that BPR was a way to dehumanize the workplace, increase managerial control and justify downsizing, i.e., major reductions of the work force (Greenbaum 1995, Industry Week 1994), and a rebirth of Taylorism under a different label. Despite this critique, re-engineering was adopted at an accelerating pace and by 1993 as many as 65% of the Fortune 500 companies claimed either to have initiated re-engineering efforts or to have plans to do so.

Additionally, Davenport points out the major difference between BPR and other approaches to organization development (OD), especially the continuous improvement or TQM movement, when he states: Today firms must seek not fractional, but multiplicative levels of improvement - 10x rather than 10%. Finally, Johansson et al. (1993) provide a description of BPR relative to other process-oriented views, such as Total Quality Management (TQM) and Just-in-Time (JIT), and state: Business Process Reengineering, although a close relative, seeks radical rather than merely continuous improvement. It escalates the efforts of JIT and TQM to make process orientation a strategic tool and a core competence of the organization. BPR concentrates on core business processes, and uses the specific techniques within the JIT and TQM toolboxes as enablers, while broadening the process vision.

In order to achieve the major improvements BPR is seeking, changing structural organizational variables and other ways of managing and performing work is often considered insufficient. To be able to reap the achievable benefits fully, the use of information technology (IT) is conceived as a major contributing factor. While IT has traditionally been used for supporting the existing business functions, i.e., for increasing organizational efficiency, it now plays a role as an enabler of new organizational forms and patterns of collaboration within and between organizations.

BPR derives its existence from different disciplines, and four major areas can be identified as being subject to change in BPR - organization, technology, strategy, and people - where a process view is used as a common framework for considering these dimensions. Business strategy is the primary driver of BPR initiatives and the other dimensions are governed by strategy's encompassing role. The organization dimension reflects the structural elements of the company, such as hierarchical levels, the composition of organizational units, and the distribution of work between them. Technology is concerned with the use of computer systems and other forms of communication technology in the business. In BPR, information technology is generally considered as playing a role as an enabler of new forms of organizing and collaborating, rather than supporting existing business functions. The people / human resources dimension deals with aspects such as education, training, motivation and reward systems. The concept of business processes - interrelated activities aiming at creating a value-added output for a customer - is the basic underlying idea of BPR. These processes are characterized by a number of attributes: process ownership, customer focus, value-adding, and cross-functionality.


The role of information technology

Information technology (IT) plays an important role in the re-engineering concept. It is considered a major enabler for new forms of working and collaborating within an organization and across organizational borders. The early BPR literature, e.g., Hammer & Champy (1993), identified several so-called disruptive technologies that were supposed to challenge traditional wisdom about how work should be performed:
1. Shared databases, making information available at many places
2. Expert systems, allowing generalists to perform specialist tasks
3. Telecommunication networks, allowing organizations to be centralized and decentralized at the same time
4. Decision-support tools, allowing decision-making to be a part of everybody's job
5. Wireless data communication and portable computers, allowing field personnel to work independently of the office
6. Interactive videodisks, to get in immediate contact with potential buyers
7. Automatic identification and tracking, allowing things to tell where they are instead of requiring to be found
8. High-performance computing, allowing on-the-fly planning and revisioning.

In the mid-1990s, workflow management systems in particular were considered a significant contributor to improved process efficiency. ERP (Enterprise Resource Planning) vendors, such as SAP, also positioned their solutions as vehicles for business process redesign and improvement.

Methodology

Although the names and steps used differ slightly between the different methodologies, they share the same basic principles and elements. The following description is based on the PRLC (Process Reengineering Life Cycle) approach developed by Guha et al. (1993).


FIGURE 3.12 Simplified schematic outline of using a business process approach, exemplified for pharmaceutical R&D: (1) structural organization with functional units; (2) introduction of New Product Development as a cross-functional process; (3) re-structuring and streamlining of activities, removal of non-value adding tasks.

1. Envision new processes
1.1 Secure management support
1.2 Identify re-engineering opportunities
1.3 Identify enabling technologies
1.4 Align with corporate strategy
2. Initiating change
2.1 Set up re-engineering team
2.2 Outline performance goals
3. Process diagnosis
3.1 Describe existing processes
3.2 Uncover pathologies in existing processes
4. Process redesign
4.1 Develop alternative process scenarios
4.2 Develop new process design
4.3 Design HR architecture
4.4 Select IT platform
4.5 Develop overall blueprint and gather feedback
5. Reconstruction
5.1 Develop/install IT solution
5.2 Establish process changes
6. Process monitoring
6.1 Performance measurement, including time, quality, cost and IT performance
6.2 Link to continuous improvement

BPR - a rebirth of scientific management?

BPR is often accused by its critics of being a re-animation of Taylor's principles of scientific management, aiming at increasing productivity to a maximum but disregarding aspects such as the work environment and employee satisfaction. It can be agreed that Taylor's theories, in conjunction with the work of the early administrative scientists, have had a considerable impact on the management discipline for more than 50 years. However, it is not self-evident that BPR is a close relative of Taylorism, and this proposed relation deserves closer investigation.

In the late 19th century Frederick Winslow Taylor, a mechanical engineer, started to develop the idea of management as a scientific discipline. He applied the principle that work and its organizational environment could be considered and designed upon scientific principles, i.e., that work processes could be studied in detail using a positivist analytic approach. On the basis of this analysis, an optimal organizational structure and way of performing all work tasks could be identified and implemented. However, he was not the one who originally invented the concept. In 1886, a paper entitled The Engineer as Economist, written by Henry R. Towne for the American Society of Mechanical Engineers, had laid the bedrock for the development of scientific management. The basic idea of scientific management was that work could be studied from an objective scientific perspective and that the analysis of the gathered information
could be used for increasing productivity, especially of blue-collar work, significantly. Taylor (1911) summarized his observations in the following four principles:

Observation and analysis through time study to set the optimal production rate; in other words, develop a science for each man's task: a one best way.
Scientifically select the best man for the job and train him in the procedures he is expected to follow.
Co-operate with the man to ensure that the work is done as described. This means establishing a differential rate system of piece work and paying the man on an incentive basis, not according to the position.
Divide the work between managers and workers so that managers, rather than the individual worker, are given the responsibility for the planning and preparation of work.

Scientific management's main characteristic is the strict separation of planning and doing, which was implemented by the use of a functional foremanship system. This means that a worker, depending on the task he is performing, can report to different foremen, each of them being responsible for a small, specialized area. Taylor's ideas had a major impact on manufacturing, but also on administration. One of the most well-known examples is Ford Motor Co., which adopted the principles of scientific management at an early stage and built its assembly line for the T-model based on Taylor's model of work and authority distribution, thereby giving its name to Fordism.

Successes

BPR, if implemented properly, can give huge returns. BPR has helped giants like Procter and Gamble Corporation and General Motors Corporation succeed after financial setbacks due to competition. It helped American Airlines somewhat get back on track from the bad debt that is currently haunting their business practice. BPR is about the proper method of implementation. General Motors Corporation implemented a three-year plan to consolidate their multiple desktop systems into one, known internally as the Consistent Office Environment. This re-engineering process involved replacing the numerous brands of desktop systems, network operating systems and application development tools with a more manageable number of vendors and technology platforms. According to Donald G. Hedeen, director of desktops and deployment at GM and manager of the upgrade program, the process lays the foundation for the implementation of a
common business communication strategy across General Motors (Booker, 1994). Lotus Development Corporation and Hewlett-Packard Development Company, formerly Compaq Computer Corporation, received the single largest non-government sales ever from General Motors Corporation. GM also planned to use Novell NetWare as a security client, Microsoft Office and Hewlett-Packard printers. According to Donald G. Hedeen, this saved GM 10% to 25% on support costs, 3% to 5% on hardware and 40% to 60% on software licensing fees, and increased efficiency by overcoming incompatibility issues through the use of just one platform across the entire company.

Southwest Airlines offers another successful example of re-engineering a company and using information technology the way it was meant to be implemented. In 1992, Southwest Airlines had a revenue of $1.7 billion and an after-tax profit of $91 million. American Airlines, the largest U.S. carrier, on the other hand had a revenue of $14.4 billion but lost $475 million and had not made a profit since 1989 (Furey and Diorio, 1994). Companies like Southwest Airlines know that their formula for success is easy for new start-ups like Morris, Reno and Kiwi Airlines to copy. In order to stay in the game of competitive advantage, they have to continuously re-engineer their strategy; BPR helps them stay original.

Critique

The most frequent and harsh critique against BPR concerns the strict focus on efficiency and technology and the disregard of the people in the organization that is subjected to a re-engineering initiative. Very often, the label BPR was used for major workforce reductions. Thomas Davenport, an early BPR proponent, stated: When I wrote about business process redesign in 1990, I explicitly said that using it for cost reduction alone was not a sensible goal. And consultants Michael Hammer and James Champy, the two names most closely associated with re-engineering, have insisted all along that layoffs shouldn't be the point. But the fact is, once out of the bottle, the re-engineering genie quickly turned ugly. (Davenport, 1995) Michael Hammer similarly admitted: I wasn't smart enough about that. I was reflecting my engineering background and was insufficiently appreciative of the human dimension. I've learned that's critical. (White, 1996) Other criticisms brought forward against the BPR concept include


lack of management support for the initiative and thus poor acceptance in the organization.
exaggerated expectations regarding the potential benefits from a BPR initiative and consequently failure to achieve the expected results.
underestimation of the resistance to change within the organization.
implementation of generic so-called best-practice processes that do not fit specific company needs.
overtrust in technology solutions.
performing BPR as a one-off project with limited strategy alignment and long-term perspective.
poor project management.

Development after 1995

With the publication of critical articles by some of the founding fathers of the BPR concept in 1995 and 1996, the re-engineering hype was effectively over. Since then, considering business processes as a starting point for business analysis and redesign has become a widely accepted approach and is a standard part of the change methodology portfolio, but it is typically performed in a less radical way than originally proposed. More recently, the concept of Business Process Management (BPM) has gained major attention in the corporate world and can be considered a successor to the BPR wave of the 1990s, as it is equally driven by a striving for process efficiency supported by information technology. Just as with the critique brought forward against BPR, BPM is now accused of focusing on technology and disregarding the people aspects of change.

Application Re-Engineering

Business Process Re-engineering (BPR) combines aspects of re-engineering and business process outsourcing, and rationalizes them to return value in a short span of time. This, of course, maximizes long-term prospects for your business. Infosys provides Business Process Re-engineering (BPR) solutions that assist you to fundamentally rethink and redesign how your organization will meet its strategic objectives. Emphasis is on innovation, flexibility, quality service delivery and cost control, by re-engineering business methods and supporting processes using state-of-the-art BPR tools and methodologies.

Business Process re-engineering services address the need to leverage newer technology platforms, frameworks, and software products to transform IT systems and applications. Business Process Re-engineering applications help you:


Scale up to handle a larger user base
Effectively address operational or performance issues with the current application portfolio

Achieve a higher degree of maintainability
Alleviate licensing and support issues with the older technologies
Improve user friendliness and portability of applications
Reduce costs associated with maintaining old and poorly documented legacy systems

Infosys Business Process Re-engineering services are backed by delivery excellence, robust consulting capabilities and proprietary tools and frameworks.

How can it be dramatically improved?

Business and technology capabilities: Teams of dedicated business and technology specialists work with project teams to understand systems and architect robust technology solutions for re-engineering or migration.

Re-engineering process: Coupled with its CMM Level 5 re-engineering process, Infosys achieves some of the best metrics in the industry.

Proprietary InFlux methodology: The InFlux methodology aligns IT solutions to business requirements. It prescribes a methodical, process-centric approach to translate business requirements into clear IT solutions. It is a systematic and repeatable process that derives every part of the solution from the business process that will ultimately use it. InFlux uses models, methods, techniques, tools, patterns and frameworks to achieve a smooth translation of enterprise business objectives into an effective IT solution.

Project management capability: Our proprietary project management processes, well-integrated project management tools, and senior management involvement at every stage of the project ensure that you will benefit from de-risked project management.


Business Process Re-engineering: Re-engineer your business processes to help achieve performance improvements.

Flexible Architecture Definition: Define the new solution architecture to make any enhancement easy to implement.

Strong knowledge management disciplines: Infosys has a clearly-defined knowledge management process and support infrastructure, and has a number of knowledge assets on re-engineering in different technology platforms.

Risk Management: Strong project planning and management processes reduce operational and other business risks.

Smooth rollout: Strong project management and re-engineering processes ensure a smooth rollout of new technology platforms, organization-wide.

3.9 RE-ENGINEERING PROCESS, BENEFITS AND LIMITATIONS

Re-engineering

Re-engineering is the radical redesign of an organization's processes, especially its business processes. Rather than organizing a firm into functional specialties (like production, accounting, marketing, etc.) and looking at the tasks that each function performs, we should, according to re-engineering theory, be looking at complete processes, from materials acquisition to production to marketing and distribution. The firm should be re-engineered into a series of processes.

The main proponents of re-engineering were Michael Hammer and James A. Champy. In a series of books including Re-engineering the Corporation, Reengineering Management, and The Agenda, they argue that far too much time is wasted passing on tasks from one department to another. They claim that it is far more efficient to appoint a team that is responsible for all the tasks in the process. In The Agenda they extend the argument to include suppliers, distributors and other business partners.

Re-engineering is the basis for many recent developments in management. The cross-functional team, for example, has become popular because of the desire to re-engineer separate functional tasks into complete cross-functional processes. Also, many recent management information systems developments aim to integrate a wide number of business functions. Enterprise resource planning, supply chain management, knowledge management systems, groupware and collaborative systems, human resource management systems and customer relationship management systems all owe a debt to re-engineering theory.

Criticisms of re-engineering

Re-engineering has earned a bad reputation because such projects have often resulted in massive layoffs. This reputation is not altogether unwarranted, since companies have often downsized under the banner of re-engineering. Further, re-engineering has not always lived up to expectations. The main reasons seem to be that:


re-engineering assumes that the factor limiting an organization's performance is the ineffectiveness of its processes (which may or may not be true) and offers no means of validating that assumption
re-engineering assumes the need to start the process of performance improvement with a clean slate, i.e. to totally disregard the status quo
according to Eliyahu M. Goldratt (and his theory of constraints), re-engineering does not provide an effective way to focus improvement efforts on the organization's constraint.

There was considerable hype surrounding the book's introduction (partially due to the fact that the authors of Re-engineering the Corporation reportedly bought large numbers of copies to promote it to the top of bestseller lists). Abrahamson (1996) showed that fashionable management terms tend to follow a lifecycle, which for re-engineering peaked between 1993 and 1996 (Ponzi and Koenig 2002). While arguing that re-engineering was in fact nothing new (e.g., when Henry Ford implemented the assembly line in 1908, he was in fact re-engineering, radically changing the way of thinking in an organization), Dubois (2002) highlights the value of signaling terms such as re-engineering: giving the idea a name and stimulating it. At the same time, there can be a danger in using such fashionable concepts as mere ammunition to implement particular reforms.

Reengineering, by Paul A. Strassmann, excerpted from The Politics of Information Management, The Information Economics Press, 1995

Early in 1993, an epochal event took place in the U.S. For the first time in history, white-collar unemployment exceeded blue-collar unemployment. In the experience of older generations, a college education entitled one to a job with excellent earning potential, long-term job security and the opportunity to climb a career ladder. If there was an economic downturn, unemployment was something that happened to others.

Large-scale white-collar unemployment should not have come as a surprise. Since 1979, the U.S. information workforce had kept climbing and in 1993 stood at 54% of total employment. Forty million new information workers had appeared since 1960. What do these people do? They are very busy and end up as corporate overhead, or as social overhead if they work in the public sector. They are lawyers, consultants, coordinators, clerks, administrators, managers, executives and experts of all sorts. The expansion in computer-related occupations greatly increased the amount of information that these people could process and therefore demand from others. It is a characteristic of information work that it breeds additional information work at a faster rate than the number of people added to the information payroll. Computers turned out to be greater multipliers of work than any other machine ever invented. However, the greatest growth has been in government, which now employs more people than the manufacturing sector. Government workers are predominantly engaged in passing information and redistributing money, which requires compliance with complex regulations. Who pays for this growth in overhead? Everybody does, either in higher prices or as increased taxes. As long as U.S. firms could raise prices, there was always room for more overhead. When international economic competition started cutting into market share in the 1980s, corporations had to reduce staff costs. Blue-collar labor essential to manufacture goods was either outsourced to foreign lands or automated, using proven industrial engineering methods to substitute capital for labor. By the mid-1980s, major cost cuts could come only from reductions in overhead.

Overhead Cost Reduction

Early attempts announcing 20% or more across-the-board layoffs in major corporations misfired. The most valuable experts left first to start up business ventures, most often with the knowledge they had gained while the large firms lingered in bringing innovations to the marketplace. Much of the dynamic growth of Silicon Valley and of the complexes surrounding Boston has its origins in the entrepreneurial exploitation of the huge research and development investments of large corporations. The next wave was even more wasteful, because overhead was reduced by imposing cost-cutting targets without the benefit of redesigning any of the business processes. Companies that resorted to these crude methods did not have the experience to measure the value-added of information workers. Therefore, they resorted to methods that may have been somewhat effective for controlling blue-collar employees. That was not successful, because the same treatment that was acceptable for factory
workers made the remaining management staff act in defensive and counterproductive ways to protect their positions. Such methods disoriented and demoralized many who were responsible for managing customer service. This is where re-engineering came in. It applies well-known industrial engineering methods of process analysis, activity costing and value-added measurement which have been around for at least 50 years.

Appearance of Re-engineering

The essence of re-engineering is to make the purge of recent excess staffing binges more palatable to managers. These executives became accustomed to increasing their own staff support as a means towards gaining greater organizational clout. An unspoken convention used by officials at high government and corporate levels is that a position in a hierarchy exists independently of whether something useful is delivered to customers. The primary purpose of high-level staffs is to act as guardians of the bureaucracy's budget, privileges and influence.

If you want to perform surgery on management overhead, do not do it in a dark room with a machete. First, you must gain acceptance from those who know how to make the organization work well. Second, you must elicit their cooperation in telling you where the cutting will do the least damage. Third, and most importantly, they must be willing to share with you insights into where removal of an existing business process will actually improve customer service. Budget cutters who do little else than seek out politically unprotected components cannot possibly know the full consequences of their actions. Re-engineering offers them an easy way out. Re-engineering calls for throwing out everything that exists and recommends reconstituting a workable organization on the basis of completely fresh ideas. The new business model is expected to spring forth from the inspired insights of a new leadership team. Re-engineering is a contemporary repackaging of industrial engineering methods from the past rather than something totally original. This cure is now administered in large doses to business enterprises that must instantly show improved profits to survive. However, re-engineering differs from the incremental and carefully analytic methods of the past. In political form it is much closer to a coup d'état than to the methods of a parliamentary democracy.


Re-engineering in the Public Sector

Despite admirable pronouncements about re-engineering the U.S. government, it remains to be seen whether that may be a smoke screen to justify more spending. As long as the federal government continues increasing taxes - an easy way out of any cost pressures - the prospects of reinventing the government will remain dim. Reinventing government does not deliver savings if meanwhile you keep expanding its scope. You can have less bureaucracy only if you eliminate functions that have demonstrably failed, such as loan guarantees, public housing, diverting schools from education to social experimentation, managing telecommunications and prescribing health care. Except for defense, justice, foreign relations and similar tasks which are essential instruments of governance, public sector attempts at economic engineering have always failed. The latest Washington re-engineering campaign may turn out to be a retrogression instead of an improvement. You do not enhance a stagnating economy by claiming to save a probable $108 billion so that you can add over a trillion dollars of economic control to the public sector. An emetic will always be an emetic, regardless of the color and shape of the bottle it comes from. It does not do much for those who keep up a healthy diet by eating only what their body can use. A cure claiming to be an emetic but which nevertheless fattens will increase obesity.

Extremists

Re-engineering is a great idea and a clever new buzzword. There is not a manager who would not support the idea of taking something that is defective and then fixing it. Industrial engineers, methods analysts and efficiency experts have been doing that for a long time. The recently introduced label of efficiency through re-engineering covers the adoption of radical means to achieve corrective actions. This extremism offers what appears to be instant relief from the pressures on corporate executives to show immediate improvements. Re-engineering, as recently promoted, is a new label that covers some consultants' extraordinary claims. To fully understand the intellectual roots of re-engineering, let the most vocal and generally acknowledged guru of re-engineering speak for himself. American managers ...must abandon the organizational and operational principles and procedures they are now using and create entirely new ones. Business
re-engineering means starting all over, starting from scratch. It means forgetting how work was done...old titles and old organizational arrangements...cease to matter. How people and companies did things yesterday doesn't matter to the business re-engineer. Re-engineering...can't be carried out in small and cautious steps. It is an all-or-nothing proposition that produces dramatically impressive results.

The Contributions of Mike Hammer

When Hammer was asked how managers contemplating a big re-engineering effort get everyone inside their company to join up, he answered in terms that reflect the violent point of view of all extremists on how to achieve any progress: ...On this journey we...shoot the dissenters. The theme of turning destruction on your own people remains a persistent motive: ...It's basically taking an ax and a machine gun to your existing organization. In view of the widespread popularity of Hammer, I wonder how executives can subscribe to such ferocious views while preaching about individual empowerment, teamwork, partnership, participative management, the knowledge-driven enterprise, the learning corporation, employee gain sharing, fellow-worker trust, common bonds, shared values, people-oriented leadership, cooperation and long-term career commitment.

I usually match the ideas of new prophets with past patterns. It helps to understand whether what is proposed is a repackaging of what has been tried before. I find that Hammer's sentence structure, as well as his dogmatic pronouncements, resonates with the radical views put forth by political hijackers like Robespierre, Lenin, Mao and Guevara. Just replace some of the nouns, and you can produce slogans that have been attributed to those who gained power by overthrowing the existing order. It is no coincidence that the most widely read book on re-engineering carries the provocative subtitle A Manifesto for Business Revolution and claims to be a seminal book comparable to Adam Smith's The Wealth of Nations - the intellectual underpinning of capitalism. All you have to remember is that there is another book, also bearing the title Manifesto, that successfully spread the premise that the only way to improve capitalism is to obliterate it. What is at issue here is much more than re-engineering, which has much to commend itself. The question is one of the morality of commerce against the morality of warfare.

The morality of warfare, of vengeance, violent destruction and the use of might, has been with us ever since primitive tribes had to compete for hunting grounds. Societies have recognized the importance of warfare by sanctioning a class that was allowed, subject to some rules, to kill, while the redeeming code of loyalty and self-sacrifice for the good of all would prevail. The morality of commerce has been with us at least since 500 BC. It is based on shunning force, coming up with voluntary agreements, collaborating even with strangers and aliens, respecting contracts and promoting exchanges that benefit both the buyer and the seller. Just about every major national tragedy in the last few centuries can be traced to the substitution of the morality of warfare for the morality of commerce, under the guise that this will lead to greater overall prosperity. Mike Hammer's adoption of the non-redeeming expressions of military morality has crossed the line of what ought to be acceptable. Re-engineering is and should remain an activity in the commercial domain and should be bound by its morality. Leave the military language and thinking to those who have to deal with the difficult choices one faces when confronting the prospect of getting shot at.

Effectiveness of Revolutionary Changes

I have listened carefully to the extremists who are the most prominent promoters of the proven old ideas now repackaged as a managerial innovation. Their well-financed promotional efforts have succeeded in gaining at least temporary respectability for re-engineering. I have found that they have successfully translated the radicalism of the 1960s, with its socially unacceptable slogan Do not reform, obliterate!, into a fashionable, money-making proposition. The clarion call for overthrowing the status quo is similar to that trumpeted by the radical students who occupied the Dean's office. Now the same arguments can be fashioned into more lucrative consulting services. If you ask many of the radical proponents of re-engineering what they did while they were at university during the 1960s, you will find a surprising number who pride themselves on being erstwhile anti-establishment revolutionaries. If you look at political revolutionary movements back to the French Revolution, you will find their leaders motivated by a fixation on seizing power from the Establishment under whatever slogan could be sold to those who hoped to improve justice, freedom or profit. Revolutionary leaders of the past 200 years, mostly intellectuals who hardly ever delivered anything other than pamphlets and speeches, have been consistent in making conditions worse after taking over the Establishment.

There is one thing that all past revolutionary movements have in common with the extremist views of re-engineering. In each case, the leaders call for the complete and uncompromising destruction of institutions as they exist. It is only through this kind of attack on customs, habits and relationships that newcomers can gain influence without much opposition. The common characteristic of the elite that agitates destructively for positions of leadership is an arrogance that they are the only ones with the superior insight to be trusted on what to do.

I am in favor of making evolutionary improvements in the way people work. If you want to call that re-engineering, that's OK, though I prefer to call it business process redesign because the other label has become tainted by extremism. Besides, you cannot re-engineer something that has not been engineered to begin with. Organizations evolve because it is impossible to design complex human relationships as if they were machine parts. What matters is not the label, but the means by which you help organizations to improve. The long record of miscarriages of centrally planned radical reforms, and the dismal record of re-engineering as acknowledged by Mike Hammer himself, suggest that an evolutionary approach will deliver better and more permanent improvements.

Views on Business Improvement

Lasting improvements in business processes can be made only with the support of those who know your business. Creating the conditions for continuous, incremental and adaptive change is the primary task of responsible leadership. Cut-backs that respond abruptly to a steadily deteriorating financial situation are a sure sign that management has been either incompetent or asleep. Evolutionary change stimulates the imagination and morale. It creates conditions for rewarding organizational learning and for inspiring employees to discover innovative ways of dealing with competitive challenges and with adversity. Dismissing employees on a large scale, accompanied by incentives for long-time employees to resign voluntarily, will paralyze those who are left with fear and an aversion to taking any initiative. It will force out those who are the most qualified to find employment elsewhere. You will end up with an organization that suffers from self-inflicted wounds while the competition is gaining on you. If you lose your best people, you will have stripped yourself of your most valuable assets. Getting rid of people because they have obsolete skills is a reflection of the organization's past neglect of innovation and learning. Liquidating a company is easy and profitable, but somebody ought to also start thinking about how to rebuild it for growth. That is the challenge of leading today's losers to become tomorrow's winners.
How do you perform business process redesign under adverse conditions? How do you motivate your people to give you their best so that they may prosper again, even though some positions of privilege will change or cease to exist?

Business Process Redesign

To be successful, business process redesign depends on the commitment and imaginative cooperation of your employees. It must demonstrate that only by working together can they improve their long-term opportunities. Business process redesign relies primarily on the accumulated know-how of existing employees to find conditions that will support the creation of new jobs, even if that means that in the short run many of the existing jobs will have to cease to exist. In business process redesign, the people directly affected by the potential changes study the as-is conditions and propose to-be alternatives to achieve the desired improvements. Everybody with an understanding of the business is asked to participate; external help is hired only for expertise that does not already exist internally. Business process redesign calls for applying rigorous methods to the charting, pricing and process flow analysis of as-is conditions. Process redesign is never finished during the lifetime of a company: after implementing any major improvement, new payoff opportunities will always emerge from what has just been learned. The primary objective of business process improvement is to create a learning environment in which renewal and gain are an ongoing process instead of a one-time shock therapy. Adopting formal business process flow methods and a consistent technique for keeping track of local improvements makes it possible to later combine processes that were initially isolated for short-term delivery of local productivity gains. Business process redesign balances the involvement of information managers, operating managers and subject matter experts. Cooperative teams are assembled under non-threatening circumstances in which much time is spent, and perhaps wasted, in discussing different points of view. Unanimity is not what business process redesign is all about: differences are recorded, debated and passed on to higher levels of management for resolution. Business process redesign requires that you perform a business case analysis, which calculates not only the payoffs but also reveals the risks of each proposed alternative. This is not popular, because the current methods for performing business case analysis

of computerization projects call for calculations that do not have the integrity to make them acceptable to financial executives.

The overwhelming advantage of business process redesign, as compared with re-engineering, lies in its approach to managing organizational change. The relatively slow and deliberate process redesign effort is more in tune with the way people normally cope with major changes. Every day should be process redesign day, because that is how organizational learning takes place and that is how you gain the commitment of your people. At each incremental stage of process redesign, your people can keep pace with their leaders, because they can learn to share the same understanding of what is happening to the business. They are allowed the opportunity to think about what they are doing. They are not intimidated by precipitous layoffs that inhibit the sharing of ideas on how to use their own time and talent more effectively.

Character of Re-engineering

Re-engineering, as currently practiced, primarily by drastic dictate and with reliance on outsiders to lead it, assumes that your own people cannot be trusted to fix whatever ails your organization. Re-engineering accepts primarily what the experts, preferably newcomers to the scene, have to offer. In re-engineering the consultants will recommend to you what the to-be conditions ought to look like, without spending much time understanding the reasons for the as-is conditions. The credo of re-engineering is to forget what you know about your business and start with a clean slate to reinvent what you would like to be. What applies to individuals or nations certainly applies to corporations: you can never totally disregard your people, your relationships with customers, your assets, your accumulated knowledge and your reputation. Versions of the phrase "...throw history into the dustbin and start anew" have been attributed to every failed radical movement of the last two hundred years. Re-engineering proponents do not worry much about formal methods. They practice techniques of emergency surgery, most often amputation. If amputation is not feasible, they resort to tourniquet-like remedies to stop the flow of red ink. Radical re-engineering may apply under emergency conditions of imminent danger, as long as someone considers that this will most likely leave a patient that may never recover to full health again because of the demoralization of the workforce. It is
much swifter than the more deliberate approach of those who practice business process redesign. No wonder the simple and quick methods are preferred by the impatient and by those who may not have to cope with the unforeseen long-term consequences for the quality and dedication of the workforce.

In re-engineering, participation by most of the existing management is superfluous, because you are out to junk what is in place anyway. Under such conditions, for instance, bringing in an executive who was good at managing a cookie company to run a computer company makes perfect sense. In re-engineering, debates are not to be encouraged, since the goal is to produce a masterful stroke of insight that will suddenly turn everything around. Autocratic managers thrive on the opportunity to preside over a re-engineering effort. Re-engineering also offers a new lease on the careers of chief information officers with a propensity to forge ahead with technological means as a way of introducing revolutionary changes. A number of spokesmen at recent meetings of computer executives offered re-engineering as the antidote to the slur that CIO stands for "Career Is Over".

Re-engineering conveys a sense of urgency that does not dwell much on financial analysis, and certainly not on formal risk assessment. Managers who tend to rely on bold strokes rebel against analytic disciplines. When it comes to business case analysis, we have the traditional confrontation of the tortoise and the hare: the plodders vs. the hip-shooters. Sometimes the hip-shooters win, but the odds are against them in an endurance contest. Re-engineering does not offer the time or the opportunities for the much-needed adaptation of an organization to changing conditions. It imposes changes swiftly by fiat, usually from a new collection of people imported to make long-overdue changes. Even if the new approach is a superior one for jarring an organization out of its ingrown bad habits, it will be hard to implement because those who are supposed to act differently will now lack the positive attitude needed to do their creative best in support of the transition from the old to the new. Re-engineering has the advantage of being a choice of last resort when there is no time left to accomplish business process redesign. In this sense, it is akin to saying that sometimes dictatorship is more effective than community participation. Without probing why the leadership of an enterprise ever allowed such conditions to occur, I am left with a nagging doubt whether the drastic cure does not ultimately end up causing worse damage than the disease.

Constitutional democracies, despite occasional reversals in fortune, have never willingly accepted dictatorship as the way out of their troubles. On the other hand, the record of attempts to deal with crises in governance by drastic solutions is dismal. Though occasionally you may find remarkable short-term improvements, extreme solutions that destroyed past accumulations of human capital have always resulted in an era of violence being viewed as a time of retrogression.

Dr. Michael Hammer, a Massachusetts Institute of Technology computer science professor, and James Champy, Chairman of CSC Index, gave new life and vigor to the concept of re-engineering in the early 1990s with the release of their book Reengineering the Corporation: A Manifesto for Business Revolution. Now, over a decade old, Business Process Reengineering (BPR) is no longer the latest and hottest management trend. However, as BPR enters a new century, it has begun to undergo a resurgence in popularity. Companies have seen real benefit in evaluating processes before they implement expensive technology solutions. A process can span several departmental units, including accounting, sales, production, and fulfillment. By deconstructing processes and grading the activities in terms of whether or not they add value, organizations can pinpoint areas that are wasteful and inefficient. As organizations continue to implement Enterprise Resource Planning (ERP) systems, they realize that many systems were built around departmental needs, rather than being geared to a specific process.

The Essence of BPR

Hammer and Champy noted that in the business environment nothing is constant or predictable: not market growth, customer demand, product life spans, technological change, nor the nature of competition. As a result, customers, competition, and change have taken on entirely new dynamics in the business world. Customers now have choice, and they expect products to be customized to their unique needs. Competition, no longer decided by best price alone, is driven by other factors such as quality, selection, service, and responsiveness. In addition, rapid change has diminished product and service life cycles, making the need for inventiveness and adaptability even greater. This mercurial business environment requires a switch from a task orientation to a process orientation, and it requires re-inventing how work is to be accomplished. As such, re-engineering focuses on fundamental business processes as opposed to departments or organizational units.

Re-engineering Defined

"Re-engineering is the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical, contemporary measures of performance, such as cost, quality, service, and speed." (Hammer and Champy, 1993)

The National Academy of Public Administration recast this definition for government: "Government business process reengineering is a radical improvement approach that critically examines, rethinks, and redesigns mission product and service processes within a political environment. It achieves dramatic mission performance gains from multiple customer and stakeholder perspectives. It is a key part of a process management approach for optimal performance that continually evaluates, adjusts or removes processes." (NAPA, 1995)

Some have argued that government activities are often policy generators or oversight mechanisms that appear to add no value, yet cannot be eliminated. They question how re-engineering could have applicability in the public sector. Government only differs from the commercial sector in terms of the kinds of controls and customers it has. It still uses a set of processes aimed at providing services and products to its customers.

The Principles of Re-engineering

In Hammer and Champy's original Manifesto, re-engineering was by definition radical; it could not simply be an enhancement or modification of what went before. It examined work in terms of outcomes, not tasks or unit functions, and it expected dramatic, rather than marginal, improvements. The authors suggested seven principles of re-engineering that would streamline work processes, achieve savings, and improve product quality and time management.

Seven principles of re-engineering
1. Organize around outcomes, not tasks.
2. Identify all processes in an organization and prioritize them in order of redesign urgency.
3. Integrate information processing work into the real work that produces information.
4. Treat geographically dispersed resources as though they are centralized.

5. Link parallel activities in the workflow instead of just integrating their results.
6. Put the decision point where the work is performed, and build control into the process.
7. Capture information once and at the source.

The Benefits of Re-engineering

The hard task of re-examining the mission and how it is being delivered on a day-to-day basis will have fundamental impacts on an organization, especially in terms of responsiveness and accountability to customers and stakeholders. Among the many rewards, re-engineering:

- Empowers employees
- Eliminates waste, unnecessary management overhead, and obsolete or inefficient processes
- Produces often significant reductions in cost and cycle times
- Enables revolutionary improvements in many business processes as measured by quality and customer service
- Helps top organizations stay on top and low-achievers to become effective competitors.

Re-engineering: A Functional Management Approach

Implementation of a re-engineering initiative usually has considerable impact across organizational boundaries, as well as impacts on suppliers and customers. Re-engineering can generate a significant change in:

- Product and service requirements
- Controls or constraints imposed on a business process
- The technological platform that supports a business process.

For this reason, it requires a sensitivity to employee attitudes as well as to the impact of change on their lives.

What is a business process?

A business process is a structured, measured set of activities designed to produce a specified output for a particular customer or market.

Selecting a process

Wise organizations will focus on those core processes that are critical to their performance, rather than marginal processes that have little impact. There are several criteria re-engineering practitioners can use for determining the importance of a process:

- Is the process broken?
- Is it feasible that re-engineering of this process will succeed?
- Does it have a high impact on the agency's strategic direction?
- Does it significantly impact customer satisfaction?
- Is it antiquated?
- Does it fall far below Best-in-Class?

DoD has suggested that the following six tasks be part of any functional management approach to re-engineering projects:
1. Define the framework. Define functional objectives; determine the management strategy to be followed in streamlining and standardizing processes; and establish the process, data, and information systems baselines from which to begin process improvement.
2. Analyze. Analyze business processes to eliminate non-value added processes; simplify and streamline processes of little value; and identify more effective and efficient alternatives to the process, data, and system baselines.
3. Evaluate. Conduct a preliminary, functional, economic analysis to evaluate alternatives to baseline processes and select a preferred course of action.
4. Plan. Develop detailed statements of requirements, baseline impacts, costs, benefits, and schedules to implement the planned course of action.
5. Approve. Finalize the functional economic analysis using information from the planning data, and present to senior management for approval to proceed with the proposed process improvements and any associated data or system changes.
6. Execute. Execute the approved process and data changes, and provide functional management oversight of any associated information system changes.
Ensuring Re-engineering Success

Much research has been conducted to determine why many re-engineering projects fail or miss the mark. DoD has indicated that organizations successful in re-engineering planning have a number of common traits:

- They are strongly supported by the CEO
- They break re-engineering into small or medium-sized elements
- Most have a willingness to tolerate change and to withstand the uncertainties that change can generate
- Many have systems, processes, or strategies that are worth hiding from competitors.

Six critical success factors from government experience

In a publication for the National Academy of Public Administration, author Dr. Sharon L. Caudle identified six critical success factors that ensure government re-engineering initiatives achieve the desired results:

1. Understand re-engineering.
   o Understand business process fundamentals.
   o Know what re-engineering is.
   o Differentiate and integrate process improvement approaches.
2. Build a business and political case.
   o Have necessary and sufficient business (mission delivery) reasons for re-engineering.
   o Have the organizational commitment and capacity to initiate and sustain re-engineering.
   o Secure and sustain political support for re-engineering projects.
3. Adopt a process management approach.
   o Understand the organizational mandate and set mission-strategic directions and goals cascading to process-specific goals and decision-making across and down the organization.
   o Define, model, and prioritize business processes important for mission performance.
   o Practice hands-on senior management ownership of process improvement through personal involvement, responsibility, and decision-making.
   o Adjust organizational structure to better support process management initiatives.
   o Create an assessment program to evaluate process management.
4. Measure and track performance continuously.
   o Create organizational understanding of the value of measurement and how it will be used.
   o Tie performance management to customer and stakeholder current and future expectations.
5. Practice change management and provide central support.
   o Develop human resource management strategies to support re-engineering.
   o Build information resources management strategies and a technology framework to support process change.
   o Create a central support group to assist and integrate re-engineering efforts and other improvement efforts across the organization.
   o Create an overarching and project-specific internal and external communication and education program.
6. Manage re-engineering projects for results.
   o Have clear criteria to select what should be re-engineered.
   o Place the project at the right level with a defined re-engineering team purpose and goals.
   o Use a well-trained, diversified, expert team to ensure optimum project performance.
   o Follow a structured, disciplined approach for re-engineering.

Applying re-engineering principles to health care

Business process re-engineering has its roots in commercial manufacturing. Development work in the 1970s and 1980s in a range of commercial settings showed significant benefits from a systematic approach to the analysis and restructuring of manufacturing processes. It was widely adopted as a means of improving manufacturing
output to produce significant improvements in quality, capacity and cost. The approach attracted the interest of managers in the NHS, and two projects (at Leicester Royal Infirmary and at King's Healthcare in London) were funded by the Department of Health to test its application in a health care setting. The work started in Leicester in 1994. It involved a significant programme of 140 separate projects. A Framework for Defining Success was established to ensure that the impact of the work in its widest sense could be measured. Over the last five years significant gains have been made in the quality of services offered to patients, as well as in teaching and research. Indeed, few departments in the hospital were untouched by the initiative. The key lesson from the work at Leicester has been that change is typically created bottom-up, in contrast to the top-down approach championed by the academic supporters of re-engineering. Clinical and management leaders have to create the right conditions for improvement. Redesigning health care differs in significant ways from redesign in industrial settings: it involves several distinct steps:
- Identifying specific patient groups as targets for service process redesign.
- Ensuring that those involved in service provision are involved in service redesign.
- Being clear about the tools and techniques available.
- Analysing the current process to identify strengths and weaknesses: what adds value and what doesn't?
- Creating a model for the redesigned service.
- Establishing performance measures.
- Testing the new process, being honest about whether or not it works, and then doing what works.

Leicester Royal Infirmary has created a tool-kit that describes the tools and techniques used for patient process redesign. The tool-kit provides the basis for a series of Re-engineering Masterclasses that have attracted clinical and managerial interest within the UK and internationally, with visitors from health services in New Zealand, Sweden and Denmark taking part. The Leicester Royal Infirmary's dissemination work has been recognised with the granting of specialist Learning Centre status, thus confirming their role as an integral part of the growing NHS Learning
Using business process re-engineering principles in educational reform?

Introduction

The objective of this paper is to apply business re-engineering principles to reform primary and secondary education. In particular, using information technology (especially telecommunications and networking), educational reforms can be proactively accomplished despite the dynamic environment and demands. Efforts to reform education through information technology (Corcoran, 1993) have entered the mainstream of society and of the corporation. Reforming education through information technology bears a remarkable resemblance to efforts to reform business organizations through information technology (Business Week, April 1983). It is therefore appropriate to re-examine educational reform in terms of the business language and concepts used in business process re-engineering. Re-engineering strives to break away from the old ways and rules about how we organize and conduct business. It involves recognizing and rejecting some of the old ways and then finding imaginative new ways to accomplish work (Hammer, pp. 104-105). In this way, we might learn lessons from the corporate experience to apply information technology to reform educational structure. This perspective may also enable leaders of the corporate community to better understand the context of educational reforms and their necessary role in promoting and assisting those reforms.

Can we apply the language (and experience) of business process re-engineering to educational reform? In his landmark paper on re-engineering, Hammer (p. 105) asserts:
- In a time of rapidly changing technologies and ever-shorter product life cycles, product development often proceeds at a glacial pace.
- In an age of the customer, order fulfillment has high error rates and customer inquiries go unanswered for weeks.
- In a period when asset utilization is critical, inventory levels exceed many months of demand.

Small changes in wording to adapt these assertions to education reveal a remarkable analogy:
- In a time of rapidly changing history and ever-shorter political and economic cycles, curriculum development often proceeds at a glacial pace.

- In an age of keen competition and higher standards, student achievement has high failure rates and student needs go unanswered.
- In a period of limited resources, educational costs continue to climb but are often a diminishing proportion of infrastructure investment.

These analogies should therefore provide us with a new perspective for re-engineering education using advanced information technology.

Re-engineering primary and secondary education

The world has changed, but education hasn't necessarily adapted to these changes. At a recent Principals' Conference in Singapore, John Yip, Director of Education, was quoted (Leong): "It is crucial that we have a good education system which is relevant to the times. Change is inevitable.... Help students to develop attitudes and skills with which they can independently seek knowledge, process information and apply it to tackle issues." We still have industrial-age schools that are unable to meet the needs of our emerging information-age society. Davis (p. 24) claims that all organizations based on the industrial model are created for businesses that either no longer exist or are in the process of going out of existence. Again quoting Hammer (p. 107), we can draw another analogy between the business climate and the educational climate: "Quality, innovation, and service are now more important than cost, growth, and control. It should come as no surprise that our business processes and structures are outmoded and obsolete: our work structures and processes have not kept pace with the changes in technology, demographics, and business objectives." Again, small changes in wording to adapt these statements to education reveal a remarkable analogy: Quality, innovation, and creativity are now more important than cost and standardized test scores. It should come as no surprise that our school processes and structures are outmoded and obsolete: our teaching structures and processes have not kept pace with the changes in technology, demographics, and societal conditions.

Educational problems

Corcoran (p. 66) remarks that networks are changing the way teachers teach and students learn. Can we apply networking to re-engineer primary and secondary education? What are the problems of education today in our dynamic contemporary world? Recognizing the traditional isolation of teachers (and students), Newman (p. 49) argues that we must make a choice between systems that (merely) deliver traditional instruction from a central repository and systems that enable teachers and students to access and gather information from distributed resources and communities. The experience of teacher Sandra McCourtney (Corcoran, p. 67) demonstrates that a network can bring children the excitement of the outside world. Even independent research by students is possible, as recognized by Bob Hughes (Corcoran, p. 66), Boeing's corporate director of education relations, who sees computer networks as key to turning out students who adapt to change and who solve problems by seeking out and applying new ideas. In one of the recent flood of articles on networking in the mainstream press, Markoff laments inequities such as different levels of access between the information haves and have-nots, and prejudices due to professional rank, gender, race, religion, national origin, or physical ability. Hunter (p. 44) offers us hope that assumptions of the present educational system, where some learners and populations are underserved because they live in particular places, and where learning opportunities are necessarily tied to local resources, are open to rethinking in a highly networked environment. As a progressive force for change, equity, and the restructuring of primary and secondary education, information technology has been offered as a mechanism for fostering change (Gillman, 1989). More specifically, many proponents have identified networking as a mechanism for change. In particular, efforts to promote the US National Research and Education Network (NREN) for use in primary and secondary education have been most representative of this point of view. Perhaps the most ambitious effort is the National School Network Testbed (Bernstein, et al.), a national research and development resource in which schools, school districts, community organizations, state education agencies, technology developers, and industry partners are experimenting with applications that bring significant new educational benefits to teachers and students. The Consortium for School Networking (CoSN) has been most active in promoting this movement through its

on-line Internet discussion (cosndisc@bitnic.educom.edu) and other activities, e.g. gopher cosn.org. For membership information, send mail to info@cosn.org. Indeed, the need for educational reform is generally recognized. Applying the concept of internetworking to educational innovation, it becomes possible for every individual and group involved in educational change and research to be a direct contributor to the collective process of innovation. Examples of opportunities for direct contribution include (Hunter, p. 44) improved communications among school district personnel, and sharing expertise among teachers of different disciplines and geographical locations.

Critical shifts in the application of information technology

The literature on organizational change and the future abounds with dramatic predictions of the need for organizations to adapt to profound change. Sproull and Kiesler (p. 116) claim that the networked organization differs from the conventional workplace with respect to both time and space, so that managers can use networks to foster new kinds of task structures and reporting relationships. Davis claims that while the new economy is in the early decades of its unfolding, businesses continue to use organization models that were more appropriate to previous times than to current needs (p. 5). In terms of information technology, Tapscott and Caston (pp. 14-17) identify organizational changes that are enabled by information technology and network access in particular. Networking enables the informal web of relations people develop with each other inside the organization (Davis, p. 86) so as to get things done. Likewise, examples of how educational organizations might adopt these network innovations are not difficult to imagine. Hunter (1993) provides many examples of how network access changes the nature of teaching and learning. Hunter (p. 42) claims that new models of learning and teaching are made possible by the assumption that learners and teachers, as individuals and groups, can interact with geographically and institutionally distributed human and information resources.

Integrated organization

With the evolution from system islands to integrated systems, network access enables inter-disciplinary instruction. Network access flattens the instructional development process by giving teachers access to previously inaccessible information and teaching resources. Hunter (p. 42) speculates that the application of the concepts and technology of internetworking may make it possible for separate reform efforts of diverse
groups and individuals to contribute to the building of a new educational system providing more accessible, higher-quality learning opportunities for everyone.

Extended enterprise

With the evolution from internal to inter-enterprise computing, more people, e.g. parents and business people, will become active in schools by dropping in electronically for a short time every day. Furthermore, students will leave the confines of the classroom and make electronic visits to museums, libraries, businesses, and governments around the world.

Does the business process re-engineering model fit educational problems?

So, how ought we to apply the re-engineering model to reform education? Hammer (pp. 108-111) offers principles of re-engineering. Could these principles be applied to education? Let's see.

Organize around outcomes, not tasks

"Have one person perform all the steps in a process and design that person's job around an objective or outcome instead of a single task" (Hammer, p. 108). In the business sense, the idea (Hammer, p. 106) is to sweep away existing job definitions and departmental boundaries and to create a new position through empowerment. In the world of educational telecommunications, the experience of teacher Ed Barry (Corcoran, p. 68) reveals that the role of the teacher changes to manager, not a dispenser of information, because the teacher is empowered.

Have those who use the output of the process perform the process

"Opportunities exist to re-engineer processes so that the individuals who need the result of a process can do it themselves" (Hammer, p. 109). In the business sense, when the people closest to the process perform it, there is little need for the overhead associated with managing it (Hammer, p. 109). By the same token, Kay (p. 146) has discovered that children learn in the same way as adults, in that they learn best when they can ask questions, seek answers in many places, consider different perspectives, exchange views with others and add their own findings to existing understandings.

Subsume information-processing work into the real work that produces the information

"Move work from one person or department to another" (Hammer, p. 110). Empower students to search for the answers in heretofore inaccessible places.

As a means to reduce the isolation of classroom teachers, Hunter (1993, p. 43) claims that a thread woven throughout most networked learning innovations is the idea that schooling can be more closely linked to work in the real world.

Treat geographically dispersed resources as though they were centralized

"Use databases, telecommunication networks, and standardized processing systems to get the benefits of scale and coordination while maintaining the benefits of flexibility and service" (Hammer, p. 110). Use databases, telecommunication networks, and computer-supported courseware to get the benefits of sharing curriculum development while maintaining the benefits of individualized learning and customization.

Link parallel activities instead of integrating their results

"Forge links between parallel functions and coordinate them while their activities are in process rather than after they are completed" (Hammer, p. 110). Often, separate units perform different activities that must eventually come together. For example, curriculum developers often prepare instructional materials independently for teachers to use. Instead, enabling teachers and students to communicate easily with curriculum developers during the development and testing of materials ought to lead to better materials produced in a shorter time.

Put the decision point where the work is performed and build control into the process

"People who do the work should make the decisions and the process itself can have built-in controls" (Hammer, p. 111). In general, we should empower teachers and students. For example, opportunities exist to re-engineer teaching so that teachers can tap the expertise of curriculum developers and subject matter experts. Then, teachers can become self-managing and self-controlling. In a project called Learning Through Collaborative Visualization (Hunter, p. 43), students and teachers are working directly with scientists at the University of Michigan, the Exploratorium (museum) in San Francisco, the National Center for Supercomputer Applications in Urbana-Champaign (Illinois, USA), and the Technical Education Research Center (Cambridge, MA, USA) on inquiries in atmospheric science, using two-way audio-video technology being developed by Bellcore and Ameritech.

Current research efforts

In the hope of making a case for the reforms described here, two research projects on educational telecommunications are experimenting with these principles.

Common Knowledge: Pittsburgh. Common Knowledge: Pittsburgh is a US National Science Foundation-funded project to test the impact of Internet access on the Pittsburgh (Pennsylvania) Public Schools.

Singapore pilot project. Singapore's Ministry of Education is pioneering Internet access in several junior colleges (grades 11-12) and a secondary school. These projects are conducting experiments to address (through telecommunications) the educational problems described earlier.

Collaborative efforts

Collaboration between these international partners is intended to enable a comparative evaluation of the impact of educational telecommunications on two very different educational systems. The United States is generally recognized for its stronger climate for innovation, with notable educational experiments such as the Apple Vivarium Program (Kay) and Cityspace (Markoff). On the other hand, Singapore and other Asian nations (Hirsch) are generally recognized for their stronger climate for teaching the fundamentals.

Does the model fit?

Re-engineering triggers changes of many kinds, not just of the business process itself. Job designs, organizational structures, management systems, indeed anything associated with the process, must be refashioned in an integrated way. In other words, re-engineering is a tremendous effort that mandates change in many areas of the organization (Hammer, p. 112). Surely, primary and secondary education deserve the same attention, and information technology has as much potential to reorganize education as it has to reorganize work. The business re-engineering model is remarkably apt for educational reform. Perhaps this novel (and somewhat provocative) approach may encourage fresh ideas in this difficult task.

SUMMARY

The importance and indispensability of control in the production process is emphasized in this unit. The evolution of Statistical Process Control is described. Various control charts along with their applications are dealt with elaborately. Process capability and its significance for bringing out a quality product are discussed in detail. The concept of Six Sigma and the methodology for adopting it are presented for the use of the readers. Bill Smith, the father of Six Sigma, evolved the concept, and its application in different industries and the resulting outcomes are placed before the readers. The reliability concepts and their importance are explained with applications in various industries. The Product Life Characteristics Curve, with its phases and quality requirements, is deliberated. Total Productive Maintenance has a lot of overlap with TQM. The eight pillars of TPM, viz. the 5S components, Jishu Hozen (Autonomous Maintenance), Kaizen, Planned Maintenance, Quality Maintenance, Training, Office TPM, and Safety, Health and Environment, are elaborated for better understanding. Life-Cycle Costing, otherwise popularly known as terotechnology, is deliberated with a focus on how to realize it. Business Process Reengineering, its fundamentals and methodology are handled in this unit. Deliberations on whether BPR is a fad or a rebirth of scientific management are also conducted. The re-engineering process is elaborately dealt with in conjunction with BPR. Examples from various sectors are also highlighted.

REVIEW QUESTIONS

1. Explain why statistical process control is of utmost significance to the quality-ensuring process.
2. What are the types of control charts? Explain the construction of any one of them.
3. Highlight the meaning, significance and measurement of process capability.
4. What are DMAIC and DMADV? Explain their steps.

5. Explain the contributions of Bill Smith and how they are used in cost saving.
6. Describe the process of process capability measurement.
7. Explain the role of TPM in the product life cycle.
8. Detail the steps in the introduction of TPM in an organization.
9. What is Life Cycle Costing? Demonstrate it using an example.
10. "BPR is a rebirth of scientific management." Critically examine this statement.
11. Discuss the criticisms and benefits of re-engineering.

UNIT IV TOOLS AND TECHNIQUES FOR QUALITY MANAGEMENT


INTRODUCTION
To achieve quality management, one needs to be in possession of various tools and techniques. To establish a house of quality, we need an umbrella-like coverage in the form of quality function deployment. Loss minimization, failure reduction and process reliability improvement can be achieved through the application of various statistical and management tools, old and new. This exercise has to be done by identifying a benchmark and trying hard to surpass it. This unit deals with Quality Function Deployment (QFD), Benefits, Voice of Customer, Information Organization, House of Quality (HOQ), Building a HOQ, QFD Process, Failure Mode Effect Analysis (FMEA), Requirements of reliability, Failure Rate, FMEA stages, design, process and documentation, Taguchi Techniques Introduction, Loss function, Parameter and Tolerance Design, Signal to Noise Ratio, Seven Old (Statistical) Tools, Seven New Management Tools, Benchmarking and Poka Yoke.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:
- Build a house of quality
- Appreciate the importance of QFD and the process of deployment
- Use different analytical tools to detect and prevent failure and losses
- Compare the old and new management tools pertaining to Quality Management.

4.1 QUALITY FUNCTION DEPLOYMENT (QFD), BENEFITS, VOICE OF CUSTOMER

The House of Quality is a graphic tool for defining the relationship between customer desires and the firm/product capabilities. It is part of quality function deployment (QFD) and it utilizes a planning matrix to relate customer wants to how a firm (that produces the products) is going to meet those wants. It looks like a house, with
a correlation matrix as its roof, customer wants vs. product features as the main part, competitor evaluation as the porch, and so on.

The House of Quality is the first matrix in a four-phase QFD (Quality Function Deployment) process. It is called the House of Quality because of the correlation matrix that is roof-shaped and sits on top of the main body of the matrix. The correlation matrix evaluates how the defined product specifications optimize or sub-optimize each other. The House of Quality is commonly associated with QFD, and in the minds of many who learned the topic from obsolete examples and books it is the only thing that needs to be done. Glenn Mazur, executive director of the QFD Institute, explains it this way: think of the House of Quality (HOQ) as a highway interchange between the Voice of the Customer (VOC) and the Voice of the Engineer (VOE).
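To make the planning-matrix idea concrete, the sketch below computes technical importance ratings the way a basic House of Quality worksheet is commonly filled in: each customer want carries an importance weight, each cell of the relationship matrix holds a strength score (often 9/3/1/0), and the rating of a technical characteristic is the weighted sum of its column. The customer wants, characteristics and scores here are invented for illustration and are not taken from any particular QFD study.

```python
# Minimal House of Quality worksheet: weighted relationship matrix.
# All wants, characteristics and scores below are illustrative assumptions.

customer_wants = {          # want -> importance weight (1-5)
    "easy to open": 5,
    "keeps contents fresh": 4,
    "low cost": 3,
}

technical_characteristics = ["seal strength", "lid torque", "material cost"]

# Relationship strengths: 9 = strong, 3 = moderate, 1 = weak, 0 = none.
relationships = {
    "easy to open":         [3, 9, 0],
    "keeps contents fresh": [9, 3, 0],
    "low cost":             [0, 1, 9],
}

def technical_importance(wants, matrix, characteristics):
    """Weighted column sums: sum(importance * strength) per characteristic."""
    ratings = [0] * len(characteristics)
    for want, weight in wants.items():
        for col, strength in enumerate(matrix[want]):
            ratings[col] += weight * strength
    return dict(zip(characteristics, ratings))

if __name__ == "__main__":
    for name, score in technical_importance(
            customer_wants, relationships, technical_characteristics).items():
        print(f"{name}: {score}")
    # seal strength: 51, lid torque: 60, material cost: 27 -> in this toy
    # example, lid torque would receive the most engineering attention.
```

The same weighted-sum logic is what guides which "hows" get priority when the full matrix, competitor benchmarks and the correlation roof are added around it.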

4.2 INFORMATION ORGANIZATION

Data and information

Many people use the terms data and information as synonyms, but these two terms actually convey very distinct concepts
Data is defined as a body of facts or figures, which have been gathered systematically for one or more specific purposes
  o Data can exist in the forms of
    - Linguistic expressions (e.g. name, age, address, date, ownership)
    - Symbolic expressions (e.g. traffic signs)
    - Mathematical expressions (e.g. E = mc2)
    - Signals (e.g. electromagnetic waves)
Information is defined as data which have been processed into a form that is meaningful to a recipient and is of perceived value in current or prospective decision-making
  o Although data are ingredients of information, not all data make useful information
    - Data not properly collected and organized are a burden rather than an asset to an information user
    - Data that make useful information for one person may not be useful to another person
  o Information is only useful to its recipients when it is
    - Relevant (to its intended purposes and with the appropriate level of required detail)
    - Reliable, accurate and verifiable (by independent means)
    - Up-to-date and timely (depending on purposes)
    - Complete (in terms of attribute, spatial and temporal coverage)
    - Intelligible (i.e. comprehensible by its recipients)
    - Consistent (with other sources of information)
    - Convenient/easy to handle and adequately protected
The function of an information system is to change data into information, using the following processes:
  o Conversion: transforming data from one format to another, from one unit of measurement to another, and/or from one feature classification to another
  o Organization: organizing or re-organizing data according to database management rules and procedures so that they can be accessed cost-effectively
  o Structuring: formatting or re-formatting data so that they can be acceptable to a particular software application or information system
  o Modeling: including statistical analysis and visualization of data that will improve users' knowledge base and intelligence in decision making
The concepts of organization and structure are crucial to the functioning of information systems: without organization and structure it is simply impossible to turn data into information
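A small sketch of those four processes applied in sequence may help make them concrete; the station names, units and summary rule here are invented for illustration and do not come from the text.

```python
# Illustrative data-to-information pipeline: conversion, organization,
# structuring and modeling applied to a handful of made-up survey records.

raw_records = [
    {"station": "A", "rainfall_in": 2.4},   # rainfall recorded in inches
    {"station": "B", "rainfall_in": 0.9},
    {"station": "C", "rainfall_in": 3.1},
]

# Conversion: change the unit of measurement (inches -> millimetres).
converted = [{"station": r["station"], "rainfall_mm": r["rainfall_in"] * 25.4}
             for r in raw_records]

# Organization: order the records so they can be accessed predictably.
organized = sorted(converted, key=lambda r: r["rainfall_mm"], reverse=True)

# Structuring: re-format the data for a particular application (CSV lines).
structured = ["station,rainfall_mm"] + [
    f'{r["station"]},{r["rainfall_mm"]:.1f}' for r in organized]

# Modeling: a simple statistical summary that supports a decision.
mean_mm = sum(r["rainfall_mm"] for r in converted) / len(converted)
wettest = organized[0]["station"]

print("\n".join(structured))
print(f"Mean rainfall: {mean_mm:.1f} mm; wettest station: {wettest}")
```

Only after the last step does the raw data become information in the sense defined above: a result that is meaningful to a recipient and usable in a decision.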

The Information Domain


An information system is designed to process data, i.e. to accept input (data), manipulate it in some way, and produce output (information)
It is also designed to process events: an event represents a problem or system control which triggers data processing procedures in an information system
The information domain of an information system therefore includes both data (i.e. characters, numbers, images and sound) and events (i.e. problem and control)
There are three different components of the information domain
  o Information organization (also referred to as information structure): the internal organization of various data and event items
    - The design and implementation of information organization is referred to as data structure
  o Information contents and relationships: the attributes relating to the data and the events, and their relationships with one another
    - The process of identifying information contents and relationships is known as data modeling in information system design
  o Information flow: the ways by which data and events change as they are processed by the information system
    - The process of identifying information flow is known as process modeling in information system design
The above views of the information domain provide the conceptual framework that links database management and application development in information systems
  o They signify that information organization and data structure are not only important for the management of data, but also for the development of software applications that utilize these data

Information Organization

Information organization can be understood from four perspectives:
  o A data perspective
  o A relationship perspective
  o An operating system (OS) perspective
  o An application architecture perspective

The data perspective of information organization

The information organization of geographic data must be considered in terms of both descriptive elements and graphical elements because
  o These two types of data elements have distinctly different characteristics
  o They have different storage requirements
  o They have different processing requirements

Information organization of descriptive data

For descriptive data, the most basic element of information organization is called a data item
  o A data item represents an occurrence or instance of a particular characteristic pertaining to an entity (which can be a person, thing, event or phenomenon)
    - It is the smallest unit of stored data in a database, commonly referred to as an attribute
    - In database terminology, an attribute is also referred to as a stored field
    - The value of an attribute can be in the form of a number (integer or floating-point), a character string, a date or a logical expression (e.g. T for true or present; F for false or absent)
    - Some attributes have a definite set of values known as permissible values or domain of values (e.g. age of people from 1 to 150; the categories in a land use classification scheme; and the academic departments in a university)
A group of related data items form a record
  o By related data items, it means that the items are occurrences of different characteristics pertaining to the same person, thing, event or phenomenon (e.g. in a forest resource inventory, a record may contain related data items such as stand identification number, dominant tree species, average height and average breast height diameter)
  o A record may contain a combination of data items having different types of values (e.g. in the above example, a record has two character strings representing the stand identification number and dominant tree species; an integer representing the average tree height rounded to the nearest meter; and a floating-point number representing the average breast height diameter in meters)
  o In database terminology, a record is always formally referred to as a stored record
  o In relational database management systems, records are called tuples
A set of related records constitutes a data file
  o By related records, it means that the records represent different occurrences of the same type or class of people, things, events and phenomena
    - A data file made up of a single record type with single-valued data items is called a flat file
    - A data file made up of a single record type with nested repeating groups of items forming a multi-level organization is called a hierarchical file
  o A data file is individually identified by a filename
  o A data file may contain records having different types of data values or having a single type of data value
    - A data file containing records made up of character strings is called a text file or ASCII file
    - A data file containing records made up of numerical values in binary format is called a binary file
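As a concrete illustration of a data item, a record and a flat file, the sketch below encodes the forest resource inventory example mentioned above as a small Python structure; the particular field values are invented.

```python
# A record groups related data items (attributes) about one entity; a flat
# file is a set of such records of a single record type. Values are made up.
from dataclasses import dataclass

@dataclass
class ForestStandRecord:
    stand_id: str            # character string attribute
    dominant_species: str    # character string attribute
    avg_height_m: int        # integer attribute (rounded to nearest metre)
    avg_dbh_m: float         # floating-point attribute (breast height diameter)

# A "flat file": one record type, single-valued data items.
stand_file = [
    ForestStandRecord("ST-001", "teak", 18, 0.42),
    ForestStandRecord("ST-002", "sal", 22, 0.55),
    ForestStandRecord("ST-003", "teak", 15, 0.37),
]

# A simple query over the file: all teak stands taller than 16 m.
tall_teak = [r.stand_id for r in stand_file
             if r.dominant_species == "teak" and r.avg_height_m > 16]
print(tall_teak)   # ['ST-001']
```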

In data processing literature, collections of data items or records are sometimes referred to by terms other than data file, according to their characteristics and functions

An array is a collection of data items of the same size and type (although they may have different values)

  o A one-dimensional array is called a vector
  o A two-dimensional array is called a matrix
A table is a data file with data items arranged in rows and columns
  o Data files in relational databases are organized as tables
  o Such tables are also called relations in relational database terminology

A list is a finite, ordered sequence of data items (known as elements)

  o By ordered, it means that each element has a position in the list
    - An ordered list has elements positioned in ascending order of values, while an unordered list has no permanent relation between element values and position
  o Each element has a data type
    - In the simple list implementation, all elements must have the same data type, but there is no conceptual objection to lists whose elements have different data types

A tree is a data file in which each data item is attached to one or more data items directly beneath it

  o The connections between data items are called branches
  o Trees are often called inverted trees because they are normally drawn with the root at the top
  o The data items at the very bottom of an inverted tree are called leaves; other data items are called nodes
  o A binary tree is a special type of inverted tree in which each element has only two branches below it

A heap is a special type of binary tree in which the value of each node is greater than the values of its leaves
  o Heap files are created for sorting data in computer processing: the heap sort algorithm works by first organizing a list of data into a heap
A stack is a collection of cards in Apple Computer's Hypercard software system
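The heap sort idea mentioned above can be sketched in a few lines. This version uses Python's standard heapq module, which maintains a min-heap (each node is smaller than its children, the mirror image of the max-heap described in the text); the sample list is arbitrary.

```python
# Heap sort sketch: build a heap from the data, then repeatedly remove the
# root to obtain the elements in sorted order. heapq provides a min-heap,
# so successive pops come out in ascending order.
import heapq

def heap_sort(values):
    heap = list(values)
    heapq.heapify(heap)                  # organize the list into a heap, O(n)
    return [heapq.heappop(heap)          # pop the smallest remaining element
            for _ in range(len(heap))]   # n pops, O(log n) each

if __name__ == "__main__":
    print(heap_sort([42, 7, 19, 3, 25, 11]))   # [3, 7, 11, 19, 25, 42]
```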

The concept of database is the approach to information organization in computer-based data processing today
  o A database is defined as an automated, formally defined and centrally controlled collection of persistent data used and shared by different users in an enterprise
    - The above definition excludes informal, private and manual collections of data
    - Centrally controlled does not mean physically centralized: databases today tend to be physically distributed in different computer systems, at the same or different locations
    - A database is set up to serve the information needs of an organization
    - Data sharing is key to the concept of database
    - Data in a database are described as permanent in the sense that they are different from transient data such as input to and output from an information system
    - The data usually remain in the database for a considerable length of time, although the actual content of the data can change very frequently
  o The use of database does not mean the demise of data files
    - Data in a database are still organized and stored as data files
    - The use of database represents a change in the perception of data, the mode of data processing and the purposes of using the data (Table 1), rather than in the physical storage of the data
  o Databases can be organized in different ways, known as database models
    - The three conventional database models are: relational, network and hierarchical
    - Relational data are organized by records in relations, which resemble a table
    - Network data are organized by records which are classified into record types, with 1:n pointers linking associated records


Hierarchical data are organized by records in parent-child, one-to-many relationships. The emerging database model is object-oriented: data are uniquely identified as individual objects, which are classified into object types or classes according to the characteristics (attributes and operations) of the objects.

Information organization of graphical data

For graphical data, the most basic element of information organization is called a basic graphical element. There are three basic graphical elements:
- Point
- Line, also referred to as arc
- Polygon, also referred to as area

These basic graphical elements can be individually used to represent geographic features or entities (for example, a point for a well, a line for a road segment and a polygon for a lake). They can also be used to construct complex features; for example, the geographic entity Hawaii on a map is represented by a group of polygons of different sizes and shapes. The method of representing geographic features by the basic graphical elements of points, lines and polygons is said to be the vector method or vector data model, and the data are called vector data.

Related vector data are always organized by themes, which are also referred to as layers or coverages. Examples of themes: geodetic control, base map, soil, vegetation cover, land use, transportation, drainage and hydrology, political boundaries, land parcels and others.

For themes covering a very large geographic area, the data are always divided into tiles so that they can be managed more easily. A tile is the digital equivalent of an individual map in a map series and is uniquely identified by a file name. A collection of themes of vector data covering the same geographic area and serving the common needs of a multitude of users constitutes the spatial component of a geographical database.
The vector method of representing geographic features is based on the concept that these features can be identified as discrete entities or objects. This method is therefore based on the object view of the real world. The object view is the method of information organization in conventional mapping and cartography.

Graphical data captured by imaging devices in remote sensing and digital cartography (such as multi-spectral scanners, digital cameras and image scanners) are made up of a matrix of picture elements (pixels) of very fine resolution.
- Geographic features in such data can be visually recognized but not individually identified in the same way that geographic features are identified in the vector method.
- They are recognizable by differentiating their spectral or radiometric characteristics from the pixels of adjacent features. For example, a lake can be visually recognized on a satellite image because the pixels forming it are darker than those of the surrounding features; but the pixels forming the lake are not identified as a single discrete geographic entity, i.e. they remain individual pixels. Similarly, a highway can be visually recognized on the same satellite image because of its particular shape; but the pixels forming the highway do not constitute a single discrete geographic entity as in the case of vector data.

The method of representing geographic features by pixels is called the raster method or raster data model, and the data are described as raster data.
- The raster method is also called the tessellation method.
- A raster pixel is usually a square grid cell, but there are several variants such as triangles and hexagons.
- A raster pixel represents the generalized characteristics of an area of specific size on or near the surface of the Earth. The actual ground size depicted by a pixel is dependent on the resolution of the data, which may range from smaller than a square meter to several square kilometers.
- Raster data are organized by themes, which are also referred to as layers. For example, a raster geographic database may contain the following themes: bed rock geology, vegetation cover, land use, topography, hydrology, rainfall and temperature.

Raster data covering a large geographic area are organized by scenes (for remote sensing images) or by raster data files (for images obtained by map scanning).
- The raster method is based on the concept that geographic features are represented as surfaces, regions or segments.
- This method is therefore based on the field view of the real world.
- The field view is the method of information organization in image analysis systems in remote sensing and in geographic information systems for resource- and environmental-oriented applications.

In the past, the vector and raster methods represented two distinct approaches to information systems. They were based on different concepts of information organization and data structure, and they used different technologies for data input and output. Recent advances in computer technology allow these two types of data to be used in the same applications:
- Computers are now capable of converting data from the vector format to the raster format (rasterization) and vice versa (vectorization).
- Computers are now able to display vector and raster data simultaneously.
- The old debate on the usefulness of these two approaches to information organization no longer seems relevant.
- Vector and raster data are largely seen as complementary to, rather than competing against, one another in geographic data processing.
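To make the contrast between the two data models concrete, here is a minimal illustrative sketch (not part of the original text; the coordinates and grid values are invented):

# Vector model: discrete objects described by coordinates
well = (3.5, 7.2)                                  # point
road = [(0, 0), (2, 1), (4, 1.5)]                  # line (sequence of vertices)
lake = [(1, 1), (4, 1), (4, 3), (1, 3), (1, 1)]    # polygon (closed ring)

# Raster model: a field of values, one per grid cell
# (1 = water, 0 = land, for a hypothetical 4 x 5 scene)
raster = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

# In the vector model the lake is one identifiable object; in the raster
# model it exists only as a pattern of cell values.
water_cells = sum(cell for row in raster for cell in row)
print("Lake polygon vertices:", len(lake) - 1)
print("Water pixels in raster scene:", water_cells)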

The relationship perspective of information organization

Relationships represent an important concept in information organization: a relationship describes the logical association between entities. Relationships can be categorical or spatial, depending on whether they describe location or other characteristics.

Categorical relationships

Categorical relationships describe the association among individual features in a classification system. The classification of data is based on the concept of scale of measurement. There are four scales of measurement:

- Nominal: a qualitative, non-numerical and non-ranking scale that classifies features on intrinsic characteristics. For example, in a land use classification scheme, polygons can be classified as industrial, commercial, residential, agricultural, public and institutional.
- Ordinal: a nominal scale with ranking, which differentiates features according to a particular order. For example, in a land use classification scheme, residential land can be denoted as low density, medium density and high density.
- Interval: an ordinal scale with ranking based on numerical values that are recorded with reference to an arbitrary datum. For example, temperature readings in degrees centigrade are measured with reference to an arbitrary zero (i.e. zero degrees does not mean no temperature).
- Ratio: an interval scale with ranking based on numerical values that are measured with reference to an absolute datum. For example, rainfall data are recorded in mm with reference to an absolute zero (i.e. zero mm of rainfall means no rainfall).

Categorical relationships based on ranking are hierarchical or taxonomic in nature. This means that data are classified into progressively finer levels of detail: data in the top level are represented by a limited number of broad basic categories; data in each basic category are then classified into different subcategories, which can be further classified into another level if necessary. The classification of descriptive data is typically based on categorical relationships.


Spatial relationships

Spatial relationships describe the association among different features in space.
- Spatial relationships are visually obvious when data are presented in graphical form.
- However, it is difficult to build spatial relationships into the information organization and data structure of a database:
  - There are numerous types of spatial relationships possible among features.

  - Recording spatial relationships explicitly demands considerable storage space.
  - Computing spatial relationships on the fly slows down data processing, particularly if relationship information is required frequently.

There are two types of spatial relationships:
- Topological: describes the properties of adjacency, connectivity and containment of contiguous features.
- Proximal: describes the property of closeness of non-contiguous features.

Spatial relationships are very important in geographical data processing and modeling. The objective of information organization and data structure is to find a way that will handle spatial relationships with the minimum storage and computation requirements.

The operating system (OS) perspective of information organization

From the operating system perspective, information is organized in the form of directories.
- Directories are a special type of computer file used to organize other files into a hierarchical structure. Directories are also referred to as folders, particularly in systems using graphical user interfaces.
- A directory may also contain one or more directories. The topmost directory in a computer is called the root directory; a directory that is below another directory is referred to as a subdirectory, and a directory that is above another directory is referred to as a parent directory.
- Directories are designed for bookkeeping purposes in computer systems. A directory is identified by a unique directory name, and computer files of the same nature are usually put under the same directory. A data file can be accessed in a computer system by specifying a path that is made up of the device name, one or more directory names and its own file name, for example: c:\project101\mapdata\basemap\nw2367.dat
- The concept of the workspace used by many geographic information system software packages is based on the directory structure of the host computer.

A workspace is a directory under which all data files relating to a particular project are stored.
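As a small, hypothetical illustration of the path and workspace ideas above (the directory and file names are the ones used in the example path; the workspace listing is an assumption, not from the original text):

from pathlib import Path, PureWindowsPath

# Composing the example path from device name, directory names and file name
p = PureWindowsPath("c:/", "project101", "mapdata", "basemap", "nw2367.dat")
print(p)          # c:\project101\mapdata\basemap\nw2367.dat
print(p.parent)   # the parent directory
print(p.name)     # the file's own name: nw2367.dat

# Treating a project directory as a workspace: list the data files stored
# under it (returns an empty list if the hypothetical directory is absent)
workspace = Path("project101")
data_files = list(workspace.rglob("*.dat"))
print(len(data_files), "data files found in the workspace")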


The application architecture perspective of information organization


Computer applications nowadays tend to be constructed on the client/server systems architecture. Client/server is primarily a relationship between processes running in the same computer or, more commonly, in separate computers across a telecommunication network.
- The client is a process that requests services. The dialog between the client and the server is always initiated by the client, and a client can request services from many servers at the same time.
- The server is a process that provides the service. A server is primarily a passive service provider, and a server can service many clients at the same time.

There are many ways of implementing a client/server architecture, but from the perspective of information organization the following five are most important:
- File servers: the client requests specific records from a file, and the server returns these records to the client by transmitting them across the network.
- Database servers: the client sends structured query language (SQL) requests to the server; the server finds the required information by processing these requests and then passes the results back to the client.
- Transaction servers: the client invokes a remote procedure that executes a transaction at the server side; the server returns the result back to the client via the network.
- Web servers: communicating interactively by the Hypertext Transfer Protocol (HTTP) over the Internet, the Web server returns documents when clients ask for them by name.
- Groupware servers: this particular type of server provides a set of applications that allow clients (and their users) to communicate with one another using text, images, bulletin boards, video and other forms of media.

From the application architecture perspective, the objective of information organization and data structure is to develop a data design strategy that will optimize system operation by:
- Balancing the distribution of data resources between the client and the server: databases are typically located on the server to enable data sharing by multiple users, while static data that are used for reference are usually allocated to the client.
- Ensuring the logical allocation of data resources among different servers: data that are commonly used together should be placed in the same server; data that have common security requirements should be placed in the same server; and data intended for a particular purpose (file service, database query, transaction processing, Web browsing or groupware applications) should be placed in the appropriate server.
- Standardizing and maintaining metadata (i.e. data about data) to facilitate the search for the availability and characteristics of existing data.
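As a small, hedged illustration of the client/server request-response pattern described above (this sketch is not from the original text, and the URL is only a placeholder), a client process can ask a Web server for a document by name over HTTP:

from urllib.request import urlopen

# The client initiates the dialog by requesting a named document;
# the Web server, a passive service provider, returns it over HTTP.
url = "http://example.com/"   # placeholder server and document name
with urlopen(url, timeout=10) as response:
    document = response.read()

print("Server returned", len(document), "bytes for", url)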

4.3 HOUSE OF QUALITY (HOQ), BUILDING A HOQ

QFD was first put forth in 1966 in Quality Assurance work done by Prof. Yoji Akao and Mr. Oshiumi of Bridgestone Tire. Its purpose was to show the connections between true quality, quality characteristics, and process characteristics. This was done using the Fishbone Diagram, with true quality in the heads and quality and process characteristics in the bones. For more complex products, Mitsubishi Heavy Industry Kobe Shipyards combined these many fishbones into a matrix. In 1979, Mr. Sawada of Toyota Auto Body used the matrix in a reliability study which permitted him to address technical trade-offs in the quality characteristics. This was done by adding a roof to the top of the matrix, which he then dubbed the House of Quality.


FIGURE 4.1

Building a House of Quality: The House of Quality is actually an assembly of other deployment hierarchies and tables. These include the Demanded Quality Hierarchy (rows), the Quality Characteristics Hierarchy (columns), the relationships matrix which relates them using any one of several distribution methods, the Quality Planning Table (right-side room), and the Design Planning Table (bottom room).

Many people who haphazardly learned the over-simplified, obsolete version of QFD decades ago and failed to update their knowledge since then refer to these rooms by undifferentiated terms such as "Whats", "Hows", etc. Sadly, this includes many book authors, professors, and consultants. This is not a wise way to do QFD because it limits your ability to apply QFD only in the most elementary form. It could even be detrimental for today's businesses that operate in complex environments. It is recommended that such terms be abandoned and that users refer to the actual data by name. This makes sense when multiple matrices are used, and proper naming conventions add clarity to the process.

Critical Tool for Design for Six Sigma Black Belts

The House of Quality has become a critical tool for Design for Six Sigma (DFSS). It serves the purpose of displaying complex transfer functions Y = f(X), where Y are the Critical to Customer Satisfaction factors and X the Critical to Quality factors.
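One common way a relationship matrix of this kind is used, shown here only as a hedged illustration (the item names, importance weights and relationship strengths below are invented, and this particular prioritization arithmetic is not described in the text above), is to combine customer importance weights with relationship strengths to rank the quality characteristics:

# Demanded quality items (rows) with hypothetical customer importance weights
demanded_quality = {"easy to carry": 5, "long battery life": 4, "clear sound": 3}

# Quality characteristics (columns)
characteristics = ["weight (g)", "battery capacity (mAh)", "speaker output (dB)"]

# Relationship matrix: strength of each row-column relationship (9/3/1/0 scale)
relationships = {
    "easy to carry":     [9, 3, 0],
    "long battery life": [0, 9, 1],
    "clear sound":       [1, 0, 9],
}

# Priority of each characteristic = sum over rows of (importance x strength)
priorities = [
    sum(weight * relationships[item][col]
        for item, weight in demanded_quality.items())
    for col in range(len(characteristics))
]

for name, score in sorted(zip(characteristics, priorities),
                          key=lambda pair: pair[1], reverse=True):
    print(name, score)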

Other matrices can perform lower-level transfer functions as well. Objective measures, target specifications, tolerances, and DPMO can also be added to the Design Planning Table. KPOV and KPIV can also be related in similar matrix formats.

The Myth about the House of Quality

Most interesting is that in many QFD studies, the House of Quality (HOQ) is not the starting point and can even be unnecessary. That the House of Quality is the QFD is a myth that is still propagated by many people and books of outdated QFD knowledge, even though Dr. Yoji Akao (founder of QFD) has repeatedly warned that it is not QFD by itself.

4.4 QFD PROCESS

Quality Function Deployment (QFD)

QFD is a rigorous method for translating customer needs, wants, and wishes into step-by-step procedures for delivering the product or service. While delivering better designs tailored to customer needs, Quality Function Deployment also cuts the normal development cycle by 50%, making you faster to market. QFD uses the QFD House of Quality (a template in the QI Macros) to help structure your thinking, making sure nothing is left out.

FIGURE 4.2

There are four key steps to QFD thinking:

1. Product Planning - Translating what the customer wants (in their language, e.g., portable, convenient phone service) into a list of prioritized product/service design requirements (in your language, e.g., cell phones) that describes how the product works. It also compares your performance with your competition's, and sets targets for improvement to differentiate your product/service from your competitors.

2. Part Planning - Translating product specifications (design criteria from step 1) into part characteristics (e.g., light weight, belt-clip, battery-driven, not hard-wired but radio-frequency based).

3. Process Planning - Translating part characteristics (from step 2) into optimal process characteristics that maximize your ability to deliver Six Sigma quality (e.g., the ability to hand off a cellular call from one antenna to another without interruption).

4. Production Planning - Translating process characteristics (from step 3) into manufacturing or service delivery methods that will optimize your ability to deliver Six Sigma quality in the most efficient manner (e.g., cellular antennas installed with overlapping coverage to eliminate dropped calls).

Even in my small business, I often use the Quality Function Deployment template to evaluate and design a new product or service. It helps me think through every aspect of what my customers want and how to deliver it. It saves me a lot of clean-up on the back end. It doesn't always mean that I get everything right, but I get more of it right, which translates into greater sales and higher profitability with less rework on my part. That's the power of QFD.

4.5 FAILURE MODE EFFECT ANALYSIS (FMEA)

FMEA - Design and Process

FMEA (Failure Mode and Effects Analysis) is a proactive tool, technique and quality method that enables the identification and prevention of process or product errors before they occur. Within healthcare, the goal is to avoid adverse events that could potentially cause harm to patients, families, employees or others in the patient care setting.

As a tool embedded within Six Sigma methodology, FMEA can help identify and eliminate concerns early in the development of a process or new service delivery. It is a systematic way to examine a process prospectively for possible ways in which failure can occur, and then to redesign the processes so that the new model eliminates the possibility of failure. Properly executed, FMEA can assist in improving overall satisfaction and safety levels. There are many ways to evaluate the safety and quality of healthcare services, but when trying to design a safe care environment, a proactive approach is far preferable to a reactive approach.

Definitions of FMEA

FMEA evolved as a process tool used by the United States military as early as 1949, but application in healthcare didn't occur until the early 1990s, around the time Six Sigma began to emerge as a viable process improvement methodology. One of several reliability evaluation and design analysis tools, FMEA also can be defined as:

- A problem prevention tool used to identify weak areas of the process and develop plans to prevent their occurrence.
- A semi-quantitative, inductive, bottom-up approach executed by a team.
- A tool recommended for Joint Commission on Accreditation of Healthcare Organizations (JCAHO) Standard LD.5.2.
- A structured approach to identify the ways in which a process can fail to meet critical customer requirements.
- A way to estimate the risk of specific causes with regard to these failures.
- A method for evaluating the current control plan for preventing failures from occurring.
- A prioritization process for actions that should be taken to improve the situation.

Why Do a FMEA?

Historically, healthcare has performed root cause analysis after sentinel events, medical errors or when a mistake occurs. With the added focus on safety and error reduction, however, it is important to analyze information from a prospective point of view to see what could go wrong before the adverse event occurs. Examining the entire process and support systems involved in the specific events, and not just the recurrence of the event, requires rigor and proven methodologies. Here are some potential targets for a FMEA application:

- New processes being designed
- Existing processes being changed
- Carry-over processes for use in new applications or new environments
- After completing a problem-solving study (to prevent recurrence)


- When preliminary understanding of the processes is available (for a Process FMEA)
- After system functions are defined, but before specific hardware is selected (for a System FMEA)
- After product functions are defined, but before the design is approved and released to manufacturing (for a Design FMEA)


Roles and Responsibilities

The FMEA team members will have various responsibilities. In healthcare, the terms multi-disciplinary or collaboration teams are used to refer to members from different departments or professions. Leaders must lay the groundwork conducive to improvement for the team initiative, with empowerment to make the changes and recommendations for change, plus time to do the work. The FMEA team should not exceed 6 to 10 people, although this may depend on the process stage. Each team should have a leader and/or facilitator, a record keeper or scribe, a time keeper and a champion. In the data gathering or sensing stage, extensive voice of the customer may be required. During the FMEA design meeting, however, the team must have members knowledgeable about the process or subject matter. It is advisable to include facilitators with skills in team dynamics and rapid decision-making.

Ground rules help define the scope and provide parameters to work within. The team should consider questions such as: What will success look like? What is the timeline? The FMEA provides the metrics or control plan. The goal of the preparation is to have a complete understanding of the process you are analyzing. What are the steps? What are its inputs and outputs? How are they related?

Techniques for Accelerating Change

While Six Sigma is based on solid principles and well-founded data, without departmental or organizational acceptance of change, Six Sigma solutions and tools such as FMEA may not be effective. Teams may decide to use change management tools such as CAP (Change Acceleration Process) to help build support and facilitate rapid improvement. Careful planning, communication, participation and ensuring that senior leaders are well-informed throughout the process will greatly increase the chance for a smoother implementation. Approach the FMEA process with a clear understanding of the challenges, an effective approach to overcome those challenges, and a plan to demonstrate a solid track record of results.

To gain leadership support, clearly define the value and return on investment for the required resources.

Supporting FMEA Using Influence Strategy

Once key stakeholders are known and their political, technical or cultural attitudes have been discussed (and verified), the task is to build an effective strategy for influencing them to strengthen, or at a minimum maintain, their level of support. This simple tool helps the team assess stakeholder issues and concerns, identifying and creating a strategy for those who must be moved to a higher level of support.

Benefits of FMEA

Here are the benefits of FMEA:

- Captures the collective knowledge of a team
- Improves the quality, reliability, and safety of the process
- Provides a logical, structured process for identifying process areas of concern
- Reduces process development time and cost
- Documents and tracks risk reduction activities
- Helps to identify Critical-To-Quality characteristics (CTQs)
- Provides historical records; establishes a baseline
- Helps increase customer satisfaction and safety

FMEA reduces time spent considering potential problems with a design concept, and keeps crucial elements of the project from slipping through the cracks. As each FMEA is updated with unanticipated failure modes, it becomes the baseline for the next-generation design. Reduction in process development time can come from an increased ability to carry structured information forward from project to project, and this can drive repeatability and reproducibility across the system.

Types of FMEA

Process FMEA: Used to analyze transactional processes. The focus is on failure to produce the intended requirement, i.e. a defect. Failure modes may stem from the causes identified.

System FMEA: A specific category of Design FMEA used to analyze systems and subsystems in the early concept and design stages. Focuses on potential failure modes associated with the functionality of a system caused by design.

Design FMEA: Used to analyze component designs. Focuses on potential failure modes associated with the functionality of a component caused by design. Failure modes may be derived from causes identified in the System FMEA. Other types:


- FMECA (Failure Mode, Effects, Criticality Analysis): Considers every possible failure mode and its effect on the product/service. Goes a step above FMEA and considers the criticality of the effect and the actions which must be taken to compensate for this effect (critical = loss of life/product).
- A d-FMEA evaluates how a product can fail, and the likelihood that the proposed design process will anticipate and prevent the problem.
- A p-FMEA evaluates how a process can fail, and the likelihood that the proposed control will anticipate and prevent the problem.

FMEA Requires Teamwork

A cause creates a failure mode, and a failure mode creates an effect on the customer. Each team member must understand the process, sub-processes and interrelations. If people are confused in this phase, the process reflects confusion. FMEA requires teamwork: gathering information, making evaluations and implementing changes with accountability. Combining Six Sigma, change management and FMEA, you can achieve:

- Better quality and clinical outcomes
- A safer environment for patients, families and employees
- Greater efficiency and reduced costs
- Stronger leadership capabilities
- Increased revenue and market share
- Optimized technology and workflow

Understanding how to use the right process or facilitation tool at the right time in healthcare can help providers move quality up, costs down and variability out. And that leads to preventing one failure before it harms one individual.

Anticipate Problems and Minimize Their Occurrence and Impact

Failure Modes and Effects Analysis (FMEA) is one of the most widely used and effective tools for developing quality designs, processes, and services.

When criticality is considered, a FMEA is often referred to as a FMECA (Failure Modes, Effects, and Criticality Analysis). In this document, the term FMEA is used in a general sense to include both FMEAs and FMECAs. Developed during the design stage, FMEAs are procedures by which:

- Potential failure modes of a system are analyzed to determine their effects on the system.
- Potential failure modes are classified according to their severity (FMEAs) or to their severity and probability of occurrence (FMECAs).
- Actions are recommended to either eliminate or compensate for unacceptable effects.

When introduced in the late 1960s, FMEAs were used primarily to assess the safety and reliability of system components in the aerospace industry. During the late 1980s, FMEAs were applied to manufacturing and assembly processes by Ford Motor Company to improve production. Today, FMEAs are being used for the design of products and processes as well as for the design of software and services in virtually all industries. As markets continue to become more intense and competitive, FMEAs can help to ensure that new products, which consumers demand be brought to market quickly, are both highly reliable and affordable.

The principal objectives of FMEAs are to anticipate the most important design problems early in the development process and either to prevent these problems from occurring or to minimize their consequences as cost-effectively as possible. In addition, FMEAs provide a formal and systematic approach for design development and actually aid in evaluating, tracking, and updating both design and development efforts. Because the FMEA is begun early in the design phase and is maintained throughout the life of the system, the FMEA becomes a diary of the design and all changes that affect system quality and reliability.

Types of FMEAs

All FMEAs focus on design and assess the impact of failure on system performance and safety. However, FMEAs are generally categorized based on whether they analyze product design or the processes involved in manufacturing and assembling the product.

Product FMEAs. Examine the ways that products (typically hardware or software) can fail and affect product operation. Product FMEAs indicate what can be done to prevent potential design failures.

Process FMEAs. Examine the ways that failures in manufacturing and assembly processes can affect the operation and quality of a product or service. Process FMEAs indicate what can be done to prevent potential process failures prior to the first production run.


Although FMEAs can be initiated at any system level and use either a top-down or bottom-up approach, today's products and processes tend to be complex. As a result, most FMEAs use an inductive, bottom-up approach, starting the analysis with the failure modes of the lowest-level items of the system and then successively iterating through the next higher levels, ending at the system level. Regardless of the direction in which the system is analyzed, all potential failure modes are to be identified and documented on FMEA worksheets (hard copy or electronic), where they are then classified in relation to the severity of their effects.

In a very simple product FMEA, for example, a computer monitor may have a capacitor as one of its components. By looking at the design specifications, it can be determined that if the capacitor is open (failure mode), the display appears with wavy lines (failure effect). And, if the capacitor is shorted (failure mode), the monitor goes blank (failure effect). When assessing these two failure modes, the shorted capacitor would be ranked as more critical because the monitor becomes completely unusable. On the FMEA worksheet, ways in which this failure mode can either be prevented or its severity lessened would be indicated.

Approaches to FMEAs

Product and process FMEAs can be further categorized by the level on which the failure modes are to be presented.

Functional FMEAs. Focus on the functions that a product, process, or service is to perform rather than on the characteristics of the specific implementation. When developing a functional FMEA, a functional block diagram is used to identify the top-level failure modes for each functional block on the diagram. For a heater, for example, two potential failure modes would be: Heater fails to heat and Heater always heats. Because FMEAs are best begun during the conceptual design phase, long before specific hardware information is available, the functional approach is generally the most practical and feasible approach by which to begin a FMEA, especially for large, complex products or processes that are more easily understood by function than by the details of their operation. When systems are very complex, the analysis for functional FMEAs generally begins at the highest system level and uses a top-down approach.

Interface FMEAs. Focus on the interconnections between system elements so that the failures between them can be determined and recorded and compliance to requirements can be verified. When developing interface FMEAs, failure modes are usually developed for each interface type (electrical cabling, wires, fiber optic lines, mechanical linkages, hydraulic lines, pneumatic lines, signals, software, etc.). Beginning an interface FMEA as soon as the system interconnections are defined ensures that proper protocols are used and that all interconnections are compliant with design requirements.

Detailed FMEAs. Focus on the characteristics of specific implementations to ensure that designs comply with requirements for failures that can cause loss of end-item function, single-point failures, and fault detection and isolation. Once individual items of a system (piece-parts, software routines, or process steps) are uniquely identified in the later design and development stages, FMEAs can assess the failure causes and effects of failure modes on the lowest-level system items. Detailed FMEAs for hardware, commonly referred to as piece-part FMEAs, are the most common FMEA applications. They generally begin at the lowest piece-part level and use a bottom-up approach to check design verification, compliance, and validation.

Variations in design complexity and data availability will dictate the analysis approach to be used. Some cases may require that part of the analysis be performed at the functional level and other portions at the interface and detailed levels. In other cases, initial requirements may be for a functional FMEA that is to later progress to an interface FMEA, and then finally progress to a detailed FMEA. Thus, FMEAs completed for more complex systems often include worksheets that employ all three approaches to FMEA development.

Failure mode and effects analysis

Failure mode and effects analysis (FMEA) is a method (first developed for systems engineering) that examines potential failures in products or processes. It may be used to evaluate risk management priorities for mitigating known threat vulnerabilities. FMEA helps select remedial actions that reduce cumulative impacts of life-cycle consequences (risks) from a system failure (fault). By adapting hazard tree analysis to facilitate visual learning, this method illustrates connections between multiple contributing causes and cumulative (life-cycle) consequences. It is used in many formal quality systems such as QS-9000 or ISO/TS 16949.

The basic process is to take a description of the parts of a system, and list the consequences if each part fails. In most formal systems, the consequences are then evaluated by three criteria and associated risk indices:


- severity (S)
- likelihood of occurrence (O) (this is also often known as probability, P)
- inability of controls to detect it (D)

Each index ranges from 1 (lowest risk) to 10 (highest risk). The overall risk of each failure is called the Risk Priority Number (RPN) and is the product of the Severity (S), Occurrence (O), and Detection (D) rankings: RPN = S x O x D. The RPN (ranging from 1 to 1000) is used to prioritize all potential failures to decide upon actions leading to reduced risk, usually by reducing the likelihood of occurrence and improving controls for detecting the failure.

Applications

FMEA is most commonly applied to, but not limited to, design (Design FMEA) and manufacturing processes (Process FMEA).

Design failure modes and effects analysis (DFMEA) identifies potential failures of a design before they occur. DFMEA then goes on to establish the potential effects of the failures, their causes, how often and when they might occur, and their potential seriousness.

Process failure modes and effects analysis (PFMEA) is a systemized group of activities intended to:
1. Recognize and evaluate the potential failure of a product/process and its effect,
2. Identify actions which could eliminate or reduce the occurrence, or improve detectability,
3. Document the process, and
4. Track changes to the process incorporated to avoid potential failures.

FMEA analysis is very important for dynamic positioning systems. FMEA analysis is often applied through the use of a FMEA table combined with a rating chart to allow designers to assign values to:
1. The severity of potential failures.
2. The likelihood of a potential failure occurring.
3. The chance of detection within the process.

The numbers are then multiplied together to create the Risk Priority Number (RPN). This data can then be collected in a table such as:

TABLE 4.1

Part | Function | Failure Mode | Potential Effect of Failure | Severity Rating | Possible Cause of Failure | Occurrence Rating | Means of Detection | Detection Rating | RPN | Preventative Actions to be Taken
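As a hedged, self-contained illustration of how rows of such a table are prioritized (the failure modes and ratings below are invented, not taken from the text), the RPN arithmetic can be sketched as follows:

# Hypothetical FMEA rows: (failure mode, Severity, Occurrence, Detection)
rows = [
    ("Capacitor shorted - monitor goes blank", 8, 3, 4),
    ("Capacitor open - wavy lines on display", 5, 4, 3),
    ("Loose connector - intermittent display", 6, 2, 7),
]

def rpn(severity, occurrence, detection):
    # Risk Priority Number: RPN = S x O x D (each rated 1 to 10)
    return severity * occurrence * detection

# Rank the failure modes so the highest-risk items are addressed first
ranked = sorted(rows, key=lambda r: rpn(r[1], r[2], r[3]), reverse=True)
for mode, s, o, d in ranked:
    print(rpn(s, o, d), "(S=%d, O=%d, D=%d)" % (s, o, d), mode)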


DISADVANTAGES

FMEA is useful mostly as a survey method to identify major failure modes in a system. It is not able to discover complex failure modes involving multiple failures or subsystems, or to discover the expected failure intervals of particular failure modes. For these, a different method called fault tree analysis is used.

Elementary Method: Failure Mode Effect Analysis (FMEA)

Objective and Purpose

The Failure Mode Effect Analysis (FMEA) is a method used for the identification of potential error types in order to define their effect on the examined object (System, Segment, SW/HW Unit) and to classify the error types with regard to criticality or persistency. This is to prevent errors, and thus weak points in the design, which might result in an endangering or loss of the system/software and/or in an endangering of the persons connected with the system/software. The FMEA is also to furnish results for corrective measures, for the definition of test cases and for the determination of operating and application conditions of the system/software.

Means of Representation

Means to represent the FMEA are, for example:


- sequence descriptions of critical functions
- functional and reliability block diagrams
- error trees
- error classification lists
- lists of critical functions and items

Operational Sequence

The basic principle is that, both in the functional hierarchy and in the program logic, defined success or error criteria are systematically (functionally and chronologically) queried: what happens if? This analysis and evaluation has to be carried out for all operating phases and operating possibilities. The FMEA process consists of the following main steps:

- FMEA planning to identify the FMEA goals and levels
- definition of specific procedures, basic rules, and criteria for the FMEA realization

- analysis of the system with regard to functions, interfaces, operating phases, operating modes, and environment
- design and analysis of functional and reliability block diagrams or error tree diagrams to illustrate the processes, interconnections, and dependencies
- identification of potential error types
- evaluation and classification of the error types and their effects
- identification of measures to prevent errors and to check errors
- evaluation of the effects of suggested measures
- documentation of the results

Limits of the Method's Application

Within the scope of submodel PM, the application of FMEA is limited to projects with very restrictive planned data or high requirements; a general application of FMEA would not be appropriate, considering the required effort and costs in comparison with the achieved results. Within the scope of the submodels SD and QA, the method FMEA is applied if the reliability requirements for the system or functional units are high.

Failure Modes and Effects Analysis (FMEA)

Also called: potential failure modes and effects analysis; failure modes, effects and criticality analysis (FMECA).

Description

Failure modes and effects analysis (FMEA) is a step-by-step approach for identifying all possible failures in a design, a manufacturing or assembly process, or a product or service. "Failure modes" means the ways, or modes, in which something might fail. Failures are any errors or defects, especially ones that affect the customer, and can be potential or actual. "Effects analysis" refers to studying the consequences of those failures. Failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. The purpose of the FMEA is to take actions to eliminate or reduce failures, starting with the highest-priority ones. An FMEA also documents current knowledge and actions about the risks of failures, for use in continuous improvement. FMEA is used during design to prevent failures.

Later it's used for control, before and during ongoing operation of the process. Ideally, FMEA begins during the earliest conceptual stages of design and continues throughout the life of the product or service.

Begun in the 1940s by the U.S. military, FMEA was further developed by the aerospace and automotive industries. Several industries maintain formal FMEA standards. What follows is an overview and reference. Before undertaking an FMEA process, learn more about standards and specific methods in your organization and industry through other references and training.


When to Use

- When a process, product or service is being designed or redesigned, after quality function deployment
- When an existing process, product or service is being applied in a new way
- Before developing control plans for a new or modified process
- When improvement goals are planned for an existing process, product or service
- When analyzing failures of an existing process, product or service
- Periodically throughout the life of the process, product or service

Process

(Again, this is a general procedure. Specific details may vary with the standards of your organization or industry.)

1. Assemble a cross-functional team of people with diverse knowledge about the process, product or service and customer needs. Functions often included are: design, manufacturing, quality, testing, reliability, maintenance, purchasing (and suppliers), sales, marketing (and customers) and customer service.

2. Identify the scope of the FMEA. Is it for concept, system, design, process or service? What are the boundaries? How detailed should we be? Use flowcharts to identify the scope and to make sure every team member understands it in detail. (From here on, we'll use the word "scope" to mean the system, design, process or service that is the subject of your FMEA.)

3. Fill in the identifying information at the top of your FMEA form. Table 4.2 shows a typical format. The remaining steps ask for information that will go into the columns of the form.

FMEA example (TABLE 4.2)


1. Identify the functions of your scope. Ask, "What is the purpose of this system, design, process or service? What do our customers expect it to do?" Name it with a verb followed by a noun. Usually you will break the scope into separate subsystems, items, parts, assemblies or process steps and identify the function of each.

2. For each function, identify all the ways failure could happen. These are potential failure modes. If necessary, go back and rewrite the function with more detail to be sure the failure modes show a loss of that function.

3. For each failure mode, identify all the consequences on the system, related systems, process, related processes, product, service, customer or regulations. These are potential effects of failure. Ask, "What does the customer experience because of this failure? What happens when this failure occurs?"

4. Determine how serious each effect is. This is the severity rating, or S. Severity is usually rated on a scale from 1 to 10, where 1 is insignificant and 10 is catastrophic. If a failure mode has more than one effect, write on the FMEA table only the highest severity rating for that failure mode.

5. For each failure mode, determine all the potential root causes. Use tools classified as cause analysis tools, as well as the best knowledge and experience of the team. List all possible causes for each failure mode on the FMEA form.

6. For each cause, determine the occurrence rating, or O. This rating estimates the probability of failure occurring for that reason during the lifetime of your scope. Occurrence is usually rated on a scale from 1 to 10, where 1 is extremely unlikely and 10 is inevitable. On the FMEA table, list the occurrence rating for each cause.

7. For each cause, identify current process controls. These are tests, procedures or mechanisms that you now have in place to keep failures from reaching the customer. These controls might prevent the cause from happening, reduce the likelihood that it will happen or detect failure after the cause has already happened but before the customer is affected.

8. For each control, determine the detection rating, or D. This rating estimates how well the controls can detect either the cause or its failure mode after they have happened but before the customer is affected. Detection is usually rated on a scale from 1 to 10, where 1 means the control is absolutely certain to detect the problem and 10 means the control is certain not to detect the problem (or no control exists). On the FMEA table, list the detection rating for each cause.

9. (Optional for most industries) Is this failure mode associated with a critical characteristic? (Critical characteristics are measurements or indicators that reflect safety or compliance with government regulations and need special controls.) If so, a column labeled "Classification" receives a Y or N to show whether special controls are needed. Usually, critical characteristics have a severity of 9 or 10 and occurrence and detection ratings above 3.

10. Calculate the risk priority number, or RPN, which equals S x O x D. Also calculate Criticality by multiplying severity by occurrence, S x O. These numbers provide guidance for ranking potential failures in the order they should be addressed.

11. Identify recommended actions. These actions may be design or process changes to lower severity or occurrence. They may be additional controls to improve detection. Also note who is responsible for the actions and target completion dates.

12. As actions are completed, note results and the date on the FMEA form. Also note new S, O or D ratings and new RPNs.

4.6 FMEA STAGES, DESIGN AND DOCUMENTATION

Step 1 - Perform Preparatory Work

Before beginning any analysis, it is important to do some preliminary prep work. This analysis is no different. The first thing that needs to be accomplished is to select a system to analyze. For instance, we may want to select a small subset of the facility, as opposed to selecting the entire facility, as our system. Once we know what system we want to work on, we must define failure. This may seem trivial, but it is an essential step in the analysis. If we were to ask 100 people to define failure, we would probably get 100 different definitions. This would make our analysis far too broad. We need to focus, not on everything, but on the things that are most important to our business at that point in time. For instance, if utilization is critical to our business today, we should center our definition around utilization; if our priority issue is quality, then our definition should center around quality. Let's take a look at some examples of common failure definitions:

- Failure is any loss that interrupts the continuity of production.
- Failure is a loss of asset availability.
- Failure is the unavailability of equipment.
- Failure is a deviation from the status quo.


- Failure is not meeting target expectations.
- Failure is any secondary defect.

The definitions above are some common industrial failure definitions. Please note that there are no perfect failure definitions. For instance, "Failure is any loss that interrupts the continuity of production" has to include planned shutdowns, rate reductions for decreased sales, etc. It would not pick up failures on equipment that is spared, since it does not interrupt the continuity of production. A precise failure definition is important since it focuses the facility on the priority issues. It fosters good communications since everyone knows what is important, and it also provides a basis for a common understanding of what the facility's needs are. Not to mention, it is an essential step in the development of a Significant Few failure list.

There are a few rules of thumb to consider when developing a failure definition. It must be concise and easily understandable. If it is not, it will leave too much room for interpretation. It should not have to be interpreted. It must only address one topic. This is important to maintain the focus of the analysis. If we include too many topics, our target becomes too large. Finally, it should be approved and signed by someone in authority so that everyone in the organization sees that it is a priority issue.

The next step in the preparation process is to develop a contact flow diagram. The contact flow diagram will allow you to break down your system into smaller, more manageable subsystems. The rule for this diagram is to map all of the process units that come into contact with the product. This diagram, as well as the failure definition, will be used when we begin to collect the data for the analysis.

The next thing we need to accomplish before we begin our FMEA is to perform a gap analysis. In other words, we need to uncover the disparity between what we are producing now and what is our potential. This will give us some indication as to the potential opportunity in our facility. For instance, we produce widgets in our facility, and we currently produce 150,000 per year. However, our potential is 300,000 per year. Now we have a gap of 150,000 widgets per year.

The final step in the preparation stage is to design a preliminary interview sheet and a schedule of people to interview to collect the data. This will be the form to assist you in collecting the data from your interviews. To put this all into perspective, the following is a checklist of items to be covered prior to beginning a FMEA.

TABLE 4.3

FMEA Preparatory Steps | Completed (Y/N)
Define the system to analyze |
Define failure |
Draw a contact diagram |
Calculate the gap |
Develop data worksheets |
Develop preliminary interview schedule |

FMEA Preparation Checklist

Step 2 - Collection and Documentation of Data

There are a couple of ways of collecting the data for this analysis. You can rely on your computer data systems (i.e. Maintenance Management System) or you can go to the people who are closest to the work and get their input. Although each has its advantages, interviewing is probably the best since the information will be coming straight from the source. If you have enough confidence in your data systems, then it will be useful to use that information to later validate your interviews. At this point let's discuss how you would use interviews to collect the data for your analysis.

The process is really quite simple. Let's look at a simple scenario: You send out a message to all of the people that you would like to interview. You state the date, time and a brief description of the FMEA process for the interviewees. Note: it is important to interview at least 2 or 3 people in each session so that the interviewees can bounce ideas off of each other. Once in the room, you will need to display a large copy of the contact flow diagram and the failure definition so that they are in clear view of the interviewees. Now you will begin the process of asking your questions. There really is only one initiating question that needs to be asked: "What events or conditions satisfy the definition of failure within each of the subsystems in the contact flow diagram?" At this point the interviewees will begin to brainstorm all of the failure events that they have experienced within each of the subsystems. Once you have exhausted all of the possibilities, ask the interviewees what the frequency and impact of each of the failure events is. The frequency should be based on the number of occurrences per year.


The interviewees, however, will give you the information in the measurement units that make most sense to them. For instance, they may say it happens once per shift. It is your job to later translate that figure into the number of occurrences per year. The impact should include items such as manpower requirements, material costs and any downtime that might have been experienced. This is all there is to it!

When you begin the interview process, it is best to interview the people who are closest to the work (i.e. mechanics and operators). You should also talk with supervisors and possibly managers, but certainly not to the extent that you would for mechanics and operators. As the principal analyst, you will also need to be the principal interviewer. This means that you have to explain the process to the interviewees, ask the questions and capture the information on your log sheet. This can be a difficult job. If it is feasible, it would be advantageous to have an associate interviewer to assist you by recording the information on the log sheets. This allows you to focus on the questions and the interviewees.

The job of interviewing can be quite an experience, particularly if you do not have a lot of experience in conducting interviews. It tends to be more of an art form than a science. Below is a listing of some tips that may be useful when you begin to conduct your FMEA interviews.

Interview Tips

- Be very careful to ask the exact same lead questions of each of the interviewees. This will eliminate the possibility of having different answers depending on the interpretation of the question. Later you can expand on the questions, if further clarification is necessary.
- Make sure that the participants know what a FMEA is, as well as the purpose and structure of the interviews. If you are not careful, the process may begin to look more like an interrogation than an interview to the interviewees. You want the interviewees to be comfortable.
- Allow the interviewees to see what you are writing. This will set them at ease since they can see that the information they are providing is being recorded correctly. Never use a tape recorder in a FMEA session because it tends to make people uncomfortable and less likely to share information.
- Never argue with an interviewee. Even if you do not agree with the person, it is best to accept what they are saying at face value and double-check it with the information from other interviews. The minute you become argumentative, it reduces the amount of information that you can get from that person.

- Always be aware of interviewees' names. There is nothing sweeter to a person's ears than the sound of their own name. If you have trouble remembering, simply write the names down in front of you so that you can always refer to them.
- It is important to develop a strategy to draw out quiet participants. There are many quiet people in our workforce who have a wealth of data to share but are not comfortable sharing it with others. We have to make sure that we draw out these quiet interviewees in a gentle and inquiring manner.
- Be aware of the body language of interviewees. There is an entire science behind body language. It is not important that you become an expert in this area. However, it is important to know that a substantial portion of human communication is through body language. Let the body language talk to you.
- In any set of interviews, there will be a number of people who are able to contribute more to the process than the others. It is important to make a note of the extraordinary contributors so that they can assist you later in the analysis. They will be extremely helpful if you need additional information, for validating your finished FMEA, as well as for assisting you when you begin your actual Root Cause Failure Analysis (RCFA).
- Remember to use your failure definition and block diagram to keep interviewees on track if they begin to wander off the subject.

Step 3 - Summarize & Encode

At this point we have conducted a series of separate interviews and we need to look through our data to reduce redundant entries. Then we convert frequencies from the interviewees' measurement units into occurrences per year (i.e. 2 per month would translate into 24 times per year). The easiest way to summarize this information is to input it into an electronic spreadsheet. There are many products on the market that you could use; Microsoft Excel, Lotus 1-2-3 or Borland's Quattro Pro are just a few of the more popular spreadsheet programs you should consider. Once the information is input, you can use your spreadsheet to sort the raw data first by sub-system and then by failure event. This will give you a closer look at the events that are redundant. As far as making the conversions to numbers of times per year, your more advanced spreadsheets can do many of these tasks for you. Consult your user's manual for creating lookup tables.

The following example should give you an idea of what is meant by summarizing your data:
TABLE 4.4
Sub-System   Failure Event              Failure Mode        Frequency        Impact
Recovery     Recirculation Pump Fails   Bearing Fails       1 per month      1 shift
Recovery     Recirculation Pump Fails   Oil Contamination   1 per 2 months   1 day
Recovery     Recirculation Pump Fails   Bearing Locks Up    1 per month      12 hours
Recovery     Recirculation Pump Fails   Shaft Fractures     1 per year       1 day

This data suggests that the first three items are the same, since they each impact the bearings and have fairly consistent frequencies and impacts. The last item is also related to bearings but went one step beyond the others, since we not only lost the bearings but also suffered a fractured shaft. This would indicate a separate mode of failure. A summarization of this data might look something like this:
TABLE 4.5
Sub-System   Failure Event              Failure Mode       Frequency     Impact
Recovery     Recirculation Pump Fails   Bearing Problems   12 per year   12 hours
Recovery     Recirculation Pump Fails   Shaft Fractures    1 per year    1 day

Completed FMEA failure event summarization
Step 4 - Calculate Loss
At this point, we want to do a simple calculation to generate our total loss for each event in the analysis. The calculation is as follows:
Frequency x Loss Per Occurrence (Impact) = Total Loss Per Year
Let's look at an example of just how to apply this:

TABLE 4.6
Sub-System        Failure Event              Failure Mode             Frequency     Impact         Total Loss (hrs./yr.)
Recovery          Recirculation Pump Fails   Bearing Fails            12 per year   12 lost hrs.   144 lost hrs. of prod.
Compressor        Seal Failure               Blown Seals              4 per year    24 lost hrs.   96 lost hrs. of prod.
Mixers            Filter Switches            Filters Clogged          26 per year   2 lost hrs.    52 lost hrs. of prod.
Vent Condensers   Pressure Gauge Leaks       Leaks Due To Corrosion   3 per year    8 lost hrs.    24 lost hrs. of prod.
Completed Loss Calculation Example
What we need to do is multiply the frequency times the impact to get our total loss. In the first event, we have a failure occurring once per month, or 12 times per year. We lose a total of 12 hours of production every time this occurs. So we simply multiply 12 occurrences times 12 hours of lost production to get a total loss of 144 hours per year. If you decide to use an electronic spreadsheet, all of these calculations can be performed automatically by multiplying the frequency and impact columns. Refer to the section in your software's user manual that concerns multiplying columns.
It is important to make sure that total loss is communicated in the most appropriate units. For example, we used hours of downtime per year in the example above. Hours of downtime might not mean much to some people, so it might be more advantageous to convert that number from hours per year to dollars per year, since everyone can relate to dollars. In other words, use the units that will get the most attention from everyone involved.
Step 5 - Determining the Significant Few
The concept of the Significant Few is derived from a famous Italian economist named Vilfredo Pareto. Pareto said that in any set or collection of objects, ideas, people and events, a FEW within the sets or collections are MORE SIGNIFICANT than the remaining majority. Consider these examples:
1. 80% of a bank's assets are represented by 20% or less of its customers.
2. 80% of the care given in a hospital is received by 20% or less of its patients.
Well, it is no different in industry. 80% of the losses in a manufacturing facility are represented by 20% or less of its failure events. This means that we only have to perform root cause failure analysis on 20% or less of our failure events to reduce or eliminate 80% of our facility's losses. Now that is significant!
In order to determine the Significant Few you must perform a few simple steps:
1. Total all of the failure events in the analysis to create a global total loss.
2. Sort the total column in descending order (i.e. highest to lowest).
3. Multiply the global total loss by 80% or 0.80. This will give you the Significant Few loss figure that you will need to determine what the Significant Few failures are in your facility.
4. Go to the top of the total loss column and begin adding the top events from top to bottom. When the sum of these losses is equal to or greater than the Significant Few loss figure, those events are your Significant Few failure events.
Let's take a look at how this applies to our discussion on FMEA.
TABLE 4.7
Sub System     Failure Event      Failure Mode      Freq.   Impact    Total Loss
Sub System 3   Failure Event 1    Failure Mode 1    2000    $850      $1,700,000
Sub System 2   Failure Event 2    Failure Mode 2    1000    $1,250    $1,250,000
Sub System 4   Failure Event 3    Failure Mode 3    4       $75,000   $300,000
Sub System 2   Failure Event 4    Failure Mode 4    18      $6,000    $108,000
Sub System 3   Failure Event 5    Failure Mode 5    6       $12,000   $72,000
Sub System 2   Failure Event 6    Failure Mode 6    52      $1,000    $52,000
Sub System 3   Failure Event 7    Failure Mode 7    80      $500      $40,000
Sub System 3   Failure Event 8    Failure Mode 8    12      $3,000    $36,000
Sub System 4   Failure Event 9    Failure Mode 9    365     $75       $27,375
Sub System 3   Failure Event 10   Failure Mode 10   24      $1,000    $24,000
Sub System 1   Failure Event 11   Failure Mode 11   12      $1,300    $15,600
Sub System 2   Failure Event 12   Failure Mode 12   40      $300      $12,000
Sub System 1   Failure Event 13   Failure Mode 13   12      $1,000    $12,000
Sub System 2   Failure Event 14   Failure Mode 14   10      $1,000    $10,000
Sub System 1   Failure Event 15   Failure Mode 15   48      $200      $9,600
Sub System 3   Failure Event 16   Failure Mode 16   3       $2,000    $6,000
Sub System 2   Failure Event 17   Failure Mode 17   6       $1,000    $6,000
Total Global Loss                                                     $3,680,575
Significant Few Losses                                                $2,944,460
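As an illustration of Steps 4 and 5, the following sketch computes total loss per event and the 80% cutoff in Python. It is only a sketch: the event names and figures mirror a few rows of the table above, and the variable names are arbitrary rather than part of the FMEA technique.

# Minimal sketch of Steps 4 and 5: total loss per year = frequency x impact,
# then keep the top events that cover 80% of the global loss.
events = [  # (event, occurrences per year, loss per occurrence in $)
    ("Failure Event 1", 2000, 850),
    ("Failure Event 2", 1000, 1250),
    ("Failure Event 3", 4, 75000),
    ("Failure Event 4", 18, 6000),
]

# Step 4: total loss per event.
totals = [(name, freq * impact) for name, freq, impact in events]

# Step 5: sort descending, compute the 80% cutoff, add from the top.
totals.sort(key=lambda t: t[1], reverse=True)
global_loss = sum(loss for _, loss in totals)
cutoff = 0.80 * global_loss

significant_few, running = [], 0
for name, loss in totals:
    if running >= cutoff:
        break
    significant_few.append(name)
    running += loss

print("Global loss: $%d, 80%% cutoff: $%d" % (global_loss, cutoff))
print("Significant few:", significant_few)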

In the example above, we have totaled the loss column and have a total global loss of $3,680,575. The total loss column has been sorted in descending order so that it is easy to identify the Significant Few failure events. The Significant Few loss figure that we are looking for is $2,944,460 ($3,680,575 x 0.80). Now all we have to do is simply go to the top of the total loss column and begin adding from top to bottom until we reach the Significant Few loss figure of $2,944,460. It turns out that the first 2 failure events represent approximately 80% of our losses ($2,950,000), or our Significant Few failure list. Now, instead of doing Root Cause Failure Analysis on everything, we are only going to do it on the ones in our Significant Few failure list.
Step 6 - Validate Your Results
There are a few validations that should be performed to make sure that our analysis is correct. You can use the gap analysis to make sure that all of the events add up to +/- 10% of the gap. If it ends up being less, you have probably left some important failure events off the listing. If you have more than the gap, then you probably have not summarized your results well enough; there may be some redundancies in your list. A second validation that you can use is having a group of experienced people from your facility review your findings. This will help ensure that you are not too far off base. A third, and final, validation would be to use your computerized data systems to see if the events closely match the data in your maintenance management system. This will give you further confidence in your analysis. Do not worry if your list varies from your maintenance management system (MMS), since you will pick up a lot of events that are never even recorded in the work order system (i.e. those events that may take only a few minutes to repair).
Step 7 - Issue a Report
As with any analysis, it is important to communicate your findings to all interested parties. Your report should include the following items:
1. An explanation of the analysis technique.
2. The failure definition that was utilized.
3. The contact flow diagram that was utilized.
4. The results displayed graphically, as well as the supporting spreadsheet lists.
5. Recommendations of which failures are candidates for Root Cause Failure Analysis.
6. A listing of everyone involved in the analysis, including all of the interviewees.
Last but not least, make sure that you communicate the results of the analysis back to the interviewees who participated, so that everyone can feel a sense of accomplishment and ownership.
In summary, FMEA is a fantastic tool for limiting your analysis work to only those things that are of significant importance to the facility. You cannot perform Root Cause Failure Analysis on everything. However, you can use this tool to help narrow your focus to what is most important.
4.7 REQUIREMENTS OF RELIABILITY
Organization for Reliability
Product reliability should be the objective of everyone in the organization, and a well-entrenched reliability and quality assurance culture must be prevalent at all levels. Starting at the conceptual stage with design reviews, reliability improvement efforts must continue through the fabrication, testing and operational phases. In the initial stages, design reviews are dominated by design engineers, but in the later phases participation by independent reliability and quality assurance offices increases. In order to ensure objectivity in assessing reliability, outside specialists and consultants participate in the reviews to improve the reliability.
4.8 FAILURE RATE
Failure Analysis
Failure analysis is the cornerstone of reliability engineering. In it, technical failure analysis examines the cause and extent of failures. Such analyses are carried out by corrosion engineering, equipment inspection, plant engineering, R and D or customer service. For instance, a corrosion engineer may expose various alloys to cooling water to determine which fail and by what mechanism. An equipment inspector may measure pits in a tower top to determine remaining wall thickness. The number of operating hours between failures is a measure of the equipment's reliability. In this context a failure is defined as any defect or malfunction which prevents successful accomplishment of a desired performance.
The failure rate function provides a natural way of characterizing failure distributions. It is also known as the hazard rate or force of mortality. The failure rate is usually expressed as failures per hour, month or other time interval. Usually the exponential distribution describes the probability of survival (reliability) during the normal operation period. MTBF, it may be noticed, is the reciprocal of the failure rate. Failure rates are additive, while MTBF is not.
The exponential distribution describes the probability of survival P(t) (reliability) during the normal operation period:
P(t) = e^(-rt)
where r is the failure rate in failures per hour and t is the time in hours. For example, if r is 0.01 and t is 1000 hours, then rt = 10, i.e. 10 failures are expected in the interval.
The Poisson distribution (discussed earlier) is closely related to the exponential distribution, and in many situations the frequency distribution of failures follows the Poisson distribution, whose equation is
P(x) = e^(-rt) (rt)^x / x!
where x is the variable with possible values from 0 to infinity. In this, the mean is the expectation rt, and the variance is equal to the mean. For large rt, the Poisson distribution may be approximated by the normal distribution. The exponential and Poisson distributions describe the most random situations possible. While the exponential distribution describes the constant failure rate situations that occur during the normal operation period, the normal distribution describes the increasing failures that occur during the wear-out period. The equation of the density function of the normal distribution is given by
f(t) = (1 / (σ √(2π))) exp(-(t - μ)² / (2σ²))
where μ is the mean and σ is the standard deviation.
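A short sketch of these constant-failure-rate formulas in Python may help; the failure rate and time are the example figures above, and the helper function name is arbitrary.

import math

# Sketch of the constant-failure-rate formulas: r = 0.01 failures per hour,
# t = 1000 hours, as in the example above.
r, t = 0.01, 1000.0

reliability = math.exp(-r * t)    # P(t) = e^(-rt), chance of surviving to time t
expected_failures = r * t         # mean number of failures in the interval (= 10)
mtbf = 1.0 / r                    # MTBF is the reciprocal of the failure rate

def poisson_prob(x, mean):
    """Poisson probability of exactly x failures when the mean is r*t."""
    return math.exp(-mean) * mean ** x / math.factorial(x)

print("Reliability over %g h: %.6f" % (t, reliability))
print("Expected failures:", expected_failures, " MTBF:", mtbf, "hours")
print("P(exactly 10 failures):", round(poisson_prob(10, expected_failures), 4))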

For the Weibull distribution, the reliability function takes the form P(t) = exp[-(t/V)^K]. V is known as the characteristic age at failure. For different components operating in a given environment, the larger the V, the greater the reliability. The value of K fixes the shape of P(t). It measures the dispersion and is used to calculate the variance. As K increases, this shape tends to the normal distribution.
Bath Tub Curve
An Advisory Group on Reliability of Electronic Equipment (AGREE) was formed by the U.S. Department of Defence in 1952, primarily to study the reliability requirements of critical military hardware. Their report, submitted in 1957, gave a major impetus to the reliability movement. The group formulated the well-known bath-tub curve of reliability. The probability of failure of a manufactured product is similar to the mortality rate of human beings: high during the initial and terminal stages, with a steady low level in between.

FIGURE 4.3
In view of the shape of the curve, the reliability of an item can be determined by taking into consideration only the normal usage of the product. The useful working life of an item is normally indicated by the flat and uniform failure rate period of the bath tub curve. The useful life of a product is divided into three separate periods known as wear-in, normal operation and wear-out. These three periods are usually distinct but sometimes they overlap. The figure above shows the typical bath tub shaped failure rate curve (failure rate on the vertical Y-axis) that an equipment experiences during its lifetime (shown on the horizontal X-axis). The three stages are: I wear-in, II normal operation and III wear-out. The wear-in period, also known as the infant mortality period, is characterized by a decreasing failure rate. This is usually witnessed when starting new plants. In this initial debugging period a high breakdown rate is witnessed. Gradually the break-down
frequency decreases to a constant range when the plant is turned over to operations in a highly reliable stage.
Most original equipment manufacturers debug the equipment before delivery. The wear-in failures are caused by material defects and human errors during assembly, and most suppliers have quality control programmes to detect and eliminate these bugs. Failures during the wear-in period are described by the Weibull distribution. The normal operation period covers the major part of an equipment's life and is characterized by a constant failure rate. During this normal period, the failure rate does not change as the equipment ages, implying that the probability of failure at one point in time is the same as at any other point in time. Like the human being, all equipment wears and all material degrades with time. After a long normal operation period with a constant failure rate, the wear-out period starts, with an increasing failure rate. When an equipment enters the wear-out period, it must be overhauled.
Combinatorial Aspects
The prediction of the overall reliability of a complex system, such as a computer installation, can be achieved through combinatorial analysis. Let us consider the computer configuration depicted in the exhibit (Figure 4.4).

FIGURE 4.4
Ideally we should like to accomplish our aim of getting correct output in 100% of the cases. But, in practice, this may not always be possible. Let us now assume that the chance of failure of each of the three blocks is 1%, i.e. each block has a reliability of 99%. The three blocks in the exhibit are in series, and if any one of them fails, the entire system fails. Therefore, the series reliability of the above system comprising three blocks in series is the product of the respective reliabilities of the individual blocks:
0.99 x 0.99 x 0.99 = 0.970299
In the above system, the CPU is the most expensive block and it is not economically viable to have a standby CPU. In order to improve the reliability of this system, we can introduce an additional input and output acting in parallel. This will ensure that the failure of one (input or output) will not result in the failure of the entire system. The reliability of this system can be obtained as follows. The chance of failure of one input is 0.01, because its reliability is 0.99. From probability considerations, the chance of failure of both inputs is 0.01 x 0.01 = 0.0001. This implies that the reliability of the total input system is 1 - 0.0001 = 0.9999. Similarly, the reliability of the total output system is 0.9999. Therefore, the reliability of the total system is
0.9999 x 0.99 x 0.9999 = 0.989802
We see that the reliability of the total system improves by about 2% (from 0.970299 to 0.989802). This increased reliability is achieved through redundancy, which implies the existence of more than one item for accomplishing a given task. Obviously, a cost-benefit analysis has to be carried out before embarking on reliability through redundancy. A well-defined reliability protocol and well-equipped test facilities are only a prerequisite, but problems in reliability do persist in a developing economy like India. These include large varieties of parts, small quantities of requirement, highly stringent time schedules, and indigenous manufacturers not being up to the expectations for producing very high reliability.
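The same series/parallel arithmetic can be expressed in a few lines of Python. This is only a sketch; the helper functions are illustrative and not part of any standard reliability library.

# Sketch of the series/parallel calculation above: three blocks in series,
# then redundant (parallel) input and output units.

def series(*blocks):
    """Reliability of blocks in series: the product of their reliabilities."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(*blocks):
    """Reliability of redundant blocks: 1 minus the chance that all fail."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

r_input = r_cpu = r_output = 0.99

plain = series(r_input, r_cpu, r_output)                               # 0.970299
redundant = series(parallel(0.99, 0.99), r_cpu, parallel(0.99, 0.99))  # ~0.989802

print("Series only: %.6f" % plain)
print("With redundant input/output: %.6f" % redundant)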
4.9 SEVEN OLD STATISTICAL TOOLS
Seven Tools of Quality
The discipline of Total Quality Control uses a number of quantitative methods and tools to identify problems and suggest avenues for continuous improvement in fields such as manufacturing.
Over many years, total quality practitioners gradually realized that a large number of quality related problems can be solved with seven basic quantitative tools, which then became known as the traditional Seven Tools of Quality. These are:
1. Ishikawa diagram
2. Pareto chart
3. Check sheet
4. Control chart
5. Flowchart
6. Histogram
7. Scatter diagram
These tools have been widely used in most quality management organizations, and a number of extensions and improvements to them have been proposed and adopted.
Pareto chart
A Pareto chart is a special type of bar chart where the values being plotted are arranged in descending order. It is named for Vilfredo Pareto, and its use in quality assurance was popularized by Joseph M. Juran and Kaoru Ishikawa.
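As a rough illustration of how such a chart can be produced, here is a sketch using matplotlib (assumed to be installed); the categories and counts are invented, loosely echoing the late-for-work example in the figure below.

# Minimal Pareto chart sketch: descending bars plus a cumulative-percentage line.
import matplotlib.pyplot as plt

reasons = ["Traffic", "Child care", "Overslept", "Weather", "Other"]
counts = [45, 20, 15, 10, 5]

# Sort in descending order, then compute the cumulative percentage.
pairs = sorted(zip(reasons, counts), key=lambda p: p[1], reverse=True)
labels = [p[0] for p in pairs]
values = [p[1] for p in pairs]
total = float(sum(values))
cumulative, running = [], 0
for v in values:
    running += v
    cumulative.append(100.0 * running / total)

fig, ax = plt.subplots()
ax.bar(range(len(values)), values)          # left axis: frequency bars
ax.set_xticks(range(len(labels)))
ax.set_xticklabels(labels, rotation=45)
ax.set_ylabel("Frequency")

ax2 = ax.twinx()                            # right axis: cumulative percentage
ax2.plot(range(len(values)), cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 110)

plt.tight_layout()
plt.show()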

FIGURE 4.5

Simple example of a Pareto chart using hypothetical data showing the relative frequency of reasons for arriving late at work.
The Pareto chart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. Typically the left vertical axis is frequency of occurrence, but it can alternatively represent cost or another important unit of measure. The right vertical axis is the cumulative percentage of the total number of occurrences, total cost, or total of the particular unit of measure. The purpose is to highlight the most important among a (typically large) set of factors. In quality control, the Pareto chart often represents the most common sources of defects, the highest occurring type of defect, or the most frequent reasons for customer complaints, etc.
Check sheet
The check sheet is a simple document that is used for collecting data in real-time and at the location where the data is generated. The document is typically a blank form that is designed for the quick, easy, and efficient recording of the desired information, which can be either quantitative or qualitative. When the information is quantitative, the check sheet is sometimes called a tally sheet. A defining characteristic of a check sheet is that data is recorded by making marks (checks) on it. A typical check sheet is divided into regions, and marks made in different regions have different significance. Data is read by observing the location and number of marks on the sheet.
The five basic types of check sheets are:
1. Classification: a trait such as a defect or failure mode must be classified into a category.
2. Location: the physical location of a trait is indicated on a picture of a part or item being evaluated.
3. Frequency: the presence or absence of a trait, or a combination of traits, is indicated; the number of occurrences of a trait on a part can also be indicated.
4. Measurement scale: a measurement scale is divided into intervals, and measurements are indicated by checking an appropriate interval.
5. Check list: the items to be performed for a task are listed so that, as each is accomplished, it can be indicated as having been completed.

An example of a simple quality control checksheet The check sheet is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. See Quality Management Glossary.
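For a frequency-type check sheet, the tallying can be mimicked in a few lines of Python; the defect categories and the stream of observations below are purely illustrative.

# A frequency-type check sheet kept as a simple tally of observed defects.
from collections import Counter

observations = ["scratch", "dent", "scratch", "misalignment", "scratch", "dent"]

tally = Counter(observations)          # one "check mark" per observation
for category, marks in tally.most_common():
    print("%-14s %s (%d)" % (category, "|" * marks, marks))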
Control chart
The control chart, also known as the Shewhart chart or process-behaviour chart, is a statistical tool intended to assess the nature of variation in a process and to facilitate forecasting and management. A control chart is a more specific kind of run chart. The control chart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram.
Performance of control charts
When a point falls outside of the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, then that cause should be eliminated if possible. It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. Since the control limits are evaluated each time a point is added to the chart, it readily follows that every control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.
Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart. It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart and the CUSUM chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.
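A minimal sketch of computing 3-sigma limits and the in-control ARL in Python follows; for simplicity it estimates sigma from the sample standard deviation rather than the moving-range estimate a Shewhart individuals chart would normally use, and the data are invented.

# Sketch: 3-sigma control limits on a small data set, plus the in-control ARL.
data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.0]

n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5   # sample std dev

ucl = mean + 3 * std    # upper control limit
lcl = mean - 3 * std    # lower control limit

out_of_control = [x for x in data if x > ucl or x < lcl]
print("Mean %.3f, UCL %.3f, LCL %.3f" % (mean, ucl, lcl))
print("Points outside the limits:", out_of_control)

# False-alarm probability for 3-sigma limits and the resulting in-control ARL.
alpha = 0.0027
print("In-control ARL ~ %.1f samples" % (1 / alpha))   # about 370.4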

Criticisms
Several authors have criticised the control chart on the grounds that it violates the likelihood principle. However, the principle is itself controversial, and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak. Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because that average usually follows a geometric distribution, which has a high variability.
Flowchart

FIGURE 4.6 A simple flowchart for what to do if a lamp doesn't work
A flowchart (also spelled flow-chart and flow chart) is a schematic representation of an algorithm or a process. The flowchart is one of the seven basic tools of quality control, which include the histogram, Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. See Quality Management
Glossary. Flowcharts are commonly used in business/economic presentations to help the audience visualize the content better, or to find flaws in the process.
Symbols
A typical flowchart from older Computer Science textbooks may have the following kinds of symbols:
1. Start and end symbols, represented as lozenges, ovals or rounded rectangles, usually containing the word Start or End, or another phrase signaling the start or end of a process, such as submit enquiry or receive product.
2. Arrows, showing what is called flow of control in computer science. An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to.
3. Processing steps, represented as rectangles. Examples: Add 1 to X; replace identified part; save changes or similar.
4. Input/Output, represented as a parallelogram. Examples: Get X from the user; display X.
5. Conditional (or decision), represented as a diamond (rhombus). These typically contain a Yes/No question or True/False test. This symbol is unique in that it has two arrows coming out of it, usually from the bottom point and right point, one corresponding to Yes or True, and one corresponding to No or False. The arrows should always be labeled. More than two arrows can be used, but this is normally a clear indicator that a complex decision is being taken, in which case it may need to be broken down further, or replaced with the pre-defined process symbol.
6. A number of other symbols that have less universal currency, such as:
   - A Document, represented as a rectangle with a wavy base;
   - A Manual input, represented by a rectangle with the top irregularly sloping up from left to right (for example, to signify data entry from a form);
   - A Manual operation, represented by a trapezoid with the longest parallel side uppermost, to represent an operation or adjustment to the process that can only be made manually;
   - A Data File, represented by a cylinder.
Note: All process symbols within a flowchart should be numbered. Normally a number is inserted inside the top of the shape to indicate which step the process is within the flowchart.
Flowcharts may contain other symbols, such as connectors, usually represented as circles, to represent converging paths in the flowchart. Circles will have more than one arrow coming into them but only one going out. Some flowcharts may just have an arrow point to another arrow instead. These are useful to represent an iterative process (what in Computer Science is called a loop). A loop may, for example, consist of a connector where control first enters, processing steps, a conditional with one arrow exiting the loop, and one going back to the connector. Off-page connectors are often used to signify a connection to a (part of a) process held on another sheet or screen. It is important to remember to keep these connections logical in order. All processes should flow from top to bottom and left to right.
A flowchart is described as cross-functional when the page is divided into different lanes describing the control of different organizational units. A symbol appearing in a particular lane is within the control of that organizational unit. This technique allows the analyst to locate the responsibility for performing an action or making a decision correctly, showing the relationship between different organizational units with responsibility over a single process.
Histogram

FIGURE 4.7 Example of a histogram of 100 normally distributed random values.



In statistics, a histogram is a graphical display of tabulated frequencies. A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent. The word histogram is derived from histos and gramma in Greek, the first meaning web or mast and the second meaning drawing, record or writing. A histogram of something is thus, etymologically speaking, a drawing of the web of this something. The histogram is one of the seven basic tools of quality control, which also include the Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. See also the glossary of quality management.
Examples
As an example we consider data collected by the U.S. Census Bureau on time to travel to work (2000 census, Table 5). The census found that there were 124 million people who work outside of their homes. People were asked how long it takes them to get to work, and their responses were divided into categories: less than 5 minutes, more than 5 minutes and less than 10, more than 10 minutes and less than 15, and so on. The tables show the numbers of people per category in thousands, so that 4,180 means 4,180,000. The data in the following tables are displayed graphically by histograms. An interesting feature of both diagrams is the spike in the 30 to 35 minutes category. It seems likely that this is an artifact: half an hour is a common unit of informal time measurement, so people whose travel times were perhaps a little less than, or a little greater than, 30 minutes might be inclined to answer 30 minutes. This rounding is a common phenomenon when collecting data from people.

FIGURE 4.8
Histogram of travel time, US 2000 census. Area under the curve equals the total number of cases. This diagram uses Quantity/width from the table.
TABLE 4.8 Data by absolute numbers
Interval   Width   Quantity   Quantity/width
0          5       4180       836
5          5       13687      2737
10         5       18618      3723
15         5       19634      3926
20         5       17981      3596
25         5       7190       1438
30         5       16369      3273
35         5       3212       642
40         5       4122       824
45         15      9200       613
60         30      6461       215
90         60      3435       57

This histogram shows the number of cases per unit interval so that the height of each bar is equal to the proportion of total people in the survey who fall into that category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers.

FIGURE 4.9

Histogram of travel time, US 2000 census. Area under the curve equals 1. This diagram uses Quantity/total/width from the table.
TABLE 4.9 Data by proportion
Marks Scored   No. of Students
10-20          1
20-30          1
30-40          4
40-50          4
50-60          8
60-70          7
70-80          11
80-90          6
90-100         5

This histogram differs from the first only in the vertical scale. The height of each bar is the decimal percentage of the total that each category represents, and the total area of all the bars is equal to 1, the decimal equivalent of 100%. The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram.
Mathematical Definition
In a more general mathematical sense, a histogram is simply a mapping m_i that counts the number of observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be the total number of observations and k be the total number of bins, the histogram m_i meets the following condition:
n = m_1 + m_2 + ... + m_k

Cumulative Histogram
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. That is, the cumulative histogram M_i of a histogram m_i is defined as:
M_i = m_1 + m_2 + ... + m_i,  for i = 1, ..., k
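A small Python sketch of the mappings m_i and M_i may make the definitions concrete; the data values and bin edges below are arbitrary.

# Sketch of the histogram mapping m_i and the cumulative histogram M_i.
data = [2, 3, 5, 7, 8, 11, 13, 14, 17, 19]
edges = [0, 5, 10, 15, 20]          # k = 4 bins: [0,5), [5,10), [10,15), [15,20)

k = len(edges) - 1
m = [0] * k
for x in data:
    for i in range(k):
        if edges[i] <= x < edges[i + 1]:
            m[i] += 1
            break

M = []                              # cumulative histogram M_i = m_1 + ... + m_i
running = 0
for count in m:
    running += count
    M.append(running)

print("m_i:", m)    # [2, 3, 3, 2]
print("M_i:", M)    # [2, 5, 8, 10]  (the last value equals n)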
Number of bins and width
There is no best number of bins, and different bin sizes can reveal different features of the data. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. You should always experiment with bin widths before choosing one (or more) that illustrate the salient features in your data. The number of bins k can be calculated directly, or from a suggested bin width h:
k = ⌈(max x - min x) / h⌉
where n is the number of observations in the sample, and the braces indicate the ceiling function.
Sturges' formula:
k = ⌈log2 n⌉ + 1
which implicitly bases the bin sizes on the range of the data, and can perform poorly if n < 30.
Scott's choice:
h = 3.5 s / n^(1/3)
where h is the common bin width, and s is the sample standard deviation.
Freedman-Diaconis' choice:
h = 2 IQR(x) / n^(1/3)
which is based on the interquartile range.
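The three rules can be compared on a data set with a short Python sketch; the sample values are arbitrary, and the interquartile range here is taken crudely from two order statistics rather than by interpolation.

import math

# Sketch of the bin-count / bin-width rules above on a made-up data set.
data = sorted([4.1, 4.7, 5.0, 5.2, 5.6, 5.9, 6.3, 6.8, 7.1, 7.4, 8.0, 9.2])
n = len(data)

def bins_from_width(h):
    """k = ceil((max - min) / h)."""
    return int(math.ceil((data[-1] - data[0]) / h))

# Sturges' formula: k = ceil(log2 n) + 1
k_sturges = int(math.ceil(math.log(n, 2))) + 1

# Scott's choice: h = 3.5 * s / n^(1/3), with s the sample standard deviation.
mean = sum(data) / n
s = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
h_scott = 3.5 * s / n ** (1.0 / 3)

# Freedman-Diaconis choice: h = 2 * IQR / n^(1/3) (crude quartiles here).
q1, q3 = data[n // 4], data[(3 * n) // 4]
h_fd = 2 * (q3 - q1) / n ** (1.0 / 3)

print("Sturges: k =", k_sturges)
print("Scott:   h = %.3f -> k = %d" % (h_scott, bins_from_width(h_scott)))
print("F-D:     h = %.3f -> k = %d" % (h_fd, bins_from_width(h_fd)))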

4.10 SEVEN NEW MANAGEMENT TOOLS
Seven New Management and Planning Tools
In 1976, the Union of Japanese Scientists and Engineers (JUSE) saw the need for tools to promote innovation, communicate information and successfully plan major
projects. A team researched and developed the seven new quality control tools, often called the seven management and planning (MP) tools, or simply the seven management tools. Not all the tools were new, but their collection and promotion were. The seven MP tools, listed in an order that moves from abstract analysis to detailed planning, are: 1. Affinity diagram: organizes a large number of ideas into their natural relationships. 2. Relations diagram: shows cause-and-effect relationships and helps you analyze the natural links between different aspects of a complex situation. 3. Tree diagram: breaks down broad categories into finer and finer levels of detail, helping you move your thinking step by step from generalities to specifics. 4. Matrix diagram: shows the relationship between two, three or four groups of information and can give information about the relationship, such as its strength, the roles played by various individuals, or measurements. 5. Matrix data analysis: a complex mathematical technique for analyzing matrices, often replaced in this list by the similar prioritization matrix. One of the most rigorous, careful and time-consuming of decision-making tools, a prioritization matrix is an L-shaped matrix that uses pairwise comparisons of a list of options to a set of criteria in order to choose the best option(s). 6. Arrow diagram: shows the required order of tasks in a project or process, the best schedule for the entire project, and potential scheduling and resource problems and their solutions. 7. Process decision program chart (PDPC): systematically identifies what might go wrong in a plan under development. As problems have increased in complexity, more tools have been developed to encourage employees to participate in the problem-solving process. Lets review the seven new quality management tools being used today. The affinity diagram is used to generate ideas, then organize these ideas in a logical manner. The first step in developing an affinity diagram is to post the problem (or issue) where everyone can see it. Next, team members write their ideas for solving the problem on cards and post them below the problem. Seeing the ideas of other members of the team helps everyone generate new ideas. As the idea generation phase slows, the team sorts the ideas into groups, based on patterns or common themes. Finally, descriptive title cards are created to describe each group of ideas.

The interrelationship digraph allows teams to look for cause and effect relationships between pairs of elements. The team starts with ideas that seem to be related and determines if one causes the other. If idea 1 causes idea 5, then an arrow is drawn from 1 to 5. If idea 5 causes idea 1, then the arrow is drawn from 5 to 1. If no cause is ascertained, no arrow is drawn. When the exercise is finished, it is obvious that ideas with many outgoing arrows cause things to happen, while ideas with many incoming arrows result from other things. A tree diagram assists teams in exploring all the options available to solve a problem, or accomplish a task. The tree diagram actually resembles a tree when complete. The trunk of the tree is the problem or task. Branches are major options for solving the problem, or completing the task. Twigs are elements of the options. Leaves are the means of accomplishing the options. The prioritization matrix helps teams select from a series of options based on weighted criteria. It can be used after options have been generated, such as in a tree diagram exercise. A prioritization matrix is helpful in selecting which option to pursue. The prioritization matrix adds weights (values) to each of the selection criteria to be used in deciding between options. For example, if you need to install a new software system to better track quality data, your selection criteria could be cost, leadtime, reliability, and upgrades. A simple scale, say 1 through 5, could be used to prioritize the selection criteria being used. Next, you would rate the software options for each of these selection criteria and multiply that rating by the criteria weighting. The matrix diagram allows teams to describe relationships between lists of items. A matrix diagram can be used to compare the results of implementing a new manufacturing process to the needs of a customer. For example, if the customers main needs are low cost products, short leadtimes, and products that are durable; and a change in the manufacturing process can provide faster throughput, larger quantities, and more part options; then the only real positive relationship is the shorter leadtime to the faster throughput. The other process outcomeslarger quantities and more optionsare of little value to the customer. This matrix diagram, relating customer needs to the manufacturing process changes, would be helpful in deciding which manufacturing process to implement. The process decision program chart can help a team identify things that could go wrong, so corrective action can be planned in advance. The process decision program chart starts with the problem. Below this, major issues related to the problem are listed.
Below the issues, associated tasks are listed. For each task, the team considers what could go wrong and records these possibilities on the chart. Next, the team considers actions to prevent things from going wrong. Finally, the team selects which preventive actions to take from all the ones listed. The activity network diagram graphically shows total completion time, the required sequence of events, tasks that can be done simultaneously, and critical tasks that need monitoring. In this respect, an activity network diagram is similar to the traditional PERT chart used for activity measurement and planning. Affinity Diagram Also called as affinity chart, K-J method The affinity diagram organizes a large number of ideas into their natural relationships. This method taps a teams creativity and intuition. It was created in the 1960s by Japanese anthropologist Jiro Kawakita. When to Use y When you are confronted with many facts or ideas in apparent chaos y When issues seem too large and complex to grasp y When group consensus is necessary Typical situations are: 1 2 After a brainstorming exercise When analyzing verbal data, such as survey results.

Procedure Materials needed: sticky notes or cards, marking pens, large work surface (wall, table, or floor). 1. Record each idea with a marking pen on a separate sticky note or card. (During a brainstorming session, write directly onto sticky notes or cards if you suspect you will be following the brainstorm with an affinity diagram.) Randomly spread notes on a large work surface so all notes are visible to everyone. The entire team gathers around the notes and participates in the next steps. 2. It is very important that no one talk during this step. Look for ideas that seem to be related in some way. Place them side by side. Repeat until all notes are grouped. Its okay to have loners that dont seem to fit a group. Its all right
to move a note someone else has already moved. If a note seems to belong in two groups, make a second note. 3. You can talk now. Participants can discuss the shape of the chart, any surprising patterns, and especially reasons for moving controversial notes. A few more changes may be made. When ideas are grouped, select a heading for each group. Look for a note in each grouping that captures the meaning of the group. Place it at the top of the group. If there is no such note, write one. Often it is useful to write or highlight this note in a different color. 4. Combine groups into supergroups if appropriate. Example The ZZ-400 manufacturing team used an affinity diagram to organize its list of potential performance indicators. Shows the list team members brainstormed. Because the team works a shift schedule and members could not meet to do the affinity diagram together, they modified the procedure.

Figure 4.10 They wrote each idea on a sticky note and put all the notes randomly on a rarely used door. Over several days, everyone reviewed the notes in their spare time
245 Anna University Chennai

DBA 1656

QUALITY MANAGEMENT

NOTES

and moved the notes into related groups. Some people reviewed the evolving pattern several times. After a few days, the natural grouping shown in figure B had emerged. Notice that one of the notes, Safety, has become part of the heading for its group. The rest of the headings were added after the grouping emerged. Five broad areas of performance were identified: product quality, equipment maintenance, manufacturing cost, production volume, and safety and environmental.

FIGURE 4.10

Anna University Chennai

246

DBA 1656

QUALITY MANAGEMENT

CONSIDERATIONS y The affinity diagram process lets a group move beyond its habitual thinking and preconceived categories. This technique accesses the great knowledge and understanding residing untapped in our intuition. y Very important Do nots: Do not place the notes in any order. Do not determine categories or headings in advance. Do not talk during step 2. (This is hard for some people!) y Allow plenty of time for step 2. You can, for example, post the randomly-arranged notes in a public place and allow grouping to happen over several days. y Most groups that use this technique are amazed at how powerful and valuable a tool it is. Try it once with an open mind and youll be another convert. y Use markers. With regular pens, it is hard to read ideas from any distance. Relations Diagram Also called as interrelationship diagram or digraph, network diagram Description The relations diagram shows cause-and-effect relationships. Just as importantly, the process of creating a relations diagram helps a group analyze the natural links between different aspects of a complex situation. When to Use y When trying to understand links between ideas or cause-and-effect relationships, such as when trying to identify an area of greatest impact for improvement. y When a complex issue is being analyzed for causes. y When a complex solution is being implemented. y After generating an affinity diagram, cause-and-effect diagram or tree diagram, to more completely explore the relations of ideas. Basic Procedure Materials needed: sticky notes or cards, large paper surface (newsprint or two flipchart pages taped together), marking pens, tape. 1. Write a statement defining the issue that the relations diagram will explore. Write it on a card or sticky note and place it at the top of the work surface.

NOTES

247

Anna University Chennai

DBA 1656

QUALITY MANAGEMENT

NOTES

2. Brainstorm ideas about the issue and write them on cards or notes. If another tool has preceded this one, take the ideas from the affinity diagram, the most detailed row of the tree diagram or the final branches on the fishbone diagram. You may want to use these ideas as starting points and brainstorm additional ideas. 3. Place one idea at a time on the work surface and ask: Is this idea related to any others? Place ideas that are related near the first. Leave space between cards to allow for drawing arrows later. Repeat until all cards are on the work surface. 4. For each idea, ask, Does this idea cause or influence any other idea? Draw arrows from each idea to the ones it causes or influences. Repeat the question for every idea. 5. Analyze the diagram:

Count the arrows in and out for each idea. Write the counts at the bottom of each box. The ones with the most arrows are the key ideas. Note which ideas have primarily outgoing (from) arrows. These are basic causes.

6. Note which ideas have primarily incoming (to) arrows. These are final effects that also may be critical to address. Be sure to check whether ideas with fewer arrows also are key ideas. The number of arrows is only an indicator, not an absolute rule. Draw bold lines around the key ideas. Example A computer support group is planning a major project: replacing the mainframe computer. The group drew a relations diagram (see figure below) to sort out a confusing set of elements involved in this project.

FIGURE 4.11 Computer replacement project is the card identifying the issue. The ideas that were brainstormed were a mixture of action steps, problems, desired results and less-desirable effects to be handled. All these ideas went onto the diagram together. As the questions were asked about relationships and causes, the mixture of ideas began to sort itself out. After all the arrows were drawn, key issues became clear. They are outlined with bold lines.

New software has one arrow in and six arrows out. Install new mainframe has one arrow in and four out. Both ideas are basic causes. Service interruptions and increased processing cost both have three arrows in, and the group identified them as key effects to avoid.
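The arrow-counting step lends itself to a small script. The sketch below tallies incoming and outgoing arrows for a hypothetical set of cause-effect pairs that only loosely mirrors the example above, so the idea names and the simple role labels should not be read as part of the method.

# Sketch: count in/out arrows per idea in a relations diagram to spot
# basic causes (mostly outgoing) and key effects (mostly incoming).
arrows = [  # (cause, effect), hypothetical
    ("New software", "Install new mainframe"),
    ("New software", "Training"),
    ("New software", "Service interruptions"),
    ("Install new mainframe", "Service interruptions"),
    ("Install new mainframe", "Increased processing cost"),
]

out_count, in_count = {}, {}
for cause, effect in arrows:
    out_count[cause] = out_count.get(cause, 0) + 1
    in_count[effect] = in_count.get(effect, 0) + 1

ideas = set(out_count) | set(in_count)
for idea in sorted(ideas):
    o, i = out_count.get(idea, 0), in_count.get(idea, 0)
    role = "basic cause" if o > i else "key effect" if i > o else "intermediate"
    print("%-26s out=%d in=%d -> %s" % (idea, o, i, role))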

Tree Diagram Also called: systematic diagram, tree analysis, analytical tree, hierarchy diagram The tree diagram starts with one item that branches into two or more, each of which branch into two or more, and so on. It looks like a tree, with trunk and multiple branches. It is used to break down broad categories into finer and finer levels of detail. Developing the tree diagram helps you move your thinking step by step from generalities to specifics.

When to Use

When an issue is known or being addressed in broad generalities and you must move to specific details, such as when developing logical steps to achieve an objective. When developing actions to carry out a solution or other plan. When analyzing processes in detail. When probing for the root cause of a problem. When evaluating implementation issues for several potential solutions. After an affinity diagram or relations diagram has uncovered key issues. As a communication tool, to explain details to others.

Procedure 1. Develop a statement of the goal, project, plan, problem or whatever is being studied. Write it at the top (for a vertical tree) or far left (for a horizontal tree) of your work surface. 2. Ask a question that will lead you to the next level of detail. For example:

For a goal, action plan or work breakdown structure: What tasks must be done to accomplish this? or How can this be accomplished? For root-cause analysis: What causes this? or Why does this happen? For gozinto chart: What are the components? (Gozinto literally comes from the phrase What goes into it?

Brainstorm all possible answers. If an affinity diagram or relationship diagram has been done previously, ideas may be taken from there. Write each idea in a line below (for a vertical tree) or to the right of (for a horizontal tree) the first statement. Show links between the tiers with arrows.

Do a necessary and sufficient check. Are all the items at this level necessary for the one on the level above? If all the items at this level were present or accomplished, would they be sufficient for the one on the level above? Each of the new idea statements now becomes the subject: a goal, objective or problem statement. For each one, ask the question again
to uncover the next level of detail. Create another tier of statements and show the relationships to the previous tier of ideas with arrows. Do a necessary and sufficient check for each set of items.
Continue to turn each new idea into a subject statement and ask the question. Do not stop until you reach fundamental elements: specific actions that can be carried out, components that are not divisible, root causes. Do a necessary and sufficient check of the entire diagram. Are all the items necessary for the objective? If all the items were present or accomplished, would they be sufficient for the objective?

The district has three fundamental goals. The first, to improve academic performance, is partly shown in the figure below. District leaders have identified two strategic objectives that, when accomplished, will lead to improved academic performance: academic achievement and college admissions.

FIGURE 4.12 Tree diagram example Lag indicators are long-term and results-oriented. The lag indicator for academic achievement is Regents diploma rate: the percent of students receiving a state diploma by passing eight Regents exams. Lead indicators are short-term and process-oriented. Starting in 2000, the lead indicator for the Regents diploma rate was performance on new fourth and eighth grade state tests.
Finally, annual projects are defined, based on cause-and-effect analysis, that will improve performance. In 2000-2001, four projects were accomplished to improve academic achievement. Thus this tree diagram is an interlocking series of goals and indicators, tracing the causes of systemwide academic performance first through high school diploma rates, then through lower grade performance, and back to specific improvement projects.
Matrix Diagram
Also called as matrix, matrix chart. The matrix diagram shows the relationship between two, three or four groups of information. It also can give information about the relationship, such as its strength, the roles played by various individuals or measurements. Six differently shaped matrices are possible: L, T, Y, X, C and roof-shaped, depending on how many groups must be compared.
When to Use each Shape
Table 1 summarizes when to use each type of matrix. In the examples, matrix axes have been shaded to emphasize the letter that gives each matrix its name.

An L-shaped matrix relates two groups of items to each other (or one group to itself).

A T-shaped matrix relates three groups of items: groups B and C are each related to A. Groups B and C are not related to each other.

A Y-shaped matrix relates three groups of items. Each group is related to the other two in a circular fashion.

A C-shaped matrix relates three groups of items all together simultaneously, in 3-D.

An X-shaped matrix relates four groups of items. Each group is related to two others in a circular fashion.

A roof-shaped matrix relates one group of items to itself. It is usually used along with an L- or T-shaped matrix.

Table 1: When to use differently-shaped matrices
L-shaped      2 groups   A <-> B (or A <-> A)
T-shaped      3 groups   B <-> A <-> C, but not B <-> C
Y-shaped      3 groups   A <-> B <-> C <-> A
C-shaped      3 groups   All three simultaneously (3-D)
X-shaped      4 groups   A <-> B <-> C <-> D <-> A, but not A <-> C or B <-> D
Roof-shaped   1 group    A <-> A, when also A <-> B in L or T

L-Shaped Matrix This L-shaped matrix summarizes customers requirements. The team placed numbers in the boxes to show numerical specifications and used check marks to show choice of packaging. The L-shaped matrix actually forms an upside-down L. This is the most basic and most common matrix format.

Customer Requirements
                     Customer D   Customer M   Customer R   Customer T
Purity %             > 99.2       > 99.2       > 99.4       > 99.0
Trace metals (ppm)   < 5          < 5          < 10         -
Water (ppm)          < 10         -            < 10         < 25
Viscosity (cp)       20-35        20-30        10-50        15-35
Color                < 10         < 10         < 15         < 10
Drum / Truck / Railcar: the choice of packaging is indicated with check marks in the original figure.

T-Shaped Matrix This T-shaped matrix relates product models (group A) to their manufacturing locations (group B) and to their customers (group C). Examining the matrix in different ways reveals different information. For example, concentrating on model A, we see that it is produced in large volume at the Texas plant and in small volume at the Alabama plant. Time Inc. is the major customer for model A,
while Arlo Co. buys a small amount. If we choose to focus on the customer rows, we learn that only one customer, Arlo, buys all four models. Zig buys just one. Time makes large purchases of A and D, while Lyle is a relatively minor customer. ProductsCustomersManufacturing Locations

Y-Shaped Matrix This Y-shaped matrix shows the relationships between customer requirements, internal process metrics and the departments involved. Symbols show the strength of the relationships: primary relationships, such as the manufacturing departments responsibility for production capacity; secondary relationships, such as the link between product availability and inventory levels; minor relationships, such as the distribution departments responsibility for order lead time; and no relationship, such as between the purchasing department and on-time delivery. The matrix tells an interesting story about on-time delivery. The distribution department is assigned primary responsibility for that customer requirement. The two metrics most strongly related to on-time delivery are inventory levels and order lead time. Of the two, distribution has only a weak relationship with order lead time and none with inventory levels. Perhaps the responsibility for on-time delivery needs to be reconsidered.


FIGURE 4.13 Responsibilities for Performance to Customer Requirements C-Shaped Matrix Think of C meaning cube. Because this matrix is three-dimensional, it is difficult to draw and infrequently used. If it is important to compare three groups simultaneously, consider using a three-dimensional model or computer software that can provide a clear visual image. This figure shows one point on a C-shaped matrix relating products, customers and manufacturing locations. Zig Companys model B is made at the Mississippi plant.

FIGURE 4.14

X-Shaped Matrix This figure extends the T-shaped matrix example into an X-shaped matrix by including the relationships of freight lines with the manufacturing sites they serve and the customers who use them. Each axis of the matrix is related to the two adjacent ones, but not to the one across. Thus, the product models are related to the plant sites and to the customers, but not to the freight lines. A lot of information can be contained in an X-shaped matrix. In this one, we can observe that Red Lines and Zip Inc., which seem to be minor carriers based on volume, are the only carriers that serve Lyle Co. Lyle doesnt buy much, but it and Arlo are the only customers for model C. Model D is made at three locations, while the other models are made at two. What other observations can you make? Manufacturing SitesProductsCustomersFreight Lines

Roof-Shaped Matrix The roof-shaped matrix is used with an L- or T-shaped matrix to show one group of items relating to itself. It is most commonly used with a house of quality, where it forms the roof of the house. In the figure below, the customer requirements are related to one another. For example, a strong relationship links color and trace metals, while viscosity is unrelated to any of the other requirements.


FIGURE 4.15 Frequently Used Symbols

Arrow Diagram Also called as activity network diagram, network diagram, activity chart, node diagram, CPM (critical path method) chart. The arrow diagram shows the required order of tasks in a project or process, the best schedule for the entire project, and potential scheduling and resource problems and their solutions. The arrow diagram lets you calculate the critical path of the project. This is the flow of critical steps where delays will affect the timing of the entire project and where addition of resources can speed up the project. When to Use

- When scheduling and monitoring tasks within a complex project or process with interrelated tasks and resources
- When you know the steps of the project or process, their sequence and how long each step takes

- When the project schedule is critical, with serious consequences for completing the project late or significant advantage to completing the project early

Procedure

Materials needed: sticky notes or cards, marking pens, large writing surface (newsprint or flipchart pages).

Drawing the Network

1. List all the necessary tasks in the project or process. One convenient method is to write each task on the top half of a card or sticky note. Across the middle of the card, draw a horizontal arrow pointing right.
2. Determine the correct sequence of the tasks. Do this by asking three questions for each task:
   - Which tasks must happen before this one can begin?
   - Which tasks can be done at the same time as this one?
   - Which tasks should happen immediately after this one?

It can be useful to create a table with four columns: prior tasks, this task, simultaneous tasks, following tasks.
3. Diagram the network of tasks. If you are using notes or cards, arrange them in sequence on a large piece of paper. Time should flow from left to right and concurrent tasks should be vertically aligned. Leave space between the cards.
4. Between each two tasks, draw circles for events. An event marks the beginning or end of a task. Thus, events are nodes that separate tasks.
5. Look for three common problem situations and redraw them using dummies or extra events. A dummy is an arrow drawn with dotted lines, used to separate tasks that would otherwise start and stop with the same events or to show logical sequence. Dummies are not real tasks. Problem situations:
   - Two simultaneous tasks start and end at the same events. Solution: Use a dummy and an extra event to separate them. In Figure 1, event 2 and the dummy between 2 and 3 have been added to separate tasks A and B.

   - Task C cannot start until both tasks A and B are complete; a fourth task, D, cannot start until A is complete, but need not wait for B (see Figure 2). Solution: Use a dummy between the end of task A and the beginning of task C.

   - A second task can be started before part of a first task is done. Solution: Add an extra event where the second task can begin and use multiple arrows to break the first task into two subtasks. In Figure 3, event 2 was added, splitting task A.

Figure 1: Dummy separating simultaneous tasks

Figure 2: Dummy keeping sequence correct

Figure 3: Using an extra event

6. When the network is correct, label all events in sequence with event numbers in the circles. It can be useful to label all tasks in sequence, using letters.

Scheduling: Critical Path Method (CPM)

7. Determine task times: the best estimate of the time that each task should require. Use one measuring unit (hours, days or weeks) throughout, for consistency. Write the time on each task's arrow.
8. Determine the critical path, the longest path from the beginning to the end of the project. Mark the critical path with a heavy line or color. Calculate the length of the critical path: the sum of all the task times on the path.
9. Calculate the earliest times each task can start and finish, based on how long preceding tasks take. These are called earliest start (ES) and earliest finish (EF). Start with the first task, where ES = 0, and work forward. Draw a square divided into four quadrants, as in the following table. Write the ES in the top left box and the EF in the top right. For each task:
   - Earliest start (ES) = the largest EF of the tasks leading into this one
   - Earliest finish (EF) = ES + task time for this task

Table 4.12 Arrow diagram time box

   ES (Earliest start) | EF (Earliest finish)
   LS (Latest start)   | LF (Latest finish)

10. Calculate the latest times each task can start and finish without upsetting the project schedule, based on how long later tasks will take. These are called latest start (LS) and latest finish (LF). Start from the last task, where the latest finish is the project deadline, and work backwards. Write the LS in the lower left box and the LF in the lower right box.
   - Latest finish (LF) = the smallest LS of all tasks immediately following this one
   - Latest start (LS) = LF - task time for this task

11. Calculate slack times for each task and for the entire project. Total slack is the time a job could be postponed without delaying the project schedule:

   Total slack = LS - ES = LF - EF

Free slack is the time a task could be postponed without affecting the early start of any job following it:

   Free slack = (the earliest ES of all tasks immediately following this one) - EF

An example of a completed arrow diagram is shown in the figure below.
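
The forward and backward passes in steps 9 to 11 can be automated directly from the task list. The sketch below is not from the text; the task names, durations and dependencies are assumptions for a small example, and the tasks are assumed to be listed in an order where every predecessor appears before its successors.

```python
# A minimal CPM sketch (illustrative tasks): forward pass for ES/EF,
# backward pass for LS/LF, then total slack (zero slack = critical path).
tasks = {            # task: (duration, [predecessor tasks])
    "A": (3, []),
    "B": (2, []),
    "C": (4, ["A", "B"]),
    "D": (2, ["A"]),
    "E": (1, ["C", "D"]),
}

ES, EF = {}, {}                                   # forward pass
for name, (duration, preds) in tasks.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + duration

project_end = max(EF.values())
LS, LF = {}, {}                                   # backward pass
for name in reversed(list(tasks)):
    duration, _ = tasks[name]
    followers = [f for f, (_, preds) in tasks.items() if name in preds]
    LF[name] = min((LS[f] for f in followers), default=project_end)
    LS[name] = LF[name] - duration

for name in tasks:
    slack = LS[name] - ES[name]                   # equals LF - EF
    print(name, "ES", ES[name], "EF", EF[name], "LS", LS[name], "LF", LF[name],
          "slack", slack, "<- critical" if slack == 0 else "")
```

In this illustrative network the zero-slack tasks A, C and E form the critical path, so the project length is 3 + 4 + 1 = 8 time units.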

FIGURE 4.16

Process Decision Program Chart

The process decision program chart (PDPC) systematically identifies what might go wrong in a plan under development. Countermeasures are developed to prevent or
offset those problems. By using PDPC, you can either revise the plan to avoid the problems or be ready with the best response when a problem occurs.

When to Use

- Before implementing a plan, especially when the plan is large and complex
- When the plan must be completed on schedule
- When the price of failure is high

Procedure

1. Obtain or develop a tree diagram of the proposed plan. This should be a high-level diagram showing the objective, a second level of main activities and a third level of broadly defined tasks to accomplish the main activities.
2. For each task on the third level, brainstorm what could go wrong. Review all the potential problems and eliminate any that are improbable or whose consequences would be insignificant. Show the problems as a fourth level linked to the tasks.
3. For each potential problem, brainstorm possible countermeasures. These might be actions or changes to the plan that would prevent the problem, or actions that would remedy it once it occurred. Show the countermeasures as a fifth level, outlined in clouds or jagged lines.
4. Decide how practical each countermeasure is. Use criteria such as cost, time required, ease of implementation and effectiveness. Mark impractical countermeasures with an X and practical ones with an O, as in the data sketch below.
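
The sketch below is not part of the text; it shows one possible way, under assumed names, to hold the task, problem and countermeasure levels of a PDPC as data so that the practical countermeasures can be listed back into the plan. The entries anticipate the chronic illness management example discussed later in this section.

```python
# A minimal PDPC sketch (assumed entries): task -> problem -> countermeasures,
# each countermeasure marked "O" (practical) or "X" (impractical).
pdpc = {
    "Patient goal-setting": {
        "Backsliding": [("Give each patient a buddy or sponsor", "O")],
        "Inappropriate goals chosen": [("Train nurses to counsel on goal choice", "O"),
                                       ("Have a committee approve every goal", "X")],
    },
    "Staff rollout": {
        "Low staff buy-in": [("Arrange visits to a clinic already running the program", "O")],
    },
}

# Pull out the practical countermeasures that should be folded into the plan.
for task, problems in pdpc.items():
    for problem, countermeasures in problems.items():
        for action, mark in countermeasures:
            if mark == "O":
                print(f"{task} / {problem}: {action}")
```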

Here are some questions that can be used to identify problems:


- What inputs must be present? Are there any undesirable inputs linked to the good inputs?
- What outputs are we expecting? Might others happen as well?
- What is this supposed to do? Is there something else that it might do instead or in addition?
- Does this depend on actions, conditions or events? Are these controllable or uncontrollable?

- What cannot be changed or is inflexible?
- Have we allowed any margin for error?
- What assumptions are we making that could turn out to be wrong?
- What has been our experience in similar situations in the past?
- How is this different from before?
- If we wanted this to fail, how could we accomplish that?

Example

A medical group is planning to improve the care of patients with chronic illnesses such as diabetes and asthma through a new chronic illness management program (CIMP). They have defined four main elements and, for each of these elements, key components. The information is laid out in the process decision program chart below. Dotted lines represent sections of the chart that have been omitted. Only some of the potential problems and countermeasures identified by the planning team are shown on this chart.

Process decision program chart example

FIGURE 4.17

For example, one of the possible problems with patients' goal-setting is backsliding. The team liked the idea of each patient having a buddy or sponsor and will add that to the program design. Other areas of the chart helped them plan a better rollout, such as arranging for all staff to visit a clinic with a CIMP program in place. Still other areas allowed them to plan in advance for problems, such as training the CIMP nurses how to counsel patients who choose inappropriate goals.

4.13 BENCHMARKING

Why Benchmark?

One of the prime reasons for using QFD is to develop a product or service which will excite the customer and get him/her to purchase your product. When a team captures the customer's perceptions of how well different products perform in the marketplace, the team can better understand what is driving the purchase decision. They are able to determine what the market likes and dislikes. However, they are really still dealing with Customer Perceptions and not actual performance. They have not necessarily learned what they, as a team, have to do to create the desired level of Perceived Performance. Benchmarking your own, and others', products against the Design Measures which the team has established helps to define the level of Real Performance required to produce the desired level of Perceived Performance. It also helps you to answer the following questions: Has the team defined the right Measures to predict Customer Satisfaction? Does the product have perception, as opposed to technical, problems?

Benchmarking is a relatively expensive and time-consuming process in most industries. Therefore, it is recommended practice to Benchmark only against the critical Design Measures. Criticality is defined by how important a particular Measure is to the success of the product and whether there are special circumstances impacting a particular Measure. A special circumstance might include whether a particular Measure is new or complex. Typically, a team might only Benchmark 50 percent of the Design Measures. Sorting the list of Design Measures based upon their importance values is a good way to identify which Measures to Benchmark.

Who should we Benchmark?

Generally, teams benchmark the same products or services for which they captured performance perceptions. In this way, they can try to correlate Actual Performance with the Perceived Performance.

A good policy is to Benchmark products across the whole spectrum of performance. In this way, it becomes much clearer what level of performance is perceived to be inadequate, what level is acceptable, and what level of performance currently gets customers excited about a product. Benchmarking all of the competitive products is not required; just check representative products.

How do we capture the results of Benchmarking?

There are two schools of thought relative to capturing Benchmark results. The first suggests that the team capture the raw Benchmark data directly and associate that data with the appropriate Measure. The other suggests that the team translate the raw Benchmark data into the same scale as was used to capture the perceived performance ratings. Capturing the raw data and using it directly through the process tends to make it easier to understand exactly how well a product has to perform in order to achieve a desired level of customer satisfaction. However, the raw data sometimes implies too much precision for the process. For example, if the team were Benchmarking "Number of Commands Required to Perform the Desired Functions" as a way of predicting whether a software package would be perceived to be "easy to use", they could easily get caught up in counting precise numbers when, in reality, "Less than 10", "10 to 20", and "More than 20" might be sufficiently accurate for the purposes of the team. On the other hand, translating the raw Benchmark data into the same rating scale as was used to capture perceived performances forces the team to repeatedly translate those ratings back into their original values. This tends to make nuances in the data disappear and be lost from consideration. However, since only numeric rating data is captured with this approach, QFD/CAPTURE could graph this data. QFD/CAPTURE supports both of these approaches. The general process is to define Related Data columns for the list whose entries are to be Benchmarked. Each column would represent a particular product. If the raw data is to be captured, the team would configure the Related Data columns to contain text so that they could enter any type of data and units which are appropriate. If they instead want to capture the ratings, they would configure the columns to contain numbers only.
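
As an illustration of the second school of thought, the sketch below (not from the text) collapses a raw benchmark measurement into the coarse bands suggested above and then maps each band onto a perceived-performance rating. The band edges, the 1-to-5 scale and the product names are assumptions for the example, not values prescribed by QFD/CAPTURE.

```python
# A minimal sketch (assumed bands and scale) of translating raw benchmark data.
def command_count_band(raw_count: int) -> str:
    """Collapse a precise command count into the band the team actually needs."""
    if raw_count < 10:
        return "Less than 10"
    if raw_count <= 20:
        return "10 to 20"
    return "More than 20"

def band_to_rating(band: str) -> int:
    """Map a band onto the same scale used for the perceived-performance ratings."""
    return {"Less than 10": 5, "10 to 20": 3, "More than 20": 1}[band]

for product, count in {"Our package": 8, "Competitor A": 14, "Competitor B": 27}.items():
    band = command_count_band(count)
    print(product, count, band, band_to_rating(band))
```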

Setting Target Values

How should we set our Target Values?

The final goal of many QFD projects is to set the Target Values for the Design Measures. This step occurs when the data gathered throughout the process is brought together and final decisions are made to answer the question "What are we really going to do (with respect to this product or service)?" Setting Target Values should be relatively easy because:

- The team has already defined where they want their product to be positioned for the Customer.
- The team has Benchmarked the existing products to gain a good understanding of what level of actual performance is required in order to produce the desired level of perceived performance.
- The team has evaluated the Tradeoffs between Design Measures in order to determine what compromises may be required and how those compromises would be made.

Taking into account all of this information, the team decides upon the Targets which they will shoot for. Normally at this point, the team would not decide how they are going to achieve the Target Values. They are just stating: we know that we have to achieve this level of performance if we are going to be perceived the way in which we want to be perceived. Deciding on the implementation approach will generally occur during the Conceptualization process. QFD/CAPTURE supports setting Targets for a List through the Related Data columns associated with that List. Generally, a separate column is defined for each release which is being planned. For example, if a new product were going to be released in 1996 and followed up with enhancements in 1997 and 1998, the team would create a separate Related Data column for each year. This would allow the team to show the progression of product performance over the life of the Product. This implies a long-term planning perspective rather than just a short-term, get-it-out-the-door perspective. Since Target data is generally textual, these columns would be configured to display Text (as opposed to just numeric data).

4.10 POKA-YOKE

From the Japanese words poka (mistakes) and yokeru (to avoid), Poka-Yoke is a mistake-proofing concept that aims not only to minimize defects (and the waste due to defects) but to eliminate the possibility of defects at the source, by establishing procedures and tools early on in a manufacturing process that make it impossible to perform a task or make a component incorrectly. A simple example is the hole near the rim of most sinks that prevents overflows.

Poka-yoke (pronounced POH-kah YOH-keh, meaning fail-safing or mistake-proofing: avoiding (yokeru) inadvertent errors (poka)) is a behavior-shaping constraint, or a method of preventing errors by putting limits on how an operation can be performed in order to force the correct completion of the operation. The concept was originated by Shigeo Shingo as part of the Toyota Production System. It was originally described as Baka-yoke, but as this means fool-proofing (or idiot-proofing), the name was changed to the milder Poka-yoke.

An example of this in general experience is the inability to remove a car key from the ignition switch of an automobile if the automatic transmission is not first put in the Park position, so that the driver cannot leave the car in an unsafe parking condition where the wheels are not locked against movement. In the IT world, another example can be found in a normal 3.5" floppy disk: the top-right corner is shaped in a certain way so that the disk cannot be inserted upside-down. In the manufacturing world, an example might be that the jig for holding pieces for processing only allows pieces to be held in one orientation, or has switches on the jig to detect whether a hole has been previously cut or not, or it might count the number of spot welds created to ensure that, say, four have been executed by the operator.

Implementation

Shigeo Shingo recognises three types of Poka-Yoke:
1. The contact method identifies defects by whether or not contact is established between the device and the product. Colour detection and other product property techniques are considered extensions of this.
2. The fixed-value method determines whether a given number of movements have been made.
3. The motion-step method determines whether the prescribed steps or motions of the process have been followed.

A poka-yoke either gives a warning or can prevent, or control, the wrong action. It is suggested that the choice between these two should be made based on the behaviours in the process: occasional errors may warrant warnings, whereas frequent errors, or those impossible to correct, may warrant a control poka-yoke.
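
A software analogue of the fixed-value method can make the idea concrete. The sketch below is not from the text; the class name, the expected weld count of four and the release step are illustrative assumptions.

```python
# A minimal sketch (assumed names) of a control-type, fixed-value poka-yoke:
# the jig refuses to release the piece until the expected number of spot
# welds has been recorded, so the wrong action is blocked rather than flagged.
class WeldJig:
    EXPECTED_WELDS = 4

    def __init__(self):
        self.welds_done = 0

    def record_weld(self):
        self.welds_done += 1

    def release_piece(self):
        if self.welds_done != self.EXPECTED_WELDS:
            raise RuntimeError(
                f"Cannot release: {self.welds_done} of {self.EXPECTED_WELDS} welds recorded"
            )
        print("Piece released")

jig = WeldJig()
for _ in range(4):
    jig.record_weld()
jig.release_piece()   # succeeds only because all four welds were recorded
```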

SUMMARY

Many tools and techniques have been developed over time to manage quality in various industries. This is because their nature and character vary from one industry to another. The development of quality functions in line with customer expectations is elaborated in this unit. The information collected for various purposes has to be used judiciously by establishing a logical relationship between the items collected. The House of Quality is presented in a simpler way for better understanding. Quality function deployment is extensively deliberated right from the thinking stage. Failure Mode Effect Analysis, its evolution, benefits, types and categories, along with its application, are presented. The FMEA design process and the steps involved in it are presented using examples. The Bath Tub Curve is demonstrated and the reliability concepts are deliberated in detail. Using illustrations, Taguchi's Loss Function, its applications and the tolerance design are dealt with. The seven tools of quality (Ishikawa diagram, Pareto Chart, Check Sheet, Control Chart, Flow Chart, Histogram and Scatter Diagram) are deliberated using examples. The seven new management tools, viz. Affinity Diagram, Relations Diagram, Tree Diagram, Matrix Diagram, Matrix Data Analysis, Arrow Diagram and Process Decision Program Chart, are also presented with illustrations.

REVIEW QUESTIONS

1. Explain the role of the customer in ensuring quality in an organization.
2. What is the House of Quality? Explain the building process.
3. Explain how the quality function is deployed in an organization without hassles.
4. What is FMEA? Explain the process of analyzing failures.
5. Discuss the complementary nature of reliability and failures using examples.
6. Explain the parameter and tolerance design methodology in the Taguchi technique.
7. Compare and contrast the old and new management tools. Give examples.
8. Detail how benchmarking is arrived at. How is the concept of Poka-yoke used in this?

UNIT V QUALITY SYSTEMS ORGANIZING AND IMPLEMENTATION


INTRODUCTION
Standards give the professional a clear sense of what is achievable. The Quality Management System has to permeate the organization so as to achieve Total Quality Management. Involvement of everyone (employees, leaders and other stakeholders) in a positive way will bring success. To achieve this, the use of the latest technologies, such as computers and telecommunications, has become part of the system organization and implementation process. This unit deals with Introduction to IS / ISO 9004 : 2000, Quality Management Systems, Guidelines for Performance Improvements, Quality Audits, TQM Culture, Leadership, Quality Council, Employee Involvement, Motivation, Empowerment, Recognition and Reward, Information Technology, Computers and Quality Functions, Internet and Electronic Communications, and Information Quality Issues.

LEARNING OBJECTIVES

Upon completion of this unit, you will be able to:
- Classify the quality systems
- Develop and organize Quality Management Systems
- Handle change management on culture
- Focus on all the stakeholders
- Apply the latest developments in ICT for ensuring quality in the organization

5.1 INTRODUCTION TO IS / ISO 9004 : 2000

ISO 9000

The ISO 9000 Series, issued in 1987 by the International Organization for Standardization (ISO), is a set of international standards on quality and quality management.
The standards are generic and not specific to any particular product. They were adopted by the American Society for Quality Control (ASQC), now the American Society for Quality, and issued in the United States as the ANSI/ASQC Q90 Series (revised in 1994 as the ANSI/ASQC Q9000 Series). ISO 9000:2000 is the most recent revision of the standards.

ISO 9000 is a family of standards for quality management systems. ISO 9000 is maintained by ISO, the International Organization for Standardization, and is administered by accreditation and certification bodies. For a manufacturer, some of the requirements in ISO 9001 (which is one of the standards in the ISO 9000 family) would include:

- a set of procedures that cover all key processes in the business;
- monitoring manufacturing processes to ensure they are producing quality product;
- keeping proper records;
- checking outgoing product for defects, with appropriate corrective action where necessary; and
- regularly reviewing individual processes and the quality system itself for effectiveness.

A company or organization that has been independently audited and certified to be in conformance with ISO 9001 may publicly state that it is ISO 9001 certified or ISO 9001 registered. Certification to an ISO 9000 standard does not guarantee the compliance (and therefore the quality) of end products and services; rather, it certifies that consistent business processes are being applied. Although the standards originated in manufacturing, they are now employed across a wide range of other types of organizations. A product, in ISO vocabulary, can mean a physical object, or services, or software. In fact, according to ISO in 2004, service sectors now account by far for the highest number of ISO 9001:2000 certificates - about 31% of the total.

History of ISO 9000

Pre-ISO 9000

During World War II, there were quality problems in many British high-tech industries such as munitions, where bombs were going off in factories. The adopted solution was to require factories to document their manufacturing procedures and to prove by record-keeping that the procedures were being followed.

The name of the standard was BS 5750, and it was known as a management standard because it did not specify what to manufacture, but how to manage the manufacturing process. According to Seddon, in 1987 the British Government persuaded the International Standards Organisation to adopt BS 5750 as an international standard, and BS 5750 became ISO 9000.

Certification

ISO does not itself certify organizations. Many countries have formed accreditation bodies to authorize certification bodies, which audit organizations applying for ISO 9001 compliance certification. It is important to note that it is not possible to be certified to ISO 9000. Although commonly referred to as ISO 9000:2000 certification, the actual standard to which an organization's quality management can be certified is ISO 9001:2000. Both the accreditation bodies and the certification bodies charge fees for their services. The various accreditation bodies have mutual agreements with each other to ensure that certificates issued by one of the Accredited Certification Bodies (CB) are accepted world-wide. The applying organization is assessed based on an extensive sample of its sites, functions, products, services and processes; a list of problems (action requests or non-compliances) is made known to the management. If there are no major problems on this list, the certification body will issue an ISO 9001 certificate for each geographical site it has visited, once it receives a satisfactory improvement plan from the management showing how any problems will be resolved. An ISO certificate is not a once-and-for-all award, but must be renewed at regular intervals recommended by the certification body, usually around three years. In contrast to the Capability Maturity Model, there are no grades of competence within ISO 9001.

Fundamentals of ISO 9000

ISO 9000 represents an evolution of traditional quality systems rather than a technical change. Whereas traditional quality systems rely on inspection of products to ensure quality, the ISO 9000-compliant quality system relies on the control and continuous improvement of the processes used to design, produce, inspect, install, and service products.

In short, ISO 9000 represents a systemic tool for bringing quality processes under control. Once processes are controlled, they can be continuously improved, resulting in higher-quality products. ISO 9000 represents a significant step beyond ensuring that specific products or services meet specifications or industry standards. It certifies that a facility has implemented a quality system capable of consistently producing quality products. That is, ISO 9000 does not certify the quality of products; it certifies the processes used to develop them. Thus ISO 9000 is a process-oriented rather than a results-oriented standard. It affects every function, process, and employee at a facility, and it stresses management commitment to quality. But above all, it is customer-focused: it strives to meet or exceed customer expectations.

ISO 9000 is not a prescriptive standard for quality. The requirements section (ISO 9001), which covers all aspects of design, development, production, test, training, and service, is less than 10 pages long. For example, when addressing the product design process, ISO 9000 focuses on design inputs, outputs, changes, and verification. It is not meant to inhibit creative thinking.

ISO 9000 is a system quality standard that provides requirements and guidance on all aspects of a company's procedures, organization, and personnel that affect quality, from product inception through delivery to the customer. It also provides significant requirements and guidance on the quality of the output delivered to the customer. Pertinent questions are: What benefits will the proposed changes to the procedures, organization, and personnel provide to the customer? Will the proposed changes help to continuously improve product delivery schedules and product quality and reduce the amount of variance in product output?

ISO 9000 does not require inspection to verify quality, nor is it the preferred method. ISO requires that the output be verified according to documented process-control procedures. ISO 9000 does not mandate that specific statistical processes be used; it requires the user to implement appropriate statistical processes. ISO 9000 mandates product-control methods such as inspection only when process-control methods are neither practical nor feasible.

ISO 9000 does not provide industry-specific performance requirements. It provides a quality model that can be applied to virtually every industry procurement situation and is being used worldwide for commercial and, recently, government procurements.

Many suppliers already have a quality system in place, be it simple or elaborate. ISO 9000 does not require a supplier to add new or redundant requirements to an existing quality system. Rather, it requires that the supplier specify a basic, commonsense, minimal quality system that will meet the quality needs of the customer. Thus, many suppliers find that their operative quality system already meets some or all of the ISO 9000 requirements. They only need to show that their existing procedures correspond to the relevant sections of ISO 9000.

ISO 9000 provides suppliers with the flexibility of designing a quality system for their particular type of business, market environment, and strategic objectives. It is expected that management, aided by experienced internal quality personnel and, if necessary, external ISO consultants, will determine the exact set of supplier quality requirements. To ensure the overall success of the quality program, however, the specific work procedures should be created by those actually doing the work rather than by management or ISO consultants. Although an organization's documentation of work procedures may be ISO 9000 compliant, if employees do not follow the procedures, the organization may not attain ISO 9000 certification. Drawing upon employee expertise and keeping employees involved in the process when improving and controlling procedures are critical to attaining ISO 9000 compliance.

Developing a quality system is not a sprint, but a journey, and because processes are continuously being improved, it is a journey without an end. ISO 9000 does not mandate the use of short-term motivational techniques to foster employee enthusiasm for a supplier's quality system program. Attempting to motivate employees by promising lower overhead or greater market share is not likely to be successful. Instead, it is recommended that employees be educated on how ISO 9000 standards will help them perform their jobs better and faster.

ISO 9000 emphasizes that for any quality system to be successful, top-management commitment and active involvement are essential. Management is responsible for defining and communicating the corporate quality policy. It must define
the roles and responsibilities of individuals responsible for quality and ensure that employees have the proper background for their jobs and are adequately trained. Management must periodically review the effectiveness of the quality system. It should not back the effort to comply with ISO 9000 during its inception and then back down when the scope and cost of the effort is fully realized. When employees sense that management commitment has diminished, their own commitment slackens. Employees typically want out of a costly project not backed by management.

ISO 9000 does require that an organization have documented and implemented quality procedures that ensure personnel understand the quality system, that management maintain control of the system, and that internal and external audits be performed to verify the system's performance. Because ISO 9000 affects the entire organization, all employees should be given at least basic instruction in the ISO 9000 process and its specific implementation at their facility. Training should emphasize goals, benefits, and the specific responsibilities and feedback required of each employee.

ISO 9000 uses customer satisfaction as its benchmark. But the customers of ISO 9000-compliant processes include not only the obvious end-users of the product, but also an organization's product designers, manufacturers, inspectors, deliverers, and sales force. Improving the processes that produce a quality product can provide an additional benefit: when the processes are well defined and constant and when employees are well trained to perform these processes, employee safety typically improves significantly. Also, during the course of improving its processes, a company often finds after close inspection that many of its processes and procedures are ineffectual and can be eliminated. Thus, while ISO 9000 requires preparation and maintenance of a formidable set of documents and records, the total paperwork of a company implementing ISO 9000 may decrease significantly in the long run. Other benefits of ISO 9000 compliance are a decrease in product defects and customer complaints and increased manufacturing yields. A final but very important by-product of implementing ISO 9000 is a heightened sense of mission at a company and an increased level of cooperation between departments.

ISO 9000 is not product-quality oriented. It does not provide criteria for separating acceptable output from defective output. Instead, it is a strategy for continuous improvement where employees meet and exceed customer quality requirements and, in doing so, continuously improve the quality of the product.

ISO 9000 recognizes that when a customer is looking at a specific part of a product (e.g., car, stereo system), he is often looking at an item (e.g., engine, stereo cabinet) provided by a subcontractor. Hence, ISO 9000 requires that a company verify that its subcontractors are providing quality items. Today, organizations with excellent quality systems often partner with their subcontractors. ISO 9000 provides an excellent framework for such a relationship, with subcontractors providing the raw materials and components of the final product.

The ISO 9000 family is a set of quality system management standards, the first in a set of evolving management system standards. Standards for environmental management are in place; standards for occupational safety, health management, and energy management will soon follow. These new standards will affect the space and aircraft industries just as they affect other industries.

In summary, ISO 9000 compliance provides customers with the assurance that approved raw materials for a product have been purchased and that the product has been manufactured according to the correct specifications, assembled by trained employees, properly inspected and tested, adequately packaged for preservation, and transported in a manner that prevents damage to it en route. Overall, ISO 9000 compliance helps generate quality awareness among a company's employees, an improved competitive position for the company, an enhanced customer quality image, and increased market share and profits.

Components of the ISO 9000 Series

The ISO 9000 Series includes three standards:
- ISO 9000:2000 Quality Management Systems - Fundamentals and Vocabulary
- ISO 9001:2000 Quality Management Systems - Requirements
- ISO 9004:2000 Quality Management Systems - Guidelines for Performance Improvement

ISO 9000 family

ISO 9000 includes the following standards:
- ISO 9000:2005, Quality management systems - Fundamentals and vocabulary, covers the basics of what quality management systems are and also contains the core language of the ISO 9000 series of standards.
- ISO 9001:2000, Quality management systems - Requirements, is intended for use in any organization which designs, develops, manufactures, installs and/or services any product or provides any form of service. It provides a number of requirements which an organization needs to fulfill if it is to achieve customer satisfaction through consistent products and services which meet customer expectations. This is the only implementation for which third-party auditors may grant certifications.
- ISO 9004:2000, Quality management systems - Guidelines for performance improvements, covers continual improvement. This gives you advice on what you could do to enhance a mature system. This standard very specifically states that it is not intended as a guide to implementation.

ISO 9002:1994 and ISO 9003:1994 were discontinued in the ISO 9000:2000 family of standards. Organizations that do not have design or manufacturing responsibilities (and were previously certified using ISO 9002:1994) will now have to use ISO 9001:2000 for certification. These organizations are allowed to exclude design and manufacturing requirements in ISO 9001:2000 based on the rules for exception given in Clause 1.2, Permissible Exclusions.

ISO Facts

The International Organization for Standardization (ISO), founded in 1946, is a global federation of national standards organizations that includes some 130 member nations:
- ISO is based in Geneva, Switzerland.
- ISO's mission is to develop standards that facilitate trade across international borders.
- In 1979, Technical Committee 176 (ISO/TC 176) was established to create international standards for quality assurance.
- Representatives from the United States and many other countries served on the committees responsible for developing ISO 9000.
- Early in the 1990s, the chair of the consortium was a U.S. citizen from American Telephone & Telegraph (AT&T).
- The U.S. standards organization within ISO is the American National Standards Institute (ANSI).

- The American Society for Quality (ASQ) has published a U.S. version of the ISO 9000 standards under the name Q9000.
- ISO serves only as a disseminator of information on system quality.
- ISO 9000 certificates are not issued on behalf of ISO.
- ISO does not monitor the activities of ISO 9000 accreditation bodies. Monitoring is done by accreditation boards within member nations.

Philosophy of ISO 9000

ISO 9000 places the responsibility for the establishment, performance, and maintenance of a quality system directly with a company's top management:
- ISO requires the top management to define a quality policy, provide adequate resources for its implementation, and verify its performance.
- Top management must demonstrate how its employees acquire and maintain awareness of its quality policy.

The ISO 9000 process strives for generic applicability:
- No specific methods, statistical processes, or techniques are mandated.
- Emphasis is on the overall objective of meeting customer expectations regarding the output of the system quality process.
- ISO has said that it will never issue industry (product-specific) quality guidelines.

ISO 9000 strives to achieve a quality system by employing the following practices for continuous improvement:
- Prevention rather than detection by inspection
- Comprehensive review of critical process points
- Ongoing communication between the facility, its suppliers, and its customers
- Documentation of processes and quality outcomes
- Management commitment at the highest levels

ISO 9000 provides a clear definition of the management style required to achieve a world-class quality system:
- Formal organization that delineates responsibilities
- Documented, authorized, and enforced procedures for all key activities
- Full set of archived but periodically analyzed quality outcome records
- Set of periodic reviews to track system quality performance and plan and implement corrective actions
- Philosophy of regulating, but not eliminating, individual initiative in achieving system quality

ISO 9000 provides a facility with a formal management style leading to system quality. The measure of success in implementing system quality is determined by well-organized, well-planned, and well-executed periodic internal and external audits of the processes and quality outcomes of the facility. The majority of ISO member nations will not mandate the adoption of the ISO 9000 standards in the foreseeable future. To date, only Australia mandates adoption of the standards. How well the ISO standards facilitate trade in the international marketplace will determine how widespread their use becomes.

Advantages

According to the Providence Business News, implementing ISO often gives the following advantages:
1. Create a more efficient, effective operation
2. Increase customer satisfaction and retention
3. Reduce audits
4. Enhance marketing
5. Improve employee motivation, awareness, and morale
6. Promote international trade

Problems

A common criticism of ISO 9000 is the amount of money, time and paperwork required for registration.

According to Barnes, opponents claim that it is only for documentation, while proponents believe that if a company has documented its quality systems, then most of the paperwork has already been completed.

The ISO 9004:2000 standard

ISO 9004:2000 goes beyond ISO 9001:2000 in that it provides guidance on how you can continually improve your business quality management system so that it benefits not only your customers but also:
- employees
- owners
- suppliers
- society in general

By measuring these groups' satisfaction with your business, you'll be able to assess whether you're continuing to improve. You can read about ISO 9004:2000 at the British Standards Institution (BSI) website.

The ISO 9000 series, which includes 9001 and 9004, is based around eight quality management principles that senior managers should use as a framework to improve the business:
- Customer focus - they must understand and fulfil customer needs.
- Leadership - they should demonstrate strong leadership skills to increase employee motivation.
- Involvement of people - all levels of staff should be aware of the importance of providing what the customer requires and of their responsibilities within the business.
- Process approach - identifying your essential business activities and considering each one as part of a process.
- System approach to management - managing your processes together as a system, leading to greater efficiency and focus. You could think of each process as a cog in a machine, helping it to run smoothly.
- Continual improvement - this should be a permanent business objective.
- Factual approach to decision-making - senior staff should base decisions on thorough analysis of data and information.
- Mutually beneficial supplier relationships - managers should recognise that your business and its suppliers depend on each other.

As ISO 9004:2000 is a set of guidelines and recommendations, you cannot be certified as achieving it.

5.2 QUALITY MANAGEMENT SYSTEMS

A Quality Management System (QMS) can be defined as the set of policies, processes and procedures required for planning and execution (production / development / service) in the core business area of an organization. A QMS integrates the various internal processes within the organization and intends to provide a process approach for project execution. A QMS enables the organization to identify, measure, control and improve the various core business processes that will ultimately lead to improved business performance.

Concept of QMS

The concept of quality as we think of it now first emerged out of the Industrial Revolution. Previously, goods had been made from start to finish by the same person or team of people, with handcrafting and tweaking of the product to meet quality criteria. Mass production brought huge teams of people together to work on specific stages of production, where one person would not necessarily complete a product from start to finish. In the late 1800s, pioneers such as Frederick Winslow Taylor and Henry Ford recognised the limitations of the methods being used in mass production at the time and the subsequent varying quality of output. Taylor established Quality Departments to oversee the quality of production and the rectifying of errors, and Ford emphasised standardisation of design and component standards to ensure a standard product was produced. Management of quality was the responsibility of the Quality Department and was implemented by inspection of product output to catch defects. Application of statistical control came later, as a result of World War production methods. Quality management systems are the outgrowth of work done by W. Edwards Deming, a statistician, after whom the Deming Prize for quality is named.

Quality, as a profession and the managerial process associated with the quality function, was introduced during the second half of the 20th century, and has evolved since then. No other profession has seen as many changes as the quality profession.

The quality profession grew from simple control, to engineering, to systems engineering. Quality control activities were predominant in the 1940s, 1950s, and 1960s. The 1970s were an era of quality engineering, and the 1990s saw quality systems as an emerging field. Like medicine, accounting, and engineering, quality has achieved status as a recognized profession.

Quality management organizations and awards

The International Organization for Standardization's ISO 9000 series describes standards for a QMS addressing the processes surrounding the design, development and delivery of a general product or service. Organizations can participate in a continuing certification process to demonstrate their compliance with the standard.

The Malcolm Baldrige National Quality Award is a competition to identify and recognize top-quality U.S. companies. This model addresses a broadly based range of quality criteria, including commercial success and corporate leadership. Once an organization has won the award, it has to wait several years before being eligible to apply again.

The European Foundation for Quality Management's EFQM Excellence Model supports an award scheme similar to the Malcolm Baldrige Award for European companies.

In Canada, the National Quality Institute presents the Canada Awards for Excellence on an annual basis to organizations that have displayed outstanding performance in the areas of Quality and Workplace Wellness, and have met the Institute's criteria with documented overall achievements and results.

The Alliance for Performance Excellence is a network of state, local, and international organizations that use the Malcolm Baldrige National Quality Award criteria and model at the grassroots level to improve the performance of local organizations and economies. NetworkforExcellence.org is the Alliance web site; browsers can find Alliance members in their state and get the latest news and events from the Baldrige community.

5.3 GUIDELINES FOR PERFORMANCE IMPROVEMENTS

1. Purpose: The purpose of a Performance Improvement Plan is to communicate to the employee the specific job performance areas that do not meet expected standards.

2. Develop a Performance Improvement Plan:
   a) Clearly state why the employee's job performance is a concern and how it impacts the work environment.
   b) Summarize the facts and events that necessitate the development of a Performance Improvement Plan.
   c) Develop specific and measurable steps to improve performance and include the employee's ideas for improvement.
   d) Establish reasonable timelines for improved performance on each expectation.
   e) Conduct periodic reviews on a regular basis to monitor progress being made toward the expected outcome and provide feedback.
   f) Communicate consequences for failure to meet expectations and sustain improved performance.

3. Implement the Performance Improvement Plan:
   a) Document each step of the Performance Improvement Plan.
   b) Provide constructive feedback to help the employee understand how he/she is doing and what is expected.
   c) Focus on the job and not on the person. Concentrate on a specific behavior to enable the employee to understand what you want and why. The individual will feel less defensive.
      * Example with focus on behavior: "Your report is two days late."
      * Example with focus on person: "You are not very reliable about getting things done on time."

   d) Always meet with the employee and provide an opportunity for discussion and feedback.
   e) At the end of the Performance Improvement Plan period, the supervisor will determine if the process was satisfactorily completed or if progressive discipline will be implemented in conjunction with Human Resources.

5.4 QUALITY AUDITS

A quality audit is a systematic, independent examination of a quality system. Quality audits are typically performed at defined intervals and ensure that the institution
has clearly defined internal quality monitoring procedures linked to effective action. The checking determines whether the quality system complies with applicable regulations or standards. The process involves assessing the standard operating procedures (SOPs) for compliance with the regulations, and also assessing the actual process and results against what is stated in the SOPs. The U.S. Food and Drug Administration requires quality auditing to be done as part of its Quality System Regulation (QSR) for medical devices, Title 21 of the United States Code of Federal Regulations, Part 820. The process of a quality audit can be managed using software tools, often Web-based.

Internal quality auditing is an important element in ISO's quality system standard, ISO 9001. With the upgrade of the ISO 9000 series of standards from the 1994 to the 2000 series, the focus of audits has shifted from procedural adherence only to measurement of the effectiveness of the Quality Management System processes to deliver in accordance with planned results.

Higher education quality audit is an approach adopted by several countries, including New Zealand, Australia, Sweden, Finland, Norway and the USA. It was initiated in the UK and is a term designed to focus on procedures rather than quality.

Guidelines for Planning and Performing Quality Audits (ISO 10011-1: 1990)

Quality audit objectives

Quality audits are intended to achieve the following kinds of objectives:
- To determine to what extent your quality system:
   - Achieves its objectives.
   - Conforms to your requirements.
   - Complies with regulatory requirements.
   - Meets customers' contractual requirements.
   - Conforms to a recognized quality standard.
- To improve the efficiency and effectiveness of your quality management system.
- To list your quality system in the registry of an independent agency.
- To verify that your quality system continues to meet requirements.

Professional conduct

Auditors must behave in a professional manner. Auditors must:
- Have integrity and be independent and objective.
- Have the authority they need to do a proper job.
- Avoid compromising the audit by discussing audit details with auditees during the audit.

The lead auditor's job

A lead auditor's job is to:
- Manage the audit.
- Assign audit tasks.
- Help select auditors.
- Orient the audit team.
- Prepare the audit plan.
- Define auditor qualifications.
- Clarify quality audit requirements.
- Communicate audit requirements.
- Prepare audit forms and checklists.
- Review quality system documents.
- Report major nonconformities immediately.
- Interact with the auditee's management and staff.
- Prepare, submit, and discuss audit reports.

Auditor's job

An auditor's job is to:
- Evaluate the quality system.
- Carry out assigned audit tasks.
- Comply with audit requirements.
- Respect all confidentiality requirements.
- Collect evidence about the quality system.
- Document audit observations and conclusions.
- Safeguard audit documents, records, and reports.
- Determine whether the quality policy is being applied.
- Find out if the quality objectives are being achieved.
- See whether quality procedures are being followed.
- Detect evidence that might invalidate audit results.

Client's job

A client's job is to:
- Initiate the audit process.
- Select the auditor organization.
- Decide whether an audit needs to be done.
- Define the purpose and scope of the audit.
- Ensure that audit resources are adequate.
- Determine how often audits must be done.
- Specify which follow-up actions the auditee should take.
- Indicate which standards should be used to evaluate compliance.
- Select the elements, activities, and locations that must be audited.
- Ensure enough evidence is collected to draw valid conclusions.
- Receive and review the reports prepared by auditors.

NOTE: A client is the organization that asked for the audit. The client could be an auditee, a customer, a regulatory body, or a registrar.

Auditee's job

An auditee's job is to:
- Explain the nature, purpose, and scope of the audit to employees.
- Appoint employees to accompany and assist the auditors.
- Ensure that all personnel cooperate fully with the audit team.
- Provide the resources the audit team needs to do the audit.
- Allow auditors to examine all documents, records, and facilities.
- Correct and prevent problems that were identified by the audit.

NOTE: An auditee is the organization being audited or a member of that organization.

When to do an audit

A client may initiate an audit because:


- A regulatory agency requires an audit.
- A previous audit indicated that a follow-up audit was necessary.
- An auditee has made important changes in:
   - Policies or procedures.
   - Technologies or techniques.
   - Management or organization.

An auditee may carry out audits on a regular basis to improve quality system performance or to achieve business objectives.

Prepare an audit plan

The auditor should begin planning the audit by reviewing documents (e.g. manuals) that both describe the quality system and explain how it is attempting to meet quality requirements.

If this preliminary review shows that the quality system is inadequate, the audit process should be suspended until this inadequacy is resolved.

Prepare an audit plan. The plan should be prepared by the lead auditor and approved by the client before the audit begins. The audit plan should:
- Define the objectives and scope of the audit.
- Explain how long each phase of the audit will take.
- Specify where and when the audit will be carried out.
- Introduce the lead auditor and his team members.
- Identify the quality elements that will be audited.
- Identify the groups and areas that will be audited.
- List the documents and records that will be studied.
- List the people who are responsible for quality and whose areas and functions will be audited.
- Explain when meetings will be held with the auditee's senior management.
- Clarify who will get the final audit report and when it will be ready.

Perform the quality audit

Start the quality audit. Start the audit by having an opening meeting with the auditee's senior management. This meeting should:
- Introduce the audit team.
- Clarify scope, objectives, and schedule.
- Explain how the audit will be carried out.
- Confirm that the auditee is ready to support the audit process.

Prepare audit working papers.
- Prepare checklists (used to evaluate quality management system elements).
- Prepare forms (used to record observations and collect evidence).

Collect evidence by:
- Interviewing personnel.
- Reading documents.
- Reviewing manuals.
- Studying records.
- Reading reports.
- Scanning files.
- Analyzing data.
- Observing activities.
- Examining conditions.

Confirm interview evidence. Evidence collected through interviews should, whenever possible, be confirmed by more objective means.

Investigate clues. Clues that point to possible quality management system nonconformities should be thoroughly and completely investigated.

Document observations. Auditors must study the evidence and document their observations.

List nonconformities. Auditors must study their observations and make a list of key nonconformities. They must ensure that nonconformities are:
- Supported by the evidence.
- Cross-referenced to the standards that are being violated.

Draw conclusions. Auditors must draw conclusions about how well the quality system is applying its policies and achieving its objectives.

Discuss results. Auditors should discuss evidence, observations, conclusions, recommendations, and nonconformities with the auditee's senior managers before they prepare a final audit report.

Prepare the audit report

Prepare the final audit report. The audit report should be dated and signed by the lead auditor. This report should include:
- The detailed audit plan.
- A review of the evidence that was collected.
- A discussion of the conclusions that were drawn.
- A list of the nonconformities that were identified.
- A judgment about how well the quality system complies with all quality system requirements.
- An assessment of the quality system's ability to achieve quality objectives and apply the quality system policy.

Submit the audit report. The lead auditor should send the audit report to the client, and the client should send it to the auditee.

Follow-up steps

Take remedial actions. The auditee is expected to take whatever actions are necessary to correct or prevent nonconformities.

Schedule a follow-up audit. Follow-up audits should be scheduled in order to verify that corrective and preventive actions were taken.

Quality Management vs Quality Audit

In the ePMbook, we will make a distinction between Quality Management and Quality Audit. By Quality Management, we mean all the activities that are intended to bring about the desired level of quality. By Quality Audit, we mean the procedural controls that ensure participants are adequately following the required procedures. These concepts are related, but should not be confused. In particular, Quality Audit relates to the approach to quality that is laid down in quality standards such as the ISO-900x standards. The abbreviation QA has been generally avoided in the ePMbook as it can mean different things - e.g. Quality Assurance, Quality Audit, testing, external reviews, etc.

The principle behind Quality Audit

The principles of Quality Audit, in the sense we mean it here, are based on the style of quality standards used in several formal national and international standards such as the ISO-900x international quality standards. These standards do not in themselves create quality. The logic is as follows. Every organization should define comprehensive procedures by which their products or services can be delivered consistently to the desired level of quality. As was discussed in the section on Quality Management, maximum quality is rarely the

desired objective since it can cost too much and take too long. The average product or service provides a sensible compromise between quality and cost. There is also a legitimate market for products that are low cost and low quality. Standards authorities do not seek to make that business judgement and enforce it upon businesses, except where certain minimum standards must be met (e.g. all cars must have seat belts that meet minimum safety standards, but there is no attempt to define how elegant or comfortable they are).

The principle is that each organization should create thorough, controlled procedures for each of its processes. Those procedures should deliver the quality that is sought. The Quality Audit, therefore, only needs to ensure that procedures have been defined, controlled, communicated and used. Processes will be put in place to deal with corrective actions when deviations occur. This principle can be applied to continuous business process operations or recurring project work. It would not be normal to establish a set of quality-controlled procedures for a one-off situation, since the emphasis is consistency. This principle may be applied whether or not the organization seeks to establish or maintain an externally recognized quality certification such as ISO-900x. To achieve a certification, the procedures will be subjected to internal and external scrutiny.

Preparing for Quality Audit

Thorough procedures need to be defined, controlled, communicated and used.

Thorough: Procedures should cover all aspects of work where conformity and standards are required to achieve desired quality levels. For example, one might decide to control formal program testing, but leave the preliminary testing of a prototype to the programmer's discretion. Any recurring aspect of work could merit regulation.

Defined: The style and depth of the description will vary according to needs and preferences, provided it is sufficiently clear to be followed. A major tenet is that the defined procedures are good and will lead to the desired levels of quality. Considerable thought, consultation and trialing should be applied in order to define appropriate procedures. Procedures will often also require defined forms or software tools.

Controlled: As with any good quality management, the procedures should be properly controlled in terms of accessibility, version control, update authorities, etc.

Communicated: All participants need to know about the defined procedures: that they exist, where to find them and what they cover. Quality reviewers are likely to check that team members understand the procedures.

Used: The defined procedures should be followed. Checks will be made to ensure this is the case. A corrective action procedure will be applied to deal with shortcomings. Typically, the corrective action would either be to learn the lesson for next time, or to rework the item if it is sufficiently important.

There is no reason why these Quality Audit techniques should conflict with the project's Quality Management processes. Where project work is recurring, the aim should be for the Quality Methods and other procedures to be defined once for both purposes. Problems may occur where the current project has significant differences from earlier ones. Quality standards may have been set in stone as part of a quality certification. In extreme situations this can lead to wholly inappropriate procedures being forced upon the team, for example, using traditional structured analysis and design in a waterfall-style approach for what would be handled best using iterative prototyping. The Project Manager may need to re-negotiate quality standards with the organization's Quality Manager.

Operating Quality Audit

A Quality Audit approach affects the entire work lifecycle:

- Pre-defined standards will impact the way the project is planned.
- Quality requirements for specific work packages and deliverables will be identified in advance.
- Specific procedures will be followed at all stages.
- Quality Methods must be defined and followed.
- Completed work and deliverables should be reviewed for compliance.

This should be seen as an underlying framework and set of rules to apply in the project's Quality Management processes.

Quality Audit reviews

Although the impact of Quality Audit will be across all parts of the lifecycle, specific Quality Audit activities tend to be applied as retrospective reviews of whether the Project Team correctly followed its defined procedures. Such reviews are most likely to be applied at phase end and project completion. Of course, the major drawback of such a review is that it is normally too late to affect the outcome of the work. The emphasis is often on learning lessons and fixing administrative items. In many ways, the purpose of the review is to encourage conformity by the threat of a subsequent bad experience with the "quality police".

CHARACTERISTICS OF AUDITS

What is a quality auditor and what is the purpose of a quality audit? Is a quality audit similar to a financial audit? Is an audit the same as a surveillance or inspection? These types of questions are often asked by those unfamiliar with the quality auditing profession. As far as what a quality auditor is, Allan J. Sayle says it best: "Auditors are the most important of the quality professionals. They must have the best and most thorough knowledge of business, systems, developments, etc. They see what works, what does not work, strengths, weaknesses of standards, codes, procedures and systems."

The purpose of a quality audit is to assess or examine a product, the process used to produce a particular product or line of products, or the system supporting the product to be produced. A quality audit is also used to determine whether or not the subject of the audit is operating in compliance with governing source documentation, such as corporate directives, federal and state environmental protection laws and regulations, etc.

A quality audit distinguishes itself from a financial audit in that a financial audit's primary objective is to verify the integrity and accuracy of the accounting methods used within the organization. Yet, despite this basic difference, it is important to note that many of the present-day quality audit techniques have their traditional roots in financial audits.

WHO'S AUDITING WHOM?

The audit can be accomplished by three different sets of auditors and auditees: first party, second party, and third party.


First-Party Audits

The first-party audit is also known as an internal audit or self-audit. It is performed within your own company. This can be a central office group auditing one of the plants, auditing within a division, local audits within the plant, or any number of similar combinations. There are no external customer-supplier audit relationships here, just internal customers and suppliers.

Second-Party Audits

A customer performs a second-party audit on a supplier. A contract is in place and goods are being, or will be, delivered. If you are in the process of approving a potential supplier through the application of these auditing techniques, you are performing a supplier survey. A survey is performed before the contract is signed; an audit is performed after the contract is signed. Second-party audits are also called external audits, if you are the one doing the auditing. If your customer is auditing you, it is still a second-party audit, but, since you are now on the receiving end, this is an extrinsic (not external) audit.

Third-Party Audits

Regulators or registrars perform third-party audits. Government inspectors may examine your operations to see if regulations are being obeyed. Within the United States, this is quite common in regulated industries, such as nuclear power stations and medical device manufacturers. Through these regulatory audits, the consumer public receives assurance that the laws are being obeyed and products are safe. Registration audits are performed as a condition of joining or being approved. Hospitals and universities are accredited by non-governmental agencies to certain industry standards. Trade organizations may wish to promote the safety and quality of their industry products or services through an audit program and seal of approval. Other countries often use the term certification rather than registration. Businesses around the world are registering their facilities to the ISO 9001 standard in order to gain marketing advantage. Done properly, this registration promotes better business practices and greater efficiencies.


5.5 TQM CULTURE

Culture

Culture is the pattern of shared beliefs and values that provides the members of an organization rules of behaviour or accepted norms for conducting operations. It is the philosophies, ideologies, values, assumptions, beliefs, expectations, attitudes, and norms that knit an organization together and are shared by employees. For example, IBM's basic beliefs are (1) respect for the individual, (2) best customer service and (3) pursuit of excellence. In turn, these beliefs are operationalized in terms of strategy and customer values. In simple terms, culture provides a framework to explain "the way things are done around here". Other examples of basic beliefs include:

Company            Basic belief
Ford               Quality is job one
Delta              A family feeling
3M                 Product innovation
Lincoln Electric   Wages proportionate to productivity
Caterpillar        Strong dealer support; 24-hour spare parts support around the world
McDonald's         Fast service, consistent quality


Institutionalizing strategy requires a culture that supports the strategy. For most organizations a strategy based on TQM requires a significant if not sweeping change in the way people think. Jack Welch, head of General Electric and one of the most controversial and respected executives in America, states that cultural change must be sweeping: "not incremental change but quantum". His cultural transformation at GE calls for a "boundary-less" company where internal divisions blur, everyone works as a team, and both suppliers and customers are partners. His cultural concept of change may differ from Juran, who says that, when it comes to quality, "there is no such thing as improvement in general. Any improvement is going to come about project by project and no other way." The acknowledged experts agree on the need for a cultural or value system transformation:


Deming calls for a transformation of the American management style. Feigenbaum suggests a pervasive improvement throughout the organization. According to Crosby, "Quality is the result of a carefully constructed culture; it has to be the fabric of the organization."

It is not surprising that many executives hold the same opinions. In a Gallup Organization survey of 615 business executives, 43 percent rated a change in corporate culture as an integral part of improving quality. The needed change may be given different names in different companies. Robert Crandall, CEO of American Airlines, calls it an "innovative environment", while at DuPont it is "The Way People Think" and at Allied Signal "workers' attitudes had to change". Xerox specified a 5-year cultural change strategy called "Leadership through Quality".

Successful organizations have a central core culture around which the rest of the company revolves. It is important for the organization to have a sound basis of core values into which management and other employees will be drawn. Without this central core, the energy of members of the organization will dissipate as they develop plans, make decisions, communicate, and carry on operations without a fundamental criterion of relevance to guide them. This is particularly true in decisions related to quality. Research has shown that quality means different things to different people and levels in the organization. Employees tend to think like their peers and think differently from those at other levels. This suggests that organizations will have considerable difficulty in improving quality unless core values are embedded in the organization. Commitment to quality as a core value for planning, organizing and control will be doubly difficult when a concern for the practice is lacking. Research has shown that many U.S. supervisors believe that a concern for quality is lacking among workers and managers. Where this is the case, the perceptions of these supervisors may become a self-fulfilling prophecy.

Embedding a Culture of Quality

It is one thing for top management to state a commitment to quality but quite another for this commitment to be accepted or embedded in the company. The basic vehicle for embedding an organizational culture is a teaching process in which desired behaviours and activities are learned through experiences, symbols, and explicit behaviour. Once again, the components of the total quality system provide the vehicles for change. Above all, demonstration of commitment by top management is essential.



This commitment is demonstrated by behaviours and activities that are exhibited throughout the company. Categories of behaviours include:

Signalling: Making statements or taking actions that support the vision of quality, such as mission statements, creeds, or charters directed toward customer satisfaction. Publix supermarkets' "Where shopping is a pleasure" and JC Penney's "The customer is always right" are examples of such statements.

Focus: Every employee must know the mission, his or her part in it, and what has to be done to achieve it. What management pays attention to and how they react to crisis is indicative of this focus. When all functions and systems are aligned and when practice supports the culture, everyone is more likely to support the vision. Johnson & Johnson's cool reaction to the Tylenol scare is such an example.

Employee policies: These may be the clearest expression of culture, at least from the viewpoint of the employee. A culture of quality can be easily demonstrated in such policies as the reward and promotion system, status symbols, and other human resource actions.

Executives at all levels could learn a lesson from David T. Kearns, Chairman and Chief Executive Officer of Xerox Corporation. In an article for the academic journal Academy of Management Executive, he describes the change at Xerox: "At the time Leadership-Through-Quality was introduced, I told our employees that customer satisfaction would be our top priority and that it would change the culture of the company. We redefined quality as meeting the requirements of our customers. It may have been the most significant strategy Xerox ever embarked on." Among the changes brought about by the cultural change were the management style and the role of first-line management. Kearns continues: "We altered the role of first-line management from that of the traditional, dictatorial foreman to that of a supervisor functioning primarily as a coach and expediter." Using a modification of the Ishikawa (fishbone) diagram, Xerox demonstrated how the major component of the company's quality system was used for the transition to TQM.


5.6 LEADERSHIP

Leadership Commitment

People create results. Involving all employees is essential to the GE quality approach. GE is committed to providing opportunities and incentives for employees to focus their talents and energies on satisfying customers. All GE employees are trained in the strategy, statistical tools and techniques of Six Sigma Quality. Training courses are offered at various levels:

- Quality Overview Seminars: basic Six Sigma awareness.
- Team Training: basic tool introduction to equip employees to participate on Six Sigma teams.
- Master Black Belt, Black Belt and Green Belt Training: in-depth quality training that includes high-level statistical tools, basic quality control tools, Change Acceleration Process and Flow technology tools.
- Design for Six Sigma (DFSS) Training: prepares teams for the use of statistical tools to design it right the first time.

Quality is the responsibility of every employee. Every employee must be involved, motivated and knowledgeable if we are to succeed.

5.7 QUALITY COUNCIL

Quality Control

Quality control may generally be defined as a system that is used to maintain a desired level of quality in a product or service. This task may be achieved through different measures such as planning, design, use of proper equipment and procedures, inspection, and taking corrective action in case a deviation is observed between the product, service or process output and a specified standard (ASQC 1983; Walsh et al. 1986). This general area may be divided into three main sub-areas, namely off-line quality control, statistical process control, and acceptance sampling plans.

Off-Line Quality Control

Off-line quality control procedures deal with measures to select and choose controllable product and process parameters in such a way that the deviation between



the product or process output and the standard will be minimized. Much of this task is accomplished through product and process design. The goal is to come up with a design within the constraints of resources and environmental parameters such that when production takes place, the output meets the standard. Thus, to the extent possible, the product and process parameters are set before production begins. Principles of experimental design and the Taguchi method, discussed in a later chapter, provide information on off-line process control procedures.

Statistical Process Control

Statistical process control involves comparing the output of a process or a service with a standard and taking remedial actions in case of a discrepancy between the two. It also involves determining whether a process can produce a product that meets desired specifications or requirements. For example, to control paperwork errors in an administrative department, information might be gathered daily on the number of errors. If the observed number exceeds some specified standard, then on identification of possible causes, action should be taken to reduce the number of errors. This may involve training the administrative staff, simplifying operations if the error is of an arithmetic nature, redesigning the form, or other appropriate measures.

On-line statistical process control means that information is gathered about the product, process, or service while it is functional. When the output differs from a determined norm, corrective action is taken in that operational phase. It is preferable to take corrective actions on a real-time basis for quality control problems. This approach attempts to bring the system to an acceptable state as soon as possible, thus minimizing either the number of unacceptable items produced or the time over which undesirable service is rendered. One question that may come to mind is: Shouldn't all processes be controlled on an off-line basis? The answer is yes, to the extent possible. The prevailing theme of quality control is that quality has to be designed into the product or service; it cannot be inspected into it.
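The daily paperwork-error example above can be monitored with a count (c) chart, one of the standard Shewhart control charts. The following minimal Python sketch uses illustrative data, not figures from the text, and assumes the conventional three-sigma limits.

    import math

    # Illustrative daily paperwork-error counts for 20 days (hypothetical data).
    daily_errors = [4, 6, 3, 5, 7, 2, 4, 5, 6, 3, 4, 12, 5, 4, 3, 6, 5, 4, 2, 5]

    # Centre line of the c-chart: the average count per day.
    c_bar = sum(daily_errors) / len(daily_errors)

    # Conventional three-sigma limits; the lower limit cannot fall below zero.
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))
    print(f"centre line = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")

    # Days outside the limits signal a possible assignable cause worth investigating.
    for day, count in enumerate(daily_errors, start=1):
        if count > ucl or count < lcl:
            print(f"Day {day}: {count} errors falls outside the control limits")

With these made-up figures, the one unusually bad day is flagged while normal day-to-day variation is left alone, which is exactly the kind of real-time corrective trigger the paragraph above describes.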
However, in spite of taking off-line quality control measures, there may be a need for on-line quality control, because variation in the manufacturing stage of a product or the delivery stage of a service is inevitable. Therefore, some rectifying measures are needed in this phase. Ideally, a combination of off-line and on-line quality control measures will lead to a desirable level of operation.

Acceptance Sampling Plans

This branch of quality control deals with inspection of the product or service. When 100 percent inspection of all items is not feasible, a decision has to be made on how many items should be sampled or whether the batch should be sampled at all. The information obtained from the sample is used to decide whether to accept or reject the entire batch or lot. In the case of attributes, one parameter is the acceptable number of nonconforming items in the sample. If the observed number of nonconforming items is less than or equal to this number, the batch is accepted. This is known as the acceptance number. In the case of variables, one parameter may be the proportion of items in the sample that are outside the specifications. This proportion would have to be less than or equal to a standard for the lot to be accepted. A plan that determines the number of items to sample and the acceptance criteria of the lot, based on meeting certain stipulated conditions, is known as an acceptance sampling plan.

Let's consider a case of attribute inspection where an item is classified as conforming or not conforming to a specified thickness of 12 ± 0.4 mm. Suppose the items come in batches of 500 units. If an acceptance sampling plan with a sample size of 50 and an acceptance number of 3 is specified, then the interpretation of the plan is as follows. Fifty items will be randomly selected by the inspector from the batch of 500 items. Each of the 50 items will then be inspected and classified as conforming or not conforming. If the number of nonconforming items in the sample is 3 or less, the entire batch of 500 items is accepted. However, if the number of nonconforming items is greater than 3, the batch is rejected. Alternatively, the rejected batch may be screened; that is, each item is inspected and nonconforming ones are removed.
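For the plan in the worked example (sample size n = 50, acceptance number c = 3), the probability of accepting a batch can be sketched with a binomial model, a common approximation when the lot is large relative to the sample; an exact treatment of the 500-unit lot would use the hypergeometric distribution instead. The incoming nonconforming fractions tried below are assumed values for illustration, not figures from the text.

    from math import comb

    def acceptance_probability(n: int, c: int, p: float) -> float:
        """P(accept) = P(at most c nonconforming items in a sample of n), binomial model."""
        return sum(comb(n, d) * (p ** d) * ((1 - p) ** (n - d)) for d in range(c + 1))

    n, c = 50, 3  # sample size and acceptance number from the worked example
    for p in (0.01, 0.02, 0.05, 0.10):  # assumed incoming fractions nonconforming
        print(f"p = {p:.2f}: probability of accepting the batch = {acceptance_probability(n, c, p):.3f}")

Plotting these probabilities against p would give the operating characteristic (OC) curve of the plan, which shows how sharply the plan discriminates between good and poor incoming quality.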

Benefits of Quality Control

The goal of most companies is to conduct business in such a manner that an acceptable rate of return is obtained by the shareholders. What must be considered in this setting is the short-term goal versus the long-term goal. If the goal is to show a certain rate of return this coming year, this may not be an appropriate strategy, because the benefits of quality control may not be realized immediately. However, from a long-term perspective, a quality control system may lead to a rate of return that is not only better but is also sustainable. One of the drawbacks of the manner in which many U.S. companies operate is that the output of managers is measured in short time frames. It is difficult for a manager to show an increase of a 5 percent rate of return, say, in the quarter after implementing a quality system. Top management may then doubt the benefits of quality control. The advantages of a quality control system, however, become obvious in the long run.

First and foremost is the improvement in the quality of products and services. Production improves because a well-defined structure for achieving production goals is present. Second, the system is continually evaluated and modified to meet the changing needs of the customer. Therefore, a mechanism exists to rapidly modify product or process design, manufacture, and service to meet customer requirements so that the company remains competitive. Third, a quality control system improves productivity, which is a goal of every organization. It reduces the production of scrap and rework, thereby increasing the number of usable products. Fourth, such a system reduces costs in the long run. The notion that improved productivity and cost reduction do not go hand in hand is a myth. On the contrary, this is precisely what a quality control system does achieve. With the production of fewer nonconforming items, total costs decrease, which may lead to a reduced selling price and thus increased competitiveness. Fifth, with improved productivity, the lead time for producing parts and subassemblies is reduced, which results in improved delivery dates. Once again, quality control keeps customers satisfied. Meeting their needs on a timely basis helps sustain a good relationship. Last, but not least, a quality control system maintains an improvement environment where everyone strives for improved quality and productivity. There is no end to this process; there is always room for improvement. A company that adopts this philosophy and uses a quality control system to help meet this objective is one that will stay competitive.

5.8 EMPLOYEE INVOLVEMENT

Employee Involvement


In a Harvard Business Review article, David Gumpert described a small microbrewery where the head of the company attributed their success to a loyal, small, and involved work force. He found that keeping the operation small strengthened employee cohesiveness and gave them a feeling of responsibility and pride. This anecdote tells a lot about small groups and how they can impact motivation, productivity, and quality.

If quality is the objective, employee involvement in small groups and teams will greatly facilitate the result because of two reasons: motivation and productivity. The theory of motivation, but not necessarily its practice, is fairly mature, and there is substantial proof that it can work. By oversimplifying a complex theory, it can be shown why team membership is an effective motivational device that can lead to improved quality. Teams improve productivity as a result of greater motivation and reduced overlap and lack of communication in a functionally based classical structure characterized by territorial battles and parochial outlooks. There is always the danger that functional specialists, if left to their own devices, may pursue their own interests with little regard for the overall company mission. Team membership, particularly a cross-functional team, reduces many of these barriers and encourages an integrative systems approach to achievement of common objectives, those that are common to both the company and the team.

There are many success stories. To cite a few:

- Globe Metallurgical Inc., the first small company to win the Baldrige Award, had a 380 percent increase in productivity which was attributed primarily to self-managed work teams. The partnering concept requires a new corporate culture of participative management and teamwork throughout the entire organization.
- Ford increased productivity 28 percent by using the team concept with the same workers and equipment.
- Harleysville Insurance Company's Discovery program provides synergism resulting from the team approach. The program produced a cost saving of $3.5 million, along with enthusiasm and involvement among employees.
- At Decision Data Computer Corporation, middle management is trained to support Pride Teams.
- Martin Marietta Electronics and Missiles Group has achieved success with performance measurement teams (PMTs).
- Publishers Press has achieved significant productivity improvements and attitude change from the company's process improvement teams (PITs).
- Florida Power and Light Company, the utility that was the first recipient of the Deming Prize, has long had quality improvement teams as a fundamental component of its quality improvement program.


5.9 MOTIVATION

In psychology, motivation refers to the initiation, direction, intensity and persistence of behavior. Motivation is a temporal and dynamic state that should not be confused with personality or emotion. Motivation is having the desire and willingness to do something. A motivated person can be reaching for a long-term goal such as becoming a professional writer or a more short-term goal like learning how to spell a particular word. Personality invariably refers to more or less permanent characteristics of an individual's state of being (e.g., shy, extrovert, conscientious). As opposed to motivation, emotion refers to temporal states that do not immediately link to behavior (e.g., anger, grief, happiness).

Drive theory

There are a number of drive theories. The Drive Reduction Theory grows out of the concept that we have certain biological needs, such as hunger. As time passes, the strength of the drive increases if it is not satisfied. Then, as we satisfy that drive by fulfilling its desire, such as eating, the drive's strength is reduced. It is based on the theories of Freud and the idea of negative feedback systems, such as a thermostat.


There are several problems, however, that leave the validity of the Drive Reduction Theory open for debate. The first problem is that it does not explain how secondary reinforcers reduce drive. For example, money does not satisfy any biological or psychological need but reduces drive on a regular basis through a pay check (see: second-order conditioning). Secondly, if the drive reduction theory held true, we would not be able to explain how a hungry human being can prepare a meal without eating the food before the end of the preparation. Supposedly, the drive to satiate one's hunger would drive a person to consume the food; however, we prepare food on a regular basis and ignore the drive to eat. Thirdly, a drive is not able to be measured and therefore cannot be proven to exist in the first place (Barker 2004).

Rewards and incentives

A reward is that which is given following the occurrence of a behavior with the intention of acknowledging the positive nature of that behavior, and often with the additional intent of encouraging it to happen again. The definition of reward is not to be confused with the definition of reinforcer, which includes a measured increase in the rate of a desirable behavior following the addition of something to the environment. There are two kinds of rewards, extrinsic and intrinsic. Extrinsic rewards are external to, or outside of, the individual; for example, praise or money. Intrinsic rewards are internal to, or within, the individual; for example, satisfaction or accomplishment.

It was previously thought that the two types of motivation (intrinsic and extrinsic) were additive, and could be combined to produce the highest level of motivation. Some authors differentiate between two forms of intrinsic motivation: one based on enjoyment, the other on obligation. In this context, obligation refers to motivation based on what an individual thinks ought to be done. For instance, a feeling of responsibility for a mission may lead to helping others beyond what is easily observable, rewarded, or fun.

INTRINSIC MOTIVATION

Intrinsic motivation is evident when people engage in an activity for its own sake, without some obvious external incentive present. A hobby is a typical example.



Intrinsic motivation has been intensely studied by educational psychologists since the 1970s, and numerous studies have found it to be associated with high educational achievement and enjoyment by students. There is currently no grand unified theory to explain the origin or elements of intrinsic motivation. Most explanations combine elements of Bernard Weiner's attribution theory, Bandura's work on self-efficacy and other studies relating to locus of control and goal orientation. Thus it is thought that students are more likely to experience intrinsic motivation if they:


- Attribute their educational results to internal factors that they can control (e.g. the amount of effort they put in, not fixed ability).
- Believe they can be effective agents in reaching desired goals (e.g. the results are not determined by dumb luck).
- Are motivated towards deep mastery of a topic, instead of just rote-learning performance to get good grades.

Note that the idea of reward for achievement is absent from this model of intrinsic motivation, since rewards are an extrinsic factor. In knowledge-sharing communities and organizations, people often cite altruistic reasons for their participation, including contributing to a common good, a moral obligation to the group, mentorship or "giving back". This model of intrinsic motivation has emerged from three decades of research by hundreds of educationalists and is still evolving.

EXTRINSIC MOTIVATION

Traditionally, extrinsic motivation has been used to motivate employees:

- Tangible rewards such as payments, promotions (or punishments).
- Intangible rewards such as praise or public commendation.

Within economies transitioning from assembly lines to service industries, the importance of intrinsic motivation rises:

- The further jobs move away from pure assembly lines, the harder it becomes to measure individual productivity. This effect is most pronounced for knowledge workers and is amplified in teamwork.
- A lack of objective or universally accepted criteria for measuring individual productivity may make individual rewards arbitrary.

Since by definition intrinsic motivation does not rely on financial incentives, it is cheap in terms of dollars but expensive in the fact that the inherent rewards of the activity must be internalized before they can be experienced as intrinsically motivating. However, intrinsic motivation is no panacea for employee motivation. Problems include:

- For many commercially viable activities it may not be possible to find any or enough intrinsically motivated people.
- Intrinsically motivated employees need to eat, too. Other forms of compensation remain necessary.
- Intrinsic motivation is easily destroyed. For instance, additional extrinsic motivation is known to have a negative impact on intrinsic motivation in many cases; perceived injustice in awarding such external incentives even more so.

Telic and Paratelic motivational modes

Psychologist Michael Apter's studies of motivation led him to describe what he called the telic (from the Greek telos, or goal) and paratelic motivational modes, or states. In the telic state, a person is motivated primarily by a particular goal or objective, such as earning payment for work done. In the paratelic mode, a person is motivated primarily by the activity itself: intrinsic motivation.

Punishment

Punishment, when referred to in general, is an unfavorable condition introduced into the environment to eliminate undesirable behavior. It is used as one of the measures of behavior modification. An action resulting in punishment discourages repetition of that action.

Aggression

Aggression is generally used in the civil service area where units are devoted to maintaining law and order. In some environments officers are grounded by their superiors in order to perform better and to stay out of illegal activities.

Stress

Stress works in a strange way to motivate, like reverse psychology. When under stress and difficult situations, a person feels pressured. This may trigger feelings

of under-achieving, which results in a reverse mindset: the person strives to achieve. This is almost sub-conscious. The net amount of motivation under stress may motivate a person to work harder in order to compensate for his feelings.

Secondary goals

Important biological needs tend to generate more powerful emotions, and thus more powerful motivation, than secondary goals. This is described in models like Abraham Maslow's hierarchy of needs. A distinction can also be made between direct and indirect motivation: in direct motivation, the action satisfies the need; in indirect motivation, the action satisfies an intermediate goal, which can in turn lead to the satisfaction of a need. In work environments, money is typically viewed as a powerful indirect motivation, whereas job satisfaction and a pleasant social environment are more direct motivations. However, this example highlights well that an indirect motivational factor (money) towards an important goal (having food, clothes, etc.) may well be more powerful than the direct motivation provided by an enjoyable workplace.

Coercion

The most obvious form of motivation is coercion, where the avoidance of pain or other negative consequences has an immediate effect. When such coercion is permanent, it is considered slavery. While coercion is considered morally reprehensible in many philosophies, it is widely practiced on prisoners, students in mandatory schooling, and in the form of conscription. Critics of modern capitalism charge that without social safety networks, wage slavery is inevitable. However, many capitalists such as Ayn Rand have been very vocal against coercion. Successful coercion sometimes can take priority over other types of motivation. Self-coercion is rarely substantially negative (typically only negative in the sense that it avoids a positive, such as forgoing an expensive dinner or a period of relaxation); however, it is interesting in that it illustrates how lower levels of motivation may be sometimes tweaked to satisfy higher ones.

SOCIAL AND SELF REGULATION

Self control

The self-control of motivation is increasingly understood as a subset of emotional intelligence; a person may be highly intelligent according to a more conservative definition


(as measured by many intelligence tests), yet unmotivated to dedicate this intelligence to certain tasks. Victor Vroom's expectancy theory provides an account of when people will decide whether to exert self-control to pursue a particular goal. Self-control is often contrasted with automatic processes of stimulus-response, as in the methodological behaviorist paradigm of J.B. Watson.

Drives and desires can be described as a deficiency or need that activates behaviour that is aimed at a goal or an incentive. These are thought to originate within the individual and may not require external stimuli to encourage the behaviour. Basic drives could be sparked by deficiencies such as hunger, which motivates a person to seek food, whereas more subtle drives might be the desire for praise and approval, which motivates a person to behave in a manner pleasing to others. By contrast, the role of extrinsic rewards and stimuli can be seen in the example of training animals by giving them treats when they perform a trick correctly. The treat motivates the animals to perform the trick consistently, even later when the treat is removed from the process.

Business Application

At lower levels of Maslow's hierarchy of needs, such as physiological needs, money is a motivator; however, it tends to have a motivating effect on staff that lasts only for a short period (in accordance with Herzberg's two-factor model of motivation). At higher levels of the hierarchy, praise, respect, recognition, empowerment and a sense of belonging are far more powerful motivators than money, as both Abraham Maslow and Douglas McGregor (in his Theory X and Theory Y) have demonstrated vividly. Maslow has money at the lowest level of the hierarchy and shows other needs are better motivators to staff. McGregor places money in his Theory X category and feels it is a poor motivator. Praise and recognition are placed in the Theory Y category and are considered stronger motivators than money.

- Motivated employees always look for better ways to do a job.
- Motivated employees are more quality oriented.
- Motivated workers are more productive.



5.10 EMPOWERMENT

Empowerment

Empowerment means investing people with authority. Its purpose is to tap the enormous reservoir of potential contribution that lies within every worker. Empowerment is an environment in which people have the ability, the confidence and the commitment to take the responsibility and ownership to improve the process and initiate the necessary steps to satisfy customer requirements within well-defined boundaries in order to achieve organizational values and goals.

There are two steps to empowerment. One is to arm people to be successful through coaching, guidance and training. The second is letting people do things by themselves. Empowerment should not be confused with delegation or job enrichment. Delegation refers to distributing and entrusting work to others. Employee empowerment requires that the individual is held responsible for accomplishing a whole task. The principles of empowering people are given here:

1. Tell people what their responsibilities are.
2. Give authority that is commensurate with responsibility.
3. Set standards for excellence.
4. Render training.
5. Provide knowledge and information.
6. Trust them.
7. Allow them to commit mistakes.
8. Treat them with dignity and respect.


The empowerment matrix is shown here.



FIGURE 5.1 EMPOWERMENT MATRIX

One of the dimensions of empowerment is capability. Employees must have the ability, skills and knowledge needed to know their jobs, as well as the willingness to co-operate.


A key dimension of empowerment is alignment. All employees need to know the organization's mission, vision, values, policies, objectives and methodologies. Fully aligned employees not only know their roles, they are also dedicated to attaining the goals. Once the management has developed empowerment capabilities and alignment, it can unleash the power, creativity and resourcefulness of the workforce. This is not possible without trust. Employees need to trust management and feel that management trusts them. Mutual trust therefore completes the picture required to build an empowered workforce.

5.11 RECOGNITION AND REWARD

Quality Management Philosophies

W. Edwards Deming is best known for helping to lead the Japanese manufacturing sector out of the ruins of World War II to becoming a major presence in the world market. The highest quality award in Japan, the Deming Prize, is named in his honor. He is also known for his 14 points (a new philosophy for competing on the basis of quality), for the Deming Chain Reaction, and for the Theory of Profound Knowledge. Read more about Deming's Theory of Profound Knowledge at the MAAW web site. He also modified the Shewhart cycle (Plan, Do, Check, Act) to what is now referred to as the Deming Cycle (Plan, Do, Study, Act). Beginning in the early 1980s he finally came to prominence in the United States and played a major role in quality becoming a major competitive issue in American industry. His book, Out of the Crisis (1986), is considered a quality classic. Read more about Dr. Deming and his philosophy at the W. Edwards Deming Institute Home Page.

Joseph Juran also assisted the Japanese in their reconstruction. Juran first became well known in the quality field in the U.S. as the editor of the Quality Control Handbook (1951) and later for his paper introducing the quality trilogy. While Deming's approach is revolutionary in nature (i.e. throw out your old system and adopt the new philosophy of his 14 points), Juran's approach is more evolutionary (i.e. we can work to improve your current system). Deming refers to statistics as being the language of business, while Juran says that money is the language of business and quality efforts must be communicated to management in their language. Read more about Dr. Juran and his philosophy at the Juran Institute web site.

Phillip Crosby came to national prominence with the publication of his book, Quality is Free. He established the Absolutes of Quality Management, which include "the only performance standard (that makes any sense) is Zero Defects", and the Basic Elements of Improvement. Read more at the Phillip Crosby Associates II, Inc. home page.

Armand Feigenbaum is credited with the creation of the idea of total quality control in his 1951 book, Quality Control: Principles, Practice, and Administration, and in his 1956 article, "Total Quality Control". The Japanese adopted this concept and renamed it Company-Wide Quality Control, while it has evolved into Total Quality Management (TQM) in the U.S. There are other major contributors to the quality field as we know it today. The list of major contributors would include Walter Shewhart, Shigeo Shingo, Genichi Taguchi, Kaoru Ishikawa, and David Garvin, among others.

Quality Practice Award

The Quality Practice Award (QPA) is an award that is given to general practitioner practices in the United Kingdom to show recognition for high quality patient care by all members of staff in the team. It is awarded by the Royal College of General Practitioners (RCGP). For the practice to achieve the award, evidence has to be provided that conforms to set criteria in the following areas:

- Practice Profile
- Availability
- Clinical Care
- Communication
- Continuity of Care
- Equipment and Minor Surgery
- Health Promotion
- Information Technology
- Medical Records
- Nursing and Midwifery
- Practice Management
- Other Professional Staff
- Patient Issues
- Premises
- Prescribing/Repeat Prescribing
- The Practice as a Learning Organisation


After the evidence is completed, an onsite visit is arranged and takes place during a normal working day to assess the practice and interview the members of staff.

5.12 INFORMATION TECHNOLOGY

Information Technology (IT), as defined by the Information Technology Association of America (ITAA), is: "the study, design, development, implementation, support or management of computer-based information systems, particularly software applications and computer hardware." In short, IT deals with the use of electronic computers and computer software to convert, store, protect, process, transmit and retrieve information, securely. In this definition, the term "information" can usually be replaced by "data" without loss of meaning. Recently it has become popular to broaden the term to explicitly include the field of electronic communication, so that people tend to use the abbreviation ICT (Information and Communication Technology).

The term "information technology" came about in the 1970s. Its basic concept, however, can be traced back even further. Throughout the 20th century, an alliance between the military and various industries has existed in the development of electronics, computers, and information theory. The military has historically driven such research by providing motivation and funding for innovation in the field of mechanization and computing. The first commercial computer was the UNIVAC I. It was designed by J. Presper Eckert and John Mauchly for the U.S. Census Bureau. The late 1970s saw the rise of microcomputers, followed closely by IBM's personal computer in 1981. Since then, four generations of computers have evolved. Each generation represented a step that was characterized by hardware of decreased size and increased capabilities. The first generation used vacuum tubes, the second transistors, and the third integrated circuits. The fourth (and current) generation uses more complex systems such as very-large-scale integration.

Information technology refers to all forms of technology applied to processing, storing, and transmitting information in electronic form. The physical equipment used for this purpose includes computers, communications equipment and networks, fax machines, and even electronic pocket organizers. Information systems execute organized procedures that process and/or communicate information. We define information as a tangible or intangible entity that serves to reduce uncertainty about some state or event.



Data can originate from the internal operations of the firm and from external entities such as suppliers or customers. Data also come from external databases and services; for example, organizations purchase a great deal of marketing and competitive information. Brokerage firms provide a variety of research on different companies to clients. An information system usually processes these data in some way and presents the results to users. With the easy availability of personal computers, users often process the output of a formal system themselves in an ad hoc manner.

Human interpretation of information is extremely important in understanding how an organization reacts to the output of a system. Different results may mean different things to two managers. A marketing manager may use statistical programs and graphs to look for trends or problems with sales. A financial manager may see a problem with cash flow given the same sales data. The recipient of a system's output may be an individual, as in the example of the marketing manager, or it may be a workgroup. Many systems are used routinely for control purposes in the organization and require limited decision making. The accounts receivable application generally runs with little senior management oversight. It is a highly structured application with rules that can be followed by a clerical staff. A department manager handles exceptions. The output of some systems may be used as a part of a program or strategy. The system itself could be implementing a corporate strategy, such as simplifying the customer order process. A system might help managers make decisions.

Information technology, however, extends far beyond the computational capabilities of computers. Today computers are used extensively for communications as well as for their traditional roles of data storage and computation. Many computers are connected together using various kinds of communications lines to form networks. There are more than 43 million host computers, for example, on the Internet, and over 100 million computers around the world access it, an estimated 70 million of which are in the U.S. Through a network, individuals and organizations are linked together, and these linkages are changing the way we think about doing business. Boundaries between firms are breaking down from the electronic communications link provided by networks. Firms are willing to provide direct access to their systems for suppliers and customers. If the first era of computing was concerned with computation, the second era is about communications.



The Manager and IT

Managers are involved in a wide range of decisions about technology, decisions that are vital to the success of the organization. Some 45 to 50 percent of capital investment in the U.S. is for information, according to the Department of Commerce and other sources. Business Week estimates that there are 63 PCs per 100 workers in the U.S. (including machines at home), and others have estimated that one in three U.S. workers uses a computer on the job. A recent survey of 373 senior executives at large U.S. and Japanese companies found that 64 percent of the U.S. managers said they must use computers in their jobs. Other surveys have suggested that as many as 88 percent of managers use computers. One estimate is that in 1996, U.S. firms spent $500 billion on information technology while the IT bill for the world was $1 trillion. Because this technology is so pervasive, managers at all levels and in all functional areas of the firm are involved with IT. Managers are challenged with decisions about:

- The use of technology to design and structure the organization.
- The creation of alliances and partnerships that include electronic linkages. There is a growing trend for companies to connect with their customers and suppliers, and often with support service providers like law firms.
- The selection of systems to support different kinds of workers. Stockbrokers, traders and others use sophisticated computer-based workstations in performing their jobs. Choosing a vendor, designing the system, and implementing it are major challenges for management.
- The adoption of groupware or group-decision support systems for workers who share a common task. In many firms, the records of shared materials constitute one type of knowledge base for the corporation.
- Determining a World Wide Web strategy: the Internet and World Wide Web offer ways to provide information, communicate, and engage in commerce. A manager must determine if and how the firm can take advantage of the opportunities provided by the Web.
- Routine transactions processing systems: these applications handle the basic business transactions, for example, the order cycle from receiving a purchase order through shipping goods, invoicing, and receipt of payment. These routine systems must function for the firm to continue in business. More often today managers are eliminating physical documents in transactions processing and substituting electronic transmission over networks.

- Personal support systems: managers in a variety of positions use personal computers and networks to support their work.
- Reporting and control: managers have traditionally been concerned with controlling the organization and reporting results to management, shareholders, and the public. The information needed for reporting and control is contained in one or more databases on an internal computer network. Many reports are filed with the government and can be accessed through the Internet and the World Wide Web, including many 10-K filings and other SEC-required corporate reports.
- Automated production processes: one of the keys to competitive manufacturing is increasing efficiency and quality through automation. Similar improvements can be found in the service sectors through technologies such as image processing, optical storage, and workflow processing, in which paper is replaced by electronic images shared by staff members using networked workstations.
- Embedded products: increasingly, products contain embedded intelligence. A modern automobile may contain six or more computers on chips, for example, to control the engine and climate, compute statistics, and manage an antilock brake and traction control system. A colleague remarked a few years ago that his washing machine today contained more logic than the first computer he worked on.

Major Trends

In the past few years, six major trends have drastically altered the way organizations use technology. These trends make it imperative that a manager become familiar with both the use of technology and how to control it in the organization.

1. The use of technology to transform the organization: The cumulative effect of all the technology that firms are installing is to transform the organization and allow new types of organizational structures. Sometimes the transformation occurs slowly as one unit in an organization begins to use groupware. In other cases, as at Kennametal or Oticon (a Danish firm), the firm is totally different after the application of technology. This ability of information technology to transform organizations is one of the most powerful tools available to a manager today.
2. The use of information processing technology as a part of corporate strategy: Firms like Bron Passot are implementing information systems that give them an edge on the competition. Firms that prosper in the coming years will be managed by individuals who are able to develop creative, strategic applications of the technology.

3. Technology as a pervasive part of the work environment: From the largest corporations to the smallest business, we find technology is used to reduce labor, improve quality, provide better customer service, or change the way the firm operates. Factories use technology to design parts and control production. The small auto-repair shop uses a packaged personal computer system to prepare work orders and bills for its customers. The body shop uses a computer-controlled machine with lasers to take measurements so it can check the alignment of automobile suspensions, frames, and bodies. In this text, we shall see a large number of examples of how technology is applied to change and improve the way we work.

4. The use of technology to support knowledge workers: The personal computer has tremendous appeal. It is easy to use and has a variety of powerful software programs available that can dramatically increase the user's productivity. When connected to a network within the organization and to the Internet, it is a tremendous tool for knowledge workers.

5. The evolution of the computer from a computational device to a medium for communications: Computers first replaced punched card equipment and were used for purely computational tasks. From the large centralized computers, the technology evolved into desktop, personal computers. When users wanted access to information stored in different locations, companies developed networks to link terminals and computers to other computers. These networks have grown and become a medium for internal and external communications with other organizations. For many workers today, the communications aspects of computers are more important than their computational capabilities.

6. The growth of the Internet and World Wide Web: The Internet offers a tremendous amount of information on-line, information that you can search from your computer. Networks link people and organizations together, greatly speeding up the process of communications. The Internet makes expertise available regardless of time and distance, and provides access to information at any location connected to the Internet. Companies can expand their geographic scope electronically without having to open branch offices.
The Internet leads naturally to electronic commerce, creating new ways to market, contract, and complete transactions.

What does all this mean for the management student? The manager must be a competent user of computers and the Internet, and learn to manage information technology. The personal computer connected to a network is as commonplace in the office as the telephone has been for the past 75 years. Managers today are expected to make information technology an integral part of their jobs. It is the manager, not the technical staff member, who must come up with the idea for a system, allocate resources, and see that systems are designed well to provide the firm with a competitive edge. You will have to recognize opportunities to apply technology and then manage the implementation of the new technology. The success of information processing in the firm lies more with top and middle management than with the information services department.

Information technology today

Today, the term Information Technology has ballooned to encompass many aspects of computing and technology, and the term is more recognizable than ever before. The Information Technology umbrella can be quite large, covering many fields. IT professionals perform a variety of duties that range from installing applications to designing complex computer networks and information databases. A few of the duties that IT professionals perform may include:

Data Management
Computer Networking
Database Systems Design
Software Design
Management Information Systems
Systems Management

History of Information Technology

The term information technology came about in the 1970s. Its basic concept, however, can be traced back even further. Throughout the 20th century, an alliance between the military and various industries has existed in the development of electronics, computers, and information theory. The military has historically driven such research by providing motivation and funding for innovation in the field of mechanization and computing.
The first commercial computer was the UNIVAC I. It was designed by J. Presper Eckert and John Mauchly for the U.S. Census Bureau. The late 1970s saw the rise of microcomputers, followed closely by IBM's personal computer in 1981. Since then, four generations of computers have evolved. Each generation represented a step that was characterized by hardware of decreased size and increased capabilities. The first generation used vacuum tubes, the second transistors, and the third integrated circuits. The fourth (and current) generation uses more complex systems such as very-large-scale integration.

Four basic periods, each characterized by a principal technology used to solve the input, processing, output and communication problems of the time:
1. Premechanical,
2. Mechanical,
3. Electromechanical, and
4. Electronic

A. The Premechanical Age: 3000 B.C. - 1450 A.D.
1. Writing and Alphabets: communication.
2. Paper and Pens: input technologies.
3. Books and Libraries: permanent storage devices.
4. The First Numbering Systems.
5. The First Calculators: the Abacus.

B. The Mechanical Age: 1450 - 1840
1. The First Information Explosion.
2. The first general-purpose "computers".
3. Slide Rules, the Pascaline and Leibniz's Machine.
4. Babbage's Engines: the Difference Engine and the Analytical Engine.

C. The Electromechanical Age: 1840 - 1940

The discovery of ways to harness electricity was the key advance made during this period. Knowledge and information could now be converted into electrical impulses.

1. The Beginnings of Telecommunication
   1. Voltaic Battery - late 18th century.
   2. Telegraph - early 1800s.
   3. Morse Code - developed in 1835 by Samuel Morse, using dots and dashes.
   4. Telephone and Radio - the discovery that electrical waves travel through space and can produce an effect far from the point at which they originated led to the invention of the radio by Guglielmo Marconi in 1894.
   5. Electromechanical Computing - Herman Hollerith and IBM. Howard Aiken, a Ph.D. student at Harvard University, built the Mark I, completed in January 1942. It was 8 feet tall, 51 feet long, 2 feet thick, weighed 5 tons and used about 750,000 parts.

D. The Electronic Age: 1940 - Present

1. First tries: early 1940s, electronic vacuum tubes.
2. Eckert and Mauchly.
   1. The First High-Speed, General-Purpose Computer Using Vacuum Tubes: the Electronic Numerical Integrator and Computer (ENIAC), 1946.
   [Photo caption: The ENIAC team (Feb 14, 1946). Left to right: J. Presper Eckert, Jr.; John Grist Brainerd; Sam Feltman; Herman H. Goldstine; John W. Mauchly; Harold Pender; Major General G. L. Barnes; Colonel Paul N. Gillon.]
   ENIAC used vacuum tubes (not mechanical devices) to do its calculations; hence, it was the first electronic computer. Its developers were John Mauchly, a physicist, and J. Presper Eckert, an electrical engineer, of the Moore School of Electrical Engineering at the University of Pennsylvania, and it was funded by the U.S. Army. But it could not store its programs (its set of instructions).
   2. The First Stored-Program Computer(s)

In the early 1940s, Mauchly and Eckert began to design the EDVAC, the Electronic Discrete Variable Automatic Computer. John von Neumann's influential report of June 1945, the Report on the EDVAC, was used by British scientists, who outpaced the Americans. Max Newman headed up the effort at Manchester University, where the Manchester Mark I went into operation in June 1948, becoming the first stored-program computer. Maurice Wilkes, a British scientist at Cambridge University, completed the EDSAC (Electronic Delay Storage Automatic Calculator) in 1949, two years before EDVAC was finished. Thus, EDSAC became the first stored-program computer in general use (i.e., not a prototype).

The First General-Purpose Computer for Commercial Use: Universal Automatic Computer (UNIVAC). In the late 1940s, Eckert and Mauchly began the development of a computer called UNIVAC (Universal Automatic Computer) at Remington Rand. The first UNIVAC was delivered to the Census Bureau in 1951. But a machine called LEO (Lyons Electronic Office) went into action a few months before UNIVAC and became the world's first commercial computer.

THE FOUR GENERATIONS OF DIGITAL COMPUTING

1. The First Generation (1951-1958)
   1. Vacuum tubes as their main logic elements.
   2. Punch cards to input and externally store data.
   3. Rotating magnetic drums for internal storage of data and programs.
   4. Programs written in:
      1. Machine language
      2. Assembly language
         - Requires a compiler.

2. The Second Generation (1959-1963)

   1. Vacuum tubes replaced by transistors as the main logic element.
      1. Developed at AT&T's Bell Laboratories in the 1940s.
      2. Crystalline mineral materials called semiconductors could be used in the design of a device called a transistor.
   2. Magnetic tape and disks began to replace punched cards as external storage devices.
   3. Magnetic cores (very small donut-shaped magnets that could be polarized in one of two directions to represent data) strung on wire within the computer became the primary internal storage technology.
   4. High-level programming languages, e.g., FORTRAN and COBOL.

3. The Third Generation (1964-1979)
   1. Individual transistors were replaced by integrated circuits.

   2. Magnetic tape and disks completely replaced punch cards as external storage devices.
   3. Magnetic core internal memories began to give way to a new form, metal oxide semiconductor (MOS) memory, which, like integrated circuits, used silicon-backed chips.
   4. Operating systems appeared.
   5. Advanced programming languages like BASIC were developed, which is where Bill Gates and Microsoft got their start in 1975.

4. The Fourth Generation (1979-Present)
   1. Large-scale and very-large-scale integrated circuits (LSIs and VLSICs).

   2. Microprocessors that contained memory, logic, and control circuits (an entire CPU, or Central Processing Unit) on a single chip, which allowed for home-use personal computers, or PCs, like the Apple (II and Mac) and the IBM PC.
      The Apple II was released to the public in 1977 by Stephen Wozniak and Steven Jobs.

      It initially sold for $1,195 (without a monitor) and had 16K RAM. The first Apple Mac was released in 1984. The IBM PC was introduced in 1981 and debuted with MS-DOS (Microsoft Disk Operating System).
      Fourth-generation language software products appeared, e.g., VisiCalc, Lotus 1-2-3, dBase, Microsoft Word, and many others. Graphical User Interfaces (GUIs) for PCs arrived in the early 1980s.

Transforming Organizations How is information technology changing organizations? One impact of IT, is its use to develop new organizational structures. The organization that is most likely to result from the use of these variables is the T-Form or Technology-Form organization, an organization that uses IT to become highly efficient and effective. The firm has a flat structure made possible by using e-mail and groupware (programs that help co-ordinate people with a common task to perform) to increase the span of control and reduce managerial hierarchy. Employees co-ordinate their work with the help of electronic communications and linkages. Supervision of employees is based on trust because there are fewer face-to-face encounters with subordinates and colleagues than in todays organization. Managers delegate tasks and decision- making to lower levels of management, and information systems make data available at the level of management where it is needed to make decisions. In this way, the organization provides a fast response to competitors and customers. Some members of the organization primarily work remotely without having a permanent office assigned. The companys technological infrastructure features networks of computers. Individual client workstations connect over a network to larger computers that act as servers. The organization has an internal Intranet, and internal client computers are connected to the Internet so members of the firm can link to customers, suppliers, and others with whom they need to interact. They can also access the huge repository of information contained on the Internet and the firms own Intranet. Technology-enabled firms feature highly automated production and electronic information handling to minimize the use of paper and rely extensively on images and optical data storage. Technology is used to give workers jobs that are as complete as possible. In the office, companies will convert assembly line operations for processing documents to a series of tasks that one individual or a small group can perform from a
workstation. The firm also adopts and uses electronic agents, a kind of software robot, to perform a variety of tasks over networks. These organizations use communications technology to form temporary task forces focused on a specific project. Technology like e-mail and groupware facilitate the work of these task forces. These temporary workgroups may include employees of customers, suppliers, and/or partner corporations; they form a virtual team that meets electronically to work on a project. The organization is linked extensively with customers and suppliers. There are numerous electronic customer / supplier relationships. These linkages increase responsiveness, improve accuracy, reduce cycle times, and reduce the amount of overhead when firms do business with each other. Suppliers access customer computers directly to learn of their needs for materials, then deliver raw materials and assemblies to the proper location just as they are needed. Customers pay many suppliers as the customer consumes materials, dispensing with invoices and other documents associated with a purchase transaction. The close electronic linking of companies doing business together creates virtual components where traditional parts of the organization appear to exist, but in reality exist in a novel or unusual manner. For example, the traditional inventory of raw materials and subassemblies is likely not to be owned or stored by a manufacturing firm. This virtual inventory actually exists at suppliers locations. Possibly the subassemblies will not exist at all; suppliers will build them just in time to provide them to the customer. From the customers standpoint, however, it appears that all needed components are in inventory because suppliers are reliable partners in the production process. This model of a technology-enabled firm shows the extent to which mangers can apply IT to transforming the organization. The firms that succeed in the turbulent environment of the 21st century will take advantage of information technology to create innovative organizational structures. They will use IT to develop highly competitive products and services, and will be connected in a network with their customers and suppliers. The purpose of this book is to prepare you to manage in this technologically sophisticated environment of the 21 5t century. The Challenge of Change A major feature of information technology is the change that IT brings. Those who speak of a revolution from technology are really talking about change. Business and economic conditions change all the time; a revolution is a discontinuity, an abrupt and dramatic series of changes in the natural evolution of economies. In the early days of technology, change was gradual and often not particularly significant. The advent of

personal computers accelerated the pace of change, and when the Internet became available for profit-making activities around 1992, change became exponential and revolutionary. To a great extent, your study of information technology is a study of change. In what way can and does technology change the world around us? The impact of IT is broad and diverse; some of the changes it brings are profound. Information technology has demonstrated an ability to change or create the following: Within Organizations

Create new procedures, workflows, the knowledge base, products and services, and communications.

Organizational structure

Facilitate new reporting relationships, increased spans of control, local decision rights, supervision, the formation of divisions, geographic scope, and virtual organizations.

Interorganizational relations

Create new customer-supplier relations, partnerships, and alliances.

The economy

Alter the nature of markets through electronic commerce, disintermediation, new forms of marketing and advertising, partnerships and alliances, the cost of transactions, and modes of governance in customer-supplier relationships.

Education

Enhance on-campus education through videoconferencing, e-mail, electronic meetings, groupware, and electronic guest lectures. Facilitate distance learning through e-mail, groupware, and videoconferencing. Provide access to vast amounts of reference material; facilitate collaborative projects independent of time zones and distance.

National development

Provide small companies with international presence and facilitate commerce. Make large amounts of information available, perhaps to the consternation of certain governments. Present opportunities to improve education.
A more extensive list of related topics is provided below. Worldwide World Information Technology and Services Alliance (WITSA) is a consortium of over 60 information technology (IT) industry associations from economies around the world. Founded in 1978 and originally known as the World Computing Services Industry Association, WITSA has increasingly assumed an active advocacy role in international public policy issues affecting the creation of a robust global information infrastructure.

5.13 COMPUTER AND QUALITY FUNCTIONS


This section examines the extent to which computer-based systems are organized to enhance or degrade the quality of working life for clerks, administrative staff, professionals, and managers. Worklife merits a lot of attention for four reasons. First, work is a major component of many peoples lives. Wage income is the primary way that most people between the ages of 22 and 65 obtain money for food, housing, clothing, transportation, etc. The United States population is about 260,000,000, and well over 110,000,000 work for a living. So, major changes in the nature of work the number of jobs, the nature of jobs, career opportunities, job content, social relationships at work, working conditions of various kinds can affect a significant segment of society. Second, in the United States, most wage earners work thirty to sixty hours per week-a large fraction of their waking lives. And peoples experiences at work, whether good or bad, can shape other aspects of their lives as well. Work pressures or work pleasures can be carried home to families. Better jobs give people some room to grow when they seek more responsible, or complex positions, while stifled careers often breed boredom and resentment in comparably motivated people. Although people vary considerably in what kinds of experiences and opportunities they want from a job, few people would be thrilled with a monotonous and socially isolated job, even if it were to pay very well. Third, computerization has touched more people more visibly in their work than in any other kind of settinghome, schools, churches, banking, and so on. Workplaces are good places to examine how the dreams and dilemmas of computerization really work out for large numbers of people under an immense variety of social and technical conditions. Fourth, many aspects of the way that people work influence their relationships
to computer systems, the practical forms of computerization, and their effects. For example, in our last section, Steven Hodas argued that the tenuousness of many teachers classroom authority could discourage them from seeking computer supported instruction in their classes. Also, Martin Baily and Paul Attewell argued that computerization has had less influence on the productivity of organizations because people integrate into their work so as to provide other benefits to them, such as producing more professionallooking documents and enhancing their esteem with others, or managers becoming counterproductive control-freaks with computerized reports. When specialists discuss computerization and work, they often appeal to strong implicit images about the transformations of work in the last one hundred years, and the role that technologies have played in some of those changes. In nineteenth century North America, there was a major shift from farms to factories as the primary workplaces. Those shiftsoften associated with the industrial revolution-continued well into the early twentieth century. Industrial technologies such as the steam engine played a key role in the rise of industrialism. But ways of organizing work also altered significantly. The assembly line with relatively high-volume, lowcost production and standardized, fragmented jobs was a critical advance in the history of industrialization. During the last 100 years, farms also were increasingly mechanized, with motorized tractors, harvesters and other powerful equipment replacing horse-drawn plows and hand-held tools. The farms also have been increasingly reorganized. Family farms run by small groups have been dying out, and have been bought up (or replaced by) huge corporate farms with battalions of managers, accountants, and hired hands. Our twentieth century economy has been marked by the rise of human service jobs, in areas such as banking, insurance, travel, education, and health. And many of the earliest commercial computer systems were bought by large service organizations such as banks and insurance companies. (By some estimates, the finance industries bought about 30% of the computer hardware in the United States in the 1980s.) During the last three decades, computer use has spread to virtually every kind of workplace, although large firms are still the dominant investors in computer-based systems. Since offices are the predominant site of computerization, it is helpful to focus on offices in examining the role that these systems play in altering work. Today, the management of farms and factories is frequently supported with computer systems in their offices. Furthermore, approximately 50% of the staff of high tech manufacturing firms are white collar workers who make use of such systems engineers, accountants, marketing specialists, etc. There is also some computerization in factory production lines through the introduction of numerically controlled machine tools and industrial robots. And certainly issues such as worklife quality and managerial
control are just as real on the shop floor as in white collar areas (see Shaiken, 1986; Zuboff, 1988). While the selections here examine white collar work, the reader can consider the parallels between the computerization of white-collar work and of the shop floor.

TCP/IP and Routing

TCP is responsible for breaking up the message into datagrams, reassembling the datagrams at the other end, resending anything that gets lost, and putting things back in the right order. IP is responsible for routing individual datagrams. The datagrams are individually identified by a unique sequence number to facilitate reassembly in the correct order. The whole process of transmission is done through the use of routers. Routing is the process by which two communication stations find and use the optimum path across any network of any complexity. Routers must support fragmentation, the ability to subdivide received information into smaller units where this is required to match the underlying network technology. Routers operate by recognizing that a particular network number relates to a specific area within the interconnected networks. They keep track of the numbers throughout the entire process.

Domain Name System

The addressing system on the Internet generates IP addresses, which are usually indicated by numbers such as 128.201.86.29. Since such numbers are difficult to remember, a user-friendly system has been created, known as the Domain Name System (DNS). This system provides the mnemonic equivalent of a numeric IP address and further ensures that every site on the Internet has a unique address. For example, an Internet address might appear as crito.uci.edu. If this address is accessed through a Web browser, it is referred to as a URL (Uniform Resource Locator), and the full URL will appear as http://www.crito.uci.edu. The Domain Name System divides the Internet into a series of component networks called domains that enable e-mail (and other files) to be sent across the entire Internet. Each site attached to the Internet belongs to one of the domains. Universities, for example, belong to the edu domain. Other domains are gov (government), com (commercial organizations), mil (military), net (network service providers), and org (nonprofit organizations).

World Wide Web

The World Wide Web (WWW) is based on technology called hypertext. The Web may be thought of as a very large subset of the Internet, consisting of hypertext and hypermedia documents. A hypertext document is a document that has a reference (or link) to another hypertext document, which may be on the same computer or on a different computer that may be located anywhere in the world. Hypermedia is a similar concept except that it provides links to graphic, sound, and video files in addition to text files.
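To make the name-resolution and URL mechanics described above concrete, the short Python sketch below first resolves a host name to its numeric IP address (the job of the DNS) and then requests a page from that host over HTTP, which is what a browser does when given a URL. It is only an illustration: the host name www.example.com is a stand-in, and a real client would add error handling.

import socket
from urllib.request import urlopen

HOSTNAME = "www.example.com"   # stand-in host name, not taken from the text

# Step 1: DNS lookup - translate the mnemonic name into a numeric IP address.
ip_address = socket.gethostbyname(HOSTNAME)
print(HOSTNAME, "resolves to", ip_address)

# Step 2: HTTP request - fetch the document that the URL points to.
url = "http://" + HOSTNAME + "/"
with urlopen(url, timeout=10) as response:
    first_bytes = response.read(200)   # read just the start of the page
    print("HTTP status:", response.status)
    print("First bytes of the document:", first_bytes[:60])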

In order for the Web to work, every client must be able to display every document from any server. This is accomplished by imposing a set of standards known as a protocol to govern the way that data are transmitted across the Web. Thus data travel from client to server and back through a protocol known as the HyperText Transfer Protocol (HTTP). In order to access the documents that are transmitted through this protocol, a special program known as a browser is required, which browses the Web. See also World Wide Web.

Technological features

The Internet's technological success depends on its principal communication tools, the Transmission Control Protocol (TCP) and the Internet Protocol (IP). They are referred to frequently as TCP/IP. A protocol is an agreed-upon set of conventions that defines the rules of communication. TCP breaks down and reassembles packets, whereas IP is responsible for ensuring that the packets are sent to the right destination. Data travels across the Internet through several levels of networks until it reaches its destination. E-mail messages arrive at the mail server (similar to the local post office) from a remote personal computer connected by a modem, or a node on a local-area network. From the server, the messages pass through a router, a special-purpose computer ensuring that each message is sent to its correct destination. A message may pass through several networks to reach its destination. Each network has its own router that determines how best to move the message closer to its destination, taking into account the traffic on the network. A message passes from one network to the next, until it arrives at the destination network, from where it can be sent to the recipient, who has a mailbox on that network. See also Electronic mail; Local-area networks; Wide-area networks.

TCP/IP

TCP/IP is a set of protocols developed to allow cooperating computers to share resources across the networks. TCP/IP establishes the standards and rules by which messages are sent through the networks. The most important traditional TCP/IP services are file transfer, remote login, and mail transfer. The file transfer protocol (FTP) allows a user on any computer to get files from another computer, or to send files to another computer. Security is handled by requiring the user to specify a user name and password for the other computer. The network terminal protocol (TELNET) allows a user to log in on any other computer on the network. The user starts a remote session by specifying a computer to connect to. From that time until the end of the session, anything the user types is sent to the other computer. Mail transfer allows a user to send messages to users on other computers. Originally, people tended to use only one or two specific computers; they would maintain
mail files on those machines. The computer mail system is simply a way for a user to add a message to another user's mail file. Other services have also become important: resource sharing, diskless workstations, computer conferencing, transaction processing, security, multimedia access, and directory services.

Commerce on the Internet

Commerce on the Internet is known by a few other names, such as e-business, e-tailing (electronic retailing), and e-commerce. The strengths of e-business depend on the strengths of the Internet. Internet commerce is divided into two major segments, business-to-business (B2B) and business-to-consumer (B2C). In each are some companies that have started their businesses on the Internet, and others that have existed previously and are now transitioning into the Internet world. Some products and services, such as books, compact disks (CDs), computer software, and airline tickets, seem to be particularly suited for online business.

Internet

The Internet is a worldwide, publicly accessible network of interconnected computer networks that transmit data by packet switching using the standard Internet Protocol (IP). It is a network of networks that consists of millions of smaller domestic, academic, business, and government networks, which together carry various information and services, such as electronic mail, online chat, file transfer, and the interlinked Web pages and other documents of the World Wide Web. The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead. [1][2] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO, and saw universal networking as a potential unifying human revolution. Licklider had moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing. At the IPTO, Licklider recruited Lawrence Roberts to head a project to implement a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet

switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first node went live at UCLA on October 29, 1969 on what would be called the ARPANET, one of the eve networks of todays Internet. Following on from this, the British Post Office, Western Union International and Tymnet collaborated to create the first international packet switched network, referred to as the International Packet Switched Service (IPSS), in 1978. This network grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The first TCP/IP-wide area network was operational by January 1, 1983, when the United States National Science Foundation (NSF) constructed a university network backbone that would later become the NSFNet. It was then followed by the opening of the network to commercial interests in 1985. Important, separate networks that offered gateways into, then later merged with, the NSFNet include Usenet, BITNET and the various commercial and educational networks, such as X.25, Compuserve and JANET. Telenet (later called Sprintnet) was a large privately-funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network eventually merged with the others in the 1990s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over these pre-existing communication networks, especially the international X.25 IPSS network, allowed for a great ease of growth. Use of the term Internet to describe a single global TCP/IP network originated around this time. Growth The network gained a public face in the 1990s. On August 6, 1991, CERN, which straddles the border between France and Switzerland, publicized the new World Wide Web project, two years after Tim Berners-Lee had begun creating HTML, HTTP and the first few Web pages at CERN. An early popular web browser was ViolaWWW based upon HyperCard. It was eventually replaced in popularity by the Mosaic web browser. In 1993 the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic/technical Internet. By 1996 the word Internet was coming into common daily usage, frequently misused to refer to the World Wide Web. Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate) During the 1990s, it was estimated that the Internet grew by 100% per year, with a brief period of explosive growth in 1996 and 1997.[3] This growth is often attributed to the lack of central
administration, which allows organic growth of the network, as well as the nonproprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents anyone company from exerting too much control over the network. Todays Internet A rack of servers Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is essentially defined by its interconnections and routing policies. As of June 10, 2007, 1.133 billion people use the Internet according to Internet World Stats. Writing in the Harvard International Review, philosopher NJ.Slabbert, a writer on Chat rooms provide another popular Internet service. Internet Relay Chat (IRC) offers multiuser text conferencing on diverse topics. Dozens of IRC servers provide hundreds of channels that anyone can log onto and participate in via the keyboard. See IRC. The Original Internet The Internet started in 1969 as the ARPAnet. Funded by the U.S. government, the ARPAnet became a series of high-speed links between major supercomputer sites and educational and research institutions worldwide, although mostly in the U.S. A major part of its backbone was the National Science Foundations NFSNet. Along the way, it became known as the Internet or simply the Net. By the 1990s, so many networks had become part of it and so much traffic was not educational or pure research that it became obvious that the Internet was on its way to becoming a commercial venture. It Went Commercial in 1995 In 1995, the Internet was turned over to large commercial Internet providers (ISPs), such as MCI, Sprint and UUNET, which took responsibility for the backbones and have increasingly enhanced their capacities ever since. Regional ISPs link into these backbones to provide lines for their subscribers, and smaller ISPs hook either directly into the national backbones or into the regional ISPs. The TCP/IP Protocol Internet computers use the TCP/IP communications protocol. There are more than 100 million hosts on the Internet, a host being a mainframe or medium to high-end

server that is always online via TCP/IP. The Internet is also connected to non-TCP/IP networks worldwide through gateways that convert TCP/IP into other protocols. Life before the Web Before the Web and the graphics-based Web browser, the Internet was accessed from Unix terminals by academicians and scientists using command-driven Unix utilities. Some of these utilities are still widely used, but are available in all platforms, including Windows, Mac and Linux. For example, FTP is used to upload and download files, and Telnet lets you log onto a computer on the Internet and run a program. See FTP, Telnet, Archie, Gopher and Veronica. The Next Internet Ironically, some of the original academic and scientific users of the Internet have developed their own Internet once again. Internet2 is a high-speed academic research network that was started in much the same fashion as the original Internet (see Internet2). See Web vs. Internet, World Wide Web, how to search the Web, intranet, NAP, hot topics and trends, IAB, information superhighway and online service. Policy issues for the Washington DC-based Urban Land Institute, has asserted that the Internet is fast becoming a basic feature of global civilization, so that what has traditionally been called civil society is now becoming identical with information technology society as defined by Internet use. The largest network in the world. It is made up of more than 350 million computers in more than 100 countries covering commercial, academic and government endeavors. Originally developed for the u.s. military, the Internet became widely used for academic and commercial research. Users had access to unpublished data and journals on a variety of subjects. Today, the Net has become commercialized into a worldwide information highway, providing data and commentary on every subject and product on earth. E-Mail Was the Beginning The Internets surge in growth iri the mid-1990s was dramatic, increasing a hundredfold in 1995 and 1996 alone. There were two reasons. Up until then, the major online services (AOL, CompuServe, etc.) provided e-mail, but only to customers of the same service. As they began to connect to the Internet for e-mail exchange, the Internet took on the role of a global switching center. An AOL member could finally send mail to a CompuServe member, and so on. The Internet glued the world together for electronic mail, and today, SMTP, the Internet mail protocol, is the global e-mail standard. The Web Was the Explosion
Secondly, with the advent of graphics-based Web browsers such as Mosaic and Netscape Navigator, and soon after, Microsofts Internet Explorer, the World Wide Web took off. The Web became easily available to users with pes and Macs rather than only scientists and hackers at Unix workstations. Delphi was the first proprietary online service to offer Web access, and all the rest followed. At the same time, new Internet service providers (ISPs) rose out of the woodwork to offer access to individuals and companies. As a result, the Web grew exponentially, providing an information exchange of unprecedented proportion. The Web has also become the storehouse for drivers, updates and demos that are downloaded via the browser as well as a global transport for delivering information by subscription, both free and paid. Newsgroups Although daily news and information is now available on countless Web sites, long before the Web, information on a myriad of subjects was exchanged via Usenet (User Network) newsgroups. Still thriving, news group articles can be selected and read directly from your Web browser. Chat Rooms 5.15 ELECTRONIC COMMUNICATION Electronic communication (e-mail, bulletin boards and newsgroups) comprises a relatively new form of communication. Electronic communication differs from other methods of communication in the following key areas: * Speed The time required to generate, transmit, and respond to messages. * Permanence The methods of storing messages and the permanence of these files. * Cost of Distribution The visible cost of sending messages to one or more individuals. * Accessibility The direct communication channels between individuals. * Security and Privacy The ability of individuals to access electronically stored mail and files. * Sender Authenticity The ability to verify the sender of a message. In using electronic communications, we may need to reevaluate what to expect in terms of rules, guidelines, and human behavior. Our experiences with paper and telephone

communications may not tell us enough. For each of the key areas mentioned, the differences between electronic and other forms of communication are discussed below. Speed With electronic mail, written messages are delivered to the recipient within minutes of their transmission. Messages can be read at the recipients convenience, at any time of the day. Or, the recipient can respond immediately, and an asynchronous dialogue can develop which resembles a telephone conversation or a meeting. The ease and speed with which messages transmit often changes the writing style and formality of the written communication. These changes can lead to misinterpretation of messages, and a need arises for a new set of standards for the interpretation of message content. Permanence Electronic communications appear to be a volatile form of communication in which messages disappear when deleted. However, messages can be stored for years on disks or tapes, or they can be printed and/or stored in standard files. Unlike paper copy or a telephone message, a message also can be altered, then printed, without evidence that it is not original. Electronic messages may also be reformatted, then printed, as more formal or official correspondence. Cost of Distribution The associated costs of paper or telephone communication are familiar to most people. The cost of a US Mail message (paper, stamp(s), and the personnel time to prepare the message) are known and visible. Long distance telephone costs are visible in a monthly bill. Due to the cost and effort involved, correspondents often limit their paper or telephone messages to select individuals known to absolutely require the information. By comparison, electronic communication allows discourse with a large number of correspondents, over a wide geographical area, with no more effort or cost than is required to send a single message locally. This multiple-mailing capability often leads to wider transmission of messages than is necessary, and messages may be distributed to individuals with only a casual interest in the information. Accessibility Organizations develop channels of communication to filter paper or telephone messages to ensure that only appropriate individuals receive the information. Comparable mechanisms may not yet be in place for electronic mail. In using electronic communication, organizations may need to reevaluate office procedures to ensure consistent documentation of correspondence and to prevent inappropriate correspondence
burdening individuals. Security and Privacy Currently, no legal regulations exist regarding the security and privacy of electronic mail. The vast majority of electronic mail messages are delivered to the correct addressee without intervention. However, messages may be intercepted by individuals other than the sender or recipient for reasons discussed below. Incorrect Address Routing software uses the address in an electronic mail message to determine the network and protocols for message delivery. Each computer that handles a mail message stamps it with information that allows tracking of the message. This information allows improperly addressed messages to be sent back to the sender. Occasionally, for technical reasons, an improperly addressed message can not be sent back to the sender. The message then is sent to a system administrators mailbox. The systems administrator usually attempts to return the message to the sender with an error message indicating the problem with the address. Perusal by Unauthorized Individuals Mail delivered to a secure file storage area on a computer is held there until the recipient retrieves it. The file can only be read by the owner of the mail while in storage. Once the mail is in the owners home directory, security depends on the owner. One group of users on every system has access to fill files on a system. These systems administrators have special privileges required to maintain the system. While these individuals have the ability to peruse private files, it is considered unprofessional to do so. Systems administrators normally access only those files required to perform their job. Sender authenticity Standard mail protocols do not test the From: portion of a message header for authenticity. A knowledgeable person can modify the From: address of messages. This is an extremely common occurrence Electronic communication in technical sense is deliberated as: A transmitter that takes information and converts it to a signal. A transmission medium over which the signal is transmitted. A receiver that receives the signal and converts it back into usable information. For example, consider a radio broadcast: In this case the broadcast tower is the transmitter, the radio is the receiver and the transmission medium is free space. Often

telecommunication systems are two-way and a single device acts as both a transmitter and receiver, or transceiver. For example, a mobile phone is a transceiver. Telecommunication over a phone line is called point-to-point communication because it is between one transmitter and one receiver. Telecommunication through radio broadcasts is called broadcast communication because it is between one powerful transmitter and numerous receivers. Analogue or digital Signals can either be analogue or digital. In an analogue signal, the signal is varied continuously with respect to the information. In a digital signal, the information is encoded as a set of discrete values (e.g. ls and Os). During transmission, the information contained in analogue signals will be degraded by noise. Conversely, unless the noise exceeds a certain threshold, the information contained in digital signals will remain intact. This represents a key advantage of digital signals over analogue signals. Networks A collection of transmitters, receivers or transceivers that communicate with each other is known as a network. Digital networks may consist of one or more routers that route data to the correct user. An analogue network may consist of one or more switches that establish a connection between two or more users. For both types of network, a repeater may be necessary to amplify or recreate the signal when it is being transmitted over long distances. This is to combat attenuation that can render the signal indistinguishable from noise. Channels A channel is a division in a transmission medium so that it can be used to send multiple streams of information. For example, a radio station may broadcast at 96 MHz while another radio station may broadcast at 94.5 MHz. In this case the medium has been divided by frequency and each channel received a separate frequency to broadcast on. Alternatively, one could allocate each channel a recurring segment of time over which to broadcast - this i.s known as time-division multiplexing and is sometimes used in digital communication. Modulation The shaping of a signal to convey information is known as modulation. Modulation can be used to represent a digital message as an analogue waveform. This is known as keying and several keying techniques exist (these include phase-shift keying, frequencyshift keying and amplitude-shift keying). Bluetooth, for example, uses phase-shift keying to exchange information between devices. Modulation can also be used to transmit the information of analogue signals at
higher frequencies. This is helpful because low-frequency analogue signals cannot be effectively transmitted over free space. Hence the information from a low-frequency analogue signal must be superimposed on a higher-frequency signal (known as a carrier wave) before transmission. There are several different modulation schemes available to achieve this (two of the most basic being amplitude modulation and frequency modulation). An example of this process in action is a DJ's voice being superimposed on a 96 MHz carrier wave using frequency modulation (the voice would then be received on a radio as the channel 96 FM).

Telecommunication

Telecommunications is the transmission of data and information between computers using a communications link such as a standard telephone line. Typically, a basic telecommunications system would consist of a computer or terminal on each end, communication equipment for sending and receiving data, and a communication channel connecting the two users. Appropriate communications software is also necessary to manage the transmission of data between computers. Some applications that rely on this communications technology include the following:

1. Electronic mail (e-mail) is a message transmitted from one person to another through computerized channels. Both the sender and receiver must have access to on-line services if they are not connected to the same network. E-mail is now one of the most frequently used types of telecommunication (a small illustrative sketch of sending a message follows this list).
2. Facsimile (fax) equipment transmits a digitized exact image of a document over telephone lines. At the receiving end, the fax machine converts the digitized data back into its original form.
3. Voice mail is similar to an answering machine in that it permits a caller to leave a voice message in a voice mailbox. Messages are digitized so the caller's message can be stored on a disk.
4. Videoconferencing involves the use of computers, television cameras, and communications software and equipment. This equipment makes it possible to conduct electronic meetings while the participants are at different locations.
5. The Internet is a continuously evolving global network of computer networks that facilitates access to information on thousands of topics. The Internet is utilized by millions of people daily.
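As a concrete illustration of item 1 above, the minimal Python sketch below composes an e-mail message and hands it to an SMTP mail server, which then routes it toward the recipient's mailbox in the way described in the text. The addresses and the server name mail.example.com are placeholders chosen for the example; a real installation would also handle authentication and delivery errors.

import smtplib
from email.message import EmailMessage

# Compose a short message. The addresses below are invented for illustration.
msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Monthly quality report"
msg.set_content("The quality report for this month is now on the intranet.")

# Hand the message to an SMTP server for delivery.
with smtplib.SMTP("mail.example.com", 25, timeout=10) as server:
    server.send_message(msg)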

Actually, telecommunications is not a new concept. It began in the mid-1800s with the telegraph, whereby sounds were translated manually into words; then the telephone, developed in 1876, transmitted voices; and then the teletypewriter, developed in the early 1900s, was able to transmit the written word.
Since the 1960s, telecommunications development has been rapid and wide reaching. The development of dial modem technology accelerated the rate during the 1980s. Facsimile transmission also enjoyed rapid growth during this time. The 1990s have seen the greatest advancement in telecommunications. It is predicted that computing performance will double every eighteen months. In addition, it has been estimated that the power of the computer has doubled thirty-two times since World War II. The rate of advancement in computer technology shows no signs of slowing. To illustrate the computers rapid growth, Ronald Brown, former U.S. secretary of commerce, reported that only fifty thousand computers existed in the world in 1975, whereas, by 1995, it was estimated that more than fifty thousand computers were sold every ten hours (U.S. Department of Commerce, 1995). Deregulation and new technology have created increased competition and widened the range of network services available throughout the world. This increase in telecommunication capabilities allows businesses to benefit from the information revolution in numerous ways, such as streamlining their inventories, increasing productivity, and identifying new markets. In the following sections, the technology of modern telecommunications will be discussed. Communications Networks When computers were first invented, they were designed as stand-alone systems. As computers became more widespread, practical, useful, and indispensable, network systems were developed that allowed communication between computers. The term network describes computers that are connected for the purpose of sharing data, software, and hardware. The two types of networks include local area networks (LANs) and wide area networks (WANs). As the name suggests, LANs cover a limited geographic area, usually a square mile or less. This limited area can be confined to a room, a building, or a group of buildings. Although a LAN can include one central computer connected to terminals, more commonly it connects a group of personal computers. A WAN covers a much larger geographic area by means of telephone cables and/or other communications channels. WANs are often used to connect a companys branch offices in different cities. Some familiar public wide area networks include AT&T, Sprint, and MCI. Internet, Intranet, and Extranet Internet work is the term used to describe two or more networks that are joined together. The term Internet describes the collection of connected networks. The Internet has been made accessible by use of the World Wide Web. The Web allows users to navigate the milli~ns of sites found on the Internet using software applications called Web browsers. People make use of the Internet in numerous ways
for both personal and business applications. For instance, an investor is able to access a company directly and set up an investment account; a student is able to research an assigned topic for a class report; a shopper can obtain information on new and used cars. The Internet concept of global access to information transferred to a private corporate network creates an intranet. In conjunction with corporate Internet access, many companies are finding that it is highly practical to have an internal intranet. Because of the increased need for fast and accurate information, an efficient and seamless communications line enabling all members to access a wealth of relevant information instantaneously is vital. A company intranet in conjunction with the Internet can provide various types of information for internal and/or external use. Uses such as instantaneous transfer of information, reduced printing and reprinting, and elimination of out-of-date information can provide great benefits to geographically dispersed groups. Some examples of information that an intranet might include are company and procedures manuals, a company phonebook and e-mail listings, insurance and benefits information, in-house publications, job postings, expense reports, bulletin boards for employee memoranda, training information, inventory lists, price lists, and inventory control information. Putting such applications on an intranet can serve a large group of users at a substantially reduced cost. Some companies might want to make some company information accessible to preauthorized people outside the company or even to the general public. This can be done by using an extranet. An extranet is a collaborative network that uses Internet technology to link businesses with their suppliers, customers, or other businesses. An extranet can be viewed as part of a companys intranet. Access by customers would allow entering orders into a companys system. For example, a person may order airline tickets, check the plane schedule, and customize the trip to his or her preferences. In addition to time and labor savings, this type of order entry could also decrease errors made by employees when entering manually prepared orders. Security and privacy can be an issue in using an extranet. One way to provide this security and privacy would be by using the Internet with access via password authorization. Computer dial in and Internet access to many financial institutions is now available. This is an example of limited access to information. While bank employees have access to many facets of institutional information, the bank customers are able to access only information that has to do with their own accounts. In addition to their banking account number, they would have to use their password to gain access to the information.
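The password-controlled access just described can be pictured with a small, purely illustrative Python sketch: the customer supplies an account number and a password, and only a matching (salted and hashed) password releases that customer's own information. The account number, password and balance below are invented, and a production extranet would of course run over an encrypted connection against a real user store.

import hashlib
import os

def hash_password(password, salt):
    # Derive a salted hash so the plain password is never stored.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
accounts = {
    "1002003004": {                      # banking account number (invented)
        "password_hash": hash_password("s3cret!", salt),
        "salt": salt,
        "balance": 15250.75,
    }
}

def view_balance(account_no, password):
    record = accounts.get(account_no)
    if record is None:
        return "Unknown account"
    if hash_password(password, record["salt"]) != record["password_hash"]:
        return "Access denied"           # wrong password: no information is shown
    return "Balance for " + account_no + ": " + str(record["balance"])

print(view_balance("1002003004", "s3cret!"))   # authorized access
print(view_balance("1002003004", "guess"))     # rejected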

Transmission Media

The physical devices making up the communications channel are known as the transmission media. These devices include cabling media (such as twisted-pair cable, coaxial cable, and fiber-optic cable) and wireless media (such as microwaves and other radio waves as well as infrared light). Wireless transmission has the advantage of not requiring physical connections to be installed at every point. Microwave stations use radio waves to send both voice and digital signals. The principal drawback of this system is that microwave transmission is limited to line-of-sight applications. Relay antennas are usually placed twenty-five to seventy-five miles apart and can have no interfering buildings or mountains between them. Earth-based microwave transmissions, called terrestrial microwaves, send data from one microwave station to another, similar to the way cellular telephone signals are transmitted. Earth stations receive microwave transmissions and transmit them to orbiting communication satellites, which then relay them over great distances to receiving earth stations. Geosynchronous satellites are usually placed roughly twenty-two thousand miles above the earth. Being geosynchronous allows the satellites to remain in fixed positions above the earth and to be constantly available to a given group of earth stations. Many businesses either lease or rent satellite and/or microwave communication services through the telephone company or other satellite communication companies. If a business has only a small amount of information to transmit each day, it may prefer to use a small satellite dish antenna instead.

Types of Signals and Their Conversion by Modem

Most telecommunications involving personal computers make use of standard telephone lines at some point in their data transmission. But since computers have been developed to work with digital signals, their transmission presents a signal-compatibility problem. Digital signals are on/off electrical pulses grouped in a manner that represents data. Originally, telephone equipment was designed to carry only voice transmission and operated with a continuous electrical wave called an analog signal. In order for telephone lines to carry digital signals, a special piece of equipment called a modem (modulator/demodulator) is used to convert between digital and analog signals. Modems can be either external to the computer, and thus easily moved from one computer to another, or internally mounted inside the computer. Modems are always used in pairs, and both the receiving and transmitting modems must operate at the same speed.
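The digital-to-analog conversion performed by a modem can be illustrated with a toy example. The sketch below is not a real modem implementation; it simply maps each bit to one of two tones (a simple form of frequency-shift keying) and generates the corresponding waveform samples. The sample rate, bit duration and tone frequencies are arbitrary choices made for the illustration.

```python
# Toy illustration of turning digital bits into an analog-style waveform
# using simple frequency-shift keying: one tone for 0, another for 1.
import math

SAMPLE_RATE = 8000           # samples per second
BIT_DURATION = 0.01          # seconds per bit
FREQ_0, FREQ_1 = 1200, 2200  # tone frequencies in hertz for bits 0 and 1

def modulate(bits: str) -> list:
    """Convert a string of '0'/'1' characters into waveform samples."""
    samples = []
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    for bit in bits:
        freq = FREQ_1 if bit == "1" else FREQ_0
        for n in range(samples_per_bit):
            t = n / SAMPLE_RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

wave = modulate("1011")
print(len(wave), "samples generated for 4 bits")
```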
Multiple transmission speeds allow faster modems to reduce their speed to match that of a slower modem. The transmission rate and direction are determining factors that influence the speed, accuracy, and efficiency of telecommunications systems.

5.16 INFORMATION QUALITY ISSUES

Information quality (IQ) is a term used to describe the quality of the content of information systems. Most information system practitioners use the term synonymously with data quality. However, as many academics make a distinction between data and information, some insist on a distinction between data quality and information quality. Information quality assurance is confidence that particular information meets some context-specific quality requirements.

Information quality is a measure of the value which the information provides to its user. Quality is subjective, and the quality of information can vary among users and among uses of the information. Furthermore, accuracy is just one element of IQ, and it can be source-dependent. Often there is a trade-off between accuracy and other aspects of the information that determine its suitability for a given task.

Dimensions of Information Quality

The generally accepted list of elements used in assessing subjective information quality is the one put forth in Wang & Strong (1996):
Intrinsic IQ: Accuracy, Objectivity, Believability, Reputation
Contextual IQ: Relevancy, Value-Added, Timeliness, Completeness, Amount of information
Representational IQ: Interpretability, Ease of understanding, Concise representation, Consistent representation
Accessibility IQ: Accessibility, Access security
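As a rough illustration of how these dimensions might be recorded in practice, the sketch below stores scores for a data source grouped by the four categories and computes a simple average. The 1-5 scoring scale and the example values are assumptions made for the illustration, not part of Wang & Strong's framework.

```python
# Sketch: recording IQ dimension scores for a data source (illustrative only).
from dataclasses import dataclass, field

@dataclass
class IQAssessment:
    source: str
    intrinsic: dict = field(default_factory=dict)         # accuracy, objectivity, ...
    contextual: dict = field(default_factory=dict)        # relevancy, timeliness, ...
    representational: dict = field(default_factory=dict)  # interpretability, ...
    accessibility: dict = field(default_factory=dict)     # accessibility, security

    def average(self) -> float:
        """Average all recorded scores across the four categories."""
        scores = [v for group in (self.intrinsic, self.contextual,
                                  self.representational, self.accessibility)
                  for v in group.values()]
        return sum(scores) / len(scores) if scores else 0.0

report = IQAssessment(
    source="supplier master file",
    intrinsic={"accuracy": 4, "objectivity": 5},
    contextual={"timeliness": 3, "completeness": 2},
)
print(round(report.average(), 2))   # 3.5
```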

Researchers should evaluate the quality of information appearing online or in print based on five criteria: scope of coverage, authority, objectivity, accuracy and timeliness. This section defines the criteria and suggests strategies that help the reader detect the questionable, false or fraudulent information regularly reported in the news and trade literature.

Criteria for Quality in Information

To evaluate information, you should understand the significance of scope of coverage, authority, objectivity, accuracy and timeliness.
Scope of coverage: Refers to the extent to which a source explores a topic. Consider time periods, geography or jurisdiction, and coverage of related or narrower topics.
Authority: Refers to the expertise or recognized official status of a source. When working with legal or government information, consider whether the source is the official provider of the information.
Objectivity: Refers to the bias or opinion expressed when a source interprets or analyses facts.
Accuracy: Describes information that is correct and free from error.
Timeliness: Refers to how current the information is at the time of publication.

The most basic requirements of good information are:
Objectivity: The information is presented in a manner free from propaganda or disinformation.
Completeness: The information gives a complete, not a partial, picture of the subject.
Pluralism: All aspects of the information are given and are not restricted to present a particular viewpoint, as in the case of censorship.

To achieve quality in electronic information, it is necessary to be sure that one is retrieving all of the relevant information, and then to determine which of the retrieved information is valuable and which is free of bias, propaganda, or omissions. To have quality information, three things are necessary:
Gaining full and appropriate access to the available information.
Making full use of the retrieval mechanisms, which requires an understanding of how these mechanisms work.
Evaluating the quality of the retrieved information.
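A simple way to operationalize the last step is a checklist covering the five criteria discussed earlier. The sketch below is only illustrative; the wording of the questions and the all-or-nothing pass rule are assumptions, and a real evaluation would usually be more nuanced.

```python
# Illustrative checklist for evaluating a retrieved source against the
# five criteria discussed above (scope, authority, objectivity, accuracy, timeliness).
CHECKLIST = {
    "scope":       "Does the source cover the topic, period and region needed?",
    "authority":   "Is the author or publisher a recognized or official source?",
    "objectivity": "Is the content free of obvious bias or propaganda?",
    "accuracy":    "Can the key facts be verified elsewhere?",
    "timeliness":  "Is the information current enough for the task?",
}

def evaluate_source(answers: dict) -> bool:
    """Return True only if every criterion in the checklist is satisfied."""
    return all(answers.get(criterion, False) for criterion in CHECKLIST)

example = {"scope": True, "authority": True, "objectivity": True,
           "accuracy": True, "timeliness": False}
print(evaluate_source(example))   # False - the source is out of date
```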

The World Wide Web holds the potential to become the greatest repository of knowledge ever created. Unlike material in a traditional library, material on the Web is frequently self-published, stored in quasi-secured repositories, and often of unknown validity. The government, and it would seem a majority of the American population, favor public access to the Web through public libraries and public schools. Librarians therefore face a new set of challenges in helping patrons access and utilize this medium. Schools and public libraries face three main challenges:
Providing safe access; the major concern here is appropriate access for minors.
Locating useful, quality information on the Web.
Evaluating the information to verify its quality.

SUMMARY

Standards are important in ensuring the TQM culture in organizations. Quality Management Systems use various standards and guiding principles to ensure adherence to objectives and to improve performance. A quality audit reveals the actual standing of the system. Leadership plays an important and indispensable role in winning over the employees of an organization through various methods of recognition and reward. Developments in IT have helped in defining quality functions, and computers and the Internet are used to address issues related to information quality.

REVIEW QUESTIONS

1. Explain the procedure for establishing a quality system in organizations.
2. Describe ISO 9004:2000 and state its scope and applications.
3. Enumerate the guidelines for performance improvement in the service sector.
4. "Culture plays an important role in achieving TQM in organizations." Discuss.
5. Describe the various types of leadership and explain their impact on employees.
6. Trace the developments in Information Technology over the years.
7. "Computers alone do not ensure quality." Critically examine.
8. Detail the quality issues related to information and explain the impact of wrong information on the quality assurance process.
