
CONTENTS

ABSTRACT
INTRODUCTION
WHAT IS AUTONOMIC COMPUTING?
KEY ELEMENTS OF AUTONOMIC COMPUTING
FUNDAMENTALS OF AUTONOMIC COMPUTING
AUTONOMIC COMPUTING AND CURRENT COMPUTING SYSTEM
AUTONOMIC COMPUTING ARCHITECTURE
NEED FOR AUTONOMIC COMPUTING
BENEFITS
CHALLENGES
CONCLUSIONS


ABSTRACT

Imagine a world where computers fix their own problems before you even know something is wrong. IBM is building that world with a range of autonomic computing capabilities across all of its product lines, helping organizations control an increasingly complex and expensive IT business. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user. As computing systems evolve, they are subject to continuous growth in the number of degrees of freedom that must be well managed in order to maintain their efficiency. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual. The aim of autonomic computing is to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth. In other words, autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users. An autonomic system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. This white paper presents an overview of autonomic computing and its basic fundamental features. This seminar report aims to explain the basic concepts of autonomic computing.

Submitted by Saurabh S. Gilalkar, IIIrd year, Computer Engineering, Government Polytechnic, Amravati.


INTRODUCTION "Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead This quote made by the preeminent mathematician Alfred Whitehead holds both the lock and the key to the next era of computing. It implies a threshold moment surpassed only after humans have been able to automate increasingly complex tasks in order to achieve forward momentum. There is every reason to believe that we are at just such a threshold right now in computing. The millions of businesses, billions of humans that compose them, and trillions of devices that they will depend upon all require the services of the I/T industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating shortage of skilled I/T workers to manage all of the systems. The high-tech industry has spent decades creating computer systems with ever- mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. Its a problem that's not going away, but will grow exponentially, just as our dependence on technology has. The high-tech industry has spent decades creating computer systems with ever mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. The solution may lie in automation, or creating a new capacity where important computing operations can run without the need for human intervention. On October 15th, 2001 Paul Horn, senior vice president of IBM Research addressed the Agenda conference, an annual meeting of the preeminent technological minds, held in Arizona. In his speech, and in a document he distributed there, he suggested a solution: build computer systems that regulate themselves much in the same way our nervous systems regulates and protects our bodies.

This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. This is not a proprietary solution. It is a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.


WHAT IS AUTONOMIC COMPUTING?

Autonomic computing is a new vision of computing initiated by IBM. It is the ability of systems to be more self-managing. Autonomic computing is the next generation of integrated computer technology that will allow networks to manage themselves with little or no human intervention. In choosing the word "autonomic", we are making an analogy with the autonomic nervous system, which controls many organs and muscles in the human body and sends impulses that control heart rate, breathing and other functions without conscious thought or effort. The autonomic nervous system frees our conscious brain from the burden of having to deal with vital but lower-level functions; autonomic computing will likewise free system administrators from many of today's routine management and operational tasks. Autonomic computing is the result of the realization that unless we begin to build computing systems that reduce the complexity for those who use and manage them, we will not have the time or the expertise to unravel problems arising in newer systems. Autonomic computing is about freeing IT professionals to focus on high-value tasks by making technology work smarter; this means letting computing systems and infrastructure take care of managing themselves. Ultimately, it means writing business policies and goals and letting the infrastructure configure, heal and optimize itself according to those policies while protecting itself from malicious activities. Self-managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives.


KEY ELEMENTS OF AUTONOMIC COMPUTING

The elements of autonomic computing can be summarized in eight key points, as follows (a sketch of how these capabilities might be expressed as a programming interface appears after the list):

1. Knows itself: An autonomic computing system must be capable of taking continual stock of itself, its connections, devices and resources, and must know which of these are to be shared or protected.
2. Configures itself: An autonomic computing system must be able to configure and reconfigure itself dynamically as needs dictate.
3. Optimizes itself: An autonomic computing system must constantly search for ways to optimize performance.
4. Heals itself: An autonomic computing system must perform self-healing by redistributing resources and reconfiguring itself to work around any dysfunctional elements.
5. Protects itself: An autonomic computing system must be able to monitor security and protect itself from attack.
6. Adapts itself: An autonomic computing system must be able to recognize and adapt to the needs of coexisting systems within its environment.
7. Opens itself: It must work with shared technologies; proprietary solutions are not compatible with the autonomic computing ideology.
8. Hides itself: An autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden.
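To make these capabilities concrete, the following is a minimal, hypothetical Python sketch (not part of any IBM specification) that expresses the eight self-* elements as an abstract interface a self-managing component might implement. All class and method names here are illustrative assumptions.

from abc import ABC, abstractmethod


class AutonomicSystem(ABC):
    """Hypothetical interface listing the eight self-* capabilities."""

    @abstractmethod
    def know_itself(self) -> dict:
        """Return an inventory of the system's resources and connections."""

    @abstractmethod
    def configure_itself(self, environment: dict) -> None:
        """Reconfigure dynamically as needs and environment change."""

    @abstractmethod
    def optimize_itself(self) -> None:
        """Search for ways to improve performance and efficiency."""

    @abstractmethod
    def heal_itself(self, failed_component: str) -> None:
        """Work around or repair a dysfunctional element."""

    @abstractmethod
    def protect_itself(self, threat: dict) -> None:
        """Respond to a detected security threat."""

    @abstractmethod
    def adapt_itself(self, neighbours: list) -> None:
        """Adapt to coexisting systems in the environment."""

    @abstractmethod
    def open_itself(self) -> list:
        """Expose the open, shared interfaces the system supports."""

    @abstractmethod
    def hide_itself(self) -> None:
        """Anticipate needed resources while hiding internal complexity."""

A concrete autonomic component would subclass this interface and supply real behaviour for each method; the interface itself only fixes the vocabulary of capabilities.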


FUNDAMENTALS OF AUTONOMIC COMPUTING

In order to incorporate these characteristics in self-managing systems, future autonomic computing systems will have four fundamental features.

Fig.1: Autonomic Computing Attributes

Self-Configuring: Systems adapt automatically to dynamically changing environments. When hardware and software systems have the ability to define themselves on the fly, they are self-configuring. This aspect of self-management means that new features, software, and servers can be dynamically added to the enterprise infrastructure with no disruption of services. Systems must be designed to provide this aspect at a feature level, with capabilities such as plug-and-play devices, configuration setup wizards, and wireless server management. These features will allow functions to be added dynamically to the enterprise infrastructure with minimum human intervention. Self-configuring includes not only the ability of each individual system to configure itself on the fly, but also the ability of systems within the enterprise to configure themselves into the e-business infrastructure of the enterprise.

Self-Healing: Systems discover, diagnose, and react to disruptions. For a system to be self-healing, it must be able to recover from a failed component by first detecting and isolating the failed component, taking it offline, fixing or isolating it, and reintroducing the fixed or replacement component into service without any apparent application disruption. Systems will need to predict problems and take actions to prevent failures from having an impact on applications. The self-healing objective must be to minimize all outages in order to keep enterprise applications up and available at all times. (A minimal code sketch of this detect-isolate-restart cycle appears after the four features.)

Self-Optimizing: Systems monitor and tune resources automatically. Self-optimization requires hardware and software systems to efficiently maximize resource utilization to meet end-user needs without human intervention. Some systems already include industry-leading technologies such as logical partitioning, dynamic workload management, and dynamic server clustering. These kinds of capabilities should be extended across multiple heterogeneous systems to provide a single collection of computing resources that can be managed by a logical workload manager across the enterprise. Resource allocation and workload management must allow dynamic redistribution of workloads to systems that have the necessary resources to meet workload requirements. Similarly, storage, databases, networks, and other resources must be continually tuned to enable efficient operation even in unpredictable environments.

Self-Protecting: Systems anticipate, detect, identify, and protect themselves from attacks from anywhere. Self-protecting systems must have the ability to define and manage user access to all computing resources within the enterprise, to protect against unauthorized resource access, to detect intrusions and report and prevent these activities as they occur, and to provide backup and recovery capabilities that are as secure as the original resource management systems.
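As an illustration of the self-healing cycle described above (detect, isolate, fix or replace, reintroduce), here is a minimal, hypothetical Python sketch. The Component class, its health check, and the restart logic are illustrative assumptions, not part of any IBM product.

import time


class Component:
    """Hypothetical managed component with a simple health check."""

    def __init__(self, name):
        self.name = name
        self.online = True
        self.healthy = True

    def is_healthy(self):
        return self.healthy

    def take_offline(self):
        self.online = False

    def restart(self):
        # Fix or replace the component; here we simply reset its state.
        self.healthy = True

    def reintroduce(self):
        self.online = True


def self_healing_loop(components, interval=5.0, cycles=3):
    """Detect failed components, isolate them, repair them, and return them to service."""
    for _ in range(cycles):
        for component in components:
            if component.online and not component.is_healthy():
                component.take_offline()   # isolate the failed component
                component.restart()        # fix or replace it
                component.reintroduce()    # reintroduce it into service
                print(f"healed {component.name} without apparent application disruption")
        time.sleep(interval)


if __name__ == "__main__":
    db = Component("database")
    db.healthy = False                     # simulate a failure
    self_healing_loop([db], interval=0.1, cycles=1)

In a real system the health check would come from monitoring data and the repair action from a recovery plan; the loop structure, however, is the essence of self-healing.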


AUTONOMIC COMPUTING AND CURRENT COMPUTING SYSTEM

IBM frequently cites four aspects of self-management, which the following table summarizes. Early autonomic systems may treat these aspects as distinct, with different product teams creating solutions that address each one separately. Ultimately, these aspects will be emergent properties of a general architecture, and the distinctions will blur into a more general notion of self-maintenance. The four aspects of self-management (self-configuring, self-healing, self-optimizing and self-protecting) are compared here.

Self-configuration
  Current computing: Corporate data centers have multiple vendors and platforms. Installing, configuring, and integrating systems is time consuming and error prone.
  Autonomic computing: Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly.

Self-optimization
  Current computing: Systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release.
  Autonomic computing: Components and systems continually seek opportunities to improve their own performance and efficiency.

Self-healing
  Current computing: Problem determination in large, complex systems can take a team of programmers weeks.
  Autonomic computing: The system automatically detects, diagnoses and repairs localized software and hardware problems.

Self-protection
  Current computing: Detection of and recovery from attacks and cascading failures is manual.
  Autonomic computing: The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent system-wide failures.


AUTONOMIC COMPUTING ARCHITECTURE

The autonomic computing architecture concepts provide a mechanism for discussing, comparing and contrasting the approaches different vendors use to deliver self-managing attributes in an autonomic computing system. The autonomic computing architecture starts from the premise that implementing self-managing attributes involves an intelligent control loop. This loop collects information from the system, makes decisions, and then adjusts the system as necessary. An intelligent control loop can enable the system to do such things as:

- Self-configure, by installing software when it detects that software is missing
- Self-heal, by restarting a failed element
- Self-optimize, by adjusting the current workload when it observes an increase in capacity
- Self-protect, by taking resources offline if it detects an intrusion attempt

A control loop can be provided by a resource provider, which embeds the loop in the runtime environment for a particular resource. In this case, the control loop is configured through the manageability interface provided for that resource (for example, a hard drive). In some cases, the control loop may be hard-wired or hard-coded, so it is not visible through the manageability interface. Autonomic systems will be interactive collections of autonomic elements: individual system constituents that contain resources and deliver services to humans and to other autonomic elements. An autonomic element will typically consist of one or more managed elements coupled with a single autonomic manager that controls and represents them. At the core of an autonomic element is a control loop that integrates the manager with the managed element. In an autonomic environment, autonomic elements work together, communicating with each other and with high-level management tools. They regulate themselves and, sometimes, each other. They can proactively manage the system, while hiding the inherent complexity of these activities from end users and IT professionals.
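The following is a minimal, hypothetical Python sketch of such an intelligent control loop: it collects information from the system, makes decisions against simple high-level policies, and adjusts the system through the four example actions listed above. The DemoSystem class, its method names, and the observation keys are illustrative assumptions rather than part of IBM's architecture.

class DemoSystem:
    """Hypothetical stand-in for a manageable resource (illustrative only)."""

    def collect_metrics(self):
        return {"missing_software": ["monitoring-agent"],
                "failed_elements": ["web-server-2"],
                "load": 0.95,
                "intrusion_detected": False}

    def install(self, packages):
        print("self-configure: installing", packages)

    def restart(self, element):
        print("self-heal: restarting", element)

    def rebalance_workload(self):
        print("self-optimize: redistributing workload")

    def take_offline(self, resource):
        print("self-protect: taking", resource, "offline")


def control_loop(system, policies):
    """One pass of an intelligent control loop: collect, decide, adjust."""
    observations = system.collect_metrics()

    if observations.get("missing_software"):
        system.install(observations["missing_software"])          # self-configure
    for element in observations.get("failed_elements", []):
        system.restart(element)                                    # self-heal
    if observations.get("load", 0) > policies["max_load"]:
        system.rebalance_workload()                                # self-optimize
    if observations.get("intrusion_detected"):
        system.take_offline(observations["attacked_resource"])     # self-protect


if __name__ == "__main__":
    control_loop(DemoSystem(), policies={"max_load": 0.8})

In a resource-provider implementation, this loop would be embedded in the runtime environment of the resource and configured through its manageability interface rather than called directly.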


Another aspect of the autonomic computing architecture is shown in the diagram below. This portion of the architecture details the functions that can be provided for the control loops. The architecture organizes the control loops into two major elements: a managed element and an autonomic manager. A managed element is what the autonomic manager is controlling. An autonomic manager is a component that implements a control loop.

Fig.2: Autonomic Computing Architecture

Managed Element: The managed element is a controlled system component. The managed element is essentially equivalent to what is found in ordinary non-autonomic systems, although it can be adapted to enable the autonomic manager to monitor and control it. The managed element could be a hardware resource, such as storage, a CPU, or a printer, or a software resource, such as a database, a directory service, or a large legacy system. At the highest level, the managed element could be an e-utility, an application service, or even an individual business. The managed element is controlled through its sensors and effectors. The sensors provide mechanisms to collect information about the state and state transitions of an element. The effectors are mechanisms that change the state (configuration) of an element. The combination of sensors and effectors forms the manageability interface that is available to an autonomic manager. As shown in the figure above by the black lines connecting the elements on the sensor and effector sides of the diagram, the architecture encourages the idea that sensors and effectors are linked together. For example, a configuration change that occurs through the effectors should be reflected as a configuration change notification through the sensor interface.

Autonomic Manager: The autonomic manager is a component that implements the control loop, and it is what distinguishes the autonomic element from its non-autonomic counterpart. By monitoring the managed element and its external environment, and by constructing and executing plans based on an analysis of this information, the autonomic manager relieves humans of the responsibility of directly managing the managed element. The architecture dissects the loop into four parts that share knowledge. The monitor part provides the mechanisms that collect, aggregate, filter, manage and report details (metrics and topologies) collected from an element. The analyze part provides the mechanisms to correlate and model complex situations (time-series forecasting and queuing models, for example); these mechanisms allow the autonomic manager to learn about the IT environment and help predict future situations. The plan part provides the mechanisms to structure the actions needed to achieve goals and objectives; the planning mechanism uses policy information to guide its work. The execute part provides the mechanisms that control the execution of a plan, with considerations for on-the-fly updates.
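To illustrate how an autonomic manager might wrap a managed element through its sensors and effectors, the following is a minimal, hypothetical Python sketch of the monitor-analyze-plan-execute loop sharing a common knowledge store. The class names, the simple threshold analysis, and the single "increase capacity" action are illustrative assumptions, not IBM's actual implementation.

class ManagedElement:
    """Hypothetical resource exposing a sensor and an effector interface."""

    def __init__(self):
        self.capacity = 4

    def sensor_read_state(self):
        # Sensors collect information about the element's state.
        return {"utilization": 0.93, "capacity": self.capacity}

    def effector_set_capacity(self, capacity):
        # Effectors change the element's state (configuration).
        self.capacity = capacity


class AutonomicManager:
    """Monitor-analyze-plan-execute loop around a managed element."""

    def __init__(self, element, policy):
        self.element = element
        self.policy = policy      # high-level policy, e.g. target utilization
        self.knowledge = {}       # knowledge shared by the four parts

    def monitor(self):
        self.knowledge["state"] = self.element.sensor_read_state()

    def analyze(self):
        state = self.knowledge["state"]
        self.knowledge["overloaded"] = state["utilization"] > self.policy["max_utilization"]

    def plan(self):
        if self.knowledge.get("overloaded"):
            state = self.knowledge["state"]
            self.knowledge["plan"] = {"set_capacity": state["capacity"] + 1}
        else:
            self.knowledge["plan"] = None

    def execute(self):
        plan = self.knowledge.get("plan")
        if plan:
            self.element.effector_set_capacity(plan["set_capacity"])

    def run_once(self):
        self.monitor()
        self.analyze()
        self.plan()
        self.execute()


if __name__ == "__main__":
    element = ManagedElement()
    manager = AutonomicManager(element, policy={"max_utilization": 0.8})
    manager.run_once()
    print("capacity after one control-loop pass:", element.capacity)

Note how the only coupling between the manager and the element is the sensor/effector pair, which is exactly the manageability interface described above.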



NEED FOR AUTONOMIC COMPUTING

It has been observed that complexity is the business we are in, and complexity is what limits us. The computer industry has spent decades creating systems of marvelous and ever-increasing complexity, but today complexity itself is the problem. The spiraling cost of managing the increasing complexity of computing systems is becoming a significant inhibitor that threatens to undermine the future growth and societal benefits of information technology. Managing complex systems has grown too costly and prone to error. Administering such complex systems is too labor-intensive, and people working under such conditions are prone to make mistakes. It is now estimated that between one third and one half of a company's total IT budget is spent on preventing or recovering from crashes. Well-engineered autonomic functions targeted at improving and automating systems operations, installation, dependency management, and performance management can address many causes of the most frequent outages and reduce outages and downtime. A confluence of marketplace forces is driving the industry toward autonomic computing. An autonomic computing system must configure and reconfigure itself under varying and unpredictable conditions. It must perform something akin to healing: it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction. A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection. An autonomic computing system knows its environment and the context surrounding its activity, and acts accordingly. Perhaps most critical for the user, an autonomic computing system will anticipate the optimized resources needed to meet a user's information needs while keeping its complexity hidden.



BENEFITS

Autonomic computing was conceived to lessen the spiraling demands for skilled IT resources, to reduce complexity, and to drive computing into a new era that may better exploit its potential to support higher-order thinking and decision making. Immediate benefits will include reduced dependence on human intervention to maintain complex systems, accompanied by a substantial decrease in costs. Long-term benefits will allow individuals, organizations and businesses to collaborate on complex problem solving.

Short-term IT-related benefits:
- Simplified user experience through a more responsive, real-time system.
- Cost savings that scale to use: scaled power, storage and costs that optimize usage across both hardware and software.
- Full use of idle processing power, including home PCs, through networked systems.
- Natural language queries allow deeper and more accurate returns.
- Seamless access to multiple file types; open standards will allow users to pull data from all potential sources by re-formatting on the fly.
- Stability, high availability and a high-security system.
- Fewer system or network errors due to self-healing.
- Improved computational capacity.

Long-term / higher-order benefits:
- Realizing the vision of enablement by shifting available resources to higher-order business tasks.
- Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself.
- Constructing autonomic federated systems.
- Achieving end-to-end service level management.
- Accelerated implementation of new capabilities.
- Collaboration and global problem-solving: distributed computing allows for more immediate sharing of information and processing power to apply complex mathematics to solving problems.
- Massive simulation (weather, medical) and complex calculations such as protein folding, which require processors to run 24/7 for as long as a year at a time.



CHALLENGES

To create autonomic systems, researchers must address key challenges with varying levels of complexity. They are:

- System identity: Before a system can transact with other systems, it must know the extent of its own boundaries. How will we design our systems to define and redefine themselves in dynamic environments?
- Interface design: With a multitude of platforms running, how will we build consistent interfaces and points of control while allowing for a heterogeneous environment?
- Translating business policy into IT policy: The end result needs to be transparent to the user. How will we create human interfaces that remove complexity and allow users to interact naturally with IT systems?
- Systemic approach: Creating autonomic components is not enough. How can we unite a constellation of autonomic components into a federated system?
- Standards: The age of proprietary solutions is over. How can we design and support open standards that will work?
- Adaptive algorithms: New methods will be needed to equip our systems to deal with changing environments and transactions. How will we create adaptive algorithms that take previous system experience and use that information to improve the rules?
- Improving network-monitoring functions to protect security, detect potential threats, and achieve a level of decision-making that allows for the redirection of key activities or data.
- Smarter microprocessors that can detect errors and anticipate failures.



CONCLUSIONS

The simplest and most predictable tasks (system tasks that operate under well-understood principles, such as memory allocation, buffer pool management, and load balancing) with accurate sensing and well-understood actions are well suited for autonomic computing. Tasks that rely intrinsically on user interaction, or that critically depend on external world state, are often not well suited for autonomic computing. The time is right for the emergence of self-managed or autonomic systems. Over the past decade, we have come to expect that "plug-and-play" for Universal Serial Bus (USB) devices, such as memory sticks and cameras, simply works, even for technophobic users. Today, users demand and crave simplicity in computing solutions. With the advent of Web and grid service architectures, we are beginning to expect that an average user can provide Web services with high resiliency and high availability. The goal of building a system that is used by millions of people each day and administered by a half-time person, as articulated by Jim Gray of Microsoft Research, seems attainable with the notion of automatic updates. Thus, autonomic computing seems to be more than just a new middleware technology; in fact, it may be a solid solution for reining in the complexity problem. Historically, most software systems were not designed as self-managing systems, and retrofitting existing systems with self-management capabilities is a difficult problem. Even if autonomic computing technology is readily available and taught in computer science and engineering curricula, it will take another decade for autonomicity to proliferate in existing systems.



REFERENCES

Web references:
1. http://www.ibm.com/research/autonomic
2. http://en.wikipedia.org/wiki/Autonomic_Computing
3. http://autonomiccomputing.org

Book reference:
4. P. Horn, Autonomic Computing: IBM's Perspective on the State of Information Technology, IBM Corporation, 2001.

***
