
A Seminar Report
On

MULTICORE PROCESSORS
Submitted in partial fulfillment of the requirements for the degree of

Bachelor of Technology (B.Tech.)


In

Computer Science & Engineering

Submitted To:

Submitted By:

Computer Science & Engineering



Guru Nanak Institute of Technology, Mullana, Ambala


INTRODUCTION
Intel manufactured the first microprocessor, the 4-bit 4004, in the early 1970s; it was basically just a number-crunching machine. Shortly afterwards Intel developed the 8-bit 8008 and 8080, and Motorola followed suit with its 6800, comparable to Intel's 8080. The companies then fabricated 16-bit microprocessors: Motorola had its 68000, and Intel the 8086 and 8088; the Intel pair found its way into the first consumer PCs and became the basis for Intel's 32-bit 80386 and, later, the popular Pentium line. [18, 19] Each generation of processors grew smaller and faster, but also dissipated more heat and consumed more power.

The Need for Multicore


Due to advances in circuit technology and performance limitations in wide-issue, superspeculative processors, chip multiprocessors (CMPs), or multi-core technology, have become the mainstream in CPU design. Scaling up processor frequency had run its course by the earlier part of this decade; computer architects needed a new approach to improve performance. Adding an additional processing core to the same chip would, in theory, double performance and dissipate less heat, though in practice each core runs slower than the fastest single-core processor. In September 2005 the IEE Review noted that power consumption increases by 60% with every 400 MHz rise in clock speed, "but the dual-core approach means you can get a significant boost in performance without the need to run at ruinous clock rates." Multicore is not a new concept; the idea has been used in embedded systems and for specialized applications for some time, but the technology has recently become mainstream, with Intel and Advanced Micro Devices (AMD) introducing many commercially available multicore chips. In contrast to the commercially available two- and four-core machines of 2008, some experts believe that by 2017 embedded processors could sport 4,096 cores, server CPUs might have 512 cores, and desktop chips could use 128 cores. [2] This rate of growth is astounding considering that desktop chips are only now on the cusp of using four cores and a single core had been the norm for the previous 30 years.
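The power argument above can be made concrete with the usual first-order model of dynamic power. This is only a rough sketch; the voltage and frequency figures below are illustrative assumptions, not measurements from any particular chip:

P_{dyn} \approx \alpha \, C \, V^{2} \, f

where \alpha is the switching activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Two cores clocked at f/2 offer roughly the same peak throughput as one core at f; if the lower frequency also allows the supply voltage to drop from, say, 1.0 V to 0.8 V, the total dynamic power scales as

2 \times (0.8 / 1.0)^{2} \times \tfrac{1}{2} \approx 0.64,

that is, roughly a third less power for comparable peak throughput. This trade-off is exactly what the dual-core approach exploits.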

Multicore Basics

The following is not specific to any one multicore design but rather gives a basic overview of multicore architecture. Although manufacturers' designs differ from one another, all multicore architectures must address certain common aspects. The basic configuration of a microprocessor is shown in Figure 2. Closest to the processor is Level 1 (L1) cache; this is very fast memory used to store data the processor uses frequently. Level 2 (L2) cache is just off-chip, slower than L1 cache but still much faster than main memory; L2 cache is larger than L1 cache and is used for the same purpose. Main memory is much larger and slower than cache and is used, for example, to store a file currently being edited in Microsoft Word. Most systems have between 1 GB and 4 GB of main memory, compared to approximately 32 KB of L1 and 2 MB of L2 cache. Finally, when data is not located in cache or main memory, the system must retrieve it from the hard disk, which takes orders of magnitude more time than reading from the memory system.

If we set two cores side by side, one can see that a method of communication between the cores, and to main memory, is necessary. This is usually accomplished either with a single communication bus or with an interconnection network. The bus approach is used with a shared memory model, whereas the interconnection network approach is used with a distributed memory model. Beyond approximately 32 cores the bus becomes overloaded with processing, communication, and contention, which diminishes performance; a communication bus therefore has limited scalability.

Multicore processors seem to answer the deficiencies of single-core processors by increasing bandwidth while decreasing power consumption. Table 1 shows a comparison of a single-core and a multicore (in this case eight-core) processor used by the Packaging Research Center at Georgia Tech. With the same source voltage and multiple cores run at a lower frequency, we see an almost tenfold increase in bandwidth while total power consumption is reduced by a factor of four.
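The impact of the cache hierarchy described above can be seen even in single-threaded code. The short C sketch below (the matrix size is an illustrative assumption) sums the same matrix twice: once in the order it is laid out in memory, which keeps the L1/L2 caches hot, and once across that layout, which forces far more trips to slow main memory. On typical hardware the second loop is several times slower even though both perform identical arithmetic.

/* Minimal sketch: why the cache hierarchy matters.
   Traversing a matrix row by row (the order it is laid out in memory)
   hits L1/L2 cache far more often than traversing it column by column,
   which forces frequent trips to the slower main memory. */
#include <stdio.h>
#include <time.h>

#define N 4096   /* illustrative size, large enough to exceed L2 cache */

static int a[N][N];

int main(void) {
    clock_t t0, t1;
    long long sum = 0;

    t0 = clock();
    for (int i = 0; i < N; i++)          /* row-major: cache-friendly */
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    t1 = clock();
    printf("row-major:    %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (int j = 0; j < N; j++)          /* column-major: cache-hostile */
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    t1 = clock();
    printf("column-major: %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return (int)(sum & 1);  /* keep the compiler from removing the loops */
}

The same principle guides data layout in multicore programs: keeping each core working on data that fits in its own cache avoids both main-memory stalls and unnecessary traffic on the shared bus.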

WORKING OF MULTICORE PROCESSORS


A dual-core or multicore processor differs from a single-core processor in that the single core must take the incoming data one piece at a time, process it, and move on to the next piece. A dual-core processor detects incoming data streams and determines whether they could be handled more quickly if both cores were working. If so, the work is split and the two cores crunch the numbers at the same time, in the best case nearly doubling throughput. While this is of limited use for applications that are not processor-intensive, it really shines for heavy calculations and even computer games. When new data is loaded into the cache, it is pulled from the hard drive; because the CPU can typically process data faster than the storage media it is pulling from, performance suffers. In a dual-core processor, each core pulls the data it needs, the data streams are processed at the same time, and once the results are calculated they are merged back into a single usable stream. This is not to be confused with a multiprocessor system, in which the processors reside on separate chips; because of the way the data is split and reintegrated, a multiprocessor system can be significantly faster than a dual-core setup.
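As a concrete illustration of this split-and-merge idea, the following C sketch divides an array between two threads, which the operating system is free to schedule on separate cores, and then combines the partial results. It is a minimal sketch assuming a POSIX system with pthreads; the array size and the per-element work are illustrative.

/* Minimal sketch of split-and-merge: the data is divided into two halves,
   each half is processed by its own thread (which the OS can schedule on
   a separate core), and the partial results are then combined.
   Assumes a POSIX system; compile with: gcc work.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double data[N];

struct chunk { int begin, end; double partial; };

static void *process(void *arg) {
    struct chunk *c = arg;
    double s = 0.0;
    for (int i = c->begin; i < c->end; i++)
        s += data[i] * data[i];          /* stand-in for real work */
    c->partial = s;
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = i * 0.001;

    struct chunk half[2] = { {0, N / 2, 0.0}, {N / 2, N, 0.0} };
    pthread_t t[2];

    for (int k = 0; k < 2; k++)               /* split the data stream */
        pthread_create(&t[k], NULL, process, &half[k]);
    for (int k = 0; k < 2; k++)               /* wait for both cores */
        pthread_join(t[k], NULL);

    double total = half[0].partial + half[1].partial;   /* merge */
    printf("result = %f\n", total);
    return 0;
}

Note that the speedup is only realized when the per-chunk work is large enough to outweigh the cost of creating and joining the threads.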

Advantages
The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (bus snooping) operations. Put simply, signals between the different CPUs travel shorter distances and therefore degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often. Assuming the die can physically fit into the package, a multi-core CPU design requires much less printed circuit board (PCB) space than a multi-chip SMP design. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the reduced power required to drive signals external to the chip. Furthermore, the cores share some circuitry, such as the L2 cache and the interface to the front-side bus (FSB). In terms of competing uses for the available silicon die area, a multi-core design can reuse proven CPU core library designs and produce a product with a lower risk of design error than devising a new, wider core design, and simply adding more cache suffers from diminishing returns.

Multi-core chips also allow higher performance at lower energy consumption, which can be a big factor in mobile devices that operate on batteries. Since each core in a multi-core chip is generally more energy-efficient, the chip as a whole can deliver more performance for less energy than a single large monolithic core. However, the challenge of writing parallel code can offset this benefit. [5]

Disadvantages
Maximizing the utilization of the computing resources provided by multi-core processors requires adjustments both to operating system (OS) support and to existing application software. The ability of multi-core processors to increase application performance also depends on the use of multiple threads within applications. The situation is improving: for example, Valve Corporation's Source engine offers multi-core support,[6][7] and Crytek has developed similar technologies for CryEngine 2, which powers its game Crysis. Emergent Game Technologies' Gamebryo engine includes its Floodgate technology,[8] which simplifies multi-core development across game platforms. In addition, Apple Inc.'s OS X, starting with Mac OS X Snow Leopard, and iOS, starting with iOS 4, have a built-in multicore facility called Grand Central Dispatch.

Integrating more cores on a chip drives production yields down, and multi-core chips are more difficult to manage thermally than lower-density single-core designs. Intel has partially countered the yield problem by building its quad-core designs from two dual-core dies combined in a single package, so that any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work to yield a quad-core part. From an architectural point of view, single-CPU designs may ultimately make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture carries some risk of obsolescence.

Finally, raw processing power is not the only constraint on system performance: two cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. It has been claimed that if a single core is close to being memory-bandwidth limited, going dual-core might give a 30% to 70% improvement, while if memory bandwidth is not a problem a 90% improvement can be expected; Amdahl's law makes such blanket figures dubious. It would even be possible for an application that used two CPUs to run faster on one dual-core chip if communication between the CPUs were the limiting factor, which would count as more than 100% improvement.
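Amdahl's law, mentioned above, bounds the speedup obtainable from N cores when only a fraction p of a program's run time can be parallelized (the values of p below are illustrative assumptions, not measurements):

S(N) = \frac{1}{(1 - p) + p/N}

For a dual-core chip (N = 2), a program that is 90% parallelizable (p = 0.9) gives S(2) = 1 / (0.1 + 0.45) \approx 1.82, roughly an 80% improvement rather than a full doubling. If memory stalls and serial sections leave only half of the run time parallelizable (p = 0.5), the speedup drops to 1 / (0.5 + 0.25) \approx 1.33, about 33%. This is why the 30%-to-70% and 90% figures quoted above should be read as rough, workload-dependent estimates.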

APPLICATIONS

Among the advantages of multi-core CPUs is the diversity of applications they can address, according to Casey Weltzin, product manager for software at National Instruments Inc. Understandably, initial industrial usage has been in higher-end automation systems. While multi-core CPUs are now an option in the industrial and embedded spaces, over the next 10 years they will increasingly become ubiquitous as the technology continues to trickle down to lower-end, power-constrained devices, said Weltzin.

The wide potential of multi-core processors (MCPs) stems from their ability to perform multiple tasks in parallel. This ranges from pipelining many chunks of data simultaneously for high system throughput to enabling more complex control algorithms, explained Weltzin. Any application that can be divided into parallel pieces is suitable for multi-core processors, he added.
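The pipelining idea Weltzin describes can be sketched with a classic producer/consumer pair in C: one thread acquires chunks of data while a second thread processes them, so the two stages overlap on separate cores. This is a minimal sketch assuming a POSIX system with pthreads; the queue size, item count, and "processing" step are illustrative assumptions.

/* Minimal sketch of pipelining chunks of data: one thread produces items
   while a second thread consumes and processes them, so acquisition and
   processing overlap on two cores.
   Assumes a POSIX system; compile with: gcc pipeline.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define QSIZE 8
#define ITEMS 100

static int queue[QSIZE];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == QSIZE)                 /* wait for a free slot */
            pthread_cond_wait(&not_full, &lock);
        queue[tail] = i;                       /* "acquire" one data chunk */
        tail = (tail + 1) % QSIZE;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg) {
    long sum = 0;
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                     /* wait for data */
            pthread_cond_wait(&not_empty, &lock);
        int item = queue[head];
        head = (head + 1) % QSIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        sum += item;                           /* "process" the chunk */
    }
    printf("processed %d items, checksum %ld\n", ITEMS, sum);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

A real industrial pipeline would add more stages (for example acquisition, filtering, analysis, and logging), each mapped to its own core, but the synchronization pattern stays the same.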

Industrial applications of MCPs have included machine vision, CAD systems, CNC machines, automated test systems, and motion control. In the last of these areas in particular, there is an interesting commonality between the parallelism of multi-axis motion and that of multi-core processing. Although numerous applications can benefit from using multi-core processors, heavily regulated industries or application areas may see slower adoption, Weltzin noted. These are mainly medical and safety-related applications requiring extensive certifications. In these cases, not only will software need to be modified to run on multi-core CPUs, but the modified software may also need to undergo intense scrutiny, which presents an additional barrier to overcome, he said.

For process safety-related applications, at least, a somewhat different view of MCPs came from Alexandra Dopplinger, global segment lead for Factory Automation & Drives at Freescale Semiconductor Inc. Functions such as safety controllers often run on separate dedicated processors, she explained. However, operating systems such as the QNX Neutrino RTOS can partition and prioritize deterministic, real-time control algorithms so that they run safely alongside an industrial network protocol and a human-machine interface on a single core or on multiple cores, without fear of interrupt or resource contention, she said. (Neutrino RTOS is a product of QNX Software Systems, a subsidiary of Research In Motion Ltd.) Dopplinger further mentioned that several new MCPs are certifiable for use in Safety Integrity Level 3 (SIL-3) applications. For all multi-core processor solutions, users should carefully select development tools that make the system easier to debug and certify, she cautioned. A demo and a training module, available from Freescale Semiconductor, provide further information about safety-related applications: the demo (at http://bit.ly/fhaXrP) describes automotive functional safety using Freescale MPC564xL 32-bit dual-core microprocessors, while the training module (at http://bit.ly/fSKQ6R) addresses the functional safety capabilities of MPC5643L processors per International Electrotechnical Commission standard IEC 61508 (SIL-3).

CONCLUSION
Before multicore processors, the performance increase from generation to generation was easy to see: an increase in clock frequency. This model broke down when higher frequencies pushed power consumption and heat dissipation to detrimental levels. Adding multiple cores to a processor allowed designs to run at lower frequencies, but introduced interesting new problems. Multicore processors are architected to adhere to reasonable power consumption, heat dissipation, and cache coherence protocols. However, many issues remain unsolved. To use a multicore processor at full capacity, the applications run on the system must be multithreaded, and relatively few applications (and, more importantly, few programmers with the necessary know-how) are written with any level of parallelism. The memory systems and interconnection networks also need improvement. Finally, it is still unclear whether homogeneous or heterogeneous cores are more efficient. With so many different designs, and the potential for even more, it is nearly impossible to set any standard for cache coherence, interconnections, and layout. The greatest difficulty remains in teaching parallel programming techniques, since most programmers are versed in sequential programming, and in redesigning current applications to run optimally on a multicore system. Multicore processors are an important innovation in the microprocessor timeline. With skilled programmers capable of writing parallelized applications, multicore efficiency could be increased dramatically. In the years to come we will see many improvements to these systems, providing faster programs and a better computing experience.

REFERENCES

The list of references is as follows:


> www.controleng.com
> www.wikipedia.com
> www.arcobel.com
> www.intel.com
