
AUTOMATIC CIRCUIT GENERATOR

A
MAJOR PROJECT
Submitted for the partial fulfillment of the requirement for the award of Degree of

BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE & ENGINEERING

Submitted by:

1. Hariom Dejwar
2. Murlidhar Daharwal
3. Sachin

Guided By:

Mrs. Anjana Deen
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
UNIVERSITY INSTITUTE OF TECHNOLOGY
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA
BHOPAL-462036
MAY 2010

UNIVERSITY INSTITUTE OF TECHNOLOGY


RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA, BHOPAL

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


CERTIFICATE

This is to certify that Hariom Dejwar, Murlidhar Daharwal & Sachin of B.E. final year,
Computer Science & Engineering, have completed their major project AUTOMATIC CIRCUIT
GENERATOR during the academic year 2009-10 under my guidance and supervision.

I approve the project for submission in partial fulfillment of the requirement for
the award of the degree in Computer Science & Engineering.

Guide:
Prof. Anjana Deen
Prof. Piyush Shukla

Prof. Sanjay Silakari
Head of Department, CSE

Prof. V. K. Sethi
Director, UIT, RGPV

DECLARATION BY CANDIDATE

We hereby declare that the work presented in the major project AUTOMATIC CIRCUIT
GENERATOR, submitted in partial fulfillment of the requirement for the award of the
Bachelor's degree in Computer Science & Engineering and carried out at University
Institute of Technology, RGPV, Bhopal, is an authentic record of our own work, carried
out under the guidance of Mrs. Anjana Deen and Mr. Piyush Shukla, Department of
Computer Science & Engineering, UIT, RGPV, Bhopal.
The matter written in this project has not been submitted by us for the award
of any other degree.
Hariom Dejwar
Murlidhar Daharwal
Sachin

ACKNOWLEDGEMENT
ABSTRACT
Table of Contents

S.No   Topic                                 Page No.
1.     Introduction                          08
2.     Literature Survey and Related Work    12
3.     Problem Description                   18
4.     Proposed Work                         29
5.     Design and Development                32
6.     Implementation                        41
7.     Results                               51
8.     Conclusion & Future Work              63
       Bibliography

Acknowledgement
We would like to express our deepest appreciation to our advisors, Mrs. Anjana
Deen and Mr. Piyush Shukla, for their constant guidance, encouragement and
support in helping us to complete this work. We will cherish the experience of
learning and working with them forever.
In addition, we would like to thank Prof. Sanjay Silakari (Head of Department,
CSE) and Mr. Manish Ahirwar (faculty, Computer Science Department) for their
help and guidance on our project.
We would also like to thank our lab assistant, Mr. Subodh Srivastava, for
providing us all the necessary lab equipment. We would like to thank our friends,
who have made our stay at the institute enjoyable and who have made every aspect
of the project exciting and interesting to us. We would also like to thank our
family members for supporting us.

Abstract
With ever shrinking geometries, growing metal density and increasing clock rate on chips, delay
testing is becoming a necessity in industry to maintain test quality for speed-related failures. The
purpose of delay testing is to verify that the circuit operates correctly at the rated speed.
However, functional tests for delay defects are usually unacceptable for large scale designs due
to the prohibitive cost of functional test patterns and the difficulty in achieving very high fault
coverage. Scan based delay testing, which could ensure a high delay fault coverage at reasonable
development cost, provides a good alternative to the at-speed functional test. This dissertation
addresses several key challenges in scan-based delay testing and develops efficient Automatic
Test Pattern Generation (ATPG) and Design for testability (DFT) algorithms for delay testing. In
the dissertation, two algorithms are first proposed for computing and applying transition test
patterns using stuck-at test vectors, thus avoiding the need for a transition fault test generator.
The experimental results show that we can improve both test data volume and test application
time by 46.5% over a commercial transition ATPG tool. Secondly, we propose a hybrid scan-based delay testing technique for a compact, high-fault-coverage test set, which combines the
advantages of both the skewed-load and broadside test application methods. On average,
about 4.5% improvement in fault coverage is obtained by the hybrid approach over the broadside approach, with very little hardware overhead. Thirdly, we propose and develop a constrained
ATPG algorithm for scan-based delay testing, which addresses the overtesting problem due to the
possible detection of functionally untestable faults in scan-based testing. The experimental
results show that our method efficiently generates a test set for functionally testable transition
faults and reduces the yield loss due to overtesting of functionally untestable transition faults.
Finally, a new approach on identifying functionally untestable transition faults in non-scan
sequential circuits is presented. We formulate a new dominance relationship for transition faults
and use it to help identify more untestable transition faults on top of a fault-independent method
based on static implications. The experimental results for ISCAS89 sequential benchmark
circuits show that our approach can identify many more functionally untestable transition faults
than previously reported.


Introduction
The main objective of traditional test development has been the attainment of high stuck-at fault
coverage. However, some random defects do not affect a circuit's operation at
low speed but may cause the circuit to malfunction at rated speed. This kind of defect is called a
delay defect. With ever shrinking geometries, growing metal density and increasing clock rate of
chips, delay testing is gaining more and more industry attention to maintain test quality for
speed-related failures. The purpose of a delay test is to verify that the circuit operates correctly at
a desired clock speed. Although application of stuck-at fault tests can detect some delay defects,
it is no longer sufficient to test the circuit for the stuck-at faults alone. Therefore, delay testing is
becoming a necessity for today's IC manufacturing test. In the past, testing a circuit's
performance was typically accomplished with functional test patterns. However, developing
functional test patterns that attain satisfactory fault coverage is unacceptable for large scale
designs due to the prohibitive development cost. Even if functional test patterns that can achieve
high fault coverage are available, applying these test patterns at-speed for high speed chips
requires very stringent timing accuracy, which must be provided by very expensive automatic
test equipment (ATEs). The scan-based delay testing where test patterns are generated by an
automatic test pattern generator (ATPG) on designs that involve scan chains is increasingly used
as a cost-efficient alternative to the at-speed functional pattern approach to test large scale chips
for performance-related failures. Design-for-testability (DFT)-focused ATEs, which are designed
and developed to lower ATE cost by considering widely used DFT features of circuits under test
(CUTs), such as full and partial scan, are emerging as a strong trend in the test industry. Several
delay fault models have been developed, such as the transition delay fault, gate delay fault, path
delay fault, and segment delay fault models. A transition fault at a node assumes a delay at that
node large enough that a transition launched at the node will not reach a latch or primary output
within the clock period. The path delay fault model assumes a small delay at each gate. It models the cumulative effect of gate delays
along a specific path, from a primary input to a primary output. If the cumulative delay exceeds
the slack for the path, then the chip fails. Segment delay fault targets path segments instead of
complete paths. Among these fault models, the transition delay fault model is most widely used
in industry for its simplicity. ATPGs and fault simulators that are developed for stuck-at faults
can be reused for transition delay faults with minor modifications. Unlike the path delay fault
model where the number of target faults is often exponential, the number of transition delay
faults is linear with the number of circuit lines. This eliminates the need for critical path analysis
and identification procedures, which are necessary for the path delay fault model. The gate delay
model is similar to the transition delay fault model in that the delay fault is lumped at one gate in
the CUT. However, unlike the transition delay fault model, the gate delay model takes the size of
the delay fault into account. The segment delay fault model is a
trade-off between the path delay fault and transition delay fault models. Detection of a delay
fault normally requires the application of a pair of test vectors: the first vector, called the
initialization vector, initializes the targeted circuit line to a desired value, and the second
vector, called the launch vector, launches a transition at the circuit line and propagates the fault
effect to primary output(s) and/or scan flip-flop(s).
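To make the two-vector mechanism concrete, the following VHDL testbench fragment sketches how an initialization vector and a launch vector could be applied and captured at speed. All names and values (din, V1, V2, the 10 ns clock period) are illustrative assumptions, not taken from the project.

library ieee;
use ieee.std_logic_1164.all;

entity tb_two_vector is
end entity tb_two_vector;

architecture sim of tb_two_vector is
  signal clk : std_logic := '0';
  signal din : std_logic_vector(3 downto 0);
  constant V1 : std_logic_vector(3 downto 0) := "0101";  -- initialization vector
  constant V2 : std_logic_vector(3 downto 0) := "0111";  -- launch vector: rising transition on bit 1
begin
  clk <= not clk after 5 ns;            -- assumed rated clock (100 MHz)

  stimulus : process
  begin
    din <= V1;                          -- initialize the targeted line to the desired value
    wait for 50 ns;                     -- allow the circuit to settle (not time-critical)
    wait until rising_edge(clk);
    din <= V2;                          -- launch the transition on the targeted line
    wait until rising_edge(clk);        -- at-speed capture edge: the response is sampled here
    wait;                               -- single pattern only; a real test set has many pairs
  end process;
end architecture sim;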

In order to improve the quality of microprocessor tests, the use of instruction sets for
testing is indispensable. We present a method for the automatic generation of a functional test
pattern, formed from combinations of instructions, that efficiently improves fault coverage. With
this method, a test pattern is first generated to exercise all S instruction mnemonics. Then, for
the faults that remain undetected by that pattern, L candidate sets of K instructions each are
drawn from the S instructions, and the set that most efficiently improves the fault coverage is
selected. By repeating this procedure, high fault coverage can be obtained with a short test
pattern. The effectiveness of the method was demonstrated by experiments with software built
on this approach.
Different techniques for solving the problem of generating tests for structural faults in
sequential circuits have been proposed over the years. At the gate level, deterministic and
simulation-based algorithms have been proposed; however, the execution times are extremely
long, and for medium and large circuits mostly rather low fault coverage has been achieved.
Recently, promising results based on software testing techniques combined with low-level tests
have been published. That approach offers high fault coverage for medium-sized benchmark
circuits, but test generation still takes a relatively long time. Furthermore, the authors have not
developed any formalized method for generating the high-level test frames, and the time needed
to generate the frames has not been taken into account in the experiments. Trivial finite state
machines containing only a single control state have been implemented for some of the larger
example circuits. At present, hierarchical test generation is the fastest and most effective means
for sequential circuit testing. Here, designs described on different abstraction levels, usually on
architectural- and gate-level, are used. The method cannot be applied to designs that do not have
appropriate modularity, or where gate-level implementation for the modules is not known.
However, as a number of commercial high-level synthesis tools have emerged, the input
description should not be a major issue. Previous works in the area of hierarchical testing have
the following main shortcomings:
1. Only the faults in the datapath functional units (FUs) are targeted. This usually results in low
fault coverage for the control part as well as for the multiplexers, registers and fanout buses of the
datapath.
2. A complex set of symbols and constraints is used during high-level path activation. This feature
makes the symbolic path activation process very compute-intensive. For more complex circuits it
can also cause high-level tests for many FUs to fail due to the strict conditions.
The aim of the approach proposed here is to overcome the above-mentioned
shortcomings. Unlike known methods, both the control unit and the datapath are handled in a
uniform manner. A restricted set of symbols is used during path activation. This makes it possible
to simplify the test generation algorithm while still maintaining a good correspondence between
high-level assessments and gate-level fault coverage. The discussion is organized as follows:
first, a short overview of representing circuit architecture by Decision Diagram (DD) models is
given; then the test generation algorithm is introduced; finally, experimental results and
conclusions are presented.



Literature Survey & Related Work


Several delay fault models have been developed for delay defects: transition delay fault,
gate delay fault, path delay fault, and segment delay fault models. A transition fault at a node
assumes a delay at that node large enough that a transition launched at the node will not reach a
latch or primary output within the clock period. The path delay fault model assumes a small delay
at each gate. It models the cumulative effect of gate delays along a specific path, from a primary
input to a primary output.
If the cumulative delay exceeds the slack for the path, then the chip fails. Segment delay fault
targets path segments instead of complete paths. Among these fault models, the transition delay
fault model is most widely used in industry for its simplicity. ATPGs and fault simulators that are
developed for stuck-at faults can be reused for transition delay faults with minor modifications.
Unlike the path delay fault model where the number of target faults is often exponential, the
number of transition delay faults is linear in the number of circuit lines. This eliminates the need
for critical path analysis and identification procedures, which are necessary for the path delay
fault model.

The gate delay fault model is similar to the transition delay fault model in that the delay fault is
lumped at one gate in the CUT. However, unlike the transition delay fault model, the gate delay
model takes the size of the delay fault into account. The segment delay fault model is a trade-off
between the path delay fault and transition delay fault models.

Functional Delay Testing


In the past, testing a circuit's performance was typically accomplished with functional test
patterns (e.g. testing a microprocessor with instruction sequences), in which the input signals to
the CUT are determined by its functionality. This results in a much smaller set of vector pairs
applicable for delay testing. Also, after the application of certain test sets, some
registers/flip-flops may not be enabled in the immediately following cycle, and thus delay fault
effects propagated to them cannot be latched and will be lost. Therefore, developing functional test
patterns that attain satisfactory fault coverage is unacceptable for large scale designs due to the
prohibitive development cost. Even if functional test patterns that can achieve high fault
coverage are available, applying these test patterns at-speed for high speed chips requires very
stringent timing accuracy, which can only be provided by very expensive automatic test equipment
(ATE). Scan-based delay testing, where test patterns are generated by an automatic test
pattern generator (ATPG) on designs that involve scan chains is increasingly used as a cost
efficient alternative to the at-speed functional pattern approach to test large scale chips for
performance-related failures. Design-for-testability (DFT)-focused ATEs, which are designed
and developed to lower ATE cost by considering widely used DFT features of circuits under test
(CUTs) such as full and partial scan, are emerging as a strong trend in the test industry.


Scan-based Delay Testing


Traditionally, three different approaches have been used to apply two-vector tests to standard
scan designs. They differ in how the second vector of each vector pair is stored and applied.
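Scan-based testing relies on scan flip-flops that can be switched between a functional (capture) mode and a shift mode. As a point of reference for the discussion that follows, a behavioural VHDL sketch of a simple mux-D scan cell is given below; the port names are illustrative assumptions only, and enhanced-scan additionally assumes hold-scan cells that can store two bits.

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative mux-D scan cell (assumed names): scan_en = '1' selects shift mode,
-- scan_en = '0' selects normal/capture mode.  In a scan chain, q of one cell
-- drives scan_in of the next cell.
entity scan_ff is
  port (clk, scan_en, scan_in, d : in  std_logic;
        q                        : out std_logic);
end entity scan_ff;

architecture rtl of scan_ff is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if scan_en = '1' then
        q <= scan_in;   -- shift mode: take the value from the previous cell in the chain
      else
        q <= d;         -- normal mode: capture the combinational response
      end if;
    end if;
  end process;
end architecture rtl;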

Enhanced-scan
In the first approach, enhanced scan, both vectors (V1, V2) are stored in the tester scan memory.
The first scan shift loads V1 into the scan chain; it is then applied to the circuit under test to
initialize it. Next, V2 is scanned in, followed by an apply and subsequently a capture of the
response. It is assumed that shifting in V2 does not destroy the initialization established by V1;
therefore enhanced-scan transition testing assumes a hold-scan design. Enhanced-scan transition
testing has two primary advantages: coverage and test data volume. Since enhanced scan allows
any arbitrary vector pair to be applied to the combinational part of a sequential circuit, complete
fault coverage can be attained. Tester memory requirement is also important, and considerable
attention is being paid to reducing the tester memory requirement for s@ tests. The problem is
far worse for transition tests, as the following data shows: it was
reported that for skewed load transition tests for an ASIC, the s@ vector memory requirement
was 8.51M versus 50.42M for transition test. This implies an increase of a factor of 5.9. The
downside of using enhanced-scan transition test is that special scan design, viz. hold-scan that
can hold two bits, is required. This may lead to higher area overhead, which may prevent its
widespread use in ASICs. However, in microprocessors and other high-performance circuits that
require custom design, such cells are used for other reasons. In custom designs, where the circuit
is often not fully decoded, hold-scan cells are used to prevent contention in the data being shifted,
as well as to prevent excessive power dissipation in the circuit during the scan shift phase.
Furthermore, if hold-scan cells are used, the failing parts in which only the scan logic failed can
often be retrieved, thus enhancing, to some extent, the diagnostic capability associated with scan
DFT. Therefore, for such designs enhanced-scan transition tests are preferred. This is our
motivation for investigating good ATPG techniques for enhanced-scan transition tests. Note that
the two vectors (V1, V2) are independent of each other. However, in the next two
approaches (skewed-load and broadside), the second vector is derived from the first vector.

Skewed-load
In the second approach, referred to as the skewed-load [SP93] or launch-from-shift approach,
the initialization vector of a test vector pair is first loaded into the scan chain by n consecutive
scan shift operations, where n is the number of scan flip-flops in the scan chain, in the same
fashion as a stuck-at test vector is loaded. The last shift cycle, in which the test vector is fully
loaded into the scan chain of the CUT, is referred to as the initialization cycle. The second vector
is obtained by shifting the first (initialization) vector, already loaded in the scan chain,
by one more scan flip-flop and scanning a new value into the scan chain input. Note that the
scan enable signal stays at logic high during the launch cycle. At the next clock cycle (capture
cycle), the scan enable signal switches to logic low and the scan flip-flops in the scan chain are
configured in their normal mode to capture the response to the scanned in test vector. Since the
capture clock is applied at full system clock speed after the launch clock, the scan enable signal,
which typically drives all scan flip-flops in the CUT, should also switch within the full system
clock cycle. This requires the scan enable signal to be driven by a sophisticated buffer tree or
strong clock buffer. Such a design requirement is often too costly to meet, and meeting the strict
timing required for the scan enable signal may lengthen design time. Since the second vector of
each pair is obtained by shifting the first vector by one more scan flip-flop, for a given first vector
there are only two possible second vectors, and they differ only in the value of the first scan
flip-flop, whose scan input is connected to the scan chain input. This shift dependency restricts
the number of possible test vector pairs to 2^(n+1) in a standard scan environment, where n is the
number of scan flip-flops in the scan chain. If a transition delay fault can only be detected with a
combination of initialization and launch values at the state inputs that violates this shift
relationship, then that fault is untestable by the skewed-load approach (assuming the scan chain is
constructed using only the non-inverting outputs of the scan flip-flops).
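Assuming the flip-flop nearest the scan input holds bit 0 of a descending (n-1 downto 0) vector, the shift dependency described above can be written as a small VHDL function. This is only an illustrative sketch of the relationship, not part of any tool or of the project code.

library ieee;
use ieee.std_logic_1164.all;

package skewed_load_pkg is
  -- v1 is the initialization vector; index 0 is assumed to be the flip-flop
  -- closest to the scan chain input, and the range is assumed descending.
  function skewed_load_launch (v1 : std_logic_vector; new_scan_bit : std_logic)
    return std_logic_vector;
end skewed_load_pkg;

package body skewed_load_pkg is
  function skewed_load_launch (v1 : std_logic_vector; new_scan_bit : std_logic)
    return std_logic_vector is
  begin
    -- One extra shift: every flip-flop takes its neighbour's value, and the
    -- cell nearest the scan input takes the freshly scanned-in bit.
    return v1(v1'high - 1 downto v1'low) & new_scan_bit;
  end function;
end skewed_load_pkg;

For example, with a four-bit chain holding V1 = "0101" (bit 3 down to bit 0) and a new scan-in bit of '1', the launch vector under this convention is "1011".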

Broadside
In the third approach, referred to as broad-side or launch-from-capture, the initialization vector
of a test vector pair is, as in the skewed-load approach, first loaded into the scan chain by n
consecutive scan shift operations, where n is the number of scan flip-flops in the scan chain, in
the same fashion as a stuck-at test vector is loaded. Then, the second vector is
obtained from the circuit response to the first vector. Hence, the scan flip-flops are configured
into the normal mode by lowering the enable signal before every launch. Since the launch clock
following an initialization clock need not be an at-speed clock, the scan enable signal does not
have to switch to logic low at full system clock speed between the initialization clock and the
launch clock. Note that in broad-side approach, launch vectors are applied when scan flip-flops
are in their normal mode. In other words, the at-speed clock, i.e. the capture clock after the launch,
is applied while the scan flip-flops stay in their normal mode. Hence, the scan
enable signal does not have to switch between the launch cycle and the capture cycle when
clocks are applied at full system clock speed. Hence, the broad-side approach does not require atspeed transition of the scan enable signal and can be implemented with low hardware overhead.
Even though the broad-side approach is cheaper to implement than the skewed load approach,
fault coverage achieved by test pattern sets generated by the broadside approach is typically
lower than that achieved by test pattern sets generated by the skewed-load approach .Test pattern
sets generated by the broadside approach are typically larger than those generated by the
skewed-load approach. In order to generate two-vector tests for the broad-side approach, an
ATPG with sequential capability that considers two full time frames is required. On the other
hand, test patterns for the skewed-load approach can be generated by a combinational ATPG with
little modification. Hence, a higher test generation cost (longer test generation time) must be paid
for
the broad-side approach. Since in the broad-side approach the second vector is given by the
circuit response to the first vector, the number of possible vectors that can be applied as second
vectors of test vector pairs is limited unless the circuit can reach all 2^n states, where n is the
number of scan flip-flops. Hence, if a state required to activate and propagate a fault is an
invalid state, i.e., a state that cannot be functionally justified, then the transition delay fault is
untestable. Typically, in large circuits with many flip-flops, the number of reachable states is only
a small fraction of the 2^n possible states. For this reason, transition fault coverage for
standard scan designs is often substantially lower than stuck-at fault coverage.
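The clocking difference between broadside and skewed-load can be sketched as a VHDL stimulus process: the scan enable is lowered once, before the launch cycle, so it never has to switch at speed. The entity below is an assumed, illustrative fragment only (no device under test is instantiated, and a real tester would normally shift at a slower clock than the launch-to-capture pair).

library ieee;
use ieee.std_logic_1164.all;

entity tb_broadside is
end entity tb_broadside;

architecture sim of tb_broadside is
  constant N  : natural := 4;                               -- assumed scan chain length
  constant V1 : std_logic_vector(N - 1 downto 0) := "0101"; -- example initialization vector
  signal clk, scan_en, scan_in : std_logic := '0';
begin
  clk <= not clk after 5 ns;   -- illustrative clock; only launch-to-capture must be at speed

  pattern : process
  begin
    scan_en <= '1';                          -- shift mode
    for i in 0 to N - 1 loop                 -- load V1 (shift order depends on chain construction)
      scan_in <= V1(i);
      wait until rising_edge(clk);
    end loop;
    scan_en <= '0';                          -- normal mode; no at-speed scan-enable switching needed
    wait until rising_edge(clk);             -- launch: flip-flops capture the response to V1 (= V2)
    wait until rising_edge(clk);             -- at-speed capture of the response to V2
    scan_en <= '1';                          -- shift out the captured response
    wait;
  end process;
end architecture sim;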

Among the three approaches for applying delay tests, broadside suffers from the poorest fault
coverage. Since there is no dependency between the two vectors in enhanced scan, it can give
better coverage than the skewed-load transition test. Transition tests also lead to larger test data
volume than stuck-at tests: enhanced scan needs more vectors than stuck-at testing to reach
complete coverage, and for skewed-load transition tests an increase of about 5.9X in data volume
has been observed. For most circuits, test sets generated by the skewed-load approach achieve
higher fault coverage than those generated by the broadside approach, and the pattern sets are
also typically smaller than those generated by the broad-side approach. However, the skewed-load
approach requires higher hardware overhead and may require longer design times.


Time Delay
A fixed time delay is often useful. One such application, a dead-time generator, is needed where
two switching transistors must not be turned on simultaneously. This could be where, in a half-wave bridge type application, the turn-on of one device is delayed following the turn-off of the
other device. Another use is in SMPS models to delay switching transistor turn-on during
intervals where switching noise could be present.
The problem is that often other variables may be changing in the course of an analysis, and using
pwl devices, logical equations, or some circuit implementations might be cumbersome. In this
case a delay line is used. This was suggested by Chris Basso and described in his book.
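The delay-line technique referred to above is a SPICE-level trick suggested by Chris Basso; purely as an illustration of the same dead-time idea at the behavioural level, a VHDL sketch is shown below. The 100 ns dead time and the port names are assumed example values, and the model is intended for simulation only, not synthesis.

library ieee;
use ieee.std_logic_1164.all;

-- Behavioral sketch of a dead-time generator (assumed example, not project code).
entity dead_time_gen is
  port (ctrl      : in  std_logic;   -- '1' selects the high-side switch
        high_side : out std_logic;
        low_side  : out std_logic);
end entity dead_time_gen;

architecture behav of dead_time_gen is
begin
  process (ctrl)
  begin
    -- The device that must turn off does so immediately ...
    high_side <= '0';
    low_side  <= '0';
    -- ... while the other device's turn-on is delayed by the dead time.
    if ctrl = '1' then
      high_side <= transport '1' after 100 ns;
    else
      low_side  <= transport '1' after 100 ns;
    end if;
  end process;
end architecture behav;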


Problem Description
Integrated circuit
In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or
chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as
passive components) that has been manufactured in the surface of a thin substrate of
semiconductor material. Integrated circuits are used in almost all electronic equipment in use
today and have revolutionized the world of electronics.
A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual
semiconductor devices, as well as passive components, bonded to a substrate or circuit board.

Integrated circuits were made possible by experimental discoveries which showed that
semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century
technology advancements in semiconductor device fabrication. The integration of large numbers
of tiny transistors into a small chip was an enormous improvement over the manual assembly of
circuits using electronic components. The integrated circuit's mass production capability,
reliability, and building-block approach to circuit design ensured the rapid adoption of
standardized ICs in place of designs using discrete transistors.
There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low
because the chips, with all their components, are printed as a unit by photolithography and not
constructed as one transistor at a time. Furthermore, much less material is used to construct a
circuit as a packaged IC die than as a discrete circuit. Performance is high since the components
switch quickly and consume little power (compared to their discrete counterparts) because the
components are small and close together. As of 2006, chip areas range from a few square
millimeters to around 350 mm², with up to 1 million transistors per mm².


Invention

The idea of integrated circuit was conceived by a radar scientist working for the Royal Radar
Establishment of the British Ministry of Defence, Geoffrey W.A. Dummer (1909-2002), who
published it at the Symposium on Progress in Quality Electronic Components in Washington,
D.C. on May 7, 1952. He gave many symposia publicly to propagate his ideas. Dummer
unsuccessfully attempted to build such a circuit in 1956.
Jack Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and
successfully demonstrated the first working integrated circuit on September 12, 1958. In his
patent application of February 6, 1959, Kilby described his new device as "a body of
semiconductor material ... wherein all the components of the electronic circuit are completely
integrated." Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the
integrated circuit.
Robert Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.
Noyce's chip solved many practical problems that Kilby's had not. Noyce's chip, made at
Fairchild Semiconductor, was made of silicon, whereas Kilby's chip was made of germanium.
Early developments of the integrated circuit go back to 1949, when the German engineer Werner
Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying
device showing five transistors on a common substrate arranged in a 2-stage amplifier
arrangement. Jacobi discloses small and cheap hearing aids as typical industrial applications of
his patent. A commercial use of his patent has not been reported.
A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a
single miniaturized component. Components could then be integrated and wired into a
bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957,
was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program
(similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby
came up with a new, revolutionary design: the IC.

Generations
SSI, MSI and LSI
The first integrated circuits contained only a few transistors. Called "Small-Scale Integration"
(SSI), digital circuits containing transistors numbering in the tens provided a few logic gates for
example, while early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as
two transistors. The term Large Scale Integration was first used by IBM scientist Rolf Landauer
when describing the theoretical concept, from there came the terms for SSI, MSI, VLSI, and
ULSI.
SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile
and Apollo program needed lightweight digital computers for their inertial guidance systems; the
Apollo guidance computer led and motivated the integrated-circuit technology, while
the Minuteman missile forced it into mass-production.
These programs purchased almost all of the available integrated circuits from 1960 through
1963, and almost alone provided the demand that funded the production improvements to get the
production costs from $1000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars).
They began to appear in consumer products at the turn of the decade, a typical application being
FM inter-carrier sound processing in television receivers.
The next step in the development of integrated circuits, taken in the late 1960s, introduced
devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration"
(MSI).
They were attractive economically because while they cost little more to produce than SSI
devices, they allowed more complex systems to be produced using smaller circuit boards, less
assembly work (because of fewer separate components), and a number of other advantages.
Further development, driven by the same economic factors, led to "Large-Scale Integration"
(LSI) in the mid 1970s, with tens of thousands of transistors per chip.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, that
began to be manufactured in moderate quantities in the early 1970s, had under 4000 transistors.
True LSI circuits, approaching 10000 transistors, began to be produced around 1974, for
computer main memories and second-generation microprocessors.

VLSI
The final step in the development process, starting in the 1980s and continuing through the
present, was "very large-scale integration" (VLSI). The development started with hundreds of
thousands of transistors in the early 1980s, and continues beyond several billion transistors as of
2009.

There was no single breakthrough that allowed this increase in complexity, though many factors
helped. Manufacturers moved to smaller rules and cleaner fabs, so that they could make chips
with more transistors and maintain adequate yield. The path of process improvements was
summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools
improved enough to make it practical to finish these designs in a reasonable time. The more
energy efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power
consumption. Better texts such as the landmark textbook by Mead and Conway helped schools
educate more designers, among other factors.
In 1986 the first one megabit RAM chips were introduced, which contained more than one
million transistors. Microprocessor chips passed the million transistor mark in 1989 and the
billion transistor mark in 2005. The trend continues largely unabated, with chips introduced in
2007 containing tens of billions of memory transistors.

Advances in integrated circuits

Among the most advanced integrated circuits are the microprocessors or "cores", which control
everything from computers to cellular phones to digital microwave ovens. Digital memory chips
and ASICs are examples of other families of integrated circuits that are important to the modern
information society. While the cost of designing and developing a complex integrated circuit is
quite high, when spread across typically millions of production units the individual IC cost is
minimized. The performance of ICs is high because the small size allows short traces which in
turn allows low power logic (such as CMOS) to be used at fast switching speeds.
ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to
be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or
increase functionality; see Moore's law which, in its modern interpretation, states that the
number of transistors in an integrated circuit doubles every two years. In general, as the feature
size shrinks, almost everything improves: the cost per unit and the switching power
consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are
not without their problems, principal among which is leakage current (see subthreshold leakage
for a discussion of this), although these problems are not insurmountable and will likely be
solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and
power consumption gains are apparent to the end user, there is fierce competition among the
manufacturers to use finer geometries. This process, and the expected progress over the next few
years, is well described by the International Technology Roadmap for Semiconductors (ITRS).

Popularity of ICs
Only a half century after their development was initiated, integrated circuits have become
ubiquitous. Computers, cellular phones, and other digital appliances are now inextricable parts of
the structure of modern societies. That is, modern computing, communications, manufacturing
and transport systems, including the Internet, all depend on the existence of integrated circuits.

Classification

Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops,
multiplexers, and other circuits in a few square millimeters. The small size of these circuits
allows high speed, low power dissipation, and reduced manufacturing cost compared with
board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers,
work using binary mathematics to process "one" and "zero" signals.
Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by
processing continuous signals. They perform functions like amplification, active filtering,
demodulation, mixing, etc. Analog ICs ease the burden on circuit designers by having expertly
designed analog circuits available instead of designing a difficult analog circuit from scratch.
ICs can also combine analog and digital circuits on a single chip to create functions such as A/D
converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully
account for signal interference.


Manufacturing
Fabrication

[Figure: Rendering of a small standard cell with three metal layers (dielectric removed). The sand-colored structures are metal interconnect, with the vertical pillars being contacts, typically plugs of tungsten; the reddish structures are polysilicon gates, and the solid at the bottom is the crystalline silicon bulk.]

[Figure: Schematic structure of a CMOS chip, as built in the early 2000s, showing LDD-MISFETs on an SOI substrate with five metallization layers and a solder bump for flip-chip bonding, and indicating the FEOL (front end of line), BEOL (back end of line) and first parts of the back-end process.]

The semiconductors of the periodic table of the chemical elements were identified as the most
likely materials for a solid state vacuum tube by researchers like William Shockley at Bell
Laboratories starting in the 1930s. Starting with copper oxide, proceeding to germanium, then
silicon, the materials were systematically studied in the 1940s and 1950s. Today, silicon
monocrystals are the main substrate used for integrated circuits (ICs) although some III-V
compounds of the periodic table such as gallium arsenide are used for specialized applications
like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect
methods of creating crystals without defects in the crystalline structure of the semiconducting
material.
Semiconductor ICs are fabricated in a layer process which includes these key process steps:
imaging, deposition and etching. The main process steps are supplemented by doping and
cleaning.


Mono-crystal silicon wafers (or for special applications, silicon on sapphire or gallium arsenide
wafers) are used as the substrate. Photolithography is used to mark different areas of the
substrate to be doped or to have polysilicon, insulators or metal (typically aluminium) tracks
deposited on them.

Integrated circuits are composed of many overlapping layers, each defined by photolithography,
and normally shown in different colors. Some layers mark where various dopants are diffused
into the substrate (called diffusion layers), some define where additional ions are implanted
(implant layers), some define the conductors (polysilicon or metal layers), and some define the
connections between the conducting layers (via or contact layers). All components are
constructed from a specific combination of these layers.

In a self-aligned CMOS process, a transistor is formed wherever the gate layer (polysilicon or
metal) crosses a diffusion layer.

Capacitive structures, in form very much like the parallel conducting plates of a traditional
electrical capacitor, are formed according to the area of the "plates", with insulating material
between the plates. Capacitors of a wide range of sizes are common on ICs.

Meandering stripes of varying lengths are sometimes used to form on-chip resistors, though most
logic circuits do not need any resistors. The ratio of the length of the resistive structure to its
width, combined with its sheet resistivity, determines the resistance.
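As an illustrative calculation (the values are assumed, not from the text): a strip 20 µm long and 2 µm wide on a film with a sheet resistance of 50 ohms per square has a length-to-width ratio of 10, and therefore a resistance of roughly 10 × 50 = 500 ohms.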

More rarely, inductive structures can be built as tiny on-chip coils, or simulated by gyrators.

Since a CMOS device only draws current on the transition between logic states, CMOS devices
consume much less current than bipolar devices.

A random access memory is the most regular type of integrated circuit; the highest density
devices are thus memories; but even a microprocessor will have memory on the chip. (See the
regular array structure at the bottom of the first image.) Although the structures are intricate,
with widths which have been shrinking for decades, the layers remain much thinner than the
device widths. The layers of material are fabricated much like a photographic process, although
light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would
be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used
to create the patterns for each layer. Because each feature is so small, electron microscopes are
essential tools for a process engineer who might be debugging a fabrication process.
Each device is tested before packaging using automated test equipment (ATE), in a process
known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of
which is called a die. Each good die (plural dice, dies, or die) is then connected into a package
using aluminium (or gold) bond wires which are welded and/or Thermosonic Bonded to pads,
usually found around the edge of the die. After packaging, the devices go through final testing on
the same or similar ATE used during wafer probing. Test cost can account for over 25% of the
cost of fabrication on lower cost products, but can be negligible on low yielding, larger, and/or
higher cost devices.

Packaging

The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used
by the military for their reliability and small size for many years. Commercial circuit packaging
quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s
pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid
array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the
early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as
either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC), a carrier
which occupies an area about 30-50% less than an equivalent DIP, with a typical thickness that is
70% less. This package has "gull wing" leads protruding from the two long sides and a lead
spacing of 0.050 inches.
In the late 1990s, PQFP and TSOP packages became the most common for high pin count
devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD
are currently transitioning from PGA packages on high-end microprocessors to land grid array
(LGA) packages.
Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array
packages, which allow for much higher pin count than other package types, were developed in
the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the
package balls via a package substrate that is similar to a printed-circuit board rather than by
wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed
over the entire die rather than being confined to the die periphery.
Traces out of the die, through the package, and into the printed circuit board have very different
electrical properties, compared to on-chip signals. They require special design techniques and
need much more electric power than signals confined to the chip itself.
When multiple dies are put in one package, it is called SiP, for System In Package. When
multiple dies are combined on a small substrate, often ceramic, it's called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes
fuzzy.

Chip labeling and manufacture date

Most integrated circuits large enough to include identifying information include four common
sections: the manufacturer's name or logo, the part number, a part production batch number
and/or serial number, and a four-digit code that identifies when the chip was manufactured.
Extremely small surface mount technology parts often bear only a number used in a
manufacturer's lookup table to find the chip characteristics.
The manufacturing date is commonly represented as a two-digit year followed by a two-digit
week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or
approximately in October 1983.

Legal protection of semiconductor chip layouts


Prior to 1984, it was not necessarily illegal to produce a competing chip with an identical layout.
As the legislative history for the Semiconductor Chip Protection Act of 1984, or SCPA,
explained, patent and copyright protection for chip layouts, or topographies, were largely
unavailable. This led to considerable complaint by U.S. chip manufacturers, notably Intel,
which took the lead in seeking legislation, along with the Semiconductor Industry Association
(SIA), against what they termed "chip piracy."

A 1984 addition to US law, the SCPA, made all so-called mask works (i.e., chip topographies)
protectable if registered with the U.S. Copyright Office. Similar rules apply in most other
countries that manufacture ICs. (This is a simplified explanation - see SCPA for legal details.)



Proposed Work
In the effort to investigate novel and efficient delay fault testing algorithms and techniques,
several ideas on delay fault testing have been explored. This dissertation addresses the following
problems in developing novel and efficient ATPG and design-for-testability (DFT) algorithms
for all three approaches for scan-based delay testing.
1. Explosion in test data volume and test application time
2. Low delay fault coverage
3. High complexity in delay fault ATPG
4. The overtesting problem in scan-based delay testing
5. Functional vs. scan-based delay testing
We first present two efficient transition fault ATPG algorithms, in which we compute
good-quality transition test sets using stuck-at test vectors. Experimental results obtained using
the new algorithms show a 20% reduction in test set size compared to a state-of-the-art native transition test ATPG tool, without losing fault coverage. Other benefits of our
approach, viz. productivity improvement, constraint handling and design data compression are
also highlighted.
Our second contribution is on the techniques to reduce data volume and application time
for scan-based transition test. We propose a novel notion of transition test chains to substitute the
conventional transition pattern and combine this idea with the ATE repeat capability to reduce
test data volume. Then a new DFT technique for scan testing is presented to address the test
application issue. Our experimental results show that our technique can improve both test data
volume and test application time by 46.5% over a commercial ATPG tool.
Thirdly, a novel scan-based delay test approach, referred to as the hybrid delay scan, is
proposed. The proposed scan-based delay testing method combines the advantages of the skewed-load and broad-side approaches. The hybrid approach can achieve higher delay fault coverage
than the broad-side approach. Unlike the skewed-load approach whose design requirement is
often too costly to meet due to the fast scan enable signal that must switch in a full system clock
cycle, the hybrid delay scan does not require a strong buffer or buffer tree to drive the fast scan
enable signal. Hardware overhead added to standard scan designs to implement the hybrid
approach is negligible. Since the fast scan enable signal is internally generated, no external pin is
required. Transition delay fault coverage achieved by the hybrid approach was equal to or higher
than that achieved by the broad-side approach for all benchmark circuits. On average, about
4.5% improvement in fault coverage was obtained by the hybrid approach over the broad-side
approach. Next, we propose a new concept of testing only functionally testable transition faults in
Broadside Transition testing via a novel constrained ATPG. Illegal (unreachable) states that
enable detection of functionally untestable faults are first identified, and this set of undesirable
illegal states is efficiently represented as a Boolean formula. Our constrained ATPG then uses
this constraint formula to generate Broadside vectors that avoid those undesirable states. In doing
so, our method efficiently generates a test set for functionally testable transition faults and
minimizes detection of functionally untestable transition faults. Because we want to avoid
launching and propagating transitions in the circuit that are not possible in the functional mode, a
direct benefit of our method is the reduction of yield loss due to overtesting of these functionally
untestable transitions. Finally, we propose a new approach to identifying functionally untestable

transition faults in non-scan sequential circuits. A new dominance relationship for transition
faults is formulated and used to identify more sequentially untestable transition faults. The
proposed method consists of two phases: first, a large number of functionally untestable
transition faults is identified by fault-independent sequential logic implications implicitly
crossing multiple time-frames, and the identified untestable faults are classified into three
conflict categories. Second, additional functionally untestable transition faults are identified by
dominance relationships from the previously identified untestable transition faults. The
experimental results for sequential benchmark circuits showed that our approach can quickly
identify many more functionally untestable transition faults than previously reported.
In short, the topics we have investigated include:
1. Transition Fault ATPG Based on Stuck-at Test Vectors
2. Efficient Transition Testing using Test Chains and Exchange Scan
3. Hybrid Scan-based Delay Testing
4. Constrained ATPG for Broadside Transition Testing
5. Functionally Untestable Transition Fault Identification
The rest of the dissertation is organized as follows. First, we give a brief review of the
preliminaries on delay fault testing in Chapter 2. Then, a novel transition fault ATPG based on
stuck-at test vectors is discussed in Chapter 3. Chapter 4 presents two techniques to further
reduce the test data volume and test application time for scan-based transition test. In Chapter 5,
a novel scan-based delay test approach, referred to as the hybrid delay scan, is proposed.
Then, we propose a new concept of testing only functionally testable transition faults in
Broadside Transition testing via a novel constrained ATPG in Chapter 6. Chapter 7 presents a
novel approach to identifying functionally untestable transition faults using implications and a
transition dominance relationship. Finally, Chapter 8 concludes the dissertation.



Design & Development


What is VHDL?

VHDL is the VHSIC Hardware Description Language. VHSIC is an abbreviation for Very High
Speed Integrated Circuit. It can describe the behaviour and structure of electronic systems, but is
particularly suited as a language to describe the structure and behaviour of digital electronic
hardware designs, such as ASICs and FPGAs as well as conventional digital circuits.
VHDL is a notation, and is precisely and completely defined by the Language Reference Manual
( LRM ). This sets VHDL apart from other hardware description languages, which are to some
extent defined in an ad hoc way by the behaviour of tools that use them. VHDL is an
international standard, regulated by the IEEE. The definition of the language is non-proprietary.
VHDL is not an information model, a database schema, a simulator, a toolset or a methodology!
However, a methodology and a toolset are essential for the effective use of VHDL.
Simulation and synthesis are the two main kinds of tools which operate on the VHDL language.
The Language Reference Manual does not define a simulator, but unambiguously defines what
each simulator must do with each part of the language.
VHDL does not constrain the user to one style of description. VHDL allows designs to be
described using any methodology - top down, bottom up or middle out! VHDL can be used to
describe hardware at the gate level or in a more abstract way. Successful high level design
requires a language, a tool set and a suitable methodology. VHDL is the language; the tools and
the methodology must be chosen to suit the project.
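As a small illustration of the split between interface (entity) and behaviour (architecture) described above, a minimal VHDL description of a 2-to-1 multiplexer is shown below. It is an example only, not part of the project code.

library ieee;
use ieee.std_logic_1164.all;

-- The entity declares the interface; the architecture describes the behaviour.
entity mux2 is
  port (a, b, sel : in  std_logic;
        y         : out std_logic);
end entity mux2;

architecture rtl of mux2 is
begin
  y <= a when sel = '0' else b;   -- select input a when sel is '0', otherwise input b
end architecture rtl;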


The Requirement
The development of VHDL was initiated in 1981 by the United States Department of Defence to
address the hardware life-cycle crisis. The cost of re-procuring electronic hardware as
technologies became obsolete was reaching a crisis point, because the function of the parts was not
adequately documented, and the various components making up a system were individually
verified using a wide range of different and incompatible simulation languages and tools. The
requirement was for a language with a wide range of descriptive capability that would work the
same on any simulator and was independent of technology or design methodology.

Standardization
The standardization process for VHDL was unique in that the participation and feedback from
industry was sought at an early stage. A baseline language (version 7.2) was published 2 years
before the standard so that tool development could begin in earnest in advance of the standard.
All rights to the language definition were given away by the DoD to the IEEE in order to
encourage industry acceptance and investment.

ASIC Mandate
DoD Mil Std 454 mandates the supply of a comprehensive VHDL description with every ASIC
delivered to the DoD. The best way to provide the required level of description is to use VHDL
throughout the design process.

VHDL 1993
As an IEEE standard, VHDL must undergo a review process every 5 years (or sooner) to ensure
its ongoing relevance to the industry. The first such revision was completed in September 1993,
and this is still the most widely supported version of VHDL.

VHDL 2000 and VHDL 2002


One of the features that was introduced in VHDL-1993 was shared variables. Unfortunately, it
wasn't possible to use these in any meaningful way. A working group eventually resolved this by
proposing the addition of protected types to VHDL. VHDL 2000 Edition is simply VHDL-1993
with protected types.
VHDL-2002 is a minor revision of VHDL 2000 Edition. There is one significant change, though:
the rules on using buffer ports are relaxed, which makes these much more useful than hitherto.
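As a brief, assumed illustration of what a protected type looks like (not project code), the package below wraps a counter so that a shared variable of this type can be updated safely from several processes; the procedure and function bodies live in the corresponding protected body.

package shared_counter_pkg is
  type shared_counter_t is protected
    procedure increment;
    impure function value return natural;
  end protected shared_counter_t;
end shared_counter_pkg;

package body shared_counter_pkg is
  type shared_counter_t is protected body
    variable count : natural := 0;        -- the encapsulated state

    procedure increment is
    begin
      count := count + 1;                 -- updates are serialised by the language
    end procedure;

    impure function value return natural is
    begin
      return count;
    end function;
  end protected body shared_counter_t;
end shared_counter_pkg;

A design or testbench would then declare a shared variable of type shared_counter_t and call its increment procedure from several processes.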


Benefits of using VHDL


Executable specification
It is often reported that a large number of ASIC designs meet their specifications first time, but
fail to work when plugged into a system. VHDL allows this issue to be addressed in two ways: A
VHDL specification can be executed in order to achieve a high level of confidence in its
correctness before commencing design, and may simulate one to two orders of magnitude faster
than a gate level description. A VHDL specification for a part can form the basis for a simulation
model to verify the operation of the part in the wider system context (e.g. printed circuit board
simulation). This depends on how accurately the specification handles aspects such as timing and
initialization.
Behavioural simulation can reduce design time by allowing design problems to be detected early
on, avoiding the need to rework designs at gate level. Behavioural simulation also permits design
optimization by exploring alternative architectures, resulting in better designs.

Tools
VHDL descriptions of hardware design and test benches are portable between design tools, and
portable between design centres and project partners. You can safely invest in VHDL modelling
effort and training, knowing that you will not be tied in to a single tool vendor, but will be free to
preserve your investment across tools and platforms. Also, the design automation tool vendors
are themselves making a large investment in VHDL, ensuring a continuing supply of state-of-the-art VHDL tools.

Technology
VHDL permits technology independent design through support for top down design and logic
synthesis. To move a design to a new technology you need not start from scratch or reverse-engineer a specification; instead you go back up the design tree to a behavioural VHDL
description, then implement that in the new technology knowing that the correct functionality
will be preserved.

Benefits

Executable specification
Validate spec in system context (Subcontract)
Functionality separated from implementation
Simulate early and fast (Manage complexity)
Explore design alternatives
Get feedback (Produce better designs)
Automatic synthesis and test generation (ATPG for ASICs)
Increase productivity (Shorten time-to-market)
Technology and tool independence (though FPGA features may be unexploited)
Portable design data (Protect investment)

Design Flow using VHDL


The diagram below summarizes the high-level design flow for an ASIC (i.e. gate array, standard
cell) or FPGA. In a practical design situation, each step described in the following sections may
be split into several smaller steps, and parts of the design flow will be iterated as errors are
uncovered.

System-level Verification
As a first step, VHDL may be used to model and simulate aspects of the complete system
containing one or more devices. This may be a fully functional description of the system
allowing the FPGA/ASIC specification to be validated prior to commencing detailed design.
Alternatively, this may be a partial description that abstracts certain properties of the system,
such as a performance model to detect system performance bottle-necks.

RTL design and testbench creation


Once the overall system architecture and partitioning is stable, the detailed design of each
FPGA/ASIC can commence. This starts by capturing the design in VHDL at the register transfer
level, and capturing a set of test cases in VHDL. These two tasks are complementary, and are
sometimes performed by different design teams in isolation to ensure that the specification is
correctly interpreted. The RTL VHDL should be synthesizable if automatic logic synthesis is to
be used. Test case generation is a major task that requires a disciplined approach and much
engineering ingenuity: the quality of the final FPGA/ASIC depends on the coverage of these test
cases.
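
To give a concrete feel for what "capturing test cases in VHDL" means, a testbench typically instantiates the design under test and drives it from a stimulus process. The sketch below is illustrative only, using the simple two-input AND gate (AND_ent) listed later in this report as the design under test:

library ieee;
use ieee.std_logic_1164.all;

entity AND_tb is                 -- a testbench has no ports
end AND_tb;

architecture test of AND_tb is
    signal a, b, f : std_logic;
begin
    -- instantiate the design under test directly from library work (VHDL-93 style)
    uut: entity work.AND_ent(behav2) port map (x => a, y => b, F => f);

    stimulus: process
    begin
        a <= '0'; b <= '0'; wait for 10 ns;
        a <= '0'; b <= '1'; wait for 10 ns;
        a <= '1'; b <= '1'; wait for 10 ns;
        assert f = '1' report "AND output wrong for inputs 1 and 1" severity error;
        wait;                    -- suspend forever, ending the stimulus
    end process;
end test;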

RTL verification
The RTL VHDL is then simulated to validate the functionality against the specification. RTL
simulation is usually one or two orders of magnitude faster than gate level simulation, and
experience has shown that this speed-up is best exploited by doing more simulation, not
spending less time on simulation.
In practice it is common to spend 70-80% of the design cycle writing and simulating VHDL at
and above the register transfer level, and 20-30% of the time synthesizing and verifying the
gates.


Look-ahead Synthesis
Although some exploratory synthesis will be done early on in the design process, to provide
accurate speed and area data to aid in the evaluation of architectural decisions and to check the
engineer's understanding of how the VHDL will be synthesized, the main synthesis production
run is deferred until functional simulation is complete. It is pointless to invest a lot of time and
effort in synthesis until the functionality of the design is validated.

70% of design time at RTL!

What is the difference between VHDL and Verilog?


Fundamentally speaking, not a lot. You can produce robust designs and comprehensive test
environments with both languages, for both ASIC and FPGA. However, the two languages
approach the task from different directions: VHDL, intended as a specification language, is very
exact in its nature and hence very verbose. Verilog, intended as a simulation language, is much
closer to C in style, in that it is terse and elegant to write but requires much more care to avoid
nasty bugs. VHDL doesn't let you get away with much; Verilog assumes that whatever you wrote
was exactly what you intended to write. If you get a VHDL architecture to compile, it will probably
approximate to the function you wanted. For Verilog, successful compilation merely
indicates that the syntax rules were met, nothing more. VHDL has some features that make it
good for system-level modelling, whereas Verilog is much better than VHDL at gate-level
simulation. To confuse the situation further, see SystemVerilog.

Overview of ISE

ISE controls all aspects of the design flow. Through the Project Navigator interface, you can
access all of the design entry and design implementation tools. You can also access the files and
documents associated with your project.

Project Navigator Interface
The Project Navigator interface is, by default, divided into four panel sub-windows. On the top
left are the Design, Files and Libraries panels, which display and give access to the source files in
the project, as well as to the processes that can be run on the currently selected source. At the
bottom of the Project Navigator window are the Console, Errors and Warnings panels, which
display status messages, errors, and warnings. To the right is a multi-document interface (MDI)
window referred to as the Workspace. It enables you to view design reports, text files, schematics,
and simulation waveforms. Each window may be resized, undocked from Project Navigator,
moved to a new location within the main Project Navigator window, tiled, layered, or closed.
Panels may be opened or closed by using the View -> Panels -> * menu selections. The default
layout can always be restored by selecting View -> Restore Default Layout. These windows are
discussed in more detail in the following sections.

Design Panel
The Sources view displays the project name, the target device, and user documents and design
source files associated with the selected Design View. The Design View (Sources for) dropdown list at the top of the Sources tab allows you to view only those source files associated with
the selected Design View, such as Synthesis/Implementation orSimulation. Each file in a Design
View has an associated icon. The icon indicates the file type (HDL file,schematic, core, or text
file, for example).
.

Creating a New Project

Implementation

VHDL PROGRAMS FOR VARIOUS LOGIC CIRCUITS


--------------------------------------------------
-- AND gate
-- two descriptions provided
--------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
--------------------------------------------------
entity AND_ent is
    port( x: in  std_logic;
          y: in  std_logic;
          F: out std_logic
        );
end AND_ent;
--------------------------------------------------
architecture behav1 of AND_ent is
begin
    process(x, y)
    begin
        -- compare to truth table
        if ((x = '1') and (y = '1')) then
            F <= '1';
        else
            F <= '0';
        end if;
    end process;
end behav1;

architecture behav2 of AND_ent is
begin
    F <= x and y;
end behav2;
--------------------------------------------------
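
Because AND_ent has two architectures, a simulator must be told which one to use. One way to do this, shown as a minimal sketch (assuming both architectures are compiled into library work), is a configuration declaration:

-- select the dataflow architecture of AND_ent for simulation
configuration AND_cfg of AND_ent is
    for behav2
    end for;
end AND_cfg;

Without such a configuration (or a direct entity instantiation that names the architecture), most simulators default to the most recently analysed architecture.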

--------------------------------------------------
-- OR gate
-- two descriptions provided
--------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
--------------------------------------------------
entity OR_ent is
    port( x: in  std_logic;
          y: in  std_logic;
          F: out std_logic
        );
end OR_ent;
--------------------------------------------------
architecture OR_arch of OR_ent is
begin
    process(x, y)
    begin
        -- compare to truth table
        if ((x = '0') and (y = '0')) then
            F <= '0';
        else
            F <= '1';
        end if;
    end process;
end OR_arch;

architecture OR_beh of OR_ent is
begin
    F <= x or y;
end OR_beh;
--------------------------------------------------

--------------------------------------------------
-- NAND gate
-- two descriptions provided
--------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
--------------------------------------------------
entity NAND_ent is
    port( x: in  std_logic;
          y: in  std_logic;
          F: out std_logic
        );
end NAND_ent;
--------------------------------------------------
architecture behv1 of NAND_ent is
begin
    process(x, y)
    begin
        -- compare to truth table
        if (x = '1' and y = '1') then
            F <= '0';
        else
            F <= '1';
        end if;
    end process;
end behv1;

architecture behv2 of NAND_ent is
begin
    F <= x nand y;
end behv2;
--------------------------------------------------

--------------------------------------------------
-- NOR gate
-- two descriptions provided
--------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
--------------------------------------------------
entity NOR_ent is
    port( x: in  std_logic;
          y: in  std_logic;
          F: out std_logic
        );
end NOR_ent;
--------------------------------------------------
architecture behv1 of NOR_ent is
begin
    process(x, y)
    begin
        -- compare to truth table
        if (x = '0' and y = '0') then
            F <= '1';
        else
            F <= '0';
        end if;
    end process;
end behv1;

architecture behv2 of NOR_ent is
begin
    F <= x nor y;
end behv2;
--------------------------------------------------


--------------------------------------------------
-- XOR gate
-- two descriptions provided
--------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;
--------------------------------------------------
entity XOR_ent is
    port( x: in  std_logic;
          y: in  std_logic;
          F: out std_logic
        );
end XOR_ent;
--------------------------------------------------
architecture behv1 of XOR_ent is
begin
    process(x, y)
    begin
        -- compare to truth table
        if (x /= y) then
            F <= '1';
        else
            F <= '0';
        end if;
    end process;
end behv1;

architecture behv2 of XOR_ent is
begin
    F <= x xor y;
end behv2;
--------------------------------------------------


------------------------------------------------------------
-- Combinational Logic Design
-- A simple example of VHDL structural modelling.
-- The two components might be defined in two separate files;
-- in the top-level file, port map statements instantiate the
-- components and describe the mapping between each component
-- and the entire circuit.
------------------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;

-- component #1
entity OR_GATE is
    port( X:  in  std_logic;
          Y:  in  std_logic;
          F2: out std_logic
        );
end OR_GATE;

architecture behv of OR_GATE is      -- behavioural description
begin
    process(X, Y)
    begin
        F2 <= X or Y;
    end process;
end behv;

------------------------------------------------------------
library ieee;
use ieee.std_logic_1164.all;

-- component #2
entity AND_GATE is
    port( A:  in  std_logic;
          B:  in  std_logic;
          F1: out std_logic
        );
end AND_GATE;

architecture behv of AND_GATE is     -- behavioural description
begin
    process(A, B)
    begin
        F1 <= A and B;
    end process;
end behv;

------------------------------------------------------------
-- top level circuit
library ieee;
use ieee.std_logic_1164.all;
use work.all;

entity comb_ckt is
    port( input1: in  std_logic;
          input2: in  std_logic;
          input3: in  std_logic;
          output: out std_logic
        );
end comb_ckt;

architecture struct of comb_ckt is

    component AND_GATE is            -- as entity of AND_GATE
        port( A:  in  std_logic;
              B:  in  std_logic;
              F1: out std_logic
            );
    end component;

    component OR_GATE is             -- as entity of OR_GATE
        port( X:  in  std_logic;
              Y:  in  std_logic;
              F2: out std_logic
            );
    end component;

    signal wire: std_logic;          -- internal signal, just like a wire

begin
    -- use the sign "=>" to clarify the pin mapping
    Gate1: AND_GATE port map (A => input1, B => input2, F1 => wire);
    Gate2: OR_GATE  port map (X => wire,   Y => input3, F2 => output);
end struct;
----------------------------------------------------------------
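
As an aside, the component declarations above can be avoided in VHDL-93 and later by instantiating the entities directly from library work. The following is a minimal sketch (not part of the project files) of the same architecture written that way:

architecture struct_direct of comb_ckt is
    signal wire : std_logic;         -- internal connection between the two gates
begin
    Gate1: entity work.AND_GATE(behv) port map (A => input1, B => input2, F1 => wire);
    Gate2: entity work.OR_GATE(behv)  port map (X => wire,   Y => input3, F2 => output);
end struct_direct;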


Result

Fig. 1  Basic gates for half adder
Fig. 2  Logic circuit for half adder
Fig. 3  Initial timing and clock wizard
Fig. 4  Schematic editor for half adder
Fig. 5  Simulation
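
For reference, the half adder shown in Fig. 1 and Fig. 2 could be captured in VHDL along the following lines. This is a minimal sketch written for this report, not reproduced from the generated project files:

library ieee;
use ieee.std_logic_1164.all;

entity half_adder is
    port( a, b  : in  std_logic;
          sum   : out std_logic;
          carry : out std_logic
        );
end half_adder;

architecture dataflow of half_adder is
begin
    sum   <= a xor b;    -- sum output of the half adder
    carry <= a and b;    -- carry output of the half adder
end dataflow;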

and Project Status (05/11/2010 - 18:31:07)

Project File:         and.ise
Module Name:          AND_ent
Target Device:        xc4vlx15-12sf363
Product Version:      ISE 11.1
Design Goal:          Balanced
Design Strategy:      Xilinx Default (unlocked)
Implementation State: Mapped
Errors:               1 Error
Warnings:             No Warnings
Routing Results:
Timing Constraints:
Final Timing Score:

Fig. 6  Project status

Detailed Reports

Report Name                     Status    Generated                    Errors    Warnings    Infos
Synthesis Report                Current   Tue May 11 18:28:08 2010
Translation Report              Current   Tue May 11 18:28:30 2010
Map Report                      Current   Tue May 11 18:31:04 2010     1 Error   0
Place and Route Report
Power Report
Post-PAR Static Timing Report
Bitgen Report

Secondary Reports
Report Name                     Status    Generated

Date Generated: 05/11/2010 - 18:31:08

Fig. 7  Detailed report

Synthesis Report
Release 11.1 - xst L.33 (nt)
Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.
--> Parameter TMPDIR set to xst/projnav.tmp
Total REAL time to Xst completion: 0.00 secs
Total CPU time to Xst completion: 0.33 secs
--> Parameter xsthdpdir set to xst
Total REAL time to Xst completion: 0.00 secs
Total CPU time to Xst completion: 0.33 secs
--> Reading design: AND_ent.prj
TABLE OF CONTENTS
1) Synthesis Options Summary
2) HDL Compilation
3) Design Hierarchy Analysis
4) HDL Analysis
5) HDL Synthesis
5.1) HDL Synthesis Report
6) Advanced HDL Synthesis
6.1) Advanced HDL Synthesis Report
7) Low Level Synthesis
8) Partition Report
9) Final Report
9.1) Device utilization summary

9.2) Partition Resource Summary


9.3) TIMING REPORT
=====================================================================
====
* Synthesis Options Summary *
=====================================================================
====
---- Source Parameters
Input File Name : "AND_ent.prj"
Input Format : mixed
Ignore Synthesis Constraint File : NO
---- Target Parameters
Output File Name : "AND_ent"
Output Format : NGC
Target Device : xc4vlx15-12-sf363
---- Source Options
Top Module Name : AND_ent
Automatic FSM Extraction : YES
FSM Encoding Algorithm : Auto
Safe Implementation : No
FSM Style : lut
RAM Extraction : Yes
RAM Style : Auto
ROM Extraction : Yes
Mux Style : Auto
Decoder Extraction : YES
Priority Encoder Extraction : YES
Shift Register Extraction : YES
Logical Shifter Extraction : YES
XOR Collapsing : YES
ROM Style : Auto
Mux Extraction : YES
Resource Sharing : YES
Asynchronous To Synchronous : NO
Use DSP Block : auto
Automatic Register Balancing : No
---- Target Options
Add IO Buffers : YES
Add Generic Clock Buffer(BUFG) : 32
Number of Regional Clock Buffers : 16
Register Duplication : YES
Slice Packing : YES
Optimize Instantiated Primitives : NO
Use Clock Enable : Auto
Use Synchronous Set : Auto
Use Synchronous Reset : Auto

Pack IO Registers into IOBs : auto


Equivalent register Removal : YES
---- General Options
Optimization Goal : Speed
Optimization Effort : 1
Power Reduction : NO
Library Search Order : AND_ent.lso
Keep Hierarchy : NO
Netlist Hierarchy : as_optimized
RTL Output : Yes
Global Optimization : AllClockNets
Read Cores : YES
Write Timing Constraints : NO
Cross Clock Analysis : NO
Hierarchy Separator : /
Bus Delimiter : <>
Case Specifier : maintain
Slice Utilization Ratio : 100
BRAM Utilization Ratio : 100
DSP48 Utilization Ratio : 100
Verilog 2001 : YES
Auto BRAM Packing : NO
Slice Utilization Ratio Delta : 5
=====================================================================
====
=====================================================================
====
* HDL Compilation *
=====================================================================
====
Compiling vhdl file "H:/New Folder/AND_gate.vhd" in Library work.
Entity <AND_ent> compiled.
Entity <AND_ent> (Architecture <behav1>) compiled.
Entity <AND_ent> (Architecture <behav2>) compiled.
=====================================================================
====
* Design Hierarchy Analysis *
=====================================================================
====
Analyzing hierarchy for entity <AND_ent> in library <work> (architecture <behav2>).
=====================================================================
====
* HDL Analysis *
=====================================================================
====
Analyzing Entity <AND_ent> in library <work> (Architecture <behav2>).

Entity <AND_ent> analyzed. Unit <AND_ent> generated.


=====================================================================
====
* HDL Synthesis *
=====================================================================
====
Performing bidirectional port resolution...
Synthesizing Unit <AND_ent>.
Related source file is "H:/New Folder/AND_gate.vhd".
Unit <AND_ent> synthesized.
=====================================================================
====
HDL Synthesis Report
Found no macro
=====================================================================
====
=====================================================================
====
* Advanced HDL Synthesis *
=====================================================================
====
=====================================================================
====
Advanced HDL Synthesis Report
Found no macro
=====================================================================
====
=====================================================================
====
* Low Level Synthesis *
=====================================================================
====
Optimizing unit <AND_ent> ...
Mapping all equations...
Building and optimizing final netlist ...
Found area constraint ratio of 100 (+ 5) on block AND_ent, actual ratio is 0.
Final Macro Processing ...
=====================================================================
====
Final Register Report
Found no macro
=====================================================================
====
=====================================================================
====
* Partition Report *

=====================================================================
====
Partition Implementation Status
-------------------------------
No Partitions were found in this design.
-------------------------------
=========================================================================
* Final Report *
=====================================================================
====
Final Results
RTL Top Level Output File Name : AND_ent.ngr
Top Level Output File Name : AND_ent
Output Format : NGC
Optimization Goal : Speed
Keep Hierarchy : NO
Design Statistics
# IOs : 3
Cell Usage :
# BELS : 1
# LUT2 : 1
# IO Buffers : 3
# IBUF : 2
# OBUF : 1
=====================================================================
====
Device utilization summary:
---------------------------
Selected Device : 4vlx15sf363-12

Number of Slices:        1 out of  6144     0%
Number of 4 input LUTs:  1 out of 12288     0%
Number of IOs:           3
Number of bonded IOBs:   3 out of   240     1%
---------------------------
Partition Resource Summary:
---------------------------
No Partitions were found in this design.
---------------------------
=========================================================================
TIMING REPORT

NOTE: THESE TIMING NUMBERS ARE ONLY A SYNTHESIS ESTIMATE.
      FOR ACCURATE TIMING INFORMATION PLEASE REFER TO THE TRACE REPORT
      GENERATED AFTER PLACE-and-ROUTE.

Clock Information:
------------------
No clock signals found in this design

Asynchronous Control Signals Information:
------------------------------------------
No asynchronous control signals found in this design

Timing Summary:
---------------
Speed Grade: -12
Minimum period: No path found
Minimum input arrival time before clock: No path found
Maximum output required time after clock: No path found
Maximum combinational path delay: 4.858ns

Timing Detail:
--------------
All values displayed in nanoseconds (ns)

=========================================================================
Timing constraint: Default path analysis
Total number of paths / destination ports: 2 / 1
-------------------------------------------------------------------------
Delay:       4.858ns (Levels of Logic = 3)
Source:      x (PAD)
Destination: F (PAD)
Data Path:   x to F

                          Gate     Net
Cell:in->out     fanout   Delay    Delay    Logical Name (Net Name)
---------------------------------------   -------------------------
IBUF:I->O             1   0.754    0.436   x_IBUF (x_IBUF)
LUT2:I0->O            1   0.147    0.266   F1 (F_OBUF)
OBUF:I->O                 3.255            F_OBUF (F)
---------------------------------------
Total                     4.858ns (4.156ns logic, 0.702ns route)
                                  (85.5% logic, 14.5% route)

=========================================================================
Total REAL time to Xst completion: 7.00 secs
Total CPU time to Xst completion: 7.13 secs
-->
Total memory usage is 142536 kilobytes
Number of errors : 0 ( 0 filtered)
Number of warnings : 0 ( 0 filtered)
Number of infos : 0 ( 0 filtered)

Translation Report

Release 11.1 ngdbuild L.33 (nt)


Copyright (c) 1995-2009 Xilinx, Inc. All rights reserved.
Command Line: C:\Xilinx\11.1\ISE\bin\nt\unwrapped\ngdbuild.exe -ise and.ise
-intstyle ise -dd _ngo -nt timestamp -i -p xc4vlx15-sf363-12 AND_ent.ngc
AND_ent.ngd
Reading NGO file "C:/Xilinx/11.1/ISE/and/AND_ent.ngc" ...
Gathering constraint information from source properties...
Done.
Resolving constraint associations...
Checking Constraint Associations...
Done...
Checking Partitions ...
Checking expanded design ...
Partition Implementation Status
-------------------------------
No Partitions were found in this design.
-------------------------------
NGDBUILD Design Results Summary:
Number of errors: 0
Number of warnings: 0
Total memory usage is 95588 kilobytes
Writing NGD file "AND_ent.ngd" ...
Total REAL time to NGDBUILD completion: 9 sec
Total CPU time to NGDBUILD completion: 8 sec
Writing NGDBUILD log file "AND_ent.bld"...


Conclusion & Future Work


With the rapid advances in integrated circuit technology, it is possible to fabricate digital circuits
with a large number of devices on a single VLSI chip. The increase in the size and complexity of the
circuitry placed on a chip, with little or no increase in the number of I/O pins, drastically reduces the
controllability and observability of the logic on the chip. Since VLSI engineering is used in a wide
range of applications, the need for testing is becoming more and more important.
Test generation for sequential circuits has long been recognized as a difficult problem. Different
approaches have been used to deal with the testing problem, either by randomly generating test
sequences or by using deterministic test generation methods.
The existing test generation algorithms for sequential circuits can generate test sequences for
large sequential circuits. However, with increasing circuit complexity, either the test generation time
increases exponentially or the algorithm cannot produce test sequences because of the exponential
growth in the number of reachable states.
In this dissertation, we propose a new approach for designing test generation algorithms with
better time complexity and fault coverage. A global search approach has been developed for the
test generation of large sequential circuits. The approach justifies a fault at a primary output or
next-state line in order to trace different sensitive paths from the primary inputs and present-state
lines to that primary output or next-state line. During the global test generation process, many
faults are considered as candidates to be tested simultaneously.


Bibliography

M. ABRAMOVICI, M. A. BREUER, and A. D. FRIEDMAN. Digital Systems Testing and Testable
Design. Computer Science Press, 1990.

V. D. AGRAWAL and S. T. CHAKRADHAR. Combinational ATPG Theorems for Identifying
Untestable Faults in Synchronous Sequential Circuits. IEEE Trans. on Computer-Aided Design of
Integrated Circuits and Systems, Vol. 14(9):1155-1160, Sep. 1995.

F. BRGLEZ, D. BRYAN, and K. KOZMINSKI. Combinational Profiles of Sequential Benchmark
Circuits. In Proceedings IEEE International Symposium on Circuits and Systems, 1989.

D. BRAND and V. S. IYENGAR. Identification of Redundant Delay Faults. IEEE Trans. on
Computer-Aided Design of Integrated Circuits and Systems, Vol. 13(5):553-565, May 1994.

D. BELETE, A. RAZDAN, W. SCHWARZ, R. RAINA, C. HAWKINS, and J. MOREHEAD. Use of
DFT Techniques in Speed Grading a 1 GHz+ Microprocessor. In Proceedings IEEE International
Test Conference, pages 1111-1119, 2002.

S. T. CHAKRADHAR and V. D. AGRAWAL. A Transitive Closure Algorithm for Test Generation.
IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, Vol. 12(7):1015-1028,
July 1993.

A. CHANDRA and K. CHAKRABARTY. Frequency-Directed Run-Length (FDR) Codes with
Application to System-on-a-Chip Data Compression. In Proceedings VLSI Test Symposium,
pages 42-47, 2001.

K.-T. CHENG. Transition Fault Testing for Sequential Circuits. IEEE Trans. on Computer-Aided
Design of Integrated Circuits and Systems,
