Juan-Antonio Carballo
Copyright 2008 by
PennWell Corporation
1421 South Sheridan Road
Tulsa, Oklahoma 74112-6600 USA
800.752.9764
+1.918.831.9421
sales@pennwell.com
www.pennwellbooks.com
www.pennwell.com
Carballo, Juan-Antonio.
Chip design for non-designers / Juan-Antonio Carballo.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-1-59370-106-2
1. Integrated circuits--Design and construction. I. Title.
TK7874.C355 2008
621.3815--dc22
2007046495
2 Specifying a Chip . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
The Chip Specification . . . . . . . . . . . . . . . . . . . . . . . . . 13
Developing Specifications through Languages . . . . . . . . . . . . . . 14
How Are Specifications Developed? . . . . . . . . . . . . . . . . . . . 16
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3 System-Level Design . . . . . . . . . . . . . . . . . . . . . . . . . . 19
System-Level Design: A Growing Discipline . . . . . . . . . . . . . . 19
Design Sub-flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Design Entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
What about the software? . . . . . . . . . . . . . . . . . . . . . . 27
On-chip interconnections: Buses and networks . . . . . . . . . . . . 28
What about the chip package? . . . . . . . . . . . . . . . . . . . . 31
A custom model using general-purpose languages . . . . . . . . . . 34
System-Level Design to Logic-Level Design: High-Level Synthesis . . . . . 38
Chip Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Advanced planning of an SoC and the players involved . . . . . . . . 42
Emulation as a means to accelerate high-level design
while keeping accuracy . . . . . . . . . . . . . . . . . . . . . . 43
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Digital design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Analog design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Overall chip design . . . . . . . . . . . . . . . . . . . . . . . . . . 150
System-level design . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Logic design: timing analysis . . . . . . . . . . . . . . . . . . . . . 151
Logic design: power analysis and optimization . . . . . . . . . . . . . 151
Logic design: testability . . . . . . . . . . . . . . . . . . . . . . . . 151
Layout design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Analog design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Digital design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Manufacturability/yield . . . . . . . . . . . . . . . . . . . . . . . . 152
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Focus areas
The focus is on the fundamentals of chip design, which should survive the
evolution in design technology. As a result, the reader should be able to fully
benefit from this book for at least five years. Few books currently in print provide
similar value. While there are good books on
chip design, they tend to have different objectives and are detailed and lengthy.
This book does not try to compete with or outdo those books. It is intended for
non-design professionals who need a practical and useful introduction.
This book may also be used as an academic text, for a course that could
be called Introduction to Chip Design or Introduction to Design Methodologies,
especially in a manufacturing-oriented degree program. It is also a suitable source
of material for internal seminars, and indeed, when appropriate, I use it for my
own seminars, with audiences ranging from 30 to 200 people.
Summary
By reading this book, one should expect to accomplish the following:
In doing so, we will focus on the pre-tape-out side; digital design, but
also analog content; computing, communications, and consumer applications;
ASIC (SoC) and semicustom design; hardware, not processor software; and key
parameters, such as power consumption. Principal assumptions regarding the
average reader are a general knowledge of chip design and some understanding
of how transistors, circuits, and logic work.
As standard disclaimers, examples should not be interpreted as part of any
product; trademarks expressed or implied belong to their respective owners; and
no endorsement of any product is intended.
First, the specifications, or requirements, for the chip are defined at the
system level. In other words, we describe what the chip needs to do (its
function) and with what quantitative characteristics, such as speed, area,
cost, and power consumption.
Then, the chip or circuit is designed at a high level, the system level.
Here, we describe the chip and how it functions well within the entire
system, as the initial specifications require. At this level, we describe
both behavior and structure; that is, we describe the function of the
chip (behavior), as well as the set of blocks that compose the chip and
how they are interconnected with each other. Some of these blocks are
analog blocks (e.g., a circuit that generates a clock), and others are digital
(e.g., a multiplier).
Next, each of the blocks in the chip is designed in one of three
forms: custom digital, semi-custom digital, and custom analog. (Note
that a fourth approach, using a pre-designed block, implies that another
team completed the block using one of the other three approaches.
Thus, there are only three possible approaches.) These forms are shown
as the three subflows from left to right in figure 1-3. Analog blocks are
designed in a custom manner (right-most subflow). Large digital blocks
are designed (middle subflow in fig. 1-3) by assembling a number of
custom-designed digital subblocks (left-most subflow in fig. 1-3).
Verification tools. These are used to check that a design meets certain
properties. For example, a verification tool can be used to check that a
Summary
In this chapter, two key weapons have been described that design teams use
to manage the daunting complexity of designing a chip: a set of clearly defined
abstraction levels; and a design flow that extends from higher levels to lower
levels, until a manufacturable design is complete. The concept of design tools,
which provide automated support for a design flow, has also been introduced;
without design tools, modern chip design would be impossible.
Developing Specifications
through Languages
A chip design specification is a description of an artifact (i.e., a chip).
Therefore, a language is needed to develop such a description. There are two
ways to create a specification:
Question. Why should design teams use formal languages, if English and
other natural languages are the most developed ways for humans to communicate?
Conversely, could we design a chip on the basis of only natural languages, such
as English?
Table 2-1 depicts a possible chip specification for a high-performance
interface communications circuit. Many exact copies of this circuit sit in a
typical modern networking chip. It is a very difficult type of circuit to design,
because it includes both analog and digital circuits inside, because it needs to run
at extremely high speeds (to enable increasingly high bandwidths), and because it
cannot consume a lot of power (owing to limitations in the overall chip).
As can be seen in the portion of the design specification displayed in table
2-1, the specification conveys a set of critical requirements that need to be met:
Summary
This chapter has introduced the concept of the specification, a critical and
historically underserved part of a modern chip's design. Informal (in natural
languages) and formal (can be processed by intelligent software) specifications
have been covered, both of which are necessary.
Design Sub-flow
The system-level design flow is a subset of the overall design flow and
sits at the top of the design chain, as can be imagined. Figure 3-1 depicts a
simplified description of a typical system-level design flow. The input to this flow
is the design specification. The output is a register-transfer-level (RTL)/logic-
level description of the design. (For simplicity, assume that this flow includes
only digital design components. The case of an analog or mixed-signal design flow
at the system level will be treated separately; see chapter 5.)
Fig. 3-2. Design of an SoC through the use of various system-level models
(3.1)
Most modern chips connect large cores via an interconnection structure called
a bus. Buses are the shared highways for on-chip (or off-chip, which is beyond
the scope of this book) communications signals, with limited capacity. Because
of its limited capacity and the target core's limited attention span, when a core
(e.g., a microprocessor core) wants to talk to another core (e.g., a memory core,
for writing on it or reading from it), it may need to apply for access to that bus
beforehand. If the bus is not being accessed by another core to use that memory,
then the bus can be used to make the connection and transfer the data.
Buses can be categorized into blocking and nonblocking. A blocking bus is
used in complete bursts of data. Once the burst of data is transmitted, the bus is
released and can be used by a different core or set of cores. Nonblocking buses
are used on a cycle-by-cycle basis; that is, for every few clock ticks, some data
may be transferred, but others may be able to access the bus in between. Here is
a simplified example of blocking bus operation in SystemC:
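A minimal sketch along those lines, with invented module and signal names (this is illustrative only, not the book's listing), models the bus as a lock that one master core holds for the duration of a whole burst:

// Illustrative SystemC sketch of a blocking bus: one master locks the bus,
// transfers a complete burst, and only then releases it for other cores.
#include <systemc.h>

struct simple_bus : sc_module {
    sc_mutex bus_lock;               // only one master may hold the bus at a time
    int memory[256];                 // stand-in for a memory core behind the bus

    // Blocking burst write: the bus is held until the entire burst completes.
    void burst_write(int addr, const int* data, int len) {
        bus_lock.lock();             // apply for access; suspend if the bus is busy
        for (int i = 0; i < len; ++i) {
            memory[addr + i] = data[i];
            wait(10, SC_NS);         // one (assumed) bus cycle per word
        }
        bus_lock.unlock();           // release the bus for other cores
    }

    SC_CTOR(simple_bus) {}
};

struct cpu_core : sc_module {
    simple_bus* bus;                 // pointer to the shared bus

    void run() {
        int burst[4] = {1, 2, 3, 4};
        bus->burst_write(0x10, burst, 4);   // the whole burst goes out at once
    }

    SC_CTOR(cpu_core) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    simple_bus bus("bus");
    cpu_core cpu("cpu");
    cpu.bus = &bus;
    sc_start();                      // run until all activity completes
    return 0;
}

A nonblocking bus would instead release the lock after every word (or every few cycles), letting other cores interleave their transfers.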
The conventional dual in-line package (DIP), which features pins on the
left and right sides of the chip
Ceramic pin grid array (CPGA), which features pins on a grid and uses
ceramic materials
V = IR (3.2)
(3.3)
(3.4)
As figure 3-7 indicates, there are transmitter link circuits and receiver link
circuits. Transmitter circuits, implemented as cores within the chip that transmits
the data, send information over the channel; that is, the medium where
communication happens in the hardware system. A channel can be a copper
wire, a fiber-optic wire, or a computer board, for example. Because channels,
chip packages, and connectors are not perfect transmission media, the signal
transmitted becomes distorted in the frequency and time domain as it moves out
of the chip through the package, across the channel, and then into the receiver
chip through its own package. As a result, the signal that the receiver obtains is
quite different from the one sent, as can be seen at the top of figure 3-7.
This distortion effect, combined with the very high speeds at which the signal
travels, presents a difficult design task. Note that the receiver circuit needs to perform
complex signal recovery functions: it first uses analog operations to receive the
signal (the world is always analog before it enters a digital chip), then converts it
to digital form, and then performs complex DSP operations to accurately recover
the information initially sent on the other side. Designing these transmitter and
receiver circuits or cores requires accurate modeling of the entire system that
produces these distortion effects, and a similarly accurate approach to model the
speed, power consumption, area, and overall error performance of the system.
Enter the custom modeling system provided in figure 3-8.
The system designer (at left in fig. 3-8) operates a design environment to
perform estimations on
What kind of BER performance is possible (i.e., how many times per
second we would receive a signal and misinterpret it, e.g., decode it
as 0 when it is 1), which, for example, can be modeled in the design
environment as
(3.5)
How much chip area it will take to achieve such performance and power
consumption. Similar to power consumption, in a first approximation, the
total area can be computed as the sum of the areas of all blocks, plus the
estimated area for the interconnections between these blocks, as sketched below.
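As a purely illustrative way of writing that first-order estimate (the symbols here are invented, not taken from the book), the chip area can be sketched as

Area(chip) ≈ Area(block 1) + Area(block 2) + ... + Area(block N) + Area(interconnect),

where each block area comes from the prototype floorplan and the interconnect term accounts for the wiring between the blocks.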
We will return to this topic later (see chap. 5), when the detailed version of these
layouts is generated, with special emphasis on the generation of the clock and
voltage supply network.
To summarize, chip planning allows a design team to establish feasibility of
the chip's quantitative requirements by generating a rough sketch of the design,
emulating the back-of-the-envelope approach that leads to a set of numbers, plus
a prototype that provides background for these numbers and an idea of what
the design looks like. If these numbers are completely out of any reasonable
boundaries (e.g., area is 10 times the maximum established requirement), then a
redesign or replanning is necessary at this stage, when the cost of fixing mistakes
is manageable. The same situation later on can cost many times more in schedule,
cost, and revenue misses.
A graphical description of a basic chip design estimation process is provided
in figure 3-9. As depicted in figure 3-9, there are several key inputs to this
estimation process: a high-level block structure, technology parameters, and
optimization criteria.
Fig. 3-9. A graphical depiction and example of the chip design prototyping process
Fig. 3-10. Advanced chip-planning process, showing the origin of each input
The outputs provided can include the layout or floorplan, power consumption,
yield/manufacturability, cost per chip, possible chip packages, chip area,
maximum frequency, maximum performance, and a logic/functional description
resulting from a rough synthesis step, as described previously in this chapter.
RTL/Logic-Level Design: From
Art to Science
Once system-level design has been executed, it is time for RTL/logic-level
design. In general, this design phase is much more automated than system-level
design and details the digital portion of a hardware design, including its function
and its structure. RTL/logic-level design is done on all major digital blocks in a
design and can be defined as follows:
There are two reasons for describing these two levels in a single chapter:
First, both currently fall under a single, increasingly merged level of abstraction;
that is, a designer can write code to describe registers in the same file (e.g., where
Fig. 4-1. The tasks that need to be pursued during RTL/logic-level design
The logic design flow is depicted in figure 4-1, including the most important
tasks that need to be pursued during this phase of design. As figure 4-1 indicates,
during logic design, engineers have to pursue the following tasks:
Logic simulation and verification. For both the synthesized and the
raw RTL/logic-level description, it is necessary to perform a number
of checks that provide the engineering team with practical assurance
that the design can be manufactured as indicated in the specification.
Because of the detailed level of abstraction, there are a large number
of complex simulations and verifications that need to be performed.
Items that need to be verified include (but are not limited to) power
consumption, performance, functionality, testability, and correctness.
In the following sections, each of these steps and their subtasks are described in
more detail.
Design Entry
During design entry, designers enter the detailed description of a digital
block by using a specific language that allows them to mix and to interconnect
various types of RTL and logic-level blocks and, optionally, certain low-level
behavioral descriptions.
Question. Logic design is practically always based on writing code in
hardware programming languages, with VHDL and Verilog being by far the
most popular. Why is a coding language (and not a graphical schematic tool) the
correct and most popular approach?
Coding languages
Logic design is always conducted with the support of coding languages
specifically developed for hardware design. There are several good reasons to
use coding languages. The number of logic gates and blocks in a design, even
in a partial digital block, is typically extremely large, from 50,000 to millions of
gates. An effective way to enter, and more important to document, this mass
of objects is imperative.
Coding languages, well proven in software development environments, can
handle very large numbers of objects in an organized manner, enabling engineers
to quickly and effectively enter and understand designs in a team environment.
Hundreds, or even thousands, of engineers can work to develop complex designs
taking thousands, or possibly millions, of lines of code. For this reason, logic
design has evolved into an activity largely based on writing and debugging large
amounts of code in special-purpose languages, such as Verilog and VHDL. Note
that both languages can be used for logic blocks in both custom chips and FPGAs.
VHDL. This language was originally created for the U.S. Department
of Defense to document structure and behavior of the chips included
in the hardware bought by the department (to forgo large manuals).
Soon simulator software was developed that could parse the language,
and next logic synthesis tools were created. In accordance with the
original customer's requirement, to leverage past experience, VHDL was
loosely based on the existing Ada software-programming language (not
a hardware language) and as such is a strongly typed language. Being
a hardware description language, VHDL can describe the parallelism
inherent in hardware and includes a fundamental set of Boolean
operators (NAND, NOR, NOT, etc.). The language is an IEEE standard
and allows designers to include numbers (integer and real), logical values
(1s and 0s), characters, arrays of bits and characters, and time units.
Behavior statements can be included, and modern simulators can simulate
the execution of all those statements in parallel, just as the hardware
would do. If the described design blocks are composed of synthesizable
VHDL, logic synthesis tools can be applied to synthesize the design.
Combinational blocks
Combinational blocks perform logic operations. Complex logic operations
are formed by combining basic logic operations in various ways. Any systematic
operation on numbers, text/language, or anything else that can be expressed as a
sequence of 0s and 1s should be able to be expressed as a logic operation. Several
fundamental logic operations, which we call operators or, more commonly,
gates, with logic inputs and logic outputs, are depicted in figure 4-2.
(4.1)
(4.2)
(4.3)
(4.4)
OR/NOR. An OR operator returns 0 only when both inputs are also 0 and
1 otherwise. A NOR operator is the combination of a NOT operator with an OR
operator. The equations describing OR and NOR operators, respectively, are
as follows:
O = A OR B (4.5)
O = NOT(A OR B) = A NOR B (4.6)
(4.7)
(4.8)
Sequential blocks
Sequential blocks store data immediately upon a clock pulse (the clock signal
voltage goes up, then goes down, forming a pulse [or mountain peak]) or a
clock edge (the clock signal goes up or down). Sequential blocks can be classified
according to multiple dimensions, including
Size, in terms of the number of bits of information they can store: 1-bit
versus N-bit (registers)
Fig. 4-4. Verilog description of a comparator block (top) and a divider block (bottom)
Modules or blocks, small (logic gates) and large (adders, multipliers, and
processors)
Inputs/outputs for each of the blocks and for the overall design
Connections between each of the blocks, most frequently in the form of wires
Timing (arrival times for signals at specific points, wires, modules, etc.)
Logic Synthesis
Logic synthesis is the process by which a logic description is mapped to the
available fundamental logic blocks in a particular technology, called cells. These
are basic combinational and sequential blocks that belong to a library. The library
includes a fixed group of cells that provide commonly used functions and that can
be combined to form larger logic functions. Library cells are verified to work with
the manufacturing technology that will be used to fabricate the chip at hand. The
result of a logic synthesis execution is called a netlist, because it looks like a list
of gates interconnected by wires or nets.
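As a rough illustration of what a netlist holds (an invented C++ sketch, not any tool's actual data format), it can be pictured as a list of cell instances plus a list of nets, where each net records the cell pins it connects:

// Illustrative netlist data structures: cells (mapped library gates) and
// nets (the wires interconnecting their pins).
#include <cstdio>
#include <string>
#include <vector>

struct Pin  { std::string cell; std::string port; };   // e.g., {"U1", "Z"}
struct Cell { std::string name; std::string type; };   // e.g., {"U1", "NOR2"}
struct Net  { std::string name; std::vector<Pin> pins; };

struct Netlist {
    std::vector<Cell> cells;   // the mapped library gates
    std::vector<Net>  nets;    // the wires interconnecting them
};

int main() {
    Netlist n;
    n.cells = { {"U1", "NOR2"}, {"U2", "INV"} };
    n.nets  = { {"n1", { {"U1", "Z"}, {"U2", "A"} }} };  // U1 output drives U2 input
    std::printf("%zu cells, %zu nets\n", n.cells.size(), n.nets.size());
    return 0;
}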
Logic descriptions rarely match exactly the set of cells in a library, for several
reasons. Descriptions may be written at a high level to improve the productivity
of designers. Portions of the descriptions may be written in the form of behavioral
statements. Designers may not know exactly what cells the library contains.
Finally, it may be more effective to write a technology-independent description,
especially in cases in which the same chip is to be manufactured by multiple
suppliers over time. Logic synthesis provides this mapping.
There are many ways in which logic synthesis could map a logic description
into library gates. Some criteria need to be applied for what amounts to an
optimization process. The most popular criteria include the following:
Timing. The netlist is generated such that it is either the fastest possible
or guaranteed to meet a certain speed requirement. This tactic usually
implies that the resulting circuit consumes more power and takes more
area than it would otherwise.
Area. Gates are optimized for area; thus, the smallest possible size for
each individual gate that satisfies all other constraints is chosen. As a
result, chips coming off the fabrication line may be slower on
average yet consume less power than they would using a different tactic.
Figure 4-7 depicts graphically the logic synthesis process, including its various
inputs and outputs. As figure 4-7 indicates, one key input of the logic synthesis
A third critical input is (as you might guess) the library of gates and blocks
whose use the technology permits. This library will generally include all basic
logic functions, with a number of sizes for each of these functions. For example,
for the NOR function, the library could include 10 different gates, from NOR1
to NOR10, with the latter being a much larger cell than the former. As will be
discussed later in chapter 5, the size of a logic gate depends largely on the size
of its internal devices, its transistors, and the difficulty in interconnecting these
internal devices.
The main output (as you might again guess) is the set of mapped logic
gates, or netlist. The logic netlist is a critical piece of information in digital design.
Modern design tools allow the performance of myriad operations on a netlist,
to verify a large number of aspects of a design (e.g., timing analysis or power
consumption analysis) and to generate the final details of the design (e.g., the
layout on the basis of which the manufacturing masks are constructed) before
sending it to the manufacturing team.
The input logic description needs to include constructs that can indeed
be mapped to the technology library. For example, the library may not
have edge-triggered latches; thus, those should not be included explicitly
in the description.
Signal integrity. Will the chip be too sensitive to internal and external noise?
Other design checks. For example, are the names of pins and blocks
correct?
The following sections cover these extremely important tasks that need to be
performed before generating the final block and chip layouts.
Logic Simulation
In the process of logic simulation, designers figure out whether a created
logic block performs the functions it is supposed to. Figure 4-8 depicts the logic
simulation process.
Question. As figure 4-8 indicates, logic simulation can take the same
presynthesis logic description as an input. Why is that enough? Should it not take
the synthesized netlist?
As figure 4-9 indicates, the designer first pursues design entry; that is, the designer uses
an editor tool to enter a logic description in the form of Verilog code, plus the set
of input vectors or testbench, to enable verification of functionality. As with most
logic simulators, the code is first compiled into a software object that can be more
effectively simulated by the tool. (Verilog is text, representing certain structures.
Compilation converts those structures into data types that can be
directly accessed by the simulator code.)
The simulator itself is then run. As a result, a set of waveforms can be
displayed by a viewer tool that shows logic values as a function of time for every
Timing verification. The goal is to verify that the logic circuit is as fast
as required.
Power analysis. The goal is to verify that the logic circuit is as energy
efficient as required. A simulation is often used here as well.
Test synthesis. Add logic circuits (combinational and sequential), if
necessary, to improve testability. This is not really a
verification step. It is included here because, although in theory it may
be run at the same time as logic synthesis, it is often not.
Test verification. The goal is to verify that the logic circuit is easily
testable (i.e., to verify testability).
As in any other logic-related design task, timing analysis takes a set of inputs
including the actual description of the logic, which was entered during logic design
entry. While this input may have some timing annotations, timing information is
mostly contained in other files or inputs to timing analysis.
Specifically, timing analysis takes a critical input, namely, timing constraints.
The most important type of constraint in modern digital design for decades
(recently, other constraints, such as power constraints, have moved to the
forefront), timing constraints are intended to ensure that the entire logic block
enables the chip to go as fast as the initial performance specification indicates,
while leaving enough room for manufacturing imperfections and uncertainties.
Timing constraints often come with boundary conditions called arrival
times. A timing constraint typically refers to the latest and earliest possible
time a signal needs to arrive at a storage device (i.e., a latch) so that the signal
is properly stored when the next clock cycle opens up that latch temporarily.
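As an illustration only (the notation here is not taken from the book), a setup-type constraint at a latch data pin can be written roughly as

arrival time ≤ clock period − setup time

and a hold-type constraint as

arrival time ≥ hold time,

where the setup and hold times are the small margins the latch needs just before and just after the capturing clock edge. Timing analysis checks inequalities of this kind at every storage device in the block.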
The key output that comes from timing analysis comprises timing violations.
For each timing constraint, the timing analysis output indicates whether the
constraint is satisfied; otherwise, it indicates how that constraint was violated
(e.g., was the signal too early? was it too late?).
Example. Figure 4-11 depicts a timing analysis applied to a simple logic
circuit in two cases: with no timing violations and with one timing violation.
O = (I1 × I1) + I2 (4.9)
This function, however, is executed sequentially in two clock cycles. First, the
square of I1 is computed and stored in a register (call it R) on the first clock cycle:
R = I1 × I1 (4.10)
Then, on the second clock cycle, as the second input comes in, the result is added
to the second input, I2, and is stored in the output register, O:
O = R + I2 (4.11)
In the example, when the timing and functionality of the logic block are
correct, for inputs I1 = 8 and I2 = 4, the output produces O = 68 (i.e., 8 times
8 plus 4) two clock cycles later.
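To make the two-cycle behavior concrete, the following small C++ sketch (purely illustrative, not the book's code; the variable names mirror I1, I2, R, and O above) mimics what each clock cycle does:

#include <cstdio>

// Illustrative model of the two-cycle datapath: cycle 1 captures I1 * I1 in a
// register R; cycle 2 adds I2 and captures the result in the output register O.
int main() {
    int I1 = 8, I2 = 4;
    int R = 0, O = 0;

    R = I1 * I1;                 // clock cycle 1: square I1 and store it in R (64)
    O = R + I2;                  // clock cycle 2: add I2 and store the result in O (68)

    std::printf("O = %d\n", O);  // prints O = 68, two cycles after I1 arrived
    return 0;
}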
As figure 4-12 shows, the designer pulls up the output file with a text editor to
view the output information. The designer finds, among other items, that the first
output bit in the output register, O(1), features a constraint violation, specifically
a setup violation. The first output has arrived at the latch pin 5 picoseconds
(5 × 10⁻¹² seconds, or 5 trillionths of a second) too late.
The designer then goes back to the logic block description and finds that
there is a logic gate in the path that leads to the violation (i.e., the chain of
logic gates that ends in such a violating pin) that could be made a little larger,
potentially fixing the violation at the cost of area and power consumption. Thus,
What about voltage? The higher the voltage applied on a digital logic gate is
(as we will see in chap. 5), the faster this gate may run. The voltage supply for a
chip has a certain level of uncertainty, owing to the various sources of variation
of this voltage:
As it travels from its power source through its wiring into the chip, then
through internal wires to each of the logic blocks, and, in turn, to each
of the logic gates, the signal gets distorted in various ways (this will be
discussed later).
Just like timing analysis, before design automation came of age, power
analysis was originally pursued by hand, literally by examining a complex
diagram of logic functions and adding up the estimated amount of power
each function would consume. Today's power analysis (determining the
amount of power consumed) and optimization (finding ways to reduce power
to satisfy unmet power requirements) is becoming a complex, difficult process,
especially when combined with the need to meet difficult timing constraints
(in many cases, timing and power tighten the rope in opposite directions);
yet, much automation is included, at least in computing the amount of
power consumed. Even in that regard, however, the recent explosion of
static, or leakage, power (that is, the power consumed when the chip is not
performing any useful work) is producing great problems in chip design,
not only because it is difficult to reduce but also because it is very hard to
estimate accurately. Figure 4-13 depicts the power analysis and optimization
process graphically.
(4.12)
(4.13)
(4.14)
PD = C × V² × f (4.15)
where f is the overall frequency of the chip, which in a first approximation is the
inverse of the period of its fastest clock cycle, T. In other words, the faster a chip
goes, the higher its voltage, and the larger its equivalent capacitance,
the more dynamic power it will consume.
Still, a more precise yet simple equation is possible. Not all circuits in a chip
are switching constantly between 0 and 1. Otherwise, we would have a clock
signal in each wire. As a result, the percentage of time that a circuit is switching
fully from 0 to 1, and vice versa, needs to be added to the equation, as a multiplier
to the formula proposed in equation (4.15):
PD = α × C × V² × f (4.16)
where α is the average percentage of time that the circuit is switching fully
between 0 and 1, and vice versa.
This is a very powerful analytical model. Modern power-saving techniques
are very strongly based on leveraging this formula (eq. [4.16]). First, clock-gating
techniques focus on turning on the clock on a chip's circuits only when needed.
When a block or a portion of a chip is not needed for a certain time, turning
its clock off is equivalent to reducing the parameter f to 0. Second, voltage
scaling is another very powerful technique, based on turning down the voltage
of a specific block when that block allows (i.e., without losing too much speed
and becoming nonfunctional). This technique requires voltage controllability
and granularity; that is, the ability to control the voltage (as opposed to its
being fixed by a constant external source) and the ability to control each block
independently, respectively. A block can also be turned completely off by turning
down its voltage supply all the way to 0. In any case, the parameter we are
playing with in equation (4.16) is V.
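As a purely illustrative set of numbers (they are not from the book), suppose a block has α = 0.1, a total switching capacitance C of 1 nanofarad, V = 1 volt, and f = 1 gigahertz. Equation (4.16) then gives PD = 0.1 × 1 nF × (1 V)² × 1 GHz = 100 milliwatts of dynamic power. Gating the clock off drives f, and therefore this whole term, to 0 for that block; scaling the voltage down to 0.5 volt, if the block still meets timing, cuts the same term to roughly 25 milliwatts, because V enters the formula squared.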
Finally, there is capacitance, C. There are numerous ways to reduce (and,
unfortunately, increase) capacitance in a chip. Wires have a capacitance whose
value goes up as their width and length increase, everything else being equal. Wires
P = PS + PD = V × (IS + ID) (4.17)
where PS and IS denote static power and current, respectively, and PD and ID
denote dynamic power and current, respectively.
It is not easy to assign a single power number to each of these parameters. In
other words, power consumption depends on a number of factors, including
Figure 4-13 depicts graphically the key aspects involved in pursuing logic
power analysis and verification. As figure 4-13 shows, the first input is, as usual,
the RTL/logic-level description of our design. This is the golden input of the
design at this abstraction level (i.e., the key input specification that we start from,
that is being implemented, and that will not be changed at this point). For the most
accurate results, logic synthesis needs to be performed beforehand. Otherwise,
the design flow generally needs to perform a default synthesis or make strong
assumptions to get an initial estimate, and a detailed synthesis is done later on
(including a careful synthesis, with detailed technology information from each
logic gate and possibly wiring information) for an accurate power estimation.
A second crucial input comprises the power constraints. Whether they are
implicit or explicit, constraints will guide the process during optimization. At
minimum, they need to be checked when power analysis produces a result. For
example, see the "OK?" box in figure 4-13. The power constraints for a chip can
be readily obtained; constraints for each logic block, however, are more difficult
to obtain. From the overall chip power budget, which block should take more,
and which block should take less? Designers, managers, and their tools need to
support power budgeting in a systematic and effective manner. In this sense, it
is similar to the problem of timing budgeting. Once block-level power constraints
have been obtained, each power analysis run can be checked against them.
The third key input, which is also featured in timing analysis, comprises
input patterns and arrival times for each signal. As discussed earlier, the amount
of power depends on the inputs a circuit has to process. Generally, the more (and
faster) these inputs change, the more power the logic block is likely to consume;
specifically, more dynamic power. While latches may be switching continuously
if the clock is never turned off, the rest of the logic will usually switch depending
on the data on which it operates.
The main output of power analysis and verification is an estimation of the
power consumed by the block at hand. Note that power is statistical. As a result,
the most common output includes at least three sets of numbers: typical, worst
case, and best case. In other words, average, high, and low values, respectively, are
provided. The explanation is simple. As with timing analysis, the manufacturing
process has imperfections and hard-to-predict variations that have a significant
impact on power. As a result, logic gates could be faster than expected or slower
than expected. When gates are fast, they tend to consume more power, and
vice versa. Faster gates, as can be recalled from timing analysis, represent the
best case from the timing standpoint, corresponding to the highest power
consumption number (11 milliwatts in fig. 4-13). Conversely, the worst case
actually corresponds to the smallest power consumption number (9 milliwatts).
As figure 4-14 indicates, the designer starts, as usual, with the RTL/logic-
level description of the design. Once the designer is content with a specific
version of that description (e.g., timing analysis has been successfully run), the
designer will want to verify that the power consumption constraints are also met.
Alternatively, if those constraints are not available, the designer may want to at
least verify whether the design is power efficient and whether it is possible to
make it more power efficient.
To pursue power verification, as previously explained, the designer runs a
power analysis tool. Running this tool may entail compiling the logic description
(in Verilog code in this example) so that it is mapped to the technology
library; thus, all technology information is made available for a more precise
power estimation.
The result of running the tool is, in this case, an output file that can
be viewed with a text editor, as usual. Sophisticated tools exist that provide
a graphical depiction of the power analysis result, perhaps using colors to
indicate which parts of the block are hotter and which ones are cooler.
More frequently, though, text output that can be understood and analyzed in
detail is provided.
Unfortunately, the average-case power consumption is 14 milliwatts for this
particular logic block. Since the power constraint for average case was set to
12 milliwatts, the constraint is clearly violated. The designer needs to do
something about this issue.
The designer decides to look at the Verilog code again with an editor. He
or she does so and finds that the number of buffers is good. (Recall that a buffer
has no logic function; it is put in place to clean up and strengthen the signal
Voltage. The higher the voltage applied on a digital logic gate is, the
faster this gate will run and the higher will be the power consumed. As
previously described, voltage supply has a level of uncertainty, owing to
sources of variation. In addition to manufacturing-induced variations, the
voltage signal becomes distorted as it travels from its power source into
the chip and through internal wires to each of the logic blocks and to
logic gates.
Signal Integrity
Timing and power consumption, although perhaps the most important
performance-related parameters in the design, nevertheless are not the only
parameters for which to perform verification. Signal integrity is a very important
issue in today's chips, as voltage levels keep declining and addressing noise issues
becomes imperative. The key word here is noise.
Definition. Signal integrity verification is the design task aimed at verifying
and addressing noise issues in electronic circuit blocks.
What is noise? From the perspective of a chip designer, noise is the dark magic
that somehow makes voltages around the chip vary dynamically and unexpectedly
as time goes by. The primary signal integrity issues are cross talk and supply voltage
noise, but other key phenomena include electromigration and substrate coupling.
Cross talk
Imagine that a signal A, which should remain 1 until the next clock cycle,
suddenly bounces down so much that it practically becomes a 0. Why? For
example, a nearby signal B changes from 0 to 1, and as a result, signal A is
affected by a phenomenon called coupling or cross talk (figure 4-15). Coupling
happens because two signals that are close to each other, connected by a certain
wire or material, may share electrical charge or current over time and thus
influence each other. Typically, one signal will act as the aggressor and the other
as the victim.
(4.19)
Therefore, the voltage gets divided in an inversely proportional relationship with
capacitances (i.e., larger capacitances need smaller voltages to store the same
charge, given that Q = CV); thus, the voltage across the victim (assuming that it
starts from no voltage and no charge) would be expressed as
Vvictim = Vaggressor × C2 / (C1 + C2) (4.20)
In sum, the higher the capacitance between aggressor and victim is,
the stronger the coupling will be. If C2 approaches a very large number, then
equation (4.20) shows that the voltages are the same, i.e., that there is 100%
coupling!
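Reading equation (4.20) with illustrative labels, and taking C2 as the coupling capacitance between aggressor and victim and C1 as the victim's own capacitance to ground (an assumption made here for the sake of the example): with equal capacitances (C2 = C1), half of the aggressor's voltage swing appears on the victim, and as C2 grows much larger than C1, the ratio C2/(C1 + C2) approaches 1, which is the 100% coupling case noted above.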
Electromigration
Conductors in chips deteriorate primarily because of the current going
through them. As current passes through wires, actual material is transported
owing to the movement of the charges (which is precisely what electrical current
is about) in the wire. As the charges (electrons or ions) move through the wire,
they run into the atoms that really make up the wire metal. As charges run into
these atoms, their momentum gets transferred, especially when the density of
this current is high. In other words, metal wire atoms become displaced.
Electromigration can cause actual faults in a circuit. As atoms move, the wire
may become open, or else it may become connected to another wire, which is
referred to as a short. Electromigration becomes a more acute problem when
circuits, devices, and wires become smaller, as chip technologies become more
advanced. Why? Because it then takes fewer displaced atoms for a fault to occur.
Substrate coupling
Electrical signals going through wires and devices can couple with each
other via the silicon substrate. This phenomenon of substrate coupling noise has
characteristics similar to cross talk, but since it happens through the substrate
in every chip, it does not affect digital circuits as strongly as analog circuits. The
silicon substrate is a much longer path than the material between two adjacent
wires; furthermore, digital circuits are robust, since they only have two states,
0 and 1, between which they switch quickly at discrete clock cycles. However,
analog circuits, as shown in chapter 5, have continuous, infinite states or values
and are thus much more sensitive to noise, including substrate noise.
Analog circuits are increasingly included in the same silicon substrate (i.e., on
the same chip) as digital circuits. These digital circuits are switching at increasingly
faster rates and are becoming closer to one another and to the analog circuits. As
***
Signal integrity analysis, as you may have guessed by now, influences the
signals that determine speed and power consumption in a circuit. For this reason,
signal integrity analysis is most often integrated in modern designs with the
following design tasks that determine performance and power consumption:
Testing cost and testing time have become a huge portion of the overall cost
and time involved in producing a working chip. As a result, it is very important
to account for testing issues before the chip is actually builtthat is, when the
chip is designed. This task is hence called design for test (DFT), and it is closely
associated with a test plan, or strategy, to design the chip for optimal testability.
One might assume that testing implies directly checking that a chip performs
its function and does so at the speed and power consumption required. Although
this is essentially accurate, most of the testing effort has conventionally been
allocated to testing the function of the device. If one thinks about the complexity
of the function in any current chip, however, it is evident that the testing challenge
is an unsurmountable one.
Example. Consider a chip that provides encoding and decoding of high-
resolution (1,024 × 768) images. How long and how much would it cost to test
this functionality?
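Some rough arithmetic (not from the book) shows the scale of the problem: a 1,024 × 768 image has 786,432 pixels, so even if each pixel could take only two values, there would be 2^786,432 distinct input images, a number with more than 236,000 decimal digits. Applying every possible input is plainly out of the question.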
Clearly, it would be extremely difficult to test 100% of the functionality of a
complex circuit by checking all possible data and states with which it would need
to work. In our example, testing for functionality by checking for all possible
images that would need to be decoded and/or encoded would clearly be difficult.
In addition, if a failure were to occur, it would not necessarily be related to a
design issue. How do we know whether it is a design or a manufacturing issue,
while providing an effective testing method?
The answer is by use of fault models. A well-established theory of testing
shows how defects can be mathematically modeled as specific types of faults, and
these models can be leveraged to develop a clear, feasible, cost-effective test plan.
The most common defect or fault model is the stuck-at fault model. Under this
model, a defect is given by a certain node in the chip (i.e., a point touching one
Test synthesis
The goal of test synthesis is apparently simple: to make the chip (or circuit)
more testable. What exactly does this mean? Testing the chip is a complex process
that can vary widely, depending on the type of chip and its market application.
There are two key goals of test synthesis that will benefit any specific chip or
circuit block: test logic generation and test input generation.
Test logic generation. In this step, the designer, with the help of DFT
tools, generates extra logic that helps identify and/or diagnose faults. This task
is generally pursued in the early design stages. Key goals are to help feed test
vectors (data inputs) easily into the circuit and to help take out the results for
evaluation once the chip has run on these vectors.
The most common approach to test logic generation is scan-based DFT.
Scan-based DFT converts a certain number of the latches in a design into
scannable latches (defined and discussed earlier). By choosing scannable latches,
a sequential logic design is converted, for testing purposes, into something close
to a combinational circuit. With scannable latches, one can insert any input
vector desired into any of the latches in a very quick and efficient manner (i.e., by
scanning in the input data). Without scannable latches, one would have to insert
some data at the pins of the circuit, which would then propagate naturally, after a
number of clock cycles, to the latch where we want those data to be. The number
of cycles necessary for just that one test case could be thousands or millions,
which drives testing cost and time up significantly, potentially to thousands
of dollars per chip. Therefore, scan logic test generation makes a circuit more
testable. In addition to scan latches, other logic may need to be added to improve
the testability of the circuit.
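As a rough illustration of the scan idea (an invented C++ sketch, not the book's material or any tool's format), the latches in test mode behave like one long shift register: a pattern is shifted in bit by bit, the logic is exercised, and the captured response is shifted back out:

#include <cstdio>
#include <vector>

// Illustrative scan-chain model: in test mode the scannable latches form one
// long shift register, so any pattern can be loaded or unloaded serially.
struct ScanChain {
    std::vector<int> latches;                    // current latch contents

    explicit ScanChain(int n) : latches(n, 0) {}

    // Shift one bit in at the head; the bit at the tail falls out and is returned.
    int shift(int scan_in) {
        int scan_out = latches.back();
        for (int i = (int)latches.size() - 1; i > 0; --i)
            latches[i] = latches[i - 1];
        latches[0] = scan_in;
        return scan_out;
    }
};

int main() {
    ScanChain chain(4);
    int pattern[4] = {1, 0, 1, 1};               // test vector to load

    for (int bit : pattern) chain.shift(bit);    // scan the pattern in (4 clocks)

    // ... here the chip would take one functional clock, so the combinational
    // logic computes a response that is captured back into the latches ...

    for (int i = 0; i < 4; ++i)                  // scan the response out (4 clocks)
        std::printf("%d", chain.shift(0));
    std::printf("\n");
    return 0;
}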
As a result, DFT tools that generate these test data are called automated
test pattern generation (ATPG) tools. ATPG is a well-established field of work,
and ATPG tools are automated and integrated with other logic synthesis and
verification tools.
Figure 4-16 depicts a simplified description of test synthesis. In this
description, the designer uses a test synthesis tool that takes two types of inputs:
The logic itself
Input parameters and library to control the synthesis
In figure 4-16, the key parameter is the targeted test coverage.
Test verification
As in other design tasks, for every synthesis step (i.e., generate something),
there is at least one verification step (i.e., verify that what was generated actually
meets specifications). In testing, there is a verification step, in addition to the
other functional, timing, power, and signal integrity analysis steps. In this case,
the designer verifies that the circuit is testable, according to a specification. The
word testable may have different meanings, depending on the specification. A
common specification is to be near 100% testable; that is, for each combinational
circuit to have near 100% stuck-at fault coverage. Such a design task is referred
to as testability analysis.
Definition. Testability analysis comprises the set of design steps focused on
analyzing the ease of manufacturing tests for a given logic design.
Testability analysis, although a verification task, may be performed before
test synthesis, even in early design stages; that is, it may be used to help with test
synthesis. A design with high testability will make the test synthesis task easier
and will result in a test synthesis output that is more effective. Two key concepts
are used when evaluating testability:
Controllability. Testing for a certain fault requires reaching that fault and
trying to set it to its correct value. For example, if we are testing for a
certain wire to be stuck at 1, we need to be able to insert a test vector,
possibly using the scan test infrastructure, that attempts to position a 1 at
that node. Measuring testability requires measuring the ease with which
this is possible.
Pre-PD Checking
There is (at least) one more task that needs to be completed before commencing
the conversion of those many logic gates into a complete, interconnected layout
of polygons. Design teams need to ensure that certain additional constraints
are met by the circuit under design. While these constraints are not part of the
original specification, and thus are not compulsory, satisfying them facilitates
the subsequent lower-level design stages. These constraints are better named
guidelines, except that they are actually enforced by companies; anything that
reduces the risk and effort involved in completing a design is always welcome.
Pre-PD checks are performed to verify that these constraints are met and to
provide feedback to designers as to how to satisfy them better.
Pre-PD checks can vary substantially depending on several aspects of the
design project:
Company
Technology
Chip
Impact of Manufacturing on
Logic Design
Manufacturing characteristics have a direct impact on the various parameters
that are examined during logic design, including timing, power, and area. At the
RTL/logic level, these parameters are estimated with a much higher accuracy
than at the system level of abstraction, based on models that are obtained from
manufacturing abstractions.
Figure 4-17 depicts a summary of the influence of manufacturing on RTL/
logic synthesis tasks. As figure 4-17 depicts, practically every part of logic design
is affected, increasingly so, by manufacturing characteristics.
During logic design entry, the types of logic blocks and/or gates that can
be entered are commonly restricted. For behavioral code, design entry does not
depend as much on the technology library, since the designer describes behavior
and the logic synthesis tool will take care of picking gates and larger blocks
from the library as appropriate. However, for the entry of structural code, the
designer may be picking a specific logic block or gate directly from the library,
sometimes even picking the size of such a block or gate. For each manufacturing
Circuit Design
Design entry
The entry of a circuit design is usually done with a schematic entry tool.
A schematic is a graphical description of a circuit, including symbols for each
of its devices (transistors for digital circuits, as well as other devices for analog
circuits) and for the wires interconnecting these devices (typically described as
simple lines).
Entering a schematic facilitates a number of critical tasks that are undertaken
subsequently, including the following:
Getting the circuit ready for simulation, since a complete schematic can
then be used as input to generate a netlist
I = K × (VGS − VT)² (5.1)
In equation (5.1), K is a variable that depends on the size of the transistor and
fundamental device parameters such as the mobility of electrons:
(5.2)
where W is the width of the transistor (orthogonal to the figure), L is the transistor
length (the line between the drain and source regions), and μ is the electron
mobility (i.e., how mobile the electrons are that form the current in the channel).
What do equations (5.1) and (5.2) really mean? The meaning is fourfold:
As the voltage between gate and source increases, the current starts
growing very quickly. Clearly, this voltage really turns on the transistor
device and makes it a short (i.e., an almost zero-resistance device). This is
truly the electrical effect that forms the basis of digital logic: a high voltage
(a 1) makes the drain-source connection a short (a 0), and vice versa.
The threshold voltage determines the point at which the device gets
turned on and how fast. Because it determines the current per equation
(5.1), it also determines how much power is consumed.
Similarly, on the right side of figure 5-4, a NOR gate circuit is depicted.
When either input A or input B is on, either transistor N1 or transistor N2 is on,
while at least one of transistors P1 or P2 will be off. As a result, the output voltage
VOUT is grounded to 0, because one nMOS transistor being on will be enough
to pull the output down and because at least one of the pMOS transistors is off,
thereby cutting off the path to the supply on top of the schematic. Only when
both inputs are 0 will the output be the supply voltage (1). Why? Because both
pMOS transistors will then be on; thus, a short path between the output and the
upper supply voltage exists.
Creating latches is a little more complex. Still, the same basic principles
apply. Figure 5-5 depicts an edge-triggered latch.
As the schematic in figure 5-5 shows, there are two key inputs, a clock input
clk and a data input VIN, plus an output VOUT. The key to understanding the latch's
behavior lies in the clk input. Note the series of inverters at the input of the latch.
These are intended to form a delay between that clock input and the other clock
inputs in the latch. When the clock input rises from 0 to 1, it will immediately
turn on the other clock input transistors, except the transistor at the end of the
inverter chain. This transistor will be off until the delay has elapsed. Until that
time, exactly during the delay time, the left vertical chain of nMOS transistors in
stage 1 will be on, and VIN will be transmitted inverted to stage 2. During this
delay time, the clock input and the bottom transistor in stage 2 will both be on;
thus, transmission will occur all the way to the output.
After the delay has elapsed, the output of the inverter chain is low. As a result,
stage 2 is isolated. Why? Because stage 2's input is set to 1. The top transistor and
the bottom transistor are both off. Thus, the output value is kept (typically, through
an additional couple of inverters connected in a loop, plus perhaps a buffer).
Styles
The style of a digital circuit at the transistor level of abstraction is the type of
circuit template that is used for such a circuit. When designing an entire chip, one
to very few circuit design styles are used. A circuit style implies a set of choices
made when creating these circuits and is fixed for a given cell circuit library.
Common choices that determine style include the following (default indicates
the most common choices, exemplified in the type of circuits described in the
previous section):
How to clock. For example, while static logic circuits have clocks only
when they are sequential (latches), all dynamic logic circuits have a clock
signal input (so that charges and values can be properly stored on clock cycles).
Simulation
Once a circuit has been entered using an entry tool, the next action taken
is simulation. Circuit simulation is perhaps the oldest circuit design task with
reasonable automation. The goal of circuit simulation is simple: to ensure the
functionality and various performance parameters of the circuit, taking into
account the electrical characteristics of the technology being used to implement
the circuit.
Is the functionality correct? That is, does this circuit actually do what it is
supposed to do?
Input signals and parameters. First, we need to know what signals are
entering this circuit and at what times. In the case of digital circuits, most
simulations are time based; that is, what we are attempting to do is
simulate the circuit over time. Thus, it is important to know when a signal
Example. Consider a NOR gate circuit that needs to be verified. Let us briefly
go over the steps in simulating this circuit.
The function that needs to be verified is VOUT = A NOR B. In other words,
the circuit needs to perform this function while satisfying its other requirements,
such as timing and power consumption.
For verification of timing requirements, the circuit is simulated under key
environmental conditions (temperature, voltage, and process assumption) that
show if its speed is as required. Consider the case in which the minimum circuit
speed requirements are less than 100 picoseconds of delay (i.e., 10⁻¹² seconds
As figure 5-7 shows, the latch includes two stacks of transistors. Each stack
is connected to the clock signals, either its positive version or its negative version
(as denoted with an overbar). The data input enters through the left stack, and the
data output comes out of the right stack.
When the clock signal rises from 0 to 1, the left stack is on, and the right
stack is off. As a result, the data are transferred into the latch by going through
the left transistor stack.
On the same clock cycle, when the clock falls from 1 to 0, the left stack is off,
and the right stack is on. The data are then stored permanently (or at least until
the next clock cycle) and circulate through the inverters (the shapes formed by a
triangle and a bubble). Storage is ensured through positive feedback. The inverters
propagate the signal to the output, then back into the right stack of transistors.
Circuit verification
Once a circuit has been simulated and entered using an entry tool, the next
action taken is circuit verification. Although the word verification is used, in
this context it does not refer to the task of verifying the function, power, speed,
or area of a circuit against its specifications. Circuit verification entails the final
preparations before a circuit can progress to the layout phase (i.e., before layout
generation). Think of this step as a set of checks to make sure various items are
consistent and, thus, can be thought of as a best-practices or quality-assurance design task.
Circuit verification includes the following tasks:
Circuit checks. Various checks on the circuit are run, sometimes using
software scripts written by the design team (a minimal sketch of such a
checking script appears after this list). For example, does the circuit
have a power supply? Does it have a ground? Does it have both inputs
and outputs? Is any device (e.g., a special low-power transistor) used that,
even though it passed simulation, ultimately cannot be used in
this particular library and for this particular project (e.g., because it is too
expensive an option to select from the manufacturer)?
Pin assignments. Making sure that the pins of the circuit correspond
to other related documentation is more important than it might at first
appear to be. Pins are the entry and exit points for any digital circuit
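As promised above, here is a minimal Python sketch of the kind of checking script a design team might write. The netlist format, device names, and the list of allowed device types are invented for illustration; real scripts operate on the data formats of the team's own tools.

```python
# Illustrative sketch of simple circuit checks a design team script might run.
# The netlist format (nets, pins, devices) is a made-up example, not a real tool's.
netlist = {
    "nets": {"VDD", "GND", "A", "B", "OUT"},
    "pins": {"inputs": ["A", "B"], "outputs": ["OUT"]},
    "devices": [
        {"name": "MP1", "type": "pmos"},
        {"name": "MN1", "type": "nmos_low_vt"},  # special low-power/low-Vt device
    ],
}
allowed_device_types = {"pmos", "nmos"}  # project-specific restriction (assumed)

errors = []
if "VDD" not in netlist["nets"]:
    errors.append("Missing power supply net.")
if "GND" not in netlist["nets"]:
    errors.append("Missing ground net.")
if not netlist["pins"]["inputs"] or not netlist["pins"]["outputs"]:
    errors.append("Circuit must have both inputs and outputs.")
for dev in netlist["devices"]:
    if dev["type"] not in allowed_device_types:
        errors.append(f"Device {dev['name']} uses disallowed type {dev['type']}.")

print("\n".join(errors) if errors else "All circuit checks passed.")
```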
Nevertheless, the analog circuit design process does have some important
differences as compared with the digital cell design process. First, you will
notice that instead of layout synthesis the flow chart indicates layout entry. Also,
instead of layout characterization, the flow chart indicates layout extraction and
simulation. Otherwise, the chart looks strikingly similar to the one for digital
cell design.
These differences are due to two important peculiarities of analog design.
First, analog design is much less automated than digital design. While the debate
exists, there is overall consensus in the chip design community that analog
design is at least a decade behind digital design in terms of the level of tool
automation. As a result, few mainstream automated analog synthesis tools exist.
Furthermore, creating schematics and layouts for analog circuits is largely a
manual task, although a new crop of tools has recently made significant strides
toward automation. While creating layouts for digital cells is not entirely automated
either, it tends to be more automated than for analog circuits, and the generation
of the overall layout from assembled logic cells is also largely automated.
Second, analog design is typically not used to create a library of standardized
gates from which a synthesis tool will pick and combine to form a complex
high-level function. Again, this assembly function is primarily, although not
completely, manual, and each analog circuit is used once or only a few times in
the same chip. As a result, analog circuits are not characterized for the use of
higher-level synthesis and analysis tools in the way digital cells are.
(5.3)
(5.4)
Also, as with any other circuit, we need to enter input signals to stimulate
the circuit for the simulation. However, since analog circuits are not
restricted to two logical values per signal (0 or 1), the input signals may
need to be much more complex and may need to appear in various
places in the circuit to simulate for a number of complex situations. For
this reason, it is common to have stimuli-generating devices, such as
voltage and current sources that can provide various types of waveforms.
Finally, netlist generation is still needed to generate a data form that the
simulator can take. This tends to be more complex since we need to
ensure that we are using the allowed sets of devices, and these sets are
broader than for digital devices, as discussed earlier.
The third goal of schematic entry is to guide layout design. Circuit schematics can also
store very important information to guide the design of its layout. Indeed, in most
analog circuit design teams, the circuit designer and the layout designer may be
in very close touch or even be the same person, for a given circuit block. Why
is layout-guiding information so important if there is so much communication
between circuit and layout design? The answer is that analog layouts need to
be designed very carefully. Analog signals need to be precisely at certain levels
and do not necessarily saturate into simple 0 or 1 levels; consequently, their
values depend highly on the resistors, capacitors, and other parasitic devices that
result from the way the layout is drawn and that were not intentionally created by
the designer. As a result, the schematic may have a number of annotations about
the actual topology, sizes, number of fingers, orientation of each finger, and so
forth. Detailed layout instructions are sometimes even written directly on the
schematic. The performance of the circuit depends on it!
Example. Analog circuits are often differential; that is, they have two differential
inputs and two differential outputs, and the whole internal architecture is based
on two wires. (The differential concept was explained previously, under "Styles.")
Thus, a typical annotation would read something like "These two transistors need
to be laid out exactly symmetrically and vertically attached to each other."
The fourth goal of schematic entry is to pursue layout extraction/verification.
As discussed earlier, it is imperative to extract electrical information based on
the layout that was generated and then feed it back to the circuit schematic in
the form of an annotated schematic; that is, a schematic that includes all the
additional devices (most often, resistors and capacitors) derived from wires and
other items that were not included in the initial schematic. The other reason
why the schematic is important is that once the extraction has been done, we
(5.5)
V = ZI (5.6)
Note that in equation (5.6), every symbol now represents a complex number.
It turns out that any signal in most analog circuits can be expressed as a
weighted sum of a set of components, each of which represents the signal's
portion that runs at a specific frequency. For this reason, to capture magnitude
and phase/frequency characteristics, impedance is typically denoted as a
complex number, which results from moving from the time domain into the
frequency domain by use of mathematical transforms, such as the Fourier
transform. While the theory and mathematical background are beyond the
scope of this book, this concept is important in analog circuits, as currents and
voltages are often smooth and sinusoidal, typically centered around a concrete
set of frequencies.
As a complex number, impedance includes a real part and an imaginary part.
Another way to look at it is as a complex number with magnitude |Z| and phase φ:

Z = |Z| e^(jφ) (5.7)

For the basic passive devices, the impedances of a resistor R, an inductor L, and a capacitor C at angular frequency ω are

ZR = R (5.8)

ZL = jωL (5.9)

ZC = 1/(jωC) (5.10)
Therefore, as is already evident, with everything else being equal, the larger
an inductance or resistance device in series is, the larger its impedance will be.
Conversely, large capacitors in series can be of low impedance; that is, they will
let current flow easily through them. The opposite is true for devices in parallel.
Finally, like resistances, impedances add when connected in series (on the same
wire) and combine reciprocally when connected in parallel:

Zseries = Z1 + Z2 (5.11)

1/Zparallel = 1/Z1 + 1/Z2 (5.12)
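To make this complex-number arithmetic concrete, here is a minimal Python sketch using the built-in complex type. The component values and the 1 GHz frequency are arbitrary examples, not values taken from the text.

```python
import cmath
import math

# Impedances of basic devices at a given frequency, using Python complex numbers.
def z_resistor(r):
    return complex(r, 0.0)

def z_inductor(l, f):
    return 1j * 2 * math.pi * f * l

def z_capacitor(c, f):
    return 1 / (1j * 2 * math.pi * f * c)

def series(*zs):
    return sum(zs)               # impedances add in series

def parallel(*zs):
    return 1 / sum(1 / z for z in zs)   # reciprocals add in parallel

f = 1e9                                   # 1 GHz operating frequency (example value)
z_total = series(z_resistor(50.0),        # 50-ohm resistor
                 z_inductor(2e-9, f),     # 2 nH inductor
                 parallel(z_capacitor(1e-12, f), z_resistor(1e3)))

print(f"|Z| = {abs(z_total):.1f} ohms, "
      f"phase = {math.degrees(cmath.phase(z_total)):.1f} degrees")
```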
Larger input transistors may thus provide larger input impedances. Returning
to figure 5-9, we see that the larger an input transistor is, the larger will be its
input resistance and inductance associated with its input wire and pin, since both
are in series at the entry. However, since the transistor has effective capacitors in
parallel (remember that there is an oxide separating the gate and the substrate,
which can be assumed to be close to ground), the larger its size is, the larger will
be its capacitor and equivalent impedance.
Other requirements often exist, the most important of which is bandwidth.
Although the details of evaluating circuit bandwidth are beyond the scope
of this book, it can be understood from the preceding discussion that the
higher the frequency is, the higher will be the impedance to passing through
currents and voltages, in most cases. Therefore, typical amplifiers have good
amplification gains until a certain frequency, above which gain begins to taper
off greatly. This limits the range of working frequencies to a certain level,
which is the bandwidth. Thus, there is a trade-off: with improved gain and
bandwidth (i.e., the two most common performance requirements in analog
design) comes, as one might guess, increased power consumption (the eternal issue
in circuit design).
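As a rough numerical illustration of this gain roll-off, the following Python sketch assumes a simple single-pole amplifier model, an assumption made only for illustration and not a model given in the text, and prints the gain at several frequencies.

```python
import math

# Single-pole amplifier model (illustrative assumption):
# |A(f)| = A0 / sqrt(1 + (f / fc)^2), where fc is the corner (-3 dB) frequency.
A0 = 100.0      # low-frequency gain (40 dB), example value
fc = 1e8        # 100 MHz corner frequency, example value

def gain(f):
    return A0 / math.sqrt(1.0 + (f / fc) ** 2)

for f in (1e6, 1e7, 1e8, 1e9):
    print(f"f = {f:>12.0f} Hz  gain = {gain(f):6.1f}  ({20 * math.log10(gain(f)):5.1f} dB)")
# Above fc, gain drops roughly 20 dB per decade: the usable bandwidth is limited.
```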
The sizes, in terms of width (W) and length (L) of each of the transistors.
The current reference indication, which may also include its actual value
(four milliamperes, or four-thousandths of an ampere) and a reminder of
how that current was obtained (through a fixed voltage, of 0.5 volts, on
the bottom transistor).
To improve gain, transistors are set to have a high current run through them
and are made as large as power constraints allow, because gain is proportional
to their current and size.
Question. Why does amplifier gain depend on its current?
As can be seen in figure 5-9, the output voltage is
(5.13)
Thus, amplifier gain depends on the currents and how they change over time,
the sum of which is the bottom total current. Large currents will produce large signal
swings and, thus, large gain. Since current depends on size, based on equation
(5.1), you can thus deduce that transistor size matters. Again, the trade-off is power
consumption, which is why transistors in this case cannot be made infinitely large.
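A rough numerical sketch of this trade-off follows, assuming a simple square-law transistor model in which transconductance gm ≈ 2·ID/Vov and the small-signal gain of a resistively loaded stage is approximately gm·RL. The model and all numbers are illustrative placeholders, not values taken from figure 5-9.

```python
# Rough illustration: gain grows with bias current (square-law model assumption).
def stage_gain(i_d, v_ov=0.2, r_load=2_000.0):
    """Approximate gain of a resistively loaded stage: gm * RL, with gm = 2*ID/Vov."""
    gm = 2.0 * i_d / v_ov          # transconductance in siemens
    return gm * r_load

for i_d in (0.5e-3, 1e-3, 2e-3, 4e-3):    # bias currents in amperes
    power = i_d * 1.0                      # power at an assumed 1 V supply, in watts
    print(f"ID = {i_d*1e3:.1f} mA -> gain ~ {stage_gain(i_d):5.1f}, power ~ {power*1e3:.1f} mW")
# Gain rises with current, but so does power: the trade-off described above.
```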
Transient, AC, and DC analysis
In transient analysis, we can visualize and analyze various signals in the circuit
over time. This analysis is utilized in analog design to derive key items, such as
power consumption, and various kinds of noise, including Vdd-induced noise.
AC analysis focuses on frequency, that is, on the behavior of a circuit as
viewed from the frequency domain. The chief assumption in AC analysis is
that we are dealing with a number of sinusoidal signals, each of which has only
one frequency. While this is a big assumption, AC analysis is both sufficient
and extremely useful to understand how certain key parameters change as we
vary the frequency at which signals and the circuit need to work. As such, AC
analysis is the key to understanding requirements such as gain, bandwidth, and
overall stability of the circuit. (Certain analog circuits may enter an unstable state,
where signals may saturate owing to positive-feedback effects. Stability typically
needs to be analyzed over frequency, since instability happens only at certain
frequencies. Thus, we need to ensure that the circuit is designed to never reach
those frequencies, or it will not work.)
DC analysis, by contrast, focuses only on one frequency, the zero frequency.
DC analysis assumes that all signals in the circuit have a frequency of zero;
therefore, we are talking about straight, constant voltages and currents. What
we are really doing is doing a special type of AC analysis (at frequency = 0)
and consequently analyzing only those components of all signals in the circuit.
Why this abrupt decomposition into DC and AC analysis? For circuits that
always have constant values of voltages and currents, the answer is obvious;
for all others, the answer is biasing, whereby a device's characteristics depend
on the major currents and voltages that are applied to it. If we can assume
that the device is biased by a major, constant current or voltage, plus a set
of small, sinusoidal signals, then its main characteristics as a device can be
determined by that major current or voltage. For example, the main value
of the current that goes through a transistor, such as the input transistors
in our amplifier, determines the actual gain that we can obtain. Hence, key
Single-point simulation
Parametric simulation
Statistical simulation
Single-point simulation is the default, for which every input variable, device
size, voltage, temperature, process point, and so forth, is a constant number.
Only one simulation is executed, and the results are returned quickly.
In parametric simulation, we vary, or sweep, certain variables to obtain an
understanding of a parameter's behavior, either to find an optimal design or simply
to characterize it for the use of higher-level tools or the like. Frequently varied
parameters include the voltage supply, Vdd, and transistor sizes (to determine the
right size for each transistor).
In statistical simulation, we attempt to ascertain the statistical behavior of
the circuit. Unfortunately, circuit parameters of all kinds are not, in reality,
deterministic (i.e., not a concrete, certain number), but have some statistical
behavior; that is, for each possible value, there is a probability that that value
actually corresponds to what is manufactured. As the size of transistors and
features keeps decreasing, it becomes ever more difficult to manufacture them
with certainty that all will be sized exactly as desired (e.g., in terms of the width or
the length of a transistor). This trend has been making statistical simulation ever
more important. While this is also becoming important for digital circuits, it has
already been of utmost importance for analog circuits for a long time, for reasons
already given. Small variations of parameters in an analog circuit can completely
ruin its behavior.
Popular parameters to be varied in statistical simulations include the threshold
voltage of transistors, Vt, and the voltage supply, Vdd. Statistical simulations are
often pursued by two methodologies:
Assuming and computing a certain statistical model for each of the key
parameters to be varied
Doing a number of single-point simulations to cover as best as possible
the space of combinations of all parameters as they vary
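A minimal Python sketch of the second methodology follows: a Monte Carlo loop over normally distributed Vt and Vdd values feeding a toy delay model. The delay model, nominal values, and sigmas are illustrative assumptions, not data from any particular technology.

```python
import random
import statistics

# Toy stand-in for a real circuit simulation: delay grows as Vdd drops or Vt rises.
def delay_model(vdd, vt):
    return 50e-12 * (1.0 / vdd) * (1.0 + 2.0 * (vt - 0.3))

random.seed(0)
samples = []
for _ in range(1000):
    vdd = random.gauss(1.0, 0.03)    # supply voltage: mean 1.0 V, sigma 30 mV (assumed)
    vt = random.gauss(0.30, 0.02)    # threshold voltage: mean 0.30 V, sigma 20 mV (assumed)
    samples.append(delay_model(vdd, vt))

mean_ps = statistics.mean(samples) * 1e12
sigma_ps = statistics.stdev(samples) * 1e12
print(f"delay: mean = {mean_ps:.1f} ps, sigma = {sigma_ps:.1f} ps")
print(f"3-sigma worst case ~ {mean_ps + 3 * sigma_ps:.1f} ps")
```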
The input signals, or stimuli, and the conditions under which the circuit
needs to work (temperature, voltage supply, and process point). Of these
conditions, the input stimuli tend to differ the most from digital circuits;
analog circuits may require a fairly complex set of input signals over time
and/or frequency. These may be inserted directly into the circuit as signal
generators, or widgetsfor example, as a sinusoidal-voltage generator or
a fixed-current generator. In figure 5-10, note the sinusoidal signal that
happens to be one of the inputs (the left input) to the LNA differential
amplifier: Vin(+). This input is a sinusoidal signal (basically a single-frequency
signal) with an amplitude of 0.2 volts. In practice, there is a symmetric signal
mirroring this one, Vin(−), which is the other (right) input to the amplifier and
will be exactly opposite in phase (i.e., Vin(+) will be the exact same magnitude
as Vin(−), but it will be positive when Vin(−) is negative, and vice versa).
The device models (i.e., the mathematical models for each device,
including transistors, resistors, capacitors, inductors, diodes, etc.). These
models are a fundamental characteristic of the manufacturing technology
used for this chip design and thus are provided by the team in charge of
interfacing with manufacturing or directly by the manufacturing company
(often called the foundry) or department (if in the same company).
Based on these inputs, the simulation tool generates a netlist (i.e., the data
structure that the simulator can take directly), which represents the circuit in the
schematic, potentially including the simulation type, conditions, and device models.
The result from a typical analog circuit simulation has two components:
A graph or set of graphs (or the data from which graphs can be built)
plotting specific parameters as a function of time, frequency, temperature,
or whatever has been deemed the critical independent parameter
Circuit verification
As with digital circuits, analog circuits need to be prepared for layout. Analog
circuit verification has precisely this goal: preparing the circuit for layout
generation once it has been designed and correctly simulated.
Common circuit verification tasks include
Layout Design
For both digital and analog circuits, the next step after a circuit has been
simulated to correctness is to generate a correct layout of the circuit. Figure 5-12
depicts the process of generating a correct layout based on a correct circuit.
How is a layout really generated? For a given digital or analog cell, the answer
is mostly by hand. As figure 5-12 shows, the key starting points are the schematic
and/or netlist for the circuit that we are trying to implement and the targets and
Metal. Metal is used in circuits to form wires. The material used for this
purpose was aluminum until recently. Aluminum has been replaced by
copper because copper has much higher electrical conductivity, which
means less resistance, thereby making the circuit run faster with lower
power consumption.
There are other components in layout, but we won't cover them here for the sake
of simplicity.
What is missing in this picture? The design rules have yet to be incorporated.
The final layout needs to meet the layout design rules for the manufacturing
technology that is being used. Examples of rules include "the distance between
two parallel polysilicon lines must be at least 0.13 μm" and "when inserting a
contact into a diffusion (drain or source) for a transistor, the diffusion region
needs to be at least 0.07 μm larger on every dimension than the contact, and the
contact needs to be entirely inside the diffusion region."
In recent layout design software packages, it is possible to access the design rules
directly via the layout editor, so that the software can aid the designer in drawing
rule-compatible shapes. In any case, layout design rule checking (DRC) is run after a
design has been edited and checked, because a number of rules still may need to be
checked, especially when assembling several blocks together into a single layout.
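As a toy illustration of what a spacing check does, greatly simplified compared with a real DRC engine, the following Python sketch flags two polysilicon rectangles that sit closer than the 0.13 μm example rule quoted above. The rectangle coordinates are invented for the example.

```python
# Toy design-rule check: minimum spacing between parallel polysilicon rectangles.
# Rectangles are (x_min, y_min, x_max, y_max) in micrometers.
MIN_POLY_SPACING_UM = 0.13

def horizontal_spacing(rect_a, rect_b):
    """Gap between two non-overlapping rectangles along x (simplified check)."""
    return max(rect_b[0] - rect_a[2], rect_a[0] - rect_b[2])

poly_shapes = [
    (0.00, 0.0, 0.10, 1.0),   # first polysilicon line
    (0.20, 0.0, 0.30, 1.0),   # second line, 0.10 um away: violates the rule
]

for i in range(len(poly_shapes)):
    for j in range(i + 1, len(poly_shapes)):
        gap = horizontal_spacing(poly_shapes[i], poly_shapes[j])
        if gap < MIN_POLY_SPACING_UM:
            print(f"DRC violation: spacing {gap:.2f} um < {MIN_POLY_SPACING_UM} um")
```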
Once rules have been checked, the design is not done yet! Despite all
the automation in current design tools, the layout has been mostly manually
generated, so we now need to make sure that this layout faithfully represents the
circuit schematic on which it is based. This process is LVS. In LVS, the design
tool tries to match every device and wire to the set of polygons that represent
them in the layout. When it finds discrepancies, it points to those discrepancies
in such a way that the designer can solve them quickly. For example, it may be
common to highlight simultaneously both in the schematic screen/window and
in the layout screen/window the device and/or wire that is not matching exactly.
The tool may also point to a device or wire in the schematic that cannot be found
at all in the layout.
Layer and device orientation. For various reasons, certain devices and
layers tend to be laid out in one specific direction. For example, in the
layout depicted in figure 5-13, transistor diffusion shapes are laid out
vertically, while power supply and ground lines, as well as inputs and
outputs to the circuit, are laid out horizontally. For interconnections,
metal and polysilicon lines tend to move vertically. This level of
organization and regularity helps in assembling multiple cells together
(e.g., all power supplies fit together and can easily be routed as one),
making the cell more manufacturable (regular, grid-style layouts are
easier on the photolithography tools that need to push the patterns onto
the wafer), and making it easier to pack the devices tightly in a small
space, saving area and power while improving timing/speed.
Layout verification: LVS
Once the layout is design rule correct, a designer still needs to make sure it matches
the schematic from which it originated. As in many other design tasks, even
though we generate lower-level details based on higher-level descriptions,
we still need to check that the details faithfully represent and match the higher-
level description and thus meet the requirements/specifications.
LVS verification attempts to find the electrical devices and wires in a layout
and then compares them with the schematic for the implemented circuit. Figure
5-15 depicts the LVS process as part of layout design, expanded to provide a
simplified example.
To do their job correctly, LVS tools have to take a data file, including all the
shapes drawn for each layer in the layout, and perform a number of computations
to determine what devices and wires are represented in the layout (e.g., a
polysilicon shape on top of a diffusion shape means a transistor). Importantly,
they need to ascertain how each device is connected to others through wires. This
includes all signals, including the power supply and ground.
Once a database listing wires and devices (including parasitic devices) has
been obtained, the LVS tool generates a netlist data structure based on that list.
Why? Because we are attempting to compare a layout with a schematic, and we
cannot compare apples (layout components) and oranges (schematic symbols).
The common language is the netlist.
Shorts and opens. When several wires are connected in the layout but
are not connected in the schematic, something is clearly wrong! This is
commonly referred to as a short, since the schematic is assumed to be
correct (although that is not always the case). Conversely, when several
wires are not connected in the layout but are connected in the schematic,
that is referred to as an open.
Wrong device. When a device of a certain type is used in the layout and
a different one is used in the schematic, we must have the wrong device.
Examples are using pMOS when nMOS should be used and using a low-
threshold-voltage nMOS when a standard-threshold-voltage device should be used.
Missing device. Sometimes a device or wire is simply not found in the layout.
A relatively common case is the lack of a power supply or ground wires.
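To illustrate the kind of comparison an LVS tool performs, here is a greatly simplified Python sketch that compares a schematic netlist with a layout-extracted netlist and reports the discrepancy types listed above. The netlist format, device names, and nets are invented for the example and do not correspond to any real tool's data format.

```python
# Simplified LVS-style comparison of two netlists (schematic vs. extracted layout).
# Each netlist maps a device name to (device type, tuple of connected nets).
schematic = {
    "MP1": ("pmos", ("VDD", "A", "OUT")),
    "MN1": ("nmos", ("GND", "A", "OUT")),
}
layout = {
    "MP1": ("pmos", ("VDD", "A", "OUT")),
    "MN1": ("nmos_low_vt", ("GND", "A", "OUT")),   # wrong device type in the layout
    # a missing power device or wire would show up as a missing entry here
}

for name, (dtype, nets) in schematic.items():
    if name not in layout:
        print(f"Missing device: {name} not found in the layout.")
        continue
    l_type, l_nets = layout[name]
    if l_type != dtype:
        print(f"Wrong device: {name} is {l_type} in layout, {dtype} in schematic.")
    extra = set(l_nets) - set(nets)
    missing = set(nets) - set(l_nets)
    if extra:
        print(f"Short suspected on {name}: extra layout connections {extra}.")
    if missing:
        print(f"Open suspected on {name}: schematic connections {missing} not found.")
```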
Although the example in figure 5-15 does not show all the details in the
report, it is clear that the LVS process failed, and debugging needs to happen. It
appears that the power supply has not been found in the layout.
Other potential problems found during LVS include
Some but not all of the power or ground pins in the layout are found.
The names of certain pins differ between the layout and the schematic.
There are too many nets (wires) in the layout. Some connections are
not made.
The designer reads a report in the form of an output computer file and
finds a number of violations.
The designer uses the layout or schematic editor to fix the error.
Most modern tools will be able to clearly highlight errors in both layout and
schematic directly on the screen, thereby facilitating the designers job.
The actual devices and wires that the designer intended in the first place.
We use the terms interconnect and interconnect extraction to refer to
these wires and their extraction, respectively.
The parasitic devices that come from the electrical characteristics of the
shapes in the layout but were not explicitly entered in the schematic, as they
have no useful function.
Characterization
Once a layout and a schematic are correctly designed and matched, we need to
characterize them for the use of higher-level tools. In this way, the overall circuit
or block can be evaluated in all its glory. This is a very structured process in digital
circuits, but it is becoming more common in analog circuits as well. The main
goal of circuit characterization is to generate a simplified yet detailed model of a
cell layout for use by other tools, including timing and power analysis tools.
Cell characterization, as well as its role within overall cell design, is depicted in
figure 5-17. As figure 5-17 shows, the results of characterization are a set of models,
or curves, that describe key characteristics (hence, the term characterization) of
the circuit in a parametrized way. That is, using these curves,
we can put this circuit under various different conditions (e.g., change the value
of the power supply Vdd) and obtain its key performance or power characteristics
without having to create the circuit and its layout from scratch.
How setup/hold time for a latch varies with signal slope or supply voltage
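As a small illustration of what a characterization result might look like, the following Python sketch stores cell delay as a table over input slope and output load and looks up a value. The numbers and the nearest-entry lookup are illustrative simplifications; real characterization tables are denser and are interpolated by the downstream timing tools.

```python
# Sketch of a characterization table: cell delay (picoseconds) as a function of
# input signal slope and output load. All numbers are illustrative placeholders.
input_slopes_ps = [10, 50, 100]          # input transition times
output_loads_ff = [1, 5, 10]             # output load capacitances
delay_table_ps = [                        # rows: slope, columns: load
    [20, 35, 55],
    [28, 44, 66],
    [40, 58, 82],
]

def lookup_delay(slope_ps, load_ff):
    """Nearest-entry lookup; a real tool would interpolate between table points."""
    i = min(range(len(input_slopes_ps)), key=lambda k: abs(input_slopes_ps[k] - slope_ps))
    j = min(range(len(output_loads_ff)), key=lambda k: abs(output_loads_ff[k] - load_ff))
    return delay_table_ps[i][j]

print(lookup_delay(slope_ps=60, load_ff=4))   # -> 44 ps, from the closest table entry
```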
There are a number of possible output characterization forms, the most
common of which are as follows:
Careful routing. The voltage supply is laid out extremely carefully and
made very thick.
For analog simulation, we may need to verify that the circuit behaves similarly
after layout extraction numerous times, as compared with digital circuits, which
Clock planning
An absolutely critical net (i.e., set of fully interconnected wires) and its
associated electrical signal in a chip is the clock signal (or signals), as discussed
previously. In the floorplanning and/or assembly of the chip, the clock signal
needs to be carefully distributed across the chip. Clock planning is the design
task that defines how the clock (or clocks) is distributed across the chip.
Two key needs are addressed in clock planning:
Low-skew clocks
Clean clocks
Skew is defined as the timing difference between two signals. The whole
point of having a clock is to synchronize signals and circuits across the chip. If the
clocks for different circuits are not synchronized, it is not clear how much of a time
budget exists for signals to get from latch to latch; thus, time differences (skew)
need to be deducted from the time budget. Consequently, circuits are made more
conservatively than they would otherwise need to be and thus run slower and/or
consume more power. Therefore, we need to have a clock that has low skew when appropriate.
In other words, for two circuits that in theory take the same clock as an input
but in reality pull the signal from different points in the clock wire network, we
need to pay attention to skew. We need to make sure that the clock arrives at all
of these locations at the same time. Any difference in time amounts to skew and
should be deducted from the time budget.
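A minimal numerical sketch of this skew bookkeeping follows; the arrival times and clock period are example values chosen only to illustrate the calculation.

```python
# Clock arrival times (picoseconds) at several latches fed from different points
# of the clock network. Values are examples, not measurements.
arrival_times_ps = {"latch_A": 102, "latch_B": 110, "latch_C": 97}

skew_ps = max(arrival_times_ps.values()) - min(arrival_times_ps.values())
clock_period_ps = 500                      # assumed clock period
usable_budget_ps = clock_period_ps - skew_ps

print(f"Clock skew: {skew_ps} ps")
print(f"Time budget left for logic between latches: {usable_budget_ps} ps of {clock_period_ps} ps")
```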
Power planning
The other key distribution structure in a chip is the power supply
distribution network. An increasingly important design task, power planning
has as its goal the exploration of various supply distribution options early in
the design cycle, to ensure that the best ones are chosen and implemented as the
design proceeds.
Can a power distribution network with 80% density handle 0.2 watts per
square millimeter without losing voltage strength?
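A back-of-the-envelope Python sketch of the kind of estimate behind that question follows. Apart from the 0.2 watts per square millimeter figure, every value (supply voltage, block area, grid resistance, allowed droop) is an assumption made only for illustration.

```python
# Back-of-the-envelope check of the power planning question above.
vdd = 1.0                 # supply voltage in volts (assumed)
power_density = 0.2       # watts per square millimeter (from the question)
block_area_mm2 = 4.0      # block size (assumed)
grid_resistance = 0.05    # effective resistance of the distribution network, in ohms (assumed)
max_drop_fraction = 0.05  # allow at most 5% supply droop (a common rule of thumb)

current = power_density * block_area_mm2 / vdd       # I = P / V
ir_drop = current * grid_resistance                   # V_drop = I * R
print(f"Current drawn: {current:.2f} A, IR drop: {ir_drop*1e3:.0f} mV")
print("OK" if ir_drop <= max_drop_fraction * vdd else "Voltage droop too large: re-plan the grid")
```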
Impact of Manufacturability
on Design
As a final topic, the most important design issue of all awaits, namely,
manufacturability, which is the ability to manufacture a designed chip effectively.
This entails consideration of the overall yield of the chip; that is, the percentage
of correct chips out of all those coming off the manufacturing line.
Design flow
Identify a computer-aided design (CAD) tool for each step of the design
flow
Description
Logic design
Analog design
Starting with an existing design (e.g., a two-stage operational amplifier,
which can be found on the Internet and in various books), complete the following
steps:
Design flow
Description
Circuit design
Generate a schematic
Bibliography

Overall chip design
Keating, Michael, Russell John Rickford, and Pierre Bricaud. 2006. Reuse Methodology
Manual for System-on-a-Chip Designs. New York: Springer.
Martin, Kenneth W. 1999. Digital Integrated Circuit Design. New York: Oxford University
Press.
Pedram, Massoud, and Jan M. Rabaey, eds. 2002. Power Aware Design Methodologies.
Boston: Kluwer Academic Publishers.
Rabaey, Jan M., and Massoud Pedram, eds. 1995. Low Power Design Methodologies.
Boston: Kluwer Academic Publishers.
Zheng, Pei, and Lionel Ni. 2005. Smart Phone and Next Generation Mobile Computing.
San Francisco: Morgan Kaufmann.
System-level design
Alexander, Perry. 2007. System Level Design with Rosetta (Systems on Silicon). San
Francisco: Morgan Kaufmann.
De Micheli, Giovanni, Rolf Ernst, and Wayne Wolf. 2001. Readings in Hardware/Software
Co-design. San Francisco: Morgan Kaufmann.
Gajski, D., Nikil D. Dutt, Allen C. Wu, and Steve Y. Lin. 1992. High-Level Synthesis:
Introduction to Chip and System Design. Boston: Kluwer Academic Publishers.
Gerstlauer, Andreas, Rainer Domer, Junyu Peng, and Daniel D. Gajski. 2001. System
Design: A Practical Guide with SpecC. Boston: Kluwer Academic Publishers.
Grötker, Thorsten, Stan Liao, Grant Martin, and Stuart Swan. 2002. System Design with
SystemC. Boston: Kluwer Academic Publishers.
Jerraya, Ahmed Amine, Sungjoo Yoo, Norbert Wehn, and Diederik Verkest. 2003.
Embedded Software for SoC. Boston: Kluwer Academic Publishers.
Lavagno, Luciano, Grant Martin, and Bran Selic. 2003. UML for Real: Design of
Embedded Real-Time Systems. Boston: Kluwer Academic Publishers.
Mermet, Jean. 2001. Electronic Chips and Systems Design Languages. Boston: Kluwer
Academic Publishers.
Raghunathan, Anand, Niraj K. Jha, and Sujit Dey. 1997. High-Level Power Analysis and
Optimization. Boston: Kluwer Academic Publishers.
Logic design: timing analysis
Harris, D., M. Horowitz, and D. Liu. 1999. Timing analysis including clock skew. IEEE
Transactions on Computer-Aided Design of Integrated Circuits and Systems 18 (11):
1608–1618.
Logic design: power analysis and optimization
Brooks, D., V. Tiwari, and M. Martonosi. 2000. Wattch: A framework for architectural-
level power analysis and optimizations. In Proceedings of the 27th International
Symposium on Computer Architecture, 83–94. New York: The Association for
Computing Machinery.
Rabe, D., G. Jochens, L. Kruse, and W. Nebel. 1998. Power-simulation of cell based
ASICs: Accuracy- and performance trade-offs. In Proceedings, Design, Automation
and Test in Europe, 356–361. Los Alamitos, Calif.: The Institute of Electrical and
Electronics Engineers.
Roy, Kaushik, and Sharat Prasad. 2000. Low-Power CMOS VLSI: Circuit Design. New
York: John Wiley & Sons.
Logic design: testability
Abramovici, Miron, Arthur D. Friedman, and Melvin A. Breuer. 1994. Digital Systems
Testing and Testable Design. New York: IEEE Press.
Analog design
Abidi, Asad A., Paul R. Gray, and Robert G. Meyer. 1998. Integrated Circuits for Wireless
Communications. New York: IEEE Press.
Gray, Paul R., Paul J. Hurst, Stephen H. Lewis, and Robert G. Meyer. 2001. Analysis and
Design of Analog Integrated Circuits. 4th ed. New York: John Wiley & Sons.
Digital design
Rabaey, Jan M., Anantha Chandrakasan, and Borivoje Nikolic. 2002. Digital Integrated
Circuits. 2nd ed. Upper Saddle River, N.J.: Prentice Hall.
Manufacturability/yield
Director, Stephen W., Wojciech Maly, and Andrzej J. Strojwas. 1990. VLSI Design for
Manufacturing: Yield Enhancement. Boston: Kluwer Academic Publishers.