
Hauptseminar

Combinational Logic Optimization


Nikolaus Hörr

Degree programme:

Informatik

Supervisor:

Talal Arnaout

Institut für Technische Informatik, Universität Stuttgart, Pfaffenwaldring 47, D-70569 Stuttgart

Abstract
Today's combinational logic circuits have increasing area, numbers of gates and I/O connections. The latter aspect requires minimization algorithms which are highly efficient, since even the notation (which is the input to the algorithms) of the implemented functions can have exponential size in the number of inputs. For the optimization of two-level logic circuits, either heuristic or exact methods are used; the practical use of the latter is restricted to smaller circuits. A data scheme on which different operations can be computed very efficiently is the positional-cube notation. Multiple-level logic optimization is normally done using heuristic algorithms, due to the high computational complexity of exact algorithms.

Contents

1 Introduction
2 Definitions
  2.1 Conventions
  2.2 Basics
  2.3 Representations of Boolean functions
3 Two-Level Combinational Logic Optimization
  3.1 Testability
  3.2 Algorithms for Exact Logic Minimization
    3.2.1 Positional-cube notation of binary-valued functions
    3.2.2 Positional-cube notation of multiple-valued functions
    3.2.3 The unate recursive paradigm
  3.3 Algorithms for Heuristic Logic Minimization
4 Multiple-level combinational logic optimization
  4.1 Transformations
  4.2 Algebraic model
  4.3 Boolean model
  4.4 Delay optimization
5 Conclusion
Bibliography

List of Figures

2.1 Redundant cover
2.2 Irredundant cover
2.3 Minimum cover
3.1 Covers of a two-output function
3.2 Prime implicant table
3.3 Dominated column c1 and dominant column c2
3.4 Dominant row r1 and dominated row r2
3.5 Reduced prime implicant table
3.6 Petrick's method
3.7 Symbol encoding in positional-cube notation
3.8 Implicants in symbolic notation
3.9 Fast test for intersection of two implicants in positional-cube notation
3.10 Positional-cube notation of a multiple-output Boolean function
3.11 Positional-cube notation of a multi-valued input function (the green numbers show the input values)
3.12 Positional-cube notation of a strongly unate function (in the original bit-order; the green numbers show the input values)
3.13 Positional-cube notation of a strongly unate function (in a bit-order leaving no 0s to the right of a 1; the green numbers show the input values)
3.14 Positional-cube notation of a weakly unate function

Chapter 1 Introduction
Logic circuits can be divided into two classes: combinational logic circuits, which are circuits without feedback (in particular, no flip-flops are used or created) and are therefore stateless, and sequential logic circuits, which implement finite-state machines. We will discuss combinational logic circuits only, but note that sequential circuits also have combinational components.

Two-level logic optimization is a means of optimizing the implementation of circuits given in two-level tabular forms in terms of area and testability. A programmable logic array (PLA) is a standard example of such a circuit form. Two-level logic optimization is also a means to reduce the information needed to describe logic functions which are components of multiple-level representations of logic circuits. And lastly, it is a formal way of processing the representation of systems described by logic functions. Since any path from an input to an output passes through two gates, the delay depends on the size of the fanout stems (output capacitance), which is related to the area of the circuit. So reducing the area of two-level logic circuits also reduces the delay.

Multiple-level logic optimization is used when a logic function is to be implemented in multiple-level logic. A reason for using more than two levels is the size of circuits in two-level implementations. When reducing the area using multiple-level logic, the paths from inputs to outputs can be longer¹ than in two-level logic (where each path passes through two gates), so the importance of delay minimization increases.

The goals of logic optimization are:

- minimization of delay times
- minimization of the required area
- testability (mostly irredundancy)

¹ The length of a path is correlated with the sum of gate delays along the path; the number of gates can be used as an approximation.

Chapter 2 Definitions
2.1 Conventions

- We assume the sum-of-products form is available (obtained using De Morgan's rules, preserving the number of literals and the number of terms, or by select transformations explained later).
- We sometimes identify a cover F and the function f associated with F (e.g., when it makes a shorter description possible).
- We write $x'$ instead of $\overline{x}$ for the inverse of $x$.
- We assume positive logic (0 = false, 1 = true).

2.2 Basics

A completely specified Boolean function f has an on-set and an off-set, which are the subsets of the domain (all possible input assignments) where the output takes the values 1 and 0, respectively. An incompletely specified Boolean function additionally has a dc-set (don't-care set) where the output can take any of the possible values. Boolean functions can thus be either completely or incompletely specified. Since the completely specified ones can be seen as incompletely specified functions with an empty dc-set, we will mostly look upon incompletely specified Boolean functions [1].

The variables a function depends on are called the support of the function. If $f(x_1, \ldots, x_n)$ is a Boolean function depending on n variables, the support of the function f is $\{x_1, \ldots, x_n\}$.

Cofactors are widely used in logic optimization.


If $f(x_1, \ldots, x_n)$ is a Boolean function, the cofactor with respect to the positive literal $x_i$ ($1 \le i \le n$) is $f_{x_i} = f(x_1, \ldots, x_{i-1}, 1, x_{i+1}, \ldots, x_n)$, and the cofactor with respect to the negative literal $x_i'$ is $f_{x_i'} = f(x_1, \ldots, x_{i-1}, 0, x_{i+1}, \ldots, x_n)$. This definition will be extended to functions with multi-valued inputs in section 3.2.2.

If $f(x_1, \ldots, x_n)$ is a Boolean function, it can be transformed into a sum of products (SOP) of n-literal terms, called the minterms of the function, by using Shannon's expansion recursively. This transformation is also called Boole's expansion and works as follows: $f(x_1, \ldots, x_n) = x_i f_{x_i} + x_i' f_{x_i'}$, $i \in \{1, \ldots, n\}$. The select transformation corresponds to Shannon's expansion with respect to the latest-arriving input [4], [5]. Each minterm corresponds to an input assignment for which the output of the corresponding function is 1 (true).

An implicant is either a minterm of a function f (alternatively, of a cover F) or results from combining minterms of f into a product term of fewer literals than the minterms have. In order to increase the size of an implicant, input assignments from the dc-set can also be used like minterms. An implicant is called a prime implicant (or prime) if it is not contained in any other implicant. A prime implicant is called an essential prime implicant if it contains an implicant that is not contained in any other prime implicant.

A cover F of a function f is a set of implicants that satisfies the following constraint: $f^{on} \subseteq F \subseteq f^{on} \cup f^{dc}$. Some different covers of the same Boolean function $f(x, y, z) = x'y'z' + x'y'z + x'yz + xy'z + xyz' + xyz$ are shown in figures 2.1, 2.2 and 2.3. The size (also called cardinality) of a cover is the number of its implicants. Usually a function can be implemented by different covers; in particular, when dealing with incompletely specified Boolean functions, we can use the dc-set to reduce the size of a cover.

A cover F is called redundant if there exists an implicant in it that can be removed while maintaining the cover property. An example of a redundant cover is presented in figure 2.1. A cover is irredundant or minimal if it is not a proper superset of any other cover of the function. An irredundant cover (also called a minimal cover) is shown in figure 2.2. A weaker minimality criterion is minimality with respect to single-implicant containment: a cover is minimal with respect to single-implicant containment if no implicant of the cover contains any other implicant of the cover. The cover shown in figure 2.1 is redundant; however, it is minimal with respect to single-implicant containment.
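As a concrete illustration of cofactors and Shannon's expansion, the following Python sketch represents a function by its truth table; the representation and names are our own, not from the paper.

```python
from itertools import product

# A Boolean function is modelled as a truth table: a dict mapping input
# tuples (x1, ..., xn) to 0 or 1. This layout is illustrative only.
def cofactor(f, i, value, n):
    """Cofactor of f with respect to variable i fixed to `value`;
    the result is a function of the remaining n-1 variables."""
    return {args: f[args[:i] + (value,) + args[i:]]
            for args in product((0, 1), repeat=n - 1)}

def shannon_holds(f, i, n):
    """Check f = xi * f_xi + xi' * f_xi' pointwise (Shannon's expansion)."""
    f1, f0 = cofactor(f, i, 1, n), cofactor(f, i, 0, n)
    for args in product((0, 1), repeat=n):
        rest = args[:i] + args[i + 1:]
        xi = args[i]
        if f[args] != ((xi & f1[rest]) | ((1 - xi) & f0[rest])):
            return False
    return True

# Example: f(x, y, z) = x'y' + yz
n = 3
f = {a: ((1 - a[0]) & (1 - a[1])) | (a[1] & a[2]) for a in product((0, 1), repeat=n)}
assert all(shannon_holds(f, i, n) for i in range(n))
```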


Figure 2.1: Redundant cover

Figure 2.2: Irredundant cover

A minimal cover is not necessarily a cover of minimum cardinality, since there may exist other covers consisting of fewer (and different) implicants. A minimum cover is a cover of minimum cardinality. An example of a minimum cover is given in figure 2.3.

Figure 2.3: Minimum cover

2.3 Representations of Boolean functions

There are many different ways of representing Boolean functions. Some of the most popular ones are briefly described here.

Tabular forms are representations like the truth table. The truth table is a complete list of all input assignments and the corresponding output values.



Since all input assignments are listed (and their number is exponential in the number of inputs), the size of a truth table is exponential in the number of inputs; therefore it is only used for small functions [6]. Another tabular form is the implicant table. An implicant table of a single-output function consists of the implicants of the function as rows. For multiple-output functions, we have to add an additional column to the table, which contains a row vector of the output values corresponding to each implicant.

Expression forms are used for better readability of certain equalities. Common logic expressions are the two-level and multiple-level forms of logic expressions. For example, $f(a, b, c) = a'b + bc'$ is a two-level expression form.

Binary decision diagrams (BDDs) are graph-based representations based on trees or directed acyclic graphs (DAGs) with a root element. A BDD represents a set of binary-valued (BV) decisions, making an overall decision that can be either true or false. A BDD can have a defined order on the variables, in which case it is called an ordered binary decision diagram (OBDD). Since the cost of operations on OBDDs is (only) polynomial in the number of vertices, OBDDs are used more often in practical applications than any other BDD [1].
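The sharing that makes (O)BDDs compact can be shown in a few lines. The sketch below builds a reduced OBDD by recursive Shannon expansion with a unique table; it is a hypothetical toy of our own, not an excerpt from any BDD package.

```python
from itertools import product

class OBDD:
    """Toy reduced OBDD: isomorphic subgraphs are shared via a unique table."""
    def __init__(self):
        self.unique = {}                 # (var, low, high) -> node id
        self.nodes = {0: 'F', 1: 'T'}    # the two terminal nodes

    def mk(self, var, low, high):
        if low == high:                  # redundant decision: skip the node
            return low
        key = (var, low, high)
        if key not in self.unique:
            self.unique[key] = len(self.nodes)
            self.nodes[self.unique[key]] = key
        return self.unique[key]

    def build(self, f, var=0):
        """f: truth table over the remaining variables (dict of 0/1 tuples)."""
        if () in f:
            return f[()]                 # terminal reached
        f0 = {k[1:]: v for k, v in f.items() if k[0] == 0}
        f1 = {k[1:]: v for k, v in f.items() if k[0] == 1}
        return self.mk(var, self.build(f0, var + 1), self.build(f1, var + 1))

# 3-input parity: 8 truth-table rows, but only 5 internal OBDD nodes.
bdd = OBDD()
root = bdd.build({a: sum(a) % 2 for a in product((0, 1), repeat=3)})
print(len(bdd.nodes) - 2)   # -> 5
```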

Chapter 3 Two-Level Combinational Logic Optimization


Combinational logic optimization can be done either in an exact or in a heuristic manner. While algorithms for exact logic minimization yield a minimum cover (given enough resources to run them), algorithms for heuristic logic minimization search for minimal (irredundant) covers, but may also find a minimum cover.

Two-level logic minimization tries to reduce the size of a Boolean function's representation. Since there are different styles of implementing a Boolean function, the definition of the size varies slightly. A PLA implementation, for example, has the primary target of reducing the number of terms and a secondary target of reducing the number of literals. Other implementation styles using complex gates might have the reduction of literals as the primary objective.

There are not only functions with one output, called single-output functions, but also functions with more than one output, called multiple-output functions. Multiple-output functions can be optimized using the same principles as for single-output functions, but it is a more complex task. If the scalar components of a function were optimized one at a time, the result might be suboptimal, since product-term sharing is not exploited. Figure 3.1 shows an optimal and a suboptimal cover of the two-output function f with $f_1(x, y, z) = x'y'z' + x'yz + xy'z + xyz' + xyz$ and $f_2(x, y, z) = xy'z + xyz$ corresponding to the first and second output.

A multiple-output minterm of a Boolean function $f: \{0, 1\}^n \to \{0, 1, *\}^m$ consists of an input part and an output part: a pair of row vectors of dimensions n and m, respectively. The input part is defined as if it were a single-output minterm for one of the function's scalar outputs. The output part has exactly one 1-entry, corresponding to the subfunction for which the single-output minterm implies the value 1. (If an implicant implies that more than one output has the value 1, the according number of multiple-output implicants has to be used.)


Figure 3.1: Covers of a two-output function

A multiple-output implicant is an extension of multiple-output minterms, just as single-output implicants are extensions of single-output minterms. Input and output parts still have dimensions n and m, respectively. However, don't cares are allowed in the input part (so this row vector can now contain *-entries) and the output part can have more than one 1-entry. The 1-entries in the output part imply that the output of the corresponding subfunction is either true or don't care.

An n-valued single-output function can also be interpreted as a binary-valued m-output function with $m = \lceil \log_2 n \rceil$ by binary encoding of the input values. The interpretation the other way round is straightforward, except that output-value combinations which never occur can be omitted.

3.1 Testability

With shrinking chip structures, the possibility of defects increases, making testability one of the main targets. For many testing purposes, a stuck-at fault model is used to represent malfunctions. If a circuit has a stuck-at-0 fault, its output behaves as if a logic gate's input were permanently 0. The definition of stuck-at-1 faults is analogous. A multiple stuck-at fault is related to more than one fault, while a single stuck-at fault corresponds to a single fault [1].

Despite the possibility of equipping circuits with various kinds of additional testing circuitry, it is desirable to have an irredundant cover for basic testing (i.e., testing with respect to stuck-at fault models). If a set of input assignments exists which allows the detection of all possible faults, the circuit is called fully testable for a particular fault model (e.g., the single stuck-at fault model) [1]. If a (two-level) cover of a single-output function is prime and irredundant, the resulting circuit is fully testable for single and multiple stuck-at faults. A prime and irredundant cover of a multiple-output function yields a fully testable circuit if each scalar output is represented by a prime and irredundant cover or there are no shared product terms [1].


A non-minimum irredundant (i.e., minimal) prime cover has the same testability properties as a minimum prime cover (if we do not implement additional pins or circuits for testing only) [1].
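The link between redundancy and untestability can be demonstrated on a toy two-level AND-OR model. The encoding below (cubes as dicts) is our own, and an output stuck-at-0 fault on an AND gate is modelled simply by deleting its cube.

```python
from itertools import product

# A cover is a list of cubes; each cube maps a variable index to the
# required value (1 for x, 0 for x'). Illustrative encoding only.
def evaluate(cover, assignment):
    return int(any(all(assignment[v] == b for v, b in cube.items())
                   for cube in cover))

def output_sa0_testable(cover, n, ci):
    """Does some input assignment expose cube ci's output stuck-at-0?"""
    faulty = cover[:ci] + cover[ci + 1:]
    return any(evaluate(cover, a) != evaluate(faulty, a)
               for a in product((0, 1), repeat=n))

# f = ab + a'c plus the redundant consensus term bc: the fault on the
# redundant AND gate cannot be tested, matching the discussion above.
cover = [{0: 1, 1: 1}, {0: 0, 2: 1}, {1: 1, 2: 1}]
print([output_sa0_testable(cover, 3, i) for i in range(3)])  # [True, True, False]
```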

3.2 Algorithms for Exact Logic Minimization

Exact logic minimization addresses the problem of computing a minimum cover. Because of the exponential cost of the exact algorithms, the use of exact logic minimization is limited to functions with at most about 15 variables [2]. The algorithms used are based on the work of Quine and McCluskey: Quine discovered that there is a minimum cover that is prime, while McCluskey formulated the search for a minimum cover as a covering problem on a prime implicant table [1].

The prime implicant table, also called the covering matrix [2], is a binary-valued matrix. The rows and columns correspond to the minterms and the prime implicants, respectively. Figure 3.2 shows the prime implicant table (with named rows and columns) for the function $f = a'b'c' + a'b'c + a'bc' + abc' + abc$. The prime implicants of f are α = a'b', β = a'c', γ = bc', δ = ab.

        α = a'b'   β = a'c'   γ = bc'   δ = ab
000        1          1          0         0
001        1          0          0         0
010        0          1          1         0
110        0          0          1         1
111        0          0          0         1

Figure 3.2: Prime implicant table

The covering matrix can be seen as the incidence matrix of a hypergraph with vertices and edges corresponding to the minterms and prime implicants, respectively. The covering matrix can grow very fast as the number of inputs increases, since a Boolean function with n inputs and one output can have $O(3^n/n)$ prime implicants and $O(2^n)$ minterms [7]. As a result of running an exponential algorithm¹ on a problem of exponential size (the covering matrix), exact minimization requires a huge amount ($O(2^{3^n})$) of time and memory.

¹ The minimum covering problem is NP-complete.
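For illustration, here is a compact Python sketch of the Quine-McCluskey prime-generation step (cubes as strings over '0', '1' and '*' for don't care; all names are ours). Run on the example of figure 3.2 it reproduces the four primes α, β, γ, δ.

```python
def combine(a, b):
    """Merge two cubes differing in exactly one non-'*' position, else None."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and '*' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '*' + a[i + 1:]
    return None

def prime_implicants(minterms, n):
    cubes = {format(m, '0%db' % n) for m in minterms}
    primes = set()
    while cubes:
        nxt, used = set(), set()
        for a in cubes:
            for b in cubes:
                c = combine(a, b)
                if c:
                    nxt.add(c)
                    used.update((a, b))
        primes |= cubes - used      # cubes that merged nowhere are prime
        cubes = nxt
    return primes

print(sorted(prime_implicants({0b000, 0b001, 0b010, 0b110, 0b111}, 3)))
# -> ['*10', '0*0', '00*', '11*']  i.e. bc', a'c', a'b', ab
```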

In order to reduce the size of the problem, the prime implicant table can be reduced. To obtain a reduced prime implicant table, the following rules are used [1], [7]:

- Double rows often occur in practical applications and can be reduced to one row.
- Essential columns correspond to the essential primes and must be part of any cover. A column is essential if it has a 1 in a row where no other column has a 1.
- Dominated columns can be omitted. A column c1 is dominated by another column c2 if every entry of c1 is less than or equal to the corresponding entry of c2, as shown in figure 3.3.
- Dominant rows can be omitted. A row r1 dominates another row r2 if every entry of r1 is greater than or equal to the corresponding entry of r2, as shown in figure 3.4. Note that this definition from [1] differs from the definition given in [7], where dominant and dominated rows (rows only!) are defined the other way round, and therefore dominated rows are removed there.

c1  c2
 0   1
 0   0
 1   1
 0   0
 1   1

Figure 3.3: Dominated column c1 and dominant column c2

r1  1 1 0 1 1 0
r2  0 1 0 1 0 0

Figure 3.4: Dominant row r1 and dominated row r2

A reduced prime implicant table is shown in figure 3.5. If the reduced table still has entries left, it is called cyclic and models the so-called cyclic core of the problem, which can be solved by branching. Otherwise a minimum cover has been generated by using only the essential primes. The table shown in figure 3.5 is not cyclic, hence no branching is required and we can select one of the primes β or γ to extend the set of essential primes α and δ to a cover. McCluskey's original method uses branching to figure out which of the left-over implicants (columns) in the cyclic core have to be chosen in order to get the optimum solution.

Figure 3.5: Reduced prime implicant table

Another algorithm used to determine minimum covers is Petrick's method, shown in figure 3.6. For each minterm (row), the implicants covering it are written as a sum, which has to evaluate to true, since at least one of the implicants covering a minterm has to be part of the function's cover. Then all of these sums are combined into a product of sums, which must also evaluate to true (since all minterms have to be covered by a cover). After this product of sums is transformed into a sum of products, where each product represents a possible selection of prime implicants, we can select a product term (clause) with the fewest literals, which represents a minimum cover. At first glance, Petrick's method may seem the absolutely better approach, but a closer look shows that it is not, because transforming a product of sums into a sum of products is exponential in the number of operations.

Figure 3.6: Petrick's method
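A sketch of Petrick's method in Python (with our own naming): each clause is the set of primes covering one minterm; the product of sums is multiplied out clause by clause, with absorption keeping the expansion small on this example.

```python
def petrick(clauses):
    """clauses: one set of prime names per minterm. Returns a cheapest cover."""
    products = {frozenset()}
    for clause in clauses:                      # multiply one sum at a time
        products = {p | {q} for p in products for q in clause}
        # absorption: drop any product that is a superset of another
        products = {p for p in products if not any(q < p for q in products)}
    return min(products, key=len)               # a product term of fewest literals

# Figure 3.2's example: minterms 000, 001, 010, 110, 111
clauses = [{'alpha', 'beta'}, {'alpha'}, {'beta', 'gamma'},
           {'gamma', 'delta'}, {'delta'}]
print(sorted(petrick(clauses)))   # one minimum cover, e.g. alpha, gamma, delta
```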

3.2.1 Positional-cube notation of binary-valued functions

In order to increase the efficiency of operations on implicants, a binary encoding called positional-cube notation has been developed. When dealing with Boolean functions, the symbols are encoded using 2-bit fields (representing the two values a variable can take), as shown in figure 3.7.

Symbol    Encoding
  ∅          00
  0          10
  1          01
  *          11

Figure 3.7: Symbol encoding in positional-cube notation

a b c d
0 * * 0
0 1 * *
1 0 * *
1 * 0 1

Figure 3.8: Implicants in symbolic notation

a'd'   10 11 11 10
a'b    10 01 11 11
AND    10 01 11 10   = a'bd'

a'b    10 01 11 11
ab'    01 10 11 11
AND    00 00 11 11   -> void (fields of 0s)

Figure 3.9: Fast test for intersection of two implicants in positional-cube notation

The positional-cube notation of a BV single-output function can be derived from the function's implicants (as row vectors, one below the other), replacing the symbolic entries by the corresponding 2-bit binary encoding. The symbol ∅ can occur only in implicants which are the results of manipulations on (other) implicants, and it states that the resulting implicant is empty (and therefore can never evaluate to true, so it is void). A very efficient operation based on the positional-cube notation is, for example, the intersection of two (or more) implicants. The implicants of the function $f(a, b, c, d) = a'd' + a'b + ab' + ac'd$ are given in figure 3.8 as row vectors using the normal symbols. In positional-cube notation, the intersection of two implicants can be determined bit-by-bit as the product of the implicants' entries. In our example, the intersection of the first two implicants (a'd' and a'b) is a'bd', and the intersection of the second and the third implicant (a'b and ab') is empty (void); figure 3.9 presents this in positional-cube notation. This notation can be extended to multiple-output BV functions by appending an output (row) vector to each row (implicant). The output vectors have as many entries as the function has outputs and represent the outputs for which the implicant (row) implies the true value (as with multiple-output implicants). Figure 3.10 shows an example for the multiple-output function f with f1 = ab', f2 = ab' + a'b, f3 = ab in positional-cube notation.

01 10 110
10 01 010
01 01 001

Figure 3.10: Positional-cube notation of a multiple-output Boolean function
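The fast intersection test maps directly onto machine words. Below is a small Python sketch (our own encoding choices) that packs positional cubes into integers and intersects them with one bitwise AND, reproducing the example of figures 3.8 and 3.9.

```python
# Positional-cube implicants packed into Python integers, two bits per
# variable (0 -> 10, 1 -> 01, don't care -> 11); an illustrative encoding.
ENC = {'0': 0b10, '1': 0b01, '*': 0b11}

def cube(pattern):
    """Build the positional-cube word for a pattern like '0**0' (= a'd')."""
    word = 0
    for ch in pattern:
        word = (word << 2) | ENC[ch]
    return word

def intersect(a, b, nvars):
    """Bitwise AND; a 00 field in the result means the intersection is void."""
    c = a & b
    for i in range(nvars):
        if ((c >> (2 * i)) & 0b11) == 0:
            return None
    return c

# The implicants of f(a, b, c, d) from figure 3.8:
i1, i2, i3 = cube('0**0'), cube('01**'), cube('10**')   # a'd', a'b, ab'
print(intersect(i1, i2, 4) == cube('01*0'))   # True: a'd' & a'b = a'bd'
print(intersect(i2, i3, 4))                   # None: a'b & ab' is void
```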



3.2.2 Positional-cube notation of multiple-valued functions

When x is a p-valued variable of a multiple-valued input (MVI) function, there are more literals than x and x': for each subset $S \subseteq \{0, \ldots, p-1\}$ (the values the variable x can take), the corresponding literal is $x^{[S]}$. This means a literal $x^{[S]}$ will evaluate to true for more than one value of x when the cardinality of S is greater than 1. In particular, $x^{[\{0, \ldots, p-1\}]}$ is called the full literal, evaluating to true for any value of x and therefore representing a don't-care condition on x; on the other hand, $x^{[\emptyset]}$ is called the empty literal, evaluating to false for all values x can take.

The positional-cube notation can also be extended to MVI functions. When dealing with BV input functions, the left and right bits in the 2-bit symbol encoding correspond to the literal being true for the values 0 and 1, respectively. A p-valued variable is represented by a p-bit field, so we again use one bit for every possible value. For example, the function $f(x, y) = x^{[\{2\}]} y^{[\{0,1\}]} + x^{[\{0,1\}]} y^{[\{1\}]}$ with a ternary-valued variable x ($x \in \{0, 1, 2\}$) and a BV variable y ($y \in \{0, 1\}$) can be simplified immediately to $f(x, y) = x^{[\{2\}]} + x^{[\{0,1\}]} y^{[\{1\}]}$, since the literal $y^{[\{0,1\}]}$ is full (representing a don't care) and therefore can be dropped. The positional-cube notation of the function f is given in figure 3.11.

Figure 3.11: Positional-cube notation of a multi-valued input function (the green numbers show the input values)

Let $x_i$ be a p-valued variable in the support of f. The cofactor of an MVI function $f(x_1, \ldots, x_n)$ with respect to the literal $x_i^{[k]}$ is $f_{x_i^{[k]}} = f(x_1, \ldots, x_{i-1}, k, x_{i+1}, \ldots, x_n)$, $i \in \{1, \ldots, n\}$, $k \in \{0, \ldots, p-1\}$.

A BV function f is called positive unate in variable x if $f_{x'} \subseteq f_x$, and negative unate in variable x if $f_x \subseteq f_{x'}$. A function is positive unate if it is positive unate in all variables, negative unate if it is negative unate in all variables; otherwise it is binate [1]. The function $f(a, b, c, d) = a + b' + c' + d$ is positive unate in variables a and d and negative unate in variables b and c, and therefore it is binate. The function $g(b, c) = b' + c'$ is negative unate and $h(a, d) = a + d$ is positive unate.

When considering functions with multiple-valued inputs, there are two kinds of unateness, namely strong unateness and weak unateness. A function f is strongly unate² in the p-valued variable x if an order $\preceq$ can be imposed on the values of x such that the following holds: $\forall i, j \in \{0, \ldots, p-1\}$ with $i \preceq j$: $f_{x^{[i]}} \subseteq f_{x^{[j]}}$.
² The symbol $\preceq$ indicates that any relation which is antisymmetric, transitive and reflexive can be used as the order.


Figure 3.12: Positional-cube notation of a strongly unate function (in the original bit-order; the green numbers show the input values)

Figure 3.13: Positional-cube notation of a strongly unate function (in a bit-order leaving no 0s to the right of a 1; the green numbers show the input values)

The cover of a function is strongly unate in a variable if the bit positions in the corresponding column of the cover's positional-cube notation can be reordered such that no 0 appears to the right of a 1. An example of such a reordering is shown in figures 3.12 and 3.13. If a cover F is unate, the corresponding function is also unate. Strong unateness is not used in two-level logic minimization (so far), since another kind of unateness, called weak unateness, is easier to compute, more likely to occur, and can also be used to accelerate operations which are based on tautology tests [3]. A function f is weakly unate in the p-valued variable x if there exists a value i of x such that changing x in an input assignment (where x has the value i) from i to any other value will either change the output from 0 to 1 or not change it at all. More formally: $\exists i \in \{0, \ldots, p-1\}\ \forall j \in \{0, \ldots, p-1\}: f_{x^{[i]}} \subseteq f_{x^{[j]}}$. If a cover is weakly unate in variable x, the bit-fields in the column corresponding to x, seen as a matrix, contain a column of 0s when omitting the full literals (represented by bit-fields consisting only of 1s) (figure 3.14).

Figure 3.14: Positional-cube notation of a weakly unate function
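The syntactic criterion just stated is easy to check mechanically. The sketch below (with our own list-of-tuples data layout) tests one variable's column of a cover for weak unateness.

```python
# Syntactic weak-unateness check on one variable's column of a cover in
# positional-cube notation; the list-of-tuples layout is illustrative.
def weakly_unate(column):
    rows = [f for f in column if not all(f)]   # omit full literals (all 1s)
    if not rows:
        return True
    return any(all(row[k] == 0 for row in rows) for k in range(len(rows[0])))

# A 3-valued variable: after dropping the full literal (1, 1, 1), the bit of
# the middle value is 0 in every remaining row -> weakly unate.
print(weakly_unate([(1, 0, 0), (0, 0, 1), (1, 1, 1)]))   # True
print(weakly_unate([(1, 0, 1), (0, 1, 0)]))              # False
```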


3.2.3 The unate recursive paradigm

The unate recursive paradigm belongs to a class of algorithms following the recursive divide-and-conquer approach, and it (recursively) makes use of the fact that operations on sets of implicants can be done on a variable-by-variable (and value-by-value) basis. If x is a p-valued variable in the support of the functions f and g, a binary operation on f and g can be executed by merging³ the results of the operation on the cofactors with respect to a chosen variable: $f \odot g = \sum_{k=0}^{p-1} x^{[k]} \left( f_{x^{[k]}} \odot g_{x^{[k]}} \right)$, where $\odot$ is an arbitrary binary operator [1]. Note that this is an extension of Shannon's expansion. By recursive decomposition of a binate function, the result may yield unate cofactors, whose processing can be done in a very efficient way. It is possible to direct the expansion towards obtaining unate cofactors in early recursion levels by selecting the variables in a special order. Some of this class's algorithms are briefly described in the following.

Tautology test: Since it is an important question in logic optimization whether a function is a tautology or not, a tautology test is needed. By negating the function, it can also be tested for being a contradiction. As a direct consequence of Shannon's expansion, a BV function is a tautology if and only if both of its cofactors with respect to any (positive and negative) literal are tautologies. Similarly, as a direct consequence of extending Shannon's expansion to MVI functions, an MVI function is a tautology if and only if all of its cofactors with respect to any value of any variable are tautologies. So a function can be tested for tautology by recursively expanding it along the values of all its variables until it is possible to decide whether a cofactor is a tautology or not (a sketch follows below). The earlier this decision can be made, the faster the algorithm. Since there are criteria for this decision which only apply to unate functions, it is desirable to obtain unate cofactors at the earliest possible stage of the recursion tree. The rules which use unateness for terminating or simplifying the recursion are [1]:

- A cover is not a tautology when it is weakly unate and there is no row of all 1s (in positional-cube notation).
- If the cover is weakly unate in some variable, the tautology test can be done on the subset of rows not depending on the weakly unate variable.
³ Merging is done here by summing up the products of every result and the literal corresponding to the cofactor of the result.
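A compact sketch (our own encoding) of the recursive tautology test via Shannon expansion; it deliberately omits the unateness-based terminations described above.

```python
# Plain recursive tautology check for a binary-valued cover.
# A cube is a frozenset of (variable index, value) literals.
def cofactor(cover, var, value):
    out = set()
    for cube in cover:
        d = dict(cube)
        if d.get(var, value) == value:         # cube compatible with var = value
            out.add(frozenset((v, b) for v, b in d.items() if v != var))
    return out

def tautology(cover, n, var=0):
    if frozenset() in cover:                   # a cube free of literals covers all
        return True
    if not cover or var == n:
        return False
    return all(tautology(cofactor(cover, var, val), n, var + 1)
               for val in (0, 1))

# x + x' is a tautology; x + y is not.
print(tautology({frozenset({(0, 1)}), frozenset({(0, 0)})}, 1))   # True
print(tautology({frozenset({(0, 1)}), frozenset({(1, 1)})}, 2))   # False
```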


Containment test: A cover F containing an implicant $\alpha$ is equivalent to the cofactor $F_\alpha$⁴ being a tautology; hence the containment test can be done using the tautology test above.

Complementation: The complement of a cover is simpler to obtain with unate cofactors.

Computation of prime implicants: Since a cover of a strongly unate function contains all prime implicants, the primes can be obtained by finding a cover.

⁴ $F_\alpha$ is another extension of the cofactor and can be seen as the cofactor with respect to an implicant (instead of a variable).

3.3 Algorithms for Heuristic Logic Minimization

Heuristic algorithms for logic minimization were developed because of the need to reduce the size of two-level forms with limited memory and time. Heuristic optimization can be seen as applying different operators to a cover, which is the input to the algorithms. The different heuristic minimizers can be characterized by the operators they use and the order in which they are applied. When the size of the cover can no longer be reduced by applying the operators used, the algorithm terminates and outputs the modified cover; the sketch after the list illustrates this control flow. Some of the operators used in heuristic logic minimization are given below. All of them use heuristics [1].

- Expand: tries to maximize the size of each implicant of a cover, so that other (smaller) implicants become covered and can be deleted.
- Reduce: decreases the size of each implicant of a cover, so that a successive expansion may lead to a better result.
- Irredundant: makes a cover irredundant by deleting a maximum number of redundant implicants. Recall that an irredundant cover is minimal.
- Essentials: detects the essential primes of a cover (also used for exact minimization); it can be implemented using a containment test (which in turn can be implemented using a tautology test).
- Reshape: modifies the cover while preserving its cardinality. The implicants are processed pairwise, expanding one and reducing the other, so that together with the other implicants the set is still a cover of the function.
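The termination criterion described above amounts to a simple fixed-point loop. The following skeleton (our own framing; the operators are supplied as callables) shows the control flow without committing to any particular operator implementation.

```python
# Skeleton of a heuristic minimization loop: apply the operators until the
# cover's cost stops improving. The cost (number of terms, then number of
# literals) mirrors the PLA objectives discussed earlier.
def minimize(cover, operators):
    def cost(c):
        return (len(c), sum(len(cube) for cube in c))
    best = cost(cover)
    while True:
        for op in operators:
            cover = op(cover)
        if cost(cover) >= best:
            return cover
        best = cost(cover)

# Usage sketch: minimize(cover, [expand, irredundant, reduce_])
# with user-supplied implementations of the operators described above.
```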


Chapter 4 Multiple-level combinational logic optimization


Combinational logic circuits are very often implemented as multiple-level logic networks, since the additional degrees of freedom can be used to reduce area and delays. It is also possible to design a circuit in such a way that special timing requirements are met for the logic paths. Although a few exact methods for optimizing multiple-level logic networks are available, they are normally not used because of their immensely high computational complexity [1]. Usually heuristic algorithms are used to optimize multiple-level logic circuits. The criteria are usually minimizing the area (while respecting timing requirements) or reducing the delay (with respect to a maximum area).

4.1 Transformations

Combinational multiple-level circuits can be represented by directed acyclic graphs. The input variables are vertices having only outgoing edges. Each internal vertex corresponds to a Boolean function of the (possibly negated) results of the vertices having an outgoing edge to the vertex considered; so an internal vertex can also be considered a variable. The outputs are represented by vertices having only one incoming edge, denoting the external visibility of a variable. Some of the transformations used in multiple-level combinational logic optimization are given below (a sketch of elimination follows the list):

- Elimination: If we replace a variable by the corresponding expression in all its occurrences, the internal vertex representing the variable can be eliminated (removed). This is the simplest transformational algorithm.
- Decomposition: An internal vertex can be replaced by two (or more) vertices of a subnetwork which together represent the same expression.


- Extraction: If two (or more) functions (represented by vertices) have a common subexpression, this subexpression can be extracted into a new vertex, and the corresponding variable can be used within the other functions, replacing the common subexpression.
- Simplification: Simplification reduces the complexity of a function by using the properties of its representation.
- Substitution: If a subexpression of a function is already represented by a vertex, the variable corresponding to that vertex can be used to substitute the subexpression.
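A sketch of the elimination transformation in a deliberately simplified algebraic setting (our own encoding; complements are ignored for brevity): the network maps each variable to an SOP, a set of cubes, each cube a frozenset of variable names.

```python
def eliminate(network, v):
    expr = network.pop(v)                       # v's defining expression
    for name, sop in network.items():
        new = set()
        for cube in sop:
            if v in cube:
                new |= {(cube - {v}) | d for d in expr}   # distribute v's SOP
            else:
                new.add(cube)
        network[name] = new

fs = frozenset
net = {'t': {fs({'a', 'b'}), fs({'c'})},        # t = ab + c
       'y': {fs({'t', 'd'})}}                   # y = td
eliminate(net, 't')
print(net['y'])   # {abd, cd}: y = abd + cd
```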

4.2 Algebraic model

Using the algebraic model, the local Boolean functions are represented by algebraic expressions. Transformations in the algebraic model only use the rules of polynomial algebra, neglecting the special properties of Boolean algebra. Thus De Morgan's rules, absorption, idempotence, the definition of complements, tautologies, contradictions and don't-care conditions cannot be used; also, only one distributive law applies [1]. For example, $(x + y) \cdot z = xz + yz$ holds, but $xy + z = (x + z)(y + z)$, $x \cdot x' = 0$ and $x + x' = 1$ are not available. The quality of the result strongly depends on the function to be optimized: a given ALU optimized using algebraic methods only can be several times as large as the result of optimizing the same ALU using Boolean methods [2]. One of the most important methods in the algebraic model is algebraic division, used in transformations like substitution and decomposition (a sketch follows).
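A sketch of weak (algebraic) division in Python (our own encoding; cubes are frozensets of literal names). This is an illustration of the standard algorithm, not a quote from the paper or from any tool.

```python
def algebraic_divide(f, d):
    quotients = []
    for di in d:
        quotients.append({c - di for c in f if di <= c})
    q = set.intersection(*quotients) if quotients else set()
    qd = {qc | dc for qc in q for dc in d}      # cubes of quotient * divisor
    r = [c for c in f if c not in qd]           # the remainder
    return q, r

fs = frozenset
f = [fs({'a', 'c'}), fs({'a', 'd'}), fs({'b', 'c'}), fs({'b', 'd'}), fs({'e'})]
d = [fs({'c'}), fs({'d'})]
q, r = algebraic_divide(f, d)
print(q, r)   # f = (a + b)(c + d) + e:  q = {a, b}, r = [e]
```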

4.3 Boolean model

In the Boolean model, the full power of Boolean algebra is exploited, so the transformations used have a greater computational complexity than those used in the algebraic model. Each variable represented by a vertex corresponds to a local Boolean function which has a local dc-set (that may be empty). When a local function is given as a sum of products, it can be optimized by means of two-level optimization, targeting a minimum number of literals (instead of a minimal number of terms) [1].

Don't-care conditions are of great importance in multiple-level logic optimization, since they offer a certain degree of freedom for optimization. The dc-set can be derived from the interconnections from and to a subnetwork's inputs and outputs. It consists of input assignments that can never occur, namely the input controllability don't-care set, and input assignments for which the output values are never observed, called the output observability don't-care set.


4.4 Delay optimization

Technology-independent delay optimization can be done using several techniques which modify the critical path¹ of a circuit [4]:

- Select transformation (reduces the length of the critical path)
- Bypass transformation (makes the critical path false)
- KMS transformation (removes the critical path)
- Local transformations of the critical path (resynthesis of parts of the circuit)
- Clustering (builds few two-level clusters of circuit partitions)

¹ The critical path of a circuit is the longest sensitizable path in terms of delay.

Chapter 5 Conclusion
Many smaller two-level circuits can be minimized exactly today by the use of smart algorithms and data structures which are capable of exploiting the problem's nature. The positional-cube notation can be used to implement data structures for product forms of literals such as implicants. It is also possible to implement data structures for implicants of functions with multiple-valued inputs and multiple binary-valued outputs. Since multiple-valued outputs can also be represented by a logarithmic number of binary-valued outputs, these data structures can be used for the optimization of any discrete function.

Multiple-level logic implementations of circuits are normally smaller than two-level logic implementations in terms of area. This is due to the fact that common subexpressions can be shared. Because of the immensely high computational complexity of the few known exact algorithms for optimization, only heuristic methods are considered practical. They usually change the critical path directly or heuristically select partitions of multiple-level combinational networks which are then subject to two-level optimization.

Bibliography
[1] Giovanni De Micheli, Synthesis and Optimization of Digital Circuits (McGraw-Hill, 1994)
[2] Soha Hassoun and Tsutomu Sasao, Logic Synthesis and Verification (Kluwer Academic Publishers, USA, 2002)
[3] Robert K. Brayton, Logic Synthesis for Hardware Systems, Lecture Notes, Lecture 3, Berkeley, USA (1998)
[4] H.-J. Wunderlich, Lecture Notes on Advanced Processor Architecture, Stuttgart, Germany (2002)
[5] Avinoam Kolodny, CAD of VLSI Systems, Lecture Notes, Lecture 14, Technion City, Israel (2002)
[6] Priyank Kalla, VLSI Logic Test, Validation and Verification, Lecture Notes, Lecture 6, Salt Lake City, USA (2002)
[7] Andreas Kuehlmann, Logic Synthesis and Verification, Lecture Notes, Lecture 7, Berkeley, USA (2003)
