IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 27, NO. 4, APRIL 1992
Low-Power CMOS Digital Design
Anantha P. Chandrakasan, Samuel Sheng, and Robert W. Brodersen, Fellow, IEEE
Abstract—Motivated by emerging battery-operated applications that demand intensive computation in portable environments, techniques are investigated which reduce power consumption in CMOS digital circuits while maintaining computational throughput. Techniques for low-power operation are shown which use the lowest possible supply voltage coupled with architectural, logic style, circuit, and technology optimizations. An architecture-based scaling strategy is presented which indicates that the optimum voltage is much lower than that determined by other scaling considerations. This optimum is achieved by trading increased silicon area for reduced power consumption.
I. INTRODUCTION
With much of the research effort of the past ten
years directed toward increasing the speed of digital
systems, present-day technologies possess computing ca-
pabilities that make possible powerful personal worksta-
tions, sophisticated computer graphics, and multimedia
capabilities such as real-time speech recognition and real-
time video. High-speed computation has thus become the
expected norm from the average user, instead of being the
province of the few with access to a powerful mainframe.
Likewise, another significant change in the attitude of
users is the desire to have access to this computation at
any location, without the need to be physically tethered
to a wired network. The requirement of portability thus
places severe restrictions on size, weight, and power.
Power is particularly important since conventional nickel-
cadmium battery technology only provides 20 W·h of
energy for each pound of weight [1]. Improvements in
battery technology are being made, but it is unlikely that
a dramatic solution to the power problem is forthcoming;
it is projected that only a 30% improvement in battery
performance will be obtained over the next five years [2].
Although the traditional mainstay of portable digital
applications has been in low-power, low-throughput uses
such as wristwatches and pocket calculators, there are an
ever-increasing number of portable applications requiring
low power and high throughput. For example, notebook
and laptop computers, representing the fastest growing
segment of the computer industry, are demanding the
same computation capabilities as found in desktop ma-
chines. Equally demanding are developments in personal
communications services (PCS’s), such as the current
Manuscript received September 4, 1991; revised November 18, 1991.
This work was supported by DARPA.
The authors are with the Department of Electrical Engineering and Com-
puter Science, University of California, Berkeley, CA 94720.
IEEE Log Number 9105976.
generation of digital cellular telephony networks which
employ complex speech compression algorithms and so-
phisticated radio modems in a pocket-sized device. Even
more dramatic are the proposed future PCS applications,
with universal portable multimedia access supporting full-
motion digital video and control via speech recognition
[3]. In these applications, not only will voice be trans-
mitted via wireless links, but data as well. This will facil-
itate new services such as multimedia database access
(video and audio in addition to text) and supercomputing
for simulation and design, through an intelligent network
which allows communication with these services or other
people at any place and time. Power for video compres-
sion and decompression and for speech recognition must
be added to the portable unit to support these services—
on top of the already lean power budget for the analog
transceiver and speech encoding. Indeed, it is apparent
that portability can no longer be associated with low
throughput; instead,
vastly increased capabilities, ac-
tually in excess of that demanded of fixed workstations,
must be placed in a low-power portable environment.
Even when power is available in nonportable applica-
tions, the issue of low-power design is becoming critical.
Up until now, this power consumption has not been of
great concern, since large packages, cooling fins, and fans
have been capable of dissipating the generated heat. How-
ever, as the density and size of the chips and systems con-
tinue to increase, the difficulty in providing adequate
cooling might either add significant cost to the system or
provide a limit on the amount of functionality that can be
provided.
Thus, it is evident that methodologies for the design of
high-throughput, low-power digital systems are needed.
Fortunately, there are clear technological trends that give
us a new degree of freedom, so that it may be possible to
satisfy these seemingly contradictory requirements. Scal-
ing of device feature sizes, along with the development
of high-density, low-parasitic packaging, such as multi-
chip modules [4]–[6], will alleviate the overriding con-
cern with the numbers of transistors being used. When
MOS technology has scaled to 0.2-µm minimum feature
size, it will be possible to place from 1 × 10^9 to 10 × 10^9
transistors in an area of 8 in × 10 in if a high-density pack-
aging technology is used. The question then becomes how
can this increased capability be used to meet a goal of
low-power operation. Previous analyses on the question
of how to best utilize increased transistor density at the
chip level concluded that for high-performance micropro-
cessors the best use is to provide increasing amounts of
on-chip memory [7]. It will be shown here that for com-
putationally intensive functions the best use is to pro-
vide additional circuitry to parallelize the computation.
Another important consideration, particularly in por-
table applications, is that many computation tasks are
likely to be real-time; the radio modem, speech and video
compression, and speech recognition all require compu-
tation that is always at near-peak rates. Conventional
schemes for conserving power in laptops, which are gen-
erally based on power-down schemes, are not appropriate
for these continually active computations. On the other
hand, there is a degree of freedom in design that is avail-
able in implementing these functions, in that once the real-
time requirements of these applications are met, there is
no advantage in increasing the computational throughput.
This fact, along with the availability of almost “limit-
less” numbers of transistors, allows a strategy to be de-
veloped for architecture design, which if it can be fol-
lowed, will be shown to provide significant power
savings.
II. SOURCES OF POWER DISSIPATION
There are three major sources of power dissipation in
digital CMOS circuits, which are summarized in the fol-
lowing equation:

P_total = p_t (C_L · V · V_dd · f_clk) + I_sc · V_dd + I_leakage · V_dd.    (1)
The first term represents the switching component of power, where C_L is the loading capacitance, f_clk is the clock frequency, and p_t is the probability that a power-consuming transition occurs (the activity factor). In most cases, the voltage swing V is the same as the supply voltage V_dd; however, in some logic circuits, such as in single-gate pass-transistor implementations, the voltage swing on some internal nodes may be slightly less [8]. The second term is due to the direct-path short-circuit current I_sc, which arises when both the NMOS and PMOS transistors are simultaneously active, conducting current directly from supply to ground [9], [10]. Finally, the leakage current I_leakage, which can arise from substrate injection and subthreshold effects, is primarily determined by fabrication technology considerations [11] (see Section III-C). The dominant term in a "well-designed" circuit is the switching component, and low-power design thus becomes the task of minimizing p_t, C_L, V_dd, and f_clk, while retaining the required functionality.
The power-delay product can be interpreted as the amount of energy expended in each switching event (or transition) and is thus particularly useful in comparing the power dissipation of various circuit styles. If it is assumed that only the switching component of the power dissipation is important, then it is given by

energy per transition = P_total / f_clk = C_effective · V_dd^2    (2)

where C_effective is the effective capacitance being switched to perform a computation and is given by C_effective = p_t · C_L.
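To make the relationships in (1) and (2) concrete, the short sketch below evaluates the switching power and the energy per transition for a few supply voltages. It is only an illustration of the formulas above; the activity factor, load capacitance, and clock frequency are assumed example values, not figures from the paper.

```python
def switching_power(p_t, c_l, v_dd, f_clk, v_swing=None):
    """Switching term of (1): p_t * C_L * V * V_dd * f_clk (V defaults to V_dd)."""
    v = v_dd if v_swing is None else v_swing
    return p_t * c_l * v * v_dd * f_clk

def total_power(p_t, c_l, v_dd, f_clk, i_sc=0.0, i_leak=0.0):
    """Equation (1): switching + short-circuit + leakage components."""
    return switching_power(p_t, c_l, v_dd, f_clk) + i_sc * v_dd + i_leak * v_dd

def energy_per_transition(p_t, c_l, v_dd):
    """Equation (2): C_effective * V_dd^2, with C_effective = p_t * C_L."""
    return (p_t * c_l) * v_dd ** 2

if __name__ == "__main__":
    p_t, c_l, f_clk = 0.25, 1e-12, 25e6   # assumed: activity 0.25, 1-pF load, 25 MHz
    for v_dd in (5.0, 3.3, 1.5):
        p = total_power(p_t, c_l, v_dd, f_clk)
        e = energy_per_transition(p_t, c_l, v_dd)
        print(f"Vdd = {v_dd:.1f} V: P = {p * 1e6:.2f} uW, E/transition = {e * 1e12:.3f} pJ")
```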
III. CIRCUIT DESIGN AND TECHNOLOGY CONSIDERATIONS
There are a number of options available in choosing the
basic circuit approach and topology for implementing var-
ious logic and arithmetic functions. Choices between
static versus dynamic implementations, pass-gate versus
conventional CMOS logic styles, and synchronous versus
asynchronous timing are just some of the options open to
the system designer. At another level, there are also var-
ious architectural/structural choices for implementing a
given logic function; for example, to implement an adder
module one can utilize a ripple-carry, carry-select, or
carry-lookahead topology. In this section, the trade-offs
with respect to low-power design between a selected set
of circuit approaches will be discussed, followed by a dis-
cussion of some general issues and factors affecting the
choice of logic family.
A. Dynamic Versus Static Logic
The choice of using static or dynamic logic depends on
more criteria than just its low-power performance, e.g.,
testability and ease of design. However, if only the
low-power performance is analyzed, it would ap-
pear that dynamic logic has some inherent advantages in
a number of areas including reduced switching activity
due to hazards, elimination of short-circuit dissipation,
and reduced parasitic node capacitances. Static logic has
advantages since there is no precharge operation and
charge sharing does not exist. Below, each of these con-
siderations will be discussed in more detail.
1) Spurious Transitions:
Static designs can exhibit
spurious transitions due to finite propagation delays from
one logic block to the next (also called critical races and
dynamic hazards [12]), i.e., a node can have multiple
transitions in a single clock cycle before settling to the
correct logic level. For example, consider a static N-bit
adder, with all bits of the summands going from
ZERO to
ONE, with the carry input set to ZERO. For all bits, the
resultant sum should be ZERO; however, the propagation
of the carry signal causes a
ONE to appear briefly at most
of the outputs. These spurious transitions dissipate extra
power over that strictly required to perform the compu-
tation. The number of these extra transitions is a function
of input patterns, internal state assignment in the logic
design, delay skew, and logic depth. To be specific about
the magnitude of this problem, an 8-b ripple-carry adder
with a uniformly distributed set of random input patterns
will typically consume an extra 30% in energy. Though
it is possible with careful logic design to eliminate these
transitions, dynamic logic intrinsically does not have this
problem, since any node can undergo at most one power-
consuming transition per clock cycle.
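The mechanism behind these spurious transitions can be illustrated with a small gate-level simulation. The sketch below uses a unit-delay model of an 8-b ripple-carry adder and counts how many sum-bit transitions occur beyond those strictly needed when random input patterns are applied. It is only a toy model of the effect, not the authors' measurement setup; the adder width, trial count, and delay model are assumptions, so the resulting percentage will not match the 30% energy figure quoted above.

```python
import random

def settle(a_bits, b_bits, carry, sums):
    """Iterate a unit-delay ripple-carry adder model to a fixed point.

    carry[0] is the external carry-in and is never rewritten.
    Returns the settled (carry, sums) and the number of sum-bit transitions seen."""
    n = len(a_bits)
    transitions = 0
    changed = True
    while changed:
        changed = False
        new_carry = carry[:]
        new_sums = sums[:]
        for i in range(n):
            # each stage recomputes from the previous time step (unit gate delay)
            s = a_bits[i] ^ b_bits[i] ^ carry[i]
            c = (a_bits[i] & b_bits[i]) | (a_bits[i] & carry[i]) | (b_bits[i] & carry[i])
            if s != sums[i]:
                new_sums[i] = s
                transitions += 1
                changed = True
            if c != carry[i + 1]:
                new_carry[i + 1] = c
                changed = True
        carry, sums = new_carry, new_sums
    return carry, sums, transitions

def extra_transition_ratio(n_bits=8, trials=2000, seed=0):
    """Spurious sum-bit transitions as a fraction of the unavoidable ones."""
    rng = random.Random(seed)
    extra = needed = 0
    for _ in range(trials):
        old_a = [rng.randint(0, 1) for _ in range(n_bits)]
        old_b = [rng.randint(0, 1) for _ in range(n_bits)]
        new_a = [rng.randint(0, 1) for _ in range(n_bits)]
        new_b = [rng.randint(0, 1) for _ in range(n_bits)]
        carry, sums = [0] * (n_bits + 1), [0] * n_bits
        carry, sums, _ = settle(old_a, old_b, carry, sums)     # settle on old inputs
        old_sums = sums[:]
        carry, sums, seen = settle(new_a, new_b, carry, sums)  # apply new inputs
        unavoidable = sum(x != y for x, y in zip(old_sums, sums))
        needed += unavoidable
        extra += seen - unavoidable
    return extra / max(needed, 1)

if __name__ == "__main__":
    print(f"spurious transitions ~ {100 * extra_transition_ratio():.0f}% of the necessary ones")
```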
2) Short-Circuit Currents: Short-circuit (direct-path)
currents, I_sc in (1), are found in static CMOS circuits.
However, by sizing transistors for equal rise and fall
times, the short-circuit component of the total power dis-
sipated can be kept to less than 20% [9] (typically < 5-
10%) of the dynamic switching component. Dynamic
logic does not exhibit this problem, except for those cases
in which static pull-up devices are used to control charge
sharing [13] or when clock skew is significant.
3) Parasitic Capacitance: Dynamic logic typically
uses fewer transistors to implement a given logic func-
tion, which directly reduces the amount of capacitance
being switched and thus has a direct impact on the power-
delay product [14], [15]. However, extra transistors may
be required to insure that charge sharing does not result
in incorrect evaluation.
4) Switching Activity: The one area in which dynamic
logic is at a distinct disadvantage is in its necessity for a
precharge operation. Since in dynamic logic every node
must be precharged every clock cycle, this means that
some nodes are precharged only to be immediately dis-
charged again as the node is evaluated, leading to a higher
activity factor. If a two-input N-tree (precharged high) dy-
namic NOR gate has a uniform input distribution of high
and low levels, then the four possible input combinations
(00,01, 10, 11) will be equally likely. There is then a 75 %
probability that the output node will discharge immedi-
ately after the precharge phase, implying that the activity
for such a gate is 0.75 (i.e., P_NOR = 0.75 · C_L · V_dd^2 · f_clk). On
the other hand, the activity factor for the static NOR coun-
terpart will be only 3/16, excluding the component due to
the spurious transitions mentioned in Section III-A-1
(power is only drawn on a ZERO-to-ONE transition, so
p_0->1 = p(0) · p(1) = p(0)(1 - p(0))). In general, gate activities
will be different for static and dynamic logic and will de-
pend on the type of operation being performed and the
input signal probabilities. In addition, the clock buffers to
drive the precharge transistors will also require power that
is not needed in a static implementation.
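The activity factors quoted above for the two-input NOR gate can be checked by enumerating the input combinations. The sketch below assumes independent, equiprobable inputs, as in the text.

```python
from itertools import product

def static_nor_activity(p_high=0.5):
    """0->1 transition probability for a static 2-input NOR with independent inputs:
    p(out = 0) * p(out = 1) = (3/4) * (1/4) = 3/16 for equiprobable inputs."""
    p_out_zero = 1 - (1 - p_high) ** 2      # output is 0 whenever any input is high
    return p_out_zero * (1 - p_out_zero)

def dynamic_nor_activity(p_high=0.5):
    """Discharge probability of a precharged-high dynamic NOR: P(at least one input high)."""
    discharge = 0.0
    for a, b in product((0, 1), repeat=2):
        p = (p_high if a else 1 - p_high) * (p_high if b else 1 - p_high)
        if a or b:
            discharge += p
    return discharge

if __name__ == "__main__":
    print("static  NOR activity:", static_nor_activity())    # 0.1875 = 3/16
    print("dynamic NOR activity:", dynamic_nor_activity())   # 0.75
```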
5) Power-Down Modes: Lastly, power-down tech-
niques achieved by disabling the clock signal have been
used effectively in static circuits, but are not as well-suited
for dynamic techniques. If the logic state is to be pre-
served during shutdown, a relatively small amount of ex-
tra circuitry must be added to the dynamic circuits to pre-
serve the state, resulting in a slight increase in parasitic
capacitance and slower speeds.
B. Conventional Static Versus Pass-Gate Logic
A clearer situation exists in the use of transfer gates
to implement logic functions, as is used in the comple-
mentary pass-gate logic (CPL) family [8], [10]. In Fig.
1, the schematic of a typical static CMOS logic circuit for
a full adder is shown along with a static CPL version [8].
The pass-gate design uses only a single transmission
NMOS gate, instead of a full complementary pass gate to
reduce node capacitance. Pass-gate logic is attractive as
fewer transistors are required to implement important logic
functions, such as XOR's, which require only two pass transistors in a CPL implementation.

[Fig. 1. Comparison of conventional CMOS and CPL full adders [8]. Transistor count: conventional CMOS, 40; CPL, 28.]

This particularly efficient implementation of an XOR is important since it is key
to most arithmetic functions, permitting adders and mul-
tipliers to be created using a minimal number of devices.
Likewise, multiplexers, registers, and other key building
blocks are simplified using pass-gate designs.
However, a CPL implementation as shown in Fig. 1
has two basic problems. First, the threshold drop across
the single-channel pass transistors results in reduced cur-
rent drive and hence slower operation at reduced supply
voltages; this is important for low-power design since it
is desirable to operate at the lowest possible voltage lev-
els. Second, since the "high" input voltage level at the
regenerative inverters is not V_dd, the PMOS device in the
inverter is not fully turned off, and hence direct-path static
power dissipation could be significant. To solve these
problems, reduction of the threshold voltage has proven
effective, although if taken too far it will incur a cost in
dissipation due to subthreshold leakage (see Section III-
C) and reduced noise margins. The power dissipation for
a pass-gate family adder with zero-threshold pass transis-
tors at a supply voltage of 4 V was reported to be 30%
lower than a conventional static design, with the differ-
ence being even more significant at lower supply voltages
[8].
C. Threshold Voltage Scaling
Since a significant power improvement can be gained
through the use of low-threshold MOS devices, the ques-
tion of how low the thresholds can be reduced must be
addressed. The limit is set by the requirement to retain
adequate noise margins and the increase in subthreshold
currents. Noise margins will be relaxed in low-power de-
signs because of the reduced currents being switched;
however, the subthreshold currents can result in signifi-
cant static power dissipation. Essentially, subthreshold leakage occurs due to carrier diffusion between the source and the drain when the gate-source voltage V_gs has exceeded the weak inversion point, but is still below the threshold voltage V_t, where carrier drift is dominant. In this regime, the MOSFET behaves similarly to a bipolar transistor, and the subthreshold current is exponentially dependent on the gate-source voltage V_gs, and approximately independent of the drain-source voltage V_ds, for V_ds approximately larger than 0.1 V. Associated with this is the subthreshold slope S_th, which is the amount of voltage required to drop the subthreshold current by one decade. At room temperature, typical values for S_th lie between 60 and 90 mV/decade of current, with 60 mV/decade being the lower limit. Clearly, the lower S_th is, the better, since it is desirable to have the device "turn off" as close to V_t as possible. As a reference, for an L = 1.5-µm, W = 70-µm NMOS device, at the point where V_gs equals V_t, with V_t defined as where the surface inversion charge density is equal to the bulk doping, approximately 1 µA of leakage current is exhibited, or 0.014 µA/µm of gate width [16]. The issue is whether this extra current is negligible in comparison to the time-average current during switching. For a CMOS inverter (PMOS: W = 8 µm, NMOS: W = 4 µm), the current was measured to be 64 µA over 3.7 ns at a supply voltage of 2 V. This implies that there would be a 100% power penalty for subthreshold leakage if the device were operating at a clock speed of 25 MHz with an activity factor of p_t = 1/6, i.e., the devices were left idle and leaking current 83% of the time.
It is not advisable, therefore, to use a true zero threshold
device, but instead to use thresholds of at least 0.2 V,
which provides for at least two orders of magnitude of
reduction of subthreshold current. This provides a good
compromise between improvement of current drive at low
supply voltage operation and keeping subthreshold power
dissipation to a negligible level. This value may have to
be higher in dynamic circuits to prevent accidental dis-
charge during the evaluation phase [11]. Fortunately, de-
vice technologists are addressing the problem of
subthreshold currents in future scaled technologies, and
reducing the supply voltages also serves to reduce the cur-
rent by reducing the maximum allowable drain–source
voltage [17], [18]. The design of future circuits for lowest
power operation should therefore explicitly take into ac-
count the effect of subthreshold currents.
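The trade-off described in this section can be illustrated with the usual exponential subthreshold model, I = I_0 · 10^((V_gs - V_t)/S_th). The sketch below uses the 0.014-µA/µm figure quoted above as I_0 and an assumed subthreshold slope of 90 mV/decade to show that raising the threshold from 0 V to 0.2 V cuts the off-state leakage by more than two orders of magnitude; the parameter values are illustrative, not a device characterization.

```python
def subthreshold_current(v_gs, v_t, width_um, i0_per_um=0.014e-6, s_th=0.090):
    """Leakage below threshold: I = I0 * W * 10^((V_gs - V_t) / S_th).

    I0 is the current per micrometer of width at V_gs = V_t (0.014 uA/um, from the
    reference device above); S_th = 90 mV/decade is an assumed subthreshold slope."""
    return i0_per_um * width_um * 10 ** ((v_gs - v_t) / s_th)

if __name__ == "__main__":
    w = 70.0                                        # um, the reference device width
    i_vt_zero = subthreshold_current(0.0, 0.0, w)   # zero-threshold device, gate off
    i_vt_02 = subthreshold_current(0.0, 0.2, w)     # same device with V_t = 0.2 V
    print(f"V_t = 0.0 V: {i_vt_zero * 1e6:.2f} uA of off-state leakage")
    print(f"V_t = 0.2 V: {i_vt_02 * 1e9:.1f} nA of off-state leakage")
    print(f"reduction : {i_vt_zero / i_vt_02:.0f}x (more than two decades)")
```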
D. Power-Down Strategies
In synchronous designs, the logic between registers is
continuously computing every clock cycle based on its
new inputs. To reduce the power in synchronous designs,
it is important to minimize switching activity by powering
down execution units when they are not performing “use-
ful” operations. This is an important concern since logic
modules can be switching and consuming power even
when they are not being actively utilized [19].
While the design of synchronous circuits requires spe-
cial design effort and power-down circuitry to detect and
shut down unused units, self-timed logic has inherent
power-down of unused modules, since transitions occur
only when requested. However, since self-timed imple-
mentations require the generation of a completion signal
indicating the outputs of the logic module are valid, there
is additional overhead circuitry. There are several circuit
approaches to generate the requisite completion signal.
One method is to use dual-rail coding, which is implicit
in certain logic families such as the DCVSL [13], [20].
The completion signal in a combinational macrocell made
up of cascaded DCVSL gates consists of simply ORing
the outputs of only the last gate in the chain, leading to
small overhead requirements. However, for each com-
putation, dual-rail coding guarantees a switching event
will occur since at least one of the outputs must evaluate
to zero. We found that the dual-rail DCVSL family con-
sumes at least two times more in energy per input transi-
tion than a conventional static family. Hence, self-timed
implementations can prove to be expensive in terms of
energy for data paths that are continuously computing.
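The guaranteed-switching property of dual-rail coding can be seen with a simple count: on every evaluation exactly one of the two output rails discharges (and is later precharged), whereas a single-rail static gate draws dynamic power only when its output actually changes. The sketch below is just a transition-counting illustration with a made-up output sequence; the factor-of-two energy figure above comes from the authors' comparison, not from this model.

```python
def dual_rail_switching_events(outputs):
    """With dual-rail coding, each evaluation discharges exactly one of the two rails,
    so every computed value costs one power-consuming transition."""
    return len(outputs)

def single_rail_switching_events(outputs):
    """A single-rail static gate draws dynamic power only when its output changes."""
    return sum(1 for prev, cur in zip(outputs, outputs[1:]) if prev != cur)

if __name__ == "__main__":
    values = [0, 0, 1, 1, 0, 1, 0, 0]   # made-up output sequence for one gate
    print("dual-rail events  :", dual_rail_switching_events(values))
    print("single-rail events:", single_rail_switching_events(values))
```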
IV. VOLTAGE SCALING
Thus far we have been primarily concerned with the
contributions of capacitance to the power expression
C·V^2·f.
Clearly, though, the reduction of V should yield even
greater benefits; indeed, reducing the supply voltage is
the key to low-power operation, even after taking into ac-
count the modifications to the system architecture, which
are required to maintain the computational throughput.
First, a review of circuit behavior (delay and energy char-
acteristics) as a function of scaling supply voltage and
feature sizes will be presented. By comparison with ex-
perimental data, it is found that simple first-order theory
yields an amazingly accurate representation of the various
dependencies over a wide variety of circuit styles and ar-
chitectures. A survey of two previous approaches to sup-
ply-voltage scaling is then presented, which were focused
on maintaining reliability and performance. This is fol-
lowed by our architecture-driven approach, from which
an “optimal” supply voltage based on technology, archi-
tecture, and noise margin constraints is derived.
A. Impact on Delay and Power–Delay Product
As noted in (2), the energy per transition or equiva-
lently the power-delay product in “properly designed”
CMOS circuits (as discussed in Section II) is proportional

to V_dd^2.

[Fig. 2. Power-delay product exhibiting square-law dependence for two different circuits; power-delay product versus Vdd for (1) a 51-stage ring oscillator and (2) an 8-bit ripple-carry adder.]

This is seen from Fig. 2, which is a plot of two
experimental circuits that exhibit the expected V_dd^2 depen-
dence. Therefore, it is only necessary to reduce the supply
voltage for a
quadratic improvement in the power–delay
product of a logic family.
Unfortunately, this simple solution to low-power design comes at a cost. As shown in Fig. 3, the effect of reducing V_dd on the delay is shown for a variety of different logic circuits that range in size from 56 to 44 000 transistors spanning a variety of functions; all exhibit essentially the same dependence (see Table I). Clearly, we pay a speed penalty for a V_dd reduction, with the delays drastically increasing as V_dd approaches the sum of the threshold voltages of the devices. Even though the exact analysis of the delay is quite complex if the nonlinear characteristics of a CMOS gate are taken into account, it is found that a simple first-order derivation adequately predicts the experimentally determined dependence and is given by

T_d = C_L · V_dd / (µ C_ox (W/L) (V_dd - V_t)^2).    (3)
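The behavior predicted by (3) is easy to visualize numerically. The sketch below normalizes the first-order delay to its value at V_dd = 5 V; the process parameters (µC_ox, W/L, C_L) and the 0.7-V threshold are placeholder assumptions, since only the shape of the curve as V_dd approaches V_t matters here.

```python
def first_order_delay(v_dd, v_t=0.7, c_l=100e-15, u_cox=50e-6, w_over_l=10.0):
    """Equation (3): T_d = C_L * V_dd / (u*Cox * (W/L) * (V_dd - V_t)^2), for V_dd > V_t."""
    return c_l * v_dd / (u_cox * w_over_l * (v_dd - v_t) ** 2)

if __name__ == "__main__":
    t_ref = first_order_delay(5.0)
    for v_dd in (5.0, 4.0, 3.0, 2.0, 1.5, 1.2):
        # delay grows rapidly as the supply approaches the device threshold
        print(f"Vdd = {v_dd:.1f} V -> normalized delay {first_order_delay(v_dd) / t_ref:.2f}x")
```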
We also evaluated (through experimental measure-
ments and SPICE simulations) the energy and delay per-
formance for several different logic styles and topologies
using an 8-b adder as a reference; the results are shown
on a log-log plot in Fig. 4. We see that the power-delay
product improves as delays increase (through reduction of
the supply voltage), and therefore it is desirable to operate
at the slowest possible speed. Since the objective is to
reduce power consumption while maintaining the overall
system throughput, compensation for these increased de-
lays at low voltages is required. Of particular interest in
this figure is the range of energies required for a transition
at a given amount of delay. The best logic family we ana-
[Fig. 3. Data demonstrating that delay characteristics follow simple first-order theory; delay versus Vdd (volts) for the circuits of Table I, including a ring oscillator and a microcode DSP chip.]
TABLE I
DETAILS OF COMPONENTS USED FOR THE STUDY IN FIG. 3

Component (all in 2 µm)    # of Transistors   Area         Comments
Microcode DSP chip [21]    44 802             94 mm^2      20-b data path
Multiplier                 20 432             12.2 mm^2    24 x 24 b
Adder                      256                0.083 mm^2   conventional static
Ring oscillator            102                0.055 mm^2   51 stages
Clock generator            56                 0.04 mm^2    cross-coupled NOR
[Fig. 4. Data showing improvement in power-delay product at the cost of speed for various circuit approaches; power-delay product versus delay (log-log) for 8-bit adders in 2.0 µm, with decreasing Vdd along each curve. Legend: 1: pass-transistor logic (CPL), SPICE [8]; 2: optimized static (with propagate/generate logic) [10]; 3: conventional static [10]; 5: carry select [10]; 6: differential cascode voltage switch logic (dynamic), SPICE [13].]

References

N. H. E. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective (book).

H. J. M. Veendrick, "Short-circuit dissipation of static CMOS circuitry and its impact on the design of buffer circuits" (journal article).