





REGN NO.11718085










I hereby declare that I have completed four weeks of summer training at Electronics Corporation of India Limited, Hyderabad, from 03 June 2019 to 02 July 2019 under the guidance of Mr. Shridara Shetty. I declare that I worked with full dedication during these four weeks of training and that my learning outcomes fulfil the requirements of training for the award of the degree in Electronics and Communication Engineering, Lovely Professional University, Phagwara.


REG NO 11718085

DATE 27-07-2019



ECIL was set up under the Department of Atomic Energy in the year 1967 with a view to generating a strong indigenous capability in the field of professional-grade electronics. The initial accent was on self-reliance, and ECIL was engaged in the design, development, manufacture and marketing of several products with emphasis on three technology lines, viz. computers, control systems and communications. ECIL thus evolved as a multi-product company serving multiple sectors of the Indian economy, with emphasis on import substitution and the development of products and services that are of economic and strategic significance to the country.
Electronics Corporation of India Limited (ECIL) entered into collaboration with OSI
Systems Inc. and set up a joint venture, "ECIL-RAPISCAN LIMITED".
This joint venture manufactures the equipment designed by Rapiscan (U.K./U.S.A.) with
the same state-of-the-art technology; the requisite technology is supplied by Rapiscan and the final
product is manufactured at the ECIL facility.
Recognizing the need to generate quality IT professionals and to meet the growing
demand of the IT industry, a separate division, namely the Computer Education Division (CED), was established to impart quality
professional IT training under the brand name ECIT. ECIT, the prestigious offshoot of
ECIL, is an emerging winner and is at the forefront of IT education in the country.

ECIL’s mission is to consolidate its status as a valued national asset in the area of
strategic electronics, with specific focus on Atomic Energy, Defence, Security and other such critical
sectors of strategic national importance.
 To continue to serve the country’s needs for the peaceful uses of Atomic Energy, the special
and strategic requirements of Defence and Space, Electronic Security Systems, and
support for the Civil Aviation sector.
 To establish newer Technology products such as Container Scanning Systems and
Explosive Detectors.
 To re-engineer the company to become nationally and internationally competitive by
paying particular attention to delivery, cost and quality in all its activities.
 To explore new avenues of business and work for growth in strategic sectors, in addition
to working towards realizing technological solutions for the benefit of society in areas like
Agriculture, Education, Health, Power, Transportation, Food, Disaster Management, etc.
The Company is organized into divisions serving various sectors of national and
commercial importance. Divisions serving the nuclear sector include the Control & Automation
Division (CAD) and the Instruments & Systems Division (ISD); divisions serving the defence sector include the
Communications Division (CND), Antenna Products Division (APD), Servo Systems Division
(SSD), etc.; divisions handling commercial products are the Telecom Division (TCD), Customer
Support Division (CSD) and Computer Education Division (CED).

ECIL is currently operating in major business segments, including exports, such as instruments and
systems design, industrial/nuclear systems, servo systems, antenna products, communication, control
and automation, and several other components.
The company has played a very significant role in the training and growth of high-calibre
technical and managerial manpower, especially in the fields of Computers and Information
Technology. Though the initial thrust was on meeting the Control & Instrumentation
requirements of the Nuclear Power Programme, the expanded scope of self-reliance pursued by
ECIL enabled the company to develop various products to cater to the needs of Defence, Civil
Aviation, Information & Broadcasting, Telecommunications, etc.

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by
combining millions of transistors or devices into a single chip. VLSI began in the 1970s when
complex semiconductor and communication technologies were being developed, and the MOS
transistor was widely adopted. The microprocessor is a VLSI device. Before the introduction of
VLSI technology, most ICs had a limited set of functions they could perform. An electronic
circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI lets IC designers add all
of these into one chip.
The history of the transistor dates to the 1920s when several inventors attempted devices that
were intended to control current in solid-state diodes and convert them into triodes. Success
came after World War II, when the use of silicon and germanium crystals as radar detectors led
to improvements in fabrication and theory. Scientists who had worked on radar returned to solid-
state device development. With the invention of transistors at Bell Labs in 1947, the field of
electronics shifted from vacuum tubes to solid-state devices.
With the small transistor at their hands, electrical engineers of the 1950s saw the possibilities of
constructing far more advanced circuits. However, as the complexity of circuits grew, problems arose.
One problem was the size of the circuit. A complex circuit like a computer was dependent on
speed. If the components were large, the wires interconnecting them had to be long. The electric
signals took time to travel through the circuit, thus slowing the computer.
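The point about wire length can be made concrete with a back-of-the-envelope calculation. The propagation factor and the wire lengths below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope wire delay estimate (illustrative numbers only).
SPEED_OF_LIGHT_M_PER_S = 3.0e8
PROPAGATION_FACTOR = 0.5  # assume signals travel at roughly half the speed of light

def wire_delay_ns(length_m: float) -> float:
    """Nanoseconds for a signal to traverse a wire of the given length."""
    return length_m / (SPEED_OF_LIGHT_M_PER_S * PROPAGATION_FACTOR) * 1e9

long_wire = wire_delay_ns(1.5)      # a 1.5 m chassis-level wire: about 10 ns
short_wire = wire_delay_ns(0.0015)  # a 1.5 mm on-chip wire: about 0.01 ns
```

Shrinking the interconnect by three orders of magnitude shrinks the signal travel time by the same factor, which is exactly why integration made faster computers possible.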
The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by
making all the components and the chip out of the same block (monolith) of semiconductor
material. The circuits could be made smaller, and the manufacturing process could be automated.
This led to the idea of integrating all components on a single-crystal silicon wafer, which led to
small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the
late 1960s.
Further integration was made possible with the wide adoption of the MOS transistor, originally
invented by M. Atalla and Dawon Kahng at Bell Labs in 1959. General
Microelectronics introduced the first commercial MOS integrated circuit in 1964. In the early
1970s, MOS integrated circuit technology allowed the integration of more than 10,000 transistors
in a single chip. This paved the way for large-scale integration (LSI) and then VLSI in the 1970s
and 1980s, with tens of thousands of transistors on a single chip (later hundreds of thousands,
then millions, and now billions).
The first semiconductor chips held two transistors each. Subsequent advances added more
transistors, and as a consequence, more individual functions or systems were integrated over
time. The first integrated circuits held only a few devices, perhaps as many as
ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic
gates on a single device. Now known retrospectively as small-scale integration (SSI),
improvements in technique led to devices with hundreds of logic gates, known as medium-scale
integration (MSI). Further improvements led to large-scale integration (LSI), i.e. systems with
at least a thousand logic gates. Current technology has moved far past this mark and
today's microprocessors have many millions of gates and billions of individual transistors.
At one time, there was an effort to name and calibrate various levels of large-scale integration
above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge number of
gates and transistors available on common devices has rendered such fine distinctions moot.
Terms suggesting greater than VLSI levels of integration are no longer in widespread use.
In 2008, billion-transistor processors became commercially available. This became more
commonplace as semiconductor fabrication advanced from the then-current generation
of 65 nm processes. Current designs, unlike the earliest devices, use extensive design
automation and automated logic synthesis to lay out the transistors, enabling higher levels of
complexity in the resulting logic functionality. Certain high-performance logic blocks, like the
SRAM (static random-access memory) cell, are still designed by hand to ensure the highest performance.
Structured design
Structured VLSI design is a modular methodology originated by Carver Mead and Lynn
Conway for saving microchip area by minimizing the interconnect fabric's area. This is obtained
by a repetitive arrangement of rectangular macro blocks which can be interconnected using wiring
by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice
cells. In complex designs this structuring may be achieved by hierarchical nesting.
Structured VLSI design was popular in the early 1980s but lost its popularity later because
of the advent of placement and routing tools, which waste a lot of area by routing; this is tolerated
because of the progress of Moore's law. When introducing the hardware description
language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design"
(originally "structured LSI design"), echoing Edsger Dijkstra's structured
programming approach, which uses procedure nesting to avoid chaotic spaghetti-structured programs.
As microprocessors become more complex due to technology scaling, microprocessor designers
have encountered several challenges which force them to think beyond the design plane, and
look ahead to post-silicon:

 Process variation – As photolithography techniques get closer to the fundamental laws of

optics, achieving high accuracy in doping concentrations and etched wires is becoming more
difficult and prone to errors due to variation. Designers now must simulate across multiple
fabrication process corners before a chip is certified ready for production, or use system-
level techniques for dealing with effects of variation.
 Stricter design rules – Due to lithography and etch issues with scaling, design
rules for layout have become increasingly stringent. Designers must keep in mind an ever
increasing list of rules when laying out custom circuits. The overhead for custom design is
now reaching a tipping point, with many design houses opting to switch to electronic design
automation (EDA) tools to automate their design process.
 Timing/design closure – As clock frequencies tend to scale up, designers are finding it more
difficult to distribute and maintain low clock skew between these high frequency clocks
across the entire chip. This has led to a rising interest
in multicore and multiprocessor architectures, since an overall speedup can be obtained even
with lower clock frequency by using the computational power of all the cores.
 First-pass success – As die sizes shrink (due to scaling), and wafer sizes go up (due to lower
manufacturing costs), the number of dies per wafer increases, and the complexity of making
suitable photomasks goes up rapidly. A mask set for a modern technology can cost several
million dollars. This non-recurring expense deters the old iterative philosophy involving
several "spin-cycles" to find errors in silicon, and encourages first-pass silicon success.
Several design philosophies have been developed to aid this new design flow, including
design for manufacturing (DFM), design for test (DFT), and Design for X.
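The timing/design-closure point above, that many cores at a lower clock frequency can still outperform one faster core, can be quantified with Amdahl's law. The parallel fraction and clock ratio below are purely illustrative assumptions:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Amdahl's law: overall speedup when the parallelizable fraction of
    a workload is spread across n cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# Hypothetical comparison: a workload that is 90% parallelizable, run on
# four cores clocked at only 80% of a single core's frequency.
single_core_throughput = 1.0
four_core_throughput = 0.8 * amdahl_speedup(0.9, 4)  # still well above 1.0
```

Even with the 20% clock penalty assumed here, the four-core configuration comes out ahead, which is the economics driving the multicore trend described above.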


Complementary metal-oxide-semiconductor (CMOS) is a technology for

constructing integrated circuits, employing MOSFET transistors. CMOS technology is used
in microprocessors, microcontrollers, static RAM, and other digital logic circuits. CMOS
technology is also used for several analog circuits such as image sensors (CMOS
sensor), data converters, and highly integrated transceivers for many types of
communication. Frank Wanlass invented CMOS in 1963 while at Fairchild
Semiconductor and was granted US patent 3,356,858 in 1967. In this groundbreaking
patent, the fabrication of CMOS devices was outlined, on the basis of thermal oxidation of
a silicon substrate to yield a layer of silicon dioxide located between the drain contact
and the source contact.

CMOS is also sometimes referred to as complementary-symmetry metal–oxide–

semiconductor (COS-MOS). "COS-MOS" was an RCA trademark, which forced other
manufacturers to find another name. The words "complementary symmetry" refer to the
typical design style with CMOS, which uses complementary and symmetrical pairs of p-
type and n-type metal–oxide–semiconductor field-effect transistors (MOSFETs) for logic functions.
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a
customer or a designer after manufacturing – hence the term "field-programmable". The FPGA
configuration is generally specified using a hardware description language (HDL), similar to that
used for an Application-Specific Integrated Circuit (ASIC). Circuit diagrams were previously
used to specify the configuration, but this is increasingly rare due to the advent of electronic
design automation tools.

A Spartan FPGA from Xilinx

FPGAs contain an array of programmable logic blocks, and a hierarchy of "reconfigurable

interconnects" that allow the blocks to be "wired together", like many logic gates that can be
inter-wired in different configurations. Logic blocks can be configured to perform
complex combinational functions, or merely simple logic gates like AND and XOR. In most
FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more
complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic
functions, allowing flexible reconfigurable computing as performed in computer software.
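One way to picture how a logic block is "configured" is as a lookup table whose configuration bits are simply the truth table of the desired function. The small model below is an illustrative sketch, not any vendor's actual architecture:

```python
class LUT:
    """A k-input lookup table: the configuration word is the truth table."""
    def __init__(self, k: int, config_bits: list):
        assert len(config_bits) == 2 ** k
        self.k = k
        self.config = config_bits

    def eval(self, *inputs) -> int:
        assert len(inputs) == self.k
        # The input bits form an address into the configuration memory.
        addr = 0
        for bit in inputs:
            addr = (addr << 1) | bit
        return self.config[addr]

# "Reprogramming" the same block: an AND gate vs. an XOR gate, both 2-input.
and_gate = LUT(2, [0, 0, 0, 1])   # truth table of a AND b
xor_gate = LUT(2, [0, 1, 1, 0])   # truth table of a XOR b
```

Loading a different configuration word turns the identical hardware into a different gate, which is the essence of the reprogrammability described above.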
Technical design
Contemporary field-programmable gate arrays (FPGAs) have large resources of logic gates and
RAM blocks to implement complex digital computations. As FPGA designs employ very fast
I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid
data within setup time and hold time.
Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs
can be used to implement any logical function that an ASIC can perform. The ability to update
the functionality after shipping, partial re-configuration of a portion of the design, and the low
non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher
unit cost) offer advantages for many applications.

Some FPGAs have analog features in addition to digital functions. The most common analog
feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on
lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on
heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common
are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked
loops with embedded voltage-controlled oscillators used for clock generation and management
and for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery.
Fairly common are differential comparators on input pins designed to be connected
to differential signaling channels. A few "mixed signal FPGAs" have integrated
peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with
analog signal conditioning blocks allowing them to operate as a system-on-a-chip (SoC). Such
devices blur the line between an FPGA, which carries digital ones and zeros on its internal
programmable interconnect fabric, and field-programmable analog array (FPAA), which carries
analog values on its internal programmable interconnect fabric.
The FPGA industry sprouted from programmable read-only memory (PROM)
and programmable logic devices (PLDs). PROMs and PLDs both had the option of being
programmed in batches in a factory or in the field (field-programmable). However,
programmable logic was hard-wired between logic gates.
Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in
1984 – the EP300 – which featured a quartz window in the package that allowed users to shine
an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration. In
December 2015, Intel acquired Altera.
Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially
viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable
gates and programmable interconnects between gates, the beginnings of a new technology and
market. The XC2064 had 64 configurable logic blocks (CLBs), with two three-input lookup
tables (LUTs). More than 20 years later, Freeman was entered into the National Inventors Hall of
Fame for his invention.
In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman
to develop a computer that would implement 600,000 reprogrammable gates. Casselman was
successful and a patent related to the system was issued in 1992. Altera and Xilinx continued
unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up,
eroding significant market share. By 1993, Actel (now Microsemi) was serving about 18 percent
of the market. By 2013, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together
represented approximately 77 percent of the FPGA market.

The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the
volume of production. In the early 1990s, FPGAs were primarily used
in telecommunications and networking. By the end of the decade, FPGAs found their way into
consumer, automotive, and industrial applications.
Companies like Microsoft have started to use FPGAs to accelerate high-performance,
computationally intensive systems (like the data centers that operate their Bing search engine),
due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs
to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center
workloads for their Azure cloud computing platform.
In 2012 the coarse-grained architectural approach was taken a step further by combining
the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and
related peripherals to form a complete "system on a programmable chip". This work mirrors the
architecture created by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group in
1982 which combined a reconfigurable CPU architecture on a single chip called the SB24.
Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable
SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within
the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-
core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses
an AVR processor in combination with Atmel's programmable logic architecture.
The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with
up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-
to-digital converters and digital-to-analog converters in their flash-memory-based FPGA fabric.

A Xilinx Zynq-7000 All Programmable System on a Chip.

Soft Core
An alternate approach to using hard-macro processors is to make use of soft processor IP
cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are

examples of popular softcore processors. Many modern FPGAs are programmed at "run time",
which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that
reconfigure themselves to suit the task at hand. Additionally, new, non-FPGA architectures are
beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a
hybrid approach by providing an array of processor cores and FPGA-like programmable cores on
the same chip.
Comparison to ASICs
Historically, FPGAs have been slower, less energy-efficient and generally achieved
less functionality than their fixed ASIC counterparts. An older study showed that designs
implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic
power, and run at one third the speed of corresponding ASIC implementations. More recently,
FPGAs such as the Xilinx Virtex-7 or the Altera Stratix 5 have come to rival corresponding
ASIC and ASSP ("application-specific standard part", such as a standalone USB interface chip)
solutions by providing significantly reduced power usage, increased speed, lower materials cost,
minimal implementation real estate, and increased possibilities for re-configuration 'on the fly'.
Where previously a design may have included 6 to 10 ASICs, the same design can now be
achieved using only one FPGA.
Advantages of FPGAs include the ability to re-program when already deployed (i.e. "in the
field") to fix bugs, and often include shorter time to market and lower non-recurring
engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their
prototype hardware on FPGAs, but manufacture their final version as an ASIC so that it can no
longer be modified after the design has been committed.
Complex Programmable Logic Devices (CPLD)
The primary differences between complex programmable logic devices (CPLDs) and FPGAs
are architectural. A CPLD has a comparatively restrictive structure consisting of one or more
programmable sum-of-products logic arrays feeding a relatively small number of
clocked registers. As a result, CPLDs are less flexible, but have the advantage of more
predictable timing delays and a higher logic-to-interconnect ratio. FPGA architectures, on the
other hand, are dominated by interconnect. This makes them far more flexible (in terms of the
range of designs that are practical for implementation on them) but also far more complex to
design for, or at least requiring more complex electronic design automation (EDA) software.
In practice, the distinction between FPGAs and CPLDs is often one of size as FPGAs are usually
much larger in terms of resources than CPLDs. Typically only FPGAs contain more
complex embedded functions such as adders, multipliers, memory, and serializer/deserializers.
Another common distinction is that CPLDs contain embedded flash memory to store their
configuration while FPGAs usually require external non-volatile memory (but not always).
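The sum-of-products structure at the heart of a CPLD can be sketched behaviorally. The macrocell function and the term encoding below are hypothetical illustrations, not any real device's fuse map:

```python
def sum_of_products(inputs: dict, product_terms: list) -> int:
    """Evaluate a programmable AND-OR array.

    Each product term is a list of literals like 'a' or '~b'; the outputs
    of the AND terms are OR-ed together (the 'sum')."""
    def literal(lit: str) -> int:
        if lit.startswith("~"):
            return 1 - inputs[lit[1:]]
        return inputs[lit]

    return int(any(all(literal(l) for l in term) for term in product_terms))

# Hypothetical macrocell implementing  y = (a AND NOT b) OR (b AND c):
terms = [["a", "~b"], ["b", "c"]]
y = sum_of_products({"a": 1, "b": 0, "c": 0}, terms)  # first term fires
```

Because every output is just one AND-OR array away from the inputs before hitting a register, CPLD timing is highly predictable, in contrast with the interconnect-dominated FPGA fabric described above.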

When a design requires simple instant-on functionality (logic already configured at power-up), CPLDs are
generally preferred. For most other applications FPGAs are generally preferred. Sometimes both
CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally
perform glue-logic functions and are responsible for “booting” the FPGA as well as
controlling the reset and boot sequence of the complete circuit board. Therefore, depending on the
application, it may be judicious to use both FPGAs and CPLDs in a single design.
Security considerations
FPGAs have both advantages and disadvantages as compared to ASICs or secure
microprocessors, concerning hardware security. FPGAs' flexibility makes malicious
modifications during fabrication a lower risk. Previously, for many FPGAs, the
design bitstream was exposed while the FPGA loads it from external memory (typically on every
power-on). All major FPGA vendors now offer a spectrum of security solutions to designers
such as bitstream encryption and authentication. For
example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an
external flash memory.
FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's
ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not
need encryption. In addition, flash memory for a lookup table provides single-event
upset protection for space applications. Customers wanting a higher guarantee of tamper
resistance can use write-once antifuse FPGAs from vendors such as Microsemi.
With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physically
unclonable functions to provide high levels of protection against physical attacks.
In 2012 researchers Sergei Skorobogatov and Christopher Woods demonstrated that FPGAs can
be vulnerable to hostile intent. They discovered a critical backdoor vulnerability had been
manufactured in silicon as part of the Actel/Microsemi ProAsic 3 making it vulnerable on many
levels such as reprogramming crypto and access keys, accessing unencrypted bitstream,
modifying low-level silicon features, and extracting configuration data.

An FPGA can be used to solve any problem which is computable. This is trivially proven by the
fact that FPGAs can be used to implement a soft microprocessor, such as the
Xilinx MicroBlaze or Altera Nios II. Their advantage lies in that they are significantly faster for
some applications because of their parallel nature and optimality in terms of the number of gates
used for certain processes.

FPGAs originally began as competitors to CPLDs to implement glue logic for printed circuit
boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to
the point where some are now marketed as full systems on chips (SoCs). Particularly with the
introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications
which had traditionally been the sole reserve of digital signal processor hardware (DSPs) began
to incorporate FPGAs instead.
Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to
accelerate certain parts of an algorithm and share part of the computation between the FPGA and
a generic processor. The search engine Bing is noted for adopting FPGA acceleration for its
search algorithm in 2014. As of 2018, FPGAs are seeing increased use as AI
accelerators including Microsoft's so-termed "Project Catapult" and for accelerating artificial
neural networks for machine learning applications.
Traditionally, FPGAs have been reserved for specific vertical applications where the
volume of production is small. For these low-volume applications, the premium that companies
pay in hardware cost per unit for a programmable chip is more affordable than the development
resources spent on creating an ASIC. As of 2017, new cost and performance dynamics have
broadened the range of viable applications.


Simplified example illustration of a logic cell (LUT – Lookup table, FA – Full adder, DFF – D-
type flip-flop)

The most common FPGA architecture consists of an array of logic blocks, I/O pads, and routing
channels. Generally, all the routing channels have the same width (number of wires). Multiple
I/O pads may fit into the height of one row or the width of one column in the array.
An application circuit must be mapped into an FPGA with adequate resources. While the number
of CLBs/LABs and I/Os required is easily determined from the design, the number of routing
tracks needed may vary considerably even among designs with the same amount of logic.
For example, a crossbar switch requires much more routing than a systolic array with the same
gate count. Since unused routing tracks increase the cost (and decrease the performance) of the
part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that
most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is
determined by estimates such as those derived from Rent's rule or by experiments with existing
designs. As of 2018, network-on-chip architectures for routing and interconnection are being explored.
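Rent's rule, referred to above, takes the form T = t · g^p, relating a block's number of external terminals T to its gate count g. A minimal sketch follows, with the coefficient t and Rent exponent p chosen purely for illustration rather than calibrated to any real fabric:

```python
def rent_terminals(gates: int, t: float = 4.0, p: float = 0.6) -> float:
    """Rent's rule estimate T = t * g**p of the external terminals needed
    by a block of g gates (t and p here are illustrative assumptions)."""
    return t * gates ** p
```

Because p < 1, terminal demand grows sublinearly with logic size, which is what lets manufacturers provision "just enough" routing tracks for most designs of a given LUT count.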
In general, a logic block consists of a few logical cells (called ALM, LE, slice etc.). A typical
cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop, as shown above. The
LUTs are in this figure split into two 3-input LUTs. In normal mode those are combined into a 4-
input LUT through the left multiplexer (mux). In arithmetic mode, their outputs are fed to the
adder. The selection of mode is programmed into the middle MUX. The output can be
either synchronous or asynchronous, depending on the programming of the mux to the right, in
the figure example. In practice, the entire adder or parts of it are stored as functions in the LUTs
in order to save space.
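The cell just described can be sketched behaviorally. This is a simplified model of the figure's structure (two 3-input LUTs combined by a mux into a 4-input LUT, plus a D flip-flop), not any specific vendor's cell:

```python
def lut3(config8: int, a: int, b: int, c: int) -> int:
    """3-input LUT: config8 holds the 8-entry truth table as a bit mask."""
    return (config8 >> ((a << 2) | (b << 1) | c)) & 1

class LogicCell:
    """Simplified logic cell: two 3-input LUTs whose outputs a mux combines
    into a 4-input LUT, with a D flip-flop for the synchronous output."""
    def __init__(self, config16: int):
        self.lut_lo = config16 & 0xFF         # selected when input d = 0
        self.lut_hi = (config16 >> 8) & 0xFF  # selected when input d = 1
        self.q = 0                            # flip-flop state

    def comb(self, a, b, c, d):
        """Asynchronous (combinational) output path."""
        return lut3(self.lut_hi if d else self.lut_lo, a, b, c)

    def clock(self, a, b, c, d):
        """Rising clock edge: the DFF captures the combinational value."""
        self.q = self.comb(a, b, c, d)
        return self.q

# Configured as a 4-input AND: only truth-table entry 15 (all ones) is 1.
cell = LogicCell(1 << 15)
```

The choice between the asynchronous path (`comb`) and the registered path (`clock`/`q`) corresponds to the output mux on the right of the figure.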
Hard blocks
Modern FPGA families expand upon the above capabilities to include higher level functionality
fixed in silicon. Having these common functions embedded in the circuit reduces the area
required and gives those functions increased speed compared to building them from logical
primitives. Examples of these include multipliers, generic DSP blocks, embedded processors,
high speed I/O logic and embedded memories.
Higher-end FPGAs can contain high-speed multi-gigabit transceivers and hard IP cores such
as processor cores, Ethernet medium access control units, PCI/PCI Express controllers, and
external memory controllers. These cores exist alongside the programmable fabric, but they are
built out of transistors instead of LUTs so they have ASIC-level performance and power
consumption without consuming a significant amount of fabric resources, leaving more of the
fabric free for the application-specific logic. The multi-gigabit transceivers also contain high
performance analog input and output circuitry along with high-speed serializers and
deserializers, components which cannot be built out of LUTs. Higher-level PHY layer
functionality such as line coding may or may not be implemented alongside the serializers and
deserializers in hard logic, depending on the FPGA.

Most of the circuitry built inside of an FPGA is synchronous circuitry that requires a clock
signal. FPGAs contain dedicated global and regional routing networks for clock and reset so they
can be delivered with minimal skew. Also, FPGAs generally contain analog phase-locked
loop and/or delay-locked loop components to synthesize new clock frequencies as well as
attenuate jitter. Complex designs can use multiple clocks with different frequency and phase
relationships, each forming separate clock domains. These clock signals can be generated locally
by an oscillator or they can be recovered from a high speed serial data stream. Care must be
taken when building clock domain crossing circuitry to avoid metastability. FPGAs generally
contain block RAMs that are capable of working as dual port RAMs with different clocks, aiding
in the construction of building FIFOs and dual port buffers that connect differing clock domains.
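The standard mitigation for the metastability hazard mentioned above is a multi-stage synchronizer. The sketch below is a purely behavioral model: it cannot capture real metastability physics, only the way downstream logic sees a settled value two destination-domain clock cycles after the input changes.

```python
class TwoFlopSynchronizer:
    """Behavioral model of a two-stage synchronizer: a signal from another
    clock domain is sampled through two flip-flops in the destination
    domain, and downstream logic only ever reads the second stage."""
    def __init__(self):
        self.ff1 = 0
        self.ff2 = 0

    def clock(self, async_in: int) -> int:
        # On each destination-domain clock edge the value shifts one stage.
        self.ff2, self.ff1 = self.ff1, async_in
        return self.ff2

sync = TwoFlopSynchronizer()
outputs = [sync.clock(x) for x in [1, 1, 1, 0]]  # output lags input by 2 cycles
```

In hardware the first flip-flop may go metastable, but it has a full clock period to settle before the second flip-flop samples it, which is what makes the scheme safe for single-bit crossings.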
3D architectures
To shrink the size and power consumption of FPGAs, vendors such
as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of
its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA
product lines will be constructed using multiple dies in one package, employing technology
developed for 3D construction and stacked-die assemblies.
Xilinx's approach stacks several (three or four) active FPGA dies side-by-side on a
silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die
construction also allows different parts of the FPGA to be created with different process
technologies, as the process requirements are different between the FPGA fabric itself and the
very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called
a heterogeneous FPGA.
Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting
other dies/technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB).
Design and programming
To define the behavior of the FPGA, the user provides a design in a hardware description
language (HDL) or as a schematic design. The HDL form is more suited to work with large
structures because it's possible to specify high-level functional behavior rather than drawing
every piece by hand. However, schematic entry can allow for easier visualization of a design and
its component modules.
Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist
can then be fit to the actual FPGA architecture using a process called place-and-route, usually
performed by the FPGA company's proprietary place-and-route software. The user will validate
the map, place and route results via timing analysis, simulation, and other verification and
validation methodologies. Once the design and validation process is complete, the binary file
generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the
FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external
memory device like an EEPROM.
The most common HDLs are VHDL and Verilog as well as extensions such as SystemVerilog.
However, in an attempt to reduce the complexity of designing in HDLs, which have been
compared to the equivalent of assembly languages, there are moves[by whom?] to raise
the abstraction level through the introduction of alternative languages.[when?] National
Instruments' LabVIEW graphical programming language (sometimes referred to as "G") has an
FPGA add-in module available to target and program FPGA hardware.
To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex
functions and circuits that have been tested and optimized to speed up the design process. These
predefined circuits are commonly called intellectual property (IP) cores, and are available from
FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under
proprietary licenses. Other predefined circuits are available from developer communities such
as OpenCores (typically released under free and open-source licenses such as the GPL, BSD or
similar license), and other sources. Such designs are known as "open-source hardware."
In a typical design flow, an FPGA application developer will simulate the design at multiple
stages throughout the design process. Initially the RTL description in VHDL or Verilog is
simulated by creating test benches to simulate the system and observe results. Then, after
the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-
level description where simulation is repeated to confirm the synthesis proceeded without errors.
Finally the design is laid out in the FPGA at which point propagation delays can be added and
the simulation run again with these values back-annotated onto the netlist.
More recently, OpenCL (Open Computing Language) is being used by programmers to take
advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows
programmers to develop code in the C programming language and target FPGA functions as
OpenCL kernels using OpenCL constructs. For further information, see high-level
synthesis and C to HDL.

Basic process technology types

 SRAM – based on static memory technology. In-system programmable and re-

programmable. Requires external boot devices. CMOS. Currently in use. Notably, flash
memory or EEPROM devices may often load contents into internal SRAM that controls
routing and logic.
 Fuse – One-time programmable. Bipolar. Obsolete.
 Antifuse – One-time programmable. CMOS.
 PROM – Programmable Read-Only Memory technology. One-time programmable because
of plastic packaging. Obsolete.
 EPROM – Erasable Programmable Read-Only Memory technology. One-time
programmable but with window, can be erased with ultraviolet (UV) light. CMOS. Obsolete.
 EEPROM – Electrically Erasable Programmable Read-Only Memory technology. Can be
erased, even in plastic packages. Some but not all EEPROM devices can be in-system
programmed. CMOS.
 Flash – Flash-erase EPROM technology. Can be erased, even in plastic packages. Some but
not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an
equivalent EEPROM cell and is therefore less expensive to manufacture. CMOS.

A typical hardware description language (HDL) supports a mixed-level description in which gate
and netlist constructs are used with functional descriptions. This mixed level capability enables
you to describe system architectures at a high level of abstraction, then incrementally refine a
design’s detailed gate-level implementation.

A hardware description language is a language used to describe a digital system, for example, a
microprocessor, a memory, or a simple flip-flop. This just means that, by using an HDL, one can
describe any digital hardware at any level.

Verilog provides both behavioral and structural language structures. These structures
allow expressing design objects at high and low levels of abstraction. Designing hardware with a
language such as Verilog allows using software concepts such as parallel processing and object-
oriented programming. Verilog has syntax similar to C and Pascal.

Verilog was the first modern hardware description language to be invented. It was created
by Phil Moorby and Prabhu Goel during the winter of 1983/84 at Automated Integrated Design
Systems (later renamed Gateway Design Automation) as a hardware modelling language. Gateway
Design Automation was purchased by Cadence Design Systems in 1990. Cadence now has full
rights to Gateway's Verilog and to Verilog-XL, the HDL simulator that would become the de-facto
standard for the next decade. Originally, Verilog was intended only for description and
simulation; support for synthesis was added later.

Verilog, like any other hardware description language, permits designers to create a
design in either a bottom-up or a top-down methodology.

The traditional method of electronic design is bottom-up. Each design is performed
at the gate level using standard gates. With the increasing complexity of new designs this
approach is nearly impossible to maintain. New systems consist of ASICs or microprocessors
with a complexity of thousands of transistors. These traditional bottom-up designs have to give
way to new structural, hierarchical design methods. Without these new design practices it would
be impossible to handle the new complexity.
The desired design style of most designers is top-down design. A real top-down design
allows early testing, easy change between different technologies, a structured system design and
many other advantages. But it is very difficult to follow a pure top-down design. Because of this,
most designs are a mix of both methods, implementing some key elements of each design style.

Complex circuits are commonly designed using the top-down methodology. Various
specification levels are required at each stage of the design process.

Verilog supports a design at many different levels of abstraction. Three of them are
very important:

 Behavioral level
 Register-Transfer level
 Gate level

The behavioral level describes a system by concurrent algorithms. Each
algorithm is itself sequential, meaning that it consists of a set of instructions that are executed one
after the other. Functions, tasks and always blocks are the main elements. There is no regard
for the structural realization of the design.
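As a small illustration of this level, the sketch below describes a counter purely by its algorithm inside an always block, with no reference to gates or structure (the module name is illustrative):

```verilog
// Behavioral-level sketch: a modulo-10 counter described only
// by the algorithm it performs, not by its gate structure.
module mod10_counter (
    input  wire       clk,
    input  wire       rst,
    output reg  [3:0] count
);
    always @(posedge clk) begin
        if (rst || count == 4'd9)
            count <= 4'd0;      // reset, or wrap after 9
        else
            count <= count + 1; // otherwise keep counting
    end
endmodule
```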


Designs at the register-transfer level specify the characteristics of a circuit by
operations and the transfer of data between registers. An explicit clock is used. RTL design
contains exact timing bounds: operations are scheduled to occur at certain times. The modern
definition of RTL code is "any code that is synthesizable is called RTL code".
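A minimal RTL sketch, with an explicit clock scheduling the transfer of data between registers (module and signal names are illustrative):

```verilog
// RTL-level sketch: two register transfers scheduled on an
// explicit clock edge; code in this style is synthesizable.
module two_stage_pipe (
    input  wire       clk,
    input  wire [7:0] d_in,
    output reg  [7:0] d_out
);
    reg [7:0] stage; // intermediate pipeline register

    always @(posedge clk) begin
        stage <= d_in;  // transfer 1: input into stage register
        d_out <= stage; // transfer 2: stage into output register
    end
endmodule
```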

Within the logic (gate) level the characteristics of a system are described by logical links
and their timing properties. All signals are discrete signals. They can only have definite logical
values ('0', '1', 'X', 'Z'). The usable operations are predefined logic primitives (AND, OR,
NOT, etc. gates). Hand-writing code at the gate level is usually not a good idea for logic design;
gate-level code is generated by tools, and the resulting netlist is used for gate-level simulation
and for backend implementation.
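For comparison, the sketch below builds a 2-to-1 multiplexer at the gate level using only predefined logic primitives (module and instance names are illustrative):

```verilog
// Gate-level sketch: a 2-to-1 multiplexer built only from
// the predefined primitives AND, OR and NOT.
module mux2_gate (
    input  wire a,
    input  wire b,
    input  wire sel,
    output wire y
);
    wire sel_n, a_path, b_path;

    not g0 (sel_n, sel);        // invert the select line
    and g1 (a_path, a, sel_n);  // pass a when sel = 0
    and g2 (b_path, b, sel);    // pass b when sel = 1
    or  g3 (y, a_path, b_path); // combine the two paths
endmodule
```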

Design is the most significant human endeavor. It is the channel through which
creativity is realized. Design determines our every activity as well as the results of those
activities, thus it includes planning, problem solving and producing.

A semiconductor process technology is a method by which working circuits can be

manufactured from designed specifications. There are many such technologies, each of which
creates a different environment or style of design.


 We can verify design functionality early in the design process. A design written as an
HDL description can be simulated immediately. Design simulation at this high level, before
implementation at the gate level, allows you to evaluate architectural and design decisions.
 An HDL description is more easily read and understood than a netlist or schematic
description. HDL descriptions provide technology-independent documentation of a
design and its functionality. Because the initial HDL design description is technology-
independent, you can use it again to generate the design in a different technology,
without having to translate it from the original technology.
 Large designs are easier to handle with HDL tools than with schematic tools.

Using Xilinx ISE A Brief Tutorial

ISE14.7 Quick Start Tutorial

The ISE 14.7 quick start tutorial provides Xilinx PLD designers with a quick overview of the
basic design process using ISE14.7. After you have completed the tutorial, you will have an
understanding of how to create, verify and implement a design.
This tutorial contains the following sections:

 “Getting started”.
 “Creating a new project”.

 “Creating a HDL source”.
 “Design simulation”.
 “Create timing constraints”.
 “Implement design and verify constraints”.
 “Re-implement design and verify pin locations”.
 “Download design to the spartan-3 demo board”.
For an in-depth explanation of the ISE design tools, see the ISE
in-depth tutorial on the Xilinx website at:


To use this tutorial, you must install the following software:

 ISE 14.7
 Spartan-3 Starter Kit, containing the Spartan-3 Starter Kit demo board
Starting the ISE software:
Start -> All Programs -> Xilinx ISE -> Project Navigator
Note: your setup path is set during the installation process and may differ from the one above.

At any time during the tutorial, you can access online help for additional information about the
ISE software and related tools.
To open Help, do either of the following:
Press F1, or
Launch the ISE Help Contents from the Help menu. It contains information about creating and
maintaining your complete design flow in ISE.

Create a new project

 Select File -> New Project.
 Type "tutorial" in the Project Name field.
 Enter or browse to a location for the new project. A tutorial subdirectory is created automatically.
 Verify that HDL is selected from the Top-Level Source Type list.
 Click Next to move to the project properties page.
 Fill in the properties as shown below:
 Product category: All
 Family: Spartan-3E
 Device: XC3S500E
 Speed grade: -4
 Top-level module type: HDL
Later, give the inputs, and then the module is selected.
Click Next, then Finish in the New Source Information dialog box to complete the process.

Electronic Voting is the standard means of conducting elections using Electronic Voting
Machines, sometimes called "EVMs", in India. The use of EVMs and electronic voting was
developed and tested by the state-owned Electronics Corporation of India and Bharat Electronics
in the 1990s. They were introduced in Indian elections between 1998 and 2001, in a phased
manner. The electronic voting machines have been used in all general and state assembly
elections of India since 2004.

Prior to the introduction of electronic voting, India used paper ballots and manual counting. The
paper ballots method was widely criticized because of fraudulent voting, booth capturing where
party loyalists captured booths and stuffed them with pre-filled fake ballots. The printed paper
ballots were also more expensive, requiring substantial post-voting resources to count hundreds
of millions of individual ballots. Embedded EVM features such as "electronically limiting the rate
of casting votes to five per minute", a security "lock-close" feature, an electronic database of
"voting signatures and thumb impressions" to confirm the identity of the voter, conducting
elections in phases over several weeks while deploying extensive security personnel at each
booth have helped reduce electoral fraud and abuse, eliminate booth capturing and create more
competitive and fairer elections. Indian EVMs are stand-alone machines built with write-
once, read-only memory. The EVMs are produced with secure manufacturing practices, and by
design, are self-contained, battery-powered and lack any networking capability. They do not
have any wireless or wired internet components and interface. The M3 version of the EVMs
includes the VVPAT system.

In recent elections, various opposition parties have alleged faulty EVMs after they failed to
defeat the incumbent. After rulings of the Delhi High Court, the Supreme Court of India in 2011
directed the Election Commission to include a paper trail as well to help confirm the reliable
operation of EVMs. The Election Commission developed EVMs with voter-verified paper audit
trail (VVPAT) system between 2012 and 2013. The system was tried on a pilot basis in the 2014
Indian general election. Voter-verified paper audit trail (VVPAT) and EVMs are now used in
every assembly and general election in India. On 9 April 2019, Supreme Court of India ordered
the Election Commission of India to use VVPAT paper trail system in every assembly
constituency and verify these before certifying the final results. The Election Commission of
India has acted under this order and deployed VVPAT verification for 20,625 EVMs in the 2019
Indian general election.

The Election Commission of India states that their machines, system checks, safeguard
procedures and election protocols are "fully tamper proof". A team led by Vemuri Hari Prasad of
NetIndia Private Limited has shown that if criminals get physical possession of the EVMs before
the voting, they can change the hardware inside and thus manipulate the results. The Prasad team
recommended a VVPAT paper trail system for verification. The Election Commission states that
along with VVPAT method, immediately prior to the election day, a sample number of votes for
each political party nominee is entered into each machine, in the presence of polling agents. At
the end of this sample trial run, the votes are counted and matched with the entered sample votes,
to ensure that the machine's hardware has not been tampered with, that it is operating reliably and that
there were no hidden votes pre-recorded in each machine. Machines that yield a faulty result
have been replaced to ensure a reliable electoral process.

India used paper ballots till the 1990s. The sheer scale of the Indian elections with more than half
a billion people eligible to vote, combined with election-related criminal activity, led Indian
election authority and high courts to transition to electronic voting. According to Arvind Verma
– a professor of Criminal Justice with a focus on South Asia, Indian elections have been marked
by criminal fraud and ballot tampering since the 1950s. The first major election with large-scale
organized booth capturing was observed in 1957. The journalist Prem Shankar Jha, states Milan
Vaishnav, documented the booth-capturing activity by Congress party leaders, and the opposition
parties soon resorted to the same fraudulent activity in the 1960s. Booth capturing was the
phenomenon where party loyalists, criminal gangs and upper-caste musclemen entered the booth
with force in villages and remote areas, and stuffed the ballot boxes with pre-filled fake paper
ballots. This problem grew between the 1950s and 1980s and became a serious and large scale
problem in states such as Uttar Pradesh and Bihar, later spreading to Andhra Pradesh, Jammu
and Kashmir and West Bengal, accompanied by election-day violence. Another logistical
problem was the printing of paper ballots, transporting and safely storing them, and physically
counting hundreds of millions of votes.
The Election Commission of India, led by T.N. Seshan, sought a solution by developing
Electronic voting machines in the 1990s. These devices were designed to prevent fraud by
limiting how fast new votes can be entered into the electronic machine. By limiting the rate of
votes entered per minute to five, the Commission aimed to increase the time required to cast
fake ballots and, therefore, allow the security forces to intervene in cooperation with the volunteers
of the competing political parties and the media.

The Commission introduced other features such as EVM initialization procedures just before the
elections. Officials tested each machine prior to the start of voting to confirm its reliable
operation in front of independent polling agents. They added a security lock "close" button
which saved the votes already cast in the device's permanent memory but disabled the device's
ability to accept additional votes in the case of any attempt to open the unit or tamper. The
Commission decided to conduct the elections over several weeks in order to move and post a
large number of security forces at each booth. On the day of voting, the ballots were also locked
and then saved in a secure location under the watch of state security and local volunteer citizens.
Additionally, the Election Commission also created a database of thumb impressions and
electronic voting signatures, open to inspection by polling agent volunteers and outside
observers. The EVMs-based system at each booth matches the voter with a registered card with
this electronic database in order to ensure that a voter cannot cast a ballot more than
once. According to Debnath and other scholars, these efforts of the Election Commission of
India – developed in consultations with the Indian courts, experts and volunteer feedback from
different political parties – have reduced electoral fraud in India and made the elections fairer
and more competitive.

Electronic Voting Machines are being used in Indian general and state elections to
implement electronic voting in part from 1999 elections and in total since 2004 elections.

The EVMs reduce the time in both casting a vote and declaring the results compared to the
old paper ballot system.

 It is a reliable machine for conducting elections.

 Has mainly 2 units: Ballot and Control Units.
 Operates on a special battery.
 Tamper-proof.
 Information recorded is retained in memory even when the battery is removed.
 Easily accessible.

module evm(clk,voter_switch,voting_en,opled,invalid,dout);
input voting_en,clk;//voting process will start when voting_en is on
input [2:0]voter_switch;
output [6:0]dout;//Max no. of votes = 127
output reg [2:0]opled;//opled[0]=party1 led, opled[1]=party2 led, opled[2]=party3 led
output reg invalid;//invalid vote indicator led

//counters to count each party votes

reg [6:0]cnt_reg1=0;//party1
reg [6:0]cnt_reg2=0;//party2
reg [6:0]cnt_reg3=0;//party3

//counter for party1 votes

always@(posedge clk)
if(voter_switch == 3'b001 && voting_en == 1'b1)
cnt_reg1 <= cnt_reg1 + 1;

//Counter for party2 votes

always@(posedge clk)
if(voter_switch == 3'b010 && voting_en == 1'b1)
cnt_reg2 <= cnt_reg2 + 1;

//Counter for party3 votes
always@(posedge clk)
if(voter_switch == 3'b011 && voting_en == 1'b1)
cnt_reg3 <= cnt_reg3 + 1;

//Final count i.e. total number of votes

assign dout = cnt_reg1 + cnt_reg2 + cnt_reg3;
//relation of "voter_switch" with "opled" & "invalid"
always@(*)
if(voting_en)
case(voter_switch)
3'b001 : begin
opled = 3'b001;
invalid = 1'b0;
end
3'b010 : begin
opled = 3'b010;
invalid = 1'b0;
end
3'b011 : begin
opled = 3'b011;
invalid = 1'b0;
end
3'b100, 3'b101, 3'b110, 3'b111 : begin
opled = 3'b000;
invalid = 1'b1;
end
default : begin //includes 3'b000, i.e. no switch pressed
opled = 3'b000;
invalid = 1'b0;
end
endcase
else begin
opled = 3'b000;
invalid = 1'b0;
end
endmodule


module evm_tb;

// Inputs
reg clk;
reg [2:0] voter_switch;
reg voting_en;

// Outputs
wire [2:0] opled;
wire invalid;
wire [6:0] dout;

// Instantiate the Unit Under Test (UUT)

evm uut (
.clk(clk),
.voter_switch(voter_switch),
.voting_en(voting_en),
.opled(opled),
.invalid(invalid),
.dout(dout)
);

initial clk = 0;
always #5 clk = ~clk;
initial begin
// Initialize Inputs
voter_switch = 3'b000;
voting_en = 1;

// Wait 100 ns for global reset to finish

#100 voter_switch = 3'b001;
#100 voter_switch = 3'b010;
#100 voter_switch = 3'b011;
#100 voter_switch = 3'b001;

// Add stimulus here
end
endmodule