
BER analysis of conventional and wavelet based OFDM in LTE using different modulation techniques
TABLE OF CONTENTS

1. Abstract
2. Introduction
3. Domain Explanation
4. Existing System
5. Proposed System
6. Flow Diagram
7. Use Case Diagram
8. Class Diagram
9. Sequence Diagram
10. ER Diagram
11. Testing of Product
12. Modules
13. Modules Description
14. Literature Survey
15. System Requirements
16. H/W & S/W Description
17. Feasibility Study
18. Conclusion
19. References
ABSTRACT
Orthogonal Frequency Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO) are two key techniques in Long Term Evolution (LTE) communication. An OFDM system uses multiple carriers and provides a higher level of spectral efficiency than Frequency Division Multiplexing (FDM). OFDM systems, however, suffer from problems such as intersymbol interference and intercarrier interference, which arise from the loss of orthogonality between the subcarriers. To avoid these problems, a cyclic prefix is added to the signal before it is passed into the channel. The cyclic prefix, however, consumes about 20% of the available bandwidth, which lowers bandwidth efficiency. Wavelet based OFDM provides good orthogonality, and with its use the Bit Error Rate (BER) is improved. Since a wavelet based system does not require a cyclic prefix, spectrum efficiency is increased. It is proposed to use wavelet based OFDM in place of Discrete Fourier Transform (DFT) based OFDM in LTE. We compare the BER performance of wavelet based and DFT based OFDM.
INTRODUCTION

Orthogonal frequency division multiplexing (OFDM) is a multicarrier modulation (MCM) technique that is an attractive candidate for fourth generation wireless communication systems. OFDM offers high spectral efficiency, immunity to multipath delay, low inter-symbol interference (ISI), immunity to frequency selective fading and high power efficiency. Due to these merits OFDM has been chosen for high data rate communication systems such as Digital Video Broadcasting (DVB) and mobile Worldwide Interoperability for Microwave Access (mobile WiMAX). However, the OFDM system suffers from the serious problem of high PAPR. The OFDM output is a superposition of multiple sub-carriers, so the instantaneous output power can rise far above the mean power of the system. Transmitting signals with such high PAPR requires power amplifiers with a very wide linear range. Such amplifiers are expensive and inefficient. If the peak power is too high, it can fall outside the linear region of the power amplifier, giving rise to non-linear distortion that changes the superposition of the signal spectrum and degrades performance if no measure is taken to reduce the high PAPR.

PAPR can be described by its complementary cumulative distribution function (CCDF). Within this probabilistic framework, researchers have proposed schemes including clipping, coding and signal scrambling techniques. Signal scrambling covers two schemes: Partial Transmit Sequence (PTS) and Selected Mapping (SLM). Although some PAPR reduction techniques have been summarized in the literature, a comprehensive review is still needed that covers the motivations for PAPR reduction, such as power saving, and directly compares typical methods through theoretical analysis and simulation results. An effective PAPR reduction technique should give the best trade-off between PAPR reduction capability and transmission power, data rate loss, implementation complexity and Bit Error Ratio (BER) performance. The goal of precoding techniques is to obtain a signal with lower PAPR than OFDM without precoding, and to reduce the interference produced by multiple users. The PAPR reduction must compensate the nonlinearities of the HPA, which has the effect of reducing the bit error rate (BER).

Traditional single carrier modulation techniques can achieve only limited data rates due to the restrictions imposed by the multipath effect of the wireless channel and by receiver complexity. A high data rate is desirable in many recent wireless multimedia applications. However, as the data rate in a communication system increases, the symbol duration shrinks. Communication systems using single carrier modulation therefore suffer from severe intersymbol interference (ISI) caused by the dispersive channel impulse response, and need a complex equalization mechanism. Orthogonal Frequency Division Multiplexing (OFDM) is a special form of multicarrier modulation that divides the entire frequency selective fading channel into many orthogonal narrowband flat fading sub-channels. In an OFDM system the high bit rate data stream is transmitted in parallel over a number of lower rate subcarriers and, thanks to the long symbol duration, does not undergo ISI.

Large envelope fluctuation in the OFDM signal is one of the major drawbacks of OFDM. Such fluctuations create difficulties because practical communication systems are peak power limited. Envelope peaks require the system to accommodate an instantaneous signal power larger than the signal's average power, necessitating either low operating power efficiency or power amplifier (PA) saturation. Amplifying an OFDM signal with large envelope fluctuations requires a PA with a large linear range, which is very expensive. If the PA has a limited linear range, operating it in its nonlinear region introduces out-of-band radiation and in-band distortion. D/A and A/D converters with a large dynamic range are also needed to convert the discrete time OFDM signal to an analog signal and vice versa. PAPR is generally used to characterize the envelope fluctuation of the OFDM signal, and it is defined as the ratio of the maximum instantaneous power to the average power. In addition, an OFDM system requires tighter frequency synchronization than single carrier systems because the OFDM subcarriers are narrowband, making the system sensitive to small frequency offsets between the transmitted and the received signal. The frequency offset may arise from the Doppler effect or from a mismatch between the transmitter and receiver local oscillator frequencies. Carrier frequency offset (CFO) disturbs the orthogonality between the subcarriers, so the signal on any particular subcarrier no longer remains independent of the remaining subcarriers. This phenomenon, known as inter-carrier interference (ICI), is a big challenge for error-free demodulation and detection of OFDM symbols.
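Written out (a standard formulation consistent with the definition above, where x(t) is the complex baseband OFDM signal and T its symbol duration):

\mathrm{PAPR}\{x(t)\} = \frac{\max_{0 \le t < T} |x(t)|^{2}}{\frac{1}{T}\int_{0}^{T} |x(t)|^{2}\, dt},
\qquad
\mathrm{CCDF}(\gamma) = \Pr\{\mathrm{PAPR} > \gamma\}

The CCDF, which reappears in the literature survey, gives the probability that the PAPR of an OFDM symbol exceeds a threshold gamma.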

The same definition of PAPR applies to MIMO-OFDM systems. A high PAPR appears when a number of subcarriers of a given OFDM symbol are out of phase with each other. Figure 1 shows the time domain representation of 3 subcarriers of an OFDM symbol. The right column shows the subcarriers out of phase, which increases the PAPR by about 2.5 dB compared to the in-phase subcarriers in the left column. Depending on the amount of phase misalignment per subcarrier, the PAPR can rise up to its theoretical maximum of 10 log10(N) dB, where N is the number of subcarriers. When a large number of subcarriers are out of phase, a significant PAPR can push the transmitter's power amplifier (PA) into a non-linear operating region, causing significant signal distortion at the amplifier output. The high PAPR can also cause saturation at the digital-to-analog converter (DAC), leading to saturation of the PA. PAPR further causes inter-modulation between the subcarriers and distorts the transmit signal constellation. The PA must therefore operate with a large power back-off, comparable to the PAPR, which leads to inefficient operation.
DOMAIN INTRODUCTION

Wireless communication, or sometimes simply wireless, is the transfer of information or power between two or more points that are not connected by an electrical conductor. The most common wireless technologies use radio waves. With radio waves, distances can be short, such as a few meters for Bluetooth, or as far as millions of kilometers for deep-space radio communications. Wireless communication encompasses various types of fixed, mobile, and portable applications, including two-way radios, cellular telephones, personal digital assistants (PDAs), and wireless networking. Other examples of radio wireless technology include GPS units, garage door openers, wireless computer mice, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones.

Somewhat less common methods of achieving wireless communications include the use of other electromagnetic wireless technologies, such as light, magnetic, or electric fields, or the use of sound. The term wireless has been used twice in communications history, with slightly different meanings. It was initially used from about 1890 for the first radio transmitting and receiving technology, as in wireless telegraphy, until the new word radio replaced it around 1920. The term was revived in the 1980s and 1990s mainly to distinguish digital devices that communicate without wires, such as the examples listed in the previous paragraph, from those that require wires or cables. This became its primary usage in the 2000s, due to the advent of technologies such as LTE, LTE Advanced, Wi-Fi and Bluetooth.

Wireless operations permit services, such as long-range communications, that are impossible or impractical to implement with wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls) that use some form of energy (e.g. radio waves or acoustic energy) to transfer information without wires. Information is transferred in this manner over both short and long distances.

Data communications

Wireless data communications allow wireless networking between desktop computers, laptops, tablet computers, cell phones and other related devices. The available technologies differ in local availability, coverage range and performance, and in some circumstances users employ multiple connection types and switch between them using connection manager software or a mobile VPN to handle the multiple connections as a secure, single virtual network. Supporting technologies include:

Wi-Fi is a wireless local area network technology that enables portable computing devices to connect easily with other devices, peripherals, and the Internet. Standardized as IEEE 802.11 a/b/g/n, Wi-Fi approaches the speeds of some types of wired Ethernet. Wi-Fi has become the de facto standard for access in private homes, within offices, and at public hotspots. Some businesses charge customers a monthly fee for service, while others have begun offering it free in an effort to increase the sales of their goods.

Cellular data service offers coverage within a range of 10-15 miles from the nearest cell site. Speeds have increased as technologies have evolved, from earlier technologies such as GSM, GPRS and EDGE, through 3G technologies such as W-CDMA and CDMA2000, to 4G networks such as LTE.

Low Power Wide Area Networks (LPWAN) bridge the gap between Wi-Fi
and Cellular for low bitrate IoT applications.
Mobile Satellite Communications may be used where other wireless
connections are unavailable, such as in largely rural areas or remote locations.
Satellite communications are especially important for transportation, aviation,
maritime and military use.

Wireless sensor networks are responsible for sensing noise, interference, and activity in data collection networks. This allows us to detect relevant quantities, monitor and collect data, formulate clear user displays, and perform decision-making functions.

Wireless data communications are used to span a distance beyond the capabilities of typical cabling in point-to-point or point-to-multipoint communication, to provide a backup communications link in case of normal network failure, to link portable or temporary workstations, to overcome situations where normal cabling is difficult or financially impractical, or to remotely connect mobile users or networks.
EXISTING SYSTEM

In the existing system the discrete Fourier transform provides the orthogonal basis functions. First, random data is generated; the data is then encoded and the corresponding modulation is applied. The inverse Fourier transform is performed on the modulated signal. After that, the cyclic prefix is added to the modulated signal, and the signal is passed through the channel, where noise is added. These operations form the transmitter side; the reverse operations are then performed at the receiver. On the receiver side the cyclic prefix is removed from the channel output, pilot synchronization is performed, the Fourier transform is applied to the synchronized data, and demodulation follows. The demodulated data is then decoded at the receiver. Finally, the BER performance is estimated, as sketched below.
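The chain above can be prototyped in a few lines of MATLAB. The sketch below is illustrative only: the subcarrier count, cyclic prefix length, modulation order and SNR are assumed values, the coding, interleaving and pilot stages are omitted, and an AWGN channel stands in for the real channel.

    % Conventional DFT-based OFDM over AWGN: a minimal sketch (assumed parameters;
    % pskmod/awgn/biterr are Communications Toolbox functions).
    N    = 64;                           % subcarriers (assumed)
    cp   = 16;                           % cyclic prefix = N/4, i.e. 20% of the symbol
    M    = 4;                            % QPSK
    bits = randi([0 1], 2*N, 1);         % random source bits
    sym  = pskmod(bi2de(reshape(bits,2,[]).', 'left-msb'), M, pi/4);  % QPSK mapping
    tx   = ifft(sym, N);                 % IDFT gives the time-domain OFDM symbol
    txCP = [tx(end-cp+1:end); tx];       % prepend cyclic prefix
    rxCP = awgn(txCP, 20, 'measured');   % channel: AWGN at 20 dB SNR (assumed)
    rx   = rxCP(cp+1:end);               % remove cyclic prefix
    rxSym  = fft(rx, N);                 % DFT back to the subcarrier domain
    rxBits = reshape(de2bi(pskdemod(rxSym, M, pi/4), 2, 'left-msb').', [], 1);
    [~, ber] = biterr(bits, rxBits)      % bit error ratio estimate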

Disadvantages:

• The cyclic prefix occupies about 20% of the available bandwidth.
• Spectral efficiency is lower.
PROPOSED SYSTEM

In the proposed system we use the inverse discrete wavelet transform (IDWT) and the discrete wavelet transform (DWT) in place of the IDFT and DFT. An SUI-2 channel is used for transmission and no cyclic prefix is used. On the transmission side, convolutional encoding is performed and the encoded data is interleaved. The data is then converted to decimal form and modulated; QPSK, QAM-16 and QAM-64 are used. After modulation, pilot insertion and subcarrier mapping are performed on the data, followed by the IDWT, which provides the orthogonality of the subcarriers and converts the subcarrier-domain symbols into a time-domain signal. After the signal passes through the channel, the DWT is performed, then pilot synchronization, in which the pilots inserted at the transmitter are removed, and then demodulation. The demodulated data is converted to binary form, de-interleaved and decoded to recover the original transmitted data. Finally, the performance of the system is evaluated using BER analysis, as sketched below.
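A minimal sketch of the transform swap is given below, assuming a single-level Haar wavelet and an AWGN stand-in for the SUI-2 channel; encoding, interleaving, pilots and the multi-level wavelet structure of a full implementation are omitted. Because the Wavelet Toolbox dwt/idwt routines expect real input, the real and imaginary parts are transformed separately.

    % Wavelet-based OFDM sketch: IDWT replaces the IFFT, DWT replaces the FFT.
    N    = 64;
    data = randi([0 3], N, 1);               % QPSK symbol indices (assumed)
    sym  = pskmod(data, 4, pi/4);
    cA = sym(1:N/2);  cD = sym(N/2+1:end);   % approximation / detail halves
    txR = idwt(real(cA), real(cD), 'haar');  % synthesis, real part
    txI = idwt(imag(cA), imag(cD), 'haar');  % synthesis, imaginary part
    tx  = txR + 1i*txI;                      % length-N time-domain signal, no CP
    rx  = awgn(tx, 20, 'measured');          % placeholder channel (AWGN, 20 dB)
    [rAr, rDr] = dwt(real(rx), 'haar');      % analysis at the receiver
    [rAi, rDi] = dwt(imag(rx), 'haar');
    rxSym  = [rAr + 1i*rAi; rDr + 1i*rDi];
    rxData = pskdemod(rxSym, 4, pi/4);
    symErrRate = mean(rxData ~= data)        % symbol error rate estimate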

ADVANTAGES:

• Computational complexity is low.
• The signal is encoded at a low bit rate.
• Better power spectral density.
• Does not occupy much bandwidth.
FLOW DIAGRAM

CLASS DIAGRAM
USE CASE DIAGRAM

SEQUENCE DIAGRAM
ER DIAGRAM
TESTING OF PRODUCT

System testing is the stage of implementation aimed at ensuring that the system works accurately and efficiently before live operation commences. Testing is the process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an error; a successful test is one that uncovers an as-yet undiscovered error.

Testing is vital to the success of the system. System testing makes the logical assumption that if all parts of the system are correct, the goal will be successfully achieved. The candidate system is subjected to a variety of tests: on-line response, volume, stress, recovery, security and usability tests. A series of tests is performed before the system is ready for user acceptance testing. Any engineered product can be tested in one of the following ways. Knowing the specified function that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational. Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh", that is, the internal operation of the product performs according to the specification and all internal components have been adequately exercised.

UNIT TESTING:

Unit testing tests each module individually before the overall system is integrated. It focuses verification effort on the smallest unit of software design, the module, and is also known as 'module testing'. The modules of the system are tested separately. This testing is carried out during programming itself. In this step, each module is checked to be working satisfactorily with regard to the expected output from the module. There are validation checks for the fields; for example, a validation check verifies the data given by the user, covering both the format and the validity of the data entered. This makes it easy to find errors and debug the system.

INTEGRATION TESTING:

Data can be lost across an interface; one module can have an adverse effect on another; sub-functions, when combined, may not produce the desired major function. Integration testing is systematic testing that can be done with sample data. The need for integration testing is to find the overall system performance. There are two types of integration testing:

i) Top-down integration testing.

ii) Bottom-up integration testing.

WHITE BOX TESTING:

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, we derived test cases that guarantee that all independent paths within a module have been exercised at least once.

BLACK BOX TESTING:

• Black box testing is done to find incorrect or missing functions
• Interface errors
• Errors in external database access
• Performance errors
• Initialization and termination errors

'Functional testing' is performed to validate that an application conforms to its specifications and correctly performs all its required functions; this is why it is also called 'black box testing'. It tests the external behavior of the system: knowing the specified function that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational.

VALIDATION TESTING:

After the culmination of black box testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and the final series of software validation tests begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer.

USER ACCEPTANCE TESTING:

User acceptance of the system is the key factor for the success of the system. The system under consideration is tested for user acceptance by constantly keeping in touch with prospective users at the time of development and making changes whenever required.

OUTPUT TESTING:

After validation testing, the next step is output testing of the proposed system, which involves asking the user about the required format, since no system can be useful if it does not produce the required output in the specified format. The outputs displayed or generated by the system under consideration are checked. The output format is considered in two ways: on screen and in printed format. The output format on the screen was found to be correct, as the format was designed in the system design phase according to the user's needs. For the hard copy, the output also matches the requirements specified by the user. Hence output testing did not result in any correction to the system.

System Implementation:

Implementation of software refers to the final installation of the package in its real environment, to the satisfaction of the intended users, and the operation of the system. Users may not initially be sure that the software will make their job easier, so:

• The active user must be aware of the benefits of using the system.
• Their confidence in the software must be built up.
• Proper guidance must be imparted to the user so that he is comfortable using the application.

Before going ahead and viewing the system, the user must know that to view the results the server program should be running on the server. If the server object is not running on the server, the actual processes will not take place.

User Training:

To achieve the objectives and benefits expected from the proposed system, it is essential for the people who will be involved to be confident of their role in the new system. As systems become more complex, the need for education and training grows.

Education is complementary to training. It brings life to formal training by explaining the background of the resources to the trainees. Education involves creating the right atmosphere and motivating user staff. Educational information can make training more interesting and more understandable.
Training on the Application Software:

After the necessary basic training on computer awareness, the users have to be trained on the new application software. This covers the underlying philosophy of the new system, such as the screen flow, screen design, the types of help available on screen, the types of errors that can occur while entering data, the corresponding validation check at each entry, and the ways to correct the data entered. This training may differ across user groups and across levels of the hierarchy.

Operational Documentation:

Once the implementation plan is decided, it is essential that the user of the system is made familiar and comfortable with the environment. Documentation covering the whole operation of the system is developed. Useful tips and guidance are given inside the application itself. The system is developed to be user friendly, so that the user can work the system from the tips given in the application itself.

System Maintenance:

The maintenance phase of the software cycle is the time in which the software performs useful work. After a system is successfully implemented, it should be maintained properly. System maintenance is an important aspect of the software development life cycle. The need for system maintenance is to make the system adaptable to changes in its environment. There may be social, technical and other environmental changes that affect an implemented system. Software product enhancements may involve providing new functional capabilities, improving user displays and modes of interaction, or upgrading the performance characteristics of the system. Only through proper system maintenance procedures can the system be adapted to cope with these changes. Software maintenance is, of course, far more than "finding mistakes".

Corrective Maintenance:

The first maintenance activity occurs because it is unreasonable to assume that software testing will uncover all latent errors in a large software system. During the use of any large program, errors will occur and be reported to the developer. The process of diagnosing and correcting one or more such errors is called corrective maintenance.

Adaptive Maintenance:

The second activity that contributes to a definition of maintenance occurs because of the rapid change encountered in every aspect of computing. Adaptive maintenance, defined as the activity that modifies software to properly interface with a changing environment, is therefore both necessary and commonplace.

Perfective Maintenance:

The third activity that may be applied to a definition of maintenance occurs when a software package is successful. As the software is used, recommendations for new capabilities, modifications to existing functions, and general enhancements are received from users. To satisfy requests in this category, perfective maintenance is performed. This activity accounts for the majority of all effort expended on software maintenance.

Preventive Maintenance:

The fourth maintenance activity occurs when software is changed to improve future maintainability or reliability, or to provide a better basis for future enhancements. Often called preventive maintenance, this activity is characterized by reverse engineering and re-engineering techniques.
MODULES

• OFDM Symbol Generation Model
• Modulation Model
• SUI-2 Channel Model
• Performance Analysis Model
MODULES DESCRIPTION

OFDM Symbol Generation Model

This model uses a large number of parallel narrowband subcarriers, instead of a single wideband carrier, to transport information.

OFDM is a frequency-division multiplexing (FDM) scheme used as a digital multi-carrier modulation method, and is essentially identical to coded OFDM (COFDM) and discrete multi-tone modulation (DMT). It is used in applications as diverse as digital television and audio broadcasting, wireless networking and broadband internet access. OFDM has also been adopted in some military communication systems.

In an OFDM scheme, a large number of orthogonal, overlapping, narrowband sub-channels or subcarriers, transmitted in parallel, divide the available transmission bandwidth. The separation of the subcarriers is theoretically minimal, yielding very compact spectral utilization. The attraction of OFDM is mainly due to how the system handles multipath interference at the receiver. Multipath generates two effects: frequency selective fading and intersymbol interference (ISI). The "flatness" perceived by each narrowband channel overcomes the former, and modulating at a very low symbol rate, which makes the symbols much longer than the channel impulse response, diminishes the latter. Using powerful error correcting codes together with time and frequency interleaving yields even more robustness against frequency selective fading, and inserting an extra guard interval between consecutive OFDM symbols reduces the effects of ISI even further. Thus an equalizer in the receiver is unnecessary.
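The orthogonality that makes this dense subcarrier packing possible can be checked numerically: over one symbol period of N samples, two distinct complex-exponential subcarriers have zero inner product. A quick MATLAB illustration (the indices are assumed values, not from the original text):

    % Numerical check of subcarrier orthogonality over one OFDM symbol (N samples).
    N = 64; n = (0:N-1).';
    k1 = 3; k2 = 7;                        % two different subcarrier indices (assumed)
    s1 = exp(1j*2*pi*k1*n/N);
    s2 = exp(1j*2*pi*k2*n/N);
    ip   = s1' * s2 / N;                   % cross inner product: ~0 for k1 ~= k2
    self = s1' * s1 / N;                   % self inner product: exactly 1
    fprintf('cross = %.2e, self = %.2f\n', abs(ip), real(self));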
Modulation Model:

QAM (quadrature amplitude modulation) is a method of combining two amplitude-modulated (AM) signals into a single channel, thereby doubling the effective bandwidth. QAM is used with pulse amplitude modulation (PAM) in digital systems, especially in wireless applications. This modulation technique combines amplitude and phase modulation. QAM is better than QPSK in terms of data carrying capacity. QAM exploits the fact that two signals at the same frequency, one shifted by 90 degrees with respect to the other, can be transmitted on the same carrier. For QAM, each carrier is ASK/PSK modulated, so data symbols have different amplitudes and phases. QPSK is quadrature phase shift keying: quadrature means the signal shifts among phase states that are separated by 90 degrees. The signal shifts in increments of 90 degrees, from 45° to 135°, -45° (315°), or -135° (225°). Data entering the modulator is separated into two channels, called I and Q. Two bits are transmitted simultaneously, one per channel. Each channel modulates a carrier; the two carrier frequencies are the same, but their phase is offset by 90 degrees (that is, they are "in quadrature"). The two carriers are combined and transmitted. There are four states because 2^2 = 4, so the theoretical bandwidth efficiency is two bits/second/Hz.
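As a small illustration of the three constellations used in this project, the sketch below maps one bit stream with QPSK, 16-QAM and 64-QAM using the Communications Toolbox mappers with their default symbol labeling; the bit-stream length is an assumed value chosen to divide evenly into 2, 4 and 6 bits per symbol.

    % QPSK, 16-QAM and 64-QAM mapping of the same bit stream (sketch).
    bits  = randi([0 1], 120, 1);                                      % 120 bits
    qpsk  = pskmod(bi2de(reshape(bits,2,[]).', 'left-msb'), 4, pi/4);  % 2 bits/symbol
    qam16 = qammod(bi2de(reshape(bits,4,[]).', 'left-msb'), 16);       % 4 bits/symbol
    qam64 = qammod(bi2de(reshape(bits,6,[]).', 'left-msb'), 64);       % 6 bits/symbol
    fprintf('symbols: QPSK %d, 16-QAM %d, 64-QAM %d\n', ...
            numel(qpsk), numel(qam16), numel(qam64));

The symbol counts (60, 30, 20) show the capacity trade-off described above: the denser the constellation, the fewer symbols per bit stream, at the cost of smaller decision distances.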

SUI – 2 Channel Model:

WiMAX is a broadband wireless data communications technology based on the IEEE 802.16 standard, providing high speed data over a wide area. The Stanford University Interim (SUI) model was developed for IEEE 802.16 by Stanford University and is used for frequencies above 1900 MHz. The IEEE 802.16 Broadband Wireless Access working group proposed standards for the frequency band below 11 GHz containing the channel models developed by Stanford University, namely the SUI models. The SUI models describe three types of terrain: terrain A, terrain B and terrain C. Terrain A is used for hilly areas with moderate or very dense vegetation. OFDM has been incorporated into WiMAX to provide high speed data without the selective fading and other issues that affect other signal formats.
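A simplified three-tap channel in the spirit of SUI-2 can be sketched as below. The tap delays and powers follow commonly cited SUI-2 figures (0, 0.4, 1.1 microseconds at 0, -12, -15 dB), but they are stated here as assumptions to be verified against the IEEE 802.16 channel model documents; the Ricean K-factors and Doppler spectra of the full model are omitted, and all taps are drawn as Rayleigh.

    % Three-tap SUI-2-style channel (sketch; tap values are assumptions, verify
    % against the IEEE 802.16 channel model report).
    fs      = 10e6;                         % sample rate, assumed 10 MHz
    delays  = [0 0.4e-6 1.1e-6];            % tap delays (s)
    gainsdB = [0 -12 -15];                  % tap powers (dB)
    idx     = round(delays*fs) + 1;         % delays in samples
    h       = zeros(1, max(idx));
    amp     = 10.^(gainsdB/20);
    h(idx)  = amp .* (randn(1,3) + 1j*randn(1,3)) / sqrt(2);   % Rayleigh taps
    x = randn(1, 1000) + 1j*randn(1, 1000); % stand-in transmit signal
    y = filter(h, 1, x);                    % frequency-selective channel
    y = awgn(y, 20, 'measured');            % receiver noise (20 dB SNR, assumed)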

Performance Analysis Model:

The bit error ratio (BER) is the number of bit errors divided by the total number of bits transferred over a channel during a studied time interval; the related bit error rate is the number of bit errors per unit time. The BER is a unitless performance measure, often expressed as a percentage. The BER may be improved by choosing a strong signal strength, by choosing a slow and robust modulation scheme or line coding scheme, and by applying channel coding schemes such as redundant forward error correction codes. The BER may be evaluated using stochastic (Monte Carlo) computer simulations. If a simple transmission channel model and data source model are assumed, the BER may also be calculated analytically.
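The Monte Carlo approach mentioned above amounts to transmitting a long known bit stream, counting errors, and repeating across a range of SNRs. A minimal QPSK-over-AWGN sketch (the bit count and SNR grid are assumed values):

    % Monte Carlo BER estimate for QPSK over AWGN (sketch, assumed parameters).
    nBits = 1e5; M = 4; snr = 0:2:10;
    ber = zeros(size(snr));
    for k = 1:numel(snr)
        bits  = randi([0 1], nBits, 1);
        sym   = pskmod(bi2de(reshape(bits,2,[]).', 'left-msb'), M, pi/4);
        rx    = awgn(sym, snr(k), 'measured');
        rbits = reshape(de2bi(pskdemod(rx, M, pi/4), 2, 'left-msb').', [], 1);
        [~, ber(k)] = biterr(bits, rbits);   % second output is the bit error ratio
    end
    semilogy(snr, ber); grid on; xlabel('SNR (dB)'); ylabel('BER');

The same loop, with the DFT or wavelet transceiver chains sketched earlier substituted for the mapper, produces the BER-versus-SNR comparison that this project reports.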
SYSTEM REQUIREMENTS

SOFTWARE REQUIREMENTS:

OS : Windows

Software : MATLAB 2015b

HARDWARE REQUIREMENTS:

Processor : Intel Pentium.

RAM : 2GB
SOFTWARE DESCRIPTION

MATLAB® is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numerical computation. Using MATLAB, you can solve technical computing problems faster than with traditional programming languages, such as C, C++, and Fortran.

MATLAB is a data analysis and visualization tool designed with powerful support for matrices and matrix operations. It also has excellent graphics capabilities and its own powerful programming language. One reason MATLAB has become such an important tool is the availability of sets of MATLAB programs designed to support particular tasks. These sets of programs are called toolboxes; the particular toolbox of interest to us is the image processing toolbox. Rather than describe all of MATLAB's capabilities, we restrict ourselves to the aspects concerned with handling images, introducing functions, commands and techniques as required. A MATLAB function is a keyword that accepts various parameters and produces some sort of output, for example a matrix, a string, or a graph. Examples of such functions are sin, imread, imclose. There are many functions in MATLAB, and as we shall see, it is very easy (and sometimes necessary) to write our own.

MATLAB's standard data type is the matrix: all data are considered to be matrices of some sort. Images, of course, are matrices whose elements are the grey values (or possibly the RGB values) of their pixels. Single values are considered by MATLAB to be 1 x 1 matrices, while a string is merely a matrix of characters, with width equal to the string's length. In this chapter we look at the more generic MATLAB commands and discuss images in further chapters.

When you start up MATLAB, you have a blank window called the Command Window in which you enter commands. Given the vast number of MATLAB functions and the different parameters they can take, a command line interface is in fact much more efficient than a complex sequence of pull-down menus.

You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, and financial modeling and analysis. Add-on toolboxes (collections of special-purpose MATLAB functions) extend the MATLAB environment to solve particular classes of problems in these application areas.

MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications.

When working with images in MATLAB, there are many things to keep in mind, such as loading an image, using the right format, saving the data as different data types, how to display an image, and conversion between different image formats.

The Image Processing Toolbox provides a comprehensive set of reference-standard algorithms and graphical tools for image processing, analysis, visualization, and algorithm development. You can perform image enhancement, image deblurring, feature detection, noise reduction, image segmentation, spatial transformations, and image registration. Many functions in the toolbox are multithreaded to take advantage of multicore and multiprocessor computers.
MATLAB and images

• The help in MATLAB is very good, use it!

• An image in MATLAB is treated as a matrix

• Every pixel is a matrix element

• All the operators in MATLAB defined on matrices can be used on images: +, -, *, /, ^, sqrt, sin, cos etc.

• MATLAB can import/export several image formats

– BMP (Microsoft Windows Bitmap)

– GIF (Graphics Interchange Files)

– HDF (Hierarchical Data Format)

– JPEG (Joint Photographic Experts Group)

– PCX (Paintbrush)

– PNG (Portable Network Graphics)

– TIFF (Tagged Image File Format)

– XWD (X Window Dump)

– MATLAB can also load raw-data or other types of image data

• Data types in MATLAB

– Double (64-bit double-precision floating point)

– Single (32-bit single-precision floating point)


– Int32 (32-bit signed integer)

– Int16 (16-bit signed integer)

– Int8 (8-bit signed integer)

– Uint32 (32-bit unsigned integer)

– Uint16 (16-bit unsigned integer)

– Uint8 (8-bit unsigned integer)

Images in MATLAB

• Binary images: {0,1}

• Intensity images: [0,1] or uint8, double etc.

• RGB images: m-by-n-by-3

• Indexed images: m-by-3 color map

• Multidimensional images: m-by-n-by-p (p is the number of layers)

IMAGE TYPES IN MATLAB

Outside MATLAB, images may be of three types: black & white, grey scale and colored. In MATLAB, however, there are four types of images. Black & white images are called binary images, containing 1 for white and 0 for black. Grey scale images are called intensity images, containing numbers in the range 0 to 255 or 0 to 1. Colored images may be represented as RGB images or indexed images.

An RGB image consists of three stacked planes: the first contains the red portion of the image, the second the green and the third the blue. So for a 640 x 480 image the matrix is 640 x 480 x 3. An alternate method of colored image representation is the indexed image, which consists of two matrices, an image matrix and a map matrix. Each color in the image is given an index number, and in the image matrix each pixel is represented by an index number; the map matrix contains the database of which index number belongs to which color.

IMAGE TYPE CONVERSION

• RGB Image to Intensity Image (rgb2gray)

• RGB Image to Indexed Image (rgb2ind)

• RGB Image to Binary Image (im2bw)

• Indexed Image to RGB Image (ind2rgb)

• Indexed Image to Intensity Image (ind2gray)

• Indexed Image to Binary Image (im2bw)

• Intensity Image to Indexed Image (gray2ind)

• Intensity Image to Binary Image (im2bw)

• Intensity Image to RGB Image (gray2ind, ind2rgb)

Key Features

• High-level language for technical computing

• Development environment for managing code, files, and data

• Interactive tools for iterative exploration, design, and problem solving

• Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration

• 2-D and 3-D graphics functions for visualizing data

• Tools for building custom graphical user interfaces

• Functions for integrating MATLAB based algorithms with external applications and languages, such as C, C++, FORTRAN, Java, COM, and Microsoft Excel
LITERATURE SURVEY

Title 1: DCT Precoded SLM Technique for PAPR Reduction in OFDM Systems
Author: Imran Baig and Varun Jeoti

Year: 2010

High Peak to Average Power Ratio (PAPR) remains one of the most important challenges in Orthogonal Frequency Division Multiplexing (OFDM) systems. In this paper, we propose a Discrete Cosine Transform (DCT) precoding based SLM technique for PAPR reduction in OFDM systems. The technique precodes the constellation symbols with a DCT precoder after multiplication by the phase rotation factor and before the Inverse Fast Fourier Transform (IFFT) in an SLM-OFDM system. Simulation results show that the proposed technique can reduce the PAPR to about 5.5 dB for N = 64 and V = 16 at a clipping probability of 10^-3. OFDM is a multicarrier transmission technique that has become the technology of choice for next generation wireless and wireline digital communication systems because of its high data rates, high spectral efficiency, high quality of service and robustness against narrowband interference and frequency selective fading.

OFDM thwarts Inter Symbol Interference (ISI) by inserting a Guard Interval (GI) using a Cyclic Prefix (CP), and it moderates the frequency selectivity of the Multi Path (MP) channel with a simple equalizer. This leads to cheap hardware implementation and simplifies the design of the receiver. OFDM is widely adopted in various communication standards such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), Digital Subscriber Lines (xDSL), Wireless Local Area Networks (WLAN), Wireless Metropolitan Area Networks (WMAN), Wireless Personal Area Networks (WPAN) and even beyond-3G Wide Area Networks (WAN). Additionally, OFDM is a strong candidate for Wireless Asynchronous Transfer Mode (WATM). However, PAPR is still one of the major drawbacks of the transmitted OFDM signal. For zero distortion of the OFDM signal, the RF High Power Amplifier (HPA) must not only operate in its linear region but also with sufficient back-off. HPAs with a large dynamic range are therefore required for OFDM systems; these amplifiers are very expensive and are a major cost component of the OFDM system. Thus reducing the PAPR not only reduces the cost of the OFDM system and the complexity of the A/D and D/A converters, but also allows increased transmit power, improving the received SNR for the same range, or the range for the same SNR.

A large number of PAPR reduction techniques have been proposed in the literature. Among them, schemes like constellation shaping, coding, phase optimization, nonlinear companding transforms, Tone Reservation (TR) and Tone Injection (TI), clipping and filtering, Partial Transmit Sequence (PTS), Selected Mapping (SLM) and precoding based techniques are popular. Wang and Tellambura proposed a soft clipping technique which preserves the phase and clips only the amplitude; they also put considerable effort into characterizing its performance and discovering properties that simplify the job, but the PAPR gain is only estimated by simulations and is limited to a specific class of modulation techniques. Han and Lee proposed a PAPR reduction technique based on Partial Transmit Sequences, in which the frequency bins are divided into sub-blocks and each sub-block is multiplied by a constant phase shift; choosing appropriate phase shift values reduces the PAPR. The most critical part of this technique is finding the optimal phase value combination, and in this regard they also proposed a simplified search method and evaluated its performance. A selected mapping (SLM) technique has also been proposed for PAPR reduction, in which the constellation is multiplied by phase rotated sequences to reduce the PAPR. Liang and Ouyang likewise proposed a low complexity SLM technique in which the bins are rotated by one of the phase sequences and the sequence with the lowest PAPR is selected for transmission; the main emphasis of that paper is the method of generating the time domain results, so that an IFFT is not performed for every possible phase rotation. Enchang Sun et al. presented a DCT based precoding technique for PAPR reduction in an MSE-OFDM system, and they claim through simulations that DCT based precoding can considerably reduce the PAPR without raising the symbol error rate.
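The SLM idea summarized above is straightforward to prototype: form several phase-rotated copies of the subcarrier vector, take the IFFT of each, and keep the candidate with the lowest PAPR. The MATLAB sketch below uses random phase sequences and assumed values of N and the candidate count, and omits the paper's DCT precoding stage:

    % Selected Mapping (SLM) sketch: transmit the min-PAPR candidate (assumed N, U).
    N = 64; U = 16;
    X = qammod(randi([0 15], N, 1), 16);            % one OFDM symbol, 16-QAM
    papr = @(x) 10*log10(max(abs(x).^2) / mean(abs(x).^2));
    best = inf;
    for u = 1:U
        phi  = exp(1j*2*pi*rand(N,1));              % random phase rotation sequence
        cand = ifft(X .* phi, N);
        p = papr(cand);
        if p < best, best = p; txBest = cand; end   % keep lowest-PAPR candidate
    end
    fprintf('PAPR: plain %.2f dB, SLM best of %d: %.2f dB\n', ...
            papr(ifft(X,N)), U, best);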

Advantages: Faster and easier deployment and revenue realization, lower operational costs for network maintenance.

Disadvantages: A null is placed in the direction of the interferers, so the antenna gain is not maximized in the direction of the desired user.
Title 2: PAPR Reduction Based on DFT Precoding for OFDM Signals

Author: Mohamed A. Aboul-Dahab, Esam A. A. A. Hagras, and Ahmad A. Elhaseeb

Year: 2013

The major problem of Orthogonal Frequency Division Multiplexing (OFDM) is its high Peak-to-Average Power Ratio (PAPR). High PAPR increases the complexity of the Analogue to Digital (A/D) and Digital to Analogue (D/A) converters and reduces the efficiency of the Radio Frequency High Power Amplifier (RF HPA). In this paper, we present Discrete Fourier Transform (DFT), Discrete Hartley Transform (DHT) and Zadoff-Chu Transform (ZCT) precoders, combined with clipping and with clipping and filtering, to reduce PAPR. The DFT precoder provides better PAPR than clipped OFDM and than clipped-and-filtered OFDM. The bit error rate (BER) is also taken as a performance evaluation parameter. The DFT precoded system is better than the ZCT precoder by about 1.2 dB and better than the DHT precoder by about 1.5 dB at the same Complementary Cumulative Distribution Function (CCDF) value. The DFT precoder is better than clipping by 1 dB and better than the clipping and filtering technique by about 6 dB for the OFDM system at the same parameters; it also improves the BER performance of the original OFDM by about 1 dB at the same BER.

The clipping approach is the simplest PAPR reduction scheme; it limits the maximum of the transmit signal to a pre-specified level. However, it causes in-band signal distortion, resulting in BER performance degradation, and out-of-band radiation, which imposes out-of-band interference on adjacent channels. Although the out-of-band signals caused by clipping can be reduced by filtering, which also improves BER performance, this degrades the PAPR again. In the proposed system, a DFT precoder of the same size as the IFFT is used as a "spreading" code. The OFDM system then becomes equivalent to a Single Carrier (SC) system, because the DFT and IDFT operations virtually cancel each other; in this case the transmit signal has the same PAPR as a single-carrier system, which is an improvement in PAPR. In this paper, PAPR reduction based on DFT and DHT precoding schemes is applied to the clipped OFDM signals. Clipping alone degrades BER performance, and filtering with clipping improves BER but affects PAPR adversely; the DFT precoder reduces this effect and improves the BER relative to clipping, clipping and filtering, and the original OFDM.
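The "spreading" argument is easy to see numerically: a DFT precoder of the same size as the IFFT cancels it, so the transmit samples are just the constellation symbols and inherit single-carrier PAPR. A sketch with assumed sizes (the clipping and filtering stages of the paper are omitted):

    % A DFT precoder of the same size as the IFFT cancels it: ifft(fft(d)) == d,
    % so the transmit signal inherits the constellation's single-carrier PAPR.
    N = 64;
    d = qammod(randi([0 15], N, 1), 16);
    papr = @(x) 10*log10(max(abs(x).^2) / mean(abs(x).^2));
    plainOFDM = ifft(d, N);             % conventional OFDM symbol
    precoded  = ifft(fft(d, N), N);     % DFT precoder then IDFT: equals d
    fprintf('PAPR plain OFDM: %.2f dB, DFT-precoded: %.2f dB\n', ...
            papr(plainOFDM), papr(precoded));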

Advantages: The most obvious benefit is the reduction in complexity and cost because of less hardware usage.

Disadvantages: There is no provision for the encoder.


Title 3: Precoding Technique for Peak-to-Average Power Ratio (PAPR) Reduction in MIMO OFDM/A Systems

Author: Seyran Khademi, Alle-Jan van der Veen, Thomas Svantesson

Year: 2012

This paper develops a new technique to reduce the peak to average power
ratio (PAPR) in OFDM modulation for a MIMO system. The proposed method
exploits the Eigen-beamforming mode (EM) in MIMO systems which is a common
feature in 4th generation standards: WiMAX and LTE. These systems use the same
beamforming weights for dedicated pilots and data so the weights are interpreted as
a channel effect from the receiver perspective. There is no need to invert the weights
at the receiver side since it is compensated for in channel equalization. Beamforming
performance depends on the relative phase difference between antennas but is
unaffected by a phase shift common to all antennas. In contrast, PAPR changes with
the common phase shift.

An effective optimization technique based on Sequential Quadratic


Programming is proposed to compute the common phase shifts that minimize the
PAPR. A well-known drawback of OFDM is that the amplitude of the resulting time
domain signal varies with the transmitted symbols in the frequency domain. If the
maximum amplitude of the time domain signal is large, it may push the amplifier
into the non-linear region which breaks the orthogonality of the sub-carriers and will
result in a substantial increase in the error rate. PAPR reduction is a well-known
signal processing topic in multi-carrier transmission and large number of techniques
have been proposed in the literature during the past decades. PAPR reduction
techniques are associated with costs in terms of bandwidth or/and transmit power.
Also, most of them require modifications in both transmitter and receiver which
makes it non-compliant to the existing communication standards. Multiple signal
representation methods, such as partial transmit sequence (PTS) and selected
mapping (SLM) are well-known techniques which reduce the peak amplitude of the
OFDM signal by manipulating the phase of subcarriers. The phase weights are sent
as a side information to the receiver to recover the original symbols. A new
Precoding PAPR reduction technique is proposed, based on grouping the OFDM
subcarriers in clusters and changing the phase of clusters in a manner similar to the
PTS method but without the drawback of sending explicit side information. The
proposed technique neither requires additional bandwidth nor power while
delivering equal or better PAPR reduction gain compared to existing methods. This
algorithm focuses on the practical case for a WiMAX base station with a single
transmit antenna. In this paper we consider PAPR reduction techniques for multiple
transmit antennas with Space Time Block Codes (STBC) in EM mode, which is the
case for both WiMAX and LTE standards.

Simulation result shows the probability of high PAPR increases for MIMO
comparing to the single antenna. The beamforming weights also cause extra increase
in PAPR; to avoid this, phase-only beamforming is usually used which limits the
performance. This makes it more important to find a solution for PAPR, since
MIMO-OFDM has become a popular technique for wireless communication in time-
frequency selective channels. In a MIMO scenario, the peak amplitude needs to be
searched and minimized jointly over all antennas which affects the PAPR
characteristics compared to the single antenna system. Also, the coupling between
several OFDM symbols on each antenna gives an extra degree of freedom in the
minimization algorithm. An iterative phase optimization method based on the SQP
technique has been redefined and modified for the multiple antenna system; it
finds the optimum weights by approximating and minimizing the quadratic objective
function at each solution point. We show that the proposed technique keeps the
PAPR at the same level as a single antenna for EM-MIMO systems.
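As a rough illustration of the idea, the per-cluster common phases can be optimized numerically. This is a minimal sketch, not the authors' implementation: the antenna count, cluster size, QPSK mapping and the use of SciPy's SLSQP solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def papr_db(x):
    """PAPR of a complex baseband vector, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N, K, M = 256, 8, 2  # subcarriers, clusters, antennas (illustrative sizes)
# Random QPSK symbols on M antennas (beamforming weights omitted for brevity).
X = (rng.choice([-1.0, 1.0], (M, N)) + 1j * rng.choice([-1.0, 1.0], (M, N))) / np.sqrt(2)

def worst_papr(theta):
    # One phase per cluster of N//K adjacent subcarriers, shared by all
    # antennas: relative phases between antennas (hence the beamforming)
    # are untouched, but the time-domain peaks move.
    phases = np.repeat(np.exp(1j * theta), N // K)
    x = np.fft.ifft(X * phases[None, :], axis=1)
    return max(papr_db(x[m]) for m in range(M))  # joint peak over all antennas

res = minimize(worst_papr, x0=np.zeros(K), method="SLSQP")
print(f"PAPR before: {worst_papr(np.zeros(K)):.2f} dB, after: {res.fun:.2f} dB")
```

Because the same per-cluster phase is applied on every antenna, the receiver simply absorbs it into the estimated channel, which is the property exploited to avoid explicit side information.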

Advantages: Significant cost reduction and improved design flexibility due to the
absence of the inter-connection between the MMIC and the antenna.

Disadvantages: A low total efficiency


Title 4: Channel Estimation in OFDM Systems

Author: Srishtansh Pathak and Himanshu Sharma

Year: 2013

Orthogonal frequency division multiplexing (OFDM) provides an
effective and low complexity means of eliminating intersymbol interference for
transmission over frequency selective fading channels. This technique has received
a lot of interest in mobile communication research as the radio channel is usually
frequency selective and time variant. In an OFDM system, modulation may be coherent
or differential. Channel state information (CSI) is required for the OFDM receiver
to perform coherent detection or diversity combining, if multiple transmit and
receive antennas are deployed. In practice, CSI can be reliably estimated at the
receiver by transmitting pilots along with data symbols.

Pilot symbol assisted channel estimation is especially attractive for
wireless links, where the channel is time-varying. When using differential modulation
there is no need for a channel estimate, but its performance is inferior to that of a
coherent system. In this paper we investigate and compare various efficient pilot based
channel estimation schemes for OFDM systems. In this present study, two major
types of pilot arrangement such as block type and comb-type pilot have been focused
employing Least Square Error (LSE) and Minimum Mean Square Error (MMSE)
channel estimators. Block type pilot sub-carriers are especially suitable for slow-
fading radio channels whereas comb type pilots provide better resistance to fast
fading channels.

Also, the comb type pilot arrangement is sensitive to frequency selectivity
compared to the block type arrangement. The channel estimation algorithm based
on comb type pilots is divided into pilot signal estimation and channel interpolation.
The symbol error rate (SER) performances of OFDM system for both block type and
comb type pilot subcarriers are presented in this paper.

This paper focuses on investigating the effect of fading in modern digital
communication techniques such as orthogonal frequency division multiplexing
(OFDM). It is because OFDM is most commonly used in modern mobile broadband
wireless communication systems such as mobile WiMAX and long-term evolution
(LTE). Therefore, channel estimation techniques for OFDM systems in doubly
selective channels are the topic of interest in this paper. Due to its high bandwidth
efficiency, its simple implementation and its robustness over frequency-selective
channels, OFDM has been widely applied in wireless
communication systems.

Although channel estimation can be avoided by using differential
modulation techniques, these techniques will fail catastrophically in the fast fading
channel, where the channel impulse response (CIR) varies significantly within the
symbol duration. In fact, differential modulation techniques assume that the channel
is stationary over the period of two OFDM symbols which is not
true for fast fading channels. The orthogonality among the subcarriers is
destroyed and intercarrier interference (ICI) is created, which, if left uncompensated,
can cause high bit error rates (BERs).

Generally, the compensation for the ICI due to the fast fading channel
is based on more complex equalizers such as minimum mean-square error (MMSE)
equalizers, which need not only the individual subcarrier frequency responses but
also the interference among subcarriers in each OFDM symbol. Hence, channel
estimation is more challenging for OFDM systems in fast fading channels than in
slow fading systems. In other words, the channel estimation is an integral part of the
receiver for fast fading channels and the receiver needs to perform channel
estimation for each OFDM symbol.

In this paper, the performance of two types of estimators (LSE and MMSE
estimators) has been theoretically and experimentally evaluated for both block type
and comb type pilot arrangements. The estimators in this study can be used to
efficiently estimate the channel in an OFDM system, given certain knowledge about
channel statistics. The MMSE estimators assume a priori knowledge of noise
variance and channel covariance. Moreover, its complexity is large compared to the
LSE estimator. For high SNRs, the LSE estimator is both simple and adequate.
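In the usual matrix notation (a standard formulation with symbols chosen here for illustration: Y the received pilot vector, X a diagonal matrix of transmitted pilot symbols, R_HH the channel autocorrelation matrix and sigma_n^2 the noise variance), the two estimators take the form

$$\hat{H}_{\mathrm{LS}} = X^{-1}Y, \qquad \hat{H}_{\mathrm{MMSE}} = R_{HH}\bigl(R_{HH} + \sigma_n^{2}\,(XX^{H})^{-1}\bigr)^{-1}\hat{H}_{\mathrm{LS}}.$$

The matrix inversion in the MMSE expression is what drives its higher complexity, and its dependence on R_HH and sigma_n^2 is exactly the a priori knowledge referred to above.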

The MMSE estimator has good performance but high complexity. The
LSE estimator has low complexity, but its performance is not as good as that of the
MMSE estimator, especially at low SNRs. Comparing block and comb type pilot
arrangements, the block type arrangement is suitable for slow fading channels,
where the channel impulse response is not changing very fast. So the channel
estimated in one block of OFDM symbols through pilot carriers can be used in the
next block for recovering the data degraded by the channel.

Both data and pilot carriers in one block of OFDM symbols are used. Pilot
carriers are used to estimate the channel impulse response. The estimated channel
can be used to recover the data sent by the transmitter, certainly with some error. In the
simulation, 1024 carriers per OFDM block are used, of which one fourth are pilot
carriers and the rest are data carriers.

Advantages: The major advantage of OFDM lies in processing frequency-selective
channels as multiple flat-fading sub-channels.

Disadvantages: high peak-to-average-power ratio (PAPR), bit error rate (BER) and
high sensitivity to carrier frequency offset (CFO).
Title 5: Linearly precoded or coded OFDM against wireless channel fades

Author: Z. Wang and G. B. Giannakis

Year: 2001

Title 6: Cyclic prefixing or zero padding for wireless multicarrier transmissions

Author: B. Muquet, Z. Wang, G. B. Giannakis, M. de Courville, and P. Duhamel

Year: 2002

IEEE 802.16j is an amendment to the IEEE 802.16 broadband
wireless access standard to enable the operation of multi-hop relay stations (RS). It
aims to enhance the coverage, per user throughput and system capacity of IEEE
802.16e. There are three handover techniques supported within the IEEE 802.16e
and IEEE 802.16j – Hard Handover (HHO), Fast Base Station Switching (FBSS)
and Macro Diversity Handover (MDHO). This paper presents evaluations and
comparisons over the performance of these handover techniques. The effect of the
mobile station speed on the handover techniques’ performance is also studied. The
performance metric is the overall average downlink spectral efficiency which
depends on the downlink carrier to interference and noise ratio (CINR). Results
show that MDHO outperforms FBSS and HHO. Furthermore, as the MS speed
increases, the FBSS is slightly better than HHO.

Since the wireless terminal cannot transmit and receive simultaneously on the
same time and frequency resource, relaying requires at least two phases. In the first phase,
source-to-relay communication takes place while in the second phase the relay
forwards the received information to the destination. It is assumed that the relays use
Decode-and-forward (DF) forwarding scheme where they demodulate, decode, re-
encode and forward the signals received from the source terminal during the first
phase. Downlink and uplink channels are perfectly separated by Time Division
Duplex (TDD). According to the design of the multi-hop enabled MAC frame,
transmissions on the first and the second hop are assumed to be perfectly
separated in time. Perfect time and frequency synchronizations are also assumed.

During HHO, the MS communicates with only one BS at a time.
The connection with the old BS is broken before the connection to a new BS is
established. Handover is executed after the signal strength from a neighbour cell
exceeds the signal strength from the current cell. This type of handover is less
complex and fairly simple, but it has high latency, which makes it unsuitable
for services requiring low latency (such as VoIP). HHO is typically used for data
services.

Performance evaluation has been carried out using a simulation tool
written in MATLAB. We consider an IEEE 802.16j TDD-OFDMA based two-hop
cellular wireless relay network, which consists of 7 hexagonal cells. There exists one
base station (BS) and six fixed relay stations (FRSs) in each cell. The BS is at the
centre of the cell. Each FRS is located on the line that connects the BS to one of the
six cell vertices, halfway between the BS and the cell boundary. The mobile stations
(MS) are generated randomly in a uniform distribution in the coverage area of the
centre cell. The transmit power from the BS is fixed as 43 dBm.

At the beginning of the simulation, MSs are generated randomly in a uniform
distribution in the coverage area of the centre cell. During the simulation, the MS
moves along a direction randomly selected in each frame and
communicates with an RS and/or BS based on the received signal quality and the
handover technique employed. The modulation and coding scheme is adjusted on a
frame-by-frame basis according to the signal quality. For each frame, the
performance metrics are recorded, and at the end of simulation, the average DL
CINR and spectral efficiency are calculated by dividing the recorded values by the
overall simulation time.

When MDHO is supported, the MS and BS maintain a list of BSs that are
involved in MDHO with the MS. This set is called an Active Set or Diversity Set.
MS communicates with all BSs in the Active Set. For downlink MDHO, two or more
BSs transmit data to MS such that diversity combining can be performed at the MS.
For uplink MDHO, MS transmission is received by multiple BSs where selection
diversity of the information received is performed.
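The difference between the techniques' combining behavior can be caricatured in a few lines. This is a toy sketch under idealized assumptions (linear-scale CINRs, ideal selection for HHO, ideal maximum-ratio-style combining for MDHO); the function names are ours.

```python
def hho_cinr(cinrs):
    """Hard handover: the MS is served by one BS at a time, so the
    effective CINR is that of the strongest cell (linear scale)."""
    return max(cinrs)

def mdho_cinr(cinrs, active_set_size=2):
    """Macro diversity handover: the strongest BSs in the Active Set are
    diversity-combined; ideal combining adds their linear CINRs."""
    return sum(sorted(cinrs, reverse=True)[:active_set_size])

# Example: three candidate BSs seen by the MS (linear CINR values).
print(hho_cinr([4.0, 2.5, 0.8]), mdho_cinr([4.0, 2.5, 0.8]))  # 4.0 vs 6.5
```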

We have investigated the performance of the MDHO, FBSS and HHO
handover techniques in the IEEE 802.16j mobile multi-hop relay system. Simulation
results show that MDHO has better DL CINR and spectral efficiency than FBSS and
HHO handover techniques. FBSS and HHO show identical performance at low MS
speed. However, as the MS speed increases, FBSS shows slightly better performance
than HHO. The probability of selecting the highest spectral efficiency MCS, i.e. 64-
QAM 3/4, is higher compared to the other MCSs due to the high CINR achieved in
multi-hop relay system. However, this probability decreases as the MS speed
increases.

Advantages: It is more stable and gives better performance and a smoother transition.

Disadvantages: With the use of two connections, system overhead will increase and
the MS will use more network resources. MDHO is more complex than HHO.
Title 7: Complex-field coding for OFDM over fading wireless channels

Author: Z. Wang and G. B. Giannakis

Year: 2003

The mobile WiMAX system is based on the IEEE 802.16m standard,
which is used to develop an advanced air interface (AAI) to meet the requirements
of IMT-Advanced next generation networks, which provide high speed access with
broadband data rates of up to 1 Gbit/s for low mobility scenarios. This paper
investigates the application of link adaptation techniques
(AM and AMC) to the downlink of IEEE 802.16m-based mobile
WiMAX networks to achieve spectral efficiency gain. Also, by use of link adaptation
it is possible to combine the MIMO technique with link adaptation in order to
maximize the throughput.

This paper considers six different MCSs for link adaptation in order to
find the largest throughput improvement. The operating SNR thresholds for the
various combinations of modulation, coding and MIMO will be determined through
utilizing the ITU pedestrian channel model. Therefore, through employing a system
level simulation, the performance evaluation results show that the adaptive
modulation and coding (AMC) system is noticeably superior to
the systems that utilize fixed modulation (FM) or adaptive modulation (AM)
schemes with regard to the spectral efficiency.

Link adaptation may be defined as a key solution that increases the spectral
efficiency of wireless systems. It is utilized to set the modulation and the coding, in
order to reflect the features for the wireless link, and to maximize the throughput.
However, if the channel changes quickly, such that it cannot be estimated in a reliable
manner and fed back to the transmitter, then the performance of the adaptive techniques
will be poorer. This study uses two important mechanisms, adaptive
modulation (AM) and adaptive modulation and coding (AMC), for improving the
robustness of the link in the system.

AM allows the mobile WiMAX system to tune the signal modulation
scheme depending on the signal to noise ratio (SNR) condition of the radio link.
For a high quality radio link condition, a higher modulation scheme will be utilized,
which provides the highest throughput of
the system or the best spectral efficiency. During a signal fade, the mobile WiMAX
system may be shifted to a lower modulation scheme in order to maintain the
connection quality and link stability. Nevertheless it is noteworthy that it is
mandatory to have a higher SNR while utilizing a higher modulation scheme in order
to overcome any interference and to maintain a certain target bit error ratio (BER).

There is no difference when using the adaptive modulation (AM) and adaptive
coding (AC) together or just the AM alone. However, under favourable channel
conditions the mobile WiMAX system uses the highest levels of modulation and the
highest rates of channel coding, while it utilizes lower levels of modulation and
lower rates of channel coding whenever the channel condition is comparatively
harsh. It is expected that the results of the performance will be enhanced when the
AM and AC are combined together.

The Matrix A version is used for coverage gain, which means an
increase in the radius of the cell; it also provides the best throughput for
subscribers whose access is difficult, i.e., stations that are already suffering from
poor signal conditions. Matrix B is used to increase the capacity. Collaborative
Uplink MIMO represents the other MIMO technique that will be designed by
WiMAX vendors in order to increase the capacity and spectral efficiency of the
uplink communications path.

Many features and different options that can be used to
better utilize the characteristics of the wireless channel, such as adaptive modulation
and coding (AMC), have been presented. In this paper, we assessed the downlink
performance of IEEE 802.16e-based mobile WiMAX networks. We showed that
utilizing link adaptation techniques (such as AM and AMC) will affect
the spectral efficiency in the mobile WiMAX networks. The results show that the
use of these techniques will enhance the performance of a mobile network. So, in
terms of the average spectral efficiency, it was found that AMC significantly
improves the system performance compared to the AM scheme.

Moreover, AM improves the system performance more than FM. Also,
by use of link adaptation it is possible to combine adaptive modulation and coding
(AMC) with the MIMO option in order to maximize the throughput. The operating
SNR thresholds for various modulation, coding and MIMO combinations will be
determined by utilizing an ITU pedestrian channel model, as will the regions where the
Matrix B version may be utilized to increase the cell capacity. After combining the
MIMO technique with link adaptation, the authors consider seven different MCS
techniques for link adaptation in order to obtain the biggest improvement in
throughput.

Advantages: It utilizes lower levels of modulation and lower rates of channel coding
whenever the channel condition is comparatively harsh.

Disadvantages: Access is difficult for stations that are already suffering from poor
signal conditions.
Title 8: Adaptive Modulation and Coding with Channel State Information in
OFDM for WiMAX

Author: B. Siva Kumar Reddy, Dr. B. Lakshmi

Year: 2015

WiMAX is a broadband wireless communication system which
provides fixed as well as mobility services. The mobile-WiMAX offers a special
feature: it has adopted adaptive modulation and coding (AMC) in OFDM to
provide higher data rates and error free transmission. The AMC technique employs the
channel state information (CSI) to efficiently utilize the channel and maximize the
throughput with better spectral efficiency. In this paper, LSE, MMSE, LMMSE,
Low rank (Lr)-LMMSE channel estimators are integrated with the physical layer.
The performance of estimation algorithms is analyzed in terms of BER, SNR, MSE
and throughput. Simulation results proved that an increase in modulation scheme size
leads to an improvement in throughput along with the BER value. There is a trade-off
among modulation size, BER value and throughput.

Transmit power and modulation for each subcarrier are determined by the
adaptive algorithm. The channel estimators are used to give
randomly varying channel state information (CSI) to the transmitter by way of a
feedback. The main algorithm that will be enhanced in this paper is the linear
minimum mean square error (LMMSE) for OFDM systems and its low rank version.
The proposed channel estimation method requires statistical knowledge of the
channel in advance. Generally, the current channel estimation methods can be
classified into two categories. The first one is based on pilots, and the second one
is based on blind channel estimation, which does not use pilots and
is not suitable for applications with fast varying fading channels. Most practical
communication systems nowadays adopt pilot arrangement based channel
estimation; for these reasons, this work studies the first category.

To be able to adjust the signal in the receiver for a possible phase drift, pilot
carriers can be inserted. In the Serial to Parallel block, the serial QAM input symbol-
stream is transformed into a parallel stream with width equal to the number of sub-
carriers. These parallel symbols are modulated onto the sub carriers by applying the
Inverse Fast Fourier Transform. Note that in order to get an output spectrum with
relatively low out-of-band radiation, the size of the IFFT can be chosen larger than the
number of sub-carriers actually used to transmit the data. Following the IFFT
block, the parallel output is converted back to serial, and a guard interval, a cyclic
prefix of the time domain samples, is appended to eradicate ISI. IEEE 802.16
permits the insertion of guard time intervals of several lengths, such as 0.25, 0.125,
0.0625 and 0.03125 of the symbol duration, added to the WiMAX symbol before it
is transmitted.
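The transmitter chain described above can be sketched as follows. This is a minimal illustration; the FFT size, the 1/8 guard fraction and the function name are our own choices, and coding, interleaving and pulse shaping are omitted.

```python
import numpy as np

def ofdm_modulate(symbols, n_fft=256, guard_fraction=0.125):
    """Map a block of QAM symbols onto subcarriers, IFFT to the time
    domain, then prepend a cyclic prefix as the guard interval."""
    grid = np.zeros(n_fft, dtype=complex)
    grid[:len(symbols)] = symbols  # unused bins stay zero: an IFFT larger than
                                   # the used subcarriers lowers out-of-band radiation
    x = np.fft.ifft(grid)
    cp = x[-int(guard_fraction * n_fft):]  # copy of the symbol tail
    return np.concatenate([cp, x])
```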

The Target BER technique employed for the AMC scheme keeps the error rate
under a target limit, say 0.01 or 0.001, maintaining a fixed quality level of service
with regard to error probability. The system keeps using the lowest modulation and
coding scheme, namely QPSK modulation with coding rate 1/2, until the
signal-to-noise ratio allows it to respect the error rate constraint; then the system
switches to higher modulation transmission schemes to yield a much better spectral
efficiency while maintaining the desired BER target. If input data streams are
transmitted using 64QAM with a coding rate of 3/4, the throughput is going to be
maximized but there will be lower BER performance. So by compromising slightly
on the throughput performance, the modulation and coding rate schemes are changed
to keep the error rate below the desired BER level.
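A threshold-based selection of this kind reduces to a small lookup. The SNR thresholds below are hypothetical placeholders: in practice they come from link-level BER curves for the chosen target.

```python
# Hypothetical (threshold dB, modulation, code rate) ladder; values are made up.
MCS_LADDER = [(6.0, "QPSK", "1/2"), (12.0, "16QAM", "1/2"),
              (18.0, "16QAM", "3/4"), (24.0, "64QAM", "3/4")]

def select_mcs(snr_db):
    """Pick the highest-rate scheme whose SNR threshold is met; fall back
    to the most robust one (QPSK 1/2) otherwise."""
    chosen = MCS_LADDER[0]
    for entry in MCS_LADDER:
        if snr_db >= entry[0]:
            chosen = entry
    return chosen[1], chosen[2]

print(select_mcs(4.0))   # ('QPSK', '1/2')  - below every threshold
print(select_mcs(20.0))  # ('16QAM', '3/4')
```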
The guard interval is longer than the maximum delay time, and the signal with
the inserted guard interval is D/A converted and passed through frequency
selective fading channels. In the receiver, the guard interval is removed and the
processing opposite to the transmitter is carried out: the time samples are converted
by the FFT into complex symbols. The complex symbols are demodulated adaptively
using the adaptive modulation level information. Demodulated symbols are block
deinterleaved. These bits are forwarded to a Viterbi decoder. Decoded bits are
assigned to a specific user and then extracted utilizing the required bit rate
information of the user.
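The receiver side mirrors the transmitter sketch given earlier (same illustrative parameters; equalization, adaptive demapping, deinterleaving and Viterbi decoding are left out).

```python
import numpy as np

def ofdm_demodulate(rx, n_fft=256, guard_fraction=0.125):
    """Strip the guard interval and FFT the remaining samples back into
    frequency-domain symbols, undoing ofdm_modulate above."""
    cp_len = int(guard_fraction * n_fft)
    return np.fft.fft(rx[cp_len:cp_len + n_fft])
```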

This paper investigated the performance of OFDM systems with AMC in
terms of BER, SNR, MSE and throughput. The code-rate of each subcarrier is
adapted to the subchannel state to maximize the data rate and to satisfy the average
BER. The
results concluded that higher modulation schemes give a higher data rate with a higher
BER value. Lower modulation level schemes give better BER performance with
poor throughput performance. The results revealed a trade-off among the modulation
level, BER, MSE, FFT length, algorithm selection and throughput. A target BER has
been considered to compromise the BER vs. SNR values. Hence, care has to be taken
while choosing a particular modulation scheme and channel estimation algorithm.

Advantages: It can accomplish a high bandwidth efficiency.

Disadvantages: It is not suitable for applications with fast varying fading channels.
Title 9: Performance Of Wireless Ofdm System With Ls-interpolation-based
Channel Estimation In Multi-path Fading Channel

Author: A.Z.M. Touhidul Islam and Indraneel Misra

Year: 2012

In this paper we investigate the bit error rate (BER) performance of an
orthogonal frequency division multiplexing (OFDM) wireless communication
system with the implementation of LS-Interpolation-based comb-type pilot symbol-
assisted channel estimation algorithm over frequency selective multi-path Rayleigh
fading channel. The Least square (LS) method is used for the estimation of the channel
at pilot frequencies while different interpolation techniques such as low-pass
interpolation, cubic interpolation, spline cubic interpolation, linear interpolation and
FFT interpolation are employed to interpolate the channel at data
frequencies.

In signal mapping, the OFDM system incorporates M-ary phase-shift
keying (M-PSK) and M-ary quadrature amplitude modulation (M-QAM) digital
modulation schemes. Matlab simulations are carried out to analyze the performance
of the developed OFDM system with the employment of comb-pilot
based channel estimation algorithms for various digital modulations in a Rayleigh
fading environment. The impact of Doppler frequency and the number of channel taps
on the BER performance is also investigated.

On the other hand, in comb-type pilot arrangement, the pilot signals are
uniformly distributed within each OFDM block. The comb-type pilot-based channel
estimation (can be introduced to satisfy the need for equalization) consists of
algorithms to estimate the channel at pilot frequencies and to interpolate the channel
at data frequencies. The LS, MMSE or Least Mean Square (LMS) method can be
used to estimate the channel at pilot frequencies, while different
interpolation schemes are introduced for the channel estimation at data frequencies.
In this paper, our aim is to evaluate the bit error rate (BER) performance
of an OFDM wireless communication system with the implementation of comb-type
pilot assisted channel estimation algorithms by incorporating M-PSK and M-QAM
digital modulation schemes over Rayleigh frequency selective multi-path fading
channel. The LS estimator is employed because of its lowest complexity. In addition,
the performance of different interpolation techniques such as low pass, cubic, spline
cubic, linear and FFT are also examined.

For channel estimation, some kind of pilot symbol is necessary as a point of
reference, which allows the receiver to extract channel attenuation and phase rotation
estimates for each received symbol and facilitates their compensation. In Pilot
symbol assisted modulation (PSAM), channel estimates are achieved by
multiplexing pilot symbols into the data sequence. Here we consider the comb-type
pilot arrangement (which can provide better resistance to fast-fading channels), in
which the pilot signals are uniformly distributed within each OFDM block.

In the comb-type pilot-aided channel estimation algorithm, the pilot signals are
first extracted from the received signal. The channel transfer function is then
estimated from the received pilot signals and the known pilot signals. Because only
a few sub-carriers in the comb-type pilot arrangement contain pilot signals, the channel
responses of the non-pilot subcarriers (which carry data) can be estimated by
interpolating the neighboring pilot channel responses.

The cubic interpolation uses four known points to obtain a third degree
polynomial. In case the range of interpolation becomes larger than the range covered
by the first four reference points, it is required to obtain a second polynomial using
the next four points. The basic principle of FFT interpolation is to apply the FFT to
the data to be interpolated; null samples are added in the frequency domain and
the oversampled vector is applied to the IFFT. In low-pass interpolation, zeros are
inserted into the original sequence and a low-pass finite impulse response (FIR) filter
is applied, which allows the original data to pass through unchanged and interpolates
between them such that the mean-square error between the interpolated points and
their ideal values is minimized.
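The pilot-estimation-plus-interpolation pipeline can be sketched compactly. Sizes, the 3-tap channel and the all-ones symbols are toy assumptions, and linear interpolation stands in for the low-pass, cubic and spline variants discussed above.

```python
import numpy as np

N, step = 64, 4                    # subcarriers and pilot spacing (toy sizes)
pilot_idx = np.arange(0, N, step)  # comb-type: every 4th bin carries a pilot
rng = np.random.default_rng(1)

H = np.fft.fft(np.array([1.0, 0.5 + 0.3j, 0.2]), N)  # toy 3-tap channel
X = np.ones(N, dtype=complex)      # known symbols (all ones for simplicity)
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = H * X + noise                  # received frequency-domain signal

H_ls = Y[pilot_idx] / X[pilot_idx] # LS estimate at the pilot bins only
# Interpolate the channel at the data bins (real and imaginary parts separately).
k = np.arange(N)
H_hat = (np.interp(k, pilot_idx, H_ls.real)
         + 1j * np.interp(k, pilot_idx, H_ls.imag))
print(np.mean(np.abs(H_hat - H) ** 2))  # MSE of the interpolated estimate
```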

In this paper we investigated the BER performance of M-PSK and M-QAM-
modulated OFDM wireless communication systems with the implementation of LS-
Interpolation-based comb-type pilot symbol-assisted channel estimation algorithm
over frequency selective multi-path Rayleigh fading channel. In channel estimation,
the OFDM system employed the Least square estimator for the estimation of the channel at
pilot frequencies while different interpolation techniques are used to
interpolate the channel at data frequencies.

Simulation results show that the proposed OFDM system with LS channel
estimator achieves good error rate performance under the BPSK and 2QAM
modulation schemes over Rayleigh fading channel. Low-pass interpolation performs
better in channel frequency response estimation than other studied interpolation
algorithms and the BER performance of OFDM system with comb pilot-assisted
channel estimation is less affected by Doppler frequency.

Advantages: high data rate transmission capability, high spectral efficiency, allows
adaptive modulations and coding of subcarriers

Disadvantages: In the block-type pilot arrangement, the pilot signal is assigned to a
particular OFDM block and sent periodically in the time domain.
Title 10: Performance and Complexity Comparison of Channel Estimation
Algorithms for OFDM System

Author: Saqib Saleem, Qamar-Ul-Islam

Year: 2002

To mitigate the multipath delay effect on the received signal, information
about the time-varying channel is required at the receiver to determine the
equalizer coefficients. In this paper two basic algorithms, known as Linear Minimum
Mean Square (LMMSE) and Least Square Error (LSE), are discussed which make
use of the channel statistics in time domain. To reduce the complexity, different
variants of these algorithms are also discussed. Channel Impulse Response (CIR)
samples and channel taps are used to compare the performance and complexity.
MATLAB simulations are carried out to compare the performance in terms of Mean
Square Error (MSE) and Symbol Error Rate (SER) of these algorithms for different
modulation techniques.

Two basic algorithms can be considered for channel estimation using the pilot-
assisted technique. The first one is Least Square Estimation (LSE) and the other one is
Linear Minimum Mean Square Estimation (LMMSE). LSE has less complexity, as
there is no need for any a priori channel statistics, which is why it is easy to
implement; but to attain superior performance LMMSE is preferred, which is based
on the channel autocorrelation matrix in the frequency domain. It minimizes the Mean
Square Error (MSE) of the channel by utilizing the information of operating SNR
and the channel statistics due to which its complexity is higher.

The performance of the LSE and LMMSE estimators in terms of MSE as a
function of SNR is compared. For fewer CIR samples, LMMSE outperforms LSE in
terms of lower MSE, though not in terms of complexity. But by increasing the CIR
samples, LSE's performance improves for higher SNR values, and if we increase the
CIR samples further, then LSE starts to show better performance for all SNR values.
The computational time of both LSE and LMMSE for different CIR samples is
compared, as is the SER performance of LSE and LMMSE. The performance of
LMMSE is better, as it utilizes the channel statistics.

To overcome this complexity problem, a low-rank approximation to LMMSE
has been proposed by using singular value decomposition (SVD). The complexity of
this algorithm can also be reduced by the use of channel taps and Channel Impulse
Response (CIR) samples. To achieve better results in channel estimation, we have to
keep track of some further parameters such as the channel statistics, the channel Power
Delay Profile (PDP) available to the receiver, and the assistance of decoder feedback.
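The low-rank idea can be sketched as follows; this is the standard SVD-based construction in the spirit of low-rank LMMSE estimators, not necessarily the paper's exact algorithm, and the constellation-dependent scaling factor is folded into snr for brevity.

```python
import numpy as np

def lowrank_lmmse_weights(R_hh, snr, rank):
    """Rank-limited LMMSE smoothing matrix: keep only the strongest
    singular modes of the channel autocorrelation matrix R_hh."""
    U, s, _ = np.linalg.svd(R_hh)  # R_hh is Hermitian positive semidefinite
    delta = s / (s + 1.0 / snr)    # per-mode Wiener weights
    delta[rank:] = 0.0             # truncate weak modes -> lower complexity
    return (U * delta) @ U.conj().T

# Usage sketch: h_hat = lowrank_lmmse_weights(R_hh, snr=100.0, rank=8) @ h_ls
```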

For small SNR values, the modified LSE shows improved performance, but
for higher SNR values, the behavior is the same as that of the modified LMMSE
estimators. The regularized LSE estimator demonstrates degraded performance at
higher SNR values. The advantages of down-sampled LSE are only in terms of
less computational time; the performance remains the same as that of the LSE
estimator. The performance improves significantly as the number of channel taps
increases to 10, but after that there is no improvement in MSE. So when increasing
the channel taps beyond 10, only the complexity increases, such that as we go from
30 to 50 channel taps, the complexity increases by 100%.

The behavior of the channel also changes especially for high mobility wireless
links due to the time-varying surrounding environment. In such a situation, the
channel PDP is difficult to know. If all PDPs are assumed to have the same
maximum delay, then the channel covariance matrix with a uniform
PDP gives better performance. In real time situations, prior knowledge about the
channel and noise statistics is not available, which is why we design a filter that is a
function of the input data only. No probabilistic assumptions are required for LSE
channel estimation.

For fewer CIR samples, LMMSE outperforms LSE in terms of lower MSE,
though not in terms of complexity. But by increasing the CIR samples, LSE's
performance improves for higher SNR values, and if we increase the CIR samples
further, then LSE starts to show better performance for all SNR values. In this paper,
the performance and complexity of the two algorithms, LSE and LMMSE, are
evaluated in terms of MSE and SER, based on CIR samples and channel taps. LMMSE is
capable of improving the performance by making use of prior information of noise
and channel. But this improved performance comes at the cost of more complexity.

The performance can be improved by increasing the CIR samples or channel taps,
but after a certain value of these factors, only the complexity increases and the
performance does not improve any further. LSE can be made more efficient than
LMMSE, both in terms of performance and complexity, by increasing the CIR
samples. We also demonstrated that the SNR value does not affect the performance
of LSE for different channel taps. So we improve the performance of the estimator,
without having prior channel information, by using a longer channel filter. The
performance and complexity can be optimized by using other channel estimation
techniques such as Transform-based and Kalman-filtering-based algorithms.

Advantages: It provides the advantage of less delay as compared to other techniques
such as interpolation.

Disadvantages: Its response to detection errors is poor, which causes error
propagation, and the need for a huge amount of data slows down its convergence rate.
FEASIBILITY STUDY
The feasibility study is carried out to test whether the proposed system is worth
implementing. The proposed system will be selected if it is good enough to meet
the performance requirements.

The feasibility study is carried out mainly in three areas, namely:

• Economic Feasibility

• Technical Feasibility

• Behavioral Feasibility

Economic Feasibility

Economic analysis, more commonly known as cost-benefit analysis, is the most
frequently used method for evaluating the effectiveness of a proposed system. This
procedure determines the benefits and savings that are expected from the proposed
system. The hardware in the systems department is sufficient for system development.

Technical Feasibility

This study centers around the department's hardware and software and the
extent to which they can support the proposed system. Since the department already
has the required hardware and software, there is no question of increasing the cost of
implementing the proposed system. By these criteria, the proposed system is
technically feasible and can be developed with the existing facilities.

Behavioral Feasibility
People are inherently resistant to change and need a sufficient amount of
training, which would result in a lot of expenditure for the organization. The proposed
system can generate reports with day-to-day information immediately at the user's
request, instead of a report which does not contain much detail.

System Implementation

Implementation of software refers to the final installation of the package
in its real environment, to the satisfaction of the intended users, and the operation of
the system. Users are often not sure that the software is meant to make their jobs
easier, so:

• The active user must be aware of the benefits of using the system
• Their confidence in the software must be built up
• Proper guidance must be imparted to the user so that he is comfortable in using the application

Before going ahead and viewing the system, the user must know that for
viewing the result, the server program should be running on the server. If the server
object is not running on the server, the actual processes will not take place.

User Training

To achieve the objectives and benefits expected from the proposed system it
is essential for the people who will be involved to be confident of their role in the
new system. As systems become more complex, the need for education and training
becomes more and more important. Education is complementary to training: it brings
life to formal training by explaining the background of the system to the users. Education
involves creating the right atmosphere and motivating user staff. Education
information can make training more interesting and more understandable.

Training on the Application Software

After providing the necessary basic training on computer
awareness, the users will have to be trained on the new application software. This
will give the underlying philosophy of the use of the new system such as the screen
flow, screen design, type of help on the screen, type of errors while entering the data,
the corresponding validation check at each entry and the ways to correct the data
entered. This training may be different across different user groups and across
different levels of hierarchy.

Operational Documentation

Once the implementation plan is decided, it is essential that the user of the
system is made familiar and comfortable with the environment. Documentation
covering the whole operation of the system is developed. Useful tips and
guidance are given to the user inside the application itself. The system is developed
to be user friendly, so that the user can operate the system from the tips given in the
application itself.

System Maintenance

The maintenance phase of the software cycle is the time in which software
performs useful work. After a system is successfully implemented, it should be
maintained in a proper manner. System maintenance is an important aspect of the
software development life cycle. System maintenance is needed to make the system
adaptable to changes in its environment. There may be social, technical
and other environmental changes which affect a system that is being
implemented. Software product enhancements may involve providing new
functional capabilities, improving user displays and modes of interaction, or upgrading
the performance characteristics of the system. Only through proper system
maintenance procedures can the system be adapted to cope with these changes.
Software maintenance is, of course, far more than “finding mistakes”.

Corrective Maintenance

The first maintenance activity occurs because it is unreasonable to assume that
software testing will uncover all latent errors in a large software system. During the
use of any large program, errors will occur and be reported to the developer.
The process that includes the diagnosis and correction of one or more errors is called
Corrective Maintenance.

Adaptive Maintenance

The second activity that contributes to a definition of maintenance occurs
because of the rapid change that is encountered in every aspect of computing.
Therefore, Adaptive maintenance, termed as an activity that modifies software to
properly interface with a changing environment, is both necessary and commonplace.

Perfective Maintenance

The third activity that may be applied to a definition of maintenance occurs
when a software package is successful. As the software is used, recommendations
for new capabilities, modifications to existing functions, and general enhancements
are received from users. To satisfy requests in this category, Perfective maintenance
is performed. This activity accounts for the majority of all efforts expended on
software maintenance.
Preventive Maintenance

The fourth maintenance activity occurs when software is changed to improve future
maintainability or reliability, or to provide a better basis for future enhancements.
Often called preventive maintenance, this activity is characterized by reverse
engineering and re-engineering techniques.
CONCLUSION

Thus, in this work we analyzed the performance of the wavelet based OFDM
system and compared it with the performance of the DFT based OFDM system. The
results show that the proposed system gives better results than the DFT based
OFDM system. In the proposed system we use the 16-QAM, 64-QAM and QPSK
modulation techniques, which are used in LTE. In wavelet based OFDM, different
types of filters can be used with the help of the different wavelets available. The
Haar wavelet and the db2 wavelet are used for the evaluation. Finally, the BER vs.
SNR graphs are simulated to show the performance of the system.
SCREEN SHOT