
Austen Roberson

17 January 2019
Intern/Mentor G/T

Annotated Source List

Allen, M. (2002). Curing Radio Noise. Popular Mechanics, 179(11), 138.

This article discusses possible sources of radio-frequency interference (RFI) in
automobiles. Constant, intermittent, and engine-speed-following are the three classes of RFI (also
referred to as noise) that are common in automotive devices. Although the noise may appear as if
it is coming from the speaker, it may actually be caused by multiple devices in the vehicle.
Engine-speed following noise is caused by devices that fluctuate with the speed of the engine,
like the alternator, ignition, or the fuel injector. In contrast, constant speed noises are caused by
electric motors that operate at a constant speed as long as the engine continues to run. These
devices include electric fuel pumps and an electric fan if kept at the same setting. Finally,
intermittent noise can be associated with power-window motors or electric seat adjusters. To
verify the source of the radio-frequency interference, the device will have to be disconnected. To
fix it, an inductance in a power cable can prevent the noise from traveling, or a capacitor can be
used to redirect the noise to ground. Another common source of RFI is poor electric wiring
connections. Some systems may have to be rewired to fix the noise.
This was a decent source but did not provide much technical detail. It did, however,
provide information about three distinct types of RFI found in automobiles. This was not present
in prior sources and may require further research. Although this article did not directly refer to
aircraft systems, many of the same core components are present in automobiles and aircraft, so
the information remains relevant.

Dawson, B. (2014). The Hough Transform in Machine Vision. Quality, 53(11), 26–30.

While the human mind can define lines based on shape, machine algorithms do not
understand lines the same way. Computers must use an algorithm that uses knowledge of
contrast and equations of lines to constrain which points are part of a line. The Hough Transform
is a method of detecting shapes in an image when the shape can be described by a set of
parameters in an equation. For example, a straight line is defined as y = mx + b. In that case, the
two parameters are m, the slope of the line, and b, its intercept with the y-axis. The Hough
Transform generates candidate parameter values for m and b from each detected point in the
image. The possible lines through each point then vote for their m and b values within a limited
parameter range. The votes are then
accumulated within the parameter range to see which values received the most votes. A peak
threshold must be indicated within the program to differentiate the true lines from the false
peaks. The value peaks indicate the presence of a line and give its equation within the image.
This same method can also be applied to other shapes with parameterized equations, such as
circles.
This article describes the goal of the program in detail as well as the background
processes required to make the program successful. It was accessible enough for a beginner
programmer to understand and provided example code after the description to aid understanding
of the computation involved. This article led to further
research into the types of image processing and shape recognition algorithms.

Deitel, P. J., Deitel, H. M., & Sengupta, P. (2012). C++ how to program (8th ed.). Boston, MA:
Prentice Hall.

This book is a tutorial for all C++ functions and statements. It begins with basic functions
and builds upon them in later chapters to reach advanced functions. The book acts as a substitute
for the instruction of the C++ coding language in a traditional classroom setting. It includes
examples of images from the integrated development environments (IDE) in which the code is
processed. Each new concept is accompanied by an example which displays the concept being
utilized to complete a sample task. This provides readers with a visual example of how the
code should appear when entered into an IDE, which can be compared against the reader's own
code. Exercises are provided at the end of each chapter to allow readers to attempt to employ the
functions explained throughout the chapter to solve complex challenges.
This book is extremely helpful to me as I attempt to learn C++ as it teaches each function
step by step and breaks each action down quite simply. It slowly builds upon accrued knowledge
and avoids jumping between unrelated concepts. When a more complex concept is needed to
understand a simpler one, the book first explains only the bare minimum, then returns to the
example in greater depth once a sufficient foundation of knowledge has been built on the topic. I
am currently on page 331 of the book, reading about the functionality of pointers.

Greengard, S. (2017). It's All About Image. Communications of the ACM, 60(9), 13–15.

The ability to facilitate the training of artificial intelligence is enabling incredible
breakthroughs in the field of image processing. Improved graphics processing units, expanded
training datasets, and improved intelligence algorithms have created these breakthroughs.
Some say artificial intelligence programs are beginning to outperform humans. Generative
programs are beginning to be used in tandem with image-recognition programs to increase
accuracy. Generative programs create fake images and the image recognition network then
functions as a discriminator, analyzing the images to distinguish the real from the artificial. The
discriminator then verifies the validity of its findings with a human operator and uses its results
to further refine its algorithm. The discriminator can then instruct the generator on how to create
more realistic images. In this method, the discriminator learns the most
important parts of an image over time, exemplifying human-like intuition. Results improve
significantly as this approach reduces the time necessary to train an intelligent program by
decreasing the volume of images required to produce valid results, as well as improving the
accuracy of image detection.
The strength of this source lies in its use of industry professionals to present firsthand
accounts of the technologies they employ in their fields. The professionals describe the
functionality and the processes of the systems in understandable terms. This article led to
further research on generative programs and their function in the training of artificially
intelligent programs.

Gregoire, M. (2014). Professional C++ (3rd ed.). Indianapolis, IN: Wiley.

This book is a reference for C++ functions and statements used in the professional
workspace. This book does not teach the language, rather it defines the functions and protocols
and describes their respective actions. It is more akin to a dictionary than a textbook. The
book provides examples of the described code as it appears when entered into an integrated
development environment (IDE). These examples appear in text form and detail the functions
described within each chapter. When the book describes multiple related concepts, it organizes
the information into tables to compare similarities and differences between them. This book
places emphasis on designing with C++ and the principles required to write successful code in
the professional and business environments. Most of the book assumes that the reader begins
with basic knowledge of certain principles and functions and explains how to make already
existing code more efficient.
This book is not helpful in teaching me C++ as it assumes that I already have basic
knowledge. As a supplement to other books, it gains usefulness as it goes further in depth than
most other books. However, I do not spend much time reading this book and instead choose to
search the index for particular functions when I require more information. I do not consider the
book beginner-friendly, but as I become a more advanced coder I may find it more helpful for
increasing the efficiency of my code.

Herrman, J. (2011). Fighting RF Interference. Popular Mechanics, 188(2), 102.

Electromagnetic radiation is present in everything, electronic or otherwise. Wireless
communication occupies a portion of this spectrum, and most personal wireless devices operate
below five gigahertz (GHz). This is due to a combination of factors including range,
antenna size, cost, and government regulations. The result is a crowded signal space with
multiple different signals interacting within a relatively small range of frequencies.
Radio-frequency interference (RFI) occurs when a device searching for a specific signal at a
specific frequency finds another, stray signal instead. The Federal Communications Commission
(FCC) aims
to prevent this through emissions standards, and standards that regulate the susceptibility of a
device to such emissions. However, RFI shielding can be degraded by poor construction
methods. This makes these regulations difficult to maintain as devices become more complex
and overseas companies attempt to reduce production costs simultaneously. Most often, the best
way to combat RFI is to acquire updated devices. However, RFI can also be reduced by
attempting to alter the frequencies that the device operates within. There are also materials that
can filter out interference signals if added to a device. Finally, keeping interfering signals far
away from each other can mitigate the interference experienced between the devices.
This article’s biggest strength lies in the easily understandable explanation of radio
frequency interference. It provides an in-depth yet accessible explanation and offers possible
causes for the phenomenon. This brief article also provided suggestions to solve the problem when
it is encountered.

Howard, C. E. (2016). Common technologies for manned and unmanned aircraft. Military &
Aerospace Electronics, 27(2), 15.

Unmanned and manned aircraft now share more of the same systems than ever before.
Innovations from a variety of markets, for example, medical, industrial, and automotive, are
beginning to cross over into the avionics sector. Systems that have existed for years in other
industries are being adapted to aircraft designs and are driving new developments. All who are
involved stand to benefit from these collaborations. Military programs are saving time,
reducing costs, and increasing safety compliance with the usage of commercial avionics
components. Developers are now increasingly pressured to create designs that combine
military functionality and capabilities with commercial compliance. Military aircraft
increasingly must fly in commercial airspace as the skies become busier. Without
commercially compliant devices, they would be forced to remain in exclusive military routes,
which are quickly diminishing. Using standardized commercial parts also allows for easier
maintenance due to readily available spare parts.
This article discussed the discrepancies and the similarities of the technologies used in
many avionics systems. Identifying the policies and the examples of the shared aircraft systems
was a strength of this article. It also did well in collecting opinions from industry leaders and
professionals on the current trends in innovation. It provided diverse viewpoints yet consolidated
one solid conclusion on the future of aviation based on those various opinions. In all, this article
led to further research into examples of shared military and commercial aircraft systems.

Howard, C. E. (2018). Mitigating electromagnetic and radio-frequency interference. Military &
Aerospace Electronics, 29(7), 18.

This article discusses electromagnetic interference (EMI) and the actions that aerospace
and defense companies are taking to ensure its negative effects are limited in their devices. The
materials used and the density of the digital assemblies are taken into heavy consideration during
the creation of systems as they play a critical role in determining the EMI reception levels.
Systems engineers are concerned with mitigating the EMI emissions of a system as well.
Radio-frequency interference (RFI) is a subset of EMI that operates within the radio-frequency
spectrum. The more RFI a system produces, the more susceptible it is to receiving the
same type of interference. In avionics systems, EMI/RFI emissions are reduced in order to ensure
reliability during operation. When RFI and EMI signals are emitted in an environment, the
environment is called noisy. These noisy environments are remedied by techniques such as EMI
filtering and shielding. Another effective technique to decrease the level of transmission is to
ensure proper cabling of the system, in addition to using the proper power sources. This
minimizes the amount of stray current signals through reduction of the amount of return loops to
the ground. As avionics systems become more complex, EMI/RFI reduction techniques will
need to continue evolving in order to sustain system reliability.
This article assisted in the understanding of RFI and EMI reduction techniques; however,
it did not explain where the interfering signals originate in the first place. This source led to
research into the origins of EMI and RFI emissions. It provided specific technical examples and
techniques that can aid in the creation of a prototype emission mitigation system.

Howard, C. E. (2017). Shielding against electromagnetic and RF interference for safety and
mission success. Military & Aerospace Electronics, 28(7), 18.

Aerospace and defense corporations combat electromagnetic jamming technologies, as
these have damaging effects on electronic systems. Electromagnetic interference (EMI) and
radio-frequency interference (RFI) are increasingly becoming an issue as incompatibilities
between personal electronic devices (PEDs) and embedded systems cause disruptions such as the
misrepresentation of data being sent across a system. Loss of operations or the complete failure
of an electronic defense system are some of the effects of EMI. Problems begin to arise when the
emissions of one device begin to exceed the processing limits of the other device. Signals that
are critical to mission success and safety are at higher risk of being tampered with by enemy
forces and therefore require greater EMI and RFI protections. As the frequency and the rate of
the signals being sent increase, so does the possibility of interference. Attacks such as radar
jamming or electromagnetic pulses will overload a system with converse signals that purposely
counteract original signals from the system, resulting in extreme interference and the system’s
emergency shutdown.
This article identified types of interference that may occur outside of common
electromagnetic emissions from personal devices. It was helpful in explaining how the signals
cause the system to shut down and what the strategic value would be of devices that create EMI
or RFI. This journal led to further research into the electromagnetic jamming systems that are
being developed by defense companies. The article also led to research regarding the
interactions between signals on the electromagnetic spectrum and how they counteract one
another.

Zhang, J., Jia, X., & Li, J. (2015). Integration of scanning and image processing algorithms for
lane detection based on fuzzy method. Journal of Intelligent & Fuzzy Systems, 29(6),
2779–2786.

Lane Departure Warning systems employ computer vision and specialized algorithms to
detect lane markings, and then convey a warning to the driver if the vehicle is travelling outside
of the markings. Most roads contain white lane markings; however, adverse environmental
conditions such as inclement weather and time of day may inhibit the computer vision. In these
cases, image preprocessing becomes a critical component of lane detection, mitigating the
effects of poor environmental conditions. Modelling the road or area that the vehicle is
travelling in can set the parameters and the thresholds for the detection of the image. For a lane
detection system, the performance will be affected by the light intensity on the road surface.
Therefore, the image must be binarized, or reduced to black-and-white pixel values, to create proper contrast.
Using a fuzzy algorithm, the edges of the lane can be detected based on an adaptive threshold
value. Experiments indicate that the average response time for the lane detection in each frame is
16.7639 ms, at an accuracy of 95%.
This article is not written for beginner programmers and does not provide any
information regarding the program used or a description of fuzzy programming. However, it does
provide extensive experimental data to demonstrate the effectiveness of fuzzy programs in the
detection of contrasting values. This makes the article more reliable and informational rather
than instructive. This article led to the research of fuzzy programming and the process of image
binarization.

Malik, J. (2017). What Led Computer Vision to Deep Learning? Communications of the ACM,
60(6), 82–83.

Artificial neural networks are the most common application of machine learning. The
first implementation of neural networks began in the 1950s, the second in the 1980s, and the
third in the 2010s. The newest wave is known as deep learning. Deep learning emphasizes the
involvement of numerous layers of neurons between the input and the output of the neural
network. The main design features of the 1980s wave were kept in the new implementations. A
hierarchical model of the visual cortex inspired the first neural network architecture for pattern
recognition in that era. The 1990s and 2000s saw lessened interest in neural networks, as new
developments did not lead to much success on benchmark problems. Efforts were primarily
focused towards the development of unsupervised learning techniques. In 2010, that trend shifted
with technical innovations and the emergence of big data and big computation. Training data for
extensive neural networks was provided by expansive datasets, such as ImageNet.
This article clearly describes the history and the relationship between neural networks and
image recognition/computer vision. However, it is written in the first person and often inserts
personal opinion into the descriptions of the history. This makes the article less reliable, as
personal bias appears to be present throughout. This article is helpful for understanding the basic
relationship between a neural network and computer vision, but requires corroboration from
other sources in order to
increase its dependability and verify its information.

McManus, D. (2002, May). Writing Effective Use Cases (Book Reviews). Technical
Communication, 49(2), 240+.

Use cases model the possible interactions between the user and the system. Use cases are
difficult to define, as the creation of a use case requires agreement about the project scope
between the writers, developers, and users of the system. They should be written as clear, simple
prose. Creating a use case can be broken down into sections: the actor-goal
list, an in-out list, and use case briefs. An actor-goal list enumerates the goals of the user, an
in-out list records the inputs and outputs of the system, and use case briefs are sentences that
describe the activities and failures. The most important part is the user goal: the goal of the user
while using
the system. The use case must discuss the who, what, where, how, and why the system functions.
The design process should not begin until all use cases have been completed and discussed. Use
cases assist in framing the design to meet the system's behavioral requirements.
This summary clearly outlines the content of the book and provides an overview of the
most important information provided within each chapter. It meticulously proceeds through each
chapter and breaks down the purpose, the advantages, and the downfalls of the content provided.
The summary also delves into detail about the author’s qualifications and the general background
regarding the book. This review is excellent for understanding both the content and structure of
the book and the function of use cases in engineering applications. This summary led to further
research into the book itself to gain more detailed examples of use cases.

Oldfield, P. (2017, July 11). Searching for Parking Costs Americans $73 Billion a Year.

According to the parking analytics firm INRIX, Americans spend, on average, 17 hours
per year searching for parking. This results in a cost of $345 per driver in wasted time, fuel, and
emissions. Overpaying for parking spaces costs more than $20 billion a year or $97 per driver.
The study combined data from parking databases with survey results from nearly 6,000 drivers in
ten of the largest U.S. cities. Nearly two-thirds of drivers surveyed reported feeling stressed
searching for parking, almost half missed an appointment, one-third abandoned a trip entirely
because of an inability to locate parking, and one-quarter reported experiencing road rage. New
York was reported to have the worst parking problem, with drivers reporting spending 107 hours
per year searching for parking at an annual cost of $2,243 per driver in wasted time. Next came
Los Angeles and San Francisco with 85 hours at a cost of $1,785 and 83 hours at a cost of $1,735
respectively.
The study by INRIX provided hard data to describe the reality of the struggles of parking in
American cities and the impact on both society and the economy. One drawback is that it did not
divulge the study methodology nor did it have an extensive sample size. This may lead to
skewed data due to the lack of a diverse sample population. However, the report does provide
valuable baseline information to classify parking as both a social and economic issue. The report
leads to research on the other adverse effects of the search for parking on the American
economy.

Savage, N. (2016). Seeing More Clearly. Communications of the ACM, 59(1), 20–22.

Stanford University in California developed a computer that generates captions describing
images using artificial intelligence technology. The captions are remarkably accurate, made
possible by the large sets of training data now available. Collections such as ImageNet and
Common Objects in Context hold hundreds of thousands of images available for computers to
analyze and learn from. Computers have become powerful enough to apply machine learning to
image recognition. Each neuron in the neural network is a filter that analyzes a small section of
an image and calculates a value based on how confident the computer is that a given object is
within that segment. The next step for visual intelligence is recognizing the relationship between
objects and actions. Networks are trained using human generated datasets with captions for
computers to recognize correlations. Then, it identifies objects in an image to generate possible
applicable words. Those words are then utilized to construct possible sentences. Finally, it ranks
the sentences in order of likeliness to describe what is in the picture.
This article is great for describing the basic principles and strategies behind machine
learning and image processing. However, it does not describe many of the technical terms and
processes required to create a neural network. The article does not delve into deep detail and
rather provides an overview of how the neural network learns and operates. This article is helpful
for understanding the function of a neural network and the general training process, but leaves
room for further research into the technical aspects of neural network creation and the coding
requirements.

Schölkopf, B. (2015). Artificial intelligence: Learning to see and act. Nature, 518(7540), 486.

As technology grows, computers are increasingly able to learn intelligent behaviour
directly from data. In machine learning, programs are trained to deduce patterns from
observational data. Supervised learning is the process of learning a pattern through training data
and examples of inputs and outputs. The machine is then tasked with applying that pattern to
larger data sets, separating the input data into classes that are defined by the programmers.
Humans solve these same problems on a day-to-day basis, as the brain interprets sensory data to
determine how to control the body. However, there is no supervisor to classify the correct
outputs. Machine learning mirrors this by substituting a numerical reward signal for supervision,
with the goal of maximizing the future reward. In the Q-learning technique, Q* represents this
reward. Q* is approximated using a neural network, where the input is processed through layers
of computations that analyze specific visual features to compute the value of possible outputs.
The system picks output actions on the basis of its current estimate of Q* to maximize the value.
The program stores its previous training data in the system’s memory and re-trains based on
previous experiences.
This article was strong in breaking down the technical process of machine learning and
providing specific examples of processes and techniques used to execute various outcomes. It
explains the process thoroughly step by step and defines terms that may have been previously
unknown. In all, this article led to further exploration of the Q-learning technique and its
applications.

Strauss, B., & Morgan, M. G. (2002). Everyday Threats to Aircraft Safety. Issues in Science &
Technology, 19(2), 82.

This article discusses the coming threats to aircraft in the future as technology progresses
and what that progress means to the future of aviation. It explored radio frequency interference
and the role that personal mobile devices play in transmitting those types of signals.
Radio-frequency interference is reported to have played a part in a growing number of
commercial air accidents in recent years. It can distract the pilot during complicated maneuvers
and cause errors in instrument readings. Consumer devices are now required to stay under a
threshold for radio-frequency emissions set by the Federal Communications Commission (FCC).
If these standards are met, other electromagnetic devices will not be interfered with. As more
users demand to operate such personal devices during flights, these standards
will become more crucial for aircraft safety. Sources of these emissions may include, but are not
limited to, phones, laptops, tablets, and gaming systems. Some argue that the use of these
personal devices should be banned for the sake of in-flight safety; however, this becomes a
problem in emergencies like the September 11th terror attacks where cell phones were crucial in
communicating the details of the dangers on board the flight and reaching emergency responders.
This article was informative and was written in an easily comprehensible manner. It
simplified the details of the electromagnetic spectrum and radio frequencies to facilitate
understanding. It did well in presenting both sides of the argument and provided different
perspectives from consumers, professionals, and government agencies. This source provided a
problem to be further researched and provided possible solutions to the problem.

Thompson, C. (2016). No Parking Here. Mother Jones, 41(1), 16.

The average car spends 95% of its lifespan parked. A study in 2011 reported that the US
has approximately one billion parking spaces for 253 million passenger cars. This means that
there are four parking spaces on average for every car. Studies have also found that 30 to 60
percent of city driving is dedicated to “cruising” to find a parking space. That computes to
around 20 minutes a trip looking for parking downtown. This cruising burns 47,000 gallons of
gas and generates 730 tons of carbon dioxide per year. However, the solution to this problem
may be self-driving cars. The robotic efficiency of the cars combined with computerized
knowledge of the location of potential riders could result in mass ride sharing without the need to
park. With this new technology, people may be discouraged from driving cars, instead opting to
use the mass ride-sharing service. As a result, the number of parking spaces required would drop
drastically. Each autonomous robocar could replace up to twelve regular passenger vehicles.
This paper is helpful for providing both statistics and solutions to the parking problem
currently plaguing the United States. The author compiles the data from multiple sources
and combines the findings with quotes and opinions from experts in the field to bolster the
reliability of his claims. The report is an interesting read and provides an intriguing outlook into
what could be the future of motor transport in the United States.

Tawk, Y., Jovanovic, A., Tome, P., Leclere, J., Botteron, C., Farine, P.-A., ... Spaeth, B. (2013).
A new movement recognition technique for flight mode detection. International Journal of
Vehicular Technology.

This article discusses the possibility of using smartphone accelerometers to detect motion
and automatically switch the smartphone into flight mode. The raw accelerometer measurement
values were collected by setting an iPhone into flight mode and carrying the phone inside a
handbag aboard multiple flights. Static values were then taken while the phone was immobile for
an extended period of time. These values were used to determine the average accelerometer
values of a smartphone while in flight. A moving variance was then computed to analyze the
data points across different subsets of the collected data sets. An algorithm was then derived
using this moving variance to determine the difference between the static and dynamic states of
the accelerometers. This algorithm was plugged into MATLAB to simulate flight conditions and
determine inconsistencies and possibilities of a false alarm in the motion detection. The
implementation of the algorithm was then analyzed based on the response time, hardware
resources, power consumption, and the elements of implementation and optimization.
This source was helpful for understanding how the disruptive signals emitted from
mobile devices could be neutralized during flight; however, it did not discuss what these signals
were or how they affect the instruments. The source properly explained the
circumstances and the evaluation of the experiment while providing ample visuals and examples
of the results. This source aided in the understanding of the development of a sample program to
lessen the effects of the disruptive signals during the flight.

Wellman, B. A., & Hahn, G. R. (2014). Sharing the electromagnetic spectrum. Army
Communicator, 39(2), 38.

All mobile phones, radar, satellite communications systems, and WiFi devices operate
through the electromagnetic spectrum. This spectrum is crucial to both public and private
agencies across the globe to ensure stable communication. Because of this, it is important to
preserve these agencies' access to the spectrum as it becomes increasingly crowded. The
National Telecommunications and Information Administration (NTIA) and the Federal
Communications Commission (FCC) regulate the electromagnetic spectrum in the United States.
Both agencies have designated federal and non-federal bands (sections) of the spectrum that they
then assign to their stakeholders. In recent years, pressures to allow previously government
allocated bands of the spectrum to be utilized commercially have mounted. As the number of
wireless systems has grown exponentially, so has the need for increased bandwidth. Producers
of wireless devices have attempted to make devices more efficient by increasing the data speed
over the existing bandwidth, but the growth in efficiency is unsustainable. The prospect of
spectrum sharing between federal and commercial users is a concept currently explored by the
NTIA and the FCC to resolve the issue. However, spectrum sharing results in loss of privacy
between spectrum users. The two agencies are currently discussing new technologies that will
allow commercial systems to continue to utilize the system to increase wireless capabilities while
allowing the Department of Defense to conduct its missions with discretion.
This article explained the reasons for the overcrowding on the electromagnetic spectrum
and what is being done to mitigate the issue. It provided crucial information on the policies that
are taken into account when creating the designs of the products. The article prompted further
research into the innovations created by the communications industries to increase efficiency.

Wilson, A. (2015). Machine learning leverages image classification techniques. Vision Systems
Design, 20(2), 31–33.

Machine learning systems utilized in industry employ image processing algorithms to
separate products that are up to standard from those that are not. Features including color, length,
and area are gathered and then compared with a known standard to determine whether the
product meets production requirements. Image classifiers can increase the accuracy of the test by
reducing false positives. Classifiers may include data like the RGB color values of a particular
object. Numerous examples must first be given to the system in order to initialize these image
classifiers. However, even those may not generate the required information to distinguish the
image. Supervised, unsupervised or semisupervised learning techniques can be employed to
classify images in these situations. In supervised learning, manually labeled images are
administered to the system in order to teach the differences. Unsupervised learning employs
patterns of values in image data to create a model from unknown images. Semisupervised
learning techniques combine the two methods and incorporate labeled data into the unsupervised
clustering to improve the accuracy.
This source is detailed and descriptive while explaining technical terminology and
processes. It provides graphical examples and quotes from professionals to substantiate the
claims and information provided within the paragraphs. The article provides many examples of
the programs that are described in the article to lead the reader to further research. For example,
this article led to research on the MVTec software designed in Germany.
