
Fluoroscopy

A barium swallow exam taken via fluoroscopy.


Fluoroscopy is an imaging technique that uses X-rays to obtain real-time moving images of the
internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope
consists of an X-ray source and a fluorescent screen, between which the patient is placed. However,
modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video
camera, allowing the images to be recorded and played back on a monitor.
The use of X-rays, a form of ionizing radiation, requires the potential risks of a procedure to be
carefully balanced against its benefits to the patient. While physicians always try to
use low dose rates during fluoroscopic procedures, the length of a typical procedure often results in
a relatively high absorbed dose to the patient. Recent advances include the digitization of the
captured images and flat-panel detector systems, which further reduce the radiation dose to the
patient.
Red adaptation goggles were developed by Wilhelm Trendelenburg in 1916 to address the problem
of dark adaptation of the eyes, previously studied by Antoine Béclère. The resulting red light from the
goggles' filtration correctly sensitized the physician's eyes prior to the procedure, while still allowing
him to receive enough light to function normally. The development of the X-ray image intensifier by
Westinghouse in the late 1940s[1] in combination with closed-circuit TV cameras in the 1950s
revolutionized fluoroscopy. The red adaptation goggles became obsolete as image intensifiers
allowed the light produced by the fluorescent screen to be amplified and made visible in a lighted
room. The addition of the camera enabled viewing of the image on a monitor, allowing a radiologist
to view the images in a separate room, away from the risk of radiation exposure. More modern
improvements in screen phosphors, image intensifiers and even flat-panel detectors have allowed
for increased image quality while minimizing the radiation dose to the patient. Modern fluoroscopes
use CsI screens and produce noise-limited images, ensuring that the minimum radiation dose is
used while still obtaining images of acceptable quality.
An experimenter in the 1890s examining his hand with a fluoroscope.

An operation during World War I using a fluoroscope screen to find embedded bullets

1950s fluoroscope
Invention of commercial instruments
Analog instrument
Thomas Edison began investigating materials for their ability to fluoresce when X-rayed in the late
1890s, and by the turn of the century he had invented a fluoroscope with sufficient image intensity to
be commercialized. Edison had quickly discovered that calcium tungstate screens produced brighter
images. Edison, however, abandoned his research in 1903 because of the health hazards that
accompanied use of these early devices. A glass blower of lab equipment and tubes at Edison's
laboratory was repeatedly exposed, suffering radiation poisoning and later succumbing to an
aggressive cancer. Edison himself damaged an eye in testing these early fluoroscopes.[2]



This Adrian Fluoroscope model was used for testing the fit of shoes in shoe stores.
During this infant commercial development, many incorrectly predicted that the moving images of
fluoroscopy would completely replace roentgenographs (diagnostic radiograph still-image films), but
the then-superior diagnostic quality of the roentgenograph and its safety advantage of a shorter
radiation dose prevented this from occurring. More trivial uses of the technology also appeared in
the 1930s–1950s, including a shoe-fitting fluoroscope used at shoe stores.[3]


Digital instrument
Later, in the early 1960s, Frederick G. Weighart[4] and James F. McNulty[5] at Automation Industries,
Inc., then in El Segundo, California, produced the world's first image to be digitally generated in real
time on a fluoroscope, while developing a later-commercialized portable apparatus
for nondestructive testing of naval aircraft. Square wave signals were detected by the pixels of
a cathode ray tube to create the image. Digital imaging technology was reintroduced to fluoroscopy
after the development of improved detector systems in the late 1980s.
Equipment


A fluoroscopic X-ray machine is a great asset during surgery for implants.
The first fluoroscopes consisted of an X-ray source and fluorescent screen between which the
patient would be placed. As the X-rays pass through the patient, they are attenuated by varying
amounts as they interact with the different internal structures of the body, casting a shadow of the
structures on the fluorescent screen. Images on the screen are produced as the unattenuated X-rays
interact with atoms in the screen through the photoelectric effect, giving their energy to the electrons.
While much of the energy given to the electrons is dissipated as heat, a fraction of it is given off as
visible light, producing the images. Early radiologists would adapt their eyes to view the dim
fluoroscopic images by sitting in darkened rooms, or by wearing red adaptation goggles.
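The shadow cast by this differential attenuation follows the Beer–Lambert law, I = I₀·e^(−μx), where μ is the material's attenuation coefficient and x the path length. A minimal sketch in C; the coefficients below are illustrative assumptions, not values from this article:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double I0 = 1.0;          /* relative incident X-ray intensity */
    double mu_soft = 0.2;     /* attenuation coefficient, soft tissue, 1/cm (assumed) */
    double mu_bone = 0.5;     /* attenuation coefficient, bone, 1/cm (assumed) */
    double x = 5.0;           /* path length through the body, cm (assumed) */

    /* denser structures attenuate more, casting a darker shadow */
    printf("through %.0f cm soft tissue: %.3f\n", x, I0 * exp(-mu_soft * x));
    printf("through %.0f cm bone:       %.3f\n", x, I0 * exp(-mu_bone * x));
    return 0;
}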
X-ray image intensifiers
Main article: Image intensifier
The invention of X-ray image intensifiers in the 1950s allowed the image on the screen to be visible
under normal lighting conditions, as well as providing the option of recording the images with a
conventional camera. Subsequent improvements included the coupling of, at first, video cameras
and, later, CCD cameras to permit recording of moving images and electronic storage of still images.
Modern image intensifiers no longer use a separate fluorescent screen. Instead, a caesium
iodide phosphor is deposited directly on the photocathode of the intensifier tube. On a typical
general-purpose system, the output image is approximately 10⁵ times brighter than the input image.
This brightness gain comprises a flux gain (amplification of photon number) and minification
gain (concentration of photons from a large input screen onto a small output screen), each of
approximately 100. This level of gain is sufficient that quantum noise, due to the limited number of
X-ray photons, is a significant factor limiting image quality.
Image intensifiers are available with input diameters of up to 45 cm, and a resolution of
approximately 2–3 line pairs mm⁻¹.
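The gain figures above are multiplicative: total brightness gain is the flux gain times the minification gain, the latter being the squared ratio of input to output screen diameters. A small sketch of that arithmetic in C; the diameters and flux gain are assumed round numbers (two per-stage gains of exactly 100 give 10⁴, so per-stage gains a few times larger are needed to reach the ~10⁵ quoted above):

#include <stdio.h>

int main(void)
{
    double d_in = 250.0;          /* input screen diameter, mm (assumed) */
    double d_out = 25.0;          /* output screen diameter, mm (assumed) */
    double flux_gain = 100.0;     /* photon-number amplification (assumed) */

    /* minification gain: photons from a large screen concentrated onto a small one */
    double minification_gain = (d_in / d_out) * (d_in / d_out);  /* = 100 */
    printf("minification gain: %.0f\n", minification_gain);
    printf("total brightness gain: %.0f\n", flux_gain * minification_gain);
    return 0;
}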
Flat-panel detectors
Main article: Flat panel detector
The introduction of flat-panel detectors allows for the replacement of the image intensifier in
fluoroscope design. Flat panel detectors offer increased sensitivity to X-rays, and therefore have the
potential to reduce patient radiation dose. Temporal resolution is also improved over image
intensifiers, reducing motion blurring. Contrast ratio is also improved over image intensifiers: flat-
panel detectors are linear over a very wide latitude, whereas image intensifiers have a maximum
contrast ratio of about 35:1. Spatial resolution is approximately equal, although an image intensifier
operating in 'magnification' mode may be slightly better than a flat panel.
Flat panel detectors are considerably more expensive to purchase and repair than image
intensifiers, so their uptake is primarily in specialties that require high-speed imaging, e.g., vascular
imaging and cardiac catheterization.
Contrast agents
A number of substances have been used as positive contrast
agents: silver, bismuth, caesium, thorium, tin, zirconium, tantalum, tungsten and lanthanide
compounds. The use of thoria (thorium dioxide) as an agent was rapidly abandoned because thorium
causes liver cancer. Most modern injected radiographic positive contrast media
are iodine-based. Iodinated contrast comes in two forms: ionic and non-ionic compounds. Non-ionic
contrast is significantly more expensive than ionic (approximately three to five times the cost);
however, it tends to be safer for the patient, causing fewer allergic reactions and fewer
uncomfortable side effects such as hot sensations or flushing. Most imaging centers now use non-
ionic contrast exclusively, finding that the benefits to patients outweigh the expense.
Negative radiographic contrast agents are air and carbon dioxide (CO₂). The latter is easily absorbed
by the body and causes less spasm. It can also be injected into the blood, whereas air cannot.



Neural engineering
Neural engineering (also known as Neuroengineering) is a discipline within biomedical
engineering that uses engineering techniques to understand, repair, replace, enhance, or otherwise
exploit the properties of neural systems. Neural engineers are uniquely qualified to solve design
problems at the interface of living neural tissue and non-living constructs.
Overview
The field of neural engineering draws on the fields of computational neuroscience, experimental
neuroscience, clinical neurology, electrical engineering and signal processing of living neural tissue,
and encompasses elements from robotics, cybernetics, computer engineering, neural tissue
engineering, materials science, and nanotechnology.
Prominent goals in the field include restoration and augmentation of human function via direct
interactions between the nervous system and artificial devices.
Much current research is focused on understanding the coding and processing of information in
the sensory and motor systems, quantifying how this processing is altered in the pathological state,
and how it can be manipulated through interactions with artificial devices including brain-computer
interfaces and neuroprosthetics.
Other research concentrates more on investigation by experimentation, including the use of neural
implants connected with external technology.
Neurohydrodynamics is a division of neural engineering that focuses on hydrodynamics of the
neurological system.
History
As neural engineering is a relatively new field, information and research relating to it is comparatively
limited, although this is changing rapidly. The first journals specifically devoted to neural
engineering, The Journal of Neural Engineering and The Journal of NeuroEngineering and
Rehabilitation, both emerged in 2004. International conferences on neural engineering have been
held by the IEEE since 2003: the 4th Conference on Neural Engineering took place from 29 April to
2 May 2009 in Antalya, Turkey; the 5th International IEEE EMBS Conference on Neural Engineering
was held in April/May 2011 in Cancún, Mexico; and the 6th conference was held in San Diego,
California, in November 2013.
Fundamentals
The fundamentals behind neuroengineering involve relating neurons, neural networks, and nervous
system functions to quantifiable models, to aid the development of devices that can interpret and
control signals and produce purposeful responses.
Neuroscience
Messages that the body uses to influence thoughts, senses, movements, and survival are directed
by nerve impulses transmitted across brain tissue and to the rest of the body. Neurons are the basic
functional unit of the nervous system and are highly specialized cells capable of sending the signals
that operate the high- and low-level functions needed for survival and quality of life.
Neurons have special electrochemical properties that allow them to process information and then
transmit that information to other cells. Neuronal activity is dependent upon neural membrane
potential and the changes that occur along and across it. A constant voltage, known as
the membrane potential, is normally maintained by certain concentrations of specific ions across
neuronal membranes. Disruptions or variations in this voltage create an imbalance, or polarization,
across the membrane. Depolarization of the membrane past its threshold potential generates an
action potential, which is the main source of signal transmission, known as neurotransmission, in the
nervous system. An action potential results in a cascade of ion flux down and across an axonal
membrane, creating an effective voltage spike train, or "electrical signal", which can transmit further
electrical changes in other cells. Signals can be generated by electrical, chemical, magnetic, optical,
and other forms of stimuli that influence the flow of charges, and thus voltage levels, across neural
membranes (He 2005).
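One standard way to make this threshold behaviour quantitative is the leaky integrate-and-fire model; it is a common textbook abstraction rather than a model named in this article, and every parameter below is an illustrative assumption:

#include <stdio.h>

int main(void)
{
    double V = -70.0;                /* membrane potential, mV (assumed) */
    const double V_rest = -70.0;     /* resting potential, mV (assumed) */
    const double V_thresh = -55.0;   /* threshold potential, mV (assumed) */
    const double V_reset = -75.0;    /* post-spike reset, mV (assumed) */
    const double tau = 10.0;         /* membrane time constant, ms */
    const double R = 10.0;           /* membrane resistance, megaohm */
    const double I = 2.0;            /* injected current, nA */
    const double dt = 0.1;           /* integration step, ms */
    double t;

    for (t = 0.0; t < 100.0; t += dt) {
        /* leaky integration toward V_rest, driven by the input current */
        V += dt * (-(V - V_rest) + R * I) / tau;
        if (V >= V_thresh) {         /* depolarization past threshold... */
            printf("action potential at t = %5.1f ms\n", t);
            V = V_reset;             /* ...fires a spike, then resets */
        }
    }
    return 0;
}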
Engineering
Engineers employ quantitative tools that can be used for understanding and interacting with complex
neural systems. Methods of studying and generating chemical, electrical, magnetic, and optical
signals responsible for extracellular field potentials and synaptic transmission in neural tissue aid
researchers in the modulation of neural system activity (Babb et al. 2008). To understand properties
of neural system activity, engineers use signal processing techniques and computational
modeling (Eliasmith & Anderson 2003). To process these signals, neural engineers must translate
the voltages across neural membranes into corresponding code, a process known as neural
coding. Neural coding uses studies on how the brain encodes simple commands in the form of
central pattern generators (CPGs), movement vectors, the cerebellar internal model, and
somatotopic maps to understand movement and sensory phenomena. Decoding of these signals in
the realm of neuroscience is the process by which neurons understand the voltages that have been
transmitted to them. Transformations involve the mechanisms by which signals of a certain form are
interpreted and then translated into another form. Engineers look to mathematically model these
transformations (Eliasmith & Anderson 2003). There are a variety of methods being used to record
these voltage signals. These can be intracellular or extracellular. Extracellular methods involve
single-unit recordings, extracellular field potentials, amperometry, or, more recently, multielectrode
arrays, which have been used to record and mimic signals.
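As a concrete illustration of a first step in such processing, a common approach (not one specifically named here) is to reduce a sampled voltage trace to spike times by threshold crossing; the synthetic trace below is an assumption for demonstration:

#include <stdio.h>
#include <math.h>

#define N 1000

int main(void)
{
    double v[N];
    int i;

    /* synthetic trace (assumed): a slow 2 uV baseline with 45 uV pulses */
    for (i = 0; i < N; ++i)
        v[i] = (i % 200 == 50) ? 45.0 : 2.0 * sin(0.05 * i);

    const double thresh = 20.0;      /* detection threshold, uV (assumed) */
    for (i = 1; i < N; ++i)
        if (v[i - 1] < thresh && v[i] >= thresh)   /* rising crossing */
            printf("spike detected at sample %d\n", i);
    return 0;
}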
Scope
Neuromechanics
Neuromechanics is the coupling of neurobiology, biomechanics, sensation and perception, and
robotics (Edwards 2010). Researchers are using advanced techniques and models to study the
mechanical properties of neural tissues and their effects on the tissues' ability to withstand and
generate force and movements, as well as their vulnerability to traumatic loading (LaPlaca & Prado
2010). This area of research focuses on translating the transformations of information among the
neuromuscular and skeletal systems to develop functions and governing rules relating to the
operation and organization of these systems (Nishikawa et al. 2007). Neuromechanics can be
simulated by connecting computational models of neural circuits to models of animal bodies situated
in virtual physical worlds (Edwards 2010). Experimental analysis of biomechanics, including the
kinematics and dynamics of movements, the process and patterns of motor and sensory feedback
during movement processes, and the circuit and synaptic organization of the brain responsible for
motor control, is currently being researched to understand the complexity of animal movement.
Dr. Michelle LaPlaca's lab at Georgia Institute of Technology studies mechanical stretch of cell
cultures, shear deformation of planar cell cultures, and shear deformation of 3D cell-containing
matrices. Understanding of these processes is followed by development of functioning models
capable of characterizing these systems under closed-loop conditions with specially defined
parameters. The study of neuromechanics is aimed at improving treatments for physiological health
problems, including optimization of prosthesis design, restoration of movement post-injury, and
design and control of mobile robots. By studying structures in 3D hydrogels, researchers can identify
new models of nerve cell mechanoproperties. For example, LaPlaca et al. developed a new model
showing that strain may play a role in cell culture (LaPlaca et al. 2005).
Neuromodulation
Neuromodulation aims to treat disease or injury by employing medical device technologies that
enhance or suppress activity of the nervous system through the delivery of pharmaceutical
agents, electrical signals, or other forms of energy stimulus to re-establish balance in impaired
regions of the brain. Researchers in this field face the challenge of linking advances in
understanding neural signals to advances in technologies that deliver and analyze these
signals with increased sensitivity, biocompatibility, and viability in closed-loop schemes in the brain,
such that new treatments and clinical applications can be created to treat those suffering from neural
damage of various kinds (Potter 2012). Neuromodulator devices can correct nervous system
dysfunction related to Parkinson's disease, dystonia, tremor, Tourette's, chronic pain, OCD, severe
depression, and, eventually, epilepsy (Potter 2012). Neuromodulation is appealing as a treatment for
varying defects because it focuses on treating highly specific regions of the brain only, in contrast
to systemic treatments that can have side effects on the body. Neuromodulator stimulators such
as microelectrode arrays can stimulate and record brain function and, with further improvements, are
meant to become adjustable and responsive delivery devices for drugs and other stimuli (2012a).
Neural regrowth and repair
Neural engineering and rehabilitation applies neuroscience and engineering to investigating
peripheral and central nervous system function and to finding clinical solutions to problems created
by brain damage or malfunction. Engineering applied to neuroregeneration focuses on engineering
devices and materials that facilitate the growth of neurons for specific applications, such as the
regeneration of peripheral nerve injury, the regeneration of spinal cord tissue after spinal cord
injury, and the regeneration of retinal tissue. Genetic engineering and tissue engineering are areas
developing scaffolds for the spinal cord to regrow across, thus helping to address neurological
problems (Potter 2012, Schmidt & Leach 2003).
Research and applications
Research in neural engineering utilizes devices to study how the nervous system
functions and malfunctions (Schmidt & Leach 2003).
Neural imaging
Neuroimaging techniques are used to investigate the activity of neural networks, as well as the
structure and function of the brain. Neuroimaging technologies include functional magnetic
resonance imaging (fMRI), magnetic resonance imaging (MRI), positron emission tomography (PET)
and computed axial tomography (CAT) scans. Functional neuroimaging studies are interested in
which areas of the brain perform specific tasks. fMRI measures hemodynamic activity that is closely
linked to neural activity, revealing which parts of the brain are activated during different tasks.
PET, CT scanners, and electroencephalography (EEG) are currently being improved and
used for similar purposes (Potter 2012).
Neural networks
Scientists can use experimental observations of neuronal systems and theoretical and computational
models of these systems to create neural networks, with the hope of modeling neural systems in as
realistic a manner as possible. Neural networks can be used for analyses to help design further
neurotechnology devices. Specifically, researchers use analytical or finite element modeling to
determine nervous system control of movements and apply these techniques to help patients with
brain injuries or disorders. Artificial neural networks can be built from theoretical and computational
models and implemented on computers, from theoretical device equations or from experimental
results of observed behavior of neuronal systems. Models might represent ion concentration
dynamics, channel kinetics, synaptic transmission, single-neuron computation, or oxygen metabolism.
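As a toy illustration of the kind of unit such artificial networks are built from, a single threshold neuron computing a weighted sum of its inputs can be sketched in a few lines of C; the weights implementing a logical AND are an assumed example, not drawn from the text:

#include <stdio.h>

static int neuron(const double *x, const double *w, double bias, int n)
{
    double sum = bias;
    int i;
    for (i = 0; i < n; ++i)
        sum += w[i] * x[i];        /* weighted sum of inputs */
    return sum > 0.0;              /* threshold (step) activation */
}

int main(void)
{
    double w[2] = {1.0, 1.0};      /* assumed weights */
    double bias = -1.5;            /* fires only when both inputs are 1 */
    int a, b;

    for (a = 0; a <= 1; ++a)
        for (b = 0; b <= 1; ++b) {
            double x[2] = {(double) a, (double) b};
            printf("%d AND %d -> %d\n", a, b, neuron(x, w, bias, 2));
        }
    return 0;
}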

POSIX Threads
POSIX Threads, usually referred to as Pthreads, is a POSIX standard for threads. The
standard, POSIX.1c, Threads extensions (IEEE Std 1003.1c-1995), defines an API for creating and
manipulating threads.
Implementations of the API are available on many Unix-like POSIX-conformant operating systems
such as FreeBSD, NetBSD, OpenBSD, GNU/Linux, Mac OS X and Solaris. DR-DOS and Microsoft
Windows implementations also exist: within the SFU/SUA subsystem, which provides a native
implementation of a number of POSIX APIs, and also within third-party packages such as
pthreads-w32,[1] which implements pthreads on top of the existing Windows API.
Contents
Pthreads defines a set of C programming language types, functions and constants. It is implemented
with a pthread.h header and a thread library.
There are around 100 Pthreads procedures, all prefixed "pthread_", and they can be categorized into
four groups:
Thread management - creating, joining threads etc.
Mutexes
Condition variables
Synchronization between threads using read/write locks and barriers
The POSIX semaphore API works with POSIX threads but is not part of the threads standard, having
been defined in the POSIX.1b, Real-time extensions (IEEE Std 1003.1b-1993) standard.
Consequently, the semaphore procedures are prefixed by "sem_" instead of "pthread_".
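A minimal sketch of the two APIs used together, with a worker thread posting a semaphore that the main thread waits on; this is an assumed illustration, not an example from the standard:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t done;

static void *worker(void *argument)
{
    (void) argument;              /* unused */
    printf("worker: signalling completion\n");
    sem_post(&done);              /* note the sem_ prefix, not pthread_ */
    return NULL;
}

int main(void)
{
    pthread_t thread;

    sem_init(&done, 0, 0);        /* unnamed semaphore, initial value 0 */
    pthread_create(&thread, NULL, worker, NULL);
    sem_wait(&done);              /* block until the worker posts */
    printf("main: worker has finished\n");
    pthread_join(thread, NULL);
    sem_destroy(&done);
    return 0;
}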
Example
An example illustrating the use of Pthreads in C:
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

#define NUM_THREADS 5

void *task_code(void *argument)
{
    int tid;

    tid = *((int *) argument);
    printf("Hello World! It's me, thread %d!\n", tid);

    /* optionally: insert more useful stuff here */

    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    int thread_args[NUM_THREADS];
    int rc, i;

    // create all threads one by one
    for (i = 0; i < NUM_THREADS; ++i) {
        thread_args[i] = i;
        printf("In main: creating thread %d\n", i);
        rc = pthread_create(&threads[i], NULL, task_code, (void *) &thread_args[i]);
        assert(0 == rc);
    }

    // wait for each thread to complete
    for (i = 0; i < NUM_THREADS; ++i) {
        // block until thread i completes
        rc = pthread_join(threads[i], NULL);
        printf("In main: thread %d is complete\n", i);
        assert(0 == rc);
    }

    printf("In main: All threads completed successfully\n");
    exit(EXIT_SUCCESS);
}
This program creates five threads, each executing the function task_code, which prints the thread's
unique number to standard output. If a programmer wanted the threads to communicate with
each other, this would require defining a variable outside of the scope of any of the functions,
making it a global variable; a sketch of this pattern follows below.
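A minimal sketch of that pattern, protecting the shared global with one of the mutex facilities listed under Contents (an assumed illustration, structured like the example above):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 5

static int counter = 0;           /* global variable shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *argument)
{
    (void) argument;              /* unused */
    pthread_mutex_lock(&lock);    /* serialize access to the shared global */
    ++counter;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    int i;

    for (i = 0; i < NUM_THREADS; ++i)
        pthread_create(&threads[i], NULL, increment, NULL);
    for (i = 0; i < NUM_THREADS; ++i)
        pthread_join(threads[i], NULL);

    printf("counter = %d\n", counter);   /* always prints 5 */
    return 0;
}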
POSIX Threads for Windows
Windows does not support the pthreads standard natively, therefore the Pthreads-w32 project seeks
to provide a portable and open-source wrapper implementation. It can also be used to
port Unix software (which uses pthreads) with little or no modification to the Windows platform.[2]
With some additional patches, the last version, 2.8.0, is compatible with 64-bit Windows
systems.[3][4][5] Version 2.9.0 is said to also be 64-bit compatible.[6]

The mingw-w64 project also contains a wrapper implementation of pthreads, winpthreads,[7] which
tries to use more native system calls than the Pthreads-w32 project.[8]
The Interix environment subsystem available in the Windows Services for UNIX/Subsystem for
UNIX-based Applications package provides a native port of the pthreads API, i.e. not mapped onto
the Win32/Win64 API but built directly on the operating system's syscall interface.[9]

Central Monitoring System
The Central Monitoring System, abbreviated to CMS, is a clandestine mass electronic
surveillance data mining program installed by the Centre for Development of Telematics (C-DOT),
an Indian Government owned telecommunications technology development centre,[1] and operated
by Telecom Enforcement Resource and Monitoring (TERM) Cells.[2] The CMS gives India's security
agencies and income tax officials centralized access to India's telecommunications network[3] and
the ability to listen in on and record mobile, landline and satellite[4] calls and voice over Internet
Protocol (VoIP), and read private emails, SMS and MMS, and track the geographical location of
individuals,[5] all in real time.[6] It can also be used to monitor posts shared on social media such
as Facebook, LinkedIn and Twitter, and to track users' search histories on Google,[7][8][9] without any
oversight by courts or Parliament. According to a government official, an agency "shall enter data
related to target in the CMS system and approach the telecom services provider (TSP), at which
point the process is automated, and the provider simply sends the data to a server which forwards
the requested information".[10] The intercepted data is subject to pattern recognition and other
automated tests to detect emotional markers, such as hate, compassion or intent, and different
forms of dissent.[6] Telecom operators in India are obligated by law to give access to their networks
to every law enforcement agency.[11] From 2014 onwards, all mobile telephony operators will be
required to track and store the geographical location from which subscribers make or receive
calls,[12] meaning that, in addition to the contact number of the person a caller speaks to, the duration
of the call and details of the mobile tower used, the Call Data Records (CDR) will now also contain
details of the caller's location. The system aims to attain a tracking accuracy of 80% in the first year
of operation, followed by 95% accuracy in the second year, in urban areas. Commander (retd)
Mukesh Saini, former national information security coordinator of the Government of India,
expressed fears that all CDR details would eventually be fed into the central server for access
through the CMS.[13]

The new system comes under the jurisdiction of the Indian Telegraph Act, a law
formulated by the British in 1885 during the Raj, which allows for monitoring communication in the
"interest of public safety."
System details
CMS creates central and regional databases, which authorized Central and State level government
agencies can use to intercept and monitor any landline, mobile or internet connection in India. The
CMS will converge all the interception lines at one location, for access by authorized government
agencies. CMS is connected with the Telephone Call Interception System (TCIS), which helps in
monitoring voice calls, SMS and MMS, fax communications on landlines, CDMA, GSM, video calls
and 3G networks.[42] CMS equips government agencies with Direct Electronic Provisioning, filters
and alerts on the target numbers, and enables Call Data Records (CDR) analysis and data mining to
identify the personal information of the target numbers, without any manual intervention from
telecom service providers (TSPs).[43]

The Indian government agencies known to have been authorized to make intercept requests through
CMS are the Central Board of Direct Taxes (CBDT), the Central Bureau of Investigation (CBI),
the Defense Intelligence Agency (DIA), the Directorate of Revenue Intelligence (DRI),
the Enforcement Directorate, the Intelligence Bureau (IB), the Narcotics Control Bureau (NCB), the
National Investigation Agency (NIA) and the Research & Analysis Wing (R&AW),[34] as well as
Military Intelligence of Assam and Jammu and Kashmir, and the Home Ministry.[16] Authorized
agencies are not required to seek a court order for surveillance or depend, as they did prior to CMS,
on Internet or telephone service providers to give them the data. The government has built intercept
data servers on the premises of private telecommunications firms,[18] which will allow it to tap into
communications, at will, without informing the service providers. The top bureaucrat in the home
ministry and his state-level deputies have the authority to approve requests for surveillance of
specific phone numbers, e-mails or social media accounts.[11]
Media reaction
Business Standard criticized the lack of a court warrant, stating, "Making the new system unusually
draconian is the discretion it provides bureaucrats to approve requests for surveillance, which can be
made by any one of nine government agencies, including the Central Bureau of Investigation,
Intelligence Bureau and Income Tax Department. With the union and state home secretaries
permitted to approve requests for surveillance, this bypasses the traditional system of a court
warrant being needed for monitoring a citizen." Firstpost criticized the lack of information from the
government about the project and the lack of legal recourse for a citizen whose personal details
were misused or leaked from the database. The paper stated, "One of the primary concerns raised
by experts is the sheer lack of public information on the project. So far, there is no official word from
the government about which government bodies or agencies will be able to access the data; how will
they use this information; what percentage of population will be under surveillance; or how long the
data of a citizen will be kept in the record." It also criticized the lack of judicial oversight, but
conceded that "given the use of technology by criminals and terrorists, government surveillance per
se, seems inevitable".[24]

Human rights and civil-liberties groups' reactions
Human rights and civil-liberties groups have expressed concerns that the CMS is prone to abuse
and is an infringement of privacy and civil liberties.[23] Critics have described it as "abuse of privacy
rights and security-agency overreach", and counterproductive in terms of security.[17] Indian activists
have also raised concerns that the system will inhibit them from expressing their opinions and
sharing information, especially because the government has repeatedly used the Information
Technology Act, since it was amended in 2008, to arrest people for posting comments on social
media that are critical of the government, as well as to put pressure on websites such as Facebook
and Google to filter or block content, and to impose liability on private intermediaries to filter and
remove content from users.[15]



Optical fiber




Stealth Fiber Crew installing a 432-count fiber cable underneath the streets of Midtown Manhattan, New York City


A TOSLINK fiber optic audio cable with red light being shone in one end transmits the light to the other end


An optical fiber junction box. The yellow cables are single mode fibers; the orange and blue cables are multi-mode
fibers: 50/125 µm OM2 and 50/125 µm OM3 fibers respectively.
An optical fiber (or optical fibre) is a flexible, transparent fiber made of extruded glass (silica) or
plastic, slightly thicker than a human hair. It can function as a waveguide, or light pipe,[1] to transmit
light between the two ends of the fiber.[2] Power over Fiber (PoF) optic cables can also work to
deliver an electric current for low-power electric devices.[3] The field of applied science and
engineering concerned with the design and application of optical fibers is known as fiber optics.
Optical fibers are widely used in fiber-optic communications, where they permit transmission over
longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead
of metal wires because signals travel along them with less loss and are also immune
to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so
that they may be used to carry images, thus allowing viewing in confined spaces. Specially designed
fibers are used for a variety of other applications, including sensors and fiber lasers.
Optical fibers typically include a transparent core surrounded by a transparent cladding material with
a lower index of refraction. Light is kept in the core by total internal reflection. This causes the fiber
to act as a waveguide. Fibers that support many propagation paths or transverse modes are
called multi-mode fibers (MMF), while those that only support a single mode are called single-mode
fibers (SMF). Multi-mode fibers generally have a wider core diameter, and are used for short-
distance communication links and for applications where high power must be transmitted. Single-
mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
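Total internal reflection holds for rays striking the core–cladding boundary above the critical angle θc = arcsin(n_clad/n_core), and the related numerical aperture is NA = √(n_core² − n_clad²). A small sketch in C with typical, assumed refractive indices (not values from this article):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double n_core = 1.48;    /* core refractive index (typical, assumed) */
    double n_clad = 1.46;    /* cladding index, lower as the text requires (assumed) */

    double theta_c = asin(n_clad / n_core) * 180.0 / M_PI;
    double na = sqrt(n_core * n_core - n_clad * n_clad);

    printf("critical angle: %.1f degrees\n", theta_c);  /* about 80.6 */
    printf("numerical aperture: %.3f\n", na);           /* about 0.242 */
    return 0;
}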
Joining lengths of optical fiber is more complex than joining electrical wire or cable. The ends of the
fibers must be carefully cleaved, and then carefully spliced together with the cores perfectly aligned.
A mechanical splice holds the ends of the fibers together mechanically, while fusion splicing uses
heat to fuse the ends of the fibers together. Special optical fiber connectors for temporary or semi-
permanent connections are also available.
Advantages over copper wiring
The advantages of optical fiber communication with respect to copper wire
systems are:
Broad bandwidth
A single optical fiber can carry 3,000,000 full-duplex voice calls or
90,000 TV channels.
Immunity to electromagnetic interference
Light transmission through optical fibers is unaffected by
other electromagnetic radiation nearby. The optical fiber is electrically
non-conductive, so it does not act as an antenna to pick up
electromagnetic signals. Information traveling inside the optical fiber
is immune to electromagnetic interference, even electromagnetic
pulses generated by nuclear devices.
Low attenuation loss over long distances
Attenuation loss can be as low as 0.2 dB/km in optical fiber cables,
allowing transmission over long distances without the need
for repeaters; a loss calculation using this figure is sketched after this list.
Electrical insulator
Optical fibers do not conduct electricity, preventing problems
with ground loops and conduction of lightning. Optical fibers can be
strung on poles alongside high voltage power cables.
Material cost and theft prevention
Conventional cable systems use large amounts of copper. In some
places, this copper is a target for theft due to its value on the scrap
market.
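Using the 0.2 dB/km figure quoted above, received power falls off as P_out = P_in · 10^(−αL/10). A short sketch of that loss calculation in C; the launch power is an assumed value:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double alpha = 0.2;      /* attenuation, dB/km (figure from the list) */
    double p_in = 1.0;       /* launch power, mW (assumed) */
    double km;

    for (km = 0.0; km <= 100.0; km += 25.0)
        printf("%6.1f km: %.3f mW\n", km, p_in * pow(10.0, -alpha * km / 10.0));
    return 0;
}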
Sensors
Main article: Fiber optic sensor
Fibers have many uses in remote sensing. In some
applications, the sensor is itself an optical fiber. In other
cases, fiber is used to connect a non-fiberoptic sensor to a
measurement system. Depending on the application, fiber
may be used because of its small size, or the fact that
no electrical power is needed at the remote location, or
because many sensors can be multiplexed along the length
of a fiber by using different wavelengths of light for each
sensor, or by sensing the time delay as light passes along
the fiber through each sensor. Time delay can be
determined using a device such as an optical time-domain
reflectometer.
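For the time-delay approach, an optical time-domain reflectometer locates an event at distance d = c·t/(2n) from a measured round-trip delay t, where n is the fiber's group index. A sketch in C with assumed values:

#include <stdio.h>

int main(void)
{
    const double c = 299792.458;   /* speed of light in vacuum, km/s */
    double n = 1.468;              /* group index of the fiber (assumed) */
    double t = 98.0e-6;            /* measured round-trip delay, s (assumed) */

    /* divide by two because the reflected light travels out and back */
    double d = c * t / (2.0 * n);
    printf("event located %.2f km along the fiber\n", d);
    return 0;
}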
Power transmission
Optical fiber can be used to transmit power using a photovoltaic cell to convert the light into
electricity.[33] While this method of power transmission is not as efficient as conventional ones, it is
especially useful in situations where it is desirable not to have a metallic conductor, as in the case of
use near MRI machines, which produce strong magnetic fields.[34] Other examples are for powering
electronics in high-powered antenna elements and measurement devices used in high-voltage
transmission equipment.
