
Unit V

Safety & Risk

Safety and risk can be defined as follows: a product or a project is safe with respect to a
person or a group at a given time if its risks are fully known and the risks are
judged to be acceptable in the light of settled perspectives.
Probability of safety = 1 – probability of risk
Risk is a function of the level of hazard and the probability of occurrence of the
hazard:
Risk = probability of occurrence of the hazard × magnitude of the consequence

Risk may be defined as the undesirable consequences of an activity in relation to
the likelihood of these consequences being realized.
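The two relations above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the numeric severity scale are my own assumptions, not from the text.

```python
def risk(p_hazard: float, consequence: float) -> float:
    """Risk = probability of occurrence of the hazard x magnitude of the consequence."""
    return p_hazard * consequence

def probability_of_safety(p_risk: float) -> float:
    """Probability of safety = 1 - probability of risk."""
    return 1.0 - p_risk

# Example: a hazard with a 2% chance of occurring and a consequence rated 50
# on some agreed severity scale (both values are hypothetical).
print(risk(0.02, 50))
print(probability_of_safety(0.02))
```

Note that the consequence magnitude only has meaning relative to whatever severity scale the assessors have agreed on, which is why the text stresses "settled perspectives".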

Biotechnology-derived pharmaceuticals (biopharmaceuticals) were initially
developed in the early 1980s. The first marketing authorizations were
granted later in the decade. Several guidelines and points-to-consider documents
have been issued by various regulatory agencies regarding safety assessment of
these products. Review of such documents, which are available from regulatory
authorities, may provide useful background in developing new
biopharmaceuticals. Considerable experience has now been gathered with
submission of applications for biopharmaceuticals. Critical review of this
experience has been the basis for development of this guidance that is intended to
provide general principles for designing scientifically acceptable preclinical
safety evaluation programs.

Objectives
Regulatory standards for biotechnology-derived pharmaceuticals have
generally been comparable among the European Union, Japan and United States.
All regions have adopted a flexible, case-by-case, science-based approach to
preclinical safety evaluation needed to support clinical development and
marketing authorisation. In this rapidly evolving scientific area, there is a need
for common understanding and continuing dialogue among the regions. The
primary goals of preclinical safety evaluation are: 1) to identify an initial safe
dose and subsequent dose escalation schemes in humans; 2) to identify potential
target organs for toxicity and for the study of whether such toxicity is reversible;
and 3) to identify safety parameters for clinical monitoring. Adherence to the
principles presented in this document is intended to improve the quality and
consistency of the preclinical safety data supporting the development of
biopharmaceuticals.

Scope
This guidance is intended primarily to recommend a basic framework for the
preclinical safety evaluation of biotechnology-derived pharmaceuticals. It applies
to products derived from characterised cells through the use of a variety of
expression systems including bacteria, yeast, insect, plant, and mammalian cells.
The intended indications may include in vivo diagnostic, therapeutic, or
prophylactic uses. The active substances include proteins and peptides, their
derivatives and products of which they are components; they could be derived
from cell cultures or produced using recombinant DNA technology including
production by transgenic plants and animals. Examples include but are not
limited to: cytokines, plasminogen activators, recombinant plasma factors,
growth factors, fusion proteins, enzymes, receptors, hormones, and monoclonal
antibodies.

Assessment of safety and risk

Risk is a function of the level of hazard and the probability of occurrence of the
hazard. The risk may be defined as the undesirable consequences of an activity in
relation to the likelihood of these consequences being realized. A biohazard is any
imaginable adverse effect that can be identified and measured. Risk assessment
involves determination of the potential and anticipated adverse effect of the
recombinant DNA research to the concerned workers, and of the products of such
research on human health and environment consequent to their accidental or
deliberate release (in case of living organisms) or as a result of their consumption.
Risk assessment should be carried out in a scientifically sound and transparent
manner, and should be in accordance with recognized risk assessment techniques.

Microbiological risk assessment (WHO 2004)

The backbone of the practice of biosafety is risk assessment. While there are
many tools available to assist in the assessment of risk for a given procedure or
experiment, the most important component is professional judgement. Risk
assessments should be performed by the individuals most familiar with the
specific characteristics of the organisms being considered for use, the equipment
and procedures to be employed, animal models that may be used, and the
containment equipment and facilities available. The laboratory director or
principal investigator is responsible for ensuring that adequate and timely risk
assessments are performed and for working closely with the institution’s safety
committee (if existing) and biosafety personnel (if existing) to ensure that
appropriate equipment and facilities are available to support the work being
considered. Once performed, risk assessments should be routinely reviewed and
revised when necessary, taking into consideration acquisition of new data having
a bearing on the degree of risk and other relevant new information from the
scientific literature. One of the most helpful tools available for performing a
microbiological risk assessment is the listing of risk groups for microbiological
agents. However, simple reference to the risk grouping for a particular agent is
insufficient in the conduct of a risk assessment. Other factors that should be
considered, as appropriate, include:
– pathogenicity of the agent and infectious dose
– consideration of the outcome of exposure
– natural route of infection
– other routes of infection, resulting from laboratory manipulations (parenteral,
airborne, ingestion)
– stability of the agent in the environment
– concentration of the agent and volume of concentrated material to be
manipulated
– presence of a suitable host (human or animal)
– information available from animal studies and reports of laboratory-acquired
infections or clinical reports
– laboratory activity planned (concentration, sonication, aerosolization,
centrifugation, etc.)
– any genetic manipulation of the organism that may extend the host range of
the agent or alter the agent’s sensitivity to known, effective treatment
regimens
– local availability of effective prophylaxis or therapeutic interventions.
On the basis of the information ascertained during the risk assessment, a biosafety level
can be assigned to the planned work and appropriate personal protective equipment
selected.

Classification of infective microorganisms by risk group

Risk Group 1 (no or low individual and community risk)
A microorganism that is unlikely to cause human or animal disease.
Risk Group 2 (moderate individual risk, low community risk)
A pathogen that can cause human or animal disease but is unlikely to be a serious hazard
to laboratory workers, the community, livestock or the environment. Laboratory
exposures may cause serious infection, but effective treatment and preventive measures
are available and the risk of spread of infection is limited.
Risk Group 3 (high individual risk, low community risk)
A pathogen that usually causes serious human or animal disease but does not ordinarily
spread from one infected individual to another. Effective treatment and preventive
measures are available.
Risk Group 4 (high individual and community risk)
A pathogen that usually causes serious human or animal disease and that can be readily
transmitted from one individual to another, directly or indirectly. Effective treatment and
preventive measures are not usually available.
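The four risk groups can be captured as a small lookup table. This is a minimal sketch; the dictionary and function names are my own, and, as stressed above, the risk group alone does not determine the biosafety level for a given piece of work.

```python
# WHO risk groups as summarized above (descriptions paraphrased from the text).
RISK_GROUPS = {
    1: "no or low individual and community risk",
    2: "moderate individual risk, low community risk",
    3: "high individual risk, low community risk",
    4: "high individual and community risk",
}

def describe(risk_group: int) -> str:
    """Return the summary description for a WHO risk group (1-4)."""
    if risk_group not in RISK_GROUPS:
        raise ValueError(f"unknown risk group: {risk_group}")
    return RISK_GROUPS[risk_group]

print(describe(3))
```

In practice such a table is only the starting point of a risk assessment; the other factors listed earlier (route of infection, concentration, planned manipulations, availability of prophylaxis, and so on) must still be weighed.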

Specimens for which there is limited information

The risk assessment procedure described above works well when there is adequate
information available. However, there are situations when the information is insufficient
to perform an appropriate risk assessment, for example, with clinical specimens or
epidemiological samples collected in the field. In these cases, it is prudent to take a
cautious approach to specimen manipulation.
1. Standard precautions (2) should always be followed, and barrier protections
applied (gloves, gowns, eye protection), whenever samples are obtained from patients.
2. Basic containment – Biosafety Level 2 practices and procedures should be the
minimum requirement for handling specimens.
3. Transport of specimens should follow national and/or international rules and
regulations.
Some information may be available to assist in determining the risk of handling these
specimens:
1. Medical data on the patient
2. Epidemiological data (morbidity and mortality data, suspected route of transmission,
other outbreak investigation data)
3. Information on the geographical origin of the specimen.
In the case of outbreaks of disease of unknown etiology, appropriate ad hoc guidelines
may be generated and posted by national competent authorities and/or WHO on the
World Wide Web (as was the case during the 2003 emergence of the severe acute
respiratory syndrome (SARS)) to indicate how specimens should be consigned for
shipment and the biosafety level at which they should be analysed.

Risk assessment and genetically modified microorganisms

Biosafety considerations for biological expression systems

Biological expression systems consist of vectors and host cells. A number of criteria must
be satisfied to make them effective and safe to use. An example of such a biological
expression system is plasmid pUC18. Frequently used as a cloning vector in combination
with Escherichia coli K12 cells, the pUC18 plasmid has been entirely sequenced. All
genes required for expression in other bacteria have been deleted from its precursor
plasmid pBR322. E. coli K12 is a non-pathogenic strain that cannot permanently colonize
the gut of healthy humans or animals. Routine genetic engineering experiments can
safely be performed in E. coli K12/pUC18 at Biosafety Level 1, provided the inserted
foreign DNA expression products do not require higher biosafety levels.

Biosafety considerations for expression vectors

Higher biosafety levels may be required when:

1. The expression of DNA sequences derived from pathogenic organisms may increase
the virulence of the GMO
2. Inserted DNA sequences are not well characterized, e.g. during preparation of
genomic DNA libraries from pathogenic microorganisms
3. Gene products have potential pharmacological activity
4. Gene products code for toxins.
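The four conditions above can be expressed as a simple flag check. This is an illustrative sketch; the class and field names are my own, not from the WHO text.

```python
from dataclasses import dataclass

@dataclass
class Construct:
    """Properties of a genetic construct relevant to the biosafety level."""
    sequences_from_pathogen_may_increase_virulence: bool = False
    insert_poorly_characterized: bool = False  # e.g. genomic libraries from pathogens
    product_pharmacologically_active: bool = False
    product_is_toxin: bool = False

def may_need_higher_level(c: Construct) -> bool:
    """True if any of the four listed conditions applies, i.e. a biosafety
    level above the host/vector baseline may be required."""
    return any([
        c.sequences_from_pathogen_may_increase_virulence,
        c.insert_poorly_characterized,
        c.product_pharmacologically_active,
        c.product_is_toxin,
    ])

# Routine cloning in E. coli K12/pUC18 with a well-characterised, inert insert:
print(may_need_higher_level(Construct()))
# The same host/vector system carrying a toxin gene:
print(may_need_higher_level(Construct(product_is_toxin=True)))
```

This mirrors the E. coli K12/pUC18 example above: the expression system itself supports Biosafety Level 1, but the inserted DNA can raise the requirement.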
Viral vectors for gene transfer
Viral vectors, e.g. adenovirus vectors, are used for the transfer of genes to other cells.
Such vectors lack certain virus replication genes and are propagated in cell lines that
complement the defect. Stocks of such vectors may be contaminated with replication-
competent viruses, generated by rare spontaneous recombination events in the
propagating cell lines, or may derive from insufficient purification. These vectors should
be handled at the same biosafety level as the parent adenovirus from which they are
derived.

Transgenic and “knock-out” animals

Animals carrying foreign genetic material (transgenic animals) should be handled in
containment levels appropriate to the characteristics of the products of the foreign
genes. Animals with targeted deletions of specific genes (“knock-out” animals) do not
generally present particular biological hazards. Examples of transgenic animals include
animals expressing receptors for viruses normally unable to infect that species. If such
animals escaped from the laboratory and transmitted the transgene to the wild animal
population, an animal reservoir for that particular virus could theoretically be generated.
This possibility has been discussed for poliovirus and is particularly relevant in the
context of poliomyelitis eradication. Transgenic mice expressing the human poliovirus
receptor generated in different laboratories were susceptible to poliovirus infection by
various inoculation routes, and the resulting disease was clinically and histopathologically
similar to human poliomyelitis. However, the mouse model differs from humans in that
alimentary tract replication of orally administered poliovirus is either inefficient or does
not occur. It is therefore very unlikely that escape of such transgenic mice to the wild
would result in the establishment of a new animal reservoir for poliovirus. Nevertheless,
this example indicates that, for each new line of transgenic animal, detailed studies
should be conducted to determine the routes by which the animals can be infected, the
inoculum size required for infection, and the extent of virus shedding by the infected
animals. In addition, all measures should be taken to assure strict containment of receptor
transgenic mice.

Transgenic plants
Transgenic plants expressing genes that confer tolerance to herbicides or resistance to
insects are currently a matter of considerable controversy in many parts of the world.
The discussions focus on the food-safety of such plants, and on the long-term ecological
consequences of their cultivation. Transgenic plants expressing genes of animal or human
origin are used to develop medicinal and nutritional products. A risk assessment should
determine the appropriate biosafety level for the production of these plants.

Risk assessments for genetically modified organisms


Risk assessments for work with GMOs should consider the characteristics of donor and
recipient/host organisms. Examples of characteristics for consideration include the
following.
Hazards arising directly from the inserted gene (donor organism)
Assessment is necessary in situations where the product of the inserted gene has known
biologically or pharmacologically active properties that may give rise to harm, for
example:
1. Toxins
2. Cytokines
3. Hormones
4. Gene expression regulators
5. Virulence factors or enhancers
6. Oncogenic gene sequences
7. Antibiotic resistance
8. Allergens.
The consideration of such cases should include an estimation of the level of expression
required to achieve biological or pharmacological activity.

Hazards associated with the recipient/host
1. Susceptibility of the host
2. Pathogenicity of the host strain, including virulence, infectivity and toxin production
3. Modification of the host range
4. Recipient immune status
5. Consequences of exposure.

Hazards arising from the alteration of existing pathogenic traits

Many modifications do not involve genes whose products are inherently harmful, but
adverse effects may arise as the result of the alteration of existing non-pathogenic or
pathogenic traits. Modification of normal genes may alter pathogenicity. In an attempt to
identify these potential hazards, the following points may be considered (the list is not
exhaustive).
1. Is there an increase in infectivity or pathogenicity?
2. Could any disabling mutation within the recipient be overcome as a result of the
insertion of the foreign gene?
3. Does the foreign gene encode a pathogenicity determinant from another organism?
4. If the foreign DNA does include a pathogenicity determinant, is it foreseeable that this
gene could contribute to the pathogenicity of the GMO?
5. Is treatment available?
6. Will the susceptibility of the GMO to antibiotics or other forms of therapy be affected
as a consequence of the genetic modification?
7. Is eradication of the GMO achievable?

Further considerations
The use of whole animals or plants for experimental purposes also requires careful
consideration. Investigators must comply with the regulations, restrictions and
requirements for the conduct of work with GMOs in host countries and institutions.
Countries may have national authorities that establish guidelines for work with GMOs,
and may help scientists classify their work at the appropriate biosafety level. In some
cases classification may differ between countries, or countries may decide to classify
work at a lower or higher level when new information on a particular vector/host system
becomes available. Risk assessment is a dynamic process that takes into account new
developments and the progress of science. The performance of appropriate risk
assessments will assure that the benefits of recombinant DNA technology remain
available to humankind in the years to come.

Risk benefit analysis from proposal to product


Researchers usually study specific and detailed problems that have drawn their curiosity
and belong to their field of expertise. Research is financed from various sources and
usually includes resources supplied by the government and/or industry. Public funding
usually covers specific and predefined research programs that are designed to attract
competitive proposals. Therefore scientists will usually be required to prepare research
applications that will be evaluated by expert panels. Application writing has become an
almost full-time job for laboratory heads. Usually, a number of groups contribute to a
research proposal. It is not unusual to write over 200 pages for a single research proposal.

Having succeeded in attracting funds, the conduct of research is monitored by various
agencies, and scientists may be required to follow certain guidelines that include safety
and ethical considerations. Examples of strictly controlled circumstances are the handling
of GM organisms, pathogenic organisms, radioactive materials or the use of laboratory
animals. As the size of the proposals grows, so does the need to make sure that the
consortium of research groups agrees on some basic cooperation principles that will be
followed throughout the project duration. Such consortium agreements may also take a
lot of time to prepare and can be highly complex, especially if industrial partners are
involved. Since public funding is paid for with tax money, it has been understood that the
outcome of the research is shared with the rest of the scientific community and utilised
for the benefit of the public.
However, continuous reductions in public funding, the increased involvement of
industrial parties and the calls for increased spending in applied sciences have changed
these views. Scientists and public institutions are more inclined now than in the past to
seek intellectual property protection for important scientific discoveries. Often, the
proceeds of such activities will be re-channelled to the research groups where the
invention took place.

New research results may be published in scientific journals. Before publication takes
place, the results and any interpretations are scrutinised by fellow scientists. Through
this process of review it is determined whether the results are of sufficient quality and
novelty to justify publication. The idea behind the necessity of publication is twofold.
First, other researchers must have the opportunity to check the results and continue to
develop the published findings. Second, publications are one way of measuring the
productivity and quality of a given group or individual. With the publication of scientific
results the knowledge enters the public domain, and anyone, including industrial parties,
may try to put the knowledge to work and develop new products that can be marketed.
However, if researchers protect their invention by filing for a patent, third parties would
need to pay for licensing rights to develop and market the research findings further.

Industry support is usually directly invested in research groups that have gained a
reputation or have shown a special expertise in a specific area of science. Industry-funded
research typically concerns problems that are linked to promising applications. The
contractors usually stipulate that although the patent rights may be held by the research
group, any rights to the exploitation of new discoveries will be owned and developed by
the sponsoring industrial partner. If the product makes it to the market place, usually a
very difficult and uncertain task, the research group/organisation would receive a certain
percentage of net sales.

In industrial settings, research and development is usually divided into various phases. In the first
phase research is carried out to verify and prove the concept of the discovery. This
discovery may originate from internal research activities or from collaborations with
public institutes. During a second phase, the research findings are refined and adapted to
the needs of the company. Throughout this process an eye is kept on competitors to see
whether similar research is being done and what progress is being made. During the
development phase, scientific results will lead to real products. Questions regarding costs
of production and marketing will finally determine whether the product will be developed
or shelved. All products must be tested and their safety assessed before introduction onto
the market place. In the case of GM crops, extensive glasshouse and field trials as well as
health safety tests are carried out before crops can be approved for environmental release.
Safety considerations are very important, as are the approval procedures and consistent
legislation. If the product is exported, it needs approval by the appropriate agencies
around the world. This process is complicated by the fact that different countries have
different standards for product approval. In the end, some products that industry
would like to place on the market may face delays in approval or be rejected.
A lot of invested money and time may thus be wasted. The risks associated with
developing a new product are borne by industry; thus one successful product may need to
pay for many other products that have failed to make it onto the market. Products based
on GM plants are much more closely scrutinised than products of traditional plant
breeding. They have an especially tough time making it onto the market. In contrast to
plant breeding products, GM plants need to be shown to pose no hazard to public health
and the environment. In Europe, such new varieties are given only a limited time permit
for general release. All food products made, containing or composed of GMOs need to be
labelled and traced as such. This adds costs and complicates the marketing of GM products.

Risk analysis includes risk assessment, risk management and risk communication. Risk
assessment is the first and crucial part of the risk analysis process for GMOs. The
principal approach on a case-by-case basis and proceeding step-by-step is generally
accepted, but harmonisation of the different methods used on an international basis is
needed. Risk assessment needs to comply with high scientific standards. Scientific
uncertainty in assessing potential risks needs to be acknowledged and dealt with in an
open and transparent way that also includes the public. More research is necessary to fill
some of the knowledge gaps.

Risk assessment has a long tradition in regulating human activities with the aim to
minimise or avoid risk to human health and the environment. Examples can be found in
the production of medical products, chemistry or nuclear power. According to European
regulations, the safety of GMOs has to be assessed prior to releases into the environment
and placing on the market. The approach is described in more detail in Directive
2001/18/EC on the deliberate release into the environment of GMOs, which was adopted
in April 2001 and repealed Directive 90/220/EEC in October 2002. In Annex II of this
Directive, the principles for the so-called environmental risk assessment, which also
includes human health effects, are laid down.
Concerning food, Regulation (EC) 258/97 on Novel Food and Novel Food
Ingredients stipulates risk assessment for foods that have not been used for human
consumption to a significant degree in the European Union before. Foods and feed
containing or consisting of or derived from GMOs are covered by Regulation (EC)
1829/2003, requiring one single risk assessment, carried out by the newly founded
European Food Safety Authority. The overall aim is to release only those GMOs that do
not pose any risk to human health or the environment. Possible positive effects of GMOs
are not subject to risk assessment.

What is Risk Assessment?

Risk assessment, the first part of risk analysis, is followed by risk management and risk
communication. Environmental risk assessment is defined by Directive 2001/18/EC as
the evaluation of risks to human health and the environment, whether direct or indirect,
immediate or delayed, which experimental deliberate release or deliberate release by
placing GMOs on the market may pose. Direct effects refer to primary effects, which are
due to the GMO itself, e.g. allergenicity of the derived novel GM food. In contrast,
indirect effects occur through a causal chain of events, e.g. interaction with other
organisms or effects due to a change of agricultural management due to the use of GM
crops. Immediate effects could be observed during the period of release of the GMO, e.g.
the establishment of weedy GM plants outside the agriculturally used fields. They can be
direct or indirect. Delayed effects would be observable at a later stage as a direct or
indirect effect, such as long-term effects from changed consumption patterns due to
GM food. Additionally, cumulative long-term effects on the environment and human
health have to be assessed. The objective of environmental risk assessment, according to
European legislation, is to identify and evaluate potential adverse effects of a GMO and
to elucidate if there is a need for risk management and suitable measures to be taken. In
the context of this section, the terms hazard and risk are defined as follows: A hazard is a
potential harmful characteristic (here of a GMO), which is an intrinsic property of the
organism investigated. Hazards can give rise to negative consequences. These
consequences can have different orders of magnitude and different likelihood of actually
coming true. Risk can be quantified by combining the likelihood of consequences of a
specific hazard with their magnitude. The principal approach to assess the safety of
GMOs is largely accepted. First of all risk assessment should be science-based and
carried out ensuring a very high scientific standard. For every GMO the risk assessment
is done on a case-by-case basis and in a stepwise manner. This means that, for example,
each GM plant is tested first in the laboratory, then on a small scale in a field trial,
followed by a large-scale field trial before authorisation for placing on the market can be
requested. The following step can only be carried out if the preceding step has shown that
the GMO does not pose any risk to human health or the environment. In contrast, the
interpretation and use of the results of the risk assessment differ within the European
Union Member States and internationally, depending for example on the models used for
comparisons. For example, Germany and the UK compare the use and the effects of GM
crops to conventional agriculture, while Austria or Sweden take an organically oriented,
input-reduced agriculture as the benchmark.
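The stepwise principle described above, under which each stage may only proceed once the preceding one has shown no risk, can be sketched as a small gating function. The stage names follow the text; the function and its interface are my own illustration.

```python
# Authorisation stages for a GM plant, in the order given in the text.
STAGES = [
    "laboratory",
    "small-scale field trial",
    "large-scale field trial",
    "application for placing on the market",
]

def next_stage(results_so_far):
    """Given pass/fail outcomes (True = no risk identified) for the stages
    completed so far, in order, return the next stage that may be carried
    out, or None if a stage identified a risk or all stages are complete."""
    for safe in results_so_far:
        if not safe:
            return None  # risk to human health or the environment: stop here
    if len(results_so_far) < len(STAGES):
        return STAGES[len(results_so_far)]
    return None  # all stages passed

print(next_stage([True, True]))   # after lab and small-scale trials pass
print(next_stage([True, False]))  # small-scale trial identified a risk
```

The gate is deliberately one-way: a single failed assessment blocks all later stages, which matches the requirement that each following step may only be carried out if the preceding one has shown no risk.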

How is Risk Assessment Carried Out?


The steps in environmental risk assessment are outlined in Table 3.2. Potential adverse
effects on the environment and human health depend strictly on the specific
characteristics of the GMO and thus to a certain extent on the inserted transgene(s) and
the respective traits. Potential hazards associated with GM crops are listed in a general
way in Table 3.3 and will partly be explained in the following sections, distinguishing
between environmental hazards and hazards for human health.

Spreading of the GMO in the Environment


What is the degree of invasiveness of conventional crops, and can transgenic traits
increase the potential of survival in non-cultivated surrounding areas or as volunteers on
the same plot? Many GM crops developed today carry herbicide tolerance as a new
trait,which is not expected to increase the fitness of the plants in the absence of the
selecting factor, i.e. the respective herbicide. The situation might be different when new
traits such as increased tolerance to dryness, salt or a reduced need for nutrients are
developed

Vertical and Horizontal Gene Transfer


The transfer of transgenes from GM crops to other related crops or weeds (vertical gene
transfer, out-crossing) is a very intensively studied and discussed issue.


Table 3.2. Steps in environmental risk assessment


1 Identification of characteristics that may cause adverse effects
2 Evaluation of the potential consequences of each adverse effect if it occurs
3 Evaluation of the likelihood of the occurrence of each identified potential adverse effect
4 Estimation of risk posed by each identified characteristic of the GMO
5 Application of management strategies for risks from the deliberate release or marketing
of the GMO
6 Determination of the overall risk of the GMO
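The six steps above can be sketched as a small pipeline. This is an illustrative sketch only: the numeric scoring scales and the max-aggregation used for the overall risk in step 6 are my assumptions, not prescribed by the Directive.

```python
def characteristic_risk(consequence: float, likelihood: float) -> float:
    """Steps 2-4: risk posed by one identified characteristic, combining the
    magnitude of the potential adverse effect with its likelihood."""
    return consequence * likelihood

def overall_risk(characteristics: dict) -> float:
    """Step 6: overall risk of the GMO, here taken as the worst single
    characteristic. `characteristics` maps a name (step 1) to a
    (consequence, likelihood) pair."""
    return max(characteristic_risk(c, l) for c, l in characteristics.values())

# Hypothetical scores on a 0-10 consequence scale and a 0-1 likelihood scale:
hazards = {
    "toxicity to non-target species": (8.0, 0.1),
    "out-crossing to related weeds": (5.0, 0.3),
}
print(overall_risk(hazards))
```

Step 5 (risk management) sits between the per-characteristic estimates and the overall determination: management measures would reduce the likelihood or consequence values before the final aggregation.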

Table 3.3. Potential hazards associated with GM crops

• Expression of toxic or allergenic compounds: potential for production of substances
that are toxic or allergenic to human beings or other species
• Effects on biogeochemistry: potential to negatively influence decomposition
processes in the soil, thus causing changes in nitrogen and carbon recycling
• Increased persistence in the environment and invasiveness: potential to confer an
ecological fitness advantage to the GM crop, causing persistence and invasiveness
(superweeds)
• Transfer of genetic material: potential to transfer the newly introduced genetic
material to other crops or weeds via cross-pollination, or to other organisms via
horizontal gene transfer. Depending on the transferred trait, such gene transfer might
not present a hazard
• Instability of the genetic modification: potential of reversing the down-regulation of a
naturally occurring hazardous trait
• Unintended effects: potential that the genetic modification leads to unintended
effects, e.g. by influencing other genes of the organism, which might lead to
unexpected hazards.

The risk of gene transfer to related weed species depends very much on the GM plant
itself. Maize and potato do not have any compatible indigenous related weeds in Europe
that could receive transgenes via pollen flow. In contrast, oilseed
rape is a cross-pollinating species for which several related species exist, so outcrossing
cannot be ruled out. The extent of out-crossing depends on climatic conditions,
agricultural practices, viability of pollen, and availability of out-crossing partners. The
establishment of a trait in the wild population depends on the selective advantage the new
trait might confer. The possibility of gene transfer within the same crop species depends
on the specific crop. It can present a potential problem for agriculture, as in the case of
organic agriculture, where only very low levels of GM plants might be tolerated in the
harvest. The term horizontal gene transfer describes non-sexual gene transfer, e.g. from
plant to micro-organisms. Micro-organisms, especially bacteria, have the ability to take up
DNA from other organisms or their environment and to integrate the DNA into their
genome. Horizontal gene transfer has been discussed as a risk of gene escape into the
environment without any control. During evolution, horizontal gene transfer has taken
place, but it is considered to be a very rare event. Still, it cannot be ruled out and in the
context of antibiotic-resistance marker genes this possibility has attracted a lot of
attention. According to Directive 2001/18/EC, antibiotic-resistance marker genes should
be phased out for GMOs to be placed on the market by the end of 2004. Of course,
alternative marker genes, such as those conferring the ability to metabolise new
substrates, have to undergo new risk assessments.

Potential Trait-Specific Environmental Effects

The potential consequences of the general effects discussed above depend mainly on the
transgenic trait of the GMO. To date, the main traits for GM crops have been herbicide
tolerance and pest resistance. Herbicide tolerance genes confer tolerance to broad
spectrum herbicides like glyphosate (Round-Up) or glufosinate (Basta). This trait
represented 75% of all GM crops planted commercially in the year 2002. Possible trait-
specific environmental effects, apart from the ones discussed in the previous section, are
mainly due to the application of the respective herbicide and changes in crop
management. Glyphosate and glufosinate are said to be more environmentally friendly
than other herbicides in use. Easier and fewer applications might lead to less pollution of
soil and ground water. The possibility of a later application during cultivation could lead
to a better soil coverage with plants (weeds) and less erosion. On the other hand a
permanent use could reduce biodiversity of weeds and related animals considerably. In
2002, 17% of commercially planted GM crops world-wide were insect-resistant through
the expression of a toxin from the soil bacterium Bacillus thuringiensis (Bt) (see also
Sect. 2.2). The Bt toxin has been used for many years as a spray in organic agriculture.
Out-crossing of Bt-crops, conferring a selective advantage on Bt-producing weeds, is a
potential negative effect. Of greater concern are the unintended effects of Bt plants. This
issue has been widely discussed in the context of assumed damage to larvae of the
Monarch butterfly in the USA after being fed pollen of Bt-maize in a laboratory
setting. Adverse effects could not be confirmed by field trials. Soil organisms might come
into contact with the Bt toxin, as it is exuded via the plant roots. The effect on the soil
ecosystem is still unclear. Another important issue is the development of resistance
mechanisms against the toxin by the targeted pests. This is a normal process, occurring
with conventional synthetic pesticides after approximately 10 years. Development of insect
resistance is therefore expected, which would also render the Bt toxin useless for organic
agriculture. The application of certain risk management strategies, with refuge areas
where non-Bt-plants are grown to delay the development of resistance, is requested in the
USA.

Potential Effects on Human Health


Food consisting of, or derived from, GMOs is tested for potential negative effects on
human health according to Regulation (EC) 1829/2003. The assessment includes tests
for toxic effects, allergenicity and unfavourable changes in nutrient composition. Not
only genetic modification but also plant breeding in general could potentially lead to
unexpected or unintended changes in concentration of toxic substances, anti-nutrients or
nutrient composition. However, conventional food is not subject to similar examinations.
A starting point for the safety evaluation of GM foods is the application of the concept of
substantial equivalence. This concept was first formulated by the OECD in 1993 as a guiding
tool and has been developed further since then; it is now internationally
accepted, although criticised as being too general and poorly defined. In the EU, with the
introduction of Regulation (EC) 1829/2003, the concept has been abandoned. Substantial
equivalence is based on the comparison of the GM crop with the appropriate
conventional counterpart (considered to be safe on the basis of long experience of use)
with respect to phenotype, agronomic characteristics and food composition (key
nutrients, antinutrients, toxicants typical of the plant). Three scenarios are distinguished:
1. The GM food or plant is substantially equivalent to its conventional counterpart
and is thus considered to be as safe as this conventional counterpart. This is the case
when the end product does not contain the newly introduced protein, e.g. sugar from GM
sugar beets, or when the newly introduced protein has been part of the human diet before.
No further safety testing would be necessary.
2. The GM food or plant is substantially equivalent except for the inserted trait e.g. the
Bt-protein from GM maize. The safety tests would apply only to the newly introduced
protein.
3. The GM food or plant is not equivalent to its conventional counterpart. This would be
the case for oil from oilseed rape with changed oil composition. In this case the whole
plant or food would be subject to safety assessment.
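The three scenarios above amount to a simple decision rule about the scope of safety testing. A minimal sketch follows; the function name and boolean flags are illustrative assumptions, not regulatory terminology:

```python
def safety_testing_scope(equivalent: bool, differs_only_in_trait: bool) -> str:
    """Map a substantial-equivalence comparison onto the scope of safety testing."""
    if equivalent and not differs_only_in_trait:
        # Scenario 1: fully equivalent (e.g. sugar from GM sugar beet)
        return "no further testing"
    if equivalent and differs_only_in_trait:
        # Scenario 2: equivalent except for the inserted trait (e.g. Bt maize)
        return "test newly introduced protein"
    # Scenario 3: not equivalent (e.g. oilseed rape with changed oil composition)
    return "assess whole plant or food"
```
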

An analysis of key components is carried out to compare GM plants or food with
conventional counterparts. The OECD has compiled so-called Consensus Documents, which list
the minimal key components of specific crops that should be checked when comparing GM and
non-GM crops. Consensus Documents are available for potato, sugar beet, soybean and
low erucic acid oilseed rape. International harmonisation is considered necessary to
prevent trade barriers. In July 2003 the
Codex Alimentarius Commission adopted the “Principles for the risk analysis of foods
derived from biotechnology”. Toxicology assessments are not considered to pose any
problems with highly purified substances but are more difficult with whole foods. Many
conventional crops produce low levels of known toxic substances (e.g. lectins in beans,
solanine in potatoes, erucic acid in rapeseed) or antinutrients (e.g. trypsin protease
inhibitors, which interfere with protein digestion, and phytic acid, which binds minerals). These
substances are present at levels significant to human health but are inactivated by food
processing, e.g. cooking. The potential for allergenicity is difficult to assess. To date,
no methods exist that can directly identify a new protein as allergenic.
Indirect methods are used instead, based on general characteristics of known allergens,
such as typically large protein size, exceptional stability, amino acid sequence
homology to known allergens and the quantity of the respective protein in the crop
(generally above 1%). Of the huge number of proteins in food, only very few are
allergens. Known allergens are found in milk, eggs, peanuts, tree nuts, soybean, fish,
crustaceans and wheat. Currently, only in one case has a transgenic protein been shown to
be allergenic. A protein from Brazil nut, which was transferred to soybean to enhance the
nutritive value for feed purposes, turned out to be a major allergen. This GM soybean has
never been marketed. Starlink maize is another GM crop for which potential
allergenicity of the newly introduced
protein has been discussed. This GM maize contains the Bt protein Cry9C, which
could be a potential allergen because it shows some of the general features of allergenic
proteins, e.g. molecular weight and relative resistance to gastric proteolytic degradation
as well as to heat and acid treatment. For this reason Starlink maize was only authorised
to be used for feed in the U.S. However, Starlink maize has been detected in small
amounts in maize food products, which called into question the segregation systems in place.
Some consumers reported allergic reactions after consumption of maize products, but a
connection to Starlink and thus to the Cry9C protein has not been found by U.S. Centers
for Disease Control and Prevention (CDC). However, due to shortcomings in the
investigation, the question of whether or not Cry9C is an allergen still
cannot be answered with absolute certainty.
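The sequence-homology screening used in these indirect assessments can be illustrated in code. The sketch below flags a candidate protein if it shares a run of 8 identical contiguous amino acids with a known allergen; the 8-residue window is one commonly cited criterion, and the function is an illustration, not a validated allergenicity test:

```python
def shares_contiguous_8mer(candidate: str, allergen: str) -> bool:
    """True if any 8 contiguous amino acids of the candidate protein also
    occur in the known allergen sequence (both given as one-letter codes)."""
    allergen_kmers = {allergen[i:i + 8] for i in range(len(allergen) - 7)}
    return any(candidate[i:i + 8] in allergen_kmers
               for i in range(len(candidate) - 7))
```

In practice such screening is combined with overall identity over longer windows and with the stability and abundance considerations described above.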

Scientific Uncertainty

In many cases of potential environmental or health risks, the scientific knowledge base is
not good enough to assess potential risks in a quantitative way and with sufficient
certainty. A profound understanding of complex ecological systems is lacking, as is the
knowledge needed to predict the long-term health effects of novel foods in the diet.
However, it is important to be aware that this is true not only for GM
crops and GM food but also for new varieties of conventional crops and novel exotic
foods that have not been consumed in Europe before. It should also be noted that GM
crops and food are examined far more extensively
than any conventional crop or food. As quantitative risk assessment
is not possible in many cases, a qualitative evaluation system has been developed. The
magnitude of potential consequences can be described as negligible, low, moderate and
severe. Also the likelihood that these consequences will come into effect can be assigned
as negligible, low, moderate or high. The risk must then be assessed by combining the
likelihood with the magnitude of consequences. For example, a high magnitude of
consequences of an adverse effect combined with a low likelihood of the adverse effect
being realised could result in a moderate risk. The final evaluation depends on the
specific GMO and needs to be considered on a case-by-case basis.
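The qualitative scheme described above can be pictured as a risk matrix over the two ordinal scales. In the sketch below, the combination rule (averaging the two ranks) is an illustrative assumption; as the text notes, the final evaluation is case-by-case:

```python
MAGNITUDE = ["negligible", "low", "moderate", "severe"]
LIKELIHOOD = ["negligible", "low", "moderate", "high"]
RISK = ["negligible", "low", "moderate", "high"]

def qualitative_risk(magnitude: str, likelihood: str) -> str:
    """Combine magnitude and likelihood ranks into a qualitative risk level."""
    # Average the two ordinal ranks, rounding down; e.g. severe consequences
    # with low likelihood -> moderate risk, matching the example in the text.
    m = MAGNITUDE.index(magnitude)
    l = LIKELIHOOD.index(likelihood)
    return RISK[(m + l) // 2]
```
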

Risk Management
If the risk assessment identifies a risk, a risk management strategy may be developed to
minimise or mitigate it (see also Sect. 4.5). 100% safety (0% risk) is not achievable;
uncertainty is therefore an unavoidable part of risk assessment
and risk management. Risk management measures could include:
– Confinement strategies, e.g. certain GM crops are only allowed to be grown in
greenhouses.
– Restricted use, e.g. the growth of GM crops could be restricted to certain geographical
areas.
– Monitoring following experimental release or commercialisation of GM crops or GM
food. Monitoring can be used to identify predicted or unforeseen effects.
– Guidelines and technical support, e.g. introduction of refuge areas to minimise
resistance development of pests or advice on good agricultural practices such as crop
rotation and weed control to avoid weediness of GM crops and GM volunteer plants.
– Record keeping (the use of documentation), e.g. as foreseen in Regulation (EC)
1830/2003 on traceability of GM crops and food as an important part of risk
management.
In addition, the design of GM crops could be changed towards male sterile varieties or to
the production of sterile seeds (e.g. terminator technology). The latter is especially
controversial as the production of sterile seeds will prevent farmers from saving seeds,
forcing them to buy new seeds every year.
The Precautionary Principle as Part of Risk Management
Very often scientific data are unavailable or insufficient to assess a possible risk in
relation to a GM crop in a meaningful manner. Several questions remain unaddressed due to
lack of data on fundamental biological phenomena such as out-crossing behaviour in
oilseed rape or the effects of GM crops on the soil ecosystem. Scientific uncertainty in
risk assessment leads to the question of how to deal with risks that cannot be sufficiently
quantified. The precautionary principle was introduced at the 1992 Rio Conference on the
Environment and Development in Article 15 of the Rio Declaration:
In order to protect the environment, the precautionary approach shall be widely applied by
States according to their capabilities. Where there are threats of serious or irreversible
damage, lack of full scientific certainty shall not be used as a reason for postponing
cost-effective measures to prevent environmental degradation. The precautionary
principle is also included in other international treaties and declarations, and
referred to in Directive 2001/18/EC. However, the application of the precautionary
principle is not clearly defined and harmonised and gives rise to different interpretations.
Generally, the precautionary principle encompasses a forward-looking approach, which
includes the prevention of damage and a cost-benefit analysis of action or inaction, and
requires that the response be proportionate
and cost-effective. The application of
the precautionary principle should be non-discriminatory and consistent, i.e. comparable
situations should not be treated differently and measures should be consistent with
measures adopted under similar circumstances. Measures taken have to be reviewed as
new scientific developments evolve.

Risk Communication
Risk communication to stakeholders is a key area of risk analysis. The expression
of each risk assessment should be unambiguous, transparent and relevant. Key rules,
identified by the Scientific Steering Committee (SSC) of the European Commission
include:
– Completeness of information
– Public access to documentation
– Transparency of discussions and motivations
– Frank acknowledgement of the various positions and contrasting views, including
speculations
– Clarity in wording and accuracy in use of specific expressions
– Recognition of different interests and stakeholders
– Recognition of social, cultural and ethical issues
Awareness of risk perception is another important factor in communicating risk. Risk
perception of experts and the general public might differ considerably, because personal
opinions are formed by information from different sources and integrated with personal
experiences. Among the factors influencing public perception of risk are, for example,
the extent to which the risk is voluntary, controllability of the risk and the novelty of the
risk form. The SSC suggests expressing conclusions of risk assessment in a more user-
acceptable manner by putting them into some form of context, e.g. through risk ranking
by comparing risk assessments of different, but related, sources of risk, the risk of
possible replacements and by using risk benefit analysis.

Reducing risk
Monitoring of GMOs

Plant biotechnology holds the promise of becoming an increasingly valuable tool
in the efforts to improve our health and achieve sustainable solutions for agriculture and
the environment. Improved vaccines, increased food production and more effective waste
treatment of polluted lands are but some of the results we may expect. However, plant
biotechnology may create undesirable side effects. In order to reduce these risks and at
the same time fully exploit the potential of this technology, a number of actions need to
be taken. The first is the creation and implementation of rules and regulations to govern
the application and trade of plant biotechnology products and second, enforcement of
these rules through risk assessment, risk monitoring and transparent management.
Why the Need for Monitoring of GM Products
The rapid increase in the commercial cultivation of transgenic plants worldwide, from 1.6
million ha in 1996 to more than 80 million ha today, indicates the increasing
importance of GM crops worldwide. Public attitude towards GM products varies from
total rejection to full acceptance. In order to address the societal and environmental
concerns, EU legislators have agreed on the general principles of traceability and
labelling of GM products to give consumers choice and to ensure traceability of GM
products throughout the entire production and distribution chain. An important part of
these new requirements is the monitoring of GMOs. The requirements for monitoring of
GMOs are detailed in Directive 2001/18. The envisaged monitoring plan should be case-
specific and used to identify the occurrence of adverse effects on human health and the
environment that were not anticipated in the initial risk assessment. The general
monitoring has to establish a routine surveillance practice, which includes the regular
monitoring of agricultural practice including its phytosanitary and veterinary regimes and
medical products. As both plant quarantine and veterinary inspections have
internationally recognised control systems their adjustment to include the surveillance of
GMOs is also envisaged in some countries. According to the EC Directive 2001/18, if a
notification for deliberate release in a member state is filed, it must include a monitoring
plan, together with the relevant methodology for post-release monitoring.
When a GM plant is considered for placing on the market, its monitoring plan covers
a 10-year period (the time the product is allowed to be marketed under the new
regulations). Under the Directive’s Article 20 (governing the monitoring and handling of
new information) the notifier is responsible for monitoring and reporting to the
Commission and the competent authorities of the member states. The competent
authorities have the opportunity to communicate with the Commission on new
information about the risks the GMO poses to human health and/or the environment and
thus lodge reasonable objections to further placing on the market of the particular
GMO. Member states have the opportunity of provisionally restricting or prohibiting the
sale or use of the particular GMO in their sovereign territories if new scientific findings
based on the monitoring data have an impact on environmental risk assessment or the
potential risks to human health or the environment. At present, such decisions rely to a
large degree on our ability to properly monitor each and every GM product placed or to
be placed onto the market.

What Needs to be Monitored and How


Monitoring should be seen as part of the decision-making process that also includes risk
assessment and risk management. As a general rule, risk assessment
addresses product development prior to its eventual placement onto the market (e.g. for
field trials or market introduction), while monitoring concentrates on events both prior to
and after the product has been authorised for specific use. Monitoring can be seen in
specific cases as a part of continuing risk assessment to identify previously unknown or
unintended hazards and risks. It may be imposed also as part of precautionary actions.
Monitoring can be categorised according to whether we wish to concentrate on the
impact of GMOs on the environment, or on the food industry and consumption, especially
as this relates to animal and human welfare. Due to the different natures of the
modified organisms and the introduced traits, each case should be considered individually.
Case-specific surveillance should be interdisciplinary and carried out over a sufficient
timescale to detect any unanticipated delayed or longer term, direct and indirect, health
and environmental effects of GMOs. Discussion of the type of monitoring to be carried
out is beyond the scope of this section as it will need to be tailored to individual cases.
Nevertheless, the first and most important step is common to all monitoring activities: the
ability to detect GMOs. Indeed, the current EU labelling and traceability regulatory
requirements for GMOs will put an increased focus on monitoring activities in
uncontained situations to detect and analyse GM materials that have already been
authorised and released for human or animal consumption. This will require that GMOs
can be detected at any point within the food chain, from the farm to the market. A
number of technical challenges exist in ensuring the reliable detection and evaluation of
GMOs. The challenges can be divided into three groups:
1. Handling and sampling methods, including those needed for identity preservation
2. Detection, identification and quantification methods
3. Availability of reference material

Handling and Sampling Methods


Concentrated efforts will be needed to ensure that GM material can be traced,
using appropriate sampling procedures, throughout the food chain. Sampling
needs to be carried out at the following points:
– Seed suppliers
Plant breeders will need to assure purity and identity of supplied plant material, to ensure
that GM materials can be traced back to their original sources (the so-called material
identity preservation). Point of origin sampling and
certification will be crucial.
– Farm level
Farmers will need to keep planting and harvesting equipment clean to avoid
cross contamination. They will need to assure there is no cross-pollination
between GM and non-GM plants and storage facilities will need to be kept
segregated.
– Transport and further storage level
Random samples will need to be taken to ascertain sample purity and all
equipment and storage facilities will need to be kept segregated, or at least
clean, to assure there is no cross contamination.
– Processing and distribution
Each component of the final product will need to be labelled so that its origin
could be traced.
Sampling methods are the key to obtaining meaningful qualitative and quantitative
results on GM content of food products and any subsequent safety tests. Statistical
analysis is an important element of designing appropriate sampling methodologies. Where
and how samples are taken, as well as the sample size, are critical to the final test result. All
samples submitted for testing should be representative of the batch tested. The less
uniform the contamination, the higher
the probability of a false negative result. This is crucial for commodity trade
as any false results can lead to expensive recalls.
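The false-negative problem can be made concrete under the simplest possible model. Assuming uniform mixing and independence between kernels (idealisations that real lots violate), the chance that a sample of n kernels contains no GM kernel at prevalence p is (1 − p)^n:

```python
def p_false_negative(prevalence: float, n_kernels: int) -> float:
    """Probability that a random sample of n kernels contains no GM kernel,
    assuming uniform mixing and independent draws (a simplification)."""
    return (1.0 - prevalence) ** n_kernels

# At the 0.9% labelling threshold, a 400-kernel sample still misses the GM
# fraction entirely in roughly 2.7% of cases.
```

Non-uniform contamination makes matters worse than this idealised bound, which is why representative sampling matters.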
Detection and Identification of GMOs
GMOs contain one or more additional characteristics, such as changed protein, sugar or
secondary metabolite levels. Genetic modification involves insertion of a foreign piece of
DNA into the genome of the organism to be modified. Such foreign sequences can be
detected at both the DNA and protein level. In both cases, quantitative and qualitative
methods are available, although with different sensitivities. While detection and
identification of GM raw material on the farm is relatively easy, in processed food
detection becomes increasingly difficult, and in some cases almost impossible.
Europe tends to use DNA detection methods, while the USA relies primarily on the
identification of the expressed gene product, i.e. its protein. Food containing or derived
from GMOs needs to be labelled as such. In addition, unintended contamination of non-
GM products with GMOs at a level higher than 0.9% requires such non-GM products to
be labelled as containing GMOs. It needs to be emphasised that agreed-upon levels of
detection (i.e. 0.9%) have nothing to do with the safety of the product. A product with
50% or 100% GMO content is just as safe to eat as a product with no GMO content. If a
product is not safe it will not be allowed onto the market regardless of whether it has 0%,
50%, or 100% GMO content. The agreed-upon level of labelling a product GMO or non-
GMO has to do with the advances of detection technologies. It has been recommended
that tolerance for GMO presence in food products should in the future be based on
agreed-upon contamination levels in the supply chain, not on the technological
developments of detection sensitivities.
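The labelling rule described above reduces to a simple threshold test. A sketch (the function and flag names are illustrative):

```python
UNINTENDED_PRESENCE_THRESHOLD = 0.9  # percent, per the EU rules discussed above

def requires_gm_label(gm_content_percent: float, intentional: bool) -> bool:
    """Intentional use of GM material is always labelled; unintended
    presence triggers labelling only above the 0.9% threshold."""
    return intentional or gm_content_percent > UNINTENDED_PRESENCE_THRESHOLD
```

Note that, as the text stresses, the threshold reflects detection and segregation practicalities, not a safety boundary.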

DNA Analytical Methods


DNA-based detection methods are primarily based on multiplying a specific (for example
genetically modified) DNA with the polymerase chain reaction (PCR) technique. Two
short pieces of synthetic DNA (called primers) are needed, each complementary to one
end of the DNA to be multiplied. During the reaction, copies of the target DNA sequence
are made and subsequently visualised. No copy is detected if the target DNA is not
present. It is possible to detect DNA in fresh plant tissue, but also in highly processed
foods like cakes or chocolate. The PCR results can be quantified, giving an estimate of
the amount of GM component in the sample tested. Advantages of the method are: it
quantifies molecules of interest (expressed on genomic equivalents basis); it allows GMO
content quantification of ingredients in virtually all foods on the market today; the quality
of sample preparation is not very important; it has high sensitivity.

Disadvantages of the method are: it does not indicate whether the introduced
DNA (gene) is active; it requires skilled technicians.
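Quantitative PCR typically expresses GM content as the ratio of transgene copies to copies of a species-specific reference gene, both inferred from threshold cycles (Ct). A simplified sketch assuming equal, perfect amplification efficiency (real assays calibrate against certified reference standards):

```python
def gm_content_percent(ct_transgene: float, ct_reference: float,
                       efficiency: float = 2.0) -> float:
    """Estimate GM content (%) from real-time PCR threshold cycles.

    Each extra cycle corresponds to one doubling at perfect efficiency, so the
    copy-number ratio is efficiency ** (ct_reference - ct_transgene)."""
    return 100.0 * efficiency ** (ct_reference - ct_transgene)
```

For example, a transgene Ct seven cycles later than the reference gene corresponds to 2^-7, i.e. about 0.78% GM content.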
Protein Analysis
The method is based on detecting the presence of specific proteins (antigens) with
antibodies, and on enzyme assays that detect the activity of a specific protein. Recently, a
very simple test called the lateral flow strip was developed, in which colour-dyed
antibodies fixed to a nitrocellulose strip are dipped into the extract of the plant
tissue bearing the transgenic proteins. The actual reaction time is less than 10 min, which
allows economical and fast visual evaluation of the results. In the food industry, enzyme-
linked immunoassays such as ELISA are well-accepted technologies for detection of food
contamination. Advantages of the method are: it indicates whether the new gene is active
and to what extent in the recipient organism, as indicated by the detected protein; it offers
sufficient sensitivity for the specific questions to be answered; it is quantitative (expressed
on a weight/weight basis); and it does not need special training or new sophisticated laboratory
equipment. Disadvantages of the method are: the quality of the extracted material is
important; it cannot be used efficiently on processed food; if the protein to be analysed
changes in structure it may not be detected; and proteins are much more easily degraded than
DNA, making them more difficult to handle and giving possible false negative results.
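For quantitative immunoassays such as ELISA, an unknown sample is read off a calibration curve built from standards of known concentration. A minimal sketch using linear interpolation between bracketing standards (real assays usually fit a four-parameter logistic curve):

```python
def concentration_from_absorbance(absorbance, standards):
    """Estimate protein concentration from an ELISA absorbance reading.

    `standards` is a list of (concentration, absorbance) calibration pairs
    sorted by increasing absorbance; the reading is interpolated linearly
    between the two standards that bracket it."""
    for (c0, a0), (c1, a1) in zip(standards, standards[1:]):
        if a0 <= absorbance <= a1:
            return c0 + (c1 - c0) * (absorbance - a0) / (a1 - a0)
    raise ValueError("absorbance outside the calibrated range")
```
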

Reference Material
Detection and analysis of GM samples is useful only if both the positive (certified
reference material) and negative controls are available for comparison with the analysed
GM samples. These comparisons will greatly increase the confidence and ability to make
meaningful conclusions about the analysed samples.

Certified Reference Material


During the last few years several companies and state agencies have developed reference
materials and different PCR systems to standardise their activity for harmonisation of the
monitoring methods. Certified GM and non-GM material that is well characterized and of
predictable quality is needed to allow laboratories across Europe to calibrate their
equipment and procedures. In addition, knowledge of the DNA sequences inserted into
the donor material needs to be available in order to detect GMOs. Consequently, the new
EU legislation requires GMO applicants to provide sufficient information on the
detection methods for each GMO to be authorized for use in the EU.
In recent validation studies at ISPRA Joint Research Centre of the EU, almost 30
laboratories from 13 countries used the same PCR primers to correctly identify transgenes
in soybean and corn samples (which contained 2% GM material). The detection was
more difficult in the case of corn, which was attributed to its larger genome. This type of
extensive validation is necessary to be certain that the observed results are correct. The
validation should be performed by independent laboratories using internationally
accepted standard methods.

Need for Valid Comparisons


Besides detecting the presence and levels of expression of transgenes, additional
monitoring needs to be carried out to evaluate their safety, for example to ascertain
the possible effects of the transgene on the target organism or the environment. This can
be, for example, effects on plant metabolism or the population ecology. The results need
to be compared to the impact that organic and conventional farming may have in similar
circumstances. Such data are needed to make meaningful comparisons between GM and
non-GM counterparts. However, our baseline knowledge on current agricultural practice,
including organic and conventional farming, is incomplete and indeed may be less than
what we now know about GM crops. If there are serious gaps in this knowledge, it
follows that monitoring, including gene detection, has to be extended to non-GM crops to
gain comparative data.

Future Trends
The future developments in the monitoring of GMOs will depend primarily on
three issues:
1. Advancements in the detection technologies
2. Improvements in the baseline knowledge
3. Trends in international agreements as far as the labelling and monitoring of
GMOs are concerned
Further developments in PCR technology may lead to a lowering of the 0.9% detection and
labelling threshold. Already today it is possible, although not consistently, to detect
GMOs when they represent only 0.1% of the product being tested. Improvements in
baseline knowledge will require that more research money is spent in characterising and
assessing non-GM counterparts. Finally, international trade requirements, public opinion
and advances in scientific knowledge will likely influence the scope and type of
monitoring activities. Internationally agreed-upon traceability and labelling requirements
may reduce the costs of monitoring in the long run.

WHICH RISKS ARE RELEVANT?


Of course, it is a far from simple matter to achieve a unanimous decision, among the
various stakeholders, about what risks are relevant when it comes to GM crops. And
since risk analysis will always be based on some kind of decision about what to look for,
and how far and deep to look, there is ample room for disagreements about the extent to
which GM crops pose a risk. To make this point we will start by arguing that risk
assessment always takes place within a more or less well-defined risk window.

The risk window


Jensen et al. (2003) provide the following description of scientific risk assessment: such
assessment, they say, is based on scientific and technical data, but these data must “fit
into a normative framework that is not of scientific nature. This normative framework
stems from the decision problem of whether or not a given application for releasing and
marketing a particular GMO should be approved. The framing of this decision problem,
and the further framing of the questions that the risk assessment is required to answer,
depend on a number of value judgements concerning the criteria for approval and,
consequently, the risks it is considered relevant to assess. Hence, an environmental risk
assessment views the world through a ‘risk window’ that only makes visible that which
has been predefined as relevant risks; and the particular size and structure of the ‘risk
window’ depends on value judgements as to what is considered to be an adverse effect
within what is considered the relevant horizon of time and space”. In other words, the
scientific risk assessment of a GMO is not a ‘mechanistic process’, but rather a process
dependent on the context (when, where) and the personnel (who) performing the
evaluation. In the following we shall try to support this statement by, first, offering a
crude analysis of the risks that have been judged relevant by scientists publishing
on the risks associated with GM crops; and, secondly, showing that the risk window goes
hand in hand with the regulatory requirements. Finally, we shall show that, even within
the scientific risk window, there are discrepancies among the experts when it comes to
the interpretation of available data.
What risks associated with GM crops have scientists judged relevant?
To obtain a crude indication of the kinds of adverse effect associated with GM crops that
scientists have concentrated upon we performed a literature survey using the database
Web-of-Science. This database covers approximately 8,500 research journals. We
searched for genetically modified plants and risks, models or experiments. This gave a
total of 2044 hits, which were then searched in order to determine the number of
publications addressing each key issue per year (Table 19-1). These results are
summarized graphically in Figure 19-1. It can be seen, for example, that vertical gene
flow and herbicide resistance both attract a fair amount of scientific attention, whereas
soil micro-organisms and fitness of insects living on GM crops seem to have been
considered less. Note that the human health (food safety) issue is a relatively minor
concern among the issues identified in the literature search. However, according to the
latest Eurobarometer survey, concerns about GM food, in particular, may indicate that
Europeans are more concerned about food safety than the environmental impact of agri-
food biotechnologies
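The literature tally behind Table 19-1 can be sketched as a simple count over search records; the (year, issues) data model below is an assumption for illustration, not the actual Web-of-Science export format:

```python
from collections import Counter

def publications_per_issue_per_year(records):
    """Count how many publications address each key issue in each year.

    `records` is an iterable of (year, issues) pairs, one per publication,
    where `issues` lists the risk keywords the publication addresses."""
    counts = Counter()
    for year, issues in records:
        for issue in issues:
            counts[(issue, year)] += 1
    return counts
```
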

The risk window has changed with new regulation


As mentioned previously, the risk window defined in the EU regulation has expanded
with the move from the former to the current directive. One of the most plausible
explanations for this expansion is that herbicide-resistant (HR) crops are the most
abundant type of modified crop, currently covering 82% of the total area sown with GM crops.
For these crops the major environmental risk seems to be connected with herbicide use.
In particular, there is a worry that tolerant or resistant weeds and crop volunteers will
develop, and that this will lead to environmentally unacceptable increases in herbicide
use when farmers increase doses, or mix herbicides having a different mode of action, in
order to control weeds. The herbicides used on HR crops are often believed to be less
environmentally problematic than those used on similar conventional varieties. However,
they are often highly effective in controlling weeds, and thus may leave fields with lower
weed numbers than their conventional counterparts. Some people believe this to be an
environmental issue in itself, because it may reduce the habitat available to other
organisms.

Scientists sometimes have different values – the MON 863 maize example


For a long time now, scientists have agreed that the current GM crops present no risk to
human health. In 2004, however, some scientists had doubts about a particular new type
of Bt-maize called MON 863 which had been developed to resist attacks on its roots by
larvae of the corn root worm. In one of the toxicological tests conducted on this crop, a
90-day feeding study involving rats, the rats reacted differently from the control rats
receiving normal feed.
Figure 19-1. Number of references per year incorporating specific risk keywords relating
to GM crops.

According to researchers running the study, these differences were not significant, nor of
the type to cause concern. The scientific panel on GMOs in EFSA came to a similar
conclusion, and EFSA recommended MON 863 for approval by EU politicians (EFSA,
2004b). In the course of the approval process, this recommendation was sent to the
national authorities of Member States, and at this point a French scientist expressed
doubts about whether it would be safe to approve the maize on the basis of these data.
Later the NGO Greenpeace asked to see the report with the data; they were denied access
as the report was confidential. Greenpeace did, however, obtain permission for a scientist
who was critical of GMOs to read and comment on the report. This scientist concluded
that the data could not be interpreted as showing that MON 863 would be a food hazard;
but neither could it be concluded that the maize was safe, and therefore additional
experiments were needed (Greenpeace, 2005). This case and the resulting controversy
raise several questions about the risk assessment. First, how could an expert panel
unanimously agree that the data gave no cause for genuine concern when several
scientists outside the panel became concerned? A plausible explanation is that the
panel members had similar outlooks from the beginning in order to be appointed to the
job. Second, the members may have influenced each other while collectively drawing
conclusions about the scientific data put forward. Third, the scientists disagreed about
the quantity of experimental data needed to make an informed decision. And fourth, the
case raised questions about the transparency of the process and the information on
which the decision was based; in particular, the critics were not allowed full access to
the report in question.

CONCERNS BEYOND RISK ASSESSMENT


Section 1 described the scientific-technical frameworks set up by various authorities to
protect human health and the environment. However, it is a second aim of regulation, in
general, to meet public concerns about uses of technology, and thus to ensure that the
public will trust that the authorities have the technological developments under control.
However, both large population surveys within the EU and focus group interviews in
Denmark make it clear that past regulatory approaches have not properly dealt with
people’s worries about GMOs.
The 2002 Eurobarometer survey showed that, in general, and after a decade of decline,
optimism about biotechnology had increased to levels last seen in the early 1990s. For
GM crops and food specifically, support seems to have stabilised across Europe between
1999 and 2002. However, in 2002 the majority of Europeans still did not support GM
foods. Such foods were not perceived to be useful and were felt to present risks to
society. This suggests that lack of usefulness is one of the main concerns for many
Europeans. In the following section, therefore, we try to unravel the underlying
arguments about usefulness.
Usefulness
A GM crop can be beneficial, or have a positive impact, in at least two distinct ways: by
being profitable for the producer, or by fulfilling important societal needs (Madsen et al.,
2003). Global figures show that 90 million ha were sown with GM crops in 2005, with
approximately 38% of these in the developing world. In view of this, it can hardly be
denied that GM crops benefit the farmers growing them in developed and developing
countries around the world. It has also been estimated that if these crops were grown in
the EU, there would be significant yield increases, savings for growers, and pesticide use
reductions. However, when members of the general public insist that GM crops must be
useful, they typically seem to have the second definition of usefulness in mind: that GM
crops must fulfil important societal needs. A GM crop can fulfil such needs in several
ways: 1) by giving us more healthy food,
2) by mitigating the environmental impact of agriculture,
3) by producing raw materials which at present require costly industrial processing, or
4) by improving the situation in developing countries and feeding a rising world
population.
HR crops have been developed chiefly for agronomic benefits, and therefore the
usefulness of these crops has not been obvious to the general public in Europe (Lassen et
al., 2002). The latest Eurobarometer survey asked respondents if they would buy GM
foods offering particular benefits. The most persuasive reason for buying a GM food
product was that it contained reduced pesticide residue (approximately 40% tended to
agree). This was followed by environmental benefit. Less than 25% of respondents would
buy GM food just because it was cheaper. The report comments that there may well be a
difference between a person's response as a citizen and as a consumer: if these crops
were actually on the market, more people would in practice probably buy a GM product
because it had a lower price (assuming it did). However, in connection with every one of
the benefits set out, the majority of respondents said that they would not buy GM food.
Other socioeconomic issues
In the developing world, GM crops may present a direct socioeconomic dilemma if they
are introduced without prior public acceptance in importing countries such as those in the
EU. This was realised in 2002 when Zambia and Zimbabwe rejected maize with GM
content as food aid. Zambia’s Vice President, Enoch Kavindele, explained to UN aid
workers that their decision to reject some of these foods was made in response to fears
that they would lose the European market if they started growing GM foods. In
Zimbabwe the government ended up grinding the maize grains, thus ensuring that
farmers could not use the maize as seed. In Zambia, however, it seems that local people later broke into the
stores and stole the GM maize. A more diffuse socioeconomic issue which has often
been discussed in developed countries concerns seed companies and the agrochemical
industry. Large businesses like these, which are regarded as having an invidious
association with (or as actually being) monopolies, are often perceived as the major
driving force in the development of HR crops. Many people resent developments towards
monopolisation. Instead they wish to protect the smaller plant-breeding companies, and
to secure influence over development at community level. Tied in with this attitude is
concern about the ‘patenting of life’, as it is often put. A patent
gives its holder exclusive controlling rights over an innovation for a substantial number
of years, while society gets access to the information in the patent for further research.
Society may benefit from investments being made within the area of biotechnology, but
many people are alarmed at the idea that private companies will have exclusive rights
over the utilisation of nature.
The consumer’s right to choose – co-existence
Another issue overlooked within the framework of risk analysis is the effect that the
cultivation of GM crops may have on consumer choice. Even with strict regulations
governing the cultivation and segregation of GM and non-GM crops, trace-levels of GM
material (via gene flow, the contamination of seed lots and so on) cannot be avoided for
some products; and some people perceive this as a violation of freedom of choice.

There is a specific issue here about impacts on organic production, since gene-flow from
GM crops may undermine the claim of an organic producer to be GM-free. This issue has
given rise to strong reactions from organic producers and consumer organisations.
Responding to this problem, the EU commission recommended in 2003 that Member
States issue guidelines on the development of national strategies and best practice to
ensure the co-existence of genetically modified crops with conventional and organic
farming. ‘Co-existence’ here refers to the ability of farmers to make a practical choice
between conventional, organic
and GM-crop production — a choice meeting the legal requirements for labelling and/or
purity standards. The conditions under which European farmers work are extremely
diverse. For this reason the Commission expressed a preference for an approach that
would leave it up to Member States to develop and implement management measures for
co-existence. The role of the Commission would include gathering and coordinating
relevant information based on on-going studies at community and national level, and
offering advice and issuing guidelines which may assist Member States in establishing
best practice for co-existence (Commission recommendation of 23 July 2003,
2003/556/EC). So far, only Germany, Denmark, Italy and five regions of Austria have
laws regulating GMO cultivation. The main Dutch farming organisations have reached a
voluntary agreement; another eight countries are drafting legislation, and in this process
Spain, Luxembourg, Portugal, Poland and the Czech Republic are most advanced (Smith,
2005). The Danish regulation, which was the first to be introduced, stipulates certain
kinds of crop cultivation and management practice. Thus growers of GM crops must
follow rules on the distance between fields grown with
GM crops and neighbouring conventional or organic fields; neighbours must be informed
if they have fields within a certain distance, which depends on the GM crop in question;
farmers must attend a course in the cultivation and management of GM crops; and
information about the whereabouts of these fields must be available to the public (Danish
law about growing of genetically modified crops, LOV nr 436 af 09/06/2004).

Other moral concerns


The issues presented centre on the consequences or impact of agricultural uses of gene
technology. However, it is clear that some people worry that the technology as such is
unnatural. In the 1999 Eurobarometer survey, the following two statements were
presented: “Even if GM food has advantages, it is fundamentally unnatural”, and “GM
foods threatens the natural order of things.” In response, 45% and 38%, respectively,
strongly agreed with these statements, and 27% and 31% somewhat agreed (INRA,
2000). This indicates that perceived naturalness is an important factor in the public’s
assessment of GM foods — and thus also in their assessment of GM plants. During a
series of Danish focus group interviews, issues connected with nature and naturalness
were spontaneously taken up within all of the groups, again suggesting that concerns
about the violation of nature play an important role in the discussions about genetic
engineering. To some people, the terms ‘nature’ and ‘naturalness’ appear to be
connected with serious moral concern about departures from what is natural. This refers
to the perception that nature itself embodies a guiding principle, or incorporates an
inherent order of things, that reaches beyond the influence of mankind. Midgley (2000)
has suggested that this perception may be grounded in our traditional understanding of
nature. In this understanding, each species is represented as having been carefully
optimized to fit its ecological niche through the process of natural selection. In myths,
moreover, transgressions across the boundaries of species have led to monsters. From
this perspective, gene technology violates the sanctity of species, and this admittedly
imprecise concept of sanctity is fundamental in our current understanding of the world.
The ‘natural order’ argument thus refers to a moral critique reaching beyond the
scientific evaluation of the risks associated with genetic modification. However, for
those making use of the “nature as safety mechanism” argument, the notion of
unnaturalness is clearly a proxy for criticism of, or doubt about, the effectiveness of
existing risk assessment procedures. They
are concerned about potential risks to the environment arising from the combination of
hereditary material moving across natural boundaries and the limits of scientific foresight
of long-term consequences (Madsen et al., 2002). These people appear to link concern
about GM crops being unnatural to risk issues. Moral questions can be difficult to
discuss and reach consensus on in society. In part this is because it is difficult to find
common ground on which to base the discussion. It is also because people balance such
concerns differently. Nevertheless, frameworks to achieve clarity and address value
questions have been formulated, and in the following section we present one such
approach.
Ethical criteria
In 2002 proposals about the overall argumentative framework within which the various
concerns could be balanced against each other were made in a Danish government report
on ethics and genetic engineering. This report was based on a debate book from the
BioTIK group, i.e. a group of experts from natural science, sociology and philosophy
brought together by the Minister of Economic and Business Affairs. The framework
consists of a list of four ‘ethical criteria’. These criteria may be interpreted in two ways:
either as necessary conditions to be fulfilled (criteria proper), or as a set of factors to be
considered in the risk assessment and decision process before a final decision is made.
The proposed criteria are:
(a) the technology should be employed for the economic and, most importantly, the
qualitative benefit of humans, society and nature;
(b) respect must be shown for the autonomy, dignity, integrity and vulnerability of living
beings;
(c) the burdens and benefits associated with the technology must be distributed fairly; and
(d) decisions to use the technology must be taken with openness and respect for the
individual human being's right to self-determination.
Neither this framework nor any other suggested approach to the handling of value
questions — e.g. Mepham (1996), Carr and Levidow (2000), Madsen et al. (2002) —
has yet been put into use, although ethical questions are gradually appearing within the
regulatory framework. Thus, for example, EU directive 2001/18/EC states that the
EU Commission must report annually on any ethical issues raised by GMOs and may
recommend amendments to the directive. However, a recommendation from
an ethical committee cannot stop, delay or change the procedure for approval of a GMO
and will, therefore, have a limited impact on the approval process of any specific GMO.

Three Mile Island and Chernobyl Case Studies

Chernobyl Case Studies:

RBMK reactor 4 at the Chernobyl Nuclear Power Plant was due to close temporarily for
routine maintenance on April 25, 1986. The personnel decided this would be the perfect
opportunity to run a particular test on the reactor. The test was to ensure that, during a
shutdown, enough electrical power would be available to run the emergency equipment
and the water cooling supply until the diesel generators came on. Here is the sequence of
events on April 25 and 26 which ended in the disaster.
The test started on the morning of April 25. Part of the test was to shut down the
emergency core cooling system (ECCS) so that it would not interfere with the test later on.
This shutdown of the ECCS was not a cause of the accident, although, had it not been shut
down, the severity of the accident might have been reduced. With the ECCS shut down,
the reactor was run on at half power. At about 23.00 on April 25 the power was reduced
further. The reason this happened so late in the day was that the grid controller had
requested the reactor operator to keep delivering electricity throughout working hours,
delaying the test.

Once the reduction of power had recommenced, the reactor should have been stabilised at
1000 MW before it was shut down, but an operational error made the power drop to about
30 MW, where the positive void coefficient (the effect of additional steam in the cooling
channels) became a problem. The operators did their best to redeem the situation by
withdrawing the control rods manually, and in this way they managed to stabilise the
reactor at 200 MW.
Shortly after that, the coolant flow increased and the steam pressure dropped, requiring the
operators to remove almost all the rods, making the reactor very unstable. There is a
minimum requirement of 20 rods inserted in the reactor at any time; probably only about
6 were left after the operators had finished removing them, although the automatic rods in
the reactor brought this number back towards 20. The operators also had to maintain the
steam pressure, which they managed by reducing the flow of feed water.
Cooling of the reactor became weaker and weaker because the coolant pumps, powered by
the turbine, slowed down as the turbine ran down. The positive void effect then took hold,
and the operators were unable to control the power surge.
The temperature increased rapidly, causing part of the fuel to rupture. Fuel particles then
got into the water and reacted with it, causing a steam explosion that destroyed the core
of the reactor; 2 minutes later a second explosion, due to expansion of fuel vapour,
caused further destruction. These two explosions lifted the pile cap, allowing air to enter
the reactor and react with the graphite moderator blocks, producing carbon monoxide
(CO). CO is a very flammable gas and ignited easily, causing a fire in the reactor.
About 8 of the 140 tonnes of fuel, containing plutonium and other highly radioactive
fission products, were released from the reactor, along with part of the graphite moderator,
which was also highly radioactive. Caesium and iodine vapours were also released, both
in the explosion and from the fire which burnt long afterwards.

Causes of the Accident:

There was not a single cause of this accident; several factors all contributed to it.
The accident happened while testing an RBMK reactor. A chain reaction occurred in the
reactor and got out of control, causing explosions and a huge fireball which blew off the
heavy concrete and steel lid of the reactor. The causes were:
1. Design fault in RBMK reactor
2. A violation of procedures
3. Breakdown of communication
4. Lack of a 'Safety Culture' in the power plant

1. Design fault in RBMK reactor


An RBMK reactor is a pressurised water reactor using a graphite moderator and a water
coolant, a combination found in no other type of reactor. What are the graphite moderator
and water coolant?
• The graphite moderator is a series of graphite blocks surrounding a number of
pressure tubes. It slows down the neutrons released during fission so that the
chain reaction can be sustained.
• The water coolant removes heat from the fission process so that the reactor does
not overheat.
This reactor is also unusual in that it was used for both plutonium and power production.
The problem with this reactor is that at low power it is unstable and may undergo a rapid
and uncontrollable power increase; other reactor types have built-in features to prevent
this instability. The design fault in the RBMK reactor was that excess steam pockets in
the cooling channels led to a power increase (the positive void coefficient). The extra
power produces more steam, which displaces the water that would otherwise absorb
neutrons, increasing the power still further. This feedback is self-sustaining and very hard
to stop once the chain reaction has started.
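The runaway feedback just described can be illustrated with a toy numerical sketch. The gains below are invented for illustration and bear no relation to real RBMK parameters:

```python
# Toy sketch of a positive void feedback loop (invented numbers, not
# real RBMK physics): more power -> more steam voids -> fewer neutrons
# absorbed by water -> added reactivity -> still more power.
def simulate(steps, void_gain=0.05, reactivity_per_void=0.4):
    power = 1.0          # relative power level
    history = [power]
    for _ in range(steps):
        void_fraction = void_gain * power            # voids grow with power
        reactivity = reactivity_per_void * void_fraction
        power *= 1.0 + reactivity                    # positive feedback
        history.append(power)
    return history

history = simulate(30)
```

Because the growth factor at each step itself depends on the current power, each step's increase is larger than the last: the surge accelerates rather than settling, which is why operators could not bring it back under control.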
2. A violation of procedures
While testing the reactor, a series of compulsory safety procedures should have been
followed. However, several of these procedures were disregarded by the technicians.
Firstly, during the test only 6-8 control rods were used, when a minimum of 20 rods is
required to retain control. Secondly, the emergency cooling system was disabled; this
was not itself a cause of the accident, but had it been available the severity of the accident
might have been decreased.
3. Breakdown of communication
During the test there was little or no proper exchange of information between the
people who were in charge of the test and the people carrying it out on the nuclear
reactor. This meant that when the technicians detected a fault they probably did not
report it immediately to their superiors, because they were not aware of its severity.
4. Lack of a 'Safety Culture' in the power plant
Because of this lack of safety culture, the people at the Chernobyl Nuclear Power Plant
were unable to remedy the design fault of the RBMK reactor, even though they were
aware of it before the accident. A secret USSR memorandum in the Russian archives
clearly suggests this.

Consequences:

1. Environmental consequences
The radioactive fallout caused radioactive material to be deposited over large areas of
ground. It has had an effect over most of the northern hemisphere in one way or another.
In some local ecosystems within a 6 mile (10 km) radius of the power plant, radiation
levels were lethally high, especially for small mammals such as mice and for coniferous
trees. Within 4 years of the accident nature had begun to restore itself, but the surviving
plants may carry lasting genetic damage.
2. Health effects
Firstly, there was a huge increase in thyroid cancer in Ukrainian children (from birth to
15 years old). From 1981-1985 there was an average of 4-6 patients per million, but
between 1986 and 1997 this increased to an average of 45 patients per million. It was also
established that 64% of thyroid cancer patients lived in the most contaminated areas of
Ukraine (Kiev province, Kiev city, and the provinces of Rovno, Zhitomir, Cherkassy and
Chernigov).
Thyroid cancer is cancer of the thyroid gland, a gland found near the larynx that
secretes growth and metabolism hormones.
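Taking the incidence figures quoted above at face value, the relative increase works out to roughly nine-fold:

```python
# Average thyroid cancer incidence in Ukrainian children, per million,
# taken from the figures quoted above.
before = 5.0    # midpoint of the 4-6 patients per million (1981-1985)
after = 45.0    # average patients per million (1986-1997)

relative_increase = after / before
print(relative_increase)  # -> 9.0
```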

There have also been increases in other cancers, mainly in the population living in the
most contaminated areas and the people who helped clean up the accident.
3. Psychological consequences
There has been an increase in psychological disorders such as anxiety, depression and
helplessness, and in other disorders which lead to mental stress. These disorders are not a
consequence of radiation itself, but of the stress of evacuation, the lack of
information given after the accident, and the knowledge that one's own health and one's
children's health could be affected.
4. Economic, political and social consequences
The worst contaminated areas declined economically, socially and politically: the
birth rate decreased and emigration rose substantially, causing a shortage in the labour
force. These areas could not develop industrially or agriculturally because of the strict
rules introduced for heavily contaminated land. The few products that were made proved
hard to sell or export, because buyers knew they came from Ukraine and were scared of
being affected; this caused a further economic decline. Socially, people's activities have
been restricted, making everyday life very difficult.
By 2000, however, conditions were looking considerably better and had started to
recover, and within about another 10 years life in Ukraine may be close to normal.

Three Mile Island: 1979


• In 1979 a cooling malfunction caused part of the core to melt in the # 2 reactor at
Three Mile Island in USA. The reactor was destroyed.
• Some radioactive gas was released a couple of days after the accident, but not
enough to cause any dose above background levels to local residents.
• There were no injuries or adverse health effects from the accident.
The Three Mile Island power station is near Harrisburg, Pennsylvania, in the USA. It had
two pressurised water reactors. Unit 1, of 800 MWe, entered service in 1974 and
remains one of the best-performing units in the USA. Unit 2, of 900 MWe, was almost
brand new.
The accident to unit 2 happened at 4 am on 28 March 1979 when the reactor was
operating at 97% power. It involved a relatively minor malfunction in the secondary
cooling circuit which caused the temperature in the primary coolant to rise. This in turn
caused the reactor to shut down automatically.
Shutdown took about one second. At this point a relief valve failed to close, but
instrumentation did not reveal the fact, and so much of the primary coolant drained away
that the residual decay heat in the reactor core was not removed. The core suffered severe
damage as a result. The operators were unable to diagnose or respond properly to the
unplanned automatic shutdown of the reactor. Deficient control room instrumentation and
inadequate emergency response training proved to be root causes of the accident.

The chain of events

Within seconds of the shutdown, the pilot-operated relief valve (PORV) on the reactor
cooling system opened, as it was supposed to. About 10 seconds later it should have
closed. But it remained open, leaking vital reactor coolant water to the reactor coolant
drain tank. The operators believed the relief valve had shut because instruments showed
them that a "close" signal was sent to the valve. However, they did not have an
instrument indicating the valve's actual position.

Responding to the loss of cooling water, high-pressure injection pumps automatically
pushed replacement water into the reactor system. As water and steam escaped through
the relief valve, cooling water surged into the pressuriser, raising the water level in it.
(The pressuriser is a tank which is part of the primary reactor cooling system,
maintaining proper pressure in the system. The relief valve is located on the pressuriser.
In a PWR like TMI-2, water in the primary cooling system around the core is kept under
very high pressure to keep it from boiling.)

Operators responded by reducing the flow of replacement water. Their training told them
that the pressuriser water level was the only dependable indication of the amount of
cooling water in the system. Because the pressuriser level was increasing, they thought
the reactor system was too full of water. Their training told them to do all they could to
keep the pressuriser from filling with water. If it filled, they could not control pressure in
the cooling system and it might rupture.

Steam then formed in the reactor primary cooling system. Pumping a mixture of steam
and water caused the reactor cooling pumps to vibrate. Because the severe vibrations
could have damaged the pumps and made them unusable, operators shut down the
pumps. This ended forced cooling of the reactor core. (The operators still believed the
system was nearly full of water because the pressuriser level remained high.) However,
as reactor coolant water boiled away, the reactor's fuel core was uncovered and became
even hotter. The fuel rods were damaged and released radioactive material into the
cooling water.

At 6:22 am operators closed a block valve between the relief valve and the pressuriser.
This action stopped the loss of coolant water through the relief valve. However,
superheated steam and gases blocked the flow of water through the core cooling system.

Throughout the morning, operators attempted to force more water into the reactor system
to condense steam bubbles that they believed were blocking the flow of cooling water.
During the afternoon, operators attempted to decrease the pressure in the reactor system
to allow a low pressure cooling system to be used and emergency water supplies to be put
into the system.

Cooling Restored

By late afternoon, operators began high-pressure injection of water into the reactor
cooling system to increase pressure and to collapse steam bubbles. By 7:50 pm on 28
March, they restored forced cooling of the reactor core when they were able to restart one
reactor coolant pump. They had condensed steam so that the pump could run without
severe vibrations.

Radioactive gases from the reactor cooling system built up in the makeup tank in the
auxiliary building. During March 29 and 30, operators used a system of pipes and
compressors to move the gas to waste gas decay tanks. The compressors leaked, and
some radioactive gas was released to the environment.

The Hydrogen Bubble

When the reactor's core was uncovered, on the morning of 28 March, a high-temperature
chemical reaction between water and the zircaloy metal tubes holding the nuclear fuel
pellets had created hydrogen gas. In the afternoon of 28 March, a sudden rise in reactor
building pressure shown by the control room instruments indicated a hydrogen burn had
occurred. Hydrogen gas also gathered at the top of the reactor vessel.
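The high-temperature reaction referred to above is the well-known oxidation of zirconium by steam, which is strongly exothermic and liberates hydrogen:

    Zr + 2 H2O -> ZrO2 + 2 H2

Each zirconium atom oxidised releases two molecules of hydrogen gas, which is how the "bubble" accumulated at the top of the reactor vessel.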

From 30 March through 1 April operators removed this hydrogen gas "bubble" by
periodically opening the vent valve on the reactor cooling system pressuriser. For a time,
regulatory (NRC) officials believed the hydrogen bubble could explode, though such an
explosion was never possible since there was not enough oxygen in the system.

Cold Shutdown

After an anxious month, on 27 April operators established natural convection circulation
of coolant. The reactor core was being cooled by the natural movement of water rather
than by mechanical pumping. The plant was in "cold shutdown".

Public concern and confusion

When the TMI-2 accident is recalled, it is often in the context of what happened on
Friday and Saturday, March 30-31. The height of the TMI-2 accident-induced fear, stress
and confusion came on those two days. The atmosphere then and the reasons for it are
described well in the book "Crisis Contained, The Department of Energy at Three Mile
Island," by Philip L Cantelon and Robert C. Williams, 1982. This is an official history of
the Department of Energy's role during the accident.

"Friday appears to have become a turning point in the history of the accident because of
two events: the sudden rise in reactor pressure shown by control room instruments on
Wednesday afternoon (the "hydrogen burn") which suggested a hydrogen explosion?
became known to the Nuclear Regulatory Commission [that day]; and the deliberate
venting of radioactive gases from the plant Friday morning which produced a reading of
1,200 millirems (12 mSv) directly above the stack of the auxiliary building.

"What made these significant was a series of misunderstandings caused, in part, by


problems of communication within various state and federal agencies. Because of
confused telephone conversations between people uninformed about the plant's status,
officials concluded that the 1,200 millirems (12 mSv) reading was an off-site reading.
They also believed that another hydrogen explosion was possible, that the Nuclear
Regulatory Commission had ordered evacuation and that a meltdown was conceivable.

"Garbled communications reported by the media generated a debate over evacuation.


Whether or not there were evacuation plans soon became academic. What happened on
Friday was not a planned evacuation but a weekend exodus based not on what was
actually happening at Three Mile Island but on what government officials and the media
imagined might happen. On Friday confused communications created the politics of
fear."

Throughout the book, Cantelon and Williams note that hundreds of environmental
samples were taken around TMI during the accident period by the Department of Energy
(which had the lead sampling role) or the then-Pennsylvania Department of
Environmental Resources. But there were no unusually high readings, except for noble
gases, and virtually no iodine. Readings were far below health limits. Yet a political
storm was raging based on confusion and misinformation.

No Radiological Health Effects

The TMI-2 accident caused concerns about the possibility of radiation-induced health
effects, principally cancer, in the area surrounding the plant. Because of those concerns,
the Pennsylvania Department of Health for 18 years maintained a registry of more than
30,000 people who lived within five miles of Three Mile Island at the time of the
accident. The state's registry was discontinued in mid 1997, without any evidence of
unusual health trends in the area.

Indeed, more than a dozen major, independent health studies of the accident showed no
evidence of any abnormal number of cancers around TMI years after the accident. The
only detectable effect was psychological stress during and shortly after the accident.

Case Studies:

The studies found that the radiation releases during the accident were minimal, well
below any levels that have been associated with health effects from radiation exposure.
The average radiation dose to people living within 10 miles of the plant was 0.08
millisieverts, with no more than 1 millisievert to any single individual. The level of 0.08
mSv is about equal to a chest X-ray, and 1 mSv is about a third of the average
background level of radiation received by U.S. residents in a year.
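The quoted accounts mix millirem and millisieverts, but the two units follow a fixed conversion: 1 mSv = 100 mrem. A minimal sketch of the conversion (the function names are illustrative, not from any standard library):

```python
def mrem_to_msv(mrem: float) -> float:
    """Convert a dose in millirem to millisieverts (1 mSv = 100 mrem)."""
    return mrem / 100.0

def msv_to_mrem(msv: float) -> float:
    """Convert a dose in millisieverts to millirem."""
    return msv * 100.0

# The stack reading quoted earlier: 1,200 mrem is 12 mSv.
print(mrem_to_msv(1200))   # 12.0
# The average dose within 10 miles of the plant: 0.08 mSv is about 8 mrem.
print(msv_to_mrem(0.08))
```

This is why the text can state the maximum offsite dose as "100 millirem (1 mSv)": the two figures are the same dose in different units.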

In June 1996, 17 years after the TMI-2 accident, Harrisburg U.S. District Court Judge
Sylvia Rambo dismissed a class action lawsuit alleging that the accident caused health
effects. The plaintiffs have appealed Judge Rambo's ruling. The appeal is before the U.S.
Third Circuit Court of Appeals. However, in making her decision, Judge Rambo cited:

• Findings that exposure patterns projected by computer models of the releases compared
so well with data from the TMI dosimeters (TLDs) available during the accident that the
dosimeters probably were adequate to measure the releases.

• That the maximum offsite dose was, possibly, 100 millirem (1 mSv), and that projected
fatal cancers were less than one.

• The plaintiffs' failure to prove their assertion that one or more unreported hydrogen
"blowouts" in the reactor system caused one or more unreported radiation "spikes",
producing a narrow yet highly concentrated plume of radioactive gases.

Judge Rambo concluded: "The parties to the instant action have had nearly two decades
to muster evidence in support of their respective cases.... The paucity of proof alleged in
support of Plaintiffs' case is manifest. The court has searched the record for any and all
evidence which construed in a light most favourable to Plaintiffs creates a genuine issue
of material fact warranting submission of their claims to a jury. This effort has been in
vain."

More than a dozen major, independent studies have assessed the radiation releases and
possible effects on the people and the environment around TMI since the 1979 accident at
TMI-2. The most recent was a 13-year study on 32,000 people. None has found any
adverse health effects such as cancers which might be linked to the accident.

Increased safety & reliability

Disciplines in training, operations and event reporting that grew from the lessons of the
TMI-2 accident have made the nuclear power industry demonstrably safer and more
reliable. Those trends have been both promoted and tracked by the Institute for Nuclear
Power Operations (INPO). To remain in good standing, a nuclear plant must meet the
high standards set by INPO as well as the strict regulation of the US Nuclear Regulatory
Commission.

A key indicator is the graph of significant plant events, based on data compiled by the
Nuclear Regulatory Commission. The number of significant events decreased from 2.38
per reactor unit in 1985 to 0.10 at the end of 1997.

On the reliability front, the median capability factor for nuclear plants - the percentage of
maximum energy that a plant is capable of generating - increased from 62.7 percent in
1980 to almost 90 percent in 2000. (The goal for the year 2000 was 87 percent.)
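The capability factor quoted above is a straightforward ratio of energy actually generated to the maximum possible over the period. A minimal sketch of the calculation (the plant figures below are hypothetical, chosen only to illustrate a 90 percent result, and are not TMI data):

```python
def capability_factor(energy_generated_mwh: float,
                      rated_power_mw: float,
                      hours: float) -> float:
    """Percentage of the maximum possible energy a plant actually generated.

    Maximum possible energy = rated power x hours in the period.
    """
    max_possible_mwh = rated_power_mw * hours
    return 100.0 * energy_generated_mwh / max_possible_mwh

# A hypothetical 1,000 MW plant over a full 8,760-hour year:
# generating 7,884,000 MWh gives a 90 percent capability factor.
print(capability_factor(7_884_000, 1_000, 8_760))  # 90.0
```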
Other indicators for US plants tracked by INPO and its world counterpart, the World
Association of Nuclear Operators (WANO) are the unplanned capability loss factor,
unplanned automatic scrams, safety system performance, thermal performance, fuel
reliability, chemistry performance, collective radiation exposure, volume of solid
radioactive waste and industrial safety accident rate. All have been reduced, that is,
improved, substantially since 1980.

Summary of the Three Mile Island case study:

• The reactor's fuel core became uncovered and more than one third of the fuel
melted.
• Inadequate instrumentation and training programs at the time hampered operators'
ability to respond to the accident.
• The accident was accompanied by communications problems that led to
conflicting information available to the public, contributing to the public's fears.
• Radiation was released from the plant. The releases were not serious and were not
health hazards. This was confirmed by thousands of environmental and other
samples and measurements taken during the accident.
• The containment building worked as designed. Despite melting of about one-third
of the fuel core, the reactor vessel itself maintained its integrity and contained the
damaged fuel.

Longer-Term Impacts:

• Applying the accident's lessons produced important, continuing improvement in
the performance of all nuclear power plants.
• The accident fostered better understanding of fuel melting, including the
improbability of a "China Syndrome" meltdown breaching the reactor vessel or
the containment building.
• Public confidence in nuclear energy, particularly in the USA, declined sharply
following the TMI accident. It was a major cause of the decline in nuclear
construction through the 1980s and 1990s.

Collegiality and loyalty

Colleagues are those explicitly united in a common purpose and respecting each other's
abilities to work toward that purpose. A colleague is an associate in a profession or in a
civil or ecclesiastical office.
Thus, the word collegiality can connote respect for another's commitment to the common
purpose and ability to work toward it. In a narrower sense, members of the faculty of a
university or college are each other's colleagues; very often the word is taken to mean
that. Sometimes colleague is taken to mean a fellow member of the same profession. The
word college is sometimes construed broadly to mean a group of colleagues united in a
common purpose, and used in proper names, such as Electoral College, College of
Cardinals, College of Pontiffs.
Sociologists of organizations use the word collegiality in a technical sense, to create a
contrast with the concept of bureaucracy. Classical authors such as Max Weber consider
collegiality as an organizational device used by autocrats to prevent experts and
professionals from challenging monocratic and sometimes arbitrary powers. More
recently, authors such as Eliot Freidson (USA), Malcolm Waters (Australia) and
Emmanuel Lazega (France) have shown that collegiality can now be understood as a
full-fledged organizational form. This is especially useful to account for coordination in
knowledge-intensive organizations in which interdependent members jointly perform
non-routine tasks - an increasingly frequent form of coordination in knowledge economies. A
specific social discipline comes attached to this organizational form, a discipline
described in terms of niche seeking, status competition, lateral control, and power among
peers in corporate law partnerships, in dioceses, in scientific laboratories, etc. This view
of collegiality is obviously very different from the ideology of collegiality stressing
mainly trust and sharing in the collegium.
Loyalty evolved as devotion for one's family, gene-group and friends. Loyalty comes
most naturally amongst small groups or tribes where the prospect of the whole casting out
the individual seems like the ultimate, unthinkable rejection. Loyalty to tribes evolved
from the evolutionary tactic that there is a greater chance of survival and procreation if
animals form packs/tribes.
In a feudal society, centered on personal bonds of mutual obligation, accounting for
precise degrees of protection and fellowship can prove difficult. Loyalty in these
circumstances can become a matter of extremes: alternative groups may exist, but lack of
mobility will engender a personal sense of loyalty.
The rise of states (and later nation states) meant the harnessing of the "loyalty" concept to
foster allegiance to the sovereign or established government of one's country, as well as
personal devotion and reverence to the sovereign and royal family.
Wars of religion and their interminglings with wars of states have seen loyalty used in
religious senses too, involving faithful support of a chosen or traditional set of beliefs or
of sports representatives. And in modern times marketing has postulated loyalties to
abstract concepts such as the brand. Customer churn has become the opposite of loyalty,
just as high treason once stood as the opposite of the same idea (compare the loyalty card).
Loyalty is also seen in business in a variety of ways. As governments have grown in size
and scope, some people are more loyal to a company than to a country. As
corporation complexity has grown, people have shifted their loyalties to individuals
rather than companies. As those individuals move between companies, they often take
other people with them. Stock options are one method devised to keep people loyal to a
company.

All of us are aware, out of our own experience, that it is not easy to be obedient to
authority. It appears that there is an innate desire on the part of all of us to be independent
and to have our own way. This seems to be an inborn, natural, universal tendency of man.
Even our children do not like to be told when to get up, how to dress, what to eat, what to
do, and where to go. Any superimposed authority from outside ourselves is likely to be
met with resentment. I sometimes think of the human race as like a herd of wild horses,
running free on the open range of our western mountain country. Occasionally we have
seen pictures of wild horses enjoying the freedom of the open range. Then we have seen
men ensnare some of these horses and put them in corrals. Imagine if you can a
particularly handsome young stallion fenced in for the first time in his life. He stamps the
ground; he snorts and whinnies; he rears on his hind legs. He resents the loss of his
freedom and is ready to fight anything that comes near him. Then, watch as a skilled
trainer over a period of days and weeks struggles to get a bridle in his mouth. Then, later
on, a saddle on his back. It does not come easily, but eventually the wild range horse is
brought under control and tamed. Now, and this is the point, he is ready to do some work
that is worthwhile. Until his wild, restless spirit is tamed and his powerful strength is
harnessed he accomplishes nothing. Only when he is brought under control does he do
any worthwhile work. The same is true with the proud rebellious spirit of man.
Similarly, while it is always possible to produce unwanted or unethical results using
GMOs, we should restrain ourselves, obey the authority, and follow the prescribed
guidelines so that others get the maximum benefit.

Collective bargaining
Collective bargaining is the process whereby workers organize collectively and bargain
with employers regarding the workplace. In various national labor and employment law
contexts collective bargaining takes on a more specific legal meaning. In a broad sense,
however, it is the coming together of workers to negotiate their employment.
A collective agreement is a labor contract between an employer and one or more unions.
Collective bargaining consists of the process of negotiation between representatives of a
union and employers (represented by management, in some countries by employers'
organization) in respect of the terms and conditions of employment of employees, such as
wages, hours of work, working conditions and grievance-procedures, and about the rights
and responsibilities of trade unions. The parties often refer to the result of the negotiation
as a Collective Bargaining Agreement (CBA) or as a Collective Employment Agreement
(CEA).
Theories
A number of theories – from the fields of industrial relations, economics, political
science, history and sociology (as well as the writings of activists, workers and labor
organizations) – have attempted to define and explain collective bargaining.
One theory suggests that collective bargaining is a human right and thus deserving of
legal protection. Article 23 of the Universal Declaration of Human Rights identifies the
ability to organise trade unions as a fundamental human right. Item 2(a) of the
International Labor Organization's Declaration on Fundamental Principles and Rights at
Work defines the "freedom of association and the effective recognition of the right to
collective bargaining" as an essential right of workers.
In June 2007 the Supreme Court of Canada extensively reviewed the rationale for
considering collective bargaining to be a human right. In the case of Facilities Subsector
Bargaining Assn. v. British Columbia, the Court made the following observations:
• The right to bargain collectively with an employer enhances the human dignity,
liberty and autonomy of workers by giving them the opportunity to influence the
establishment of workplace rules and thereby gain some control over a major
aspect of their lives, namely their work.
• Collective bargaining is not simply an instrument for pursuing external ends…
rather [it] is intrinsically valuable as an experience in self-government.
• Collective bargaining permits workers to achieve a form of workplace democracy
and to ensure the rule of law in the workplace. Workers gain a voice to influence
the establishment of rules that control a major aspect of their lives.
Economic theories also provide a number of models intended to explain some aspects of
collective bargaining. The first is the so-called Monopoly Union Model (Dunlop, 1944),
according to which the monopoly union has the power to maximise the wage rate; the
firm then chooses the level of employment. This model has largely been abandoned in the
recent literature. The second is the Right-to-Manage model, developed by the British school
during the 1980s (Nickell). In this model, the labour union and the firm bargain over the
wage rate according to a typical Nash bargaining maximand (written as Ω = U^β Π^(1-β), where
U is the utility function of the labour union, Π is the profit of the firm, and β represents the
bargaining power of the labour union). The third model is called efficient bargaining
(McDonald and Solow, 1981), where the union and the firm bargain over both wages and
employment (or, more realistically, hours of work).
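The right-to-manage objective can be written out more cleanly. A sketch in standard notation, using the symbols defined above (the interpretation of the limiting cases of β is a standard textbook observation, stated here only for orientation):

```latex
\max_{w}\;\; \Omega(w) \;=\; U(w)^{\beta}\,\Pi(w)^{1-\beta},
\qquad 0 \le \beta \le 1
```

Here β = 1 recovers the monopoly-union case (the union effectively sets the wage, and the firm then chooses employment from its labour-demand curve), while β = 0 leaves wage-setting entirely to the firm.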

Confidentiality
Confidentiality has been defined by the International Organization for Standardization
(ISO) as "ensuring that information is accessible only to those authorized to have access"
and is one of the cornerstones of Information security. Confidentiality is one of the
design goals for many cryptosystems, made possible in practice by the techniques of
modern cryptography.
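As an illustration of confidentiality as a cryptographic design goal, here is a minimal one-time-pad sketch using only Python's standard library. This is a teaching toy, not production cryptography; real systems use vetted ciphers such as AES, and the function names below are illustrative:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR each plaintext byte with a fresh random key byte.

    Without the key, the ciphertext reveals nothing except the message length,
    which is the confidentiality property in its purest form.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Recover the plaintext by XOR-ing the ciphertext with the same key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"authorized eyes only")
assert otp_decrypt(key, ct) == b"authorized eyes only"
```

Only a party holding the key (i.e., someone "authorized to have access") can recover the message, which is exactly the ISO definition quoted above expressed mechanically.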
Confidentiality also refers to an ethical principle associated with several professions (eg,
medicine, law, religion, professional psychology, journalism, and others). In ethics, and
(in some places) in law and alternative forms of legal dispute resolution such as
mediation, some types of communication between a person and one of these
professionals are "privileged" and may not be discussed or divulged to third parties. In
those jurisdictions in which the law makes provision for such confidentiality, there are
usually penalties for its violation.
Confidentiality of information, enforced in an adaptation of the military's classic "need-
to-know" principle, forms the cornerstone of information security in today's corporations.
The so-called 'confidentiality bubble' restricts information flows, with both positive and
negative consequences.
Legal confidentiality
Lawyers are often required by law to keep confidential anything pertaining to the
representation of a client. The duty of confidentiality is much broader than the attorney-
client evidentiary privilege, which only covers communications between the attorney and
the client.
Both the privilege and the duty serve the purpose of encouraging clients to speak frankly
about their cases. This way, lawyers will be able to carry out their duty to provide clients
with zealous representation. Otherwise, the opposing side may be able to surprise the
lawyer in court with something which he did not know about his client, which makes
both lawyer and client look stupid. Also, a distrustful client might hide a relevant fact
which he thinks is incriminating, but which a skilled lawyer could turn to the client's
advantage (for example, by raising affirmative defenses like self-defense).
However, most jurisdictions have exceptions for situations where the lawyer has reason
to believe that the client may kill or seriously injure someone, may cause substantial
injury to the financial interest or property of another, or is using (or seeking to use) the
lawyer's services to perpetrate a crime or fraud.
In such situations the lawyer has the discretion, but not the obligation, to disclose
information designed to prevent the planned action. Most states have a version of this
discretionary disclosure rule under Rules of Professional Conduct, Rule 1.6 (or its
equivalent).
A few jurisdictions have made this traditionally discretionary duty mandatory. For
example, see the New Jersey and Virginia Rules of Professional Conduct, Rule 1.6. In
some jurisdictions the lawyer must try to convince the client to conform his or her
conduct to the boundaries of the law before disclosing any otherwise confidential
information.
Note that these exceptions generally do not cover crimes that have already occurred, even
in extreme cases where murderers have confessed the location of missing bodies to their
lawyers but the police are still looking for those bodies. The U.S. Supreme Court and
many state supreme courts have affirmed the right of a lawyer to withhold information in
such situations. Otherwise, it would be impossible for any criminal defendant to obtain a
zealous defense.
California is famous for having one of the strongest duties of confidentiality in the world;
its lawyers must protect client confidences at "every peril to himself or herself." Until an
amendment in 2004, California lawyers could not breach their duty even if they knew
that a client was about to commit murder.
Recent legislation in the UK curtails the confidentiality professionals like lawyers and
accountants can maintain at the expense of the state. Accountants, for example, are
required to disclose to the state any suspicions of fraudulent accounting and, even, the
legitimate use of tax saving schemes if those schemes are not already known to the tax
authorities.
Clinical psychology
The ethical principle of confidentiality requires that information shared by the client with
the therapist in the course of treatment is not shared with others. This is important for the
therapeutic alliance, as it promotes an environment of trust. However, there are important
exceptions to confidentiality, namely where it conflicts with the clinician's duty to warn
or duty to protect. This includes instances of suicidal or homicidal ideation, child abuse,
elder abuse and dependent adult abuse.
Conflicts of Interest
A conflict of interest is a situation in which someone in a position of trust, such as a
lawyer, insurance adjuster, a politician, executive or director of a corporation or a
medical research scientist or physician, has competing professional or personal interests.
Such competing interests can make it difficult to fulfill his or her duties impartially. A
conflict of interest exists even if no unethical or improper act results from it. A conflict of
interest can create an appearance of impropriety that can undermine confidence in the
person, profession, or court system. A conflict can be mitigated by third party verification
or third party evaluation noted below—but it still exists.
However, conflicts of interest do not only apply to professionals. A conflict of interest
arises when anyone has two duties which conflict - for example an employee's duty to
well and faithfully perform their work as purchasing manager, and that same employee's
familial duty to their sibling who happens to be tendering for the sale of widgets to the
employee's employer. In that case the employee has a conflict of interest, despite the fact
that they are not a lawyer, doctor, politician etc.
Conflict of Interest generally (unrelated to the practice of law)
More generally, conflict of interest can be defined as any situation in which an individual
or corporation (either private or governmental) is in a position to exploit a professional or
official capacity in some way for their personal or corporate benefit.
Depending upon the law or rules related to a particular organization, the existence of a
conflict of interest may not, in and of itself, be evidence of wrongdoing. In fact, for many
professionals, it is virtually impossible to avoid having conflicts of interest from time to
time. A conflict of interest can, however, become a legal matter for example when an
individual tries (and/or succeeds in) influencing the outcome of a decision, for personal
benefit. A director or executive of a corporation will be subject to legal liability if a
conflict of interest breaches their Duty of Loyalty.
There often is confusion over these two situations. Someone accused of a conflict of
interest may deny that a conflict exists because he/she did not act improperly. In fact, a
conflict of interest does exist even if there are no improper acts as a result of it. (One way
to understand this is to use the term "conflict of roles". A person with two roles - an
individual who owns stock and is also a government official, for example - may
experience situations where those two roles conflict. The conflict can be mitigated - see
below - but it still exists. In and of itself, having two roles is not illegal, but the differing
roles will certainly provide an incentive for improper acts in some circumstances.)
Types of conflicts of interests
The following are the most common forms of conflicts of interest:
• Self-dealing, in which public and private interests collide, for example issues
involving privately held business interests.
• Outside employment, in which the interests of one job contradict those of another.
• Family interests, in which a spouse, child, or other close relative is employed (or
applies for employment) or where goods or services are purchased from such a
relative or a firm controlled by a relative. For this reason, many employment
applications ask if one is related to a current employee. If this is the case, the
relative could then recuse from any hiring decisions.
• Gifts from friends who also do business with the person receiving the gifts. (Such
gifts may include non-tangible things of value such as transportation and lodging.)
• Pump and dump, in which a stockbroker (from a boiler room down the street to a big
broker uptown) who owns a security artificially inflates its price by "upgrading" it or
spreading rumors, sells the security and adds a short position, then "downgrades" the
security or spreads negative rumors to push the price down.
Other improper acts that are sometimes classified as conflicts of interest are probably
better classified otherwise. Accepting bribes can be classified as corruption; almost
everyone in a position of authority, particularly public authority, has the potential for
such wrongdoing. Similarly, use of government or corporate property or assets for
personal use is fraud, and classifying this as a conflict of interest does not improve the
analysis of this problem. Nor should unauthorized distribution of confidential
information, in itself, be considered conflict of interest. For these improper acts, there is
no inherent conflict of roles (see above), unless being a (fallible) human being rather than
(say) a robot in a position of power or authority is considered to be a conflict.
Codes of ethics
Generally, codes of ethics forbid conflicts of interest. Often, however, the specifics can
be controversial. Should therapists, such as psychiatrists, be allowed to have
extraprofessional relations with patients? Ex-patients? Should a faculty member be
allowed to have an extraprofessional relationship with a student, and should that depend
on whether the student is in a class of, or being advised by, the faculty member?
Codes of ethics help to minimize problems with conflicts of interest because they can
spell out the extent to which such conflicts should be avoided, and what the parties
should do where such conflicts are permitted by a code of ethics (disclosure, recusal,
etc.). Thus, professionals cannot claim that they were unaware that their improper
behavior was unethical. As importantly, the threat of disciplinary action (for example, a
lawyer being disbarred) helps to minimize unacceptable conflicts or improper acts when a
conflict is unavoidable.
As codes of ethics cannot cover all situations, some governments, e.g., Canada, have
established an office of the ethics commissioner. The ethics commissioner should be
appointed by the legislature and should report to the legislature.

Occupational Crime
Occupational crime is crime that is committed through opportunity created in the course
of legal occupation. Thefts of company property, vandalism, the misuse of information
and many other activities come under the rubric of occupational crime. The concept of
occupational crime - as one of the principal forms of white collar crime - has been quite
familiar and widely invoked since the publication of Clinard and Quinney's influential
Criminal Behavior Systems: A Typology. More recently, however, the term occupational
crime has been applied to activities quite removed from the original meaning of white
collar crime, and it has been used interchangeably with such terms as occupational
deviance and workplace crime. In the interest of greater conceptual clarity within the
field of white collar crime the argument is made here for restricting the term
'occupational crime' to illegal and unethical activities committed for individual financial
gain - or to avoid financial loss - in the context of a legitimate occupation. The term
'occupational deviance' is better reserved for deviation from occupational norms (e.g.
drinking on the job; sexual harassment), and the term 'workplace crime' is better reserved
for conventional forms of crime committed in the workplace (e.g. rape; assault). The
conceptual conflation of fundamentally dissimilar activities hinders theoretical, empirical,
and policy-related progress in the field of white collar crime studies.

Professional rights
The U.S. media and state and federal policymakers have devoted a great deal of attention
this year to the issue of pharmacists refusing to dispense emergency contraception and
other prescription contraceptives. Little about this issue is, in fact, new; policymakers
have engaged for decades in an ever-broadening debate over whether and in what
circumstances individuals or institutions involved in the provision of health care or
related services can refuse to provide services or information on moral or religious
grounds. What has often been absent from this debate over providers' rights has been any
serious discussion about providers’ responsibilities—to their patients, colleagues,
employers and the public. Some of these obligations are encoded in law; perhaps more
importantly, they are enshrined in professional codes of ethics that define what it means
to be a health care professional and supplemented by individual professional
associations’ policy statements on various issues.
The Values at Stake
Although different associations and professions frame the issues differently, core values
that are generally agreed upon across health care professions and in the field of bioethics
underlie the rights and the responsibilities of all health care providers:
• Beneficence requires the provider to act in the best interest of the patient and her
welfare and is closely related to nonmaleficence, the basic obligation to do no harm.
• Justice underlies the principle of nondiscrimination and the obligation of health care
providers to work for the public good.
• Respect for autonomy leads to such principles as informed consent and confidentiality,
as well as respect for the decisions of colleagues. These core values have been translated
into more specific ethical principles by numerous professional associations. Such
guidelines are necessary in part because these values can at times conflict or appear to
point in different directions. In the absence of respect for autonomy, for instance,
beneficence can easily turn into paternalism in the hands of a highly trained health care
provider caring for patients with inferior knowledge. And, while the International Code
of Medical Ethics of the World Medical Association (WMA) asserts that “a physician
shall always bear in mind the obligation of preserving human life,” in a separate
declaration on abortion, the WMA discusses how the “diversity of attitudes towards the
life of the unborn child” can lead to differences in how to interpret this obligation.
Professional standards help to mediate these differences. Despite the complexities of
balancing these values, the professional medical associations have been remarkably
consistent when it comes to the concept of refusal. In essence, professional standards
typically endorse a provider’s right to step away, or “withdraw,” from providing a health
care service that violates his or her moral or religious beliefs. At the same time, these
standards make clear that there must be limits to this right in order to ensure that patients
receive the information, services and dignity to which they are entitled. Although not
always spelled out in one place or in every association’s guidelines, this balancing leads
to several clear obligations, including that:
• providers must impart full, accurate and unbiased information so patients can make
informed decisions about their health care;
• patients must always have access to services in emergency circumstances;
• providers must not abandon patients but instead must refer them to another provider
willing and ready to take over care; and
• providers seeking to “step away” must give adequate and timely notice to patients,
employers and others who will be affected by their doing so.

It should come as no surprise that many of the most detailed standards and policy statements about refusal
focus on abortion, contraception and other forms of reproductive health care, along with
end-of-life care. These services have often generated controversy among policymakers
and the general public. The professional associations have made their position clear,
however: A health care
provider’s moral or religious beliefs cannot justify attempts to override a patient’s
autonomy. The right to withdraw from services cannot be used as a pretext for blocking
or denying patients’ own rights to care.
Employee Rights

IPR discrimination
The international environment with respect to intellectual property has changed
considerably with the conclusion of the TRIPs Agreement. The TRIPs Agreement
accommodates the demands of the industrialized countries for higher international
standards of protection by mandating the extension of patentability to virtually all fields
of technology recognized in developed country patent systems, by prolonging the patent
protection for a uniform term of twenty years, and by providing legal recognition of the
patentee’s exclusive rights to import the patented products. The patent rights are
enjoyable without discrimination as to the place of invention, the field of technology and
whether products are imported or locally produced. All the signatories to the trade
negotiations are, therefore, obliged to adhere to the minimum standards prescribed by
TRIPs Agreement and to provide product patents for pharmaceuticals and chemicals. The
coverage of the patent protection has also been expanded by the provision for patents on
micro-organisms and protection of plant varieties either by patents or by an effective sui
generis system or by any combination thereof. The full implementation of the TRIPs
Agreement is likely to have an important bearing on the patterns of development in
developing countries. In what follows we briefly review some of the important
dimensions of these effects.

a) Local Technological Capability Building


The strengthening and harmonization of IPR regimes worldwide has considerable
implications for the process of acquisition of local technological capability by developing
countries. The provision of product patents on chemical and pharmaceutical products, for
instance, would adversely affect the process of innovative activity of the developing
country enterprises in the manufacture of chemicals covered by patents. The development
of new chemical compounds is generally beyond the capability of most developing
country enterprises in view of the huge resources involved. Therefore, they focus
attention on process innovations for the known chemicals and bulk drugs. This imitative
duplication or reverse engineering activity is an important source of learning in
developing countries. Indeed, most industrialized countries of today and newly
industrialized countries encouraged local learning through soft patent laws and the
absence of product patents in chemicals in the early stages of their development as
highlighted earlier. This means that the poorer countries of today will not be able to benefit from an important source of total factor productivity growth (viz. the absorption of spillovers from foreign inventions) that was available to the countries that have already developed. In that respect the TRIPs Agreement is highly inequitable, and the probability of a stronger IPR regime encouraging innovative activity in developing countries is very small.
b) Industrialization, Technology Transfers and Trade
Recent trends suggest a reversal of the growing importance of arm’s length licensing as a mode of technology transfer, as MNEs increasingly prefer to internalize technology transactions. The strengthening of the IPR regime may further limit developing country enterprises’ access to technology. Kim (1997) provides a number of examples of Korean
corporations being denied technology licenses by patent holders in the Western world
forcing them to reverse engineer the products. A number of local enterprises in
developing countries will come under pressure to close down or form alliances with
larger firms, resulting in a concentration of the industry [World Bank 2002:137].
Dependence on imports may go up. Maskus and Penubarti (1997), for instance, find that
TRIPs could affect import volumes significantly; e.g. in Mexico, the anticipated rise in
manufactured imports could be of the magnitude of $ 6.3 billion, amounting to 9.4 per
cent of its real manufactured imports in 1995 (as cited in World Bank 2002:132).
c) Prices of Medicines and Loss of Consumer Welfare
A number of studies have examined the effect on prices of medicines after the introduction of product patents and have simulated the welfare losses for consumers in developing countries. It is widely believed that drug prices will rise upon the introduction of product patents, as happened in China, which introduced them in 1993. Nogues (1993) finds the welfare losses to six developing countries (Argentina, Brazil, India, Mexico, Korea and Taiwan) from the introduction of product patents to be between US$ 3.5 billion and $ 10.8 billion, depending upon the assumptions. The gains to the patent owners from such introduction would range between $ 2.9 billion and $ 14.4 billion. The welfare loss to India could be between $ 1.4 billion and $ 4.2 billion in a year. Watal (2000) simulates the likely increase in pharmaceutical prices and decrease in welfare in India with the introduction of product patents in 22 existing pharmaceutical products and finds that the weighted mean drug price in India could increase by between 26 per cent (for a linear demand function) and 242 per cent (with a constant elasticity-type demand function). An earlier study by Subramanian (1994) had found a maximum price increase of 67 per cent for India following the introduction of product patents. Fink (2000) finds the range of price increase to be between 182 and 225 per cent.

This suggests that the introduction of product patents would affect the prices of medicines significantly and, unless new drugs are more efficient, there will be a decline in the health levels of the population (May 2000). The recent case of huge differences between the prices of HIV/AIDS drugs sold by patent holders in South Africa and their generic substitutes provides further evidence of the potential for price increases following the introduction of product patents. It may be argued that the vast majority of drugs are out of patent protection and hence will not be affected. Yet the AIDS drugs controversy shows that effective treatment for many of the scourges of the day, such as cancer, cardiac failures and renal problems, may be affected.

Given the near-complete domination of developed countries in technology generation, as evident from their 95 per cent ownership of US patents (Table 1), the strengthening and harmonization of the IPR regime will lead to a substantial increase in the flow of royalties and license fees from developing countries to developed countries. McCalman (1999) quantifies the impact of patent harmonization and finds that it has the capacity to generate large transfers of income between countries, with the US being the major beneficiary. World Bank (2002: Table 5.1) updates the computations of McCalman and suggests that the net patent rents derived by the US for the year 2000 (in current US$) could add up to over $ 19 billion, by Germany to $ 6.7 billion, and by Japan to $ 5.7 billion. Among the developing countries, China could see an outflow of patent rents of the order of $ 5.1 billion, India $ 903 million, and Israel $ 3.8 billion.

Furthermore, the extension of IPRs to plant varieties could further increase the outgo of royalties for the breeder lines of the seed companies, even though the basic raw material for the development of these varieties, viz. the genetic diversity that is largely found in developing countries and is based on the work of generations of farmers in those countries, is generally available to the companies free.
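The wide spread between Watal’s two estimates above (26 versus 242 per cent) comes almost entirely from the assumed demand function. The following is a minimal sketch of why this happens, using an illustrative elasticity value chosen here for exposition (it is not Watal’s calibration): a monopolist facing constant-elasticity demand marks up far more than one facing linear demand, even when both demand curves have the same elasticity at the competitive price.

```python
# Monopoly markup over a competitive price c under two demand forms,
# both calibrated to the same price elasticity eps at p = c.
# Illustrative only: eps below is a hypothetical value, not a calibration
# from any of the studies cited in the text.

def markup_constant_elasticity(eps: float) -> float:
    """Demand q = A * p**(-eps): monopoly price is c * eps / (eps - 1)."""
    return eps / (eps - 1.0)

def markup_linear(eps: float) -> float:
    """Demand q = a - b*p with elasticity eps at p = c:
    monopoly price works out to c * (1 + 2*eps) / (2*eps)."""
    return (1.0 + 2.0 * eps) / (2.0 * eps)

if __name__ == "__main__":
    eps = 1.5  # hypothetical demand elasticity at the competitive price
    for name, m in [("constant elasticity", markup_constant_elasticity(eps)),
                    ("linear", markup_linear(eps))]:
        print(f"{name}: price rises by {100 * (m - 1):.0f} per cent")
```

With the same elasticity of 1.5 at the competitive price, the sketch gives a 33 per cent rise under linear demand but a 200 per cent rise under constant-elasticity demand, mirroring the order-of-magnitude gap between the simulated price increases reported above.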
e) Impact on Global Technological Activity and Availability of Drugs
One of the arguments in favour of a stronger IPR regime is based on the premise that
expenditures on R&D were significantly determined by appropriability conditions.
Hence, ensuring adequate appropriability with more stringent IPR protection was deemed
to be a necessary condition for sustaining the pace of innovation in the global economy.
The empirical literature, however, does not support this presumption as patent protection
was found to be instrumental for only a small proportion of innovations. On the other
hand, studies show the spillover effects of other firms’ R&D activity to be far more important than appropriability in inducing firms to undertake R&D. The R&D outputs of other firms form valuable inputs for these firms’ own R&D efforts. Hence, the tightening of IPRs is likely to affect innovative activity adversely by stifling these spillovers. It is therefore by no means clear that the strengthening of IPRs will increase innovative activity even in the developed world, especially for solving the problems and diseases faced by developing countries. Furthermore, the research priorities of MNEs are determined by purchasing power, and very little R&D is currently done on tropical diseases (World Bank 2002). Unless some steps are taken by the international community, such as those discussed in the recent report of WHO’s Commission on Macroeconomics and Health (CMH), this pattern is unlikely to change significantly in the future.

Concluding Remarks and Issues for National and International Action


The preceding discussion suggests that the ongoing strengthening and harmonization of the IPR regime is going to affect the development of poorer countries in a significant manner by choking off an important contributor to growth that has been variously described as imitative duplication, reverse engineering or knowledge spillovers from abroad. It is also likely to raise the prices of a large number of important drugs and thus affect the health systems of poorer countries, and it would lead to income transfers from poorer to richer countries. It is likely to affect manufacturing activity in developing countries adversely and may increase their imports, but it does not guarantee increased FDI inflows, access to technology or R&D investments in tropical diseases. These challenges require a response at the national policy level as well as from the international community. In what follows, we outline some of the policy responses that could help moderate the adverse effects of the TRIPs Agreement on developing countries.

GIST to Remember:
• Objectives of safety and risk
• Microbiological risk assessment
• Infective microorganisms by risk group
• Biosafety considerations for biological expression systems
• Transgenic Plants and Risk assessment for Genetically Modified Systems
• Risk assessment of spread in the environment
• Gene transfer
• Risk Management
• Monitoring of GMO
• Socio-economic issues
• Three Mile Island and Chernobyl disasters
• Collegiality and Loyalty
• Confidentiality and Occupational Crime
• Rights of the employees

Reference

(1) Laboratory Biosafety Manual. 3rd Edn. WHO.


(2) Singh, B.D. 2002. Biotechnology, 2nd Edn. Kalyani Publishers.
(3) http://www.safetyrisk.com
(4) http://dbtbiosafety.nic.in/
(5) www.world-nuclear.org
(6) www.threemileisland.org
(7) www.ipr-helpdesk.org
