STUDY DESIGN FOR DRUG EFFICACY

Outline of the lecture:


- Different designs used in studies to evaluate the efficacy of drugs
- Which elements make a study valid and useful
- Common errors made in designing or interpreting studies to evaluate the efficacy of drugs
- How good study designs can also be biased
- The hidden world

Practicing Evidence-Based Medicine: Sources of Evidence

Retrospective studies - case observations: go back and look at what happened to people who received
a drug, using your notes or hospital databases. Outcomes from people who received a treatment
are analysed. This is the lowest level of evidence.
Prospective uncontrolled trials: monitor people from the moment you perform an
intervention, and register what happens to those undergoing it, e.g. drop-out for lack of efficacy,
side effects, etc. This is uncontrolled because there is no comparison with people receiving no
treatment or a different one.
Randomized controlled trials (RCTs): these are prospective studies in which you monitor people on
whom an intervention is performed, but you also run a comparison with people undergoing another
intervention, which can be placebo or another active treatment. The intervention must be
assigned in a randomized way.
(Meta-analysis of RCTs): in a systematic review, consider every randomized study you can find on an
intervention, put the data from all the studies together and extract evidence from the pooled results. This works well
in some areas, but you can only pool data if the methodology used in the different studies was
similar; otherwise assumptions must be made that are not necessarily correct.

It's important to learn how to look critically at these different kinds of studies, because they are the
evidence on which you will base the treatment of a patient. To keep up to date you must be
able to judge for yourself whether a study is valid or biased.
Why is this so important?
98% of clinical studies are sponsored, performed or monitored by a pharmaceutical company, which
does so in order to get a licence for the drug (i.e. to collect evidence for the regulatory agency to approve
it). These studies are very scientific, as they must be for the drug to be approved, but they are also
artificial: they only prove that the drug is better than nothing, not that it is
better than the treatments already available. If a pharmaceutical company performs a comparison
with other drugs (which is not mandatory and is not done very often), it will do whatever
it can to make sure its drug is the winner. Therefore, comparison studies are often biased.
Only 2% of studies on the comparative efficacy of drugs are done independently of pharmaceutical
companies. Therefore, most of the information about differences in efficacy between drugs
is biased.
When comparative studies are done independently, the winner is often not the
new, more expensive drug, but the older, cheaper one. The money saved by continuing to administer
the old drug is huge compared to the money that would be needed to perform such a
study!

Companies often get away with these biased studies because physicians are unable to pick them up. So,
what should we be careful about?
The first step: scrutinize 4 key elements: PICO
Patients Description, inclusion and exclusion criteria. These have tremendous consequences
on the results! Manipulating inclusion and exclusion criteria can hugely bias a
study. How?
Suppose a comparison study is done. The company might know from previous
studies that one of the side effects of its drug is behavioural problems, and that
people with a history of behavioural problems are more likely to develop them.
It also knows that the well-established old drug may cause gastric problems as a
side effect. So, it will select for the study patients with a history of gastric
problems and no history of behavioural problems. Of course, the old drug will
cause many more problems in this group than it would have in a normal
population, while the new drug will appear much less harmful.
Intervention Type, dosing scheme, duration, route
Dosing scheme: many drugs need an optimal titration scheme. This isn't
always defined before marketing, so it can be exploited. In a comparative study
the company might study and use the perfect titration scheme for its own drug
while not applying the best titration scheme for the competitor's drug,
and it can justify this by following the doses reported on the leaflet, even
though in the meantime, during clinical use, that may have been discovered
not to be the optimal dosage.
Another way to introduce bias is to slightly overdose the competitor's
drug while slightly under-dosing the new drug, so there will be more side
effects from the old drug and fewer from the new one, without affecting efficacy
too much.
Comparison Type of control: placebo, or another treatment
The drug should always be compared with the gold-standard available
treatment.

Outcomes What results, and how assessed? Look at which end points were chosen. Of course,
if the company pays for the study it will choose the end point that is most likely to
make its drug win.

Sources of Bias in Industry-Driven RCTs


Maneuvers that introduce bias (often deliberately):
Comparator used suboptimally
Selection criteria designed to favor the sponsor's drug
Efficacy and safety measures selected to favor the sponsor's drug

Problems with Uncontrolled Trials: Objective is not the Same as Unbiased!


Lamotrigine is a drug for epilepsy. After it came on the market, paediatricians wanted to find out whether it
also worked in children, who were a neglected population in the clinical studies.
All the studies done from the marketing of the drug until 1997 were uncontrolled, but all prospective:
they had a protocol, with inclusion and exclusion criteria. A child was considered responsive to the drug
if the frequency of seizures decreased by at least 50% while taking it, i.e. if the child had less than half
the number of seizures in a given period compared to the number he had
before taking the drug, he was a responder.
All the studies showed that most of the children were responsive! Because of these great results, the
company decided to perform a study in order to market the drug with indications for children as well. The
study was randomized and double-blind. One group received placebo plus all their previous medication; the other
group got lamotrigine plus all the rest. Only 33% of the children on lamotrigine were responsive. But 16% of the
children on placebo were responsive too, which means that around half of the apparent responders to lamotrigine
got better because of the placebo effect, not actually thanks to the drug.
There was an actual gain thanks to the drug in only 17% of the children taking it. Still, it was better than
nothing, so it was marketed for children too.
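The placebo-corrected arithmetic above can be sketched in a few lines of Python. Only the 33% and 16% rates come from the study as described; the cohort size of 100 patients per arm is a hypothetical choice made just to turn percentages into counts:

```python
# Placebo-corrected responder rates, using the percentages reported
# for the lamotrigine study (33% responders on drug, 16% on placebo).
# The 100-patient arm size is an illustrative assumption.

def responder_rate(n_responders: int, n_patients: int) -> float:
    """Fraction of patients with at least a 50% seizure reduction."""
    return n_responders / n_patients

drug_rate = responder_rate(33, 100)     # lamotrigine arm
placebo_rate = responder_rate(16, 100)  # placebo arm

# Net benefit attributable to the drug itself:
net_gain = drug_rate - placebo_rate          # 0.17, i.e. 17%

# Share of apparent drug responders explained by placebo:
placebo_share = placebo_rate / drug_rate     # about 0.48, roughly half

print(f"net gain: {net_gain:.0%}")
print(f"placebo share of responders: {placebo_share:.0%}")
```

This makes the lecture's point explicit: the headline 33% responder rate overstates the drug's contribution by roughly a factor of two.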

Conversely, in another case it might happen that during a double-blind study people do great, but after
the study is done and it is revealed who was taking what, it is discovered that those who got
better are equally split between placebo and drug takers. So the drug actually has a 0% net response.

You can never tell whether an improvement is due to placebo or due to the drug.
Why does placebo make people better?
- The thought of taking something that is supposed to make you better makes you better: the impact of
emotional or psychological factors on the manifestations of the disease
- Observer-related systematic bias (an unconscious trend to see what we expect, or what we hope for)
- Patient-related systematic bias (a trend to please the physician)
- People may get better by chance
- Most conditions are actually self-remitting (acute diseases, such as a cough). Spontaneous remission
is another reason why placebo seems to work: the drug wouldn't have been needed anyway.
- People never have seizure attacks on a schedule, so in any window of time it can happen
that they simply had fewer, while in another window they had more; considering the
mean cancels this difference.
This is a phenomenon of huge importance: regression to the mean. In every disease with countable
episodes, for which there is an average (e.g. 1 seizure a week), episodes
occur randomly: there might be 3 in a week and then none for a month.
So, if you follow a patient even without any treatment, the disease will tend to get better just by
chance: it regresses to the average of the disease.
Moreover, if the patient is in a good period, he feels good; if he is in a bad period, with many episodes in a
week, he will immediately call the doctor. This normal fluctuation of the disease introduces
a bias: people see the doctor only in the bad phases.
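Regression to the mean can be demonstrated with a small simulation. This is a sketch under assumptions not in the lecture: seizures follow a Poisson process with a true mean of 4 per month, and the enrolment cutoff of 6 seizures is arbitrary:

```python
import math
import random

random.seed(42)

def poisson(lam: float) -> int:
    """Draw one Poisson-distributed count (Knuth's algorithm)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

TRUE_MEAN = 4.0  # the population's true seizure rate per month (assumed)

# One baseline month and one untreated follow-up month per patient:
baseline = [poisson(TRUE_MEAN) for _ in range(10_000)]
followup = [poisson(TRUE_MEAN) for _ in range(10_000)]

# Enrol only patients whose baseline month was bad (>= 6 seizures),
# mimicking trial entry criteria and "calling the doctor in bad phases":
enrolled = [(b, f) for b, f in zip(baseline, followup) if b >= 6]

mean_baseline = sum(b for b, _ in enrolled) / len(enrolled)
mean_followup = sum(f for _, f in enrolled) / len(enrolled)

print(f"mean baseline of enrolled patients: {mean_baseline:.1f}")  # well above 4
print(f"mean follow-up with no treatment:   {mean_followup:.1f}")  # close to 4
```

Even with no treatment at all, the enrolled group's seizure count falls from its inflated baseline back toward the population mean, which is exactly the improvement an uncontrolled trial would credit to the drug.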

Magnitude of the placebo effect:
In most therapeutic areas, placebo-associated responses account for 30 to 70% of the improvement
in clinical response observed during drug treatment
The magnitude of placebo responses varies unpredictably, from 0 to over 80%
The response to the active drug in some studies is less than the response to placebo in other studies in the same
type of patients
In open-label studies, placebo responses can be up to 4-fold greater than those observed in double-blind
studies

Is it ethical to keep patients on placebo if you see during a double-blind experiment that a drug is helping
someone enormously? If the benefit seems really huge, the ethical thing to do is to ask to un-blind the study.
But in practice this never happens: if there is a better response to the new drug compared to the response of
the existing drug, it is usually extremely small and marginal. In fact, most of the drugs approved nowadays
are not better than what is already available.
Curiosity: awareness of this is increasing, so nowadays to market a drug you must sometimes prove
non-inferiority.

Objection: why did people on lamotrigine in the double-blind study not do as well as in the uncontrolled
studies?
In an uncontrolled study, you follow a protocol: the patient signs an informed consent and then, the day
after, administration of the drug starts. There is the potential for a huge apparent improvement due to regression
to the mean. By contrast, in a double-blind study the baseline is measured in advance, i.e. during the month before
starting the drug. This means that, before starting drug administration, you usually wait 4 weeks in which
you monitor and register a number of measures, for example counting the episodes of seizures. In this
way you pick up the improvement due to regression to the mean, and you have data against which to compare
any improvement seen after administration of the drug, to see whether it is as big as the natural
improvement due to the course of the disease or whether it is greater.

Problem: people may not be happy to enrol in a study if they have to wait for a month and may then end up on
placebo while they are sick. Also, because there are 27 drugs for seizures, they can try something else instead.
So, companies look for patients in countries such as India, where only 2 drugs for seizures have been
marketed and they cost a lot, so people are willing to enrol because they will receive a treatment for free
and because they don't have many options. This creates another problem: to enrol in the study
there may be criteria such as having had 4 seizures in a month. A patient may have had only 3, so he might
lie to enter the study. As a result, the baseline is inflated and the improvement will look better than it actually
is.
How to detect this bias: a higher placebo effect in South America and East Asia than in the EU or US

Another study: retigabine


Patients randomized into 4 groups, receiving placebo or one of 3 different dosages:
- 1 borderline dose, which probably won't do anything
- 1 dose that you think is better than placebo
- 1 dose that might give side effects
This is a very informative study!

Can you use historical placebo rates in subsequent experiments? It would make the experiment cheaper
because you would need fewer patients.
No! Two studies performed 6 years apart, by the same author, in the same pool of patients, with the same
seizure type, resulted in extremely different numbers of responders under placebo: the response rate to
placebo was 17% in the first study and 49% in the second.
The point is: the placebo response is not constant! You always have to include a placebo group; you can't
use historical controls.
The Will Rogers phenomenon
This phenomenon takes its name from the quote "When the Okies left Oklahoma
and moved to California, they raised the average intelligence level in both
states." The idea: suppose people in Texas have a bell-shaped
distribution of IQ. People in Utah do too, but are on average much smarter. The least
intelligent people in Utah are, say, the football players. If these people moved to Texas,
the average IQ in Texas would rise (they are above the Texas average), and the average
IQ in Utah would rise too (they were below the Utah average)!

This can be applied to medicine:


Survival in patients with localized carcinoma is much better than in
patients with metastatic carcinoma. But among the "localized"
carcinomas there are probably people who have small, undetected metastases.
If physicians become able to detect them, those people will switch group
and raise survival in both categories.
This can be mistaken for a positive effect of a drug, but actually it is just
due to better imaging techniques.

Usefulness of Uncontrolled Trials (the vast majority of studies unfortunately)


To assess pharmacokinetics and drug interactions
To explore, and generate signals about, potential efficacy (and tolerability) in specific syndromes, prior to
conducting controlled studies: give the drug open-label and observe the effects in general. After you get
the signal, though, you need a controlled study to determine whether it is more than a signal, i.e. an actual effect
To provide (misleading) supportive evidence for drug promotion (seeding trials)

Seeding trials: these are performed less nowadays; in some countries they are theoretically forbidden.
They are post-marketing studies of little or no scientific value, designed for:
- Familiarizing physicians with the use of the sponsor's drug: companies will trick you into taking part in
a study to assess the efficacy and side effects of a drug; maybe they will offer money, or they will flatter
you somehow. Involving you in a study works better than simply giving you a file with the
information about the newly developed drug, which you might look at but then keep using the treatment
you are familiar with.
At the end of the study you might find that the drug works, or that it doesn't, but in the
meantime you will have got used to it.
- Increasing prescriptions through recruitment into the study. Patients with many diseases may require
treatment for many years. The company may offer to enrol your patients in a study that might
last 1 year. If a patient is doing well at the end of the study, he will probably want to keep taking
the drug, but now the company is no longer paying for it; the national health care service will.
This is a way to recruit patients: you pay for their treatment for a year and you ensure that they will
keep buying the drug for much longer.
This is commonly done in poor countries. One example is Pherbatol, in Argentina: the drug was
effective, no more so than others, but it was aggressively marketed, so patients asked for it
and it was prescribed. Within a few months it came out that it was toxic: it caused aplastic anemia
and many people died. In Argentina, the company had offered this drug for free in an uncontrolled
clinical trial in newly diagnosed patients (patients who would have responded very well to existing
drugs!). The study lasted 1 year; after that the patients had to pay for the drug, and of course
switching drugs is not desirable if the current treatment is working. The problem is that
the cost of a year of treatment was 1/3 of a person's average yearly salary, so it was an awful
situation.
- At times, extending the use of the drug to non-approved indications.
A drug may be approved to treat peptic ulcers. Typically, these drugs do not work for gastritis. But
the company may convince physicians to try giving it to patients with gastritis too. A physician may
be convinced, perform an uncontrolled study and find positive improvements. But in a patient with
gastritis you usually also suggest correcting the diet, so gastritis is a condition that normally improves
easily just because of that. Therefore, you can easily find positive results even if the drug did nothing,
and you can publish a paper. A regulatory agency would never approve the drug for this use (it would
require a double-blind controlled study), but the company can spread said paper and some physicians
will surely be convinced.

Memorandum (from the marketing department to sales reps) found by inspectors of the FDA in the marketing
department of a pharmaceutical company:
"Make no mistake: The Ypertin study is the single most important sales initiative for 1993. Phase I provides
2500 physicians with the opportunity to observe in their patients blood pressure control by Ypertin. If at
least 20,000 of the 25,000 patients involved in the study remain on Ypertin, it could mean up to a $10,000,000
boost in sales. In phase II, this figure could double."
This basically says that the study itself is the single most important initiative to make money.

Some Common Justifications for Conducting Uncontrolled Trials (i.e. how they try to justify not having done
a controlled study)
They are easier to conduct (true, but not a good reason to do bad science)
They are the only way to mimic clinical practice, i.e. in a controlled study many things are artificial (wrong:
randomized trials can mimic clinical practice equally well)
They are the only option in rare syndromes, for which it is impossible to find enough patients for an RCT
(wrong: randomized trials can be effectively conducted with few patients, even in just one patient! How?
Switch the patient between different treatments in different time periods: e.g. first treatment A, then placebo, then B, then
placebo again, A, B, placebo, and so on, in a randomized, double-blind way)
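The single-patient (n-of-1) design described above amounts to randomizing the order of treatment periods. The following is a sketch of one way to do it; the function name and the cycle structure are illustrative choices, not from the source:

```python
import random

def n_of_1_schedule(n_cycles: int, seed: int) -> list[str]:
    """Randomized, double-blind period sequence for a single patient.

    Each cycle contains one period of treatment A, one of treatment B
    and one of placebo, shuffled independently so that neither patient
    nor physician can predict which period is which.
    """
    rng = random.Random(seed)
    schedule: list[str] = []
    for _ in range(n_cycles):
        cycle = ["A", "B", "placebo"]
        rng.shuffle(cycle)       # randomize order within this cycle
        schedule.extend(cycle)
    return schedule

# e.g. 4 cycles -> 12 treatment periods for one patient
print(n_of_1_schedule(4, seed=1))
```

Shuffling within each cycle (rather than shuffling the whole list at once) guarantees the patient gets every treatment the same number of times and never goes too long without any one of them.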

Non-randomized Controlled trials:


Sometimes, to circumvent the objection that an uncontrolled study is useless, a pseudo-controlled,
non-randomized study is done. But these are as useless as uncontrolled studies.

Results of Observational Studies Found to be Wrong when Tested in Randomized Studies:


There have been cases in which huge uncontrolled observational studies
changed medical practice, and the practice was then shown to be useless or harmful when tested scientifically:
Reduction in the risk of CHD with hormone substitution therapy
Reduction of cardiovascular disease with vitamin E
Reduction in the risk of lung cancer with carotene

There are some ways to avoid this kind of malpractice, but change must be slow:
- Try to change the system as a whole (very hard to achieve)
- Educate physicians to look critically at the evidence
- Ethical committees should stop trials (which must always be approved by them) if there is some bias.
The problem is that often there is no one with the right expertise to pick up these biases
- Before results are published, they must go through peer review. The expertise of reviewers in picking
up these biases is also often inadequate.
- Many journals nowadays require the study to report a standard set of information in a single table, a sort of
form that needs to be filled in:

Responder Rates as a Function of Dose


A study on oxcarbazepine, based on the measurement of responder rate (the proportion of epilepsy patients with at least a 50%
reduction in seizure rate).
Responder rates were calculated when the drug was administered at different doses. Between the highest
dose and placebo there is a big difference! It seems very good. But if you also consider the number of patients
who left the study because of intolerable side effects, you will see that among those
taking the highest dose of the drug, 74% dropped out of the study. Still, the study reports that 50%
of those who started the highest dose had a positive response: how is that possible if only 26% of them
actually stayed in the study?
Results are analysed using the LOCF methodology (last observation carried forward).
If a patient drops out of the study, the response up to that point is registered and projected
to the end of the study. E.g. a child may stop the drug after 2 days because of serious side effects,
but in those 2 days he did not have seizures, so he is listed as responsive.
It is fine to know how many had a good effect even if they dropped out; that can be useful. But the most
important information is: considering only the patients who tolerated the drug, how many responded
positively? Only approximately 3% of publications report this information. They tell you how many
patients were responsive and how many dropped out, but they don't tell you how many of the responders
were able to tolerate the drug and complete the study.
Of course, it is in their interest to hide this, and reviewers and publishers often do not pick it up.
