Clinical trials – getting it right

AstraZeneca reported a failure in Phase III clinical trials for its non-small cell lung carcinoma (NSCLC) candidate selumetinib, which was being tested in combination with docetaxel. This comes in the wake of a prior failed PIII trial of selumetinib with dacarbazine for metastatic uveal melanoma (a form of eye cancer). Given that each PIII clinical trial is estimated to cost 30-40 million USD, and that PIII makes up 60% of all clinical trial costs, that is an incredible amount of money to invest in something with a real chance of coming to naught.

Just how many of these trials are failing, you ask? According to the FDA website, PI trials have a success rate of 70%, PII success is reportedly lower at 33%, and the success rate of a PIII trial ranges from 25-30%.
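Chaining those rates together shows just how steep the odds are. A quick back-of-envelope calculation (taking the midpoint of the PIII range quoted above) gives the chance that a drug entering PI clears all three phases:

```python
# Success rates quoted above: PI 70%, PII 33%, PIII 25-30% (midpoint used).
p1, p2, p3 = 0.70, 0.33, 0.275

# A drug must clear all three phases in sequence, so the probabilities multiply.
overall = p1 * p2 * p3
print(f"chance of surviving PI through PIII: {overall:.1%}")  # roughly 6%
```

So only around one drug in sixteen entering Phase I makes it all the way through Phase III, which puts the per-trial costs above into perspective.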

What are the reasons for failure? There are worryingly many, but I list a key few here:

  • A mouse is not a little man. Poor translation from animal models to humans is a problem scientists face all the time. Numerous drugs have worked in mice but failed disastrously when put into humans.
  • Heterogeneity of disease. NSCLC is a great example: it is actually made up of ten different mutation-specific diseases, so patients tend to have rather varying disease etiologies, making them harder to treat with the same approach. The PIII success rate for NSCLC is just 26.1%!
  • Poor trial design. Designing a trial is not a walk in the park. Choosing the right patients to test, randomizing treatments, blinding treatments, getting the dose right, and having good end-points and surrogate measures are all factors that can severely affect trial outcome. See Dr Richard Chin’s blogpost for a good overview.
  • False discovery rate. Performing a large number of measures on a small number of patients (oftentimes the case in PII trials) tends to increase the number of false positives. For example, if 10 tests are performed per patient in a trial with 100 patients (i.e. 1,000 measurements), around 50 of the positive results will be false on average, even if the drug does nothing (at a significance level of 0.05). The FDR is a correction measure proposed by Benjamini and Hochberg (1995) – it looks only at the significant results and controls the proportion of false positives within them (see here and here for more info). Though it reduces the level of false positives overall, it still lets in a small proportion of them, which may increase when too many or inappropriate measures are used. This can create the impression that a drug is working when it is not (which is later found out in the more highly powered PIII trial).
  • Risky repositioning. The success rate of drugs in trials for diseases they were not designed to treat, i.e. non-lead/secondary indications, is often lower (about 1.5-3X less likelihood of approval). This is often due to institutional bias, where less time and effort go into monitoring patient selection or establishing the scientific rationale.
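The multiple-testing point is easy to demonstrate with a toy simulation (pure Python, illustrative numbers only): draw 1,000 "measurements" for a drug that does nothing, count the naive hits at p < 0.05, then run the same p-values through the Benjamini-Hochberg procedure.

```python
import random

random.seed(0)

# 1,000 measurements of a drug with no real effect: under the null,
# p-values are uniform on [0, 1], so at alpha = 0.05 we expect ~50
# "significant" results that are all false positives.
pvals = [random.random() for _ in range(1000)]
naive_hits = sum(p < 0.05 for p in pvals)

def benjamini_hochberg(pvalues, q=0.05):
    """Return a reject/keep flag per p-value, controlling FDR at level q."""
    m = len(pvalues)
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    cutoff = 0.0
    for k, p in enumerate(sorted(pvalues), start=1):
        if p <= k / m * q:
            cutoff = p
    # ... and reject every hypothesis with a p-value at or below it.
    return [p <= cutoff for p in pvalues]

bh_hits = sum(benjamini_hochberg(pvals))
print(f"naive hits: {naive_hits}, BH-corrected hits: {bh_hits}")
```

With everything null, the naive count lands near 50 while BH almost always rejects nothing; the correction earns its keep precisely in trials that measure many endpoints.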

These problems are not easy to solve but at least the last 3 points are immediately addressable.

To counter poor trial design, adaptive clinical trial design is now gaining popularity, as it allows the treatment regime/dose and the patient selection/sample size to be modified as results come in. It also allows a trial to be terminated early when the drug is looking futile, which can save a lot of money. However, there is inertia to adopting it due to logistical/operational concerns, so only an estimated 20% of clinical trials follow an adaptive design – but hopefully this grows.
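To get a feel for why early stopping saves money, here is a toy simulation (made-up numbers, not a real group-sequential design): an ineffective drug is trialled with 100 patients per arm, with one interim look halfway through each arm; if the treatment arm is not clearly ahead at the interim, the trial stops for futility.

```python
import random

random.seed(1)

# Both arms respond at the same rate, i.e. the drug does nothing.
def run_trial(rate=0.3, per_arm=100, interim=50, margin=5):
    treat = [random.random() < rate for _ in range(per_arm)]
    control = [random.random() < rate for _ in range(per_arm)]
    # Interim look after `interim` patients per arm: stop for futility
    # unless the treatment arm leads by at least `margin` responses.
    lead = sum(treat[:interim]) - sum(control[:interim])
    if lead < margin:
        return 2 * interim      # stopped early for futility
    return 2 * per_arm          # ran to completion

sizes = [run_trial() for _ in range(1000)]
avg_enrolled = sum(sizes) / len(sizes)
print(f"average patients enrolled per trial: {avg_enrolled:.0f}")
```

For this useless drug, most runs stop at the interim look, so the average enrolment comes in well under the full 200 patients. In real adaptive designs the stopping boundaries are chosen to control the error rates, not eyeballed like this margin.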

To reduce the FDR, we could adopt more stringent significance criteria, e.g. p < 0.001. It also helps to look at what the multiple measures are telling you as a whole, rather than singling out the one that gives a statistically significant result. Testing a hypothesis that is very unlikely to be true tends to increase the number of false positives, so it is advisable not to include such hypotheses in your battery of tests. Benjamini has also now come out with a new weighted FDR approach that assigns importance weights to the measured endpoints, enforcing better control of the overall error rate.

Finally, the repositioning of drugs for non-lead or secondary indications is something pharma has to regulate closely. Though it makes sense to reuse drugs that have cleared all the initial safety hurdles and have well-established pharmacokinetics, how suited a drug is to treating another disease should be properly established. Costly failed clinical trials have been known to bankrupt biotechs and to set pharmas well behind in the development of other drugs that could otherwise have made a significant impact on patients’ lives. That said, there have been several cases of successful repositioning – thalidomide, previously a sedative, is now used for pain/inflammation in leprosy, and dapoxetine, an anti-depressant, apparently also works for premature ejaculation. There is a growing number of biotechs focused solely on repositioning, with several targeted strategies. This presents a new market niche altogether, which might reduce the great waste we currently see in drug development.



The search for immortality

Ageing has always been a subject of great interest and intrigue. In particular, how to prevent it is probably one of the key life goals of scientists and millionaires alike, and something desired by all at some point or other.

It’s a rather selfish objective when there are so many people suffering from other diseases that require our attention. Yet ageing is still a rather substantially funded research area as you can see:


I made this chart using categorical spending figures from the NIH website. For the whole list, see here.

However, the thing about ageing is that growing old increases the risk of developing various diseases. So it kind of makes sense to understand it. But how does one go about studying ageing?

1. Look at rapidly ageing human models.

Progeria is a disease that resembles rapid ageing – skin gets wrinkled, muscles waste and hip dislocation can occur easily. The disease is strongly linked to mutations in the gene encoding Lamin A, a protein found at the inner edge of the cell nucleus that helps maintain nuclear stability. The mutation creates a splice site, resulting in a truncated form of Lamin A that is unable to perform its nuclear support function, giving rise to abnormally shaped nuclei. Interestingly, senescent cells (i.e. cells that have stopped dividing) also express this truncated form of Lamin A, indicating that it tends to appear with ageing.


Image from Wikipedia-Progeria. A young girl with progeria (left). A healthy cell nucleus (right, top) and a progeric cell nucleus (right, bottom).

2. Try to extend lifespan in model organisms

Worms, flies and yeast have two things in common: they are easy to genetically manipulate and they do not live very long. This makes them ideal for reverse genetics, where scientists screen thousands of genes, knocking each one down to see whether lifespan is extended. The African killifish, which lives for about three months, is also being used more and more for ageing research, and Amber Dance wrote a great article on it recently.

This nice infographic shows the range of choice in model organisms (from the article “Live Fast Die Young” by Amber Dance):


Strangely enough, there is little overlap between genes implicated in premature ageing diseases and genes that have extended lifespan in model organisms (such as mammalian target of rapamycin or mTOR and p53-related genes). However, it may be that the genes overlap in terms of downstream pathways and molecular mechanisms. So far it seems oxidative stress, protein homeostasis and TOR signalling may play larger roles.

In particular, the mTOR inhibitor rapamycin has been shown to actually extend lifespan in yeast, mice and now dogs. However, it comes with some rather nasty side effects, so it may not quite be the life-extending elixir we are looking for. And so the search continues…

Synchronous lysis of bacteria for drug delivery

A group at UCSD recently published in Nature that they have programmed bacteria to lyse themselves at intervals, releasing drugs that kill tumour cells. Yet another application of synthetic biology, this advance enables better control of bacterial levels, reducing the risk of adverse immunological responses in patients.

Synthetic biology is based on the design of biological circuits (very much like electrical engineering) within organisms, programming them to perform the specified function. My simplified version of the group’s circuit design is as follows:


There are basically four components in their circuit:

  • The drug, haemolysin E, a pore-forming anti-tumour toxin.
  • GFP, or green fluorescent protein, which makes the bacteria glow green as they are about to lyse. They used a superfolder version, which ensures GFP folds robustly even when fused to poorly folded peptides. Previous versions of GFP tended to follow the folding ability of their fusion partner, which can impact fluorescence.
  • LuxI – an enzyme that catalyses the production of AHL (acylhomoserine lactone), a small molecule that drives expression from all of the circuit’s promoters. AHL diffuses freely among cells, ensuring every bacterium is synchronized. AHL binds to its receptor LuxR to carry out its functions.
  • Lysis factor – the authors call this protein E, a lysis protein derived from a bacteriophage lysis gene (φX174 E) that triggers lysis of the bacteria.

When the bacterial population is low, AHL is synthesized at a basal level but remains at a low concentration as it diffuses out into the extracellular environment. Only when the bacterial population reaches a specific level (see quorum sensing) does AHL build up to sufficient amounts. Binding of AHL to LuxR activates the promoters, driving the expression of all four components – the bacteria glow green, then explode due to production of the lysis factor, releasing the drug to bathe the surrounding cancer cells. A few outliers survive, seeding the next round of bacterial growth. And the process repeats. There’s a cool video of the whole process, but you probably need a Nature subscription to watch it.
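This boom-and-bust cycle can be caricatured in a few lines of code. The following is only a toy discrete-time sketch with made-up parameters (growth rate, AHL threshold, survivor fraction), not the paper’s model, but it reproduces the qualitative behaviour: grow, reach quorum, lyse, repeat.

```python
# Toy quorum-sensing lysis loop: bacteria grow, AHL accumulates with
# population density, and when AHL crosses a threshold the population
# lyses down to a few survivors, restarting the cycle.
def simulate(steps=300, growth=1.1, ahl_per_cell=0.01, decay=0.9,
             threshold=5.0, survivors=0.05):
    n, ahl = 1.0, 0.0
    history, lysis_events = [], 0
    for _ in range(steps):
        n *= growth                           # exponential growth
        ahl = ahl * decay + ahl_per_cell * n  # AHL tracks population density
        if ahl > threshold:                   # quorum reached: synchronized lysis
            n *= survivors                    # a few outliers survive
            ahl = 0.0                         # released AHL diffuses away
            lysis_events += 1
        history.append(n)
    return history, lysis_events

history, events = simulate()
print(f"lysis events over 300 steps: {events}")
```

Because the survivors reseed growth, the population oscillates in repeated pulses rather than growing without bound, which is exactly what keeps bacterial numbers (and the host immune response) in check.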

The group demonstrated efficacy in vitro as well as in mice, where the bacteria were made to carry and release different types of anti-tumour reagents – namely the toxin, an immune-response stimulator or an apoptosis inducer. A combination of all three caused the greatest tumour regression. The bacteria were also effective when given orally, and when used together with the chemotherapeutic agent 5-fluorouracil, they increased the mean survival time of animals harbouring incurable colorectal metastases by 50%.

The use of bacteria has been gaining popularity in cancer treatments, especially with the increasing ease of genetic engineering. Several clinical trials are being run and there is already an FDA-approved bacterial therapy for bladder cancer which uses Bacillus Calmette-Guérin (BCG) (yup, the same injection we get in school to immunize us against tuberculosis) to stimulate an immune response against cancer cells. There are several advantages that come with using bacteria for cancer treatment:

  • They are easily genetically modified – providing limitless possibilities for modulating their ability to sense environmental cues and carry out desired actions.
  • They can be grown in vast amounts very rapidly.
  • They are self-propelled, enabling better tumour penetration. Bacteria can even be made to sense chemicals produced by the tumour and propel themselves towards it.
  • Anaerobic bacteria can thrive in the low-oxygen conditions often found in tumours. In contrast, chemotherapy, which normally targets rapidly dividing cells, often fails to reach quiescent cells in the depths of the tumour where glucose/oxygen are lacking.
  • They are externally detectable by MRI/PET/bioluminescence/fluorescence, enabling easy monitoring of treatment efficacy and tumour state.

Of course, there are various challenges as well. The most worrying is probably ensuring these bacteria do not mutate into monster bacteria that go on a rampage in our bodies. The high resistance that bacteria have already developed against numerous antibiotics goes to show that keeping them genetically stable may be an uphill task. Bacteria are also nasty triggers of the immune response, and having fevers and chills on top of the already damaging side effects of chemotherapy would not be something to look forward to. Getting bacteria to target cancer efficiently in different subgroups of patients, and establishing effective combination regimes with existing treatments, will also take a lot of time, money and effort to get right.

None of this has ever stopped Man before, though. George Church’s group has already come up with several safety mechanisms to generate safer bacteria, which depend for survival on a synthetic amino acid that can only be made in the lab. And now that the bacterial population can be controlled by synchronous lysing, immunostimulatory effects can also be tightly controlled. The possibilities, it seems, are endless.



Din, M. O., Danino, T., Prindle, A., Skalak, M., Selimkhanov, J., Allen, K., … Hasty, J. (2016). Synchronized cycles of bacterial lysis for in vivo delivery. Nature, advance online publication. Retrieved from

Zhou, S. (2016). Synthetic biology: Bacteria synchronized for drug delivery. Nature, advance online publication. Retrieved from