Young blood transfusions against aging – what we know

My dad is getting on in years. He’s 73 and not exactly at the peak of health. In fact, I should count my lucky stars he’s still alive. He has had several big health scares, but the doctors have ‘worked their magic’ and he remains in good spirits, with an alert mind and an ever-active daily schedule, though with somewhat restricted mobility.

He asked me one day, me being the scientist of the family, what I thought of young blood transfusions. I stared at him in a mixture of horror and amusement. He had read online that young blood transfusions provided some health benefits and had apparently asked his doctor for one! Of course the doctor politely declined, stating that blood transfusions were typically reserved for emergencies and that he could only have one if his haemoglobin levels dropped below a certain (very low) level. I asked if he would like my blood or my brother’s, but he said we were ‘too old’. Images of vampires and creepy old ladies who bathed in the blood of virgins to remain youthful flashed through my mind... along with my dad drinking the blood of a teenager.

Perhaps the most well-cited studies of this wild idea are those carried out by neuroscientist Dr Tony Wyss-Coray’s lab at Stanford University. He published two studies in Nature, one in 2014 and one in April this year, demonstrating cognitive improvement in old mice exposed to young blood.

The earlier study involved a surgical procedure called parabiosis that joined an old mouse to a young mouse such that they shared the same blood circulation. Five weeks after the surgery, the old mice showed significant cognitive enhancement. This was demonstrated by upregulation of genes associated with increased synaptic plasticity, an increased number of dendritic spines in the hippocampus (which is known to decline with age), and enhanced long-term potentiation (LTP, an electrophysiological phenomenon in the brain representing memory).

Importantly, they also injected old mice with plasma from young mice over 4 weeks and found that they exhibited improved memory in cognitive behavioral tasks. Essentially, they measured freezing time after auditory and electric-shock stimulation (fear conditioning) and spatial memory (water maze). Pretty convincing results!

The second study, done this year, identified factors in the plasma responsible for this rejuvenating effect. Interestingly, they also found that human umbilical cord plasma produced the same cognition-enhancing effects when injected into old mice, suggesting a common anti-aging molecule shared between mouse and man. The factor identified was tissue inhibitor of metalloproteinases 2 (TIMP2). Injection of this factor alone enhanced synaptic plasticity markers and LTP, and improved cognition in various behavioral tasks. Conversely, inhibiting TIMP2 with antibodies reduced cognition, and removing it from the young plasma prior to injection also removed the cognition-enhancing effects. A surprisingly clear-cut result in neurobiology.

A separate study by a group at Harvard University identified another factor in young blood, Growth Differentiation Factor 11 (GDF11), that induced “vascular remodeling, culminating in increased neurogenesis and improved olfactory discrimination in aging mice.”

It’s no wonder that people are getting excited about this, and my dad may actually have a point!

Looking at the biotech scene, there are two biotechs involved in young blood therapy. One is Alkahest, a Stanford spin-out with Dr Wyss-Coray as chairman of the scientific advisory board. It has partnered with Grifols, a Spanish pharmaceutical company and a world leader in plasma-based products. Grifols invested US$38 million in Alkahest, with an additional US$13 million to develop and sell Alkahest’s plasma-based products (see press release).

Another company is Ambrosia, started by Princeton graduate Jesse Karmazin, which has sparked some controversy by carrying out a human clinical trial with a US$8,000 participation fee. Judging from the website, which has very little detail at all, much less about the science or technology, it seems to be purely a money-making endeavor that thrives on the fear of aging. They are using healthy volunteers and do not even have a placebo group in the trial. Most in the field have written it off as “the scientific equivalent of fake news”, but undoubtedly, Ambrosia is not likely to suffer from a lack of participants desperately seeking the fountain of youth.

While we might be able to one day find the ‘cure for aging’, let’s not lose touch with reality. As physicist Sean Carroll says in “The Big Picture”:

“But eventually all of the stars will have exhausted their nuclear fuel, their cold remnants will fall into black holes, and those black holes will gradually evaporate into a thin gruel of elementary particles in a dark and empty universe. We won’t really live forever, no matter how clever biologists get to be”

*For a great review of the current state of anti-aging efforts in biotech/pharma, read this: João Pedro de Magalhães et al., “The Business of Anti-Aging Science”, Trends in Biotechnology, 35(11), 2017.

**After this post was published, results of Alkahest’s trial of 18 participants were released, showing that young blood transfusions, while safe, produced minimal, if any, benefits – as reported in Science Magazine.


The Ansoff Matrix

As mentioned in the previous post, business success involves thinking and planning ahead. The two main vectors of commercial growth are product development and market development. The Ansoff Matrix is probably one of the oldest business strategy tools, designed to help businesses think about strategies for growth and their associated risks. Developed by Igor Ansoff, a Russian-American mathematician and business strategist, it was published in a 1957 Harvard Business Review article, “Strategies for Diversification”.

[Image: the Ansoff Matrix]

Image from Wikimedia Commons, “Ansoff Matrix”.

There are four main strategies, and the colours represent the level of risk involved (a simple code sketch of the full matrix follows the four strategies below):

1. Market Penetration

Market penetration involves increasing sales of an existing product in an existing market. It is the lowest-risk strategy, as the business already has all it needs to make and sell its product in a familiar market, so no new research or development is necessary.

Some key ways to increase market penetration:

  • Competitive pricing (low vs premium)
  • Advertising and promotion
  • Increasing sales effort
  • Taking market share from competitors, e.g. through differentiation or acquiring a competitor
  • Increasing use by existing customers, e.g. loyalty programs
  • Increasing ease of purchase
  • Educating potential customers
  • Increasing distribution channels
  • Modifying product packaging to increase appeal

2. Market Development

Market development involves selling the same product to new market(s). The vertical movement on the matrix indicates that some risk is involved, as one is venturing into new markets. Resources are allocated to expanded marketing efforts, so a thorough understanding of the new market is necessary to ensure a profit is reaped. The business should also have the ability to cope with increased demands on production.

Ways to develop new markets:

  • Increase geographical reach e.g. sell to other countries or locations
  • Use different sales channels e.g. online vs physical store
  • Advertise through different media
  • Perform market segmentation to identify new customer groups by demographics or psychographics
  • Adjust pricing strategy to reach different markets
  • Explore selling to other businesses (B2B) instead of directly to consumers (B2C), or vice versa

3. Product Development

This involves enhancing the product, increasing the product range or developing new products with related technologies to sell to the same markets. Again, some risk is involved with the horizontal movement on the matrix. This is necessary when the product is losing its appeal and falling behind the competition, or alternatively if the business wants to be first to market with a novel product.

This is done by:

  • Investing in research and development to develop new products
  • Acquiring the rights to produce someone else’s product
  • Jointly developing and owning a new product with another company
  • Acquiring a product and rebranding it under the company’s name
  • Understanding changing customer needs and creating new products to address them

4. Diversification

Diversification is the highest-risk strategy, as it involves branching out into a completely new field with new product(s) and new market(s). Some reasons given for diversification include avoiding technological obsolescence, distributing risk, making use of increased production capability, reinvesting earnings, increasing profits by entering a lucrative industry, developing capabilities in a new “growth” industry, and obtaining top management.

There are three types of diversification:

  • Vertical diversification – where the company decides to make its own production components or materials instead of relying on vendors. The technology involved in making these parts can be very different from that used to assemble the final product, so it does involve a substantial learning curve.
  • Horizontal diversification – where the product is completely different from the existing one but builds on the company’s existing know-how in technology, finance and marketing.
  • Lateral diversification – where the company delves into a completely new industry, as when Coca-Cola bought the movie production company Columbia Pictures.
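
Because the matrix boils down to a lookup from (product newness, market newness) to a strategy and a rough level of risk, it can be captured in a few lines of code. The Python sketch below is purely illustrative – the risk labels are my own reading of the matrix colours, not part of Ansoff’s framework.

```python
# Illustrative only: the Ansoff Matrix as a simple lookup from
# (product, market) newness to strategy and a rough risk level.
# The risk labels are my own reading of the matrix, not Ansoff's.

ANSOFF_MATRIX = {
    ("existing", "existing"): ("Market Penetration", "lowest risk"),
    ("existing", "new"):      ("Market Development", "moderate risk"),
    ("new", "existing"):      ("Product Development", "moderate risk"),
    ("new", "new"):           ("Diversification", "highest risk"),
}

def ansoff_strategy(product: str, market: str) -> str:
    """Return the growth strategy for a given product/market combination."""
    strategy, risk = ANSOFF_MATRIX[(product, market)]
    return f"{strategy} ({risk})"

if __name__ == "__main__":
    # e.g. selling an existing product into a new market
    print(ansoff_strategy("existing", "new"))  # Market Development (moderate risk)
```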


Interestingly, there is also a Personal Ansoff Matrix, which one can use for career development. Igor Ansoff nicely summarized the necessity of staying ahead of the growth curve by quoting the Red Queen (from Lewis Carroll’s Through the Looking-Glass):

“Now, here, it takes all the running you can do to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!”

The 10 commandments of business success

We spend most of our lives working for businesses, but how many of us actually understand what it takes to run one? Businesses come in different flavors, but they are based on one inherent principle – providing value to customers. The main challenge, however, is getting the people, machines and communications in place to do this effectively. Being profitable is the main goal of many businesses, but these days it does not always have to be the immediate priority. Amazon, for example, pumps the profits it makes back into the company as investment to make sure it stays ahead of the competition.

There are so many components to a business – marketing, operations, production, customer care, growth strategies etc. – which I am still trying to get my head around. Working in a small biotech and entrusted with the responsibility of “managing” marketing and product development, I can only approach this the scientific way – research and experimentation. So for now, here goes the research bit. This post contains some notes I took while reading “The Ten Commandments of Business Failure” by Donald R. Keough. In the book, he wrote the advice in the form of negative statements, i.e. what you should not be doing. I converted them into positive ones to avoid wearing out my simple brain; the actual commandments are in brackets. There are actually 11, as he gave one as a bonus… I hope you find them as useful as I did.

1. Take calculated risks (Quit taking risks)

Creating profits in the long term requires innovation now. Business leaders are paid to “be discontented”, to take the calculated risks that will ensure the company’s success in the future. “When you’re comfortable, the temptation to quit taking risks is so great, it’s almost irresistible”, but it is the number one way to seal your fate and fail. Mistakes and miscalculations, even very costly ones, are simply the price of staying in business.

2. Keep improving (Be inflexible)

The “if it ain’t broke, don’t fix it” mentality is the second-best way to secure the demise of a business. There is no one formula for success that will always continue to work; leaders must constantly challenge themselves to change. “Flexibility is a continual deeply thoughtful process of examining situations and when warranted, quickly adapting to changing circumstances.” Darwin’s concept of natural selection applies not just to organic species, but to the survival of businesses as well.

3. Talk to ground staff (Isolate yourself)

Staying in touch with customers, distributors, managers and staff is essential to continued growth and success. It is temptingly easy to physically isolate yourself from “distractions” in the comfort of leather sofas and plush carpets, in corner offices on high floors guarded by layers of personal assistants. Creating your own “executive bubble” is a great way to be the last to know when anything is going wrong. Answer your own phone, make your own coffee, know the names of your people – walk around and find out how they are doing and what the company needs to be doing better.

4. Remain humble (Assume Infallibility)

Another great way to fail successfully is to never ever admit a problem or a mistake. Develop the artful skill of finger-pointing. Blame external forces such as currency fluctuations or the unusually active hurricane season. Cover up mistakes for as long as possible without admitting that anything is going wrong. It’s best to wait until there is a full-blown crisis and then say “mistakes were made…” (but not by me).

5. Keep your integrity (Play the game close to the foul line)

When you consistently “play it close to the foul line”, your employers will not trust you, and neither will your customers. If you achieve success by destroying your principles in the process, it will not last. Build a reputation for doing the right thing – being forthright, honest and fair. Build trust. Honor and decency are virtues that never become outdated.

6. Think and plan ahead (Don’t Take Time to Think)

“Thought is hard” – Goethe. In many ways, technology often adds to the complexity of life without providing appreciable advantages. With the steady stream of data constantly bombarding us, it is appealing to believe that being busy is the same as being effective. Base decisions on careful evaluation. Objectively analyze mistakes; they are a powerful opportunity to see what went wrong. Making time to think is essential for success.

7. Take responsibility (Put All Your Faith in Experts and Outside Consultants)

“It is better to know some of the questions than all of the answers.” – James Thurber. Putting too much faith in outside expertise can lead to disastrous consequences. Quite often, managers insecure in their authority blame restructuring, layoffs and other unpleasant decisions on plans drawn up by outside experts. This is just another cowardly way of passing the buck. Good business leaders take responsibility for the future of their businesses; they don’t farm out important strategy decisions to third parties.

8. Streamline processes and avoid red tape (Love Your Bureaucracy)

If you want to fail spectacularly, put administrative concerns ahead of everything else. Chains of command, paper pushing, and general red tape can lead to endemic dysfunction. Bureaucracy within organizations dilutes responsibility to the point that managers become incapable of making objective decisions. Action becomes impossible. In a crisis, the results can be catastrophic.

9. Communicate clearly and frequently (Send Mixed Messages)

Communication does not occur unless the message is both heard and understood. For example, rewarding employees who have not met performance targets sends the message that the targets really don’t matter. Be consistent in the message you send. Apply accountability and follow through with the consequences.

10. Be optimistic (Be Afraid of the Future)

If you want to paralyze your business, proceed with caution all the time and allow pessimism to thrive. Unquenchable optimism is the spirit that engenders achievement and success. Move boldly ahead – approach the future with optimism – especially when the circumstances are unfavorable.

11. Do not settle (Lose Your Passion for Work, for Life)

To fail, just continue to set low expectations for yourself and everyone around you; keep saying “that’s good enough”, or “that’s not my job”. All achievement requires passion. Work is hard, but it is worth the effort to those who are convinced that they are capable of being better. It is the strong desire to do better and solve problems that should drive your passion to work harder. Successful people perform at a higher level, just for the satisfaction of doing it. Passion can be cultivated; form a strong emotional connection with whatever you are doing, and never stop.

Advances in GPCR research – GLP-1 receptor structural studies

The latest issue of Nature highlighted four papers that unveiled new knowledge about the structure of the glucagon-like peptide-1 (GLP-1) receptor, a class B or secretin-family G-protein-coupled receptor (GPCR). The structural studies capture the receptor in various states (active/inactive), bound to peptide agonists, allosteric modulators or G proteins.

GPCRs make up an important drug target class for a variety of reasons. A substantial proportion of the genome codes for GPCRs (~4% of protein-coding genes), and 25-30% of currently marketed drugs target GPCRs. The top-selling GPCR-targeting drugs each bring in $5-9 billion per year. Furthermore, there are about 120 GPCRs whose endogenous ligands are unknown (so-called orphan GPCRs), indicating much room for development. The well-established role that GPCRs play in signalling pathways across a variety of physiological functions, and their involvement in disease, further supports their study as drug targets.

The structures of GPCRs, however, are notoriously difficult to study because the receptors become highly unstable when taken out of cell membranes, which in turn makes it difficult to design potent drugs against them. The authors of one of the papers, from UK-based Heptares Therapeutics, counter this with their StaR® (stabilised receptor) technology, which involves introducing point mutations that improve GPCR thermostability without affecting pharmacology. Other groups use alternative techniques such as cryo-electron microscopy (cryo-EM).

GPCRs have an extracellular N-terminus, seven canonical transmembrane domains, and an intracellular C-terminus that regulates downstream signaling. This rather detailed diagram shows which parts are involved in what.

[Image: a GPCR in the cell membrane]

Image from Wikimedia Commons; read more about how GPCRs work here.

GLP-1 is a 30-amino-acid hormone produced by intestinal cells and certain neurons in the brainstem. It controls blood sugar levels by binding to the GLP-1 receptor on pancreatic beta cells to stimulate insulin secretion. The GLP-1 receptor is also expressed in the brain, where GLP-1 mediates appetite suppression and, as recently reported, nicotine avoidance. Interestingly, GLP-1 has also produced protective effects in neurodegenerative disease models, and a clinical trial of exendin-4 for Alzheimer’s disease was recently completed, with results yet to be released.

Sidenote: exendin-4 is a hormone isolated from the Gila monster (a venomous lizard found in the US and Mexico) that closely resembles GLP-1 and induces the same glucose-regulating effects. A synthetic version is now marketed as exenatide by AstraZeneca for the treatment of diabetes.

The new structural information allowed researchers from Heptares Therapeutics to design peptide agonists more potent than the currently available exendin-4, and these showed efficacy in a mouse diabetes model. The authors modeled a peptide (peptide 5) that fits deep in the binding pocket of the GLP-1 receptor, with its C-terminus extending towards the extracellular portion of the receptor. They were not able to obtain the full-length structure of the peptide bound to the GLP-1 receptor, but when they superimposed the known structure of the GLP-1 peptide bound to only the extracellular domain (ECD) of the receptor onto peptide 5, they found some differences, which they attributed to the flexibility of the ECD or its altered behavior when expressed alone.

To improve the pharmacokinetics and stability of peptide 5, they introduced chemical modifications to the N-terminal end of the peptide, producing peptides 6 and 7. A further peptide, peptide 8, was made by adding a polyethylene glycol (PEG) group, which was predicted to extend its half-life by reducing proteolysis and increasing stability in plasma.

The latter modification worked best: peptide 8 showed the highest efficacy of the peptides in stimulating insulin secretion from isolated rat pancreatic islets. In mice that were fasted and then given glucose (an oral glucose tolerance test, OGTT), subcutaneously administered peptide 8 performed comparably to exendin-4, lowering glucose levels at lower doses and with a longer duration of action.

Heptares was co-founded by Dr Fiona Marshall, currently its Chief Scientific Officer. With extensive experience in molecular pharmacology from her previous work at GSK and Millennium Pharmaceuticals, she is leading the way in GPCR research. For more on her, read this interview in Cell.

The authors of the GLP-1 cryo-EM study also include highly esteemed GPCR researchers such as Dr Brian Kobilka (Stanford), who won the 2012 Nobel Prize in Chemistry for his work on GPCRs together with Dr Robert Lefkowitz (Duke). Kobilka has founded a company called ConfometRx, managed by his wife Tong Sun Kobilka, which also focuses on GPCR-based drug development.

So it appears the GPCR field will continue to thrive. For a more detailed look into the history of GPCRs, read Dr Robert Lefkowitz’s Nobel Prize lecture.

Artificial intelligence – fears and cheers in science and healthcare

Artificial intelligence (AI), defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence, is increasingly being used in healthcare, drug development and scientific research.

The advantages are obvious. AI has the ability to draw on an incredible amount of information to carry out multiple tasks in parallel, with substantially less human bias and error, and without constant supervision.

The problem of human bias is one of particular importance. In case you haven’t seen it, watch Dr Elizabeth Loftus’ TED talk on how humans easily form fictional memories that impact behavior, sometimes with severe consequences. I am not sure to what extent AI can be completely unbiased – programmers may inadvertently skew the importance that AI places on certain types of information. However, it’s still an improvement on the largely impulsive, emotion-based, and reward-driven human condition.

Applications of AI in healthcare include its use in the diagnosis of disease. IBM’s Watson, a question-answering computer system that famously beat two human contestants in the game show Jeopardy!, outperformed doctors in diagnosing lung cancer, with a 90% success rate compared to just 50% for the doctors. Watson’s success was attributed to its ability to make decisions based on more than 600,000 pieces of medical evidence and more than two million pages from medical journals, plus the further ability to search through up to 1.5 million patient records. A human doctor, in contrast, typically relies largely on personal experience, with only 20% of his or her knowledge coming from trial-based evidence.

AI systems are also being used to manage and collate electronic medical records in hospitals. Praxis, for example, uses machine learning to generate patient notes, staff/patient instructions, prescriptions, admitting orders, procedure reports, letters to referring providers, office or school excuses, and bills. It apparently gets faster the more similar cases it sees.

In terms of scientific research, AI is being explored in the following applications (with the companies involved in parentheses):

  • going through genetic data to calculate predisposition to disease in an effort to administer personalized medicine or to implement lifestyle changes (Deep Genomics, Human Longevity, 23andMe, Rthm)
  • delivery of curated scientific literature based on custom preferences (Semantic Scholar, Sparrho, Meta – now acquired by the Chan Zuckerberg Initiative)
  • going through scientific literature and ‘-omic’ results (i.e. global expression profiles of RNA, proteins, lipids etc.) to detect patterns for targeted drug discovery efforts, also termed de-risking drug discovery (Deep Genomics again, InSilico Medicine, BenevolentAI, NuMedii)
  • in silico drug screening, where AI uses machine learning and 3D neural networks of molecular structures to identify relevant chemical compounds (Atomwise, Numerate)

There is incredible investor interest in AI, with 550 startups raising $5 billion in funding in 2016 (not limited to healthcare). Significantly, China is leading the advance in AI, with iCarbonX achieving unicorn status (a valuation of more than US$1 billion). It was founded by Chinese genomicist Jun Wang, who previously led the Beijing Genomics Institute (BGI), one of the world’s largest sequencing centers and a participant in the Human Genome Project. iCarbonX now competes with Human Longevity in its effort to make sense of large amounts of genetic, imaging, behavioral and environmental data to enhance disease diagnosis and therapy.

One challenge that AI faces in healthcare is the sector’s ultra-conservatism about changing current practices. The fact that a large proportion of the healthcare sector does not understand how AI works makes it harder for them to see the utility that AI can bring.

Another problem is susceptibility to data hacking, especially when it comes to patient records. One thing’s for sure: we can’t treat healthcare data the same way we currently treat credit card data.

Then there’s the inherent fear of computers taking over the world – one that Elon Musk and other tech giants seem to feel strongly about:

[Image: Elon Musk]

Image from Vanity Fair’s “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse” by Maureen Dowd.

His fear is not so much that computers will develop a mind of their own, but rather that AI may be unintentionally programmed to self-improve in a way that spells disaster for humankind. And with AI having access to human health records, influencing patient management and treatment, and affecting drug development decisions, I think he has every right to be worried! If we’re not careful, we might be letting AI manage healthcare security as well. Oops, we already are: Protenus.


Other Sources:

PharmaVentures Industry insight: “The Convergence of AI and Drug Discovery” by Peter Crane

TechCrunch: “Advances in AI and ML are reshaping healthcare” by Megh Gupta and Qasim Mohammad

ExtremeTech: “The next major advance in medicine will be the use of AI” by Jessica Hall

Phenotypic or target-based screening in drug discovery? Insights from HCPS2017

Drug discovery has always been a topic close to my heart, and I was fortunate to attend and present at the High Content Phenotypic Screening conference organised by SelectBio in Cambridge, UK, recently. The conference covered the latest technologies in high-content screening and was attended by pharma scientists and screening experts, offering relevant insights into the issues currently being faced in the search for new drugs.

Dr Lorenz Mayer from AstraZeneca summed it up nicely when he described pharma’s dire lack of novel targets – typically no more than about 20 new drug targets per year, many of which overlap heavily between companies. Dr Anton Simeonov from the NIH continued the bleak outlook by highlighting how drug discovery currently follows Eroom’s Law, i.e. Moore’s Law spelled backwards. Moore’s Law, named after Intel co-founder Gordon Moore, describes the doubling, roughly every two years, of the number of transistors that can be placed inexpensively on an integrated circuit. Eroom’s Law, in contrast, describes the halving, roughly every nine years, of the number of new drug approvals per billion US dollars spent on R&D in the USA.

[Figure: Eroom’s Law vs Moore’s Law]

Figure from the BuildingPharmaBrands blog, with reference to Scannell et al., “Diagnosing the decline in pharmaceutical R&D efficiency”, Nature Reviews Drug Discovery, 2012.
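
To make the two opposing exponentials concrete, here is a minimal sketch that simply compounds the rates quoted above (doubling roughly every 2 years vs halving roughly every 9 years). The starting values of 1 are arbitrary placeholders for illustration, not figures from Scannell et al.

```python
# Illustrative comparison of the two exponential trends quoted above.
# Starting values are arbitrary placeholders, not data from Scannell et al.

def moores_law(transistors_now: float, years: float, doubling_time: float = 2.0) -> float:
    """Relative transistor count after `years`, doubling every `doubling_time` years."""
    return transistors_now * 2 ** (years / doubling_time)

def erooms_law(drugs_per_billion_now: float, years: float, halving_time: float = 9.0) -> float:
    """Relative number of new drug approvals per (inflation-adjusted) billion USD
    after `years`, halving every `halving_time` years."""
    return drugs_per_billion_now * 0.5 ** (years / halving_time)

if __name__ == "__main__":
    for years in (0, 9, 18, 27):
        print(f"after {years:2d} years: "
              f"transistors x{moores_law(1, years):>8.0f}, "
              f"approvals per $1B x{erooms_law(1, years):.3f}")
```

Over 27 years the first factor grows by more than four orders of magnitude while the second falls to an eighth of its starting value, which is exactly the divergence the figure above illustrates.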

Reasons given for the opposing trends include the greater complexity of biology compared to solid-state physics, the tighter regulation of drug approvals and, of particular interest, the shift towards target-based as opposed to phenotypic drug discovery.

Phenotypic drug discovery, the main route drug discoverers took before the advances in molecular biology of the 1980s, involves finding drugs without necessarily knowing their molecular targets. Back in the day, it was done mostly by measuring the efficacy of compounds given to animal disease models or to willing but possibly uninformed patients. Despite the simplistic approach, it was the most productive period in drug discovery, yielding many drugs still in use today.

These days, however, target-based drug discovery dominates the pharmaceutical industry. It can be simplified into a linear sequence of steps – target identification, tool production and assay development, hit finding and validation, hit-to-lead progression, lead optimization and preclinical development. Drug approvals where the molecular target is unknown are now rare, and large resources are put into increasing the throughput and efficiency of each step. The problems associated with this approach, however, are as follows:

  • poor translatability, where the drug fails due to lack of efficacy – either because the target is irrelevant or because the assay does not sufficiently represent the human disease
  • it assumes that disrupting a single target is sufficient
  • it is heavily reliant on the scientific literature, which has been shown to be largely irreproducible

The advantage of a target-based approach, however, is the ability to screen compounds at much higher throughput. It has also resulted in a vast expansion of chemical space, with many tool compounds (i.e. compounds that efficiently bind a target but cannot be used in humans due to toxic effects) having been identified. Tool compounds are great to use as comparison controls in subsequent phenotypic (hit validation) assays.

Phenotypic screening is still being performed today, typically in the form of cellular assays with disease-relevant readouts – for example, cellular proliferation assays for cancer drug screening. More sophisticated assays now use high-content imaging, where changes in the expression or movement of physiologically relevant molecules or organelles can be imaged at high throughput.

The advantages of phenotypic screening naturally mirror the weaknesses of target-based approaches: the readout will be disease-relevant, hitting multiple targets is not excluded, and the screen does not depend on existing knowledge.

However, it’s not always a bed of roses:

  • though knowing the mechanism of action (MOA) is not required for drug approval, it greatly facilitates the design of clinical trials in terms of defining patient populations, drug class and potential toxicities. Pharmas therefore typically try to find the target or mechanism of action after a phenotypic screen, which again can take large amounts of resources to elucidate.
  • the phenotypic assay may pick out more non-selective or toxic compounds
  • setting up a robust and physiologically relevant phenotypic assay usually takes much longer and typically has a much lower throughput
  • and how translatable would a phenotypic assay really be? Would we need to use induced pluripotent stem cells from patients, which are difficult to culture and can take months to differentiate into relevant cell types? The use of 3D organoids, as opposed to 2D cell culture, to mimic tissue systems adds another layer of complexity.
  • Dr Mayer also highlighted the important “people” aspect – explaining to shareholders why you are now screening 100x fewer compounds in a more “physiologically relevant” assay that has not been proven to work.

It’s difficult to get actual numbers on which approach has proven most effective so far, but two reviews have tried to do so.

  • Swinney and Anthony (2011) looked at 75 first-in-class medicines (i.e. with a novel MOA) approved between 1999 and 2008 and found that ~60% of these drugs were derived from phenotypic screening, while only ~40% came from target-based screening, even though the latter approach was already being heavily adopted by pharma.
  • A more recent study by Eder et al. (2014), which looked at 113 first-in-class drugs from 1999-2013, saw 70% of drugs arising from target-based screens. Of the 30% of drugs identified from systems-based screening, about three quarters were derived from already known compound classes (termed chemocentric) and only one quarter came from true phenotypic screens (the rough counts are worked out in the sketch below).
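
Working the percentages from Eder et al. back into approximate counts puts “true phenotypic” in perspective. This is a back-of-envelope sketch derived only from the percentages quoted above (with rounding), not from the paper’s own tables.

```python
# Back-of-envelope breakdown of Eder et al. (2014), derived from the
# percentages quoted above and rounded; not figures copied from the paper.

total_first_in_class = 113                           # first-in-class drugs, 1999-2013
target_based = round(0.70 * total_first_in_class)    # ~79 drugs
systems_based = total_first_in_class - target_based  # ~34 drugs
chemocentric = round(0.75 * systems_based)           # ~26 from known compound classes
true_phenotypic = systems_based - chemocentric       # ~8 from true phenotypic screens

print(f"target-based:   {target_based}")
print(f"systems-based:  {systems_based}")
print(f"  chemocentric: {chemocentric}")
print(f"  phenotypic:   {true_phenotypic} "
      f"(~{100 * true_phenotypic / total_first_in_class:.0f}% of all first-in-class drugs)")
```

In other words, by this rough accounting only around 8 of the 113 first-in-class drugs came from true phenotypic screens.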

The large discrepancy between the two studies was attributed mostly to the longer time window analysed, which may be required for target-based screening approaches to fully mature.

The key metric to evaluate, however, would probably be the cost per compound progressed by each approach. Eder et al. claimed that target-based approaches shortened the length of drug development but gave no indication of the amount of resources used.

Interestingly, the types of compounds and disease indications differed widely between the two approaches, with kinase and protease inhibitors featuring prominently in target-based approaches and drugs targeting ion channels being identified more often in phenotypic screens.

Which approach is best? There is no right answer, and a lot, I imagine, depends on the disease being studied. Target-based approaches were more relevant in identifying drugs for cancer and metabolic disease, while phenotypic approaches were more effective for central nervous system disorders and infectious disease.

In essence, both approaches could be used in parallel. Ideally, it would be interesting to see whether incorporating phenotypic screens as the primary step helps reduce the current high attrition rates. The now much-expanded library of tool compounds and existing natural-product derivatives serve as good starting candidates for these phenotypic screens. Target elucidation, however, is still likely to be required, so technologies that can successfully identify molecular targets will remain in high demand.

A key focus, however, should be on increasing the translatability of phenotypic assays in order to reduce inefficiencies in drug screening. An unbiased approach is essential – one not dependent simply on ease of setup or on how things have traditionally been done.


Cancer – luck of the draw

You don’t smoke, you don’t drink, you eat moderately and you exercise 3x a week. What are the chances you’ll still get cancer?

It’s luck of the draw, according to researchers Cristian Tomasetti, Ph.D., and Bert Vogelstein, M.D., both from the Johns Hopkins Kimmel Cancer Center. They recently published in Science that the majority of cancer mutations can be attributed to DNA copying errors made during cell replication (termed R in the study). The other two driving factors – environment (E) and genetic inheritance (H) – took a back seat in most cancers, with the exception of lung, skin and esophageal cancers.

Their conclusions were based mainly on the correlation between stem cell divisions and cancer incidence. The idea is that the more divisions a cell lineage makes, the more DNA copying errors occur; as mutations accumulate, the risk of cancer correspondingly increases.

Looking at 69 countries and 17 different tissue types, they found correlation values ranging between 0.6 and 0.9. This high correlation was “surprising”, as it was expected that the diverse environmental factors in different countries would have dampened the impact of stem cell divisions. They found the correlation increased with greater age range (0-89), although stem cell divisions do not increase proportionally with age in certain tissue types like bone and brain. I do not have access to the supplementary materials but would have liked to see the correlation values broken down by tissue type and age range.
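
For a feel of the kind of analysis behind those correlation values, the sketch below performs the basic calculation: a Pearson correlation between log lifetime stem cell divisions and log lifetime cancer risk across tissues. The tissue names and numbers are invented placeholders purely to illustrate the computation, not data from Tomasetti and Vogelstein.

```python
# Sketch of the kind of correlation underlying the study: log lifetime stem
# cell divisions vs log lifetime cancer risk across tissues. The numbers
# below are invented placeholders to illustrate the calculation only.
import math

tissues = {
    # tissue: (lifetime stem cell divisions, lifetime cancer risk)
    "tissue_A": (1e12, 5e-2),
    "tissue_B": (1e10, 6e-3),
    "tissue_C": (1e8,  3e-4),
    "tissue_D": (1e7,  4e-5),
}

x = [math.log10(divisions) for divisions, _ in tissues.values()]
y = [math.log10(risk) for _, risk in tissues.values()]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    sy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (sx * sy)

print(f"Pearson correlation (log-log): {pearson(x, y):.2f}")
```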

The authors attributed DNA copying-error-induced mutations to four sources: 1) mispairing, 2) polymerase errors, 3) base deamination (i.e. loss of the amine group) and 4) free radical damage (i.e. oxidative stress).

To be honest, the finding is hardly surprising. The accumulation of spontaneous mutations over one’s lifetime is well known. Cancer occurs when the scales eventually tip – i.e. enough mutations occur that tumour suppressors are no longer able to hold oncogenes in check – setting the cell on a path of rapid multiplication and eventual destruction.

What perhaps is controversial is the takeaway people may draw from the finding: that there’s no point in living healthily, because we’re all going to die anyway!

Here’s the breakdown of each contributing factor when 32 different cancer types were modeled based on epidemiological findings. Their mathematical model assumed that cancers not induced by environment (E) or inheritance (H) were due to DNA copying errors (R):

“The median proportion of driver gene mutations attributable to E was 23% among all cancer types. The estimate varied considerably: It was greater than 60% in cancers such as those of the lung, esophagus, and skin and 15% or less in cancers such as those of the prostate, brain, and breast. When these data are normalized for the incidence of each of these 32 cancer types in the population, we calculate that 29% of the mutations in cancers occurring in the United Kingdom were attributable to E, 5% of the mutations were attributable to H, and 66% were attributable to R.”

Some researchers have chided the study as over-simplified and contrary to previous evidence that around 42% of cancers can be prevented by lifestyle and diet changes. This controversy has arisen before, as a previous study by the same authors demonstrated essentially the same findings but with sampling limited to the US.

However, it’s not as if the authors entirely dismissed environment and genetic inheritance as unimportant. These factors obviously can impact outcomes, as seen for certain cancers. In addition, the authors acknowledge that certain cancers cannot be explained by DNA copying errors alone and require further epidemiological investigation.

More important is what one can do to counter this source of mutations. The authors call for earlier diagnosis and for introducing “more efficient repair mechanisms”. This could certainly drive efforts towards a new therapeutic approach focused on DNA repair. With genetic intervention becoming commonplace in the lab and scientists relying on DNA repair mechanisms to introduce artificial mutations, it’s only a matter of time until we learn how to better write DNA or find ways to correct copying errors. And if this leads to the cure for cancer, then… that’s one big problem solved!