Artificial intelligence – fears and cheers in science and healthcare

Artificial intelligence (AI), defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence, is increasingly being used in healthcare, drug development and scientific research.

The advantages are obvious. AI has the ability to draw on an incredible amount of information to carry out multiple tasks in parallel, with substantially less human bias and error, and without constant supervision.

The problem of human bias is a particularly important one. In case you haven’t seen it, watch Dr Elizabeth Loftus’ TED talk on how easily humans form fictional memories that affect behavior, sometimes with severe consequences. I am not sure to what extent AI can be completely unbiased, since programmers may inadvertently skew the importance that AI places on certain types of information. However, it’s still an improvement on the largely impulsive, emotion-based, and reward-driven human condition.

Applications of AI in healthcare include the diagnosis of disease. IBM’s Watson, the question-answering computer system that famously beat two human contestants on the game show Jeopardy!, outperformed doctors in diagnosing lung cancer, with a 90% success rate compared to just 50% for the doctors. Watson’s success was attributed to its ability to make decisions based on more than 600,000 pieces of medical evidence and more than two million pages from medical journals, plus the ability to search through up to 1.5 million patient records. A human doctor, in contrast, typically relies largely on personal experience, with only about 20% of his or her knowledge coming from trial-based evidence.

AI systems are also being used to manage and collate electronic medical records in hospitals. Praxis, for example, uses machine learning to generate patient notes, staff/patient instructions, prescriptions, admitting orders, procedure reports, letters to referring providers, office or school excuses, and bills. It apparently gets faster the more similar cases it sees.

In scientific research, AI is being explored for the following applications (companies involved in parentheses):

  • mining genetic data to estimate predisposition to disease, with the aim of delivering personalized medicine or prompting lifestyle changes (Deep Genomics, Human Longevity, 23andMe, Rthm)
  • delivery of curated scientific literature based on custom preferences (Semantic Scholar, Sparrho, and Meta, the last now acquired by the Chan Zuckerberg Initiative)
  • mining scientific literature and ‘-omic’ results (i.e. global expression profiles of RNA, proteins, lipids, etc.) to detect patterns for targeted drug discovery efforts, an approach also termed de-risking drug discovery (Deep Genomics again, InSilico Medicine, BenevolentAI, NuMedii)
  • in silico drug screening, where machine learning models such as 3D neural networks trained on molecular structures are used to flag relevant chemical compounds (Atomwise, Numerate); a toy sketch of the idea follows this list
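
To make that last point a little more concrete, here is a minimal, hypothetical sketch of the kind of model such screening tools are built around: a small 3D convolutional network that takes a voxelized protein–ligand pocket and outputs a binding score. The voxel representation, channel counts and layer sizes are illustrative assumptions on my part, not a description of Atomwise’s or Numerate’s actual models.

```python
# Toy example: scoring a voxelized protein-ligand complex with a 3D CNN.
# Grid size, channels and layer widths are illustrative only.
import torch
import torch.nn as nn

class Toy3DScreeningNet(nn.Module):
    def __init__(self, in_channels=8):  # e.g. one channel per atom type
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                      # 24^3 -> 12^3
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),                      # 12^3 -> 6^3
        )
        self.score = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 6 * 6, 128),
            nn.ReLU(),
            nn.Linear(128, 1),                    # predicted binding score
        )

    def forward(self, voxels):
        return self.score(self.features(voxels))

# One fake "compound docked into a binding pocket", voxelized on a 24^3 grid.
pocket = torch.rand(1, 8, 24, 24, 24)
print(Toy3DScreeningNet()(pocket))
```

In a real screen, a trained model of this kind would be run across large compound libraries and the top-scoring molecules passed on for experimental validation.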

There is incredible investor interest in AI, with 550 startups (not limited to healthcare) raising $5 billion in funding in 2016. Significantly, China is leading the advance, with iCarbonX achieving unicorn status (a valuation of more than $1 billion). It was founded by Chinese genomicist Jun Wang, who previously led the Beijing Genomics Institute (BGI), one of the world’s largest sequencing centers and a contributor to the Human Genome Project. iCarbonX now competes with Human Longevity in the effort to make sense of large amounts of genetic, imaging, behavioral and environmental data to enhance disease diagnosis and therapy.

One challenge that AI faces in healthcare is ultra-conservatism about changing current practices. The fact that a large proportion of the healthcare sector does not understand how AI works makes it harder for them to see the utility AI can bring.

Another problem is susceptibility to data hacking, especially when it comes to patient records. One thing’s for sure: we can’t treat healthcare data the same way we currently treat credit card data.

Then there’s the inherent fear of computers taking over the world, one that Elon Musk and other tech heavyweights seem to feel strongly about:

[Image from Vanity Fair’s “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse” by Maureen Dowd]

His fear is not so much that computers will develop a mind of their own, but that AI may be unintentionally programmed to self-improve in a way that spells disaster for humankind. And with AI having access to human health records, influencing patient management and treatment, and affecting drug development decisions, I think he has every right to be worried! If we’re not careful, we might end up letting AI manage healthcare security as well. Oops, we already are: Protenus.

 

Other Sources:

PharmaVentures Industry insight: “The Convergence of AI and Drug Discovery” by Peter Crane

TechCrunch: “Advances in AI and ML are reshaping healthcare” by Megh Gupta and Qasim Mohammad

ExtremeTech: “The next major advance in medicine will be the use of AI” by Jessica Hall


Phenotypic or target-based screening in drug discovery? Insights from HCPS2017

Drug discovery has always been a topic close to my heart, and I was fortunate to attend and present at the High Content Phenotypic Screening conference organised by SelectBio in Cambridge, UK recently. The conference covered the latest technologies in high content screening and was attended by pharma scientists and screening experts, offering relevant insights into the issues currently faced in the search for new drugs.

Dr Lorenz Mayer from AstraZeneca summed it up nicely when he explained pharma’s dire lack of novel targets: typically no more than 20 drug targets per year, many of which overlap substantially between pharmas. Dr Anton Simeonov from the NIH continued the bleak outlook by highlighting how drug discovery currently follows Eroom’s Law (Moore’s Law spelled backwards). Moore’s Law, named after Intel co-founder Gordon Moore, describes the doubling, roughly every two years, of the number of transistors that can be placed inexpensively on an integrated circuit. Eroom’s Law, in contrast, describes the halving, roughly every nine years, of the number of new drugs approved per billion dollars spent in the USA.

[Figure from the BuildingPharmaBrands blog, after Scannell et al., “Diagnosing the decline in pharmaceutical R&D efficiency”, Nature Reviews Drug Discovery, 2012]
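
To get a feel for how quickly those two exponentials diverge, here is a quick back-of-the-envelope calculation; the two-year doubling and nine-year halving times come from the trends described above, while the starting values are arbitrary baselines of my own choosing:

```python
# Rough illustration of Moore's Law vs Eroom's Law over a few decades.
# Doubling/halving periods follow the trends cited above; baselines are arbitrary.
for years in (0, 9, 18, 27, 36):
    transistor_factor = 2 ** (years / 2)           # Moore: doubles every ~2 years
    drugs_per_billion = 1.0 * 0.5 ** (years / 9)   # Eroom: halves every ~9 years
    print(f"{years:2d} yrs: transistors x{transistor_factor:,.0f}, "
          f"drug approvals per $1B x{drugs_per_billion:.2f}")
```

After 36 years the transistor count has grown by a factor of over 250,000 while, on the same trend, R&D productivity has fallen to about one sixteenth of its starting value.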

Reasons given for the opposing trends include the greater complexity of biology compared to solid-state physics, the tightening regulation of drug approval and, of particular interest here, the shift towards target-based drug discovery as opposed to phenotypic drug discovery.

Phenotypic drug discovery, the main route drug discoverers took before the advances in molecular biology of the 1980s, involves finding drugs without necessarily knowing their molecular targets. Back in the day, it was done mostly by measuring the efficacy of compounds given to animal disease models or to willing but possibly uninformed patients. Despite the simplicity of the approach, this was the most productive period in drug discovery, yielding many drugs still in use today.

These days, however, target-based drug discovery dominates the pharmaceutical industry. It can be simplified into a linear sequence of steps: target identification, tool production and assay development, hit finding and validation, hit-to-lead progression, lead optimization and preclinical development. Drug approvals where the molecular target is unknown are now rare, and large resources are put into increasing the throughput and efficiency of each step. The problems associated with this approach, however, are as follows:

  • poor translatability, where the drug fails for lack of efficacy, either because the target turns out to be irrelevant or because the assay does not sufficiently represent the human disease
  • assumes disrupting a single target is sufficient
  • heavy reliance on published scientific literature, much of which has been shown to be irreproducible

The advantage of a target-based approach, however, is the ability to screen compounds at much higher throughput. It has also resulted in a vast expansion of explored chemical space, in which many tool compounds (i.e. compounds that bind a target efficiently but cannot be used in humans due to toxic effects) have been identified. Tool compounds are great to use as comparison controls in subsequent phenotypic (hit validation) assays.

Phenotypic screening is still performed today, typically in the form of cellular assays with disease-relevant readouts, for example cell proliferation assays for cancer drug screening. More sophisticated assays now involve high content imaging, where changes in the expression or movement of physiologically relevant molecules or organelles can be imaged at high throughput.

The advantages of phenotypic screening are, of course, the counterpart of the target-based approach’s weaknesses: the readout is disease-relevant by design, hitting multiple targets is not excluded, and the screen does not depend on existing knowledge.

However, it’s not always a bed of roses:

  • though knowing the mechanism of action (MOA) is not required for drug approval, it greatly facilitates the design of clinical trials in terms of defining patient populations, drug class and potential toxicities. Pharmas therefore typically try to find the target or mechanism of action post-phenotypic screening, which again can take large amounts of resources to elucidate.
  • the phenotypic assay may pick out more unselective or toxic compounds
  • setting up a robust and physiologically relevant phenotypic assay usually takes much longer and typically has a much lower throughput.
  • and how translatable would a phenotypic assay really be? Would we need to use induced pluripotent stem cells from patients, which are difficult to culture and can take months to differentiate into relevant cell types? The use of 3D organoids rather than 2D cell culture to mimic tissue systems adds another layer of complexity.
  • Dr Mayer also highlighted the important “people” aspect: explaining to shareholders why you are now screening 100x fewer compounds in a more “physiologically relevant” assay that has not been proven to work.

It’s difficult to get hard numbers on which approach has proven more effective so far, but two reviews have tried to do so.

  • Swinney and Anthony (2011) looked at 75 first-in-class medicines (i.e. ones with a novel MOA) approved from 1999-2008 and found that ~60% of these drugs were derived from phenotypic screening while only ~40% came from target-based screening, even though the latter approach was already widely adopted by pharma.
  • Another, more recent study by Eder et al. (2014), which looked at 113 first-in-class drugs from 1999-2013, found 70% of drugs arising from target-based screens. Of the 30% identified through systems-based screening, about three quarters were derived from already known compound classes (termed chemocentric) and only about a quarter came from true phenotypic screens (see the rough arithmetic after this list).
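
Translating those percentages into rough drug counts makes the comparison easier to hold in one’s head. The figures below are rounded approximations implied by the quoted percentages; the original papers report the exact numbers.

```python
# Approximate counts implied by the percentages quoted above (rounded).
swinney_total = 75
print("Swinney & Anthony:",
      round(swinney_total * 0.60), "phenotypic vs",
      round(swinney_total * 0.40), "target-based first-in-class medicines")

eder_total = 113
systems_based = round(eder_total * 0.30)
print("Eder et al.:",
      round(eder_total * 0.70), "target-based,",
      systems_based, "systems-based, of which roughly",
      round(systems_based * 0.75), "chemocentric and",
      round(systems_based * 0.25), "true phenotypic")
```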

The large discrepancy between the two studies was attributed mostly to the longer time window analysed, which may be needed for target-based screening approaches to fully mature.

The key metric to evaluate, however, would probably be the cost per compound progressed by each approach. Eder et al. claimed that target-based approaches shortened the length of drug development but gave no indication of the amount of resources used.

Interestingly, the types of compounds and disease indications differed widely between the two approaches, with kinase and protease inhibitors featuring prominently in target-based discovery and drugs targeting ion channels being identified more often in phenotypic screens.

Which approach is best? There is no right answer, and a lot, I imagine, depends on the disease being studied. Target-based approaches were more successful in identifying drugs for cancer and metabolic disease, while phenotypic approaches were more effective for central nervous system disorders and infectious disease.

In essence, both approaches could be used in parallel. It would be interesting to see whether incorporating phenotypic screens as the primary step helps reduce the current large attrition rates. The now expanded library of tool compounds, along with existing natural product derivatives, provides good starting candidates for these phenotypic screens. Target elucidation, however, is still likely to be required, so technologies that can successfully identify molecular targets will remain in high demand.

A key focus, however, should be on increasing the translatability of phenotypic assays in order to reduce inefficiencies in drug screening. An unbiased approach is essential, one not driven simply by ease of setup or by how things have traditionally been done.