Advances in GPCR research – GLP-1 receptor structural studies

The latest issue of Nature highlighted four papers that unveiled new knowledge regarding the structure of the glucagon-like peptide-1 (GLP-1) receptor, a class B (secretin family) G-protein-coupled receptor (GPCR). The structural studies capture the receptor in various states (active and inactive), bound to peptide agonists, allosteric modulators or G proteins.

GPCRs make up an important drug target class for a variety of reasons. A substantial proportion of the genome (~4%) codes for GPCRs, and 25-30% of currently marketed drugs target them. The top-selling GPCR-targeting drugs rake in $5-9 billion per drug per year. Furthermore, there are about 120 GPCRs whose endogenous ligands are unknown (so-called orphan GPCRs), leaving plenty of room for development. The well-established role GPCRs play in signalling pathways across a variety of physiological functions, and their involvement in disease, further supports their study as drug targets.

GPCR structures are, however, notoriously difficult to study because the receptors become highly unstable once taken out of the cell membrane, which in turn makes it difficult to design potent drugs against them. The authors of one of the papers, from UK-based Heptares Therapeutics, counter this with their StaR® (stabilised receptor) technology, which introduces point mutations that improve GPCR thermostability without affecting pharmacology. Other groups get around the problem with techniques such as cryo-electron microscopy (cryo-EM).

GPCRs have an extracellular N-terminus, seven canonical transmembrane domains, and an intracellular C-terminus that regulates downstream signaling. This rather detailed diagram shows which parts are involved in what.

[Figure: a GPCR embedded in the cell membrane]

Image from Wikimedia Commons; read more about how GPCRs work here.

GLP-1 is a 30-amino-acid hormone produced by intestinal cells and by certain neurons in the brainstem. It controls blood sugar levels by binding to the GLP-1 receptor on pancreatic beta cells to stimulate insulin secretion. The GLP-1 receptor is also expressed in the brain, where GLP-1 mediates appetite suppression and, as recently reported, nicotine avoidance. Interestingly, GLP-1 has also produced protective effects in neurodegenerative disease models, and a clinical trial of Exendin-4 for Alzheimer’s disease was recently completed, with results yet to be released.

Sidenote: Exendin-4 is a hormone isolated from the Gila monster (a venomous lizard found in the US and Mexico) that closely resembles GLP-1 and induces the same glucose-regulating effects. A synthetic version, exenatide, is now marketed by AstraZeneca for the treatment of diabetes.

The new structural information allowed researchers at Heptares Therapeutics to design peptide agonists more potent than the currently available Exendin-4, and these showed efficacy in a mouse diabetes model. The authors modelled a peptide (peptide 5) to fit deep in the binding pocket of the GLP-1 receptor, with its C-terminus extending towards the extracellular portion of the receptor. They were not able to solve the full-length structure of the peptide bound to the GLP-1 receptor, but when they superimposed the known structure of GLP-1 bound to the isolated extracellular domain (ECD) of the receptor onto peptide 5, they found some differences, which they attributed to the flexibility of the ECD or to its altered behaviour when expressed alone.

To improve the pharmacokinetics and stability of peptide 5, they introduced chemical modifications to its N-terminal end, producing peptides 6 and 7. A further peptide, peptide 8, was made by adding a polyethylene glycol (PEG) group, which was predicted to extend its half-life by reducing proteolysis and increasing stability in plasma.

The latter worked best: peptide 8 showed the highest efficacy of all the peptides in stimulating insulin secretion from isolated rat pancreatic islets. In fasted mice given glucose (an oral glucose tolerance test, OGTT), subcutaneously administered peptide 8 compared favourably with Exendin-4, lowering glucose levels at lower doses and with a longer duration of action.

Heptares was co-founded by Dr Fiona Marshall, currently its Chief Scientific Officer. With extensive experience in molecular pharmacology from her previous work at GSK and Millennium Pharmaceuticals, she is leading the way in GPCR research. For more on her, read an interview by Cell.

The authors of the GLP-1 cryo-EM study also include highly esteemed GPCR researchers such as Dr Brian Kobilka (Stanford), who won the 2012 Nobel Prize in Chemistry for his work on GPCRs with Dr Robert Lefkowitz (Duke). Kobilka founded a company called ConfometRx, managed by his wife Tong Sun Kobilka, which likewise focuses on GPCR-based drug development.

So it appears the GPCR field will continue to thrive. For a more detailed look into the history of GPCRs, read Dr Robert Lefkowitz’s Nobel Prize lecture.

Artificial intelligence – fears and cheers in science and healthcare

Artificial intelligence (AI), defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence, is increasingly being used in healthcare, drug development and scientific research.

The advantages are obvious. AI has the ability to draw on an incredible amount of information to carry out multiple tasks in parallel, with substantially less human bias and error, and without constant supervision.

The problem of human bias is a particularly important one. In case you haven’t seen it, watch Dr Elizabeth Loftus’ TED talk on how easily humans form fictional memories that affect behavior, sometimes with severe consequences. I am not sure to what extent AI can be completely unbiased, since programmers may inadvertently skew the importance that AI places on certain types of information. However, it’s still an improvement on the largely impulsive, emotion-based and reward-driven human condition.

Applications of AI in healthcare include the diagnosis of disease. IBM’s Watson, a question-answering computer system that famously beat two human contestants on the game show Jeopardy!, outperformed doctors in diagnosing lung cancer with a 90% success rate, compared to just 50% for the doctors. Watson’s success was attributed to its ability to make decisions based on more than 600,000 pieces of medical evidence and more than two million pages from medical journals, plus the ability to search through up to 1.5 million patient records. A human doctor, in contrast, typically relies largely on personal experience, with only about 20% of their knowledge coming from trial-based evidence.

AI systems are also being used to manage and collate electronic medical records in hospitals. Praxis, for example, uses machine learning to generate patient notes, staff and patient instructions, prescriptions, admitting orders, procedure reports, letters to referring providers, office or school excuses, and bills. It apparently gets faster the more similar cases it sees.

In terms of scientific research, AI is being explored in the following applications (companies involved):

  • mining genetic data to calculate predisposition to disease, in an effort to deliver personalized medicine or implement lifestyle changes (Deep Genomics, Human Longevity, 23andMe, Rthm)
  • delivery of curated scientific literature based on custom preferences (Semantic Scholar, Sparrho, and Meta, now acquired by the Chan Zuckerberg Initiative)
  • going through scientific literature and ‘-omic’ results (i.e. global expression profiles of RNA, proteins, lipids etc.) to detect patterns for targeted drug discovery efforts, also termed de-risking drug discovery (Deep Genomics again, InSilico Medicine, BenevolentAI, NuMedii)
  • in silico drug screening, where AI uses machine learning and 3D neural networks applied to molecular structures to reveal relevant chemical compounds (Atomwise, Numerate); see the toy sketch just after this list
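As a rough illustration of what the machine-learning side of in silico screening looks like, here is a minimal, hypothetical sketch: compounds are represented as fingerprint-style binary feature vectors (randomly generated here, rather than computed from real structures) and a classifier trained on known actives and inactives is used to rank an unscreened virtual library. This is only a toy under those assumptions, not any particular company’s pipeline.

```python
# Minimal sketch of ML-based virtual screening (hypothetical data, not any vendor's pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for molecular fingerprints: 1,000 known compounds x 256 binary features.
# In a real workflow these would be computed from chemical structures.
X_known = rng.integers(0, 2, size=(1000, 256))
y_known = rng.integers(0, 2, size=1000)          # 1 = active, 0 = inactive (toy labels)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Score an "unscreened" virtual library and keep the top-ranked candidates.
X_library = rng.integers(0, 2, size=(5000, 256))
scores = model.predict_proba(X_library)[:, 1]    # predicted probability of activity
top_hits = np.argsort(scores)[::-1][:10]
print("Top-ranked library indices:", top_hits)
```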

There is incredible investor interest in AI, with 550 startups raising $5 billion in funding in 2016 (not limited to healthcare). Significantly, China is leading the advance in AI, with iCarbonX achieving unicorn status (a valuation above $1 billion). It was founded by Chinese genomicist Jun Wang, who previously managed the Beijing Genomics Institute (BGI), one of the world’s largest sequencing centers and a contributor to the Human Genome Project. iCarbonX now competes with Human Longevity in its effort to make sense of large amounts of genetic, imaging, behavioral and environmental data to enhance disease diagnosis and therapy.

One challenge AI faces in healthcare is the sector’s ultra-conservatism when it comes to changing current practice. The fact that a large proportion of the healthcare sector does not understand how AI works makes it harder for them to see the utility AI can bring.

Another problem is susceptibility to data hacking, especially when it comes to patient records. One thing’s for sure: we can’t treat healthcare data the same way we currently treat credit card data.

Then there’s the inherent fear of computers taking over the world, one that Elon Musk and other tech giants seem to feel strongly about:

[Image: Elon Musk]

Image from Vanity Fair’s “ELON MUSK’S BILLION-DOLLAR CRUSADE TO STOP THE A.I. APOCALYPSE” by Maureen Dowd.

His fear is not so much that computers will develop minds of their own, but rather that AI may be unintentionally programmed to self-improve in a way that spells disaster for humankind. And with AI having access to human health records, influencing patient management and treatment, and affecting drug development decisions, I think he has every right to be worried! If we’re not careful, we might end up letting AI manage healthcare security as well. Oops, we already are: Protenus.

 

Other Sources:

PharmaVentures Industry insight: “The Convergence of AI and Drug Discovery” by Peter Crane

TechCrunch: “Advances in AI and ML are reshaping healthcare” by Megh Gupta and Qasim Mohammad

ExtremeTech: “The next major advance in medicine will be the use of AI” by Jessica Hall

Phenotypic or target-based screening in drug discovery? Insights from HCPS2017

Drug discovery has always been a topic close to my heart, and I was fortunate to attend and present at the High Content Phenotypic Screening conference organised by SelectBio in Cambridge, UK recently. The conference covered the latest technologies in high-content screening and was attended by pharma scientists and screening experts, offering relevant insights into the issues currently being faced in the search for new drugs.

Dr Lorenz Mayer from AstraZeneca summed it up nicely when he explained pharma’s dire lack of novel targets: typically no more than 20 drug targets per year, many of which overlap substantially between pharmas. Dr Anton Simeonov from the NIH continued the bleak outlook by highlighting how drug discovery currently follows Eroom’s Law, i.e. Moore’s Law spelled backwards. Moore’s Law, named after Intel co-founder Gordon Moore, describes the doubling, every 2 years, of the number of transistors that can be placed inexpensively on an integrated circuit. Eroom’s Law, in contrast, describes the halving, every 9 years, of the number of new drugs approved per billion dollars spent in the USA.
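To make the contrast concrete, here is a back-of-the-envelope calculation of my own (not from the talks), projecting the two trends over a few decades from the doubling and halving times quoted above.

```python
# Back-of-the-envelope comparison of Moore's Law vs Eroom's Law (illustrative only).
MOORE_DOUBLING_YEARS = 2.0   # transistor count doubles every ~2 years
EROOM_HALVING_YEARS = 9.0    # drugs approved per $1bn R&D halves every ~9 years

def moore(years, start=1.0):
    """Relative transistor count after `years`, normalised to 1 at year 0."""
    return start * 2 ** (years / MOORE_DOUBLING_YEARS)

def eroom(years, start=1.0):
    """Relative drugs-per-billion after `years`, normalised to 1 at year 0."""
    return start * 0.5 ** (years / EROOM_HALVING_YEARS)

for years in (9, 18, 27, 36):
    print(f"after {years} yrs: transistors x{moore(years):,.0f}, "
          f"drugs per $1bn x{eroom(years):.2f}")
# After 36 years, transistor counts are up ~260,000-fold while
# drugs per billion dollars are down to ~6% of their starting value.
```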

[Figure: Eroom's Law versus Moore's Law]

Figure from the BuildingPharmaBrands blog, with reference to Scannell et al., Diagnosing the decline in pharmaceutical R&D efficiency, Nature Reviews Drug Discovery, 2012

Reasons given for the opposing trends include the greater complexity of biology compared to solid-state physics, the tightening of drug approval regulations, and, of particular interest, the shift towards target-based drug discovery as opposed to phenotypic drug discovery.

Phenotypic drug discovery, the main route drug discoverers took before the advances in molecular biology of the 1980s, involves finding drugs without necessarily knowing their molecular targets. Back in the day, it was done mostly by measuring the efficacy of compounds given to animal disease models or to willing but possibly uninformed patients. Despite the simplistic approach, this was the most productive period in drug discovery, yielding many drugs still in use today.

These days, however, target-based drug discovery dominates the pharmaceutical industry. It can be simplified into a linear sequence of steps: target identification, tool production and assay development, hit finding and validation, hit-to-lead progression, lead optimization and preclinical development. Drug approvals where the molecular target is unknown are now rare, and large resources are put into increasing the throughput and efficiency of each step. The problems associated with this approach, however, are as follows:

  • poor translatability, where the drug fails due to lack of efficacy, either because the target is irrelevant or because the assay does not sufficiently represent the human disease
  • the assumption that disrupting a single target is sufficient
  • heavy reliance on scientific literature, which has been shown to be largely irreproducible

The advantage of a target-based approach, however, is the ability to screen compounds at a much higher throughput. It has also resulted in a vast expansion of chemical space, with many tool compounds (i.e. compounds that bind a target efficiently but cannot be used in humans due to toxic effects) having been identified. Tool compounds are great to use as comparison controls in subsequent phenotypic (hit validation) assays.

Phenotypic screening is still performed today, typically in the form of cellular assays with disease-relevant readouts, for example cell proliferation assays for cancer drug screening. More sophisticated assays now involve high-content imaging, where changes in the expression or movement of physiologically relevant molecules or organelles can be imaged at high throughput.

The advantages of phenotypic screening naturally mirror the weaknesses of target-based approaches: the target will be disease-relevant, hitting multiple targets is not excluded, and the approach is not dependent on existing knowledge.

However, it’s not always a bed of roses:

  • though knowing the mechanism of action (MOA) is not required for drug approval, it greatly facilitates the design of clinical trials in terms of defining patient populations, drug class and potential toxicities. Pharmas therefore typically try to find the target or mechanism of action post-phenotypic screening, which again can take large amounts of resources to elucidate.
  • the phenotypic assay may pick out more unselective or toxic compounds
  • setting up a robust and physiologically relevant phenotypic assay usually takes much longer and typically has a much lower throughput.
  • and how translatable would a phenotypic assay really be? Would we need to use induced pluripotent stem cells from patients, which are difficult to culture and can sometimes take months to differentiate into relevant cell types? The use of 3D organoids rather than 2D cell culture to mimic tissue systems adds another layer of complexity.
  • Dr Mayer also highlighted the important “people” aspect: explaining to shareholders why you are now screening 100x fewer compounds in a more “physiologically relevant” assay that has not been proven to work.

It’s difficult to get actual numbers on which approach has proven most effective so far, but two reviews have tried to do so.

  • Swinney and Anthony (2011) looked at 75 first-in-class medicines (i.e. those with a novel MOA) approved from 1999-2008 and found that ~60% of these drugs were derived from phenotypic screening while only 40% came from target-based screening, even though the latter approach was already being heavily adopted by pharma.
  • A more recent study by Eder et al. (2014), which looked at 113 first-in-class drugs from 1999-2013, saw 70% of drugs arising from target-based screens. Of the 30% of drugs identified from systems-based screening, about three quarters were derived from already known compound classes (termed chemocentric) and only a quarter came from true phenotypic screens. (A rough breakdown of the implied counts is sketched below.)
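For a rough sense of the absolute numbers behind those percentages, here is my own arithmetic from the figures quoted above, rounded to whole drugs (the papers themselves report the exact counts).

```python
# Rough counts implied by the percentages quoted above (rounded; illustrative only).

# Swinney & Anthony (2011): 75 first-in-class medicines, 1999-2008.
swinney_total = 75
print("Swinney & Anthony: phenotypic ~", round(0.60 * swinney_total),
      "drugs, target-based ~", round(0.40 * swinney_total), "drugs")

# Eder et al. (2014): 113 first-in-class drugs, 1999-2013.
eder_total = 113
target_based = round(0.70 * eder_total)
systems_based = eder_total - target_based
chemocentric = round(0.75 * systems_based)      # derived from known compound classes
true_phenotypic = systems_based - chemocentric  # genuinely target-agnostic screens
print("Eder et al.: target-based ~", target_based,
      ", chemocentric ~", chemocentric,
      ", true phenotypic ~", true_phenotypic)
```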

The large discrepancy between the two studies was attributed mostly to the longer time window analysed in the later study, which may be required for target-based screening approaches to fully mature.

The key metric to evaluate, however, would probably be the cost per compound progressed by each approach. Eder et al. claimed that target-based approaches shortened the length of drug development but gave no indication of the amount of resources used.

Interestingly, the types of compounds and disease indications differed widely between the two approaches, with kinase and protease inhibitors featuring prominently in target-based approaches and drugs targeting ion channels being identified more often in phenotypic screens.

Which approach is best? There is no right answer, and a lot, I imagine, depends on the disease being studied. Target-based approaches were more relevant in identifying drugs for cancer and metabolic disease, while phenotypic approaches were more effective for central nervous system disorders and infectious disease.

In essence, both approaches could be used in parallel. Ideally, it would be interesting to see whether incorporating phenotypic screens as the primary step could help reduce the current high attrition rates. The now expanded library of tool compounds and existing natural product derivatives serve as good starting candidates for these phenotypic screens. Target elucidation, however, is still likely to be required, so technologies that can successfully identify molecular targets will remain in high demand.

A key focus, however, should be on increasing the translatability of phenotypic assays in order to reduce inefficiencies in drug screening. An unbiased approach is essential, not one dependent simply on ease of set-up or on how things are traditionally done.

 

Cancer – luck of the draw

You don’t smoke, you don’t drink, you eat moderately and you exercise 3x a week. What are the chances you’ll still get cancer?

It’s luck of the draw, according to researchers Cristian Tomasetti, Ph.D., and Bert Vogelstein, M.D., both from the Johns Hopkins Kimmel Cancer Center. They recently published in Science that the majority of cancer mutations can be attributed to DNA copying errors made during cell replication (termed R in the study). The other two driving factors, environment (E) and genetic inheritance (H), took a back seat in most cancers, with the exception of lung, skin and esophageal cancers.

Their conclusions were based mainly on looking at the correlation between stem cell divisions and cancer incidence. The idea being that the more divisions a cell makes, the greater the occurrence of DNA copying errors. As mutations accumulate, the risk of cancer correspondingly increases.

Looking at 69 countries and 17 different tissue types, they found correlation values ranging between 0.6 and 0.9. This high correlation was “surprising”, as it was expected that the diverse environmental factors in different countries would have dampened the impact of stem cell divisions. They found the correlation increased with a greater age range (0-89), although stem cell divisions do not increase proportionally with age in certain tissue types such as bone and brain. I do not have access to the supplementary materials, but I would have liked to see how the correlation values broke down by tissue type and age range.
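For readers unfamiliar with how such a correlation is computed, here is a minimal sketch using made-up numbers (the actual per-tissue data are in the paper and its supplementary materials, which I could not access). Because both quantities span many orders of magnitude, the correlation is typically computed on log-transformed values, as in the authors’ earlier 2015 analysis.

```python
# Sketch of the stem-cell-division vs cancer-incidence correlation (made-up numbers).
import numpy as np

# Hypothetical per-tissue values: lifetime stem cell divisions and lifetime cancer risk.
divisions = np.array([1e7, 3e8, 1e9, 6e9, 3e10, 1e11, 3e12])
risk      = np.array([3e-5, 4e-4, 1e-3, 5e-3, 8e-3, 2e-2, 5e-2])

# Both quantities span many orders of magnitude, so correlate the log-transformed values.
r = np.corrcoef(np.log10(divisions), np.log10(risk))[0, 1]
print(f"correlation between log(divisions) and log(risk): r = {r:.2f}")
```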

The authors attributed DNA copying-error-induced mutations to four sources: (1) mispairing, (2) polymerase errors, (3) base deamination (loss of an amine group) and (4) free radical damage (oxidative stress).

To be honest, the finding is hardly surprising. The accumulation of spontaneous mutations over one’s lifetime is well known. Cancer occurs when the scales eventually tip, i.e. enough mutations accumulate that tumour suppressors are no longer able to hold oncogenes in check, setting the cell on a path to rapid multiplication and eventual destruction.

What is perhaps controversial is the message people may take away from the finding: that there’s no point in living healthily, since we’re all going to die anyway!

Here’s the breakdown of each contributing factor when 32 different cancer types were modeled based on epidemiological findings. Their mathematical model assumed that cancers not induced by environment (E) or inheritance (H) were due to DNA copying errors (R):

“The median proportion of driver gene mutations attributable to E was 23% among all cancer types. The estimate varied considerably: It was greater than 60% in cancers such as those of the lung, esophagus, and skin and 15% or less in cancers such as those of the prostate, brain, and breast. When these data are normalized for the incidence of each of these 32 cancer types in the population, we calculate that 29% of the mutations in cancers occurring in the United Kingdom were attributable to E, 5% of the mutations were attributable to H, and 66% were attributable to R.”
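The “normalized for the incidence” step is, on my reading of the quoted passage, essentially an incidence-weighted average of the per-cancer attributions. A toy sketch with invented numbers for three cancer types (not the paper’s actual data) shows the idea:

```python
# Toy incidence-weighted average of E/H/R attributions (invented numbers, 3 cancer types).
cancers = {
    # name: (annual cases, fraction of driver mutations due to E, H, R)
    "lung":     (46000, 0.65, 0.02, 0.33),
    "prostate": (47000, 0.10, 0.08, 0.82),
    "brain":    (11000, 0.05, 0.05, 0.90),
}

total_cases = sum(cases for cases, *_ in cancers.values())
weighted = [0.0, 0.0, 0.0]
for cases, e, h, r in cancers.values():
    for i, frac in enumerate((e, h, r)):
        weighted[i] += frac * cases / total_cases   # weight each cancer by its incidence

print("Population-level attribution  E: {:.0%}  H: {:.0%}  R: {:.0%}".format(*weighted))
```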

Some researchers have chided the study as over-simplified and as going against previous evidence that 42% of cancers can be prevented by lifestyle and diet changes. Similar controversy arose before, when a previous study by the same authors demonstrated essentially the same findings, but with sampling limited to the US.

However, it’s not as if the authors entirely dismissed environment and genetic inheritance as unimportant. These factors obviously can impact outcomes, as seen for certain cancers. In addition, the authors acknowledge that certain cancers cannot be explained by DNA copying errors alone and require further epidemiological investigation.

More important is what one can do to counter this source of mutations. The authors call for early diagnosis or the introduction of “more efficient repair mechanisms”. This could certainly drive efforts towards a new therapeutic approach focused on DNA repair. With genetic intervention becoming commonplace in the lab and scientists relying on DNA repair mechanisms to introduce artificial mutations, it’s only a matter of time until we learn how to better write DNA or find ways to correct copying errors. And if that leads to a cure for cancer, then… that’s one big problem solved!

It’s how you move that matters – studying protein vibrations


Molecular interactions resemble a dance, a composition of energy and motion.

A novel tool that may provide fresh perspectives to structural biologists has been developed by Dr Andrea Markelz’s group at the University at Buffalo. Its application to understanding a biological function was published as “Moving in the Right Direction: Protein Vibrational Steering Function” in the Biophysical Journal, with Dr Katherine Niessen as first author.

The group developed a technique called anisotropic terahertz microscopy (ATM), which can distinguish the directional motions of protein vibrations, as opposed to traditional methods such as inelastic neutron scattering, which only measure the total vibrational energy distribution.

One way distal mutations (i.e. mutations far away from the binding site) affect ligand (i.e. drug, protein or other molecule) binding is by inducing long-range motions in the protein structure that allow the ligand to be accommodated. This technique could therefore provide a fresh look at drug-protein interactions, allowing scientists to discern and model drugs that not only bind but also produce the desired vibrations to cause a particular effect.

Niessen et al. demonstrated the utility of ATM by studying chicken egg white lysozyme (CEWL) and its binding to the inhibitor tri-N-acetyl-D-glucosamine (3NAG). Binding of the inhibitor did not produce much change in the vibrational density of states (VDOS, the energy distribution), whereas ATM detected dramatic differences between the bound and unbound states.
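A toy numerical sketch (entirely made-up vibrational modes, not the authors’ data or their ATM method) helps illustrate why a direction-resolved measurement can separate two states that look identical in the VDOS: if two sets of modes share the same frequencies but their motions point along different axes, a histogram of frequencies cannot tell them apart, while a projection onto a chosen lab direction can.

```python
# Toy illustration: identical VDOS, different directional (anisotropic) signal.
# Entirely made-up modes; not the ATM method or the paper's data.
import numpy as np

rng = np.random.default_rng(1)
freqs = rng.uniform(0.5, 3.0, size=50)   # mode frequencies (THz), shared by both states

# Each mode gets a unit displacement direction; "apo" modes point mostly along x
# (clamping-like), "holo" modes mostly along z (twisting-like).
def random_dirs(axis, n=50, spread=0.3):
    vecs = spread * rng.normal(size=(n, 3))
    vecs[:, axis] += 1.0
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

apo_dirs, holo_dirs = random_dirs(axis=0), random_dirs(axis=2)

# VDOS ~ histogram of mode frequencies: identical for both states by construction.
vdos, _ = np.histogram(freqs, bins=10)
print("VDOS (same for apo and holo):", vdos)

# Direction-resolved signal ~ how strongly modes project onto a probe axis (here z).
probe = np.array([0.0, 0.0, 1.0])
print("apo  projection onto probe axis:", np.mean((apo_dirs @ probe) ** 2).round(2))
print("holo projection onto probe axis:", np.mean((holo_dirs @ probe) ** 2).round(2))
```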

They attributed these differences to the direction of the vibrational movements. In computer modelling and simulations, the unbound (apo) form exhibited clamping motions around the binding site, while the bound (holo) form displayed more twisting motions. Quite a stark difference when you consider how the dance moves of the Macarena differ from the Twist.

Furthermore, CEWL variants with distal mutations that increased catalytic activity without changing binding showed no significant change in VDOS but distinct differences in anisotropic absorbance. This demonstrates the utility of ATM for evaluating long-range mutations and their effects on protein activity.

Challenges remain, as highlighted by Dr Jeremy Smith, a key expert in the field. These include the need to crystallize and align the proteins, and to evaluate the detection sensitivity of the technique across a range of vibration types and conditions. But even he agreed it was a step (or shimmy) in the right direction.

 

Two man-made nucleotide bases produce a never-before-seen semi-synthetic organism

As if altering the genetic code were not enough, now there may be a possibility of completely changing the way it is written. Two new nucleotide bases have been created and stably replicated in bacteria, expanding the number of possible base-pair types from two to three and increasing the coding capacity of DNA exponentially.

In his initial study, published in Nature, Dr. Floyd E. Romesberg, a chemistry professor at the Scripps Research Institute in California, produced two nucleoside triphosphates called dNaMTP and d5SICSTP, which base-pair with each other via hydrophobic interactions instead of hydrogen bonding. A nucleotide transporter had to be expressed in the bacteria for their uptake, but once they were inside, DNA polymerases had no problem incorporating and amplifying the synthetic nucleotides, making copies of an introduced plasmid containing a single base pair of the unnatural bases.

There were initial concerns that DNA repair mechanisms might remove these foreign bases, but that did not seem to be the case. After the supply of unnatural nucleotides was stopped, the unnatural base pair was still retained at about 45% after 3 days and 15% after 6 days (at day 0, retention was in excess of 95%, as judged by termination of Sanger sequencing reads). This led him to suggest that the decay over time was due to replication-mediated mispriming rather than DNA repair.
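As a quick sanity check on those retention figures, if one assumes simple exponential loss (my assumption, not the paper’s analysis), the two time points imply roughly similar per-day loss rates:

```python
# Implied per-day loss rate from the quoted retention figures,
# assuming simple exponential decay (my assumption, not the paper's analysis).
import math

for days, retention in [(3, 0.45), (6, 0.15)]:
    k = -math.log(retention) / days          # retention = exp(-k * days)
    print(f"day {days}: retention {retention:.0%} -> implied loss rate ~{k:.2f} per day "
          f"(half-life ~{math.log(2)/k:.1f} days)")
```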

In his most recent study, in PNAS, Dr. Romesberg and his graduate student Yorke Zhang made improvements to the nucleotide transporter, which had previously induced some toxicity. They also further optimized one of the unnatural bases to be better incorporated by DNA polymerases, increasing copying efficiency. Finally, to prevent the decay of the unnatural bases over time, they used CRISPR to eliminate DNA that had lost the unnatural base pair. The resulting bacteria were able to retain the unnatural bases indefinitely in various sequence contexts.

[Image: Professor Floyd Romesberg]

Image taken from News Medical Life Sciences’ interview with Professor Romesberg

The so-called semi-synthetic organism therefore uses three types of base pair and six nucleotide letters instead of four. It remains to be seen whether its DNA can be transcribed into RNA and, more importantly, translated into protein. But it holds the potential to greatly change the way things are currently done: imagine what we could do with 172 amino acids instead of 20!
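One way to arrive at a figure like 172 (my own arithmetic; published estimates of the expanded repertoire vary): with six DNA letters there are 6^3 = 216 possible codons, of which the 4^3 = 64 all-natural codons already cover the standard 20 amino acids plus stop signals, leaving 152 new codons that could in principle each specify an additional amino acid.

```python
# Back-of-the-envelope codon counting for a six-letter genetic alphabet
# (my own arithmetic; published estimates of the expanded repertoire vary).
natural_letters = 4
expanded_letters = 6
codon_length = 3

natural_codons = natural_letters ** codon_length          # 64
expanded_codons = expanded_letters ** codon_length        # 216
new_codons = expanded_codons - natural_codons             # 152 codons containing an unnatural base

print(f"natural codons: {natural_codons}, expanded codons: {expanded_codons}")
print(f"new codons available: {new_codons}")
print(f"potential amino acids: {20 + new_codons}")        # 20 natural + up to 152 new = 172
```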

 

Role models in science – Dr Susan Lindquist

[Image: Dr Susan Lindquist]

It has been a month since the renowned protein folding researcher Dr Susan Lindquist passed away from cancer at the age of 67. I remember watching her video on prion biology while working on my PhD project on mechanisms involved in neurodegenerative disease. She seemed pretty cool, I thought.

She grew up in Chicago, the daughter of a Swedish father and an Italian mother who never expected her to come so far in her career. Because she was a woman, their hope for her was that she would marry someone decent and successful. Coming home from a party the night before New Year’s Eve and finding their daughter hard at work on a paper, they asked, “Are you still working? When are you going to settle down?” I’m not so sure that parental thinking has changed significantly since then…

She found inspiration in a book detailing the life of Elizabeth Blackwell, the first woman to obtain a medical degree in the US, and in various teachers who stimulated her interest in science. Under the guidance of her microbiology professor Jan Drake, she successfully applied for a National Science Foundation scholarship to do research in his lab. With his encouragement, she applied to graduate school at Harvard and got in, something she had never dreamed would happen. She worked in the lab of Matthew Meselson but failed to get any data for her first project. After talking to a colleague down the hall who had noted particular phenomena in heat-exposed fruit flies, however, she decided to test whether similar responses would occur in cells. That was the turning point in her career: she found and characterized the upregulation of specific proteins induced by heat, a mechanism termed the heat shock response that would later be found to be highly conserved across many organisms.

She continued to work on the heat shock response, rather independently, during her post-doc in the lab of Hewson Swift at the University of Chicago. She characterized how the expression of these heat shock proteins was regulated via transcription, translation, splicing and degradation. Realising that these proteins were highly conserved across different species and were expressed in every cell in response to stresses that occur frequently in life and disease, Lindquist was driven to find out exactly what they were doing. Her research took her into broad and vastly different fields, as these proteins turned out to play essential roles in everything from enhancing malignancy in cancer to managing the protein aggregates so often found in neurodegenerative diseases.

She went far beyond her initial dream of writing grants under the supervision of a male superior, eventually running her own lab at the Whitehead Institute at MIT. She even co-founded a company, FoldRx Pharmaceuticals, which used her favourite model, yeast, in a high-throughput functional assay to search for drugs that could alleviate protein aggregation in protein misfolding diseases. FoldRx was later bought by Pfizer, which sought the rights to the drug tafamidis, approved for the treatment of early-stage transthyretin-related hereditary amyloidosis, also known as familial amyloid polyneuropathy (FAP).

Susan Lindquist is definitely a role model to look up to, especially for women in science. There are still far fewer women than men in leadership positions in science and beyond. I gather this is attributable to the demands of raising a family, the discrimination that comes hand-in-hand with being a woman attempting to lead, and the internal fight women go through to overcome natural feelings of inadequacy. But Susan shows us it can be done. And I think we would sometimes do a better job than men, as Sandi Toksvig would agree in her hilarious TED Talk.

Read and watch more about Dr Susan Lindquist here:

Fearless about Folding | The Scientist Magazine® http://www.the-scientist.com/?articles.view%2FarticleNo%2F44769%2Ftitle%2FFearless-about-Folding%2

Gitschier, Jane. “A Flurry of Folding Problems: An Interview with Susan Lindquist”. PLoS Genetics 7(5): e1002076. doi:10.1371/journal.pgen.1002076. PMC 3093363. PMID 21589898.

Short video Q&A with Susan Lindquist http://www.moleclues.org/interviews/they-can-make-mess-hurry