Advances in GPCR research – GLP-1 receptor structural studies

The latest issue of Nature highlighted four papers that unveil new knowledge about the structure of the glucagon-like peptide-1 (GLP-1) receptor, a class B (secretin family) G-protein-coupled receptor (GPCR). The structural studies capture the receptor in various states (active and inactive), bound to peptide agonists, allosteric modulators or G proteins.

GPCRs make up an important drug-target class for a variety of reasons. A substantial proportion of the genome codes for GPCRs (~4% of protein-coding genes), and 25-30% of currently marketed drugs target them, with the top-selling GPCR-targeting drugs each bringing in roughly $5-9 billion per year. Furthermore, there are about 120 GPCRs for which the endogenous ligands are unknown (so-called orphan GPCRs), indicating plenty of room for further development. The well-established role that GPCRs play in signalling pathways across a variety of physiological functions, and their involvement in disease, further support their study as drug targets.

The structures of GPCRs, however, are notoriously difficult to study because the receptors become highly unstable when taken out of cell membranes, which in turn makes it difficult to design potent drugs against them. The authors of one of the papers, from UK-based Heptares Therapeutics, counter this with their StaR® (stabilised receptor) technology, which introduces point mutations that improve GPCR thermostability without affecting pharmacology. The other groups use different techniques, such as cryo-electron microscopy (cryo-EM).

GPCRs have an extracellular N-terminus, seven canonical transmembrane domains, and an intracellular C-terminus that regulates downstream signaling. The rather detailed diagram below shows which parts are involved in what.

[Image: a GPCR embedded in the cell membrane]

Image from Wikimedia Commons; read more about how GPCRs work here.

GLP-1 is a 30-amino-acid hormone produced by intestinal cells and by certain neurons in the brainstem. It controls blood sugar levels by binding to the GLP-1 receptor on pancreatic beta cells to stimulate insulin secretion. The GLP-1 receptor is also expressed in the brain, where GLP-1 mediates appetite suppression and, as recently reported, nicotine avoidance. Interestingly, GLP-1 has also shown protective effects in neurodegenerative disease models, and a clinical trial of Exendin-4 for Alzheimer’s disease was recently completed, with results yet to be released.

Sidenote: Exendin-4 is a hormone isolated from the Gila monster (a venomous lizard found in the US and Mexico) that closely resembles GLP-1 and induces the same glucose-regulating effects. A synthetic version, exenatide, is now marketed by AstraZeneca for the treatment of diabetes.

The newly unveiled structural information allowed researchers from Heptares Therapeutics to design peptide agonists more potent than the currently available Exendin-4; these showed efficacy in a mouse model of diabetes. The authors modeled a peptide (peptide 5) to fit deep in the binding pocket of the GLP-1 receptor, with its C-terminus extending towards the extracellular portion of the receptor. They were not able to obtain a full-length structure of the peptide bound to the GLP-1 receptor, but when they superimposed the known structure of GLP-1 peptide bound to the isolated extracellular domain (ECD) of the receptor onto peptide 5, they found some differences, which they attributed to the flexibility of the ECD or to its altered behavior when expressed alone.

To improve the pharmacokinetics and stability of peptide 5, they introduced chemical modifications at the N-terminal end of the peptide, producing peptides 6 and 7. A further peptide, peptide 8, was made by adding a polyethylene glycol (PEG) group, which was predicted to extend its half-life by reducing proteolysis and increasing stability in plasma.

The latter worked best: peptide 8 showed the highest efficacy of the set in stimulating insulin secretion from isolated rat pancreatic islets. In fasted mice given glucose (an oral glucose tolerance test, OGTT), subcutaneous peptide 8 performed comparably to Exendin-4, lowering glucose at lower doses and with a longer duration of action.

Heptares was co-founded by Dr Fiona Marshall, who is currently its Chief Scientific Officer. With extensive experience in molecular pharmacology from her previous work at GSK and Millennium Pharmaceuticals, she is leading the way in GPCR research. For more on her, read an interview in Cell.

The authors of the GLP-1 cryo-EM study include highly esteemed GPCR researchers such as Dr Brian Kobilka (Stanford), who shared the 2012 Nobel Prize in Chemistry with Dr Robert Lefkowitz (Duke) for their work on GPCRs. Kobilka has also founded a company, ConfometRx, managed by his wife Tong Sun Kobilka, which likewise focuses on GPCR-based drug development.

So it appears the GPCR field will continue to thrive. For a more detailed look into the history of GPCRs, read Dr Robert Lefkowitz’s Nobel Prize lecture.

Artificial intelligence – fears and cheers in science and healthcare

Artificial intelligence (AI), defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence, is increasingly being used in healthcare, drug development and scientific research.

The advantages are obvious. AI has the ability to draw on an incredible amount of information to carry out multiple tasks in parallel, with substantially less human bias and error, and without constant supervision.

The problem of human bias is particularly important. In case you haven’t seen it, watch Dr Elizabeth Loftus’s TED talk on how easily humans form fictional memories that affect behavior, sometimes with severe consequences. I am not sure to what extent AI can ever be completely unbiased: programmers may inadvertently skew the importance that AI places on certain types of information. Still, it’s an improvement on the largely impulsive, emotion-based and reward-driven human condition.

Applications of AI in healthcare include the diagnosis of disease. IBM’s Watson, the question-answering computer system that famously beat two human champions on the game show Jeopardy!, outperformed doctors in diagnosing lung cancer, with a 90% success rate compared with 50% for the doctors. Watson’s success was attributed to its ability to make decisions based on more than 600,000 pieces of medical evidence and more than two million pages from medical journals, and to search through up to 1.5 million patient records. A human doctor, in contrast, typically relies largely on personal experience, with only around 20% of his or her knowledge coming from trial-based evidence.

AI systems are also being used to manage and collate electronic medical records in hospitals. Praxis, for example, uses machine learning to generate patient notes, staff and patient instructions, prescriptions, admitting orders, procedure reports, letters to referring providers, office or school excuses, and bills. It apparently gets faster the more similar cases it sees.

In terms of scientific research, AI is being explored in the following applications (with some of the companies involved in brackets):

  • combing through genetic data to estimate predisposition to disease, with a view to personalized medicine or lifestyle changes (Deep Genomics, Human Longevity, 23andMe, Rthm)
  • delivery of curated scientific literature based on custom preferences (Semantic Scholar, Sparrho, Meta – the last recently acquired by the Chan Zuckerberg Initiative)
  • mining scientific literature and ‘-omic’ results (i.e. global expression profiles of RNA, protein, lipids etc.) to detect patterns for targeted drug discovery efforts, also termed de-risking drug discovery (Deep Genomics again, InSilico Medicine, BenevolentAI, NuMedii)
  • in silico drug screening, where machine learning and 3D neural-network models of molecular structures are used to flag relevant chemical compounds (Atomwise, Numerate) – see the sketch after this list
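
To give a flavour of that last idea, here is a minimal, purely illustrative sketch of ligand-based virtual screening using 2D Morgan fingerprints with RDKit and scikit-learn. This is my own toy example, not how Atomwise or Numerate actually work (their 3D neural-network approaches are far more involved), and the SMILES strings and activity labels are placeholders rather than real assay data.

    # Toy ligand-based virtual screen: Morgan fingerprints + random forest.
    # Purely illustrative; the molecules and labels below are placeholders.
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def fingerprint(smiles, radius=2, n_bits=2048):
        """Convert a SMILES string into a Morgan (circular) fingerprint bit list."""
        mol = Chem.MolFromSmiles(smiles)
        return list(AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits))

    # Hypothetical training set: compounds labelled active (1) or inactive (0).
    train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
    train_labels = [0, 1, 1, 0]  # made-up labels, not real assay results

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit([fingerprint(s) for s in train_smiles], train_labels)

    # Rank an (equally hypothetical) screening library by predicted activity.
    library = ["c1ccccc1C(=O)O", "CCCCCC", "c1ccc2c(c1)cccc2O"]
    scores = model.predict_proba([fingerprint(s) for s in library])[:, 1]
    for smi, score in sorted(zip(library, scores), key=lambda x: -x[1]):
        print(f"{smi}: predicted activity {score:.2f}")

Real pipelines train on millions of annotated compounds and, in the 3D case, learn directly from protein-ligand structures, but the ranking principle is the same.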

There is incredible investor interest in AI, with 550 startups raising $5 billion in funding in 2016 (not limited to healthcare). Significantly, China is leading the advance, with iCarbonX achieving unicorn status (a valuation above $1 billion). It was founded by Chinese genomicist Jun Wang, who previously ran the Beijing Genomics Institute (BGI), one of the world’s largest sequencing centers and a contributor to the Human Genome Project. iCarbonX now competes with Human Longevity in the effort to make sense of large amounts of genetic, imaging, behavioral and environmental data to enhance disease diagnosis and therapy.

One challenge AI faces in healthcare is the sector’s ultra-conservatism about changing current practice. The fact that a large proportion of the healthcare sector does not understand how AI works makes it harder for people to see the utility AI can bring.

Another problem is susceptibility to data hacking, especially when it comes to patient records. One thing’s for sure: we can’t treat healthcare data the same way we currently treat credit card data.

Then there’s the inherent fear of computers taking over the world – one that Elon Musk and other tech giants seem to feel strongly about:

[Image: Elon Musk]

Image from Vanity Fair’s “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse” by Maureen Dowd.

His fear is not so much that computers will develop a mind of their own, but that AI may be unintentionally programmed to self-improve in a way that spells disaster for humankind. And with AI having access to human health records, influencing patient management and treatment, and affecting drug-development decisions, I think he has every right to be worried! If we’re not careful, we might end up letting AI manage healthcare security as well. Oops, we already are: Protenus.

 

Other Sources:

PharmaVentures Industry insight: “The Convergence of AI and Drug Discovery” by Peter Crane

TechCrunch: “Advances in AI and ML are reshaping healthcare” by Megh Gupta and Qasim Mohammad

ExtremeTech: “The next major advance in medicine will be the use of AI” by Jessica Hall

Phenotypic or target-based screening in drug discovery? Insights from HCPS2017

Drug discovery has always been a topic close to my heart, and I was fortunate to attend and present at the High Content Phenotypic Screening conference organised by SelectBio in Cambridge, UK recently. The conference covered the latest technologies in high-content screening and was attended by pharma scientists and screening experts, offering relevant insights into the issues currently faced in the search for new drugs.

Dr Lorenz Mayer from AstraZeneca summed it up nicely when he described pharma’s dire lack of novel targets – typically no more than about 20 new drug targets a year, many of which overlap heavily between companies. Dr Anton Simeonov from the NIH continued the bleak outlook by highlighting how drug discovery currently follows Eroom’s Law, i.e. Moore’s law spelled backwards. Moore’s Law, named after Intel co-founder Gordon Moore, describes the roughly two-yearly doubling of the number of transistors that can be placed inexpensively on an integrated circuit. Eroom’s law, in contrast, describes the halving, roughly every nine years, of the number of new drugs approved per billion US dollars of R&D spending.
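
To make the contrast concrete, here is a quick back-of-the-envelope calculation (my own illustration, not from the talks) of how the two exponential trends diverge:

    # Moore's law: transistor counts roughly double every 2 years.
    # Eroom's law: new drugs approved per billion US$ roughly halve every 9 years.
    def moore(years, doubling_time=2.0):
        """Relative transistor count after `years`, normalised to 1 at year 0."""
        return 2 ** (years / doubling_time)

    def eroom(years, halving_time=9.0):
        """Relative drug output per billion US$ after `years`, normalised to 1."""
        return 0.5 ** (years / halving_time)

    for years in (9, 18, 36, 60):
        print(f"after {years:>2} years: transistors x{moore(years):,.0f}, "
              f"drugs per $billion x{eroom(years):.3f}")

Over the roughly 60-year span analysed by Scannell et al., that halving rate compounds to about a 100-fold drop in drugs approved per (inflation-adjusted) billion dollars.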

[Figure: Eroom’s law versus Moore’s law]

Figure from the BuildingPharmaBrands blog, based on Scannell et al., “Diagnosing the decline in pharmaceutical R&D efficiency”, Nature Reviews Drug Discovery, 2012.

Reasons given for the opposing trends include the greater complexity of biology compared with solid-state physics, the tightening regulation of drug approval and, of particular interest here, the shift towards target-based rather than phenotypic drug discovery.

Phenotypic-based drug discovery, the main route drug discoverers took before the advances in molecular biology of the 1980s, involves finding drugs without necessarily knowing their molecular target. Back in the day, it was done mostly by measuring the efficacy of compounds given to animal disease models or to willing but possibly uninformed patients. Despite the simplicity of the approach, this was the most productive period in drug discovery, yielding many drugs still in use today.

These days, however, target-based drug discovery dominates the pharmaceutical industry. It can be simplified into a linear sequence of steps: target identification, tool production and assay development, hit finding and validation, hit-to-lead progression, lead optimization and preclinical development. Drug approvals where the molecular target is unknown are now rare, and large resources are put into increasing the throughput and efficiency of each step. The problems associated with this approach, however, are as follows:

  • poor translatability, where the drug fails due to lack of efficacy – either because the target is irrelevant or because the assay does not sufficiently represent the human disease
  • it assumes that disrupting a single target is sufficient
  • it relies heavily on published literature, much of which has proved hard to reproduce

The advantage of a target-based approach, however, is the ability to screen compounds at much higher throughput. It has also driven a vast expansion of explored chemical space, identifying many tool compounds (compounds that bind a target efficiently but cannot be used in humans because of toxic effects). Tool compounds make excellent comparison controls in subsequent phenotypic (hit-validation) assays.

Phenotypic-based screening is still performed today, typically in the form of cellular assays with disease-relevant readouts – for example, cell-proliferation assays for cancer drug screening. More sophisticated assays now make use of high-content imaging, in which changes in the expression or movement of physiologically relevant molecules or organelles can be imaged at high throughput.

The advantages of phenotypic-based screening are, of course, the mirror image of the target-based approach’s weaknesses: the readout is disease-relevant, hits are not restricted to a single target, and the screen does not depend on existing knowledge.

However, it’s not always a bed of roses:

  • though knowing the mechanism of action (MOA) is not required for drug approval, it greatly facilitates the design of clinical trials in terms of defining patient populations, drug class and potential toxicities. Pharmas therefore typically try to find the target or MOA after a phenotypic screen, which again can consume large amounts of resources.
  • the phenotypic assay may pick out more non-selective or toxic compounds
  • setting up a robust and physiologically relevant phenotypic assay usually takes much longer and typically has a much lower throughput
  • and how translatable would a phenotypic assay really be? Would we need to use patient-derived induced pluripotent stem cells, which are difficult to culture and can take months to differentiate into relevant cell types? The use of 3D organoids rather than 2D cell culture to mimic tissue systems adds another layer of complexity.
  • Dr Mayer also highlighted the important “people” aspect – explaining to shareholders why you are now screening 100x fewer compounds in a more “physiologically relevant” assay that has not yet been proven to work.

It’s difficult to get hard numbers on which approach has proved more effective so far, but two reviews have tried to do so.

  • Swinney and Anthony (2011) looked at 75 first-in-class medicines (i.e. those with a novel MOA) approved between 1999 and 2008 and found that ~60% were derived from phenotypic screening and only ~40% from target-based screening, even though the latter approach had already been widely adopted by pharma.
  • A more recent study by Eder et al. (2014), covering 113 first-in-class drugs from 1999 to 2013, saw 70% arising from target-based screens. Of the 30% identified through systems-based screening, about three-quarters were derived from already-known compound classes (termed chemocentric discovery) and only a quarter came from true phenotypic screens.

The large discrepancy between the two studies was attributed mostly to the longer time window analysed in the second, which may be required for target-based screening approaches to fully mature.

The key metric to evaluate, however, would probably be the cost per compound progressed under each approach. Eder et al. claimed that target-based approaches shortened drug development but gave no indication of the amount of resources used.

Interestingly, the types of compounds and disease indications differed widely between the two approaches, with kinase and protease inhibitors featuring prominently in target-based discovery and drugs targeting ion channels identified more often in phenotypic screens.

Which approach is best? There is no right answer, and a lot, I imagine, depends on the disease being studied. Target-based approaches were more relevant for identifying drugs for cancer and metabolic disease, while phenotypic-based approaches were more effective for central nervous system disorders and infectious disease.

In essence, both approaches could be used in parallel. It would be interesting to see whether incorporating phenotypic screens as the primary step helps reduce the current high attrition rates. The much-expanded library of tool compounds, along with existing natural-product derivatives, provides good starting candidates for such phenotypic screens. Target elucidation will still usually be required, however, so technologies that can reliably identify molecular targets will remain in high demand.

A key focus, however, should be increasing the translatability of phenotypic assays in order to reduce inefficiencies in drug screening. An unbiased approach is essential – one not dependent simply on ease of set-up or on how things have traditionally been done.

 

Cancer – luck of the draw

You don’t smoke, you don’t drink, you eat moderately and you exercise 3x a week. What are the chances you’ll still get cancer?

It’s luck of the draw, according to researchers Cristian Tomasetti, Ph.D., and Bert Vogelstein, M.D., both from the Johns Hopkins Kimmel Cancer Center. They recently reported in Science that the majority of cancer mutations are attributable to DNA copying errors made during cell replication (labelled R in the study). The other two driving factors – environment (E) and genetic inheritance (H) – took a back seat in most cancers, with the exception of lung, skin and esophageal cancers.

Their conclusions were based mainly on the correlation between stem cell divisions and cancer incidence. The idea is that the more divisions a cell lineage undergoes, the more DNA copying errors occur; as mutations accumulate, the risk of cancer rises correspondingly.

Looking at 69 countries and 17 different tissue types, they found correlation values ranging between 0.6 and 0.9. This high correlation was “surprising”, as it had been expected that the diverse environmental factors in different countries would dampen the impact of stem cell divisions. They also found the correlation increased with a greater age range (0-89), although stem cell divisions do not increase proportionally with age in certain tissue types such as bone and brain. I do not have access to the supplementary materials, but I would have liked to see how the correlation values varied by tissue type and age range.
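
For anyone who wants a feel for the analysis, its core is just a correlation on log-transformed values. A minimal sketch might look like the following, where the tissue names and numbers are placeholders for illustration, not the paper’s actual dataset.

    # Sketch of a Tomasetti/Vogelstein-style correlation: lifetime stem cell
    # divisions vs lifetime cancer risk per tissue.  Values are placeholders
    # for illustration only, NOT data from the paper.
    import numpy as np
    from scipy import stats

    tissues = ["colon", "lung", "pancreas", "bone", "brain"]
    lifetime_divisions = np.array([1e12, 1e10, 1e11, 1e9, 1e8])  # hypothetical
    lifetime_risk = np.array([5e-2, 6e-3, 1e-2, 1e-3, 3e-4])     # hypothetical

    # Correlate on log scales, as is usual for data spanning many orders of magnitude.
    r, p = stats.pearsonr(np.log10(lifetime_divisions), np.log10(lifetime_risk))
    print(f"correlation across {len(tissues)} tissues: r = {r:.2f} (p = {p:.3f})")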

The authors attributed DNA copying-error-induced mutations to four sources: 1) mispairing, 2) polymerase errors, 3) base deamination (i.e. loss of the amine group) and 4) free-radical damage (i.e. oxidative stress).

To be honest, the finding is hardly surprising. The accumulation of spontaneous mutations over one’s lifetime is well established. Cancer occurs when the scales eventually tip – i.e. enough mutations accumulate that tumour suppressors are no longer able to hold oncogenes in check – setting the cell on a path to rapid multiplication and eventual destruction.

What is perhaps controversial is the message people may take away from the finding: that there’s no point in living healthily, since we’re all going to die anyway!

Here’s the breakdown of each contributing factor when 32 different cancer types were modeled on the basis of epidemiological findings. Their mathematical model assumed that cancers not induced by environment (E) or inheritance (H) were due to DNA copying errors (R):

“The median proportion of driver gene mutations attributable to E was 23% among all cancer types. The estimate varied considerably: It was greater than 60% in cancers such as those of the lung, esophagus, and skin and 15% or less in cancers such as those of the prostate, brain, and breast. When these data are normalized for the incidence of each of these 32 cancer types in the population, we calculate that 29% of the mutations in cancers occurring in the United Kingdom were attributable to E, 5% of the mutations were attributable to H, and 66% were attributable to R.”
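
The “normalized for the incidence” step is essentially an incidence-weighted average across cancer types. A rough sketch of the bookkeeping, with invented numbers since I don’t have their supplementary tables, would be:

    # Incidence-weighted attribution of driver mutations to environment (E),
    # heredity (H) and replication errors (R = whatever E and H don't explain).
    # All numbers below are invented for illustration, not values from the paper.
    cancers = {
        # name: (fraction E, fraction H, incidence per 100,000) - hypothetical
        "lung":     (0.65, 0.02, 45),
        "prostate": (0.05, 0.10, 50),
        "breast":   (0.15, 0.05, 55),
    }

    total = sum(inc for _, _, inc in cancers.values())
    e = sum(fe * inc for fe, _, inc in cancers.values()) / total
    h = sum(fh * inc for _, fh, inc in cancers.values()) / total
    r = 1.0 - e - h  # replication errors pick up the remainder

    print(f"incidence-weighted attribution: E = {e:.0%}, H = {h:.0%}, R = {r:.0%}")

With the study’s per-cancer fractions and real UK incidence figures, this kind of weighting is what yields their 29% / 5% / 66% split.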

Some researchers have chided the study as over-simplified and at odds with previous evidence that around 42% of cancers can be prevented by lifestyle and diet changes. The controversy is not new: an earlier study by the same authors reached essentially the same conclusion, but with sampling limited to the US.

However, it’s not as though the authors dismissed environment and genetic inheritance as unimportant. These factors clearly can affect outcomes, as seen for certain cancers. In addition, the authors acknowledge that certain cancers cannot be explained by DNA copying errors alone and require further epidemiological investigation.

More important is what one can do to counter this source of mutations. The authors call for earlier diagnosis and for introducing “more efficient repair mechanisms”. This could certainly drive efforts towards a new therapeutic approach focused on DNA repair. With genetic intervention becoming commonplace in the lab, and with scientists already relying on DNA repair mechanisms to introduce artificial mutations, it’s only a matter of time until we learn how to write DNA more faithfully or find ways to correct copying errors. And if that leads to a cure for cancer, then… that’s one big problem solved!

It’s how you move that matters – studying protein vibrations


Molecular interactions resemble a dance, a composition of energy and motion.

A novel tool that may offer structural biologists fresh perspectives has been developed by Dr Andrea Markelz’s group at the University at Buffalo. Its application to a biological question was published as “Moving in the Right Direction: Protein Vibrational Steering Function” in the Biophysical Journal, with Dr Katherine Niessen as first author.

The group developed a technique called anisotropic terahertz microscopy (ATM), which can distinguish the directionality of protein vibrations, in contrast to traditional methods such as inelastic neutron scattering, which measure only the total vibrational energy distribution.

One way that distal mutations (mutations far from the binding site) affect the binding of a ligand (a drug, protein or other molecule) is by inducing long-range motions in the protein structure that allow the ligand to be accommodated. The technique could therefore provide a fresh look at drug-protein interactions, allowing scientists to discern and model drugs that not only bind but also produce the vibrations needed to cause a particular effect.

Niessen et al. demonstrated the utility of ATM by studying chicken egg-white lysozyme (CEWL) and its binding to an inhibitor (tri-N-acetyl-D-glucosamine, 3NAG). Binding of the inhibitor produced little change in the vibrational density of states (VDOS, the energy distribution), whereas ATM detected dramatic differences between the bound and unbound states.

They attributed these differences to the direction of the vibrational movements. In computer modelling and simulation, the unbound (apo) form exhibited clamping motions around the binding site, while the bound (holo) form displayed more twisting motions. Quite a stark difference when you picture the dance moves of “Heyyy Macarena” versus “Twist and Shout”.

Furthermore, CEWL variants carrying distal mutations that increased catalytic activity without changing binding showed no significant change in VDOS but distinct differences in anisotropic absorbance, demonstrating the utility of ATM for evaluating long-range mutations and their effects on protein activity.

Challenges remain, as highlighted by Dr Jeremy Smith, a key expert in the field: the need to crystallize and align proteins, and the need to evaluate the technique’s detection sensitivity across a range of vibration types and conditions. But even he agreed it was a step (or shimmy) in the right direction.

 

Trump’s proposed budget

Trump released a budget blueprint last week proposing rather drastic cuts to research and development, with most of the savings channeled towards defense (in the form of wall building and boosting the military). Although it has to get through Congress later in the year to become reality, there is no doubt that the livelihoods of scientists and the state of scientific research in America are coming under attack from Mr Trump. Here’s a brief summary of his proposed plans:

Trump’s proposed 2018 R&D budget, shown as: institution, proposed change in budget (% change from the 2016 budget)

  1. National Institutes of Health (NIH): $5.8 billion (-20%)
  2. Department of Energy (Science), DOE: $900 million (-20%)
  3. Department of Energy (Advanced Research Projects Agency-Energy): $300 million (-100%)
  4. NASA Earth Science: $102 million (-5%)
  5. National Oceanic and Atmospheric Administration, Oceanic and Atmospheric Research (NOAA-OAR): $250 million (-50%)
  6. Environmental Protection Agency (EPA): $2.5 billion (-31%)
  7. Department of Energy (National Nuclear Security Administration): $1.4 billion (+11%)
  8. Homeland Security: $2.8 billion (+7%)

[Chart: Trump’s proposed R&D budget changes]

Matt Hourihan’s article nicely sums up which scientific programs would be affected, together with more informative charts.

The basic goal of the budget was to increase defense spending by $58 billion (+10%). Doing so without incurring significant debt meant reducing non-defense spending by $54 billion, though that still leaves $4 billion unaccounted for.

Trump cuts funding not only for science but for nearly every other area, including housing and urban development, anti-poverty measures, agriculture, transportation and education. Read Vox’s article for an overview.

Although it has always been debated whether funding for basic research drives economic growth, the motivation behind the proposed cuts seems driven more by xenophobia than by “making America great again”.

Trump seems to acknowledge that industry support for scientific endeavors is currently strong, which was his stated reason for completely defunding grants for energy research. However, the large cuts to basic science highlight his lack of understanding of, or even regard for, the role a government plays in setting the climate for scientific research.

Ultimately, his actions would likely produce a brain drain in America, thanks not only to the looming lack of research funding but also to a growing sense among foreign researchers that they are unwelcome.

I’m just glad to be living in a land where the leaders believe in the importance of science and share beliefs similar to those of Albert Einstein, who said:

“Concern for man himself and his fate must always form the chief interest for all technical endeavours … in order that the creations of our mind shall be a blessing and not a curse for mankind.”

 

6 ways to keep up with scientific literature

So much information, so little time. Living in the digital age means information is available at the click of a mouse, but it also means having to contend with a flood of news, views and reviews that can be overwhelming and confusing. Relevant scientific news these days comes not only in the form of journal publications but also pre-prints (see bioRxiv, pronounced “bio-archive”), Letters, blogs, news websites and even, God forbid, tweets.

I’m ignoring (and secretly hating) the folks who have no problems keeping up, and whose answer to this question would probably just be to “read more”. For those like myself who have non-photographic memories and who tend to easily forget things previously read, I’ve put together a short list of pointers that would hopefully prove useful.

1. Use citation software/a web clipper

Mendeley is great for this, though I’m sure there are others. It allows you to download and sort your papers into folders while making bibliographic citation easy. You can tag papers with keywords, which adds another level of organization, and a highly useful keyword search across the full text of all your papers (provided you have downloaded a PDF copy) is also available. It takes a while to get used to reading on a computer, but it really pays off in the long run, as you don’t accumulate wads of space- and tree-consuming paper that often end up unsorted and unread. A full-screen mode in Mendeley lets you read with a notes bar on the side where you can highlight text and add comments.

For non-scientific articles, Evernote is great. Similar to Mendeley, you can sort web articles, photos and all kinds of media into folders and tag them with keywords. I love the web-clipping function, where a button at the top of your browser lets you save a webpage in a simplified format. I often use it when I’m surfing the net for ideas on what to blog about!

2. Create feeds/alerts

I spotted one of my professors using HighWire for this, and it’s a pretty nice one-stop shop for creating feeds and alerts. You can set up citation alerts that identify journal articles containing your specified keywords or written by specified authors and send a list of their titles to your email. You can also sign up to receive an electronic table of contents (eTOC) from your favorite journals, which allows for a broader review of the recent literature. There have recently been warnings that HighWire may be discontinued, but so far the emails are still coming; I am also trying PubCrawler as a backup. eTOC alerts can also be set up directly on your favorite journal’s website.
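
If you like to tinker, the same keyword-alert idea takes only a few lines of Python with the feedparser library; the feed URL below is a placeholder for whatever RSS/Atom eTOC link your favorite journal publishes.

    # Minimal DIY keyword alert: pull a journal's eTOC RSS feed and keep entries
    # whose titles mention your keywords.  The URL is a placeholder.
    import feedparser

    KEYWORDS = {"gpcr", "glp-1", "phenotypic screening"}
    FEED_URL = "https://example.com/journal/etoc.rss"  # substitute a real eTOC feed

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        title = entry.get("title", "")
        if any(kw in title.lower() for kw in KEYWORDS):
            print(title)
            print("  " + entry.get("link", ""))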

Of course, this only works if you spend the time to go through these alerts. Often what happens is that the emails accumulate in your inbox, collecting virtual dust. So it’s best to set aside some time each week to go through them and pick out the articles of interest for more in-depth reading.

3. Get on Twitter 

Although I personally have not mastered the art of tweeting, Twitter is an amazing resource for real-time insight into what key players in your field are talking and thinking about. Follow your scientific idols, see who they follow, and follow them too. Not every scientist is on it, but you’d be surprised who you may find.

4. ResearchGate

I’m not a big user of ResearchGate, but it offers access to articles that you might otherwise have to pay for, which is what drives many to join. It’s a good way of seeing who has published what and who their closest collaborators are, and it enables social interaction via online forums.

5. Write a blog or a literature review journal

Although reading widely is great for keeping up with the literature, it is often remembering what you have read that is the challenge. A good way of cementing what you’ve read is to summarize it in writing. It’s one of the reasons I started this blog, and I often find myself searching and re-reading old posts to recall certain things. Writing a blog not only helps you sort through your key thoughts, it’s also a good way of collecting various sources of information into one easy-to-digest article, written in your own hand. If you’re shy about publishing it, create a private one. You’ll find yourself returning to it over and over.

6. Create a journal club

Interacting with people is naturally more memorable than reading something in private. Having a face-to-face discussion about a paper in a coffee house or over food can help you remember what was said, as odour memory seems to be the most resistant to forgetting, and hearing your peers’ opinions on the study widens your perspective. Even if the discussion is not in person, there are plenty of online forums, web chats and email threads that one can start with a group of people. In addition to the potential for generating new ideas, it’s a great way of keeping in touch!