Young blood transfusions against aging – what we know

My dad is getting on in years. He's 73 and is not exactly at the peak of health. In fact, I should count my lucky stars he's still alive. He has had several big health scares, but the doctors have 'worked their magic' and he remains in good spirits, with an alert mind and an ever-active daily schedule, though with somewhat restricted mobility.

He asked me one day, me being the scientist of the family, what I thought of young blood transfusions. I stared at him in a mixture of horror and amusement. He had read online that young blood transfusions provided some health benefits and had apparently asked his doctor for one! Of course the doctor politely declined, explaining that blood transfusions are typically reserved for emergencies and that he could only have one if his haemoglobin levels dropped below a certain (very low) threshold. I asked if he would like my blood or my brother's, but he said we were 'too old'. Images of vampires and creepy old ladies who bathed in the blood of virgins to remain youthful flashed through my mind... along with my dad drinking the blood of a teenager.

Perhaps the most well-cited studies of this wild idea are those carried out by neuroscientist Dr Tony Wyss-Coray's lab at Stanford University. He published two studies, one in Nature Medicine in 2014 and one in Nature in April this year, demonstrating cognitive improvement in old mice exposed to young blood.

The earlier study involved a surgical procedure called parabiosis, which joins an old mouse to a young mouse so that they share the same blood circulation. Five weeks after the surgery, the old mice showed significant cognitive enhancement. This was demonstrated by upregulation of genes associated with increased synaptic plasticity, an increased number of dendritic spines in the hippocampus (a number known to decline with age), and enhanced long-term potentiation (LTP, an electrophysiological phenomenon in the brain thought to underlie memory).

Importantly, they also injected old mice with plasma from young mice over 4 weeks and found that they exhibited improved memory in cognitive behavioral tasks. Essentially, they measured freezing time after pairing an auditory cue with a mild electric shock (fear conditioning) and spatial memory in a water maze. Pretty convincing results!

The second study, published this year, identified the factor in the plasma responsible for this rejuvenating effect. Interestingly, they also found that human umbilical cord plasma produced the same cognition-enhancing effects when injected into old mice, suggesting a common anti-aging molecule shared between mouse and man. The factor identified was tissue inhibitor of metalloproteinases 2 (TIMP2). Injection of this factor alone enhanced synaptic plasticity markers and LTP and improved cognition in various behavioral tasks. Conversely, inhibiting TIMP2 with antibodies impaired cognition, and depleting it from young plasma prior to injection also removed the cognition-enhancing effects. A surprisingly clear-cut result in neurobiology.

A separate study by a group at Harvard University identified another factor in young blood, Growth Differentiation Factor 11 (GDF11), which induced "vascular remodeling, culminating in increased neurogenesis and improved olfactory discrimination in aging mice."

It’s no wonder that people are getting excited about this, and my dad may actually have a point!

Looking at the biotech scene, there are two biotechs involved in young blood therapy. One is Alkahest, a Stanford spin-out with Dr Wyss-Coray as chairman of its scientific advisory board. It has partnered with the Spanish pharmaceutical company Grifols, a world leader in plasma-based products. Grifols invested US$38 million in Alkahest, with an additional US$13 million to develop and sell Alkahest's plasma-based products (see press release).

Another company is Ambrosia, started by Princeton graduate Jesse Karmazin, which has sparked controversy by carrying out a human clinical trial with a US$8,000 participation fee. Judging from the website, which offers very little detail at all, much less about the science or technology, it seems to be purely a money-making endeavor that thrives off the fear of aging. They are using healthy volunteers and do not even have a placebo group in the trial. Most in the field have written it off as "the scientific equivalent of fake news", but Ambrosia is unlikely to suffer from a lack of participants desperately seeking the fountain of youth.

While we might be able to one day find the ‘cure for aging’, let’s not lose touch with reality. As physicist Sean Carroll says in “The Big Picture”:

“But eventually all of the stars will have exhausted their nuclear fuel, their cold remnants will fall into black holes, and those black holes will gradually evaporate into a thin gruel of elementary particles in a dark and empty universe. We won’t really live forever, no matter how clever biologists get to be”

*For a great review of the current state of anti-aging efforts in biotech/pharma, read this: João Pedro de Magalhães et al., "The Business of Anti-Aging Science", Trends in Biotechnology, 35(11), 2017

**After this post was published, results of Alkahest's 18-participant trial were released, showing that young blood transfusions, while safe, produced minimal, if any, benefit – reported in Science Magazine


Artificial intelligence – fears and cheers in science and healthcare

Artificial intelligence (AI), defined as the theory and development of computer systems able to perform tasks normally requiring human intelligence, is increasingly being used in healthcare, drug development and scientific research.

The advantages are obvious. AI has the ability to draw on an incredible amount of information to carry out multiple tasks in parallel, with substantially less human bias and error, and without constant supervision.

The problem of human bias is of particular importance. In case you haven't seen it, watch Dr Elizabeth Loftus' TED talk on how easily humans form fictional memories that impact behavior, sometimes with severe consequences. I am not sure to what extent AI can be completely unbiased – programmers may inadvertently skew the importance that AI places on certain types of information. However, it's still an improvement on the largely impulsive, emotion-based, and reward-driven human condition.

Applications of AI in healthcare include the diagnosis of disease. IBM's Watson, the question-answering computer system that famously beat two human champions on the game show Jeopardy!, outperformed doctors in diagnosing lung cancer, with a 90% success rate compared to just 50% for the doctors. Watson's success was attributed to its ability to make decisions based on more than 600,000 pieces of medical evidence and more than two million pages from medical journals, along with the ability to search through up to 1.5 million patient records. A human doctor, in contrast, typically relies largely on personal experience, with only around 20% of his or her knowledge coming from trial-based evidence.

AI systems are also being used to manage and collate electronic medical records in hospitals. Praxis, for example, uses machine learning to generate patient notes, staff/patient instructions, prescriptions, admitting orders, procedure reports, letters to referring providers, office or school excuses, and bills. It apparently gets faster the more often it sees similar cases.

In terms of scientific research, AI is being explored in the following applications (companies involved in parentheses):

  • going through genetic data to calculate predisposition to disease in an effort to administer personalized medicine or to implement lifestyle changes (Deep Genomics, Human Longevity, 23andMe, Rthm)
  • delivery of curated scientific literature based on custom preferences (Semantic Scholar, Sparrho, Meta – the last now acquired by the Chan Zuckerberg Initiative)
  • going through scientific literature and ‘-omic’ results (i.e. global expression profiles of RNA, protein, lipids etc.) to detect patterns for targeted drug discovery efforts. Also termed de-risking drug discovery (Deep Genomics again, InSilico Medicine, BenevolentAI, NuMedii)
  • in silico drug screening, where machine learning and 3D neural-network models of molecular structures are used to reveal relevant chemical compounds (Atomwise, Numerate) – a toy sketch of the ligand-based flavor of this idea follows below
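
To give a flavor of that last item: the companies above build far more sophisticated (often 3D, structure-based) models, but the core recipe of ligand-based virtual screening can be sketched in a few lines. The SMILES strings and activity labels below are made up purely for illustration; only the general pattern (fingerprint the molecules, train a classifier, rank a library) is the point.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles, n_bits=2048):
    """Encode a molecule as a Morgan (circular) fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)  # radius 2
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy training set: a handful of molecules with made-up active (1) / inactive (0) labels
train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1",
                "CCN(CC)CC", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
labels = [0, 1, 0, 0, 1]

X = np.array([fingerprint(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)

# Score an unseen "library" and rank by predicted probability of activity
library = ["CC(=O)Nc1ccc(O)cc1", "CCCCCC", "OC(=O)c1ccccc1O"]
scores = model.predict_proba(np.array([fingerprint(s) for s in library]))[:, 1]
for smi, score in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {smi}")
```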

There is incredible investor interest in AI, with 550 startups raising $5 billion in funding in 2016 (not limited to healthcare). Significantly, China is leading the advance in AI, with iCarbonX achieving unicorn status (a valuation of more than $1 billion). It was founded by Chinese genomicist Jun Wang, who previously headed the Beijing Genomics Institute (BGI), one of the world's largest sequencing centers and a contributor to the Human Genome Project. iCarbonX now competes with Human Longevity in the effort to make sense of large amounts of genetic, imaging, behavioral and environmental data to enhance disease diagnosis and therapy.

One challenge that AI faces in healthcare is the sector's ultra-conservatism about changing current practices. The fact that a large proportion of the healthcare sector does not understand how AI works makes it harder for them to see the utility that AI can bring.

Another problem is susceptibility to data hacking, especially when it comes to patient records. One thing's for sure: we can't treat healthcare data the same way we currently treat credit card data.

Then there's the inherent fear of computers taking over the world – one that Elon Musk and other tech leaders seem to feel strongly about:


Image from Vanity Fair’s “ELON MUSK’S BILLION-DOLLAR CRUSADE TO STOP THE A.I. APOCALYPSE” by Maureen Dowd.

His fear is not so much that computers will develop a mind of their own, but that AI may be unintentionally programmed to self-improve in a way that spells disaster for humankind. And with AI having access to human health records, influencing patient management and treatment, and affecting drug development decisions, I think he has every right to be worried! If we're not careful, we might be letting AI manage healthcare security as well. Oops, we already are: Protenus.

 

Other Sources:

PharmaVentures Industry insight: “The Convergence of AI and Drug Discovery” by Peter Crane

TechCrunch: "Advances in AI and ML are reshaping healthcare" by Megh Gupta and Qasim Mohammad

ExtremeTech: “The next major advance in medicine will be the use of AI” by Jessica Hall

Phenotypic or target-based screening in drug discovery? Insights from HCPS2017

Drug discovery has always been a topic close to my heart, and I was fortunate to attend and present at the High Content Phenotypic Screening conference organised by SelectBio in Cambridge, UK recently. The conference covered the latest technologies in high-content screening and was attended by pharma scientists and screening experts, offering relevant insights into the issues currently being faced in the search for new drugs.

Dr Lorenz Mayer from AstraZeneca summed it up nicely when he described pharma's dire lack of novel targets – typically no more than 20 new drug targets a year, many of which overlap between companies. Dr Anton Simeonov from the NIH continued the bleak outlook by highlighting how drug discovery currently follows Eroom's Law – i.e. Moore's Law spelled backwards. Moore's Law, named after Intel co-founder Gordon Moore, describes the doubling, roughly every two years, of the number of transistors that can be placed inexpensively on an integrated circuit. Eroom's Law, in contrast, describes the halving, roughly every nine years, of the number of new drugs approved per billion US dollars of R&D spending in the USA.


Figure from BuildingPharmaBrands blog with reference to Scannell et al., Diagnosing the decline in pharmaceutical R&D efficiency, Nature Rev Drug Discovery, 2012
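
To put numbers on how brutally those two exponentials diverge, here is a back-of-the-envelope calculation of my own (not from the talks or from Scannell et al.) comparing the two trends over the roughly 60 years from 1950 to 2010:

```python
def moores_law(start, years, doubling_time=2.0):
    """Transistor count after `years`, doubling every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

def erooms_law(start, years, halving_time=9.0):
    """New drugs approved per billion USD of R&D, halving every `halving_time` years."""
    return start * 0.5 ** (years / halving_time)

years = 60  # roughly 1950 to 2010
print(f"Moore's Law: ~{moores_law(1, years):.0e}x more transistors per chip")
print(f"Eroom's Law: ~{1 / erooms_law(1, years):.0f}x fewer new drugs per billion dollars")
# roughly a billion-fold gain in computing vs a ~100-fold loss in drug R&D efficiency
```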

Reasons given for the opposing trends include the greater complexity of biology compared to solid-state physics, the tightening regulation of drug approvals and, of particular interest, the shift towards target-based drug discovery as opposed to phenotypic drug discovery.

Phenotypic drug discovery, the main route drug discoverers took before the advances in molecular biology of the 1980s, involves finding drugs without necessarily knowing their molecular targets. Back in the day, it was done mostly by measuring the efficacy of compounds given to animal disease models or to willing but possibly uninformed patients. Despite the simplicity of the approach, this was the most productive period in drug discovery, yielding many drugs still in use today.

These days, however, target-based drug discovery dominates the pharmaceutical industry. It can be simplified into a linear sequence of steps — target identification, tool production and assay development, hit finding and validation, hit-to-lead progression, lead optimization and preclinical development. Drug approvals where the molecular target is unknown are now rare, and large resources are put into increasing the throughput and efficiency of each step. The problems associated with this approach, however, are as follows:

  • poor translatability, where the drug fails due to lack of efficacy – either because the target is irrelevant or because the assay does not sufficiently represent the human disease
  • it assumes that disrupting a single target is sufficient
  • it relies heavily on the scientific literature, which has been shown to be largely irreproducible

The advantage of a target-based approach, however, is the ability to screen compounds at much higher throughput. It has also resulted in a vast expansion of explored chemical space, with many tool compounds (i.e. compounds that bind a target efficiently but cannot be used in humans due to toxic effects) having been identified. Tool compounds are great to use as comparison controls in subsequent phenotypic (hit validation) assays.

Phenotypic screening is still performed today, typically in the form of cellular assays with disease-relevant readouts – for example, cell proliferation assays for cancer drug screening. More sophisticated assays now involve high-content imaging, where changes in the expression or localization of physiologically relevant molecules or organelles can be imaged at high throughput.
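
For readers unfamiliar with how such a screen is actually judged, here is a small, entirely hypothetical example (simulated numbers, not data from the conference) of two standard steps: checking assay robustness with the Z'-factor of Zhang et al. (1999), and flagging hits with a simple three-standard-deviation rule.

```python
import numpy as np

# Hypothetical plate: negative controls (e.g. DMSO), positive controls (a reference
# inhibitor) and test compounds, using some disease-relevant readout such as a cell
# count or reporter intensity from a high-content imager.
rng = np.random.default_rng(0)
neg_ctrl = rng.normal(100, 5, size=32)     # no-effect wells
pos_ctrl = rng.normal(30, 5, size=32)      # full-effect wells
compounds = rng.normal(100, 6, size=320)   # test wells, mostly inactive

# Z'-factor: assay robustness; values above ~0.5 are generally considered screenable
z_prime = 1 - 3 * (pos_ctrl.std() + neg_ctrl.std()) / abs(pos_ctrl.mean() - neg_ctrl.mean())

# One simple hit rule: wells more than 3 SD below the negative-control mean
hits = compounds < neg_ctrl.mean() - 3 * neg_ctrl.std()

print(f"Z'-factor: {z_prime:.2f}; hits flagged: {hits.sum()} of {compounds.size}")
```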

The advantages of phenotypic screening naturally mirror the weaknesses of the target-based approach: any target hit will be disease-relevant, hitting multiple targets is not excluded, and the screen does not depend on existing knowledge.

However, it’s not always a bed of roses:

  • though knowing the mechanism of action (MOA) is not required for drug approval, it greatly facilitates the design of clinical trials in terms of defining patient populations, drug class and potential toxicities. Pharmas therefore typically try to find the target or MOA after phenotypic screening, which can again consume large amounts of resources.
  • the phenotypic assay may pick out more unselective or toxic compounds
  • setting up a robust and physiologically relevant phenotypic assay usually takes much longer and typically has a much lower throughput.
  • and how translatable would a phenotypic assay really be? Would we need to use induced pluripotent stem cells from patients, which are difficult to culture and can take months to differentiate into relevant cell types? The use of 3D organoids as opposed to 2D cell culture to mimic tissue systems adds another layer of complexity.
  • Dr Mayer also highlighted the important "people" aspect – explaining to shareholders why you are now screening 100x fewer compounds in a more "physiologically relevant" assay that has not yet been proven to work.

It's difficult to get hard numbers on which approach has proven most effective so far, but two reviews have tried.

  • Swinney and Anthony (2011) looked at 75 first-in-class medicines (i.e. those with a novel MOA) approved from 1999-2008 and found that ~60% of these drugs were derived from phenotypic screening while only 40% came from target-based screening, even though the latter approach was already widely adopted by pharma.
  • A more recent study by Eder et al. (2014), which looked at 113 first-in-class drugs approved from 1999-2013, saw 70% of drugs arising from target-based screens. Of the 30% identified from systems-based screening, about three-quarters were derived from already-known compound classes (termed chemocentric discovery) and only one-quarter came from true phenotypic screens.

The large discrepancy between the two studies was attributed mostly to the longer time window analysed, which may be needed for target-based approaches to fully mature.

The key metric to evaluate, however, would probably be the cost per compound progressed under each approach. Eder et al. claimed that target-based approaches shortened the length of drug development but gave no indication of the resources consumed.

Interestingly, the types of compounds and disease indications differed widely between the two approaches, with kinase and protease inhibitors featuring prominently in target-based discovery and drugs targeting ion channels being identified more often in phenotypic screens.

Which approach is best? There is no right answer, and much, I imagine, depends on the disease being studied. Target-based approaches were more relevant in identifying drugs for cancer and metabolic disease, while phenotypic approaches were more effective for central nervous system disorders and infectious disease.

In essence, both approaches could be used in parallel. It would be interesting to see whether incorporating phenotypic screens as the primary step could help reduce the current large attrition rates. The now-expanded library of tool compounds and existing natural-product derivatives serve as good starting candidates for such phenotypic screens. Target elucidation, however, is still likely to be required, so technologies that can reliably identify molecular targets will remain in high demand.

A key focus, however, should be increasing the translatability of phenotypic assays in order to reduce inefficiencies in drug screening. An unbiased approach is essential – one not dictated simply by ease of set-up or by how things have traditionally been done.

 

It’s how you move that matters – studying protein vibrations


Molecular interactions resemble a dance, a composition of energy and motion.

A novel tool that may provide fresh perspectives to structural biologists has been developed by Dr Andrea Markelz's group at the University at Buffalo. Its application to understanding a biological function was published as "Moving in the Right Direction: Protein Vibrational Steering Function" in the Biophysical Journal, with Dr Katherine Niessen as first author.

The group developed a technique called anisotropic terahertz microscopy (ATM), which can distinguish the directional motions of protein vibrations. This contrasts with traditional methods, such as inelastic neutron scattering, which only measure the total vibrational energy distribution.

One way distal mutations (i.e. mutations far from the binding site) affect ligand (i.e. drug/protein/other molecule) binding is by inducing long-range motions in the protein structure that allow it to accommodate the ligand. This technique could therefore provide a fresh look at drug-protein interactions, allowing scientists to discern and model drugs that not only bind but also produce the desired vibrations to cause a particular effect.

Niessen et al. demonstrated the utility of ATM by studying chicken egg white lysozyme (CEWL) and its binding to the inhibitor tri-N-acetyl-D-glucosamine (3NAG). Binding of the inhibitor did not produce much change in the vibrational density of states (VDOS, the energy distribution), whereas ATM detected dramatic differences between the bound and unbound states.

They attributed these differences to the direction of the vibrational movements. Computer modelling and simulation showed that the unbound (apo) form exhibited clamping motions around the binding site, while the bound (holo) form displayed more twisting motions. Quite a stark difference – think of the dance moves to "Macarena" versus "Twist and Shout".
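
To see why a direction-blind measurement can miss what a direction-resolved one picks up, here is a toy numerical cartoon of my own (not the paper's analysis): two states share exactly the same mode frequencies, so their VDOS histograms are identical, yet a spectrum weighted by how strongly each mode projects onto one polarization axis tells them apart. Real ATM measures polarized terahertz absorption of aligned protein crystals; the random unit vectors below are only a crude stand-in for the modes' transition dipoles.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = rng.uniform(0.2, 2.0, size=500)            # THz-range mode frequencies, shared by both states
apo_dirs = rng.normal(size=(500, 3))               # "apo": isotropically oriented modes
holo_dirs = rng.normal(size=(500, 3)) * [3, 1, 1]  # "holo": same frequencies, directions biased along x

def directional_spectrum(freqs, dirs, axis, bins):
    """Histogram of mode frequencies weighted by each mode's projection onto `axis`."""
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    weights = (dirs @ axis) ** 2
    return np.histogram(freqs, bins=bins, weights=weights)[0]

bins = np.linspace(0.2, 2.0, 19)
vdos = np.histogram(freqs, bins=bins)[0]            # identical for both states by construction
x_axis = np.array([1.0, 0.0, 0.0])
apo_x = directional_spectrum(freqs, apo_dirs, x_axis, bins)
holo_x = directional_spectrum(freqs, holo_dirs, x_axis, bins)

print("VDOS per bin (same for apo and holo):", vdos[:5], "...")
print("x-polarized spectra differ:", bool(np.abs(apo_x - holo_x).sum() > 1.0))
```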

Furthermore, CEWL variants carrying distal mutations that increase catalytic activity without changing binding showed no significant change in VDOS but distinct differences in anisotropic absorbance. This demonstrates the utility of ATM for evaluating long-range mutations and their effects on protein activity.

Challenges remain, as highlighted by Dr Jeremy Smith, a key expert in the field: the need to crystallize and align the proteins, and the need to evaluate the technique's detection sensitivity across a range of vibration types and conditions. But even he agreed it was a step (or shimmy) in the right direction.

 

Trump’s proposed budget

Trump released a budget blueprint last week proposing rather drastic cuts to research and development, with the savings channeled towards defense (in the form of wall building and boosted military spending). Although it has to get through Congress later in the year to become reality, there is no doubt that the livelihoods of scientists and the state of scientific research in America are coming under attack from Mr Trump. Here's a brief summary of his proposed plans:

Trump's proposed 2018 R&D budget changes, by agency: proposed change in funding (% change from the 2016 budget)

  1. National Institutes of Health (NIH): -$5.8 billion (-20%)
  2. Department of Energy, Office of Science: -$900 million (-20%)
  3. Department of Energy, Advanced Research Projects Agency-Energy (ARPA-E): -$300 million (-100%)
  4. NASA Earth Science: -$102 million (-5%)
  5. National Oceanic and Atmospheric Administration, Oceanic and Atmospheric Research (NOAA-OAR): -$250 million (-50%)
  6. Environmental Protection Agency (EPA): -$2.5 billion (-31%)
  7. Department of Energy, National Nuclear Security Administration (NNSA): +$1.4 billion (+11%)
  8. Homeland Security: +$2.8 billion (+7%)


Matt Hourihan's article nicely sums up which scientific programs would be affected, together with more informative charts.

The basic goal of the budget is to increase defense spending by $58 billion (+10%). Doing so without incurring significant debt means reducing non-defense spending by $54 billion, which still leaves $4 billion unaccounted for.

Trump cuts funding not only for science but for nearly every other area, including housing and urban development, anti-poverty measures, agriculture, transportation and education. Read Vox's article for an overview.

Although it has always been debated whether funding for basic research drives economic growth, the motivation behind the proposed cuts seems driven more by xenophobia than by "making America great again".

Trump seems to acknowledge that industry support for scientific endeavors is currently strong, which was his reason for completely de-funding grants for energy research. However, the large cuts to basic science highlight his lack of understanding of, or even care for, the role a government plays in setting the climate for scientific research.

Ultimately, his actions would likely produce a brain drain in America, thanks not only to the looming lack of research funding but also to a growing sense among foreign researchers that they are unwelcome.

I'm just glad to be living in a land where the leaders believe in the importance of science and share beliefs similar to those of Albert Einstein, who said:

“Concern for man himself and his fate must always form the chief interest for all technical endeavours … in order that the creations of our mind shall be a blessing and not a curse for mankind.”

 

What would you do with 900 million dollars of start-up funding?

America, a land of plenty – plenty of land, plenty of food, plenty of crazy politicians and plenty of start-up funding.

Grail, a company formed by sequencing giant Illumina in January 2016, recently obtained a hefty $900 million in Series B financing, after already obtaining $100 million in Series A. Grail aims to screen for cancer mutations in circulating tumour DNA (ctDNA) from blood samples via next-generation sequencing (learn more about this booming field in Sensitive Detection of ctDNA). The money came from several large pharmaceutical companies, with Johnson & Johnson purportedly making the largest investment, followed by others such as Bristol-Myers Squibb, Celgene and Merck. Interestingly, Bill Gates and Amazon's Jeff Bezos have also invested in Grail, together with the venture arm of medical distributor McKesson, China-based Tencent Holdings, and Varian Medical Systems, a radiation oncology treatment and software maker from Palo Alto.

This is the biggest start-up financing deal in biotech by a long shot. The largest deal of 2016 went to Human Longevity, Craig Venter's company, which raised $220 million in Series B; another that came somewhat close was mRNA company Moderna Therapeutics, which raised $450 million in 2015.

Grail plans to carry out "high-intensity sequencing" on blood samples from vast numbers of people to detect circulating tumour DNA at early stages – essentially in people not showing any signs of cancer – as a means of early detection to enable better treatment. This is an especially challenging feat, given that ctDNA makes up less than 1% of the circulating DNA found in blood. But there are hints that Grail is sitting on promising data sets that have turned skeptics into believers.
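
As a rough illustration of why the sequencing has to be so "high-intensity" (my own back-of-the-envelope numbers, ignoring sequencing errors and using a simple binomial model): suppose tumour-derived fragments make up just 0.1% of the cell-free DNA, and we want to see a given variant in at least three reads before believing it.

```python
from math import comb

def p_at_least_k(depth, fraction, k=3):
    """Probability of seeing >= k mutant reads at a site, binomial(depth, fraction)."""
    p_fewer = sum(comb(depth, i) * fraction**i * (1 - fraction) ** (depth - i) for i in range(k))
    return 1 - p_fewer

for depth in (100, 1_000, 10_000, 60_000):
    print(f"depth {depth:>6}x: P(>=3 mutant reads) = {p_at_least_k(depth, 0.001):.3f}")
# At ordinary sequencing depths the variant is essentially invisible; only at tens of
# thousands of reads per position (plus error suppression) does detection become reliable.
```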

There are concerns that testing healthy people for cancer might yield false positives that subject people to unnecessary and potentially dangerous testing procedures and treatments. This was the case for the prostate-specific antigen (PSA) test used to screen men at risk of prostate cancer. It turned out that PSA testing did not significantly reduce mortality in men with prostate cancer but did increase the harms associated with the accompanying treatments and tests, some of which are pretty nasty, such as urinary incontinence and erectile dysfunction.

So Grail had better be sure the sensitivity and accuracy of its predictions are close to foolproof, as cancer treatments are not exactly pleasant. They seem to be taking this seriously, judging from their embarking on an ambitious trial called "The Circulating Cell-free Genome Atlas Study", which will recruit more than 10,000 participants – 7,000 newly diagnosed cancer patients with multiple solid tumour types who have not yet undergone treatment, and 3,000 healthy volunteers. The trial is already recruiting and is projected to be completed within 5 years, by August 2022, with a primary outcome measure available by September this year. Grail hopes to detect shifts in cancer stage severity as it performs its tests over time, and how accurately the tests reflect other clinical readouts will indicate their reliability. More trials involving more patients will likely be necessary to determine whether this form of testing is dependable and whether it might even replace tissue biopsies as the gold standard in cancer diagnosis.

Grail has even drafted plans to make its form of testing available to the medical community by 2019, subject to experimental results. It's an incredibly ambitious timeline, so it's no wonder they need such large amounts of cash to drive it through. Jeff Huber, a former Google exec, is Grail's new CEO. His wife Laura died of colorectal cancer, so his new job also fulfils a personal mission. The team also includes other former employees of Illumina and Google, including Verily co-founder and CSO Vikram Bajaj. Verily, the Google life-sciences company, recently received a similarly outstanding investment of $800 million from, who would have guessed, Singapore's Temasek Holdings.

The scale of investment in America seriously dwarfs that of the European biotech scene. Despite conservatives highlighting a potential bubble in US biotech, and Trump's anti-pharma sentiment signalling a potential decline in available funding, one cannot deny that the lofty research goals currently being undertaken can only yield an incredible expansion of scientific knowledge. In my opinion, science is expensive, and the more money you have, the more science you can do. The key thing, though, is to make sure it's good science!

 

 

An exciting time for epitranscriptomics

Epigenetics is a well-established means of gene regulatory control, in which chemical modifications of DNA bases or their associated histone proteins (rather than changes to the DNA sequence itself) affect the expression of genes. These epigenetic marks may take the form of DNA methylation (usually on cytosine residues) or histone acetylation, phosphorylation, ubiquitination, sumoylation and so on, and can be passed down to daughter cells. Epigenetic changes can be induced by the environment and provide a biological mechanism by which nurture, as opposed to nature, plays a significant role in shaping our behaviours and characteristics.

Recently, scientists have been uncovering the functional significance of epitranscriptomics – the chemical modification of RNA. The first report of mRNA modification, specifically methylation of the N6 position of adenosine (m6A), came in 1974 from Fritz Rottman and colleagues. The m6A modification is the most common eukaryotic mRNA modification; however, its functional significance remained unclear until the 2010s. In 2012, Dominissini et al., using m6A-antibody-enriched RNA-seq, discovered around 12,000 methylated sites in 7,000 coding genes and 250 non-coding genes. The sites were typically concentrated around stop codons, within long internal exons and at transcription start sites, and it became evident that genes without these modifications tend to be more highly expressed. He writes about it in a Science essay here.
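
For intuition, here is a toy sketch of my own (not Dominissini et al.'s actual pipeline) of the logic behind antibody-enriched RNA-seq (m6A-seq/MeRIP-seq): RNA is fragmented, m6A-bearing fragments are pulled down with an anti-m6A antibody, both the immunoprecipitated (IP) and input libraries are sequenced, and windows where IP coverage is enriched over input are flagged as putative methylation sites.

```python
import numpy as np

def m6a_enriched_windows(ip_counts, input_counts, min_log2_fc=1.0, pseudo=1.0):
    """Return indices of windows whose library-size-normalized IP/input log2 ratio
    exceeds `min_log2_fc`; `pseudo` is a pseudocount to avoid division by zero."""
    ip = np.asarray(ip_counts, dtype=float)
    inp = np.asarray(input_counts, dtype=float)
    log2_fc = np.log2(((ip + pseudo) / ip.sum()) / ((inp + pseudo) / inp.sum()))
    return np.where(log2_fc >= min_log2_fc)[0]

# Ten windows along a toy transcript; window 7 (say, near the stop codon) is enriched in the IP
ip_reads    = [12, 15, 14, 13, 12, 16, 14, 95, 15, 13]
input_reads = [20, 22, 21, 19, 20, 23, 22, 24, 21, 20]
print(m6a_enriched_windows(ip_reads, input_reads))   # -> [7]
```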

There are several molecular players involved in epitranscriptomics, often referred to as writers, readers and erasers. The methyltransferase METTL3 (a "writer") deposits the m6A modification; the YT521-B homology (YTH) domain family of proteins are "readers" that bind m6A-modified RNA and regulate processes affecting protein expression, such as RNA degradation, translation and splicing. Finally, "erasers" such as the enzyme FTO, implicated in diseases including cancer and obesity, remove these m6A marks – completing the set of actors required to establish epitranscriptomics as a regulatory mechanism for RNA expression.

Samie Jaffrey's group recently published a controversial paper identifying another epitranscriptomic modification, N6,2′-O-dimethyladenosine (m6Am), located near the 5′ caps of mRNAs, as the main substrate of the eraser FTO instead of m6A. m6Am correlates with increased mRNA stability, as it makes 5′ caps harder to remove, thereby increasing protein expression. This contrasts with the m6A modification, which is associated with suppression of protein expression, highlighting the distinct functional roles of these RNA methylation marks.

There are already studies demonstrating that m6A regulation is also utilized by non-coding RNAs. The simple self-made schema below shows how the m6A modification is used by Xist to carry out transcriptional repression of the X chromosome, as demonstrated by Patil et al. in a recent Nature publication.

[Schematic: m6A-mediated transcriptional repression of the X chromosome by Xist]

Epitranscriptomics is a relatively young field and opens up new possibilities for studying how RNAs are regulated and, in particular, how non-coding RNAs may carry out their functions. This signals yet more exciting times ahead for RNA researchers!