Choosing a graduate school/lab/professor

This may be one of the biggest decisions you make in your life. You are sure you want to do a PhD but are still uncertain about where or who to do it with. Which school should you pick? Which area should you focus on? Which Professor would not bite your head off? Choices, choices.

In all honesty, the choice boils down to the fortuity of events. You may apply to various schools, but what if only one accepts you? Or you could apply to just one school and get in. Or, lucky you, apply to all the schools and have them all accept you, so you can take your pick! Obviously you would want to pick a good, reputable school, somewhere people have actually heard of. Even if it is not a famous school, at least choose a lab well-known in its field. Doing a PhD in an unknown school, in an unknown lab that publishes in unknown journals, will not get you very far.

What should you do your PhD in? This one poses a lot of headaches for many students, because it is a tough choice, and many have the wrong impression that it will limit you to that field for life. The truth is, many end up doing completely different things from what they did during their PhDs. Of course, if you want to be a true academic, you could take the conventional route and absorb yourself entirely within your single chosen field of study: learning from the best in the field, becoming knowledgeable in that particular area, and starting your own lab on an independent branch of study which in actual fact is not far off from your PhD lab’s main research focus. This might suit some but not others; it is entirely down to one’s personality. Note, however, that what you choose to do during your PhD does not limit what you can do in future. Science is science. It is forming hypotheses, designing experiments, performing them and making sure they are reproducible before drawing conclusions. This applies to any field of study, and it is this process you should expect to hone during your PhD. Sure, you might become an expert in diabetes, cancer or neurodegenerative disease after immersing yourself in the field for four years, but what makes you think you cannot gain the same depth of knowledge from some strategic, focussed reading post-PhD?

Next, choosing a Professor. This boils down to your own resourcefulness as well as your character. There are three kinds of PhD students: the ambitious and experienced, the ambitious and inexperienced, and the clueless.

The ambitious and experienced PhD student usually has a few years of research experience under his/her belt and can manage a project pretty independently. These students would probably do fine under a busy Professor who is hardly ever around. However, I cannot overemphasize the importance of post-docs. Whatever the case, you cannot do research in a silo. You need fellow human minds to bounce ideas off, and if your post-docs are smart, capable and available, that only works to your advantage. So take some time to talk to people in the lab, and make sure you would have sufficient tools and resources to do what you want to do.

The ambitious but inexperienced PhD students, i.e. those with little or no previous research experience, will probably need more hands-on guidance. This may work if you have a post-doc assigned to you, or if your Professor is young and able to spare the time to guide you personally. These students do not do well without enough guidance, so if you are one of them, ask before you join whether you would have a personal mentor in the lab. Also ask how closely your research would be monitored, i.e. how many times a week would you meet with the Professor? Remember, this is 3-4 years of your life; to get everything out of it, you want to be sure people are available to teach you.

For the clueless who have no idea what they want, the most essential thing is simply to talk to more labs and Professors. Only by talking to Professors and their lab members will you get a feel for what it would be like to work there. Talking to previous lab members especially provides great censorship-free insight into what working for the Professor, and with the various people in the lab, may be like. Also take a look at the lab’s publication history: if they are publishing 6 or more papers a year in reputable journals, that is a good sign. On the other hand, if there have been no papers from the lab for more than 5 years, that is a VERY clear warning sign. Another useful thing is to look at what previous PhD students from the lab ended up doing. If many of them are still in science, that’s always comforting!

Some PhD programs come with coursework, but many do not. To be honest, coursework may be nice for learning new facts/techniques, but in the end a PhD is about doing science, and many are of the opinion that coursework ought to be minimized so that one can fully attain lab nirvana. However, I feel it would not hurt to have the occasional course/workshop on current scientific techniques, career guidance, social networking skills etc. The last of these especially seems to be lacking in many PhD students!

So there, the key take-aways: know thyself and what kind of guidance you need. Do your homework and get to know the lab. Try to choose a lab that is good at what it does and has enough resources to do it. And never join a lab without talking to the Professor first, for goodness sake!

More on RNA interference

RNA interference has come a long way since its beginnings in the 1990s. The first observations were reported in the early 1990s, when various groups described a “quelling” of gene expression after homologous RNA sequences were introduced. Craig Mello and Andrew Fire were the first to establish, in 1998, that double-stranded RNA was capable of silencing the expression of genes carrying the complementary sequence. Double-strandedness was essential, as injection of only the sense or antisense RNA strand failed to achieve the same effect. For this they won the Nobel Prize in Physiology or Medicine in 2006.

Since then, much work has gone into elucidating the mechanism of RNA interference. It is now known that double-stranded RNA is cleaved into smaller lengths of ~21-25 ribonucleotides by an enzyme called Dicer. These small intermediates carry out the gene knockdown effect: specifically, the antisense strand binds to a complementary RNA sequence from an endogenous target gene, recruiting it to a protein complex called RISC, or RNA-induced silencing complex. This complex contains Argonaute proteins, which then carry out the “slicing” of the target transcript, hence silencing its expression.
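
To make the base-pairing step concrete, here is a minimal Python sketch, with entirely hypothetical sequences, of how a fully complementary antisense (guide) strand picks out its target site in an mRNA:

```python
def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence (A-U and G-C pairing)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna))

def find_target_site(guide: str, mrna: str) -> int:
    """Index in the mRNA where the guide (antisense) strand is fully
    complementary, i.e. where RISC would be recruited; -1 if absent."""
    return mrna.find(reverse_complement(guide))

guide = "UAGCUUAUCAGACUGAUGUUG"       # hypothetical 21-nt guide strand
mrna = "AAGGCAACAUCAGUCUGAUAAGCUAGG"  # toy target mRNA fragment
print(find_target_site(guide, mrna))  # -> 4, the fully matched site
```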

The scientific world rejoiced, as we now had a cool and relatively easy way of knocking down gene expression. It took a while for us to realise that the short interfering RNA, or siRNA, approach to knocking down genes came with a major drawback: rather broad off-target effects. These arise because siRNAs recognize their targets via a seed region (nucleotides 2-8 of the guide strand), and such short matches may occur on thousands of mRNAs. siRNAs also look a lot like naturally occurring short RNAs called microRNAs, which do not need total complementarity to the target sequence to knock down a gene. Hence varying degrees of knockdown may occur across several genes, giving rise to a thoroughly mixed phenotype! Introducing double-stranded RNA can also stimulate an immune response, which may further confound the overall phenotype. This did not stop people from performing siRNA screens, or charging money for performing siRNA screens, though. Millions of dollars, if not more, must have been spent on screens which yielded data that could not be validated.
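
As a toy illustration of the seed problem, the sketch below (hypothetical guide and 3′ UTR sequences) scans transcripts for complements of the guide’s seed region alone; note how a transcript with no full-length site still scores a hit:

```python
def reverse_complement(rna: str) -> str:
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(rna))

def seed_match_targets(guide: str, utrs: dict) -> list:
    """Names of transcripts whose 3' UTR contains a site complementary
    to the guide's seed region (positions 2-8, 1-based)."""
    seed = guide[1:8]                     # nucleotides 2-8 of the guide
    seed_site = reverse_complement(seed)  # what the seed pairs with on the mRNA
    return [name for name, utr in utrs.items() if seed_site in utr]

guide = "UAGCUUAUCAGACUGAUGUUG"
utrs = {  # hypothetical transcripts
    "intended_target": "CAACAUCAGUCUGAUAAGCUAGG",  # full-length match
    "off_target_1":    "GGGAUAAGCUAUUUCCC",        # seed match only
    "unrelated":       "GGGGCCCCAAAA",
}
print(seed_match_targets(guide, utrs))  # -> ['intended_target', 'off_target_1']
```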

Thankfully, several methods are now being used to counter the off-target problem. Many rely on chemical modification (e.g. 2′-O-methylation) of the siRNAs, which presumably reduces seed-based binding to off-target mRNAs. However, this does not completely eliminate false positives. One method that has proven quite successful is the administration of a pool of several siRNAs that target the same gene using different seed sequences. The use of a pool (of 30 siRNAs, for example) allows one to reduce the concentration of each individual siRNA, minimizing off-target effects. This method of course relies heavily on bioinformatics to design appropriate siRNAs, and may be limited by gene length. Because of the well-defined characteristics of each pool, one can not only target specific gene paralogs but also target several genes from the same family based on shared sequences. I have to admit I work for the founders of this technology (www.sitoolsbiotech.com) so perhaps I am biased 🙂 But it is truly cool how bioinformatics is used to control the complexity of these siPools.
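
The arithmetic behind pooling is simple enough to sketch. Assuming, for illustration, a fixed total dose of 30 nM split across n sequence-distinct siRNAs, the on-target effect is shared by all of them while each off-target seed is diluted n-fold:

```python
TOTAL_NM = 30.0  # total transfected siRNA concentration in nM (assumed)

for n in (1, 4, 30):
    per_sirna = TOTAL_NM / n  # dose of each distinct siRNA in the pool
    print(f"{n:>2} siRNAs/pool: on-target dose stays ~{TOTAL_NM:.0f} nM (summed),"
          f" but each off-target seed is driven by only {per_sirna:.1f} nM")
```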

I believe this is just the start, and more advances will be seen over the next few years that significantly improve siRNA screening. The increasing role of RNA alongside DNA is also taking the world by storm, and I am sure many more discoveries lie ahead.

The hype about CRISPR

CRISPR (clustered, regularly interspaced, short palindromic repeat) sequences are commonly found in bacteria and function as part of their adaptive immune system to counter foreign nucleic acids such as viruses and plasmids. CRISPR DNA sequences are transcribed into CRISPR RNAs (crRNAs), which complex with Cas (CRISPR-associated) proteins to bring about cleavage of the invading DNA. These systems were first described in 1987, but their precise function as an immune defense was only well-established by 2008.

At the same time, gene editing was being explored with the use of zinc finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs). These proteins can essentially be engineered to bind any DNA sequence, inducing double-stranded breaks that trigger the cell’s physiological DNA repair machinery, which is prone to introducing errors. This leads to DNA modifications that often generate a dysfunctional/altered protein, or no protein at all. Other methods of knocking down genes include RNA interference, where short RNA sequences of 21-25 bases are introduced into the cell and bind to a complementary endogenous mRNA sequence, leading to its cleavage or the inhibition of its translation. RNA interference differs from genetic/DNA editing in being transient and incomplete, so that low levels of protein function may persist. It has also been plagued by off-target effects, where imperfect matches between the siRNA (short interfering RNA) and unintended mRNAs can also lead to gene knockdown.
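
To see why error-prone repair of a double-stranded break so often wrecks the protein, here is a toy example (hypothetical sequence, heavily abbreviated codon table) in which deleting a single base shifts the reading frame and changes the downstream codons:

```python
CODONS = {"ATG": "M", "AAA": "K", "GAA": "E", "TTC": "F",
          "AAG": "K", "AAT": "N", "TCT": "S", "TAA": "*"}  # toy codon table

def translate(dna: str) -> str:
    """Translate codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS.get(dna[i:i + 3], "?")  # "?" = codon absent from toy table
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

gene = "ATGAAAGAATTCTAA"      # hypothetical ORF: M-K-E-F-stop
edited = gene[:4] + gene[5:]  # repair after the break deleted one base
print(translate(gene))        # MKEF
print(translate(edited))      # MKNS: the reading frame is shifted after the deletion
```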

The use of CRISPR has caught the scientific world’s attention due to the ease with which it introduces genetic alterations. The system has now been pared down to two components that can be introduced via a single vector: the Cas9 nuclease and the guide RNA (gRNA), a fusion of a crRNA and a tracrRNA (trans-activating crRNA, necessary for modulating Cas9 function). By modifying the guide RNA sequence, one can target any genetic sequence for cleavage by Cas9.
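
As a rough sketch of what this targeting means in practice for the commonly used SpCas9: candidate sites are 20-nucleotide stretches sitting immediately 5′ of an NGG PAM. The scanner below uses a made-up DNA fragment and checks only one strand; real design tools additionally score off-targets and GC content:

```python
import re

def find_protospacers(dna: str) -> list:
    """Return (position, protospacer, PAM) for every 20-nt stretch that
    sits immediately 5' of an NGG PAM on the given strand."""
    pattern = re.compile(r"(?=([ACGT]{20})([ACGT]GG))")  # lookahead: overlapping hits
    return [(m.start(), m.group(1), m.group(2)) for m in pattern.finditer(dna)]

dna = "ATGCTGACCTGAAGCTTGCAGTCGATCGGATCCGTAGCTAGCAGGTACGATCGG"  # hypothetical fragment
for pos, spacer, pam in find_protospacers(dna):
    print(pos, spacer, pam)
```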

How CRISPR works (from Sander and Joung, 2014, “CRISPR-Cas systems for editing, regulating and targeting genomes”, Nature Biotechnology 32, 347–355)

Indeed, several genetic screens have been performed with CRISPR using large gRNA libraries (64,000-87,000 gRNAs), producing knock-out cell lines within 7-14 days. Complete gene knockout was observed, creating clear phenotypes such as a total loss of fluorescence in GFP-expressing cells. What about off-target effects? In principle, CRISPR and RNAi work on the same premise of targeting short (~20-base) sequences that might occur in several places within the genome. However, siRNAs utilize the endogenous proteins Argonaute and Dicer to achieve their effects, whereas the details of how gRNA and Cas9 work together to achieve genetic alteration have not been sufficiently well characterized. The jury is therefore still out on their potential off-target effects. Certain interesting observations have, however, been described. For example, open chromatin structures may lead to more Cas9 binding and cleavage, while the cleavage efficiency of a gRNA was found to depend strongly on its affinity for Cas9. The CRISPR system also affords the added advantage of enabling not just gene knockouts but also gene activation, as catalytically dead versions of Cas9 may be used to deliver other perturbations to DNA.

But in case you get carried away and start acting like a CRISPR-obsessed maniac, take note of the limitations. For pharma companies in particular, what drug can provide 100% inhibition of a target protein? In that respect, siRNA screening may provide a more accurate representation of the actions of small-molecule drugs than CRISPR screening. Also, the fact that cell lines have to be engineered and lentiviral systems established may impose greater demands on time and resources than siRNA screening, where one simply throws siRNAs onto cells and sees effects within 2-3 days.

In the next blogpost, we shall talk more about how best to counter the off-target effects of siRNA screens. A subject quite dear to my heart.

Pfizer buys Hospira

As further illustration of the importance of biologics and the rise of generics, Pfizer has invested 17 billion US dollars in the Illinois-based pharmaceutical and medical device company Hospira. Hospira is pretty well-established, with about 15,000 employees (probably about to be reduced in the near future, judging from Pfizer’s previous acquisitions), and is the largest producer of generic injectable pharmaceuticals for hospitals. It also develops biosimilars, which are generic versions of biologics. Though biosimilars can never achieve the exact composition of the reference biologic, due to the complexity of the synthesis procedures, they nevertheless achieve biosimilar status on passing regulatory standards of quality, efficacy and safety. Interestingly, biosimilars have so far only been approved in Europe and Australia.

The deal is said to make investors happy, as it would increase earnings and is a great counter-measure against the patent cliff Pfizer is currently facing. It also spurs the expectation that Pfizer will eventually break up into two or more separate companies, which would increase shareholder value. CEO Ian Read has already done this with Pfizer’s animal health unit, in the form of Zoetis Inc. This also appeases investor sentiment after Pfizer’s previously failed bid for AstraZeneca.

One wonders, though, whether this deal would be enough to sustain Pfizer in the long run. Many other companies also produce generics, and do so at a larger scale than Hospira ever could. These include Teva Pharmaceuticals, based in Israel; Sandoz, the generics division of Novartis; and Actavis (previously known as Watson Pharmaceuticals *not to be confused with the Watsons stores popular in Singapore), which is headquartered in Dublin, Ireland, though founded in the US (yes, the move was for the tax breaks). Not to mention the rising stars of India, Ranbaxy, Glenmark and Sun Pharmaceuticals, which have reported chart-topping revenue growth in recent years.

Pfizer has built a reputation of being able to take care of herself though, so we will just have to wait and see what other moves she has up her sleeve.

Biomarkers in cancer

The importance of biomarkers is unquestionable. They enable the objective measurement of physiological processes, pathogenic progression and pharmacologic response to therapeutic intervention. In simple terms, they allow one to measure the state of human health. Biomarkers can take the form of proteins, DNA, RNA, metabolites, or even cells, present in blood, urine, semen, saliva, biopsies, or any extractable human sample. They can take the form of electrical activity, as in electrocardiography or electroencephalography readouts from the heart and brain respectively. They can take the form of body temperature, blood pressure, genetic and epigenetic modifications, and images obtained by passing radiation or magnetic fields through our bodies. They may even encompass behavioral changes or changes in how we perceive our environment, although these may prove more challenging to measure objectively.

The use of biomarkers, specifically predictive biomarkers, is closely tied to the dawn of personalized medicine. Predictive biomarkers offer information on the likelihood of response to a given therapy and have been increasingly used in the field of cancer to decide first-line treatments. It seems obvious that people are a heterogeneous lot and a standardized treatment may not work for everyone. Patients suffering from chronic myeloid leukemia (CML) carry a chromosomal translocation between chromosomes 9 and 22 (forming the Philadelphia chromosome) which produces a fusion protein called BCR-ABL. BCR-ABL acts as a constitutively active tyrosine kinase, resulting in unregulated cell division. Imatinib (Gleevec), developed in the late 1990s by Novartis (then known as Ciba-Geigy), inhibits the tyrosine kinase activity of BCR-ABL, reducing the proliferation of BCR-ABL-expressing cells and significantly improving survival. Resistance to imatinib however occurs in 10-15% of patients and is influenced by mutations in the catalytic domain of BCR-ABL (30-50% of patients with such mutations develop resistance). Catalytic domain mutation screening in BCR-ABL is used as a predictive biomarker to identify patients to be treated with other recently developed tyrosine kinase inhibitors for imatinib-resistant CML, namely nilotinib and dasatinib. Similarly, activating mutations in the epidermal growth factor receptor (EGFR) correlate with higher response rates to gefitinib in non-small cell lung carcinoma (NSCLC) patients, while patients without these mutations respond better to carboplatin-paclitaxel treatment. EGFR testing is therefore now recommended for NSCLC patients to decide first-line treatment.
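
Purely to illustrate the decision logic described above (hypothetical code, emphatically not clinical guidance), the two examples could be written as a simple biomarker-to-therapy lookup:

```python
def first_line_therapy(disease: str, markers: dict) -> str:
    """Map a predictive biomarker result to a first-line therapy,
    following the two examples in the text."""
    if disease == "CML":
        if markers.get("BCR-ABL_catalytic_domain_mutation"):
            return "second-generation TKI (nilotinib or dasatinib)"
        return "imatinib"
    if disease == "NSCLC":
        if markers.get("EGFR_activating_mutation"):
            return "gefitinib"
        return "carboplatin-paclitaxel"
    return "no biomarker-guided recommendation"

print(first_line_therapy("NSCLC", {"EGFR_activating_mutation": True}))  # gefitinib
print(first_line_therapy("CML", {"BCR-ABL_catalytic_domain_mutation": True}))
```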

Another class, prognostic biomarkers, reveals whether a therapeutic intervention is working by offering insight into disease progression. The current gold standard for monitoring clinical benefit in a cancer trial is overall survival, but this is increasingly being replaced by progression-free survival. Still, these are often supplemented with other biomarkers, known as surrogate end-points of efficacy, which usually involve measuring tumour size or function by imaging techniques like magnetic resonance imaging, computed tomography and positron emission tomography. These surrogate measures cannot be used in isolation, however, due to issues with reproducibility of assessment, the inability to assess certain disease sites (e.g. bone), and the inability to distinguish between tumour and necrotic/fibrotic masses. Furthermore, they may not work with therapies that recruit the immune system to target cancer cells, which tend to increase apparent tumour size, presumably due to the infiltration of immune cells.

Blood biomarkers offer the advantage of being easily accessible, and they are assayed more objectively by machines rather than by human interpretation. Currently in the field of cancer, protein biomarkers such as antigens (e.g. cancer antigen 125 in ovarian cancer) are most commonly used to monitor therapeutic response. However, the fact that they can derive from non-tumour sources and fluctuate with concomitant illness limits their use as reliable and robust surrogate biomarkers in advanced-stage solid tumours. Circulating tumour cells have some prognostic value but suffer from a limited dynamic range and difficulties in measurement. Circulating cell-free DNA (cfDNA) perhaps holds the most promise as a surrogate end point in clinical trials, providing a wide dynamic range, high sensitivity, and apparent correlation with tumour burden. cfDNA consists of DNA fragments, usually ~170 base pairs in length, released from apoptotic/necrotic cells; in cancer patients these fragments harbor tumour mutations. Techniques involved in their assay include targeted DNA-capture methods and next-generation sequencing.
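
As a sketch of how such an assay is read out, one common measure is the variant allele fraction, mutant reads divided by total reads at a tumour-specific locus, followed over treatment; the read counts below are invented:

```python
def variant_allele_fraction(mutant_reads: int, total_reads: int) -> float:
    """Fraction of sequencing reads carrying the tumour mutation."""
    if total_reads == 0:
        raise ValueError("no coverage at this locus")
    return mutant_reads / total_reads

# (timepoint, mutant reads, total reads) at one tracked mutation; invented numbers
timepoints = [("baseline", 480, 12000), ("cycle 2", 130, 11500), ("cycle 4", 12, 12400)]
for label, mutant, total in timepoints:
    print(f"{label}: VAF = {variant_allele_fraction(mutant, total):.4f}")
```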

Pharmacodynamic biomarkers, biomarkers that measure the effect of a drug on its target, are perhaps the most desired type of biomarker in a clinical trial, as they provide proof-of-concept that the drug is working through a specified pathway to provide clinical benefit. They usually involve tissue biopsies to examine the status of target activation (e.g. target phosphorylation, if the drug is a kinase inhibitor).

Technology is advancing at a rapid pace, leading to the identification of many novel biomarkers, yet challenges remain in implementing their use in the clinic. Hospitals often rely on practices that have withstood the test of time, and healthcare staff often experience an inertia, or fear, about implementing change. Furthermore, the assay of some of these biomarkers requires advanced equipment and expertise that may not be readily available in a hospital setting. The multicenter nature of clinical trials also makes it difficult to standardize assays, a key aspect of producing reliable and reproducible measurements.

So it appears we have our work cut out for us.

(Source: Joel W. Neal, Justin F. Gainor & Alice T. Shaw, “Developing biomarker-specific end points in lung cancer clinical trials”, Nature Reviews Clinical Oncology (2014), doi:10.1038/nrclinonc.2014.222)

Data-driven research

How many of us have planned out a string of experiments based on certain specific assumptions, grounded in previous work, published literature, and our Professor’s expectations, only to have it all fail on the first try? And on every subsequent attempt?

You spend weeks, months, sometimes years trying to optimize every aspect of the experiment, but it still fails to give you the result you expect (or want). You hear people around you say, ah, this is the life of a scientist, things are never expected to work on the first try. You keep at it, going in on weekends, spending every waking hour thinking, where did I go wrong? As you do that, time slips by, you are wasting reagents, your Professor is losing patience, and there is a dawning, intense realization that your time is running out: you have to graduate in 2 years and have nothing to show for it. Is it too late for a project change?

Scary as it sounds, it is a common scenario for many PhD students, myself included. I was trying to set up a cellular model for Huntington’s disease (HD), and to cure it by altering protein equilibrium in the cell. There were hundreds of papers showing how expressing the mutant Huntingtin protein was enough to produce cell death, presumably via its aggregating properties. But it was not a clear-cut process: there were various models of inducible expression and overexpression, and cell lines that endogenously express the “toxic” protein. In my hands, though, these “toxic” effects were not apparent. Toxicity usually required an additional stress, and then more questions would arise as to which stress is appropriate for reproducing a disease model, and how much stress one should apply. I know I was not alone because, coming from industry prior to my PhD, I knew even pharma was having problems establishing the HD cell model. And so 2 years went by, filled with variable results and slow, sometimes invisible, progress. Thankfully, I had a backup project which was beginning to yield more interesting results, and we ended up switching to that. And it was a harried next 2 years playing catch-up and producing enough results to let me graduate.

Sometimes one just has to know when to give up. Science is driven by nature, not man. No amount of hard work, hours in the lab, planning and theorizing will change the outcome of an experiment. If it is not working, and you have exhausted the possibilities, let it go. This is called data-driven research. I see so many students spending time, reagents, and their own energy repeating an experiment over and over again (sometimes without varying anything!). And sometimes Professors (usually the young ones) will be behind this insanity, so intent on achieving a particular result, and so convinced their careers rest on it, that they unwittingly drive their labs into the ground.

Usually, experiments fail for good reason. Sometimes reagents are faulty, conditions are not ideal, or something stupid, like not letting a gel run long enough for the bands to separate, causes weeks of head-scratching, agony and frustration. What does help is asking the people around you, or sometimes asking the internet. Gaining new perspectives can reveal things you never realised. The problem is when students think it is their own fault and beat themselves up about it, repeating the experiment till the cows come home. And sometimes things just do not work. Move on already. Or find another method to prove your point. There are numerous ways to prove a point! An important side-note: it is key to record all your experimental observations. Without this record, one cannot know where one went wrong. You do not want to be second-guessing yourself on whether you really added the correct reagent.

It also helps to distance yourself from the lab when things go wrong. Take a walk, have a coffee, do something to take your mind off it. It gives you some bigger-picture perspective and also calms the nerves. Just my 2 cents 🙂