Opiniomics

bioinformatics, genomes, biology etc. "I don't mean to sound angry and cynical, but I am, so that's how it comes across"

Let’s keep saying it, and say it louder: REVIEWERS ARE UNPAID

The arguments over peer review and whether we are obliged to do it generally fall on two sides – either it is or isn’t an implicit part of your job.   Stephen Heard falls into the former group, arguing that there are many things which are implicitly part of our jobs as academics, and which don’t make it into our job description.

I agree wholeheartedly with Stephen that peer-reviewing is not in my job description.  In fact it is nowhere to be seen.  It’s not in my employment contract either, nor has it ever been in any of my lists of annual objectives, after 16 years in academia.  In fact, I don’t think it’s ever been discussed, either in annual appraisal meetings or in my annual meetings with the institute director.  It doesn’t make it on to my CV either.

Here is a key point: I could never write another peer review for the rest of my career and my career would not suffer.  Not one bit.

It is not part of my job and I am not paid to do it.

(for the record, I do peer reviews! For free!)

In an increasingly pressurised environment where the only factors that influence my career progression are papers published and grants won; with over 6500 e-mails in my inbox which I have lost control of; and with a to-do list I have no chance of ever finishing, prioritisation is essential.  The key for any task is to be important enough to get into the “action” zone on my to do list.  Does peer review manage that?  Occasionally, but not often.

Would it get there more often if I were paid to do it?  Absolutely.  Why?  Because I have bills to pay and a small family, and every little helps.  I imagine this is even more true for post-docs and early career researchers.  Why should they do something for free, often for profit-making organisations, when it doesn’t affect their career prospects one tiny bit?  The answer is simple: they shouldn’t.

A common argument is this: if everyone stopped peer reviewing, science would grind to a halt.  Well unfortunately that ignores reality.  If delivery drivers stopped driving, would there be no food on the shelves?  No.  Because we wouldn’t and couldn’t let that happen.  We’d find whatever incentive needed to be found, and make sure the drivers still drove.  The same is true of peer review.  If you are struggling to find people, there is a simple solution, one that is as old as money itself: pay them.

(note: I do many peer reviews and I will continue to do so; free.  However, I believe it is time to re-think incentives, and yes, it is time to pay people for peer reviews)

It’s a bit POINTLESS if you forget the POINTLESS in POINTLESS ADMIN

Rather predictably, a Guardian article railing against pointless admin has stimulated a response from the university admin community.   Unfortunately, the response kind of misses the point – academics are annoyed at POINTLESS ADMIN, not at administrators in general (though the two are not entirely unrelated).

Here’s the thing – there is both pointless and essential administration, and at the same time, there are both excellent and poor administrators.  An attack on pointless admin is not an attack on good administrators.   We all can recognise groups of excellent administrators without whom the place would fall apart.  No-one is attacking them.

So what is pointless admin?  There is so much of it I don’t know where to start.  Perhaps the best example is the recording of academic outputs.  My University insists I put them in PURE; Wellcome Trust insist I have an ORCID account; and BBSRC insist I use ResearchFish.  You can accept the motivation for this whilst at the same time recognising that it is genuinely pointless replication of effort.  Academics are angry not because this happens once or twice, but because it happens all the time and it is increasing in frequency.

So what makes a good administrator?  Well, I have always said “A bad administrator reminds you to do something; a good administrator does it for you”.  There are caveats, but it’s a good starting point.  A quick guide for administrators might be something like this: “Is this a tick-box exercise?  Can you tick the box?  Then tick the box.  Please.  Thank you!  Does it really need the academic’s input?  Can they send bullet points and you do the rest?  Let’s do that then!  Otherwise, are there any parts you can fill in?  I’d love for you to fill them in!  Is it possible to visit and not send yet another e-mail?  That would be awesome.  You can ask if a follow-up e-mail would help and then send one afterwards.  Is there a possibility that one system/form can be filled in and then the same information copied to other systems/forms?  Can you do the copying?  Awesome!  And, as an aside, can we maybe have fewer systems/forms if they contain the same information?  Amazing”.

The obvious response to all of this is “why should the admins have to do all of that?  You arrogant ****!  Do your own admin!”.  This misses an unfortunate point – we are drowning.  If academia were a near-death experience, then academics are 200 yards offshore, in a rip-tide, barely breathing, arms in the air, waving for help.  We need help.  Our job is to teach, to publish papers and to win research grants – research grants with overheads – all of which go some way to sustaining the very institution we all work for.  Every minute spent doing admin is a minute we are not doing the thing that keeps the lights on.

Please.  Help us.  Remove POINTLESS ADMIN.  And be a good administrator, not a bad one.

Judge rules in favour of Oxford Nanopore in patent dispute with PacBio

Forgive me if I get any of the details wrong, I am not a lawyer, but the title of this post is my take on a judgement passed down in the patent infringement case PacBio brought against ONT.

To get your hands on the documentation, you need to register and log in to EDIS, click “Advanced Search”, do a search for “single molecule sequencing” and click the top hit.

My interpretation of the documentation is that the judge has massively limited the scope of the patents in question through a narrow construction of the term “single molecule sequencing”.  ONT argued that, in the patents in question, “single molecule sequencing” refers only to “sequencing of a single molecule by template-dependent synthesis”, and the judge agreed with this definition.

[excerpt from the judgment: construction of “single molecule sequencing”]

All claims are then subsequently limited to template-dependent synthesis, which of course is NOT what Oxford Nanopore do.

[excerpt from the judgment: claims limited to template-dependent synthesis]

The document then goes into an area that would make all biological ontologists rejoice – THEY TRY AND DEFINE THE TERM “SEQUENCE”.  I can almost hear the voices shouting “I told you so!” coming out of Manchester and Cambridge as I write 😉

Beautiful boxplots in base R

As many of you will be aware, I like to post some R code, and I especially like to post base R versions of ggplot2 things!

Well, these amazing boxplots turned up on GitHub – go and check them out!

So I did my own version in base R – check out the code here and the result below.  Enjoy!
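In case that link ever rots, here is a minimal sketch of the general approach in base R – simulated data and my own styling choices, not the exact script linked above:

# a minimal sketch of styled boxplots in base R
# (simulated data; not the exact script linked above)
set.seed(42)

# simulate three groups of values
d <- list(A = rnorm(200, 5, 1),
          B = rnorm(200, 6, 1.5),
          C = rnorm(200, 4, 0.8))

# boxplots with no outliers drawn, no frame, custom colours
boxplot(d, outline=FALSE, frame=FALSE, lty=1,
        col=c("#3182bd","#31a354","#de2d26"), border="grey30",
        boxwex=0.5, las=1, ylab="value",
        main="Styled boxplots in base R")

# overlay the raw data as jittered, semi-transparent points
stripchart(d, vertical=TRUE, method="jitter", pch=16, cex=0.4,
           col=rgb(0,0,0,0.25), add=TRUE)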

[figure: the resulting boxplot]

HiSeq move over, here comes Nova! A first look at Illumina NovaSeq

Illumina have announced NovaSeq, an entirely new sequencing system that completely disrupts their existing HiSeq user-base.  In my opinion, if you have a HiSeq and you are NOT currently planning a migration to NovaSeq, then you will be out of business in 1-2 years’ time.  It’s not quite the death knell for HiSeqs, but it’s pretty close, and moving to NovaSeq over the next couple of years is now the only viable option if you see Illumina as an important part of your offering.

Illumina have done this before, it’s what they do, so no-one should be surprised.

The stats

I’ve taken the stats from the spec sheet linked above and produced the following.  If there are any mistakes let me know.

There are two machines – the NovaSeq 5000 and 6000 – and 4 flowcell types – S1, S2, S3 and S4.  The 6000 will run all four flowcell types and the 5000 will only run the first two.  Not all flowcell types are immediately available, with S4 scheduled for 2018 (See below)

                              S1     S2     S3     S4     2500 HO  4000   X
Reads per flowcell (billion)  1.6    3.3    6.6    10     2        2.8    3.44
Lanes per flowcell            2      2      4      4      8        8      8
Reads per lane (million)      800    1650   1650   2500   250      350    430
Throughput per lane (Gb)      240    495    495    750    62.5     105    129
Throughput per flowcell (Gb)  480    990    1980   3000   500      840    1032
Total lanes                   4      4      8      8      16       16     16
Total flowcells               2      2      2      2      2        2      2
Run throughput (Gb)           960    1980   3960   6000   1000     1680   2064
Run time (days)               2-2.5  2-2.5  2-2.5  2-2.5  6        3.5    3

For X Ten, simply multiply the X figures by 10.  These are maximum figures, and assume maximum read lengths.

Read lengths available on NovaSeq are 2×50, 2×100 and 2×150bp.  This is unfortunate, as the sweet spot for RNA-Seq and exomes is 2×75bp.

As you can see from the stats, the massive innovation here is the cluster density, which has hugely increased. We also have shorter run times.
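To make the comparison concrete, a quick derivation of per-day throughput from the table (maximum figures, and taking the slower 2.5-day estimate for the NovaSeq flowcells):

# Gb per day, derived from run throughput and run time in the table
# (maximum figures; NovaSeq taken at the slower 2.5-day estimate)
run_gb   <- c(S1=960, S2=1980, S3=3960, S4=6000, X=2064)
run_days <- c(S1=2.5, S2=2.5, S3=2.5, S4=2.5, X=3)
round(run_gb / run_days)
#   S1   S2   S3   S4    X
#  384  792 1584 2400  688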

So what does this all mean?

Well let’s put this to bed straight away – HiSeq X installations are still viable.  This from an Illumina tech on Twitter:

 

We learn two things from this – first, that HiSeq X is still going to be cheaper for human genomes until S4 comes out; and second, that S4 won’t be out until 2018.

So Illumina won’t sell any more HiSeq X, but current installations are still viable and still the cheapest way to sequence genomes.

I also have this from an unnamed source:

speculation from Illumina rep “X’s will be king for awhile. Cost per GB on those will likely be adjusted to keep them competitive for a long time.”

So X is OK, for a while.

What about HiSeq 4000?  Well, to understand this, you need to understand the 4000 and the X.

The HiSeq 4000 and HiSeq X

First off, the HiSeq X IS NOT a human genome only machine.  It is a genome-only machine.  You have been able to do non-human genomes for about a year now.  Anything you like as long as it’s a whole genome and it’s 30X or above.  The 4000 is reserved for everything else because you cannot do exomes, RNA-Seq, ChIP-Seq etc on the HiSeq X.  HiSeq 4000 reagents are more expensive, which means that per-Gb every assay is more expensive than genome sequencing on Illumina.

However, no such restrictions exist on the NovaSeq – which means that every assay will now cost the same on NovaSeq.   This is what led me to say this on Twitter:

At Edinburgh Genomics, roughly speaking, we charge approx. 2x as much for a 4000 lane as we do for an X lane.  Therefore, per Gb, RNA-Seq is approx. twice as expensive as genome sequencing.  NovaSeq promises to make this per-Gb cost the same, so does that mean RNA-Seq will be half price?  Not quite.  Of course, no-one does a whole lane of RNA-Seq; we multiplex multiple samples in one lane.  When you do this, library prep costs begin to dominate, and for most of my own RNA-Seq samples, library prep is about 50% of the per-sample cost, and sequencing the other 50%.  NovaSeq promises to halve the sequencing costs, which means the per-sample cost will come down by 25%.
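To make that arithmetic explicit, a quick back-of-envelope sketch – the cost units are invented, and only the 50:50 split and the halved sequencing cost come from the paragraph above:

# back-of-envelope per-sample RNA-Seq cost (cost units invented;
# only the 50:50 split and the halved sequencing are from the text)
library_prep <- 100                      # per-sample library prep (50%)
sequencing   <- 100                      # per-sample sequencing (50%)

before <- library_prep + sequencing      # 200
after  <- library_prep + sequencing/2    # 150 once sequencing halves

(before - after) / before                # 0.25, i.e. 25% off per sample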

These are really rough numbers, but they will do for now.  To be honest, I think this will make a huge difference to some facilities, but not to others.  Larger centres will absolutely need to grab that 25% reduction to remain competitive, but smaller, boutique facilities may be able to ignore it for a while.

Capital outlay

Expect to pay $985k for a NovaSeq 6000 and $850k for a 5000.

Time issues

One supposedly big advantage is that NovaSeq takes 40 hours to run, compared to the existing 3 days for a HiSeq X.  Comparing like with like, that’s 40 hours vs 72 hours.  This might be important in the clinical space, but not for much else.

Putting this in context: when you send your samples to a facility, they will be QC-ed first, then put in the library prep queue, then the sequencing queue, then QC-ed bioinformatically before finally being delivered.  Let’s be generous and say this takes 2 weeks.  Out of that, sequencing time is 3 days.  So instead of waiting 14 days, you’re waiting 13 days.  Who cares?

Clinically having the answer 1 day earlier may be important, but let’s not forget, even on our £1M cluster, at scale the BWA+GATK pipeline itself takes 3 days.  So again you’re looking at 5 days vs 6 days.  Is that a massive advantage?  I’m not sure.  Of course you could buy one of the super-fast bioinformatics solutions, and maybe then the 40 hour run time will count.

Colours and quality

NovaSeq marks a switch from the traditional HiSeq 4 colour chemistry to the quicker NextSeq 2 colour chemistry.  As Brian Bushnell has noted on this blog, NextSeq data quality is quite a lot worse than HiSeq 2500, so we may see a dip in data quality, though Illumina claim 85% above Q30.

 

University of California makes legal move against Roger Chen (and Genia)

The relationship between sequencing companies is frosty and beset by legal issues, which I’ve covered before here and here.  Keith Robison tends to cover these in more detail 😉

Most recently, PacBio moved against Oxford Nanopore, we think claiming that ONT’s 2D technology violated their patent on CCR (link).

Well now the absolute latest is a filing by the University of California against Roger Chen and therefore Genia. If you click through to the documents (requires registration) you’ll see that UC claim Chen, with others, produced key inventions whilst at UC that he later assigned to Genia, but which should have automatically been assigned to UC according to UC’s “oath of allegiance”, which Chen signed as a UC employee.

It remains to be seen how important this is, and no doubt Chen/Genia/Roche will fight tooth and nail; however, if the courts decide in UC’s favour it could spell the end of Genia, and at the very least see a large cash settlement with UC.

Fascinating times!

Is the long read sequencing war already over?

My enthusiasm for nanopore sequencing is well known; we have some awesome software for working with the data; we won a grant to support this work; and we successfully assembled a tricky bacterial genome.  This all led to Nick and I writing an editorial for Nature Methods.

So, clearly some bias towards ONT from me.

Having said all of that, when PacBio announced the Sequel, I was genuinely excited.  Why?  Well, revolutionary and wonderful as the MinION was at the time, we were getting ~100Mb runs.  Amazing technology, mobile sequencer, tricorder, just incredible engineering – but 100Mb was never going to change the world.  Some uses, yes; but for other uses we need more data.  Enter Sequel.

However, it turns out Sequel isn’t really delivering on promises.  Rather than 10Gb runs, folk are getting between 3 and 5Gb from the Sequel:

At the same time, MinION has been going great guns:

Whilst we are right to be skeptical about ONT’s claims about their own sequencer, other people who use the MinION have backed up these claims and say they regularly get figures similar to this. If you don’t believe me, go get some of the World’s first Nanopore human data here.

PacBio also released some data for Sequel here.

So how do they stack up against one another?  I won’t deal with accuracy here, but we can look at #reads, read length and throughput.

To be clear, we are comparing “rel2-nanopore-wgs-216722908-FAB42316.fastq.gz”, a fairly middling run from the NA12878 release, with “m54113_160913_184949.subreads.bam”, one of the released Sequel SMRT cell datasets.

Read length histograms:

[figure: read-length histograms, MinION vs Sequel]

As you can see, the longer reads are roughly equivalent in length, but MinION has far more reads at shorter read lengths.  I know the PacBio samples were size-selected on Blue Pippin, but I’m unsure about the MinION data.

The MinION dataset includes 466,325 reads, over twice as many as the Sequel dataset at 208,573 reads.

In terms of throughput, MinION again came out on top, with 2.4Gbases of data compared to just 2Gbases for the Sequel.
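For anyone wanting to reproduce these summaries, here is a minimal sketch – it assumes you have already extracted the read lengths from the fastq/BAM into numeric vectors, and the vector names below are hypothetical:

# summarise a run from a vector of read lengths; 'lens' is assumed
# to hold one length per read, extracted beforehand from fastq/BAM
summarise_run <- function(lens, min_len=0) {
  lens <- lens[lens > min_len]
  c(reads = length(lens), gb = sum(lens)/1e9)
}

# hypothetical usage, with minion_lens and sequel_lens as vectors:
# summarise_run(minion_lens)          # all reads
# summarise_run(minion_lens, 1000)    # reads > 1,000bp
# summarise_run(sequel_lens, 10000)   # reads > 10,000bp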

We can limit to reads >1000bp, and see a bit more detail:

[figure: read-length histograms for reads >1,000bp]

  • The MinION data has 326,466 reads greater than 1000bp summing to 2.37Gb.
  • The Sequel data has 192,718 reads greater than 1000bp, summing to 2Gb.

Finally, for reads over 10,000bp:

  • The MinION data has 84,803 reads greater than 10000bp summing to 1.36Gb.
  • The Sequel data has 83,771 reads greater than 10000bp, summing to 1.48Gb.

These are very interesting stats!


This is pretty bad news for PacBio.  If you add in the low cost of entry for MinION, and the £300k cost of the Sequel, the fact that MinION is performing as well as, if not better than, Sequel is incredible.  Both machines have a long way to go – PacBio will point to their roadmap, with longer reads scheduled and improvements in chemistry and flowcells.  In response, ONT will point to the incredible development path of MinION, increased sequencing speeds and bigger flowcells.  And then there is PromethION.

So is the war already over?   Not quite yet.  But PacBio are fighting for their lives.

People are wrong about sequencing costs on the internet again

People are wrong about sequencing costs on the internet again and it hurts my face, so I had to write a blog post.

Phil Ashton, whom I like very much, posted this blog:

But the words are all wrong 😉  I’ll keep this short:

  • COST is what it COSTS to do something.  It includes all COSTS.  The clue is in the name.  COST. It’s right there.
  • PRICE is what a consumer pays for something.

These are not the same thing.

As a service provider, if the PRICE you charge to users is lower than your COST, then you are either SUBSIDISED or LOSING MONEY, and are probably NOT SUSTAINABLE.

COST, amongst other things, includes:

  • Reagents
  • Staff time
  • Capital cost or replacement cost (sequencer and compute)
  • Service and maintenance fees
  • Overheads
  • Rent

Someone is paying these, even if it’s not the consumer.  So please – when discussing sequencing – distinguish between PRICE and COST.
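As a sketch of what a fully-loaded COST calculation looks like – every figure below is invented purely for illustration:

# toy fully-loaded cost-per-Gb model (all figures invented)
annual_costs <- c(
  reagents  = 500000,  # consumables across the year
  staff     = 150000,  # staff time attributable to the service
  capital   = 200000,  # sequencer + compute, amortised per year
  service   = 100000,  # service and maintenance contracts
  overheads = 120000,  # institutional overheads
  rent      = 30000    # space
)

gb_delivered <- 50000             # total Gb delivered to users per year

sum(annual_costs) / gb_delivered  # cost per Gb (22 here); the PRICE
                                  # must cover this to be sustainable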

Thank you 

Reading data from Google Sheets into R

Reading data from Google Sheets into R is something you imagine should be really simple, but often is anything but.  However, the googlesheets package goes a long way towards solving this problem.

Let’s crack on with an example.

First, install the software:

install.packages("googlesheets")

We then need an example sheet to work with, and I’m going to use one from Britain Elects:

So go here and add this to your own collection (yes, you’ll need a Google account!) using the little “Add to my drive” icon to the right of the title, just next to the star.

Right, let’s play around a bit!

# load package
library(googlesheets)

# which google sheets do you have access to?
# may ask you to authenticate in a browser!
gs_ls()

# get the Britain Elects google sheet
be <- gs_title("Britain Elects / Public Opinion")

This should show you all of the sheets you have access to, and also select the one we want.

Now we can see which worksheets exist within the sheet:

# list worksheets
gs_ws_ls(be)

We can "download" one of the sheets using gs_read()

# get Westminster voting
west <- gs_read(ss=be, ws = "Westminster voting intentions", skip=1)

# convert to data.frame
wdf <- as.data.frame(west)

And hey presto, we have the data. Simples!
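Before plotting, it’s worth a quick sanity check on what came back (the column names are whatever the sheet defines; Con is one of the columns used in the plot below):

# quick sanity checks on the downloaded data
dim(wdf)          # rows x columns
head(wdf[, 1:6])  # first few columns
str(wdf$Con)      # the Conservative share, used in the plot below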

Now let's plot it:

# reverse so that data are forward-sorted by time
wdf <- wdf[nrow(wdf):1,]

# treat each poll as a single point on the x-axis
# (the sheet had 2769 rows when this was written; it grows over time)
n <- nrow(wdf)
dates <- 1:n

# smoothing parameter for lowess
fs <- 0.04

# plot Conservative share
plot(lowess(dates, wdf$Con, f=fs), type="l", col="blue", lwd=5, ylim=c(0,65), xaxt="n", xlab="", ylab="%", main="Polls: Westminster voting intention")

# add date labels to the x-axis, every 40th poll
axis(side=1, at=dates[seq(1, n, by=40)], labels=paste(wdf$"Fieldwork end date", wdf$Y)[seq(1, n, by=40)], las=2, cex.axis=0.8)

# plot Labour and LibDem shares
lines(lowess(dates, wdf$Lab, f=fs), col="red", lwd=5)
lines(lowess(dates, wdf$LDem, f=fs), col="orange", lwd=5)

# add UKIP; lowess can't handle NAs, so treat absent values as -50,
# which pushes those points off the bottom of the plot
ukip <- wdf$UKIP
ukip[is.na(ukip)] <- -50
lines(lowess(dates, ukip, f=fs), col="purple", lwd=5)

# add legend
legend(1, 65, legend=c("Con","Lab","LibDem","UKIP"), fill=c("blue","red","orange","purple"))


[figure: the resulting plot of Westminster voting intention]

From there to here

This is going to be quite a self-indulgent post – in summary, I am now a full Professor at the University of Edinburgh, and this is the story of my early life and career up until now.  Why would you want to read it? Well this is just some stuff I want to say, it’s a personal thing.  You might find it boring, or you may not.  There are some aspects of my career that are non-traditional, so perhaps that will be inspiring to others; to learn that there is another way and that there are multiple routes to academic success.

Anyway, let’s get it over with 😉

Early life

I was born and bred in Newcastle.  I come from a working class background (both of my grandfathers were coal miners) and growing up, most of my extended family lived in council houses.  I went to a fairly average comprehensive (i.e. state) school, which I pretty much hated.  Something to endure rather than enjoy.  Academic ability was not celebrated – I don’t know the exact figures but I’d guess less than 5% of my fellow pupils ended up in higher education.  Being able to fight and play football were what made you popular, and I could do neither 🙂 Choosing to work extra hours so I could learn German didn’t improve my popularity much…

My parents both worked hard – really hard – and I had everything I needed, but not everything I wanted.  Looking back, this was a good thing.  My Dad especially is a bit of a hero.  He started off as a “painter and decorator” – going to other people’s houses and doing DIY, painting, putting up shelves, wallpapering, laying carpets etc.  As if doing that, having a small family (I have an older brother) and buying a house weren’t enough, he studied part-time for a degree in sociology at Newcastle Polytechnic, famously going off to do painting and decorating jobs between lectures, and at the weekends.  After graduating, he went on to have a successful career in adult social care, and finished his working life as a lecturer at a further education college.  My mother also worked in social care, and together they supported me through University (BSc and MSc).  Just amazing, hard-working parents.  I attribute my own work ethic to both of them; they set the example and both my brother and I followed.

Education

I did a BSc hons in Biology at the University of York.  Some bits I loved, some bits I didn’t, and I came out with a 2.1.

One practical sticks in my mind – in pairs, we had 3 hours to write a program in BASIC (on a BBC B!) to recreate a graph (yes – we had to recreate an ASCII graph) based on an ecological formula (I don’t remember which).  I finished it in ten minutes and my partner didn’t even touch the keyboard.  Writing this now reminds me of two things – firstly, hours sat with my Mum patiently punching type-in programs into our Vic 20 so I could play space invaders (written in, you guessed it, BASIC); and secondly, hacking one of my favourite ZX Spectrum games, United, so I could have infinite money to spend on players.  Happy days!

At the end of my undergraduate degree, I didn’t know what else to do, so I took the MSc in Biological Computation, also at the University of York.  This was awesome and I loved every minute (previous graduates include Clive Brown, but more about that later).  The syllabus was wide-ranging – we covered programming (in C), mathematical modelling, statistics (lots and lots of statistics), GIS and bioinformatics, among many other courses.  It was hard work but wonderful.  It really taught me independence; that I could start the day knowing nothing, and end the day having completed major tasks, using nothing but books and the internet.

At the end of the course I had three job offers and offers to do a PhD – looking back this was a pivotal decision, but I didn’t realise it at the time.  I chose a job in the pharmaceutical industry.

Early career

My first job was a 12 month contract at GlaxoWellcome in Stevenage.  GW had invested pretty heavily in gene expression arrays.  Now, these were very different to modern microarrays – this was pre-genome, so we didn’t know how many genes were in the human genome, nor what they did.  What we had were cDNA libraries taken from various tissues and normalised in various ways.  These would be spotted on to nylon membranes, naively – we didn’t know what was on each spot.  We then performed experiments using radioactively labelled mRNA, and any differentially expressed spots were identified.  We could then go back to the original plate, pick the clone, sequence it and find out what it was.

It’s now obvious that having a genome is really, really useful 😉

I was in a team responsible for all aspects of the bioinformatics of this process.  We manufactured the arrays, so we wrote and managed the LIMS; and then we handled the resulting data, including statistical analysis.  I worked under Clive Brown (now CTO at Oxford Nanopore).  He asked me in interview whether I could code in Perl and I had never heard of it!  How times have changed…. he was very much into Perl in those days, and Microsoft – we wrote a lot of stuff in Visual Basic.  Yes – bioinformatics in Visual Basic.  It can be done…

I spent about four years there – working under Francesco Falciani for a time – then left to find new challenges.  I spent an uneventful year at Incyte Genomics working for Tim Cutts (now at Sanger) and David Townley (now at Illumina), before we were all unceremoniously made redundant.  With the genome being published, Incyte’s core business disappeared and they were shutting everything down.

Remember this when you discuss the lack of job security in academia – there is no job security in industry either.  The notion of job security disappeared with the baby boomers, I’m afraid.

I managed to get a job at Paradigm Therapeutics, also in Cambridge, a small spin-out focused on mouse knockouts and phenotyping.  I set up and ran a local Ensembl server (I think version 12) including the full genome annotation pipeline.  This was really my first experience of Linux sys/admin and running my own LAMP server.  It was fun and I enjoyed it, but again I stayed less than a year.  Having been made redundant from Incyte, my CV was on a few recruitment websites, and from there an agent spotted me and invited me to apply for my first senior role – Head of Bioinformatics at the Institute for Animal Health (now Pirbright).

Academia

So, my first job in academia.  It’s 2002.  I am 28.  I have never been in academia before.  I have no PhD.  I have never written a paper, nor written a grant (I naively asked if we got the opportunity to present our grant ideas in person in front of the committee).  I am a group leader/PI so I am expected to do both – I need to win money to build up a bioinformatics group.  What the institute needed was bioinformatics support; but they wanted me to win research grants to do it.

This job was really, really tough, and I was hopelessly ill-equipped to do it.

My first grant application (to create and manage pathway genome databases for the major bacterial pathogens we worked on) was absolutely annihilated by one reviewer, who wrote a two-page destruction of all of my arguments.  Other reviewers were OK, but this one killed it.  Disaster.  I figured out that no-one knew who I was; I needed a footprint, a track record; I needed to publish (I’d still like to find out who that reviewer was….)

In 2005 I published my first paper, and in 2006 my second.  You’ll note I am the sole author on both.  Take from that what you will.  Meanwhile I was also supporting other PIs at the institute, carrying out data analyses, building collaborations, and these also led to papers; and so slowly but surely I began building my academic life.  I got involved in European networks such as EADGENE, which eventually funded a post-doc in my lab.  I won a BBSRC grant from the BEP-II initiative to investigate co-expression networks across animal gene expression studies; and I successfully applied for and won a PhD studentship via an internal competition.  With papers and money coming in, the institute agreed to provide me with a post-doc from their core support and hey presto, I was up and running.  I had a research group publishing papers, and providing support to the rest of the institute.  It only took me five or six years of hard work; good luck; and some excellent support from one of my managers, Fiona Tomley.  During my time at IAH I had 7 managers in 8 years; Fiona was the only good one, and she provided excellent support and advice I’ll never forget.

So what are the lessons here?  I think this was an awful job and had I known better I wouldn’t have taken it.  At the beginning there was little or no support, and I was expected to do things I had no chance of achieving.  However, I loved it, every minute of it.  I made some great friends, and we did some excellent work.  I forged collaborations that still exist today.  I worked really, really hard and it was quite stressful – at one point my dentist advised gum shields as I was grinding my teeth – but ultimately, through hard work and good luck, I did it.

It’s worth pointing out here that I believe this was only possible in bioinformatics.  My skills were in such short supply that a place like IAH had no choice but to take on an inexperienced kid with no PhD.   This couldn’t have happened in any other discipline, in my opinion.  It’s sad, because ultimately I showed that if you invest in youth they can and will make a success of it.  Being older, or having a PhD, is no guarantee of success; and being young and inexperienced is no guarantee of failure.

Roslin

And so, in 2010, to Roslin.  There was quite an exodus from IAH to Roslin as the former shrank and the latter grew.  I was lucky enough to be part of that exodus.  My role at Roslin was to build a research group and to help manage their sequencing facility, ARK-Genomics.  I certainly don’t claim all of the credit, but when I arrived ARK-Genomics had a single Illumina GAIIx and when we merged to form Edinburgh Genomics, we had three HiSeq 2000/2500.  We also had significantly better computing infrastructure, and I’m proud that we supported well over 200 academic publications.  My research is also going really well – we have, or have had, grants from TSB, BBSRC, and industry, and we’re publishing plenty of papers too.  The focus is on improved methods for understanding big sequencing datasets and how they can contribute to improved farm animal health and production.  Scientifically, we are tackling questions in gut microbiome and host-pathogen interactions.

Roslin is a great place to work and has been very supportive; the group enjoys a small amount of core support through which we help deliver Roslin’s core strategic programmes; and it is an incredibly rich, dynamic environment with over 70 research groups and over 100 PhD students.  You should come join us!

In 2015, I was awarded a PhD by research publication – “Bioinformatic analysis of genome-scale data reveals insights into host-pathogen interactions in farm animals” (I don’t think it’s online yet) – and in 2016 promoted to Personal chair in bioinformatics and computational biology.  Happy days!

A note on privilege

Clearly being a white male born in the UK has provided me with advantages that, in today’s society, are simply not available to others.  I want readers of this blog to know I am aware of this.  I wish we lived in a more equal society, and if you have suggestions about ways in which I could help make that happen, please do get in contact.

So what next?

More of the same – I love what I do, and am lucky to do it!  More specifically, I want to be a champion of bioinformatics as a science and as an area of research; I want to champion the young and early career researchers; I want to continue to train and mentor scientists entering or surviving in the crazy world of academia; and I want to keep pushing for open science.  Mostly, I want to keep being sarcastic on Twitter and pretending to be grumpy.

Cheers 🙂

 
