Opiniomics

bioinformatics, genomes, biology etc. "I don't mean to sound angry and cynical, but I am, so that's how it comes across"


The three technologies bioinformaticians need to be using right now

I’m old. Sometimes I feel really old. I’m old enough to remember when you had to install things manually, and that’s if you were allowed to; worse was depending on an IT department to install software for you. I am old enough to remember when changing servers meant reinstalling everything again from scratch. I am old enough to have been frightened by dependency hell, where you do not know all the dependencies of the software you need to install, nor whether they will conflict with other software you also need to install. It’s not too long ago that my “pipelines” were simple Perl or bash scripts, with for() and while() loops in them, whose purpose was to run things in the correct order and maybe do some basic checking to ensure things ran.

My bone-chilling fear is that many of you, maybe even most of you, still do this. My fear is that if you are a pet bioinformatician, you don’t have anyone around to tell you that it needn’t be like this anymore. How do I sleep? Sometimes very badly.

Look, this isn’t a tutorial, so I’ll get straight to the point. These are the three technologies that, as a bioinformatician, you need to be engaging with right now. Not next week, or next month. Now. They are not new technologies. Now is not the time to put this off; it’s time to realise that if you don’t use them, you are already out of date, and the longer you leave it, the worse things will become.

Here they are:

1. Software environments and/or containers

Software environments can be thought of as a folder where you install all the software you need to run a particular analysis. “But I already have one of those!” I hear you cry. Ah, but there’s more. Software environments don’t require root access; they handle all software dependencies; they are exportable, shareable and re-useable; they can be used with multiple different flavours of Linux/Unix; and in the vast majority of cases they make installing and maintaining software simple and pain-free. You can have multiple environments (e.g. one for each pipeline you run), and you can also version them. The one I am most familiar with is conda (https://docs.conda.io/en/latest/), which has an associated BioConda project (https://bioconda.github.io/) with many bioinformatics tools set up and ready to go.
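To make that concrete, here is a minimal sketch of the conda workflow (the environment name, tools and versions are just examples):

# create a named environment, pulling tools from the bioconda channel
conda create -n assembly -c conda-forge -c bioconda spades=3.13 samtools=1.9
conda activate assembly
# export the environment so others (or future you) can rebuild it exactly
conda env export > assembly.yml
conda env create -f assembly.yml   # rebuild it on another machine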

Containers are similar, in that they are a place you install all of the software you need to run a pipeline. Containers are a form of lightweight VM; the idea is that you describe once how to install software into a container, and then you use that container multiple times, i.e. each time you run your pipeline. No changes are made to your local machine – you simply download the container and run your pipeline inside it. Again, you can set up and use multiple containers, and version them. The most common container tools are Docker and Singularity, and there is a BioContainers project (https://biocontainers.pro/#/).
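Again, a minimal sketch – the image tag here is illustrative, so check quay.io for current BioContainers tags:

# pull a ready-made samtools image from the BioContainers registry
docker pull quay.io/biocontainers/samtools:1.9--h8571acd_11
# run a tool inside it, mounting the current directory as /data
docker run --rm -v $PWD:/data quay.io/biocontainers/samtools:1.9--h8571acd_11 samtools flagstat /data/aln.bam
# the same image works with Singularity, no root required
singularity exec docker://quay.io/biocontainers/samtools:1.9--h8571acd_11 samtools --version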

Using either or both of the above, installing software is now simple in the vast majority of cases. Please use them.

2. Workflow management systems

It’s a law of the universe that every bioinformatician will say they have a “pipeline”, but often I find this is actually a bash, Perl or Python script. They were great for their time, but it is time to move on.

Workflow management systems describe true pipelines. Each has its own, often simple, language; the language describes jobs, and each job has input and output. The input of one job can be the output of another, and the jobs will be executed in the correct order to make sure all of the inputs are present before running. They handle wildcards and scale to tens of thousands of input files. They run on your local computer, they integrate with common HPC submission systems, they can submit jobs to the cloud, and they often also integrate with software environments and containers – so each job in your workflow can run within its own environment or container. They track which jobs succeeded and which failed, they can re-submit those that failed, they can restart where the pipeline finished last time, they clean up when things fail, and you can add more input files at a later date and they will only re-run the jobs they need to. There are hundreds of these, but those I am most familiar with are Snakemake (https://snakemake.readthedocs.io/en/stable/) and NextFlow (https://www.nextflow.io/).
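To give a flavour, here is a minimal sketch of a Snakemake workflow (the file names and environment file are illustrative); Snakemake works out the execution order from the input/output chain:

# Snakefile: sort and index a BAM for every sample
SAMPLES = ["sampleA", "sampleB"]

rule all:
    input: expand("sorted/{s}.bam.bai", s=SAMPLES)

rule sort:
    input: "raw/{s}.bam"
    output: "sorted/{s}.bam"
    conda: "envs/samtools.yml"   # each job runs in its own environment
    shell: "samtools sort {input} -o {output}"

rule index:
    input: "sorted/{s}.bam"
    output: "sorted/{s}.bam.bai"
    conda: "envs/samtools.yml"
    shell: "samtools index {input}"

Run it with snakemake --use-conda --cores 4; add more samples later and only the missing jobs get re-run.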

Honestly, adopting (1) and (2) will change your life. Do it.

3. Cloud

The third is optional, and also not new, but it is now so much easier to access. No matter how big your University cluster, nothing will match the size of the compute clouds of Amazon, Microsoft and Google. These things are vast, they run Netflix and Dropbox, and once you start using the cloud, you realise there are essentially no computational limits to your research.

What’s amazing about cloud is that the vast majority of it sits around doing nothing most of the time, and the companies sell off this spare capacity at a massive discount. The downside is that if there is a surge in demand (everyone suddenly watching Netflix, say), they may kill your jobs; the upside is you only pay 10-20% of the advertised cost. And if you use a workflow management system, it can pick up where you left off anyway, so….
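As a hedged example of what buying that spare capacity looks like, using Google Cloud’s preemptible VMs (the instance name and machine type are made up):

# ask for a preemptible VM: a fraction of the on-demand price, but it can
# be reclaimed at any time – fine if your workflow manager can resume
gcloud compute instances create worker-1 --machine-type=n1-standard-16 --preemptible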

Snakemake can submit jobs to Google Cloud (via Kubernetes) and NextFlow can submit to Amazon. Broad adopted Google Cloud for GATK workflows and got the cost down to $5 per sample. Remember that the next time you pay your University HPC bill. There are now essentially no computational limits to your research – when you realise that, it frees your mind and you begin to think much bigger than you did before.

And before you say “I don’t have a credit card”, I am here to tell you that both Amazon and Google will send monthly invoices. It’s all good.

———-

There we have it. None of this is new, and apologies if I sound patronising, but if you are a bioinformatician and you’re not using any of the above, please, I beg of you, take a few days out to learn about them all, and start using them. It will change your life, for the better. If you are a student or a post-doc looking for a job, these are the technologies you will need to talk about in your interview. The future is here – in fact it arrived a few years ago – and the time to change is now.

Massive scale assembly of genomes from metagenomes in ruminants

Hopefully some of you will have seen our recent paper published in Nature Biotech:

Compendium of 4,941 rumen metagenome-assembled genomes for rumen microbiome biology and enzyme discovery
Robert D. Stewart, Marc D. Auffret, Amanda Warr, Alan W. Walker, Rainer Roehe & Mick Watson

Nature Biotechnology volume 37, pages 953–961 (2019)

In this paper we analyse samples from 283 beef cattle, generate 50-60M PE150 reads from each, perform single-sample assemblies and co-assemblies, bin the whole lot, and describe the 4,941 metagenome-assembled genomes (MAGs) that are at least 80% complete and at most 10% contaminated.

We also ran three MinION flowcells on one sample and generated lots of contigs >1Mb in length, and at least three whole bacterial chromosomes from novel bacterial species.

This was a real labour of love, and the entire dataset is available at DOI https://doi.org/10.7488/ds/2470 and at ENA accession PRJEB31266.

I just wanted to highlight a few things from the paper you may have missed in the details 😀

We have released all of our data

And I mean all of it. This includes the primary raw data, the single-sample metagenome assemblies, the metagenome co-assemblies, the bin layer (more on what that means below), the 4,941 MAGs, the annotations and predicted proteins, DIAMOND search hits against KEGG and UniProt – literally everything. If there is something you want that’s missing, let us know. Some of it may not have uploaded to ENA yet, but it should all be there very soon. If it is not in the ENA yet, it should be at the Edinburgh DataShare DOI https://doi.org/10.7488/ds/2470.

There are more MAGs than you think….

Recent massive-scale re-analyses of human microbiome data by the Segata, Finn and Kyrpides labs revealed tens of thousands of new genomes from all human body sites. In theory these studies dwarf our own, but they used a cut-off of 50% complete, whereas our 4,941 are all at least 80% complete.

Well, the new ENA model for metagenomic assemblies includes both a bin layer and a MAG layer. The MAGs are your final product – the fully analysed, cleaned and annotated genomes. In our case this is 4941. The bin layer is the layer of bins from which the MAGs derive. In our case we have chosen to put all bins that were at least 50% complete and no more than 10% contaminated into the bin layer. So in actual fact, whilst we do not talk about this in the paper, we are releasing 20,469 new MAGs from the rumen microbiome. These should all be available from the ENA soon.

We got a (near) complete bacterial chromosome from an Illumina metagenome assembly!!

Almost unheard of, in my experience, but we assembled a near-complete genome of a novel Proteobacteria species as a single contig, direct from the Illumina data, using IDBA-UD. We are not sure why the stars aligned and enabled us to do it, but we were certainly pleasantly surprised.

These small Proteobacteria are numerous and abundant in our rumen datasets and have also been found in other metagenomic datasets. They do not appear to have been cultured and sequenced, and given their small genomes, we have to speculate that they depend on other microbes to survive, and so can only be isolated from communities. Nevertheless, we now know their CAZyme profile, and one of my students is going to have a crack at culturing one of them in a placement with Alan Walker later this year.

MAGs can be very high quality

I know there are MAG-sceptics out there, but I would just like to point them to Supplementary Figure 13. The bottom left shows whole-genome alignments between an Illumina MAG (vertical axis) and a Nanopore whole-chromosome assembly. So what is the take-home? Well, these came from two different samples, were sequenced using two different technologies, and were assembled using completely different software and methods… yet they came up with the same answer. In other words, the Illumina MAG is of such high quality that it agrees, almost completely, with a single, whole-chromosome assembly from the Nanopore data. Remarkable.

This paper is a reference for IDEEL

I have blogged at great length about indels in long-read-assembled genomes, and we created a simple Snakemake pipeline (IDEEL – Indels are not ideal) to demonstrate these were a problem. I was pleased when this pipeline got a shout-out from Kat Holt on Twitter.

Until now, IDEEL has not had a formal publication; as we used it extensively in this paper and mention it explicitly by name, I propose that our new paper is the formal citation for IDEEL.

The real story is the proteins

Whilst the figures of 4,941 MAGs and the single-chromosome assemblies will take the headlines, for me the real story here is in the proteins. When clustering all known rumen microbial proteins, using a 90% identity cut-off, we ended up with 6.24 million clusters. However, 3.67 million of them are singletons that only contain proteins from our new MAGs. This is an incredible amount of new protein diversity, and figuring out their function will be the next major challenge for metagenomics researchers.

We also compared our CAZymes to CAZy, and found that CAZy poorly represents rumen CAZymes.

All microbiome work ends up with the need to culture

I increasingly believe we have to bring many of these microbes into culture if we are to take the next step in microbiome research. We predict over 2,000 novel species of bacteria and archaea. We cannot possibly know what they do, or how they behave and interact, without bringing them into culture.

There is a myth that 95% of rumen bacteria are unculturable. This myth needs to die. It already killed one of our grant applications 🙁 There is nothing inherently unculturable about the majority of these bugs, and even the small Proteobacteria I mention above should be possible with a bit of work and the right co-culturing conditions. So please let this myth die!! All bugs are culturable; some are just more difficult than others.

Onwards and upwards 🙂

Voice is dead, will email be far behind?

I have a not-so-secret secret – I never answer the phone. With landlines, it is 100% of the time, because I can’t see who is calling me. Despite the fact my work landline has voicemail, I don’t listen to the messages either. Not in the 8 years since I moved to Edinburgh. I must have 100+ voice messages on there – I will never listen to them. I won’t.

Guess what? Nothing bad happened. No-one died. Projects finished. I’m doing pretty well and the University is still standing. So, maybe, just maybe, we don’t need voice calls anymore?

My mobile is slightly different – I answer calls from four “people”: my wife, my parents, my brother and my friends. I do check the messages, though. If it’s an unknown number, I just ignore it – if it’s important, I’ll pick up the message and call back later.

Voice is dead. It’s so dead. Dead as a dodo – deader than a dodo, because it’s possible George Church might bring dodos back. Why? Because I learned that 99.9% of the calls I get are calls I don’t want. Cold calls, calls about stuff I don’t want to know or hear about, calls to remind me to do something I have already decided I don’t want to do, or have already scheduled. I don’t need calls. I could 100% turn off incoming calls for the rest of my life and I would be soooo happy.

Without doubt the biggest culprits are cold-call companies trying to sell me stuff. The vast majority of calls are of this type, and they have killed voice. It’s dead. Stone dead; we do not need it anymore. I do not need incoming calls. I no longer require this feature – please turn it off.

The question is, will email go the same way?

What with journal and conference spam, benign spam from my University, and the endless growth of cc and bcc, I’d say the percentage of emails I have zero interest in reading is above 90%, possibly even 95%. It is approaching the level at which voice died. Spam filters don’t help, especially with benign spam (yesterday’s was an email-to-all about a dying relative’s fundraiser; today’s (it’s only 08:30am) was about a bumper new crop of vegetables).

Spam within an organisation – deliberate, stupid, ridiculous spam driven by cc and bcc cancer and reply-all mailing lists – is possibly the most harmful self-mutilation large organisations can inflict on themselves, as they watch productivity grind to a halt and then wonder “why?”, possibly sending out a few all-staff surveys to find out.

Here is what I would love – please tell me if this exists. I want e-mail to be like ssh, with public/private keys: you only get to contact me if you have my public key, and it matches the current private key in my inbox. You want to contact me? You have to figure out how to get my public key. Hint: you won’t be able to get it by calling me.

/rant

On stuck records and indel errors; or “stop publishing bad genomes”

I’m in real danger of sounding like a stuck record, but readers of the blog will know I have a bee in my bonnet about researchers who (unwittingly, I’m sure) publish long-read assemblies with uncorrected indel errors in them. If you are new to the blog, please read about my simple method for detecting indels, a deeper dive into errors in single-molecule bacterial genomes, and my analyses of problems with published and unpublished single-molecule assemblies of the human genome.

If you can’t be bothered reading, then the summary is:

  • The major error mode of BOTH single-molecule sequencing technologies (PacBio and Nanopore) is insertions/deletions
  • Once a genome is assembled, some of these errors remain in the assembly
  • If they are uncorrected, they inevitably cause a frameshift or premature stop codon in protein-coding regions
  • It’s not that you can’t correct these errors; it’s that mostly, outside of the top assembly groups in the world, people don’t

Latest off the line is this cool paper – High-Quality Whole-Genome Sequences for 77 Shiga Toxin-Producing Escherichia coli Strains Generated with PacBio Sequencing. Being a man with a pipeline, I can just slot these right in and press go. That’s what I did.

The methods section is remarkably brief:

The sequence reads were then filtered and assembled de novo using Falcon, Canu, or the PacBio Hierarchical Genome Assembly Process version

However, the GenBank accession numbers are all there and they have more details on which genome was assembled with which software tool.

Importantly, though, there is no mention of whether Quiver, Arrow, Racon or Pilon were used to correct the assemblies. I know that Canu and HGAP have read error correction “built in”, but we have found this is often not enough, and a second or third round of correction is needed. Whether this occurred on these genomes I have no idea.

My basic approach is to take each genome, predict the protein sequences using Prodigal, search those against UniProt TrEMBL using DIAMOND, then compare the length of each predicted protein with the length of its top hit. What we should see is a distribution tightly clustered around 1, i.e. the predicted protein should be about the same length as its top hit from the database. We then look at the number of proteins that are less than 90% of the length of the top hit.
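If you want to run this yourself, here is a rough sketch of that loop, assuming Prodigal and DIAMOND are installed and that trembl.dmnd is a DIAMOND-formatted TrEMBL database (all file names are illustrative):

for fa in genomes/*.fa; do
    acc=$(basename "$fa" .fa)
    # predict proteins from the genome
    prodigal -i "$fa" -a "$acc.faa" -q
    # report the top hit only, with query and subject lengths
    diamond blastp -d trembl.dmnd -q "$acc.faa" -k 1 \
        -f 6 qseqid sseqid qlen slen -o "$acc.hits.tsv"
    # count predicted proteins shorter than 90% of their top hit
    awk -v acc="$acc" '$3/$4 < 0.9 {n++} END {print acc "\t" n+0}' "$acc.hits.tsv"
done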

Here are those data for 73 of the E. coli genomes:

Accession Software Version # proteins < 0.9 Coverage Strain Serotype # contigs Length
CP027640 HGAP v.3 1799 70.725 2014C-4705b O112:H21 2 5,329,029
CP027484 HGAP v.3 1636 57.246 2013C-4390 O76:H19 2 5,353,719
CP027675 HGAP v.3 1113 58.817 88-3510b O172:H25 2 5,140,386
CP027672 FALCON v0.3.0 1089 128.628 2014C-3003b O76:H19 3 5,234,640
CP027376 HGAP v.3 714 166.017 2013C-4404 O91:H14 4 5,009,822
CP027363 HGAP v.3 608 121.382 88-3001 O165:H25 2 5,195,753
CP027445 HGAP v.3 608 93.11 2013C-3492b O172:H25 2 5,196,105
CP027325 HGAP v.3 607 122.932 2013C-4830 O165:H25 3 5,135,675
CP027459 HGAP v.3 591 185.92 90-3040b O172:H25 2 5,253,712
CP027763 HGAP v.3 544 83.737 2015C-3125b O145:H28 3 5,471,132
CP027591 HGAP v.3 521 142.9 2014C-3011b O177:H25 4 5,168,350
CP027597 HGAP v.3 516 151.359 86-3153b O5:H9 2 5,342,528
CP027344 HGAP v.3 487 39.545 2014C-3946 O111:H8 3 5,264,938
CP027587 HGAP v.3 480 96.558 2013C-4974b O5:H9 2 5,235,560
CP027544 HGAP v.3 471 79.656 2013C-3264b O103:H25 2 5,486,407
CP027331 HGAP v.3 416 44.034 2013C-3277 O26:H11 4 5,438,694
CP027548 HGAP v.3 396 67.785 2014C-3061b O156:H25 2 5,303,935
CP027362 HGAP v.3 388 121.374 95-3192 O145:H28 1 5,385,516
CP027338 HGAP v.3 381 92.732 2014C-3051 O71:H11 2 5,597,475
CP027340 HGAP v.3 380 64.661 2015C-3121 O91:H14 2 5,366,577
CP027552 HGAP v.3 364 118.255 2015C-4498b O117:H8 2 5,434,442
CP027335 HGAP v.3 341 134.69 2014C-3716 O26:H11 3 5,568,215c
CP027472 HGAP v.3 328 47.07 2014C-3050b O118:H16 2 5,671,594
CP027454 HGAP v.3 322 69.53 2014C-4423b O121:H19 3 5,338,915
CP027351 HGAP v.3 321 55.972 2014C-3655 O121:H19 2 5,442,537
CP027317 HGAP v.3 320 120.949 2015C-3107 O121:H19 2 5,388,260
CP027368 HGAP v.3 320 126.946 2014C-3307 O178:H19 3 4,965,987
CP027555 HGAP v.3 310 121.607 2013C-3513b O186:H11 3 5,584,939
CP027766 HGAP v.3 308 119.094 2013C-3342 O117:H8 2 5,489,451
CP027361 HGAP v.3 303 98.329 2014C-4639 O26:H11 3 5,325,246
CP027593 HGAP v.3 303 112.231 2013C-3304 O71:H8 4 5,309,950c
CP027572 HGAP v.3 301 91.841 2013C-3996 O26:H11 2 5,858,766c
CP027352 HGAP v.3 296 80.59 2012C-4606 O26:H11 3 5,647,195
CP027310 HGAP v.3 285 110.963 2014C-4135 O113:H21 2 4,949,048
CP027355 HGAP v.3 285 96.531 2013C-4991 O80:H2 4 5,367,251
CP027435 HGAP v.3 285 119.123 2014C-3599 O121:H19 2 5,400,138
CP027550 HGAP v.3 285 78.081 2015C-4136CT1b O145:H34 2 4,836,918
CP027328 Canu v.1.6 282 197.393 2014C-3741 O174:H8 3 5,394,679c
CP027586 HGAP v.3 278 169.219 2012EL-2448b O91:H14 1 5,272,286
CP027347 HGAP v.3 276 81.415 2013C-4361 O111:H8 2 5,317,846
CP027599 HGAP v.3 274 113.492 97-3250 O26:H11 3 5,942,969
CP027390 HGAP v.3 272 86.476 2015C-4944 O26:H11 2 5,802,748
CP027452 HGAP v.3 271 44.21 2014C-3338b O183:H18 2 4,799,014
CP027437 HGAP v.3 269 94.01 2012C-4221b O101:H6 3 5,012,557
CP027319 HGAP v.3 259 103.5 2014C-3084 O145:H28 4 4,717,123
CP027442 HGAP v.3 257 81.78 2013C-3252 O69:H11 3 5,636,732
CP027387 HGAP v.3 254 81.261 2014C-3057 O26:H11 2 5,645,983
CP027380 HGAP v.3 251 98.093 2013C-3250 O111:H8 6 5,401,672
CP027221 HGAP v.3 248 70.272 2015C-3101 O111:H8 3 5,313,278
CP027573 HGAP v.3 247 136.013 2013C-4081b O111:H8 4 5,411,943
CP027323 HGAP v.3 243 230.753 2013C-3033 O146:H21 2 5,426,201
CP027577 HGAP v.3 243 76.725 2013C-4225b O103:H11 2 5,646,446
CP027546 HGAP v.3 241 69.884 2013C-4187b O71:H11 2 5,509,931
CP027464 HGAP v.3 239 157.835 2013C-4248 O186:H2 8 5,243,827
CP027582 HGAP v.3 232 94.362 2013C-4538b O118:H16 2 5,680,428
CP027312 HGAP v.3 230 110.444 2013C-3181 O113:H21 1 5,167,951
CP027307 HGAP v.3 229 75.219 2015C-3108 O111:H8 3 5,364,442
CP027219 HGAP v.3 228 73.109 2015C-3163 O103:H2 2 5,500,189
CP027366 HGAP v.3 227 99.281 89-3156 O174:H21 2 5,065,883
CP027388 HGAP v.3 226 100.369 2011C-4251 O45:H2 2 5,440,026
CP027313 HGAP v.3 225 104.403 2014C-3550 O118:H16 4 5,549,395
CP027520 HGAP v.3 225 84.93 89-3506b O126:H27 3 5,178,386c
CP027461 HGAP v.3 222 149.21 95-3322b O22:H5 1 5,095,223
CP027342 HGAP v.3 215 89.272 2014C-4587 OUND:H19 2 5,040,163
CP027449 HGAP v.3 193 60.29 2014C-3097b O181:H49 3 5,077,228
CP027371 HGAP v.3 186 185.611 2015C-3905 O181:H49 2 4,901,620
CP027584 HGAP v.3 181 186.121 00-3076b O113:H21 2 4,997,979
CP027373 HGAP v.3 167 169.936 05-3629 O8:H16 3 4,904,151
CP027440 HGAP v.3 158 127.87 2012C-4502 O185:H28 2 4,892,666
CP027579 HGAP v.3 155 98.187 2013C-4282b O77:H45 3 5,030,044
CP027457 HGAP v.3 149 222.41 88-3493b O137:H41 2 5,001,754
CP027447 HGAP v.3 141 157.52 2014C-3075 O36:H42 2 5,168,620
CP027462 FALCON v0.3.0 137 120.31 07-4299b O130:H11 2 4,847,172

The key column is “# proteins < 0.9”, which is the number of proteins we predict to have a premature stop codon. Many of these could be due to genuine pseudogenes, a known mechanism of adaptation in bacterial genomes, but an excess indicates a problem: in my experience, one does not usually see more than 100-200 annotated pseudogenes in any bacterial genome. As can be seen here, 4 of the E. coli isolates have over 1,000 genes that are predicted to be shorter than they should be!

Let’s look at this graphically. Here is CP027462, the “best” genome, with only 137 predicted short genes:

[Figure: length-ratio histogram for CP027462]

And zoomed in:

[Figure: CP027462, zoomed in]

This looks pretty good – a nice histogram centred around 1, which is what we expect.

What about the worst?  Here is CP027640, which has 1799 predicted short genes:

[Figure: length-ratio histogram for CP027640]

And zoomed in:

[Figure: CP027640, zoomed in]

As you can see, there are way more proteins here that appear to be shorter than they should be.

I note there is no mention of pseudogenes, insertions or deletions in the paper, nor is there mention of error correction.  At this point in time I would treat these genomes with care!

I wanted to see if there was any relationship between the number of short genes and coverage; there is one, but it is not significant:


Call:
lm(formula = d$coverage ~ d$num_short)

Residuals:
    Min      1Q  Median      3Q     Max 
-66.671 -29.329  -8.588  19.749 119.103 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 118.31888    7.93060  14.919

If you remove the genomes with over 1000 short genes, then the line is basically flat, which suggests to me coverage is not the main issue here.



I’ll repeat what I said above – it’s not that you can’t correct errors in these assemblies, it’s that people don’t.

It’s hard and takes care and attention, which is difficult to do at scale.

If you are the authors of the above study, I’m sorry, this isn’t an attack on you. If you need help with these assemblies, there are many, many groups who could help you and would be willing to do so. Please get in touch if you want me to put you in touch with some of them.

However, at the moment, I am sorry but these genomes look like they should be treated with great care. And I’m sorry to sound like a stuck record.

How accurate is the nanopore-only assembly of GM12878?

Yes, I still blog!

A quick history. As many of you will be aware, Jain et al. published a fantastic paper where they produced the first de novo assembly of a human genome using the MinION, a portable DNA sequencer that uses nanopores to detect the sequence of single molecules. After fixing errors with Illumina data, they report an accuracy of 99.8% (more on how useful that figure is later….)

Despite the fact I loved this paper, I was slightly frustrated that they didn’t tackle, head on, the major issue with single-molecule assemblies – insertion and deletion errors. In response, I wrote my own paper, showing that indel errors are ubiquitous in both PacBio and Nanopore published single-molecule assemblies.

The analysis I carried out in the linked bioRxiv is fairly basic, to say the least, so we have now carried out a more complex analysis using full-length CDS alignments with splign. We also introduce the concept of control assemblies. GM12878 is a cell line, and as such has accrued over time mutations and indels in areas of the genome it doesn’t use, including many genes. However, Illumina-only assemblies of NA12878 exist, e.g. GCA_000185165.1, so we can use one as a control – in other words, we can look for genes with indels in the single-molecule assemblies that do not have indels in the corresponding Illumina-only assembly.

Serge Koren recently published a blog post detailing a new version of the nanopore assembly of NA12878 using only the nanopore data; in other words, they didn’t use Pilon/Illumina at all, and simply polished with nanopore data using the in-built Canu read correction and signal-level polishing with nanopolish. This is apparently 99.76% accurate.

So how do the two nanopore assemblies compare?

Assembly           # transcripts with indels   # genes with indels
Nanopore + Pilon   9051                        4282
Nanopore only      22440                       10111

A few things to point out:

  • We assayed around 46,000 CDS transcript sequences that we had evidence might be problematic
  • We looked at the best, longest alignment for each CDS that was predicted to produce a protein
  • We only looked at alignments covering >80% of the CDS
  • The above numbers are transcripts/genes that have indels in the nanopore assembly but not in the Illumina assembly

We can see that both assemblies still have indel issues. Polishing with Pilon has removed many, but 4282 genes remain affected. The nanopore-only assembly, polished with nanopolish, has over 10,000 protein-coding genes with indels.

You are shouting – show me an example!

Here is an example of a transcript aligned against an Illumina assembly of NA12878: link

Here is the same transcript aligned against the nanopore-only assembly of NA12878: link

Finally, here is the same transcript in the nanopore+pilon assembly of NA12878: link

As you can see, there is no evidence from Illumina data that NA12878 has problems with this transcript. There are indels in the nanopore-only assembly that have been fixed in the nanopore+pilon assembly.

I’m not trying to attack anyone or any technology, but we can’t fix problems if we don’t talk about them.

I remain concerned that people are publishing PacBio and Nanopore assemblies without paying sufficient attention to indel errors, and our work repeatedly demonstrates that both PacBio and Nanopore assemblies suffer from the problem, even after polishing. Our own solution has been to fix the remaining errors manually. Don’t assume you can skip this step!

Nanopore is a fantastic technology, but we should not overstate its accuracy nor ignore its problems.

For me, statements that entire, complex, 3Gb assemblies are “99.8% accurate” are at best completely pointless and at worst misleading. I don’t blame authors for using them – reviewers almost certainly ask for them – but they are genuinely pointless statistics.

I hope you enjoyed this blog post!

With great power comes great responsibility

Recently I published a blog post about a fairly simple test to find out whether you have “short” protein predictions in your bacterial genomes, and predicted that some of these short peptides may be the result of unresolved errors in long-read, single molecule assemblies.

Perhaps not surprisingly, there was a reaction from the PacBio community over this, and here is my response.

Before I begin, I just want to say that, whilst most people see me as some kind of Nanopore fan-boy, the reality is I am a fan of cool technology, and that includes PacBio. The hard facts are that I have spent around £200k on PacBio sequencing over the last 18 months, and about £20k on nanopore in the same period. I also encouraged our core to buy a PacBio Sequel. So I am not anti-PacBio. I am, however, anti-bullshit 😉

In addition, my blog post wasn’t about problems with the technology per se; it was about problems with people. If you don’t know errors are there, you might not correct them, and if you believe company hype (for example, that PacBio data is Q50 after polishing) you might believe your assembly is perfect.

It isn’t. They never are.

Let’s dive into some data. The three PacBio genomes I chose in the last blog post are:

AP018165	Mycobacterium stephanolepidis 
CP025317	Escherichia albertii 
ERS530415	Yersinia enterocolitica

If you search GenBank genomes for these, there is only one Mycobacterium stephanolepidis genome, but there are 39 Escherichia albertii genomes and 176 Yersinia enterocolitica genomes. Many of these will have been sequenced on different technologies, which allows us to do a comparison within a species.

Escherichia albertii 

I downloaded the 39 genomes from GenBank, and ran the process I described last week. For the complete genomes, this is what the data look like, ordered from the lowest number of short peptides to the greatest:

Accession Technology Contigs # proteins # short
GCA_001549955.1 Sanger+454 1 4299 19
GCA_002285455.1 Sanger+454 1 4157 19
GCA_002285475.1 Sanger+454 1 4295 28
GCA_002741375.1 PacBio 1 (plus 6 plasmids) 5126 79
GCA_002895205.1 Illumina+454+Ion 1 4423 83
GCA_000512125.1 PacBio 1 4482 106
GCA_002872455.1 PacBio 1 (plus 3 plasmids) 4984 124

Sure is unlucky that the PacBio assemblies are at the bottom of the table, isn’t it? Of course, Sanger is the gold standard, and many will be asking what the Illumina assemblies look like.

Let’s look at the next ten genomes, which are not complete, but are the least fragmented:

Accession Technology Contigs # proteins # short
GCA_002109845.1 454 25 4758 93
GCA_001514945.1 Illumina 43 4237 82
GCA_002563295.1 Ion 44 4531 143
GCA_001514965.1 Illumina 50 4390 86
GCA_001515045.1 Illumina 53 4480 110
GCA_001515005.1 Illumina 59 4963 117
GCA_001514645.1 Illumina 63 4470 84
GCA_001514685.1 Illumina 70 4209 68
GCA_001514925.1 Illumina 73 4464 84
GCA_001514905.1 Illumina 78 4622 113

What I think is worth pointing out here is that the PacBio genomes in the first table, which are complete, have about the same number of short proteins as the Illumina and 454 assemblies in the second table, which are fragmented. We would usually expect fragmented assemblies to have more short proteins, because the contig ends interrupt ORFs. Indeed, compared to the complete Sanger assemblies, the fragmented assemblies do have more short proteins.

They just don’t have more than the complete PacBio assemblies. Odd, that. If there were no uncorrected errors in the PacBio assemblies, they would be more like the Sanger assemblies than the Illumina ones.

Yersinia enterocolitica

Of the 176 GenBank genomes for this species, my automated script could only detect the sequencing technology for 35 of them. Here are the 21 that have a claim to be complete or near-complete (<5 contigs):

Accession Technology contigs # proteins # short
GCA_000834195.1 Illumina+454 2 4161 17
GCA_000987925.1 PacBio 2 4340 19
GCA_001304755.1 PacBio 2 4344 20
GCA_002082275.2 PacBio+Illumina 1 4067 31
GCA_002554625.2 PacBio+Illumina 1 4384 35
GCA_001305635.1 Illumina 2 4162 35
GCA_000834795.1 PacBio+Illumina 2 4259 36
GCA_000755045.1 Illumina+454 2 4226 44
GCA_000834735.1 PacBio+Illumina 2 4138 48
GCA_000755055.1 Illumina+454 1 4095 53
GCA_000754975.1 Illumina+454 2 4083 53
GCA_000597945.2 Illumina+454 3 4640 59
GCA_000754985.1 Illumina+454 3 4282 65
GCA_002083285.2 PacBio+Illumina 4 4198 76
GCA_002082245.2 PacBio+Illumina 2 4521 106
GCA_001708575.1 PacBio 3 4673 221
GCA_001708615.1 PacBio 3 4615 224
GCA_001708635.1 PacBio 3 4659 224
GCA_001708655.1 PacBio 3 4654 226
GCA_001708555.1 PacBio 3 4674 230
GCA_001708595.1 PacBio 2 4733 235

Slightly different story here, in that the PacBio (and PacBio hybrid) genomes have some of the lowest numbers of predicted short proteins. This is what people now expect when they see PacBio bacterial genomes. However, there are also six PacBio genomes at the bottom of the table, so I don’t think you can really look at these data and conclude there isn’t a problem. It’s possible that those six just happen to be the strains of Yersinia enterocolitica that have undergone the most pseudogenisation, but I don’t think so.



Let’s get some things straight

  • I know this is an incomplete analysis, and obviously more work needs to be done
  • If I personally wanted a perfect microbial genome, I would probably use PacBio+Illumina
  • I have nothing against PacBio
  • Nanopore assemblies aren’t in this list because there aren’t any for the species I chose, but I am sure they would also have significant problems

As I said above, it’s not that PacBio has a problem per se; it’s that people have a problem. Yes, many errors are correctable with Quiver, Arrow and Pilon; often multiple rounds are necessary, and you still won’t catch everything.
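For anyone unsure what a “round of correction” actually involves, here is a minimal sketch of one Illumina-based Pilon round (read files and thread count are illustrative); repeat it against the newest consensus until the number of changes plateaus:

# map Illumina reads to the draft assembly
bwa index draft.fa
bwa mem -t 8 draft.fa reads_R1.fq.gz reads_R2.fq.gz | samtools sort -o aln.bam
samtools index aln.bam
# correct the draft; --changes records every fix Pilon makes
pilon --genome draft.fa --frags aln.bam --output draft.pilon --changes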

But not everyone knows that, and it’s clear to me from the above that many people are still pumping out poor quality, uncorrected, indel-ridden PacBio genomes.

The same is true for Nanopore, I have no doubt.

Let’s stop bullshitting. These technologies have problems. It doesn’t mean they are bad technologies, anything but – both PacBio and Nanopore have been transformational. There is no need for bullshit.

“With great power comes great responsibility” – in this case, the responsibility is two-fold. One, stop contaminating public databases with sh*t assemblies; and two, stop bullshitting that this isn’t a problem for your favourite technology. That helps no-one.

A simple test for uncorrected insertions and deletions (indels) in bacterial genomes

A friend and colleague of mine once said of me, “he’s a details man”, after we had discussed the fact that some of my papers consist solely of pointing out the errors other people ignore – in RNA-Seq, for example, or in genome assemblies (I have another under review!).

By now, those of you familiar with my earlier work will be jumping up and shouting

“Of course! Even as far back as 2012, he was warning us about the dangers of mis-annotated pseudogenes! Just look at Figure 2 in his review of bacterial genome annotation!”

Well, let’s dig into that a bit more.

For the uninitiated: the vast majority of bases in most bacterial genomes fall within open reading frames (ORFs) that are eventually translated into proteins. Bacterial evolution, and often niche specificity, is characterised by the pseudogene-isation of these ORFs – basically, many replication errors in ORFs (especially insertions and deletions) can introduce a stop codon; the transcript and resulting protein are truncated, they are often no longer functional, and you have a pseudogene. This happens when the ORFs are no longer under selective pressure.

HOWEVER. Sequencing errors can do exactly the same thing, and you end up with incorrectly annotated pseudogenes.

And OH NOES we just so happen to be flooded with data from long read technologies that have an indel problem.

Let’s assume you have a shiny new bacterial genome, sequenced on either PacBio or Nanopore, and you want to figure out if it has an indel problem. Here is my simple and quick test:

First, predict proteins using a gene finder. Then map those proteins to UniProt (using blastp or DIAMOND). The ratio of the query length to the length of the top hit should form a tight, normal distribution around 1.
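A minimal sketch of that recipe, assuming Prodigal for the gene finding and DIAMOND for the UniProt search (trembl.dmnd and the file names are illustrative):

prodigal -i genome.fa -a proteins.faa
diamond blastp -d trembl.dmnd -q proteins.faa -k 1 \
    -f 6 qseqid qlen slen -o hits.tsv
# one query/top-hit length ratio per line – histogram these
awk '{print $2/$3}' hits.tsv > ratios.txt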

OK so what does this look like in practice?

Let’s look at a genome we know is OK, E. coli MG1655:


[Figure: MG1655 length-ratio histograms]

On the left we have the raw histogram, and on the right we have zoomed in so the y-axis ends at 500. Generally speaking, I think this has worked – the vast majority of predicted proteins have a 1:1 length ratio with their top hit in UniProt.

Here is Salmonella gallinarum, which we know “has undergone extensive degradation through deletion and pseudogene formation”:


[Figure: S. gallinarum length-ratio histograms]

Well, would you look at that! There they are – all those pseudogenes appearing as predicted proteins that are shorter than their top hit. For those of you saying “it’s not that obvious”: in the plots above, MG1655 has 157 proteins < 90% of the length of their top hit, whereas for Salmonella gallinarum the figure is 432. By my recollection, gallinarum has over 200 annotated pseudogenes.

If you’re convinced that this actually might be working, then read on!

So here are a couple of PacBio genomes.  I genuinely chose these at random from recent Genome Announcements.  Generally speaking, all had high PacBio coverage, were assembled using Canu or HGAP, and polished using Pilon.

In no particular order:


[Figure: AP018165 length-ratio histograms]

(237 proteins < 90% of top hit)


[Figure: CP025317 length-ratio histograms]

(246 proteins < 90% of top hit)


[Figure: ERS530415 length-ratio histograms]

(268 proteins < 90% of top hit)

Now, these are not terrible numbers, in fact they are quite good, but those long tails are a tiny bit of a worry, and are definitely worth checking out. Bear in mind, if 200 pseudogenes in Salmonella gallinarum is “extensive degradation through deletion and pseudogene formation”, then 200 incorrect indel pseudogenes in your genome is a problem. In the above, I can’t possibly tell you whether these numbers of predicted short/interrupted ORFs are real or not, because I don’t know the biology. However, I can say that, given they were created with a technology known to have an indel problem, they are worth further investigation.

Indel sequencing errors are not the only problem; so are fragmented assemblies. In fragmented assemblies, the contig ends also tend to fall within ORFs, and again you get short/interrupted proteins. Here is one of our MAGs from our recent rumen paper:


[Figure: RUG001 length-ratio histograms]

(310 proteins < 90% of top hit)

It’s not all bad news, though – some of the MAGs are OK:


[Figure: RUG045 length-ratio histograms]

(126 proteins < 90% of top hit)

And at least other people’s MAGs are just as bad (or worse):


[Figure: UBA2753 length-ratio histograms]

(265 proteins < 90% of top hit)

My absolute favourite way to assemble genomes is a hybrid approach, creating contigs with Illumina data and then stitching them together with a small amount of nanopore data. We did this with B. fragilis and the results look good:


[Figure: B. fragilis BE1 length-ratio histograms]

(63 proteins < 90% of top hit)


So are there any examples of where it goes really, hideously wrong? Well, the answer is yes. We are assembling data from metagenomic samples using both PacBio and Nanopore. The issue with metagenomes is that you may not have enough coverage to correct errors using the consensus approach – there simply aren’t enough bases at each position to get an accurate reflection. We are seeing the same pattern for both PacBio and Nanopore, so I won’t name which technology, but…. well, scroll down for the horror show.

Are you ready? Scroll down….

A 3Mb contig from a Canu assembly:


[Figure: tig00000022 length-ratio histograms]

(2635 proteins < 90% of top hit)

Now, the only correction this has had is Canu’s built-in read correction before assembly. We did one round. Sweet holy Jesus, that’s bad, isn’t it??

Can it be corrected?  To a degree.  Here it is after two rounds of Pilon and some technology-specific polishing:


[Figure: tig00000022 after polishing – length-ratio histograms]

(576 proteins < 90% of top hit).

So, less bad, but still not great. I’d say this assembly has problems, which is why we haven’t published yet.


What’s the take-home message here? Well, there are a few:

  1. Errors matter. Pay attention to them.
  2. Both long-read technologies have indel problems, and these probably cause frameshifts in your ORFs
  3. Polishing, consensus approaches and Illumina correction help, but they don’t catch everything
  4. The problem will inevitably be worse if you have low coverage
  5. Comparing predicted proteins with their top hits in UniProt can identify where you have errors in your ORFs (or real pseudogenes!)

Code for doing this type of analysis will appear here.

Let’s keep saying it, and say it louder: REVIEWERS ARE UNPAID

The arguments over peer review, and whether we are obliged to do it, generally fall on two sides – either it is or it isn’t an implicit part of your job. Stephen Heard falls into the former group, arguing that there are many things which are implicitly part of our jobs as academics and which don’t make it into our job descriptions.

I agree wholeheartedly with Stephen that peer reviewing is not in my job description. In fact it is nowhere to be seen. It’s not in my employment contract either, nor has it ever been in any of my lists of annual objectives, after 16 years in academia. In fact, I don’t think it’s ever been discussed, either in annual appraisal meetings or in my annual meetings with the institute director. It doesn’t make it onto my CV either.

Here is a key point: I could never write another peer review for the rest of my career and my career would not suffer.  Not one bit.

It is not part of my job and I am not paid to do it.

(for the record, I do peer reviews! For free!)

In an increasingly pressurised environment where the only factors that influence my career progression are papers published and grants won; with over 6,500 e-mails in my inbox, which I have lost control of; and with a to-do list I have no chance of ever finishing, prioritisation is essential. The key for any task is to be important enough to get into the “action” zone on my to-do list. Does peer review manage that? Occasionally, but not often.

Would it get there more often if I were paid to do it? Absolutely. Why? Because I have bills to pay and a small family, and every little helps. I imagine this is even more true for post-docs and early-career researchers. Why should they do something for free, often for profit-making organisations, when it doesn’t affect their career prospects one tiny bit? The answer is simple: they shouldn’t.

A common argument is this: if everyone stopped peer reviewing, science would grind to a halt. Well, unfortunately, that ignores reality. If delivery drivers stopped driving, would there be no food on the shelves? No. Because we wouldn’t and couldn’t let that happen. We’d find whatever incentive needed to be found, and make sure the drivers still drove. The same is true of peer review. If you are struggling to find people, there is a simple solution, one that is as old as money itself: pay them.

(note: I do many peer reviews and I will continue to do so, for free. However, I believe it is time to re-think incentives, and yes, it is time to pay people for peer review)

It’s a bit POINTLESS if you forget the POINTLESS in POINTLESS ADMIN

Rather predictably, a Guardian article railing against pointless admin has stimulated a response from the university admin community. Unfortunately, the response kind of misses the point – academics are annoyed at POINTLESS ADMIN, not at administrators in general (though the two are not entirely unrelated).

Here’s the thing – there is both pointless and essential administration, and at the same time there are both excellent and poor administrators. An attack on pointless admin is not an attack on good administrators. We can all recognise groups of excellent administrators without whom the place would fall apart. No-one is attacking them.

So what is pointless admin? There is so much of it I don’t know where to start. Perhaps the best example is the recording of academic outputs. My University insists I put them in PURE; Wellcome Trust insists I have an ORCID account; and BBSRC insists I use ResearchFish. You can accept the motivation for this whilst at the same time recognising that it is genuinely pointless replication of effort. Academics are angry not because this happens once or twice, but because it happens all the time, and it is increasing in frequency.

So what makes a good administrator? Well, I have always said “a bad administrator reminds you to do something; a good administrator does it for you”. There are caveats, but it’s a good starting point. A quick guide for administrators might be something like this: “Is this a tick-box exercise? Can you tick the box? Then tick the box. Please. Thank you! Does it really need the academic’s input? Can they send bullet points and you do the rest? Let’s do that then! Otherwise, are there any parts you can fill in? I’d love for you to fill them in! Is it possible to visit and not send yet another e-mail? That would be awesome. You can ask if a follow-up email would help and then send one afterwards. Is there a possibility that one system/form can be filled in and then the same information copied to other systems/forms? Can you do the copying? Awesome! And, as an aside, can we maybe have fewer systems/forms if they contain the same information? Amazing.”

The obvious response to all of this is “why should the admins have to do all of that? You arrogant ****! Do your own admin!”. This misses an unfortunate point – we are drowning. If academia were a near-death experience, then academics are 200 yards off shore, in a rip-tide, barely breathing, with our arms in the air, waving for help. We need help. Our job is to teach, to publish papers and to win research grants – grants with overheads, all of which go some way to sustaining the very institution we all work for. Every minute spent doing admin is a minute we are not doing the things that keep the lights on.

Please.  Help us.  Remove POINTLESS ADMIN.  And be a good administrator, not a bad one.

Judge rules in favour of Oxford Nanopore in patent dispute with PacBio

Forgive me if I get any of the details wrong – I am not a lawyer – but the title of this post is my take on a judgement handed down in the patent infringement case PacBio brought against ONT.

To get your hands on the documentation, you need to register and log in to EDIS, click “Advanced Search”, do a search for “single molecule sequencing” and click the top hit.

My interpretation of the documentation is that the judge has massively limited the scope of the patents in question by expanding on the definition of “single molecule sequencing”. ONT argued that, in the patents in question, “single molecule sequencing” referred only to “sequencing of a single molecule by template-dependent synthesis”, and the judge agreed with this definition.

[Screenshot: the court’s construction of “single molecule sequencing”]

All claims are subsequently limited to template-dependent synthesis, which of course is NOT what Oxford Nanopore do.

[Screenshot: the court’s construction of “template-dependent synthesis”]

The document then goes into an area that would make all biological ontologists rejoice – THEY TRY AND DEFINE THE TERM “SEQUENCE”. I can almost hear the voices shouting “I told you so!” coming out of Manchester and Cambridge as I write 😉
