Opiniomics

bioinformatics, genomes, biology etc. "I don't mean to sound angry and cynical, but I am, so that's how it comes across"

Thoughts on clinical genome sequencing

For the purposes of this post, when I talk about “clinical genome sequencing”, I’m talking about the sequencing of a patient’s genome with the specific aim of affecting their health care, be it diagnosis, prognosis or choice of treatment.

During my travels I often come across people in the clinical space (providers of clinical tests to patients), and I am always surprised by their reluctance to even explore the idea of clinical genome sequencing.

I’ll give an example: recently, someone said to me that the £100m invested in Genomics England (a project to sequence 100,000 genomes from UK patients over the next 5 years) would have been better invested in the current NHS genetic testing framework.  In other words, don’t invest in the future, just put the money into the current service (the NHS currently costs approx £105 billion annually, and so £100 million is a drop in the ocean).

Obviously there is a clash of cultures here, and the ambitious, reckless attitude of “sequence first, think later” prominent in the genomics world is about as far as one can get from the cautious, regulated world of the clinic.

Now, I’m not an expert in clinical testing, but I wanted to express a couple of ideas.

We need to use the correct comparison on cost

The argument I hear most often is “Why would you pay $1000 to sequence a whole genome when the genetic test for (insert disease here) costs only $100?”.  Of course this argument only makes sense for a heartbeat, until you realise that the $100 test only tests for one disease, whereas the $1000 genome tests for a much larger number.  So the correct comparator for the $1000 clinical genome is not the single $100 test, but the sum of the costs of all genetic tests that the patient will ever need throughout their lifetime and that the $1000 genome would cover.

Of course we need to go further.  At present, we know the association between genetic variation and disease for a relatively small number of diseases.  However, in 10 years’ time we will know more; in 20 years yet more; and in 50 years yet more.

So the correct comparator is not just the sum of all tests that a patient will need throughout their lifetime that we know about, but all the additional genetic tests that we will develop throughout that patient’s lifetime.  Someone smarter than me needs to model this, but I’m betting that if we sequence all babies at birth now for $1000, and use the data for genetic tests throughout that person’s life, then $1000 will prove to be a bargain.
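
As a very rough starting point, here is a toy Python sketch of that comparison. Every number in it (the cost of a single test, how many tests a person needs per decade, how fast new tests appear) is an illustrative assumption, not a real figure:

```python
# Toy comparison: one-off whole-genome sequencing at birth vs. paying for
# conventional single tests as they are needed. All numbers are
# illustrative assumptions, not real figures.

WGS_COST = 1000          # one-off genome, $
SINGLE_TEST_COST = 100   # one conventional genetic test, $

# Assumed tests needed per decade of an ~80-year life; the growth encodes
# the argument that new gene-disease associations (and hence new tests)
# will keep appearing during the patient's lifetime.
tests_per_decade = [1, 1, 2, 2, 3, 3, 4, 4]

lifetime_tests = sum(tests_per_decade)
lifetime_cost = SINGLE_TEST_COST * lifetime_tests

print(f"One-off WGS:              ${WGS_COST}")
print(f"Lifetime of single tests: ${lifetime_cost} ({lifetime_tests} tests)")
print(f"Break-even after {WGS_COST // SINGLE_TEST_COST} tests")
```

Under these made-up numbers the genome pays for itself twice over; the interesting modelling question is how steeply the per-decade test count really grows.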

The patient needs to own the data

The second thing I wanted to say is that I’m surprised to learn that currently NHS patients don’t get access to the genetic data determined for them during a genetic test.  For example, let’s say a gene is sequenced or a SNP assayed – the consultant gets to see the test result (not the data) and make a decision, but the patient doesn’t get the data.  They don’t get the DNA sequence, or the SNP.  This needs to change.

The model has to be that once a patient is sequenced, they get the raw and processed data, and they own their own genome.  Then, whenever they need a genetic test, they provide neither blood nor DNA; they simply provide the genome sequence data they already own, and the test provider applies a regulated and approved informatics pipeline to come up with the test result.
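
As a concrete illustration, here is a minimal sketch of what the informatics side of such a test could look like: the patient supplies a VCF of their called variants, and the “test” reduces to a lookup against a panel of known variants. The file name, coordinates and panel below are all hypothetical, and a real regulated pipeline would add provenance, QC and validated software versions on top:

```python
# Minimal sketch: a "genetic test" run against variant data the patient
# already owns, rather than a fresh blood/DNA sample. The panel is
# hypothetical; a real test would use curated, clinically validated
# variants and a regulated, versioned pipeline.

PANEL = {
    # (chrom, pos, ref, alt): label  -- hypothetical example variants
    ("17", 41245466, "G", "A"): "example variant 1",
    ("13", 32907420, "T", "C"): "example variant 2",
}

def run_test(vcf_path):
    """Report the patient's genotype at each panel site found in their VCF."""
    results = {}
    with open(vcf_path) as vcf:
        for line in vcf:
            if line.startswith("#"):           # skip VCF header lines
                continue
            fields = line.rstrip("\n").split("\t")
            chrom, pos = fields[0], int(fields[1])
            ref, alt = fields[3], fields[4]    # multi-allelic sites ignored
            key = (chrom, pos, ref, alt)
            if key in PANEL:
                fmt = fields[8].split(":")     # FORMAT column
                sample = fields[9].split(":")  # first sample column
                results[PANEL[key]] = sample[fmt.index("GT")]  # e.g. "0/1"
    return results

print(run_test("patient.vcf"))  # hypothetical file the patient provides
```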

I’m naive

I am, I know this.  I’m not a clinician.  But I am convinced that genome sequence data will hugely influence clinical practice, and I am convinced that patients owning their own data is crucial to the coming revolution.

Thoughts?  Comments?  I expect to be told how wrong I am 😉

14 Comments

  1. Mick,
    Naive, yes maybe. Wrong? I don’t think so. It’s about ending diagnostic odysseys — the time and the cost — as has been evangelized for some time by Howard Jacob and many others…
    http://www.bio-itworld.com/BioIT_Article.aspx?id=105386
… and increasingly confirmed by medical centers such as Baylor College of Medicine, in its landmark NEJM paper last year:
    https://www.bcm.edu/news/molecular-and-human-genetics/whole-exome-sequence-takes-new-tech-to-clinic
    Kevin Davies

  2. Thanks for the interesting post. While I agree with you that personal genome data is likely to dramatically influence medical care, I do think the cautious clinicians you’ve been talking to have a point.
What follows is a little rambling… for a possibly more succinct (but simplistic) version meant for policy makers, look here http://www.phgfoundation.org/reports/15237/
With all genetic testing I have learned that there are two key questions: ‘What is the purpose of the test?’ and ‘What is the sensitivity and specificity of your test?’ In other words, how likely are you to be wrong, and what are the consequences of being wrong?
If the diagnosis is more or less already made by the clinician, and the test gives you a genetic explanation for your phenotype but doesn’t affect your management, then failing to detect a mutation, or detecting the wrong one, may not in and of itself be the end of the world.
    If on the other hand you are trying to discover whether or not your child has inherited two AR mutations from their parents (meaning future pregnancies may warrant screening), or indeed whether your foetus is carrying a potentially disease causing mutation, then the impact of ‘getting it wrong’ can be huge, as reproductive life/death decisions are made on this basis.
    The argument of the clinical geneticists (and their clinical scientist colleagues) that I’ve met through work is that CURRENTLY a (30x) whole genome sequence cannot provide them (in many but not all cases) with the >95% sensitivity and specificity that their current NGS based single gene or panel tests achieve.
    They simply wouldn’t be confident signing out the results of a WGS based test without a great deal of extra (expensive and time consuming) validation testing (Sanger, MLPA or extra NGS to make up for coverage gaps). This is an issue for the 100K genomes project, as no extra cash is on offer for NHS genetics services to act on the data that arises from it, be it in validating the results or cascading testing to relatives.
On the cost point, there I am more in agreement. Not so much because of the costs of testing for multiple conditions over your lifetime, but because even for a single condition it is common these days to test for large panels (up to 100 genes) at a time. Designing and running custom gene panel diagnostics is expensive, as the companies that provide them are often making them in low volumes for a few labs. This is driving many UK genetic testing labs to move towards ‘clinical exome’ sequencing, as the kits (e.g. TruSight One) are mass produced (cheaper) and simplify your testing lab workflows: you always run the same wet lab assay, regardless of the condition being investigated. It is notable, however, that these kits only cover 4,800-odd genes with known disease associations… and so aren’t going to find you mutations in genes not previously associated with disease (which clinicians will anyway tell you is research, not diagnostics). For that you will need WGS + validation, which, to achieve current clinical standards, is going to cost several thousand pounds.
Lastly (sorry… I will stop soon), on patients owning their data. I agree completely. Especially as genetic medicine is not currently set up to reanalyse historical test results in light of newly published data… something individual patients who didn’t get a diagnosis first time round might be quite keen to be able to do.
    As for regulated and approved bioinformatics pipelines… well, that is a rather large can of worms and one which the PHG Foundation is peeking into right now.. so watch this space.

  3. Genomic analysis already plays an important role for many patients, but there are still significant limitations. Not all classes of pathogenic alleles are efficiently and accurately detected by current nextgen sequencing technology. The genetic mechanisms underlying many important traits are not completely understood. Some pathogenic states may involve epigenetics. Etc. Cost is also likely to drop further.
It will take a while before WGS becomes a baseline tool for everyone. So unless you have a pressing problem there is no particular reason for most people to get their genomes sequenced today. Early adopters welcome, of course.
    As far as owning your own data, sure, why not? But you will have as many difficulties interpreting your variants as anyone and you will lack the context information that comes with aggregating clinical expertise. Most people have many other things on their minds – may not be fair to expect them to have responsibility for this complex problem.

  4. While I am sure the current genetic screening/counseling system would appreciate, and improve with, a £100 million investment, I suspect it would be unlikely to make significant headway in truly driving forward whole genome sequencing as a diagnostic tool.

    It is important not to confuse what Genomics England is doing with large scale research sequencing efforts involving patients, like UK10K or ICGC. Those research projects hope to produce appropriate diagnostic tools, but mostly they want to better understand the link between genotype and disease; if diagnostic improvement happens, it’s almost an added bonus in many cases.

    If I understand correctly, what Genomics England and the 100K genomes scheme are trying to do is actively move whole genome sequencing into the clinic. This involves improving the technology (both sequencing and bioinformatics) but also educating clinicians and ensuring that the data moves towards the level of specificity required for acceptable diagnostic testing.

    This sort of investment is needed in order to actually start this process; without it, while it might happen somewhere in the world, it would not happen in the UK.

    IMHO to assume this is just about data collection is wrong.

  5. Excellent post, as always, Mick. Thanks. I want to contribute a couple of points.

    I agree that owning one’s own clinical data is essential – I think it is ridiculous that this isn’t the case already regarding traditional medical records in the UK. Even more so that you can be asked to participate in a clinical trial, involving genetic screening, and they still won’t provide you with the data. While John is correct that individuals may not know what to do with the data themselves, at least by owning it they can then share it with whom they want, and relatively easily get second, third, fourth opinions on it. Though you have the right, it is not usually straightforward to get a second opinion on anything from the NHS. There are already tools freely available that allow those with the technical knowledge to perform analyses on genomic data, and it is only a matter of time before similar tools suitable for non-experts become available.

    Another important point I feel is that while at the moment there is only limited value in having one’s genome sequence, unless one has reason to believe they may be harbouring a particular genetic predisposition, the value is going to increase exponentially over time. As more genomes are sequenced and matched to phenotypes and health outcomes over the next 25-50 years, I feel we will get a much better grasp of the genetics underlying most diseases, and perhaps even begin to locate the elusive ‘missing heritability’.

    Regarding whether or not 30x is deep enough, that will depend on the variant(s) in question – many regions of a “30x genome” will be sequenced substantially deeper, while others will be covered substantially less – if the variant falls into the former category, then identification may be relatively straightforward. While it may not prove definitive at a clinical level, it should certainly help direct analyses in the correct direction (a toy model of this coverage variability is sketched below).
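
    To put a rough number on that intuition, here is a toy Python sketch using an idealised Poisson model of coverage. Real genomes deviate from this (GC bias, mappability, library effects), so treat the output as optimistic:

    ```python
    import math

    def poisson_cdf(k, lam):
        """P(X <= k) for X ~ Poisson(lam), computed iteratively."""
        p = math.exp(-lam)   # P(X = 0)
        total = p
        for i in range(1, k + 1):
            p *= lam / i     # P(X = i) from P(X = i - 1)
            total += p
        return total

    MEAN_DEPTH = 30
    for threshold in (5, 10, 15, 20):
        frac = poisson_cdf(threshold - 1, MEAN_DEPTH)
        print(f"P(site covered < {threshold}x at {MEAN_DEPTH}x mean): {frac:.4f}")
    ```

    Even a per-site probability of a fraction of a percent translates into millions of under-covered positions across a 3-gigabase genome, which is exactly the clinical concern.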

  6. Thanks for your reply, this is really interesting.

    On the issue of a 30X genome not providing accurate (>95%) results, is there any evidence for this? Do we know at what coverage this changes?

  7. I can’t help but feel I dealt with these points in my post, but I will repeat them 😉

    I realise that not all alleles are accurately detected by NGS, but as long as enough are to make the costs make sense, then my argument still stands.

    The correct comparison is not the diseases we know about now, but the diseases we will know about during patients’ lifetimes. That could be 50 years’ worth of additional research.

    NGS will always become cheaper; therefore if you never do it because it will become cheaper, then you will never do it.

    Once patients own their own data, they will be able to buy interpretation from approved providers.

  8. Sorry, should have made it clearer that the issue is with 30x average coverage, which means you’ve huge areas covered at much less than this. The best discussion I have seen of this, and what it means for clinical sequencing, was written by the scientific advisory group to the 100K genomes project. http://www.genomicsengland.co.uk/wp-content/uploads/2013/06/GenomicsEngland_ScienceWorkingGroup_App2rarediseases.pdf
    See pages 6-7, written by Mark McCarthy. There is not, sadly, a great deal of published data on this issue from a clinical testing perspective yet, but it is something I know NHS genetics labs are working to resolve. The best reference I have found is this one http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3166834/ but it’s based on ‘outdated’ sequencing tech and so I am not sure how applicable it is in 2014.
    If you really want to know more you could do worse than read these guidelines from the Association of Clinical Genetic Scientists, which set out how they go about validating NGS based genetic tests. http://www.acgs.uk.com/media/774802/bpg_for_targeted_next_generation_sequencing_final.pdf
    You’ll note they do not specify any minimum depth of coverage, only that it should be empirically determined for each test according to the required sensitivity. This will depend not only on the difficulty of getting good coverage of the gene you’re targeting but also on the class of variant being looked for, i.e. SNV vs indel vs CNV.
    Had an interesting discussion about this recently with some clinical scientists who suggested this was a major challenge with introducing WGS: that you’d potentially need to determine test sensitivity for each mutation type across each gene. This is essentially what Illumina has already done with its Understanding Your Genome programme, and is probably why they only report on mutations in a few thousand genes… those are the only ones that meet their sensitivity criteria.

  9. Hi Mick – this is indeed stuff to think about. Perhaps as a data point: I recently went along to a hospital to receive the results of genetic testing for someone, and when I asked, they gave me a copy of the report listing all the SNPs in a format I could look up on SNPedia. This is from The Netherlands, and probably influenced by me being familiar with genetics etc. But still.

  10. Why not do MS/MS of liver and kidney? Why not sequence the diversity of blood to know one’s auto-immune profile ? What is special about “genome sequencing” other than it being the fashion of the day?

    > In other words, don’t invest in the future, just put the money into the current service (the
    > NHS currently costs approx £105billion annually, and so £100million is a drop in the ocean).

    Both are bad options. You guys need to pay off your debt.

  11. Hi Mick – also interested to hear your thoughts on how efforts like the Personal Genome Projects (especially PGP UK – http://www.personalgenomes.org.uk/) fit into the big picture here, in particular extending access to individuals’ genome data to the wider research community?

  12. …and another perspective:
    http://www.theguardian.com/society/2014/feb/21/nhs-plan-share-medical-data-save-lives
    Ben Goldacre wrote an article on the care.data NHS data sharing plans, suggesting people should be involved but that the scheme is currently being mis-sold by the government.

  13. I’ve just submitted a paper looking at precisely the issue of SNP detection sensitivity (sorry, not indels or CNVs) in whole exome vs. whole genome sequencing. 30X average WGS gets you >98% estimated sensitivity for heterozygous coding variants (and >99% for homozygous variants). I observed minimal difference between known and novel sites. Coding regions generally seem to be evenly and well covered by WGS. Given that the majority of diagnostic variants are in coding regions at the moment, and it’s far easier to functionally characterise coding variants in general, this should be fairly reassuring for clinicians. Non-coding/intergenic regions are obviously going to be more problematic due to repeats.
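
    For anyone wanting to sanity-check that kind of figure, the underlying calculation is simple once you have a trusted truth set to compare against. A minimal sketch, with variants deliberately simplified to tuples; the two sets here are placeholders for real call sets (e.g. a curated truth set versus your WGS calls):

    ```python
    # Sensitivity of a call set against a high-confidence truth set.
    # Variants are simplified to (chrom, pos, ref, alt) tuples; these sets
    # are placeholders -- real ones would be parsed from VCFs.

    truth_set = {("1", 1000, "A", "G"), ("2", 2000, "C", "T"), ("3", 3000, "G", "A")}
    wgs_calls = {("1", 1000, "A", "G"), ("2", 2000, "C", "T"), ("4", 4000, "T", "C")}

    true_positives = truth_set & wgs_calls   # truth variants we recovered
    false_negatives = truth_set - wgs_calls  # truth variants we missed

    sensitivity = len(true_positives) / (len(true_positives) + len(false_negatives))
    print(f"Sensitivity: {sensitivity:.2%}")  # 2 of 3 truth variants recovered
    ```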

  14. Well, let’s have a look at another data generator: radiology.
    That’s typically big chunks of data which are generated once (when the picture is taken) but can be needed later (to compare before/after surgery, to follow the evolution of the healing, etc.).
    Nowadays, all past exams (including voluminous ones like MRI, CT scans, 4D imaging, etc.) are stored on a hospital’s PACS and kept for future reference.
    Whenever you move from one place to another, you can easily order your data burnt onto a DVD and take it with you, so that a doctor at your new place can refer to the old pictures if needed.
    Once genome data is known to be of good enough quality (data sequenced today will still be useful in 10 years, and tests in 10 years won’t critically depend on data that is not obtainable at all today), and we have probably reached that point, and once there are clear cases where the data is better kept for future reference (again, something you argued convincingly), I think we will start to see serious consideration of archiving genomes.
    (and it’s a good thing that a genome can fit on a DVD – easier to carry it around to specialists)
    The current main (solvable) problem would be getting the necessary storage capacity if having personal genomes suddenly becomes the norm (see the back-of-the-envelope sketch at the end of this comment).
    @homolog.us:
    Guess what? There *are* people thinking about ways to acquire MS/MS data at a quality which can be re-interpreted later on.
    Look around for technologies like OpenSWATH.
    Again, if the data quality is good enough, and there are enough proven cases of “future reference” being useful, that’s going to start appearing soon on PACS in hospitals near you.
    (but with MS/MS we aren’t there yet).
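
    On the storage question, some back-of-the-envelope arithmetic in Python; every size here is a ballpark assumption, since real files vary a lot with compression and depth:

    ```python
    # Rough storage arithmetic for "everyone gets a genome on a PACS-like
    # archive". All sizes are ballpark assumptions.

    GB = 1e9
    DVD_CAPACITY = 4.7 * GB  # single-layer DVD

    artefacts = {
        "consensus FASTA (~3.2 Gbases, 1 byte/base)": 3.2 * GB,
        "compressed variant calls (VCF.gz, assumed)": 1.0 * GB,
        "aligned 30x reads (BAM, assumed)": 100 * GB,
    }

    for name, size in artefacts.items():
        verdict = "fits" if size <= DVD_CAPACITY else "does NOT fit"
        print(f"{name}: ~{size / GB:.0f} GB -> {verdict} on a DVD")

    # Archiving just the compressed calls for ~60 million people (roughly
    # the UK population) would need on the order of:
    print(f"~{60e6 * 1.0 * GB / 1e15:.0f} PB for VCFs alone")
    ```

    Which squares with the DVD point above: the finished genome and the variant calls travel easily; it is the raw aligned reads that are the archival headache.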
