Edited 30/07/2014 – I got confused about who wrote what and when – apologies to Embriette and Jonathan

There’s a lot of discussion about Neil Hall’s paper on the Kardashian index, and this is classic Neil stuff – funny and provocative.  I’m not going to discuss this paper directly; enough people are doing that already.

Over on the microBEnet blog, Embriette Hyde states:

A high K index indicates that said scientist may have built their reputation on a shaky foundation (i.e. the Kardashians, circled in Figure from the paper below), while a low K index indicates that said scientist is not being given credit where credit is due.
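For anyone who hasn’t read the paper, it’s worth being clear what a “high” or “low” K index actually is: it simply compares the Twitter followers you actually have with the followers “expected” from your citation count.  Here is a minimal sketch in Python, assuming the relationship reported in Neil’s paper (expected followers of roughly 43.3 × C^0.32, where C is total citations); the function names and example numbers are mine, not from the paper.

def expected_followers(citations):
    # Followers "expected" for a given citation count, assuming the
    # F = 43.3 * C^0.32 relationship reported in the paper.
    return 43.3 * citations ** 0.32

def kardashian_index(followers, citations):
    # K index: actual Twitter followers divided by expected followers.
    return followers / expected_followers(citations)

# Hypothetical example: 5,000 followers and 500 citations
print(round(kardashian_index(5000, 500), 1))  # roughly 15.8

Nothing in that ratio knows anything about the quality of the work behind the citations, which is rather the point of what follows.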

I think we’re coming dangerously close to making judgements about quality, when what we’re actually measuring is citations – and they’re not the same thing!

For me, the number of citations a particular paper gets is based on:

  1. The size of the field, e.g. if you work in cancer (a large field), your paper has a higher probability of being cited than if you work on an obscure fungus that only grows in Lithuanian coal mines (probably a very small field, but I may be wrong).
  2. Given the size of the field, next comes the importance of your work.  So, sticking with the cancer example, you may work with a high-prevalence cancer, such as lung cancer (70,000 deaths last year in the UK), or cancer of the penis (111 deaths in the UK last year).
  3. Then, given the size of the field and the importance of the work, comes relevance.  Perhaps you discovered an oncogene that affects 60% of the population; alternatively, you may have found an oncogene that affects 2% of a small tribe from the Peruvian rainforest.
  4. Finally comes the quality of your work.  It is quite easy to imagine someone who does incredibly high-quality work in a niche area getting far fewer citations than someone who does really crappy work in a larger, more important field.

To be fair to Neil and Embriette, neither directly states that citations = quality; in fact, Neil deliberately uses the phrase “scientific value”, which is ill-defined.

However, the fact remains that Neil is implying that scientists with low numbers of citations are somehow lesser than those with lots of citations.  And that’s just not right.

And Embriette’s wording implies that scientists with few citations (and high numbers of Twitter followers) are on “shaky foundations”.  I say the assumptions behind that wording are on shaky foundations!

Some very high-quality work never gets cited (we can argue another time about the value of such research).  And some really quite shoddy work gets huge numbers of citations.  So I get to have the last word: the number of citations is not a measure of quality.