Opiniomics

bioinformatics, genomes, biology etc. "I don't mean to sound angry and cynical, but I am, so that's how it comes across"

Not providing feedback on rejected research proposals is a betrayal of PIs, especially early career researchers

Those of you who follow UK research priorities can’t have missed the creation of the Global Challenges Research Fund (GCRF).  Over the last few months the research councils in the UK have been running the first GCRF competition, a two-stage process where preliminary applications are sifted and only certain chosen proposals go through to the second stage.   We put one in and, like many others, didn’t make the cut.  I don’t mind this; rejection is part of the process.  However, I do worry about this phrase in the rejection email:

Please note specific feedback on individual outlines will not be provided.

Before I go on, I want to launch an impassioned defence of the research councils themselves.  Overall I think they do a very good job of a very complex task.  They must receive many hundreds of applications, perhaps thousands, every year, and they ensure each is reviewed, discussed at committee and a decision taken.  Feedback is usually provided, and clearly UK science punches above its weight in global terms, so they must be doing something right: by and large, they are funding the right things.

I’m also aware that they have had their budgets cut to the bone over the last decade, and by all accounts (anecdotal, so I can’t provide links) the Swindon office has been cut to the bare minimum needed to run a functional system.  In other words, in the face of cuts they have protected research spending.  Good work 🙂

I kind of knew GCRF was in trouble when I heard there had been 1400 preliminary applications.  A £40M pot with expected grants of £600k means only around 66 will be funded.  That’s quite a sift!
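To make that arithmetic explicit, here is the back-of-the-envelope version in Python, using only the figures quoted above:

```python
# Back-of-the-envelope GCRF arithmetic, using the figures quoted above
pot = 40_000_000          # £40M total fund
grant_size = 600_000      # expected value of each grant, £600k
applications = 1400       # preliminary applications received

n_fundable = pot // grant_size             # ~66 grants
success_rate = n_fundable / applications   # ~4.7%

print(f"~{n_fundable} fundable grants; success rate ~{success_rate:.1%}")
```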

The argument will go that, with that sheer number of applications, there is no way the research councils can provide feedback.  Besides, it was only a preliminary application anyway, so it matters less.

I couldn’t disagree more, on both counts.

First of all, let’s deal with the “preliminary” application thing.  Here is what had to happen to get our preliminary application together:

  • Initial exchange of ideas via e-mail, meetings held, coffee drunk
  • Discussions with overseas collaborators in an ODA country via Skype
  • 4-page concept note submitted to Roslin Science Management Group (SMG)
  • SMG discussed the note at its weekly meeting, feedback provided
  • Costings form submitted and acted on by Roslin finance
  • Quote for sequencing obtained from Edinburgh Genomics
  • Costings provided by two external partners, including partner in ODA country
  • Drafts circulated, commented on, acted on.
  • BBSRC form filled in (almost 2000 words)
  • Je-S form filled in, CVs gathered, formatted and attached, form submitted

In actual fact this is quite a lot of work.  I wouldn’t want to guess at the cost.

Do we deserve some feedback?  Yes.   Of course we do.

When my collaborators ask me why this was rejected, what do I tell them?  “I don’t know”?  Really?

Secondly, let’s deal with the “there were too many applications for us to provide feedback” issue.  I have no idea how these applications were judged internally, but someone somewhere read each one; they judged it; they scored it; forms were filled in; bullet points written; e-mails sent; meetings had; a ranked list of applications was created.  Somewhere, somehow, information about the quality of each proposal was generated – so why can we not have access to it?  Paste it into an e-mail and click send.  I know it takes a bit of work, but we put in a lot of work too, as did 1400 other applicants.  We deserve feedback.

At the moment we just don’t know – was the science poor?  Not ODA enough?  Not applied enough?  Too research-y?  Too blue sky?  Wrong partners?  We are floundering here and we need help.

Feedback on failed proposals is essential.  It is essential if we are to improve, especially for early- and mid-career researchers, the ones who haven’t got secure positions and who are being judged on their ability to bring in money.  Failure is never welcome, but feedback always is.  It helps us understand the decision-making processes of committees so we can do better next time.  We are always being told to “request feedback”, so when it doesn’t come it feels like a betrayal.  How can we do better if we don’t know how and why we failed?

So come on, research councils: yes, you do a great job; yes, you fund great science, in the face of some savage budget cuts.  But please think again about your decision not to provide feedback.  We need it.

6 Comments

  1. Well, I am going to say what I think is happening, though it may be of little help. Obviously it is just a personal opinion.

    There is no feedback because there is no objective reason why you did not get that grant. The real problem is that there were too many really good proposals and too few options to fund them. The old-fashioned review system that we came up with, and still employ, worked well when 30% of grants got funded, but it breaks down when only 5% of them do.

    Most scientists will never admit to this, but I am convinced that we as scientists cannot objectively identify the top 5 grants out of a hundred. FWIW I don’t think grants can be ranked at that level; at that point tacit preferences, hidden biases, pre-judgments and subjectivity will dominate the outcome (the toy simulation after this comment illustrates the point).

    In computer science research there is evidence that papers winning “best of conference” awards do not go on to garner more citations, and that alternative review panels at very selective conferences produce wildly different sets of accepted papers.

    Hence the problem is that we have reached the point where there are substantially more equally good and worthy proposals than there are funding opportunities. So the reason for not getting this grant is simply this – somebody had to lose, and this time it was you and many others just like you.

    But of course this is a sort of heresy; after all, this is SCIENCE, so it has gotta be objective. I don’t expect my view to be overly popular. For me, though, it works and helps me get over it more quickly: I chalk it up to luck. Alas, the thing with luck is that some people can be very unlucky …
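The argument in the comment above can be illustrated with a toy simulation. This is a sketch only: the normally distributed quality and reviewer noise below are invented for illustration (only the 1400/66 figures come from the post), and it is not how the research councils actually score proposals. If proposals are of broadly similar true quality and review scores carry even modest noise, two independent panels typically agree on only a minority of their funded lists:

```python
import random

# Toy model: 1400 proposals of broadly similar true quality, scored
# by two independent panels whose scores carry reviewer noise.
# Distributions and noise level are invented for illustration.
random.seed(42)
n_proposals, n_funded = 1400, 66

true_quality = [random.gauss(0, 1) for _ in range(n_proposals)]

def panel_scores(quality, noise_sd=1.0):
    """One panel's scores: true quality plus independent reviewer noise."""
    return [q + random.gauss(0, noise_sd) for q in quality]

def funded_set(scores, k):
    """Indices of the k top-scoring proposals."""
    return set(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])

panel_a = funded_set(panel_scores(true_quality), n_funded)
panel_b = funded_set(panel_scores(true_quality), n_funded)

overlap = len(panel_a & panel_b) / n_funded
print(f"Two independent panels agree on {overlap:.0%} of the funded list")
```

Raising the noise, or narrowing the spread of true quality, pushes the agreement towards pure chance (66/1400, about 5%), which is the commenter’s point about luck dominating at very low funding rates.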

  2. biomickwatson

    1st August 2016 at 8:46 pm

    I actually agree. Spot on. The research councils were ill-prepared for this funding (not their fault!) and ill-prepared for the volume of applications.

  3. While I agree with you that feedback should be standard even when there has been an avalanche of proposals to the MRC and BBSRC calls, I suspect you misinterpreted the first stage of the BBSRC call. Unlike the two MRC Foundation calls, outlines were not judged on their quality; to quote the assessment criteria, “The research quality will not be assessed at that outline stage”. So presumably your application didn’t fit the aims or the scope of the call.

    In many ways not receiving feedback is particularly unhelpful here, as many researchers are being asked to operate in a space that is new to them and, given GCRF is due to be ramped up over the spending period, will need better guidance on where they fell down on “fit” in future. At least the MRC (in reply to their 923 applicants) put each application into one of four categories: low relevance to health in low/middle income countries; relevant to health in LMICs but suited to conventional funding schemes; suitable but not a high priority for further research at this time; and suitable and encouraged to proceed to a full application.

    We can only hope this is a learning opportunity for all of us.

  4. All the more reason why it should not be too hard to provide feedback. The criteria should be clear and simple to apply, and it should be equally straightforward to let us know where we have fallen down; we are not being judged on a complex issue like science quality. I agree with the blog and the first response above. Everyone has raised their game to the point where funding is, to a substantial extent, now based on chance.

    FWIW our MRC application was judged to be in an area of low relevance to LMIC health, even though we provided WHO figures in the proposal to show it is actually a massive problem. I won’t appeal (though there was a recent case where someone took legal action over a large EU grant and overturned the decision not to fund), but I agree that funding mechanisms are largely broken, despite the strenuous efforts of the research councils.

  5. biomickwatson

    2nd August 2016 at 7:28 am

    I’m not sure I misinterpreted – I read the guidelines very carefully. In fact BBSRC’s were quite loose, stating that even research carried out entirely in the UK, but with some relevance to ODA countries, was compliant. We proposed work on samples collected in ODA countries, working with an in-country partner. It’s really hard to see how we weren’t compliant. I think they got far more applications than they expected and had to introduce extra criteria to the sift.

  6. I am not sure that nearly 4/5 of 500 applications would be out of scope, and I’m pretty certain that our (rejected) application was not. The rejection e-mail was just the standard “there were other stronger applications”. The guidance did allow BBSRC room to sift applications, and this is clearly necessary when trying to get from c.500 to c.20. Like the author of the blog, though, I’d love to know how our application was perceived, to help decide what to do next.

