Peer-reviewed scientific grants are more likely to be funded when applicants are allowed to suggest reviewers

Peer reviewers are four times more likely to give a grant application an “excellent” or “outstanding” score rather than a “poor” or “good” one when they are chosen by the grant’s applicants, an analysis of Swiss funding applications has found.

The study, conducted at the Swiss National Science Foundation (SNSF), was completed in 2016, and the SNSF acted quickly on its findings, banning grant applicants from recommending referees.

The authors, who are affiliated with the SNSF, posted their results online at PeerJ Preprints1 on 19 March, and in their paper they call on other funders to reconsider their review processes.

“I think this practice should be abolished altogether,” says study co-author Anna Severin, a sociologist who studies peer review at the University of Bern. Other experts are also wary of the problems that author-picked peer reviewers might cause, but some question whether banning them altogether is the right step.

The study examined more than 38,000 reviews, written by about 27,000 peer reviewers across all disciplines, of nearly 13,000 SNSF grant applications submitted between 2006 and 2016. The authors found that reviewers nominated by applicants gave higher evaluation scores, on average, than did referees chosen by the SNSF.

Higher scores
The study found that reviewers affiliated with non-Swiss institutions gave higher evaluation scores, on average, than those based in the country. Male reviewers gave higher scores than female reviewers did, and male applicants received higher scores than female applicants, although the difference was small. Academics aged over 60 received the best feedback, regardless of their gender.

The findings echo those of previous studies of manuscript peer review2,3, which have found that author-nominated reviewers rate papers more favourably than do referees picked by journal editors.

Liz Allen, who is the director of strategic initiatives at the open-access publisher F1000, says that the latest study is robust, but notes that making a policy change based solely on its data is questionable. “This almost automatically assumes that the scores must be ‘too high’ and therefore biased instead of perhaps testing out who the reviewers were and whether there were reasons why the scores might have been higher,” says Allen, who is also the former head of evaluation at the UK biomedical funder Wellcome Trust.

Johan Bollen, who studies complex computer systems and networks at Indiana University Bloomington, says he sees benefits to both sides of the argument. Grant applicants or study authors “have important information with respect to the experts that are most suited to provide an in-depth and knowledgeable review of their proposal”. But allowing applicants to nominate reviewers might also open the process to bias, he adds.

A new system
Bollen has previously argued for a system in which all researchers are guaranteed some money, provided they anonymously allocate a fraction of their funding to researchers of their own choice. The goal would be to shift the focus from funding projects to funding people.
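Bollen's proposal can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions, not his published model: it assumes equal base grants, a fixed redistribution fraction, and one chosen recipient per researcher; all names and numbers are hypothetical.

```python
def redistribute(funds, choices, fraction=0.5):
    """One round of the redistribution idea: every researcher keeps
    (1 - fraction) of their funding and anonymously passes the rest
    to the peer they chose. Total money is conserved."""
    new_funds = {name: amount * (1 - fraction) for name, amount in funds.items()}
    for name, amount in funds.items():
        new_funds[choices[name]] += amount * fraction
    return new_funds

# Hypothetical example: four researchers with equal base grants,
# each naming one peer to receive half of their allocation.
funds = {"ana": 100.0, "ben": 100.0, "chen": 100.0, "dita": 100.0}
choices = {"ana": "chen", "ben": "chen", "chen": "ana", "dita": "ana"}

funds = redistribute(funds, choices)
print(funds)  # ana and chen each end up with 150.0; the total stays 400.0
```

Repeated over many rounds, allocations would concentrate on the researchers their peers value most, which is the sense in which the scheme funds people rather than projects.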

Funding agencies around the world have different approaches to choosing grant reviewers. The US National Science Foundation does consider nominated reviewers, as well as requests to exclude reviewers whom applicants consider unsuited to evaluate their work. Applicants to the US National Institutes of Health, however, are not allowed to suggest potential reviewers.

A spokesperson for UK Research and Innovation, Britain’s central research funder, told Nature that the organization’s individual, topic-based research councils invite applicants to nominate prospective peer reviewers, but suggested reviewers are not always used. When they are, the process also includes at least one additional referee, the spokesperson said.

Finding reviewers who want to referee papers or grant applications can also be a struggle, notes study co-author João Martins, a data scientist at the European Research Council Executive Agency in Brussels. A 2018 survey of more than 11,000 researchers worldwide found a growing “reviewer fatigue”. As a result, journal editors must now invite, on average, a greater number of peer reviewers to referee manuscripts to get each review completed.

Teenager Codes Plug-In to Expose Corruption

A self-taught 16-year-old coder from Seattle, Washington, has created a web browser plug-in that won’t let you forget the pervasive and corrupting influence of money in politics.

Called “Greenhouse,” the plug-in picks out the names of any members of Congress on a given web page. Users can then mouse-over those members of Congress to see their top donors, and what percentage of their funding came from small-dollar donations. Here’s an example, taken from a story in today’s New York Times about climate skeptics’ opposition to new carbon emission regulations:

Readers of this article, with the “Greenhouse” plug-in installed, might draw a connection between Oklahoma Senator James Inhofe’s climate skepticism and the money his 2012 campaign received from the oil and gas industry and the mining industry ($558,150 and $150,850 respectively).
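The core of what Greenhouse does — scan the text of a page for legislators' names and attach a donor summary to each match — can be sketched in a few lines. The real plug-in is browser code with its own bundled campaign-finance data; this is only an illustration of the matching logic, with a one-entry dataset built from the Inhofe figures quoted above.

```python
import re

# Donor data keyed by legislator name. The Inhofe figures come from the
# article above; the structure and data source are illustrative only.
DONORS = {
    "James Inhofe": {"Oil & Gas": 558150, "Mining": 150850},
}

# One regex that matches any known legislator's name in page text.
NAME_PATTERN = re.compile("|".join(re.escape(name) for name in DONORS))

def annotate(text):
    """Return (name, summary) pairs for every legislator found,
    mimicking what the plug-in shows in its mouse-over tooltip."""
    hits = []
    for match in NAME_PATTERN.finditer(text):
        name = match.group(0)
        top = max(DONORS[name], key=DONORS[name].get)
        summary = f"{name}: top industry {top} (${DONORS[name][top]:,})"
        hits.append((name, summary))
    return hits

page = "Senator James Inhofe questioned the new emission rules."
for _, tooltip in annotate(page):
    print(tooltip)  # James Inhofe: top industry Oil & Gas ($558,150)
```

In the browser, the same pattern-matching would run over the page's text nodes, with each match wrapped in an element that reveals the summary on mouse-over.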

Nicholas Rubin, the concerned (but not-yet-old-enough-to-vote) citizen behind the plug-in, first became interested in the issue when he gave a school presentation on corporate personhood while in the seventh grade. About a year later, Lawrence Lessig — the Harvard law professor and activist — provided Rubin with further inspiration. “I went to see Larry Lessig talk about campaign finance at the town hall here in Seattle. Both of these events sparked an interest in me,” Rubin said. “It made me angry. I remember asking my dad (multiple times) questions like ‘How is this legal?’”

When it came time to test the project, Rubin got in touch with Lessig, who signed on as the first beta tester. “He loved Greenhouse, and helped me by giving feedback and ideas along the way,” Rubin said.

Read more about the plug-in and try it out at Rubin’s site.