
by Holly Else

[Image caption: Reviewers give higher scores to grant applications from men than to those from women. Credit: TommL/Getty]

Grant reviewers award lower scores to proposals from women than to those from men, even when they don’t know the gender of the applicant, an analysis of thousands of submissions to the Bill & Melinda Gates Foundation has found (1).

That’s because male and female scientists use different types of words in their grant applications, according to the study, published by the US National Bureau of Economic Research.

The study finds that women are more likely to choose words specific to their field to describe their science, whereas men tend to use less precise terms. These broader terms seem to be preferred by the reviewers who decide how to distribute the cash, says the analysis — even though proposals containing those words don’t lead to better research outcomes.

The findings aren’t surprising, says Kuheli Dutt, who works in academic affairs and diversity at Columbia University in New York City. Dutt sees parallels with research showing that men are more likely to boast and overstate their performance in tests, whereas women are more likely to be cautious in their statements (2). Using broad words might lead to sweeping claims, but narrow words might imply more cautious claims, she says.

Loaded language
Previous research had highlighted how differences in the way men and women use language can drive bias. For example, some studies show that the wording of some job adverts can put women off applying, and that women in the geosciences are less likely than their male counterparts to receive a recommendation letter whose tone suggests that they are outstanding candidates (3).

But this is the first time that ‘gendered’ language has been explored in grant applications, says Julian Kolev, who studies entrepreneurship at Southern Methodist University in Texas and led the work.

Kolev’s analysis looked at almost 7,000 proposals submitted to the Grand Challenges Explorations programme of the Bill & Melinda Gates Foundation between 2008 and 2017. The fund awards grants of between $100,000 and $1 million to address challenges in global health and is open to anyone through a two-page online application. Reviewers are blind to the gender of the applicants.

The researchers singled out the applications from US researchers and sought information from the Gates Foundation on applicants’ gender, discipline and where they work. The group also looked at each scientist’s publication record and grant history before and after the application.

The team found that women received significantly lower scores from reviewers than men did. This couldn’t be explained by the applicants’ experience, publication record or the gender of the reviewers. Instead, it seemed to come down to the communication style of the proposal.
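To make that kind of test concrete, here is a minimal Python sketch of a regression of the sort the study describes. It is an illustration of the approach, not the authors’ actual code; the file name and all column names are hypothetical.

```python
# Illustrative only: hypothetical data file and column names.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("gates_reviews.csv")  # one row per proposal review

# Regress review scores on applicant gender, controlling for the factors
# the study rules out: experience, publication record and reviewer gender.
model = smf.ols(
    "score ~ applicant_is_female + years_experience"
    " + prior_publications + reviewer_is_female",
    data=reviews,
).fit()
print(model.summary())
# A gender coefficient that stays negative and significant despite the
# controls points to something else, such as communication style.
```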

The researchers found that men tended to use ‘broad’ words, such as “control”, “detection” and “bacteria”, more often. These were defined as words that appeared at roughly the same rate in proposals regardless of topic. By contrast, women favoured ‘narrower’, more topic-specific terms, such as “community”, “oral” and “brain”. The authors linked broad words to higher review scores, and narrow ones to lower scores.
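As a rough illustration of that definition, one way to score a word’s ‘broadness’ is to measure how evenly its usage rate is spread across topics. The sketch below is my own construction, not the paper’s method: it uses the coefficient of variation, so low values mean broad and high values mean narrow.

```python
# Illustrative sketch, not the paper's method: a word used at a similar
# rate in every topic is 'broad'; one concentrated in a few topics is
# 'narrow'.
from collections import Counter
import numpy as np

def broadness(word, proposals_by_topic):
    """proposals_by_topic maps topic -> list of tokenised proposals."""
    rates = []
    for docs in proposals_by_topic.values():
        counts = Counter(tok for doc in docs for tok in doc)
        total = sum(counts.values())
        rates.append(counts[word] / total if total else 0.0)
    rates = np.asarray(rates)
    if rates.mean() == 0:
        return np.inf  # word never appears at all
    return rates.std() / rates.mean()  # low = broad, high = narrow

# On the study's examples, 'control' should score lower (broader)
# than a topic-specific word such as 'oral'.
```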

But funded applications that contained many broad words didn’t result in work that led to more publications and future grants, the researchers found. And when women secured funding, they generally outperformed men on these measures.

Closing the gap

The Gates Foundation says that it is committed to ensuring gender equality and that its grand-challenges programme uses blind reviews in an attempt to eliminate reviewer bias. It is also reviewing the results of this study.

Kolev suggests that grant reviewers could be trained to limit their sensitivity to communication styles. The make-up of the review panel also seems important. “We consistently show that female reviewers’ scores do not favour proposals from male applicants in the way that male reviewers’ scores do,” he notes. “So increasing the number of female reviewers is one potential way to mitigate the effects we find.”

doi: 10.1038/d41586-019-01402-4

References
1. Kolev, J., Fuentes-Medel, Y. & Murray, F. Natl Bureau Econ. Res. Working Paper No. 25759 (2019); https://www.nber.org/papers/w25759

2. Reuben, E., Sapienza, P. & Zingales, L. Proc. Natl Acad. Sci. USA 111, 4403–4408 (2014).

3. Dutt, K. et al. Nature Geosci. 9, 805–808 (2016).

https://www.nature.com/articles/d41586-019-01402-4

Peer reviewers are four times more likely to give a grant application an “excellent” or “outstanding” score than a “poor” or “good” one when the applicants themselves have suggested them, an analysis of Swiss funding applications has found.

The study, at the Swiss National Science Foundation (SNSF), was completed in 2016, and the SNSF acted quickly on its findings, barring grant applicants from recommending referees.

The authors, who are affiliated with the SNSF, posted their results online at PeerJ Preprints on 19 March, and in their paper call on other funders to reconsider their funding processes.

“I think this practice should be abolished altogether,” says study co-author Anna Severin, a sociologist who studies peer review at the University of Bern. Other experts are also wary of the problems that author-picked peer reviewers might cause, but some question whether banning them altogether is the right step.

The study examined more than 38,000 reviews, written by about 27,000 peer reviewers from all disciplines, of nearly 13,000 SNSF grant applications submitted between 2006 and 2016. The authors found that reviewers nominated by applicants gave higher evaluation scores, on average, than did referees chosen by the SNSF.
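A minimal sketch of that comparison, assuming hypothetical column names (“score”, “grade”, “nominated_by_applicant”) rather than the SNSF’s actual data layout:

```python
# Hypothetical column names; illustrative only.
import pandas as pd

reviews = pd.read_csv("snsf_reviews.csv")

# Average evaluation score by how the reviewer was selected.
print(reviews.groupby("nominated_by_applicant")["score"].mean())

# Odds of a top grade ('excellent'/'outstanding') versus a bottom grade
# ('poor'/'good') in each group, echoing the 'four times' finding above.
ends = reviews[reviews["grade"].isin(
    ["excellent", "outstanding", "poor", "good"])]
for nominated, grp in ends.groupby("nominated_by_applicant"):
    top = grp["grade"].isin(["excellent", "outstanding"]).sum()
    print(nominated, top / (len(grp) - top))
```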

Higher scores
The study found that reviewers affiliated with non-Swiss institutions gave higher evaluation scores, on average, than those based in the country. Male reviewers gave higher scores than female reviewers did, and male applicants received higher scores than female applicants, although the difference was small. Academics aged over 60 received the best feedback, regardless of their gender.

The findings echo those of previous studies of manuscript peer review, which have found that author-nominated reviewers rate papers more favourably than do referees picked by journal editors.

Liz Allen, who is the director of strategic initiatives at the open-access publisher F1000, says that the latest study is robust, but notes that making a policy change based solely on its data is questionable. “This almost automatically assumes that the scores must be ‘too high’ and therefore biased instead of perhaps testing out who the reviewers were and whether there were reasons why the scores might have been higher,” says Allen, who is also the former head of evaluation at the UK biomedical funder Wellcome Trust.

Johan Bollen, who studies complex computer systems and networks at Indiana University Bloomington, says he sees benefits to both sides of the argument. Grant applicants or study authors “have important information with respect to the experts that are most suited to provide an in-depth and knowledgeable review of their proposal”. But it might create an opportunity for authors to bias the reviewing process, he adds.

A new system
Bollen has previously argued for a system in which all researchers are guaranteed some money, provided they anonymously allocate a fraction of their funding to researchers of their own choice. The goal would be to shift the focus from funding projects to funding people.
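As a toy illustration of how such a system might behave, the sketch below simulates a small community under that rule. It is my own construction, not Bollen’s model, and all parameters are invented.

```python
# Toy simulation of a 'fund people, not projects' scheme. Numbers are
# invented, and recipients are picked at random purely to keep the sketch
# short; in the real proposal, researchers direct the money to peers
# whose work they value.
import random

researchers = [f"r{i}" for i in range(10)]
BASE = 100_000        # guaranteed annual base funding
GIVE_FRACTION = 0.5   # fraction each researcher must pass on

funds = dict.fromkeys(researchers, 0.0)
for year in range(10):
    for r in researchers:
        funds[r] += BASE  # everyone receives the guaranteed base amount
    transfers = dict.fromkeys(researchers, 0.0)
    for r in researchers:
        gift = funds[r] * GIVE_FRACTION
        funds[r] -= gift
        # Anonymous allocation to one peer (random here, chosen in reality).
        transfers[random.choice([x for x in researchers if x != r])] += gift
    for r, amount in transfers.items():
        funds[r] += amount

# Researchers whom many peers choose to support end up with larger budgets.
print(sorted(funds.items(), key=lambda kv: -kv[1]))
```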

Funding agencies around the world take different approaches to choosing grant reviewers. The US National Science Foundation does consider nominated reviewers, as well as the names of any reviewers whom applicants say are not fit to evaluate their work. Applicants to the US National Institutes of Health, however, are not allowed to suggest potential reviewers.

A spokesperson for UK Research and Innovation, Britain’s central research funder, told Nature that the organization’s individual, topic-based research councils invite applicants to nominate prospective peer reviewers, but that suggested reviewers are not always used. When they are, the process also includes at least one additional referee, the spokesperson said.

Finding reviewers who are willing to referee papers or grant applications can also be a struggle, notes study co-author João Martins, a data scientist at the European Research Council Executive Agency in Brussels. A 2018 survey of more than 11,000 researchers worldwide found growing ‘reviewer fatigue’: journal editors must now invite, on average, more peer reviewers per manuscript to get each review completed.

https://www.nature.com/articles/d41586-019-01198-3