
News and Media

Infographic and Video of the Application Workflow for Organizations

Resubmitted from: Grants.gov | June 20, 2018 at 4:00 am | Tags: Grants.gov Workspace, How to Apply for a Federal Grant, Infographic, Video | Categories: Applicants, Training | URL: https://wp.me/p7pTup-Zs

by Grants.gov

Applying for a federal grant can feel daunting – even for a seasoned veteran. The average federal grant application involves a multitude of decisions, from filling in form fields to communicating with collaborators.

The following graphic and its accompanying video break this complicated endeavor into four high-level phases.

For each of these phases, Grants.gov offers training videos and step-by-step instructions, so be sure to take advantage of these resources and share them with your team members and colleagues.

Grants.gov Application Workflow Infographic

Reach Out to NIH Staff – We’re Here to Help

Resubmitted from: ColumbuM | May 9, 2018 at 3:25 pm | Categories: Uncategorized | URL: https://wp.me/p7Dr3j-4Hj

by ColumbuM

We had the pleasure of interacting with over 900 applicants and grantees at last week’s NIH Regional Seminar on Program Funding and Grants Administration in Washington, DC. A recurring theme in many presentations was the importance of reaching out to NIH staff throughout the grant application and award process.

Most folks know to call the eRA Service Desk when they run into issues with ASSIST or eRA Commons. But do you know where to go for other support? The best people to talk with about the scientific or administrative information in your particular application or award are in the NIH institute or center that may fund the grant. Our resource on Contacting Staff at the NIH Institutes and Centers will help you understand the roles of NIH program officials, scientific review officers, and grants management officials; when to contact them; and where to find their contact information.

Looking for an NIH Program Official in Your Research Area?

Resubmitted from: Open Mike Blog Team | April 16, 2018 at 12:03 pm | Categories: Uncategorized | URL: https://wp.me/p7Dr3j-4GJ

by Open Mike Blog Team

For years researchers have used the Matchmaker feature in NIH RePORTER to identify NIH-funded projects similar to their supplied abstracts, research bios, or other scientific text. Matchmaker was recently enhanced to make it just as easy to identify NIH program officials whose portfolios include projects in your research area.

After entering your scientific text (up to 15,000 characters), Matchmaker will analyze the key terms and concepts to identify up to 500 similar projects. Those projects will continue to show on the Projects tab with handy charts to visualize the results and quickly filter identified projects by Institute/Center, Activity Code, and Study Section. A new Program Official tab identifies the program officials associated with the matched projects and includes its own filters for Institute/Center and Activity Code. From the list of program officials you are one click away from their contact information and matched projects in their portfolios. Never before has it been so easy to answer the question “Who at NIH can I talk to about my research?”
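The post does not describe Matchmaker's actual matching algorithm, but the general technique, scoring text similarity between supplied scientific text and a corpus of project abstracts, can be sketched with TF-IDF and cosine similarity. A minimal illustration follows; the corpus, query, and variable names are hypothetical, not RePORTER's implementation:

```python
# Illustrative sketch only: not Matchmaker's documented algorithm.
# Shows the general idea of matching supplied text to similar projects
# using TF-IDF cosine similarity. Corpus and names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

project_abstracts = [
    "CRISPR-based gene editing in cardiac tissue ...",
    "Longitudinal cohort study of type 2 diabetes ...",
    "Deep learning for tumor segmentation in MRI ...",
]  # stand-in for the RePORTER project corpus

# Matchmaker accepts up to 15,000 characters of scientific text
query = "Machine learning methods for medical image analysis"[:15000]

vectorizer = TfidfVectorizer(stop_words="english")
corpus_matrix = vectorizer.fit_transform(project_abstracts)
query_vector = vectorizer.transform([query])

# Rank projects by similarity and keep the top matches
# (Matchmaker returns up to 500 similar projects)
scores = cosine_similarity(query_vector, corpus_matrix).ravel()
top = scores.argsort()[::-1][:500]
for i in top:
    print(f"{scores[i]:.3f}  {project_abstracts[i][:60]}")
```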


“Cover Letters and their Appropriate Use” Podcast Now Available

Resubmitted from: NIH Staff | April 16, 2018 at 11:52 am | URL: https://wp.me/p7Dr3j-4GG

by NIH Staff

Ever wonder what you should and shouldn’t put in a grant application cover letter? Dr. Cathleen Cooper, director of the Division of Receipt and Referral in NIH’s Center for Scientific Review, explains just that in the latest addition to our “All About Grants” podcast series – “Cover Letters and Their Appropriate Use” (MP3 | Transcript).

All About Grants podcast episodes are produced by the NIH Office of Extramural Research and designed for investigators, fellows, students, research administrators, and others just curious about the application and award process. The podcast features NIH staff members who talk about the ins and outs of NIH funding and provide insights on grant topics from those who live and breathe the information. Listen to more episodes via the All About Grants podcast page, through iTunes, or by using our RSS feed in your podcast app of choice.

There’s No I In Team: Assessing Impact of Teams Receiving NIH Funding

Almost 11 years ago, Stefan Wuchty, Benjamin Jones, and Brian Uzzi (all of Northwestern University) published an article in Science on “The Increasing Dominance of Teams in Production of Knowledge.” They analyzed nearly 20 million papers published over 5 decades, along with 2.1 million patents, and found that across all fields the number of authors per paper (or patent) steadily increased, that teams were coming to dominate individual efforts, and that teams produced more highly cited research.

In a Science review paper published a few weeks ago, Santo Fortunato and colleagues offered an overview of the “Science of Science.” One of their key messages was that “Research is shifting to teams, so engaging in collaboration is beneficial.”

I thought it would be worth exploring this concept further using NIH grants. For this post, data were acquired using an NIH portfolio analysis tool called iSearch. This platform provides easy access to carefully curated, extensively linked datasets of global grants, patents, publications, clinical trials, and approved drugs.

One way of measuring team size is to count the number of co-authors on published papers. Figure 1 shows box-and-whisker plots of author counts for 1,799,830 NIH-supported papers published between 1995 and 2017. The black diamonds represent the means. We can see from these data that the author counts on publications resulting from NIH support have steadily increased over time (mean from 4.2 to 7.4, median from 4 to 6).

Figure 1 shows box and whisker plots highlighting the number of authors on publications supported by NIH funding. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication, from 0 to 20. The black diamonds represent the mean for each plot.
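As a rough illustration (not the post's actual iSearch pipeline), per-year author-count box plots like Figure 1's can be produced with pandas and matplotlib. The input file and column names below are hypothetical:

```python
# Sketch of the Figure 1 analysis: box plots of authors per paper by fiscal
# year. The CSV file and columns ("fiscal_year", "n_authors") are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

papers = pd.read_csv("nih_supported_papers.csv")  # one row per publication

# Mean and median author counts per year, as reported in the post
summary = papers.groupby("fiscal_year")["n_authors"].agg(["mean", "median"])
print(summary)

# Box-and-whisker plots by year, with means marked
# (the post's black diamonds)
years = sorted(papers["fiscal_year"].unique())
data = [papers.loc[papers["fiscal_year"] == y, "n_authors"] for y in years]
fig, ax = plt.subplots(figsize=(12, 5))
ax.boxplot(data, labels=years, showmeans=True,
           meanprops={"marker": "D", "markerfacecolor": "black"})
ax.set_xlabel("Fiscal year")
ax.set_ylabel("Authors per publication")
plt.show()
```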

Figure 2 shows corresponding data for 765,851 papers that were supported only by research (R) grants; in other words, none cited receiving support from program project (P), cooperative agreement (U), career development (K), training (T), or fellowship (F) awards. We see a similar pattern in which author counts have increased over time (mean from 4.0 to 6.2, median from 4 to 5). Also of note is a drift of the mean away from the median, reflecting an increasingly skewed distribution driven by a subset of papers with large numbers of authors.

Figure 2 shows box and whisker plots highlighting the number of authors on publications supported by NIH research (R) grants. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication, from 0 to 60. The black diamonds represent the mean for each plot.

Next, let’s look at corresponding data for papers that received support from at least one P grant (N=498,790) or at least one U grant (N=216,600), shown in Figures 3 and 4, respectively. Patterns similar to those seen for R awards emerge.

Figure 3 shows box and whisker plots highlighting the number of authors on publications supported by NIH program project (P) grants. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication, from 0 to 25. The black diamonds represent the mean for each plot.

Figure 4 shows box and whisker plots highlighting the number of authors on publications supported by NIH cooperative agreement (U) grants. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication, from 0 to 20. The black diamonds represent the mean for each plot.

Figure 5 focuses on 277,330 R-, P-, or U-supported papers published between 2015 and 2017 and shows author counts for papers supported by R grants only (49%), P grants only (11%), U grants only (8%), R and P grants (16%), R and U grants (7%), and P and U grants (9%). The patterns are not surprising: author counts are higher for papers supported by P and U grants, likely because these are large, multi-component activities that inherently involve many researchers. But even among R grant papers, the clear majority involve multiple authors.

Figure 5 shows box and whisker plots highlighting the number of authors on publications from 2015 to 2017 supported by recent NIH funding. The X axis represents the mechanisms of support including, in order, R awards, P awards, U awards, R and P awards combined, R and U awards combined, and P and U awards combined, while the Y axis is the number of authors on a publication, from 0 to 25. The black diamonds represent the mean for each plot.
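Figure 5's grouping reduces each paper's supporting grants to the combination of R, P, and U mechanisms involved. A minimal sketch of that categorization step follows; the helper function and example activity codes are illustrative, not the post's actual code:

```python
# Illustrative sketch: map a paper's supporting activity codes
# (e.g. R01, P30, U54) to the Figure 5 support categories.
def support_category(activity_codes):
    # Keep only the mechanism letters the post groups by: R, P, U
    kinds = frozenset(code[0] for code in activity_codes) & frozenset("RPU")
    labels = {
        frozenset("R"): "R only",
        frozenset("P"): "P only",
        frozenset("U"): "U only",
        frozenset("RP"): "R and P",
        frozenset("RU"): "R and U",
        frozenset("PU"): "P and U",
    }
    return labels.get(kinds, "other")

print(support_category({"R01", "P50"}))   # -> R and P
print(support_category({"U54"}))          # -> U only
```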

Finally, in Figure 6 we show a scatter plot (with a generalized additive model smoother) of relative citation ratio (RCR) according to author count for NIH-supported papers published in 2010. As a reminder, RCR is a metric that uses citation rates to measure influence at the article level. Consistent with previous literature, an increased author count is associated with higher citation influence; in other words, the more authors on a paper, the more likely it is to be influential in its field.

Figure 6 shows a scatterplot highlighting the number of authors and the relative citation ratio for R supported papers in 2010. The X axis represents the number of authors on a logarithmic scale, while the Y axis is the relative citation ratio, also on a logarithmic scale. A best fit line is displayed on the graph.
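The smoother in Figure 6 is a generalized additive model fit; a rough stand-in using LOWESS from statsmodels conveys the same idea on log-log axes. The input file and column names below are hypothetical:

```python
# Sketch of the Figure 6 analysis: citation influence (RCR) vs. author count.
# The post fit a generalized additive model smoother; LOWESS is used here as
# a simple stand-in. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

papers = pd.read_csv("nih_papers_2010.csv")  # columns: n_authors, rcr

x = np.log10(papers["n_authors"])
y = np.log10(papers["rcr"].clip(lower=0.01))  # RCR can be ~0; clip before log

smoothed = lowess(y, x, frac=0.3)  # returns sorted (x, fitted) pairs

fig, ax = plt.subplots()
ax.scatter(x, y, s=5, alpha=0.2)
ax.plot(smoothed[:, 0], smoothed[:, 1], color="red")
ax.set_xlabel("Authors per paper (log10)")
ax.set_ylabel("Relative citation ratio (log10)")
plt.show()
```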

Summarizing these findings:

  • Consistent with prior literature, NIH-funded extramural research, including research funded by R grants, produces mostly multi-author papers, with increasing numbers of authors per paper over time. These findings are consistent with the growing importance of team science.
  • Mechanisms designed to promote larger-scale team science (mainly P and U grants) generate papers with greater numbers of authors.
  • Greater numbers of authors are associated with greater citation influence.
It is important to understand that, even in this competitive funding environment, research is shifting to teams. And when we look more closely at the impact of the shift, we see that collaboration is proving to move science forward in important ways. How big should teams be? Some recent literature suggests that small teams are more likely than large teams to produce disruptive papers. A few years ago, my colleagues published a paper on the NIH-funded research workforce; they found that the average team size was 6. Is this optimal? We don’t know.

There is much more for us to look at in terms of the role of team science in NIH supported research. In the meantime, it’s great to see more confirmation that scientific collaboration is truly beneficial to moving science forward.

College of Health Sciences Faculty Participate in Research Development Fellowship

Eight faculty members within the College of Health Sciences will be participating in a Fellowship Program for External Funding Proposal Development provided by Boise State’s Division of Research and Economic Development.

The Division of Research and Economic Development created the Fellowship Program to support faculty research endeavors across the Boise State campus. Faculty from the College of Health Sciences will be the program’s second cohort; the first was drawn from the School of Public Service. The program will mentor faculty through the development and submission of fundable research proposals.

Mentoring will begin this spring and take place over the course of two semesters. The program will hold 11 meetings for faculty to meet with Mendi Edgar, grant development specialist, and Jana LaRosa, coordinator for research and development, both from the Division of Research and Economic Development. Within these meetings, faculty will participate in workshops devoted to the thorough process of developing fundable research proposals. These workshops will include an introduction to defining a research problem, finding appropriate funders, creating relationships with those funders, preparing the proposal, effective grant writing practices, and submitting the proposal. By the end of the program, each faculty member will have created a fundable grant proposal for a minimum award amount of $50,000.

“The College of Health Sciences, Office of Research is delighted to be collaborating with the Division of Research and Economic Development on this Fellowship Program,” said Ella Christiansen, research administrator for the Office of Research. “The Fellowship provides a great opportunity for training and professional development to our faculty. We look forward to assisting the participants with their proposal submissions that result from this program and hope to have them all receive external funding!”

Participants were chosen through an application process that was open to all College of Health Sciences faculty members. Faculty will receive a single course reduction for the Fall 2018 semester and are eligible for up to $1,500 in research funds to be used in support of their proposal project. Uses of these funds include gathering data and traveling to conferences or training opportunities.

Faculty participants include:

  • Karin Adams, assistant professor, Department of Community and Environmental Health
  • Jenny Alderden, assistant professor, School of Nursing
  • Tyler Brown, assistant professor, Department of Kinesiology
  • Stephanie Hall, clinical assistant professor, Department of Kinesiology
  • Eric Martin, assistant professor, Department of Kinesiology
  • Nicole O’Reilly, assistant professor, School of Social Work
  • Ellen Schafer, assistant professor, Department of Community and Environmental Health
  • Lucy Zhao, assistant professor, School of Nursing

“This is a great group of researchers, as each of the schools within the college are represented,” said Christiansen. “We hope that having this diversity of disciplines and research interests will spark conversations and future collaborations.”

“We are so proud of our faculty participating in this fantastic fellowship program,” said Tim Dunnagan, dean of the College of Health Sciences. “We are grateful to Vice President Mark Rudin and his team in Research and Economic Development for offering this fellowship and for all of their generous support as we grow our research within the college.”

Do Reviewers Read References? And If So, Does It Impact Their Scores?

Resubmitted from: Mike Lauer | March 30, 2018 at 9:33 am | URL: https://wp.me/p7Dr3j-4Ge

by Mike Lauer

In March 2017, we wrote about federal funders’ policies on interim research products, including preprints. We encouraged applicants and awardees to include citations to preprints in their grant applications and progress reports. Some of your feedback pointed to the potential impact of this new policy on the peer review process.

Some issues will take a while to explore as preprints become more prevalent. But some we can dig into immediately. For example, how do references cited in an application impact review? To start to address this question, we considered another one as well: do peer reviewers look at references – either those cited by applicants or others – while evaluating an application? We had heard anecdotes, ranging from “Yes, I always do,” to “No, I don’t need to,” but we didn’t have data one way or the other. And if reviewers do check references, how does it impact their understanding and scoring of an application?

So, together with colleagues from the NIH Center for Scientific Review (CSR), we reached out to 1,000 randomly selected CSR reviewers who handled applications for the January 1, 2018 Council Round. Equal numbers of chartered (i.e., permanent) and temporary reviewers were solicited to participate (n=500 each) over a three-week period, from November 16 to December 8, 2017.

Our survey focused on the last grant application where they served as primary reviewer. Specifically, we asked whether they looked up any references that were included in the application (i.e., internal references) and whether they looked up any that were not included in the application (i.e., external references). Depending on their answers to each of these questions, we also asked certain respondents follow-up questions to better understand their initial feedback. We felt it would be interesting to know, for example, how reading the paper or abstract impacted their understanding of the application and their score.

We received 615 responses (62% of total), including 306 chartered members and 309 temporary members. Figure 1 shows the responses related to whether they looked up references, either internal or external to the application. Most reviewers answered yes – particularly for internal references.

Figure 1 shows a bar graph displaying data on whether reviewers looked up any references during their review. The graph is broken up into three groups representing all reviewers, chartered members, and temporary members. Each group is further subdivided into internal references and external references. Finally, each sub-group shows bars corresponding to a Yes (green), No (orange), or Do Not Recall (gray) response. The Y axis is the percentage of respondents, from 0 to 100 percent. A note on the graph indicates that 1,000 reviewers were solicited overall, with 615 respondents (a 62 percent response rate): 306 chartered members and 309 temporary members. The margin of error is plus or minus 4 percent.
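As a quick plausibility check (my own arithmetic, not from the post), the quoted plus-or-minus 4 percent matches the standard normal-approximation margin of error for a proportion at n = 615:

```python
# Margin of error for a sample proportion at 95% confidence:
# MOE = z * sqrt(p * (1 - p) / n), maximized at p = 0.5.
import math

n = 615   # survey respondents
p = 0.5   # worst-case proportion
z = 1.96  # 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"{moe:.3f}")  # ~0.040, i.e. the post's +/- 4 percent
```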

Figure 2 goes a bit deeper – as a secondary question, we asked whether the references affected reviewers’ understanding of the applications. The clear majority said yes. Figure 4 shows that most reviewers (~85%) found the references improved their understanding.

Figure 2 shows a bar graph describing how the references affected a reviewer’s understanding of an application. The graph is broken up into three groups representing all reviewers, chartered members, and temporary members. Each group is further subdivided into internal references and external references. Finally, each sub-group shows bars corresponding to a Yes (green), No (orange), or Do Not Recall (gray) response. The Y axis is the percentage of respondents, from 0 to 100 percent. A note on the graph indicates that 1,000 reviewers were solicited overall, with 615 respondents (a 62 percent response rate): 306 chartered members and 309 temporary members. The margin of error is plus or minus 4 percent.

Next, we learned that of the reviewers who checked references, about two-thirds reported that the references affected their scoring of the application (Figure 3). References reviewers found on their own (external references) seemed slightly more influential. Figure 4 shows that references could move the score in either direction. References cited in the application were slightly more likely to improve scores than to worsen them, while external references were slightly more likely to worsen scores than to improve them.

Figure 3 shows a bar graph displaying data on whether looking up references affected a reviewer’s score. The graph is broken up into three groups representing all reviewers, chartered members, and temporary members. Each group is further subdivided into internal references and external references. Finally, each sub-group shows bars corresponding to a Yes (green), No (orange), or Do Not Recall (gray) response. The Y axis is the percentage of respondents, from 0 to 100 percent. A note on the graph indicates that 1,000 reviewers were solicited overall, with 615 respondents (a 62 percent response rate): 306 chartered members and 309 temporary members. The margin of error is plus or minus 4 percent.

Figure 4 shows stacked bar charts related to responses for (1) the extent to which the references affected a reviewer’s understanding of the application and (2) how the references affected their score. The graph is broken up into three groups: all respondents, chartered members, and temporary members. Each group is subdivided into responses for internal and external references. The Y axis represents the percentage of respondents, from 0 to 100 percent. Responses used a Likert-type scale ranging from 1, “greatly improved” (green), through 4, neutral (tan), to 7, “made it worse” (red).

Nearly half of the respondents also provided additional comments for us to consider. Here is a sampling of their thoughts:

  • “References are of immense value.”
  • “I look up references to judge the quality of the [principal investigator’s] work in relation to the rest of the field, to learn about the field in general, and to delve into specific questions that might be key to evaluation of the application. This could result in changes to the score in either direction.”
  • “References are useful and sometimes critical.”

This experience was very enlightening. We were pleased to learn that most reviewers do look up references as part of their work in the peer review process, though preprints, at least for now, are too rarely cited in applications to have a clear impact. Further, both chartered and temporary reviewers shared similar perspectives on looking up references, which they noted often affects their understanding of the applications and the resulting scores. Finally, they indicated that references internal to applications often lead reviewers to improve their scores. We may need to revisit this survey as preprints and other interim products become more common.

Overall, this survey demonstrates, yet again, the time and care NIH reviewers spend on applications. They work hard for all of us – NIH, applicants, and the American public – and I am personally grateful to all of them.

I would like to acknowledge Neil Thakur with the NIH Office of Extramural Research, as well as Mary Ann Guadagno, Leo Wu, Huong Tran, Cheng Zhang, Lin Yang, Chuck Dumais, and Richard Nakamura with the NIH Center for Scientific Review, for their work on this project.

Demystifying Funding Opportunity Announcements on Grants.gov—Grant Writing Basics

Resubmitted from: Grants.gov | April 2, 2018 at 4:00 am | Tags: Funding Opportunity Announcement (FOA), Grant Writer, Tips | Categories: Applicants, Grant Writing Basics | URL: https://wp.me/p7pTup-Wa

It is easy to be intimidated when you first encounter a Funding Opportunity Announcement (FOA) on Grants.gov.

There are the four tabs of content. The technical language culled from industry and government programs. Application forms, some of which may require file attachments. And, of course, there is the shiver-inducing closing date.


We have developed the following tips to help applicants (especially those new to the federal grant application process) demystify the FOA and position themselves for a solid submission:

1) Register with Grants.gov and assign roles to your team before digging into an FOA or creating a workspace. If you don’t set up your account properly, you risk facing delays when you are ready to begin work on the application.

2) Read the FOA’s eligibility requirements carefully. After all, you don’t want to spend hours on an application only to realize later that you are not eligible to apply.

3) Preview the forms that you will need to fill out, including any optional ones that might require extra work or file attachments. Identify information or agreements you need that will take a while to track down.

4) Try to visualize what a successful application will look like. Break it down into its component parts – budget data, narrative and storytelling, standard form data, etc.

5) Jot down the agency contact listed in the opportunity. And if you need to, establish a line of communication early in the process so that if you have any program-related questions you can quickly reach out.

6) Plan to submit the final application at least a few days before the closing date, allowing yourself time to fix errors if any are encountered when you click submit.

Do you have other tips for first-time federal grant applicants? Share them below and we will highlight our favorites in a future blog post.

How do you define a “study” for the purposes of providing information on the PHS Human Subject and Clinical Trial form and registering in ClinicalTrials.gov?

Our application instructions provide guidance to submit a study record for each protocol. When in doubt, NIH supports lumping several aims or hypotheses into a single study record, to the extent that makes sense for your research.

Have other questions related to the new PHS Human Subject and Clinical Trial form or NIH clinical trial policies? Find more FAQs and their answers at grants.nih.gov.

NIH Announces Inclusion Across the Lifespan Policy

Last month, NIH announced a revision (NOT-OD-18-116) to a decades-old policy originally conceived in response to concerns that children were not appropriately included in clinical research. These changes broaden the policy to address inclusion of research participants of all ages, and as discussed at the last Advisory Committee to the NIH Director meeting, will apply beginning in 2019 to all NIH-supported research involving human subjects. Our goal is to ensure that the knowledge gained from NIH-funded research is applicable to all those affected by the conditions under study.

To get here, NIH solicited feedback from experts and the public through a Request for Information and a workshop held over the summer. We heard from many of you: pediatricians, geriatricians, primary care providers, statisticians, publishers, bioethicists, and members of the general public. Among the concerns raised were that many trials include poorly justified age-based exclusions (Cherubini 2011, Cruz-Jentoft 2013), and that older adults, who carry a disproportionate burden of disease, are often underrepresented in clinical trials. For example, while nearly a third of US cancer patients are 75 years or older, less than 10% of patients in cancer trials are in this age range (Hurria 2014).

After considering this input, and in accord with the 21st Century Cures Act, our policy now requires that people of all ages, including children under 18 years and older adults, be included in clinical research studies unless there are scientific or ethical reasons not to include them. We outline when certain age groups may be excluded and note that grantees are now required to report annually on the age at enrollment of their participants, along with sex/gender, race, and ethnicity.

So, for application due dates on or after January 25, 2019 (yes, one year from now), if you propose a study involving human subjects, you must have a plan describing how participants across the lifespan will be included, and you must justify the proposed age range of participants. Reviewers will consider whether the proposed age range is appropriate in the context of the specific scientific aims. Should the study be funded, keep in mind that your progress reports will include de-identified individual-level participant data on sex/gender, race, ethnicity, and age at enrollment (in units ranging from hours to years). Ongoing NIH-funded research (type 5 awards) is exempt from this policy, but the policy will apply if you submit a competitive renewal application on or after January 25, 2019.

We understand that sometimes research should exclude certain participants. For example, if the disease does not occur in the excluded group, or the knowledge sought is already available on the excluded group, then this may be an appropriate justification to limit who is in your study. We also recognize that there are situations where participation of certain groups would be unethical, or laws or regulations bar the inclusion of a specific group in research. The Guide notice describes situations in which exclusion of individuals based on age may be justified. Keep in mind that the age distribution of participants should be appropriate in the context of the science.

We look forward to working with you on the implementation of these high-priority inclusion policies, which are designed to assure that our funded research will better help us make informed health and health care choices going forward.