News and Media
Monday, September 10, 2018
Starts at 10 am
SUB Hatch CD
Space is limited — if you plan to attend, please RSVP to firstname.lastname@example.org
Speaker: Paul J. Martin, MD
- Importance of mentoring
- Critical skill sets for research success
- NIH grant review insight
- ITHS resources
Paul J. Martin, MD, a co-Principal Investigator of the Institute, served as the Medical Director of Clinical Research Support, the Fred Hutch Clinical Trials Office, from 2011 to 2017. He is a Member of the Fred Hutch and a Professor of Medicine at the UW. Dr. Martin has more than 35 years of experience with hematopoietic cell transplantation at Fred Hutch, focusing on acute and chronic graft-versus-host disease (GVHD).
How do you define a “study” for the purposes of providing information on the PHS Human Subjects and Clinical Trial Information form?
by NIH Staff
We recognize that it may be difficult to determine whether two or more closely related protocols should be considered a single study. Generally, if you have research activities that use the same human subjects population, follow the same core research protocol and procedures, and intend to combine the data for analysis in aggregate, this would be considered a single study for the purposes of the PHS Human Subjects and Clinical Trial form.
When in doubt, at the time of application NIH supports grouping studies that use the same research protocol and the same human subjects population into a single study record, to the extent that the information provided is accurate and understandable to NIH staff and reviewers. You are also encouraged to discuss how to group your studies with your NIH Program Officer.
For studies that will need to register and report in ClinicalTrials.gov, keep in mind that each ClinicalTrials.gov record should be a unique study record in the PHS Human Subjects and Clinical Trial form.
Additional FAQs provide handy tips for using the PHS Human Subjects and Clinical Trial Information form.
Resubmitted from: Grants.gov | June 20, 2018 at 4:00 am | Tags: Grants.gov Workspace, How to Apply for a Federal Grant, Infographic, Video | Categories: Applicants, Training | URL: https://wp.me/p7pTup-Zs
Applying for a federal grant can feel daunting – even for a seasoned veteran. The average federal grant application involves a multitude of decisions, from filling in form fields to communicating with collaborators.
The following graphic and its accompanying video break this complicated endeavor into four high-level phases.
For each of these phases, Grants.gov offers training videos and step-by-step instructions, so be sure to take advantage of these resources and share them with your team members and colleagues.
We had the pleasure of interacting with over 900 applicants and grantees at last week’s NIH Regional Seminar on Program Funding and Grants Administration in Washington, DC. A recurring theme in many presentations was the importance of reaching out to NIH staff throughout the grant application and award process.
Most folks know to call the eRA Service Desk when they run into issues with ASSIST or eRA Commons. But do you know where to go for other support? The best people to talk with about the scientific or administrative information in your particular application or award are in the NIH institute or center that may fund the grant. Our resource on Contacting Staff at the NIH Institutes and Centers will help you understand the roles of NIH program officials, scientific review officers, and grants management officials, when to contact them, and where to find their contact information.
by Open Mike Blog Team
For years researchers have used the Matchmaker feature in NIH RePORTER to identify NIH-funded projects similar to their supplied abstracts, research bios, or other scientific text. Matchmaker was recently enhanced to make it just as easy to identify NIH program officials whose portfolios include projects in your research area.
After entering your scientific text (up to 15,000 characters), Matchmaker will analyze the key terms and concepts to identify up to 500 similar projects. Those projects will continue to show on the Projects tab with handy charts to visualize the results and quickly filter identified projects by Institute/Center, Activity Code, and Study Section. A new Program Official tab identifies the program officials associated with the matched projects and includes its own filters for Institute/Center and Activity Code. From the list of program officials you are one click away from their contact information and matched projects in their portfolios. Never before has it been so easy to answer the question “Who at NIH can I talk to about my research?”
Resubmitted from: NIH Staff | April 16, 2018 at 11:52 am | URL: https://wp.me/p7Dr3j-4GG
by NIH Staff
Ever wonder what you should and shouldn’t put in a grant application cover letter? Dr. Cathleen Cooper, director of the Division of Receipt and Referral in NIH’s Center for Scientific Review, explains just that in the latest addition to our “All About Grants” podcast series – “Cover Letters and Their Appropriate Use” (MP3, Transcript).
All About Grants podcast episodes are produced by the NIH Office of Extramural Research, and designed for investigators, fellows, students, research administrators, and others just curious about the application and award process. The podcast features NIH staff members who talk about the ins and outs of NIH funding, and provide insights on grant topics from those who live and breathe the information. Listen to more episodes via the All About Grants podcast page, through iTunes, or by using our RSS feed in your podcast app of choice.
Almost 11 years ago, Stefan Wuchty, Benjamin Jones, and Brian Uzzi (all of Northwestern University) published an article in Science on “The Increasing Dominance of Teams in Production of Knowledge.” They analyzed nearly 20 million papers published over 5 decades and 2.1 million patents and found that across all fields the number of authors per paper (or patent) steadily increased, that teams were coming to dominate individual efforts, and that teams produced more highly cited research.
In a Science review paper published a few weeks ago, Santo Fortunato and colleagues offered an overview of the “Science of Science.” One of their key messages was that “Research is shifting to teams, so engaging in collaboration is beneficial.”
I thought it would be worth exploring this concept further using NIH grants. For this post, data were acquired using a specific NIH portfolio analysis tool called iSearch. This platform provides easy access to carefully curated, extensively linked datasets of global grants, patents, publications, clinical trials, and approved drugs.
One way of measuring team size is to count the number of co-authors on published papers. Figure 1 shows box-and-whisker plots of author counts for 1,799,830 NIH-supported papers published between 1995 and 2017. The black diamonds represent the means. We can see from these data that the author counts on publications resulting from NIH support have steadily increased over time (mean from 4.2 to 7.4, median from 4 to 6).
Figure 1 shows box and whisker plots highlighting the number of authors on publications supported by NIH funding. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication from 0 to 20. The black diamonds represent the mean for each plot.
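The summary statistics behind a box-and-whisker plot like Figure 1 are easy to compute directly. The sketch below is purely illustrative, using made-up author counts for a single fiscal year rather than NIH's actual iSearch data; `box_stats` is a hypothetical helper that applies the common 1.5×IQR whisker rule.

```python
# Illustrative sketch (not NIH's actual analysis pipeline): the
# five-number summary plus the mean behind one box in a plot like Figure 1.
from statistics import mean, quantiles

def box_stats(author_counts):
    """Quartiles, median, 1.5*IQR whiskers, and mean for one year's papers."""
    q1, med, q3 = quantiles(author_counts, n=4, method="inclusive")
    iqr = q3 - q1
    lo = min(c for c in author_counts if c >= q1 - 1.5 * iqr)  # lower whisker
    hi = max(c for c in author_counts if c <= q3 + 1.5 * iqr)  # upper whisker
    return {"whisker_lo": lo, "q1": q1, "median": med, "q3": q3,
            "whisker_hi": hi, "mean": mean(author_counts)}

# Hypothetical author counts for one fiscal year; the 12-author paper
# falls beyond the upper whisker and would plot as an outlier point.
print(box_stats([2, 3, 4, 4, 5, 6, 7, 8, 12]))
```

Plotting libraries compute the same quantities internally; the black diamonds in the figures correspond to the `mean` value here.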
Figure 2 shows corresponding data for 765,851 papers that were supported only with research (R) grants. In other words, none cited receiving support from program project (P), cooperative agreement (U), career development (K), training (T), or fellowship (F) awards. We see a similar pattern in which author counts have increased over time (mean from 4.0 to 6.2, median from 4 to 5). Also of note is a drift of the mean away from the median, reflecting an increasingly skewed distribution driven by a subset of papers with large numbers of authors.
Figure 2 shows box and whisker plots highlighting the number of authors on publications supported by NIH research (R) grants. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication from 0 to 60. The black diamonds represent the mean for each plot.
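The drift of the mean away from the median is the classic signature of a right-skewed distribution. A toy example (all author counts invented) makes the mechanism concrete:

```python
# Toy demonstration of the mean/median drift noted for Figure 2: a few
# very-high-author-count papers pull the mean up while the median barely moves.
from statistics import mean, median

typical = [4, 4, 5, 5, 6]        # a hypothetical small sample of papers
skewed = typical + [40, 60]      # add two large-consortium papers

# median stays at 5 in both samples; mean jumps from 4.8 to ~17.7
print(median(typical), mean(typical))
print(median(skewed), mean(skewed))
```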
Next, let’s look at corresponding data for papers that received support from at least one P grant (N=498,790) or at least one U grant (N=216,600), shown in Figures 3 and 4 respectively. The patterns are similar to those seen for R awards.
Figure 3 shows box and whisker plots highlighting the number of authors on publications supported by NIH program project (P) grants. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication from 0 to 25. The black diamonds represent the mean for each plot.
Figure 4 shows box and whisker plots highlighting the number of authors on publications supported by NIH cooperative agreement (U) grants. The X axis represents fiscal year from 1995 to 2017, while the Y axis is the number of authors on a publication from 0 to 20. The black diamonds represent the mean for each plot.
Figure 5 focuses on 277,330 R, P, or U-supported papers published between 2015 and 2017 and shows author counts for papers supported on R grants only (49%), P grants only (11%), U grants only (8%), R and P grants (16%), R and U grants (7%), and P and U grants (9%). The patterns are not surprising: author counts are higher for papers supported by P and U grants (likely because these are large, multi-factorial activities inherently involving many researchers), but even for R grant papers the clear majority involve multiple authors.
Figure 5 shows box and whisker plots highlighting the number of authors on publications from 2015 to 2017 supported by recent NIH funding. The X axis represents the mechanisms of support including, in order, R awards, P awards, U awards, R and P awards combined, R and U awards combined, as well as P and U awards combined, while the Y axis is the number of authors on a publication from 0 to 25. The black diamonds represent the mean for each plot.
Finally, in Figure 6 we show a scatter plot (with generalized additive model smoother) of relative citation ratio (RCR) according to author count for NIH-supported papers published in 2010. As a reminder, RCR is a metric that uses citation rates to measure influence at the article level. Consistent with previous literature, an increased author count is associated with higher citation influence – in other words, the more authors a paper has, the more likely it is to be influential in its field.
Figure 6 shows a scatterplot highlighting the number of authors and the relative citation ratio for R supported papers in 2010. The X axis represents the number of authors on a logarithmic scale, while the Y axis is the relative citation ratio also on a logarithmic scale. A best fit line is displayed on the graph.
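The shape of the Figure 6 relationship can be sketched on synthetic data. The post fits a generalized additive model smoother; as a rough stand-in, the sketch below fits an ordinary least-squares line on the log-log scale. Every number here is invented for illustration, including the generative rule for RCR.

```python
# Sketch of a Figure 6-style analysis on synthetic data (not the real
# NIH publication data): fit a line to log(RCR) vs. log(author count).
import math
import random

random.seed(0)
authors = [random.randint(1, 40) for _ in range(500)]
# Hypothetical generative rule: influence rises gently with team size,
# with lognormal noise around the trend.
rcr = [0.8 * a ** 0.3 * math.exp(random.gauss(0, 0.4)) for a in authors]

x = [math.log(a) for a in authors]
y = [math.log(r) for r in rcr]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
print(f"log-log slope ~ {slope:.2f}")  # positive: more authors, higher RCR
```

A GAM smoother would let the trend bend instead of forcing a straight line, but the qualitative conclusion (a positive association on the log-log scale) is the same.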
Summarizing these findings:
- Consistent with prior literature, NIH-funded extramural research, including research funded by R grants, produces mostly multi-author papers, with increasing numbers of authors per paper over time. These findings are consistent with the growing importance of team science.
- Mechanisms designed to promote larger-scale team science (mainly P and U grants) generate papers with greater numbers of authors.
- Greater numbers of authors are associated with greater citation influence.
It is important to understand that, even in this competitive funding environment, research is shifting to teams. And when we look more closely at the impact of the shift, we see that collaboration is proving to move science forward in important ways. How big should teams be? Some recent literature suggests that small teams are more likely than large teams to produce disruptive papers. A few years ago, my colleagues published a paper on the NIH-funded research workforce; they found that the average team size was 6. Is this optimal? We don’t know.
There is much more for us to look at in terms of the role of team science in NIH supported research. In the meantime, it’s great to see more confirmation that scientific collaboration is truly beneficial to moving science forward.
Eight faculty members within the College of Health Sciences will be participating in a Fellowship Program for External Funding Proposal Development provided by Boise State’s Division of Research and Economic Development.
The Division of Research and Economic Development created the Fellowship Program to support faculty research endeavors across the Boise State campus. Faculty from the College of Health Sciences will form the program’s second cohort; the first was offered to the School of Public Service. The program will mentor faculty through the development and submission of fundable research proposals.
Mentoring will begin this spring and take place over the course of two semesters. The program will hold 11 meetings for faculty to meet with Mendi Edgar, grant development specialist, and Jana LaRosa, coordinator for research and development, both from the Division of Research and Economic Development. Within these meetings, faculty will participate in workshops devoted to the thorough process of developing fundable research proposals. These workshops will include an introduction to defining a research problem, finding appropriate funders, creating relationships with those funders, preparing the proposal, effective grant writing practices, and submitting the proposal. By the end of the program, each faculty member will have created a fundable grant proposal for a minimum award amount of $50,000.
“The College of Health Sciences, Office of Research is delighted to be collaborating with the Division of Research and Economic Development on this Fellowship Program,” said Ella Christiansen, research administrator for the Office of Research. “The Fellowship provides a great opportunity for training and professional development to our faculty. We look forward to assisting the participants with their proposal submissions that result from this program and hope to have them all receive external funding!”
Participants were chosen through an application process that was open to all College of Health Sciences faculty members. Faculty will receive a single course reduction for the Fall 2018 semester and are eligible for up to $1,500 in research funds to be used in support of their proposal project. Uses of these funds include gathering data and traveling to conferences or training opportunities.
Faculty participants include:
- Karin Adams, assistant professor, Department of Community and Environmental Health
- Jenny Alderden, assistant professor, School of Nursing
- Tyler Brown, assistant professor, Department of Kinesiology
- Stephanie Hall, clinical assistant professor, Department of Kinesiology
- Eric Martin, assistant professor, Department of Kinesiology
- Nicole O’Reilly, assistant professor, School of Social Work
- Ellen Schafer, assistant professor, Department of Community and Environmental Health
- Lucy Zhao, assistant professor, School of Nursing
“This is a great group of researchers, as each of the schools within the college are represented,” said Christiansen. “We hope that having this diversity of disciplines and research interests will spark conversations and future collaborations.”
“We are so proud of our faculty participating in this fantastic fellowship program,” said Tim Dunnagan, dean of the College of Health Sciences. “We are grateful to Vice President Mark Rudin and his team in Research and Economic Development for offering this fellowship and for all of their generous support as we grow our research within the college.”
by Mike Lauer
In March 2017, we wrote about federal funders’ policies on interim research products, including preprints. We encouraged applicants and awardees to include citations to preprints in their grant applications and progress reports. Some of your feedback pointed to the potential impact of this new policy on the peer review process.
Some issues will take a while to explore as preprints become more prevalent. But some we can dig into immediately. For example, how do references cited in an application impact review? To start to address this question, we considered another one as well: do peer reviewers look at references – either those cited by applicants or others – while evaluating an application? We had heard anecdotes, ranging from “Yes, I always do,” to “No, I don’t need to,” but we didn’t have data one way or the other. And if reviewers do check references, how does that affect their understanding and scoring of an application?
So, together with colleagues from the NIH Center for Scientific Review (CSR), we reached out to 1,000 randomly selected CSR reviewers who handled applications for the January 2018 Council round. Equal numbers of chartered (i.e., permanent) and temporary reviewers were solicited to participate (n=500 each) over a three-week period from November 16 to December 8, 2017.
Our survey focused on the last application on which the respondent served as primary reviewer. Specifically, we asked whether they looked up any references included in the application (i.e., internal references) and whether they looked up any that were not included in the application (i.e., external references). Depending on their answers, we asked certain respondents follow-up questions to better understand their initial feedback. We felt it would be interesting to know, for example, how reading the paper or abstract affected their understanding of the application and their score.
We received 615 responses (62% of those solicited), including 306 from chartered members and 309 from temporary members. Figure 1 shows whether respondents looked up references, either internal or external to the application. Most reviewers answered yes, particularly for internal references.
Figure 2 goes a bit deeper: as a secondary question, we asked whether the references affected reviewers’ understanding of the applications. The clear majority (~85%) said the references improved their understanding.
Next, we learned that of those reviewers who checked references, about two-thirds reported that the references affected their scoring of the application (Figure 3). References reviewers found on their own (external references) seemed slightly more influential. Figure 4 shows that references could move the score in either direction: references cited in the application were slightly more likely to improve scores than worsen them, while external references were slightly more likely to worsen scores than improve them.
Nearly half of the respondents even provided additional comments for us to consider. Here is a sampling of their thoughts:
“References are of immense value.”
“I look up references to judge the quality of the [principal investigator’s] work in relation to the rest of the field, to learn about the field in general, and to delve into specific questions that might be key to evaluation of the application. This could result in changes to the score in either direction.”
“References are useful and sometimes critical.”
This experience was very enlightening. We were pleased to learn that most reviewers do look up references as part of their work in the peer review process, but preprints, at least for now, are too rarely cited in applications to have a clear impact. Further, both chartered and temporary reviewers shared similar perspectives on looking up references, which they noted often affects their understanding of the applications and resulting scores. Finally, they indicated that references internal to applications often lead to reviewers’ improving their scores. We may need to revisit this survey as preprints and other interim products become more common.
Overall, this survey demonstrates, yet again, the time and care NIH reviewers devote to applications. They work hard for all of us: NIH, applicants, and the American public, and I am personally grateful to all of them.
I would like to acknowledge Neil Thakur with the NIH Office of Extramural Research as well as Mary Ann Guadagno, Leo Wu, Huong Tran, Cheng Zhang, Lin Yang, Chuck Dumais, and Richard Nakamura with the NIH Center for Scientific Review for their work on this project.
Resubmitted from: Grants.gov | April 2, 2018 at 4:00 am | Tags: Funding Opportunity Announcement (FOA), Grant Writer, Tips | Categories: Applicants, Grant Writing Basics | URL: https://wp.me/p7pTup-Wa
It is easy to be intimidated when you first encounter a Funding Opportunity Announcement (FOA) on Grants.gov.
There are the four tabs of content. The technical language culled from industry and government programs. Application forms, some of which may require file attachments. And, of course, there is the shiver-inducing closing date.
We have developed the following tips to help applicants (especially those new to the federal grant application process) demystify the FOA and position themselves for a solid submission:
1) Register with Grants.gov and assign roles to your team before digging into an FOA or creating a workspace. If you don’t set up your account properly, you risk facing delays when you are ready to begin work on the application.
2) Read the FOA’s eligibility requirements carefully. After all, you don’t want to spend hours on an application only to realize later that you are not eligible to apply.
3) Preview the forms that you will need to fill out, including any optional ones that might require extra work or file attachments. Identify information or agreements you need that will take a while to track down.
4) Try to visualize what a successful application will look like. Break it down into its component parts – budget data, narrative and storytelling, standard form data, etc.
5) Jot down the agency contact listed in the opportunity. And if you need to, establish a line of communication early in the process so that if you have any program-related questions you can quickly reach out.
6) Plan to submit the final application at least a few days before the closing date, allowing yourself time to fix errors if any are encountered when you click submit.
Do you have other tips for first-time federal grant applicants? Share them below and we will highlight our favorites in a future blog post.