For faculty researchers, paying it forward doesn’t necessarily pay them back

By

Erik Ketcherside

Mary Lou Fulton Teachers College is dedicated to working with schools and communities to confront a wide range of challenges in education, an imperative in keeping with ASU’s mandate to assume fundamental responsibility for the communities it serves. For a research university such as ASU, that commitment requires balancing the merits of pure research and knowledge creation with the demand for research that confronts societal challenges.

That struggle for balance is present not only in choosing what research gets funded, but also in how faculty members are evaluated, retained and promoted. That’s according to research by a team of scholars from four higher education institutions. ASU is represented by Gustavo Fischman, a professor of educational policy in MLFTC. Their paper, posted to the open-access repository Humanities Commons — “How significant are the public dimensions of faculty work in review, promotion and tenure documents?” — became the subject of an Oct. 4 article on the website of Nature, the international journal of science.

Fischman answered some questions to give an overview of the research team’s findings.

For people outside academia, what does it mean that universities are evaluating faculty members more on traditional measures than service to the community? What are the effects?

For at least 200 years, research universities such as ASU have occupied a unique space in the public arena. The public supports our work because there is an assumption that in addition to teaching and training the next generation of professionals, we will contribute to the public good by exploring socially relevant issues and providing solutions. If the models used to evaluate faculty rely on incentives that may discourage or punish those who entered into academia with the idea of service to the community or contributing to the public good, it is fair to assume that those who are supporting our work will be reluctant to value what we do.

What’s wrong with the old system?

The traditional metrics of how many grants a researcher obtains, how many articles they publish and how many citations their research generates are relevant. But they’re not sufficient for public universities in the 21st century. They confuse continuous increases in countable items — more articles and more citations in more hard-to-publish journals — with scientific and social impact. Most research universities can show increases in their production, but there are no clear and compelling indicators that the gains in quantity, quality, access, relevance and opportunities to engage with scholarship have significantly benefited the public. The use of simplistic models that ignore the public dimension of our work is analogous to confusing the delivery of calories with feeding people: If our main goal is feeding people, but all we can effectively incentivize is the delivery of calories, we will conclude that junk food is more efficient than an apple.

Is there a middle ground that could meet the needs of both the community and academia?

I understand the pressures of highly polarized and politicized higher education contexts, which are focused on rankings and other indicators of performance. Our societies demand better scientific accountability. The biggest challenge in developing research that could meet the needs of both the community and academia is to align our systems of incentives to overcome the lack of trust among potential research partners, be they other scholars, practitioners, policymakers or community members. We also need to confront the sense of comfort the current system provides, where the rewards are the same for publishing research that concludes with the statement “more research is needed” as they are for producing knowledge that may eventually bring value to a scholarly field, help educators improve practices or provide rigorous evidence to families, communities and policymakers. We could move in this direction by expanding our debates beyond the important, yet insufficient, question, “How are we going to assess this research?” and carefully considering the questions we ask about research responsibility, relevance and impact. Universities need to avoid simplistic solutions that end up producing more and mattering less, and engage in significant questions about research: “For whom and to what end?”

What steps need to be taken to make that middle ground possible?

In our project, we are explicit that we do not oppose the use of well-defined indicators or metrics as one way (among many others) to assess the scholarly relevance of research. However, we suggest care is needed in identifying and replacing simplistic policies that pay only lip service and symbolic attention to the public dimensions of scholarship. A simple and very effective first step is to work institutionally to support Open Access as a strategy to expand opportunities for engagement with more diverse groups of users and stakeholders, fulfilling the public patronage imperative of universities. Despite the limitations of such an approach, the lack of incentives to make research public in this way seems like a missed opportunity. A second necessary step is a recognition by universities that a lot more needs to be done to identify and avoid simplistic metrics for assessing research productivity. We are not the only group that has identified this need: The Humane Metrics Initiative, for example, has been working toward identifying metrics that assess levels of engagement with one’s community of practice and with the public at large. More broadly, the Declaration on Research Assessment stands out, with over 12,000 scholars and over 700 scholarly organizations as signatories who have expressed their commitment to avoiding simplistic models for assessing scholarly impact.