Learning from the Center for Advanced Studies in Global Education (CASGE) Symposium at Arizona State University

January 16, 2019
Luis Crouch, RTI International & Silvia Montoya, UNESCO Institute for Statistics

We are very happy to have been invited to a Symposium on Innovations in Global Learning Metrics, sponsored by CASGE, in November 2018. Silvia Montoya and Brenda Tay-Lim from the UNESCO Institute for Statistics (UIS) wrote a paper on “Options in achieving global comparability for reporting on SDG 4,” which Luis Crouch presented on their behalf, as they were unable to attend. We received excellent written pre-symposium commentary from Kadriye Ercikan (University of British Columbia), Tünde Cerović (Belgrade University and Open Society Foundations), Radhika Gorur (Deakin University), and William Schmidt (Michigan State University), as well as many live comments from the group. The paper and written comments are here.

In this blog we want to engage with the commentary, not just respond in a simple way to specific points. One of the broad discussion questions that arose was how researchers and academics can contribute more to policy directions and to policy critique. Engaging with that question at length would take us off track here; perhaps we will do so in another blog or venue. But one simple, direct step is for us non-academics to join the discussion. That is why we are writing this blog.

First, the UN (and other) institutions that are custodians of the measurement at this point have a mandate. There is not much choice but to follow that mandate. The UN system is a membership organization, and the member countries ultimately dictate. The measurement and tracking of performance, for a set of fixed indicators, in a manner that is as standardized and comparable as reasonably feasible, is now a mandate given to the custodian agencies. The language is very specific: “Global monitoring should be based, to the greatest possible extent, on comparable and standardized national data, obtained through well-established reporting mechanisms from countries to the international statistical system” (“Report of the Inter-Agency and Expert Group on Sustainable Development Goal Indicators,” p. 8/62).

As was wisely noted at the meeting, though, while this is a policy or even political mandate now given to the professionals, professionals (and academics) do shape the agenda and provide policy and political leaders with a sense of what is possible. Professionals can’t entirely “hide” behind a mandate. But we honestly think that, had the policy makers truly been responsive to a technocratic agenda instead of having opinions of their own, the indicators would not be nearly as demanding on us as they are. We are being forced to stretch, especially in areas such as adult learning, civic engagement and sustainability, and digital skills. We are not sure the public and NGO researchers and officials necessarily wished this difficult challenge on themselves.

But more importantly, and as was also wisely noted in the Symposium, professionals ought to have the moral courage to engage with their mandate, not just “obey.”

To us, one of the most important reasons to have comparability and standardization has little to do with efficiency, cost savings, accountability, and so on, and a great deal to do with equity and social justice, taking as a point of departure the content and skills that children and youth are entitled to. Without the standardized and comparable measurements we already have, which allow us to talk on the basis of a common language and understanding, we would not know some of the things we increasingly do know, in a comparable, multi-country (that is, fairly generalizable) manner, such as that:

  • About half of the global cognitive inequality is between countries, and half is within countries, at least insofar as this can be measured using assessments. Knowing this should be helpful to both governments and development agencies in setting allocative priorities.
  • We have a much clearer sense of what it takes to reduce that inequality within countries—less so between them.
  • For instance, we increasingly know that factors such as wealth and ethnicity/ethno-linguistic discrimination or marginalization count for more, in driving cognitive inequality, than gender or (less clearly so) the “pure” urban-rural divide.
  • We also increasingly know that, because there is a lot of inequality (both between and within countries) that is unexplained by any clear “ascriptive” factors (gender, parental wealth, ethnicity), “simple” (but not really so simple!) lack of management capacity and quality assurance is a real problem. And data/evidence can help here, not just in setting policy but in managing and “moving the needle” on that policy.

You can’t know how much inequality there is, or what drives it, unless you measure it with a standardized measuring stick; otherwise it is difficult to judge whether two things are of equal length. But we also note that the ideal might be “as much localization as possible, as much standardization as necessary.” That is why UIS’s emphasis has been on supporting the comparability of existing (and future) national assessments rather than on backing, adopting, “imposing,” or even endorsing specific global assessments.

Second, it was noted that measurement isn’t really the issue—action by teachers and systems is. This is true, and we would certainly back the idea that there be more funding of the “improvement” function than the “measurement” function. However, improvement can more easily gain traction if one knows what is going on. (There is, of course, already far more backing of the “regular business” aspects of education systems: assessment would be a tiny fraction of that cost. However, there is under-investment in how one actually uses assessments—the right combination of assessments—to improve.) But there is still a measurement mandate aimed at making the problem visible so that resources for improvement can be dedicated to it, and, since there are efficiencies in specialization, institutions such as UIS (and their equivalents at WHO, FAO, etc.) have to focus on measurement. But perhaps such specialized bodies ought to reach out more, and support others whose mission is to use the data to support teachers (or doctors and nurses, agricultural extension agents, etc.). Along those lines, though, we also suggested (with tongue only partially in cheek) that perhaps international assessments ought to be less, not more, relevant, or at least less determinant. That is, they ought to be only a reference point (albeit a useful one), and national assessments ought to have center stage. This is UIS’s position.

A last major issue that was discussed, partly in reaction to the paper but partly also because it was “in the air,” was whether (and how, and why) policy research and academic input influence policy. Some were skeptical or pessimistic. Others not as much. In our view, there is impact. Not, perhaps, immediately. And few if any policy makers make decisions solely based on evidence. Nor is the impact of research typically traceable to particular academics, books, papers, or conferences—it is a much more diffuse process than that, which can contribute to the sensation that one is not having impact. And, of course, political economy and just plain politics have a lot of influence. But JM Keynes got it about right: “Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back… Not, indeed, immediately, but after a certain interval… soon or late, it is ideas, not vested interests, which are dangerous for good or evil.” We can cite a few examples or suggest ways to think about this that are more optimistic about research impact:

  • While politics and political economy play a role, no one likes to say so publicly. Policy makers seldom say, “Oh, that was a purely political decision.” They often pay lip service to rationality, data, and evidence, as well as, at least in democracies or semi-democracies, to common sense around what is right and just. Academics and researchers can take advantage of this tendency to pay lip service and demand to be heard. In a similar manner, human rights get announced before they get enacted, and they get enacted partly because they were announced and someone then used that announcement to push. As noted, this is not immediate, traceable to particular individuals, etc.
  • A good example is the case for girls’ education and the progress that was made over the last 40 years or so. Researchers were instrumental in this. There was not necessarily a political case. Nor was there that much grassroots pressure from villagers or even urban dwellers. On the contrary, our experience suggests that, with regard to these issues, the grassroots were pretty feudal or patriarchal. Researchers and social activists, both global and local, eventually had an impact.
  • It also helps if researchers are sensitive to issues, and gain the initial trust of policy makers by helping them with smaller, relatively short-term, and relatively less weighty matters, as a way of gaining the space to have impact on the more serious issues. This can happen with individual researchers, with think tanks, institutions, universities and centers such as CASGE. Admittedly, this is a long game, but social development does not happen overnight.
  • “Situation rooms” that show data and modeling in visually striking ways can be helpful, under certain circumstances—generally, only as “just one more input,” and only if one is not naïve about things and does not over-estimate the impact one is likely to have. Policy makers often react against what they see as too much naivete on the part of researchers, when researchers signal that they expect policy makers to act right away on the evidence presented. But in our experience, showing the impact of simulations in real time, in a policy discussion (e.g., projecting even a simple, Excel-based model on the wall), can be useful. This varies by bureaucratic culture, of course. And it is more useful if one can take the “situation room” (again, just a simple projection of a simulation model can be useful) to the policy makers rather than having the policy makers come to the “situation room”—unless they happen to be nearby.
  • Finally, it is also important to take on board the fact that it is usually local intellectuals and activists who will carry the day. UN bodies, as was noted, can’t really “make” governments take action based on data/evidence. But the data can support local intellectuals and activists who can pressure governments, e.g., in eliminating school fees, in increasing investment in the younger children, etc.

There is no time, resources, or energy for questioning the commitments themselves. The 2030 Agenda is a call to everybody. Academia is no exception, and initiatives such as the GPE’s KIX are stressing the relevance of knowledge exchange and of areas where academia can play a critical role if focused on building human capacity at all levels.