This is a joint post with Denizhan Duran.
Two months ago, we set out to create an index that measures the quality of health aid. Here’s why: First, with the 4th High Level Forum on Aid Effectiveness in Busan approaching, aid quality is becoming an important topic of discussion. The recently released results of the Paris Declaration survey (discussed here) show that donors failed to reach almost all of the targets they set for themselves. In this context, mutual accountability between donors and partners is becoming more important, and such an index can help foster accountability. Second, we felt that looking at aid effectiveness at a micro level gives a better sense of the challenges faced at the macro level, and health is one of the largest and most complex aid sectors. Third, while there are initiatives and case studies on aid effectiveness in health, including IHP+ Results, there is a lack of quantitative analysis: the OECD has declared health a tracer sector and published a report, but it relies mostly on qualitative case studies. Finally, the Center for Global Development already has a methodology for evaluating aid effectiveness, the Quality of Official Development Assistance (QuODA), which will soon be updated; why not adapt it for the health sector?
We analyzed ~$13b of health aid that flowed from Development Assistance Committee (DAC) donors to partner countries in 2008 and 2009, using 24 indicators across the four QuODA dimensions modeled after the Paris Declaration: maximizing efficiency, fostering institutions, reducing burden, and transparency and learning. Every donor gets a score on each indicator; these scores are then standardized and averaged across all the indicators within a dimension. While we tried to remain as faithful as possible to the original QuODA indicators, which have been vetted by experts, we faced data constraints: apparently, there was a reason no one before us had attempted to quantify health aid effectiveness. In the end, we were able to keep 18 of the 31 original indicators, which draw on the OECD’s Creditor Reporting System (CRS), and added 6 indicators that pertain specifically to health, such as allocation to countries with high disease burdens, allocation to countries with national health plans, and support for projects that collect essential health metrics (see table below).
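To make the aggregation step concrete, here is a minimal sketch in Python of how indicator scores can be standardized and averaged into a dimension score. The donor names, indicator columns, and values are all invented for illustration; in the actual analysis, each of the 24 indicators belongs to one of the four dimensions above.

```python
import pandas as pd

# Hypothetical indicator scores: one row per donor, one column per indicator.
# Donor names, indicator names, and values are invented for illustration.
scores = pd.DataFrame(
    {
        "share_to_high_burden": [0.62, 0.41, 0.55],
        "median_project_size": [1.8, 0.9, 2.4],  # USD millions
        "share_untied": [0.96, 0.71, 0.88],
    },
    index=["Donor A", "Donor B", "Donor C"],
)

# Standardize each indicator to a z-score so that indicators measured in
# different units can be averaged on a common scale.
z = (scores - scores.mean()) / scores.std()

# A dimension score is the simple average of a donor's standardized scores
# on the indicators assigned to that dimension (here, all three example
# columns stand in for one dimension).
dimension_score = z.mean(axis=1)
print(dimension_score.sort_values(ascending=False))
```

Standardizing first matters: without it, an indicator measured in millions of dollars would swamp one measured as a share.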
Every exercise that seeks to rank countries or compose an index comes with caveats, and ours has plenty. First, we are analyzing slightly over half of all health aid disbursed in 2008 and 2009, as we leave out contributions from private foundations and from donors that do not report to the CRS database. Second, we leave out crucial indicators such as aid predictability, measures of harmonization and coordination (such as sector-wide approaches), the amount of budget support, and the use of results-based financing; data on these are scarce and do not go beyond individual case studies. Paris Declaration Survey data are essential to the original QuODA, but we cannot include those measures because the survey lacks sector-level questions, which is strange given the OECD’s own emphasis on sector-level aid effectiveness. Finally, neither the aid effectiveness principles nor all of the indicators used to measure them have been empirically linked to better outcomes. They do, however, represent commonly expressed goals of donors and, in the case of infectious disease aid, do seem linked with impact.
We are still finalizing our analysis for our upcoming working paper (above, for example, is the graph of rankings on the maximizing efficiency dimension), but we would like to share some preliminary results:
- Is health aid getting better (it can’t get no worse?): While it is hard to draw sweeping conclusions from such an index, we see that donors performed worse across many indicators, on average, from 2008 to 2009. In 2009, compared to 2008, more aid went to richer, less well-governed countries with lower disease burdens. Aid also became more fragmented in 2009, as the median project size decreased. More aid flowed into countries without national health plans and with lower-quality M&E frameworks. Progress was made on some other indicators, however: health aid became more focused by recipient country, so aid relationships are becoming more significant. More money went into global public goods such as the WHO and UNICEF. Aid became less tied and was increasingly channeled through multilaterals. More projects supported the collection of essential health metrics in 2009. Finally, the quality of reporting improved: donors are providing more information to the CRS database.
- Different from overall aid, but no clear pattern: When we compared overall QuODA and health QuODA across the same indicators, we found relatively low correlations between the two and many changes in rankings (see the sketch after this list for the kind of comparison involved). No overall pattern emerged, however: some countries did better in health while others did worse.
- The United States might do better in health than it does in other sectors: Who knew that almost all American health aid is untied? While many studies and articles claim that a majority of American health aid is tied, the CRS database shows that only 4% of United States health aid is tied. While we would like to believe that this is true and not a reporting error, we also found another problem with the United States’ reporting to the CRS: PEPFAR, which had a budget of $4b in 2009, disbursed only $23m that year according to the CRS database. Something is amiss in U.S. reporting to the CRS.
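On the comparison with overall QuODA mentioned above, a rank correlation is the natural summary statistic. Here is a minimal sketch using SciPy’s spearmanr, with made-up scores rather than our actual results:

```python
from scipy.stats import spearmanr

# Made-up dimension scores for five donors under overall QuODA and under
# a health-only adaptation; the numbers are illustrative, not our results.
overall_quoda = [0.8, 0.1, -0.3, 0.5, -0.6]
health_quoda = [0.2, 0.6, -0.5, 0.7, -0.4]

# Spearman's rho compares the two rankings directly: a value near 1 means
# donors rank about the same on overall and health aid quality; a value
# near 0 means the health rankings carry largely independent information.
rho, p_value = spearmanr(overall_quoda, health_quoda)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")
```

A rho well below 1 is what “relatively low correlations” means in practice: a donor’s health aid quality is not simply a reflection of its overall aid quality.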
As we mentioned, we are aware of all the caveats such an index brings, and the results should be taken with a large grain of salt. Our principal aim with this exercise is to generate discussion of quantitative, sector-level aid effectiveness measures and to help recipients hold donor agencies accountable. With that in mind, our paper will also include an analysis of aid effectiveness in the most aid-dependent countries. While the agenda for Busan is mostly set, we hope sector-level aid effectiveness is discussed: effective health aid can save lives, and as donors slash their aid budgets across the board, the commitment to better outcomes should be reaffirmed.
But because of the problems and limitations we’ve encountered, we wonder whether such rankings are worthwhile or a waste of time. We’d welcome your comments on our indicators, our method, and whether this exercise is useful.