Achieving the Global Goal on Adaptation

Looking up, looking down, and looking sideways

With the recent announcements and action on the Global Goal on Adaptation (GGA), and on adaptation and resilience more generally, it is important to keep in mind not only the essential role of common platforms but also that of common definitions. In their March 2024 newsletter, AGWA highlighted the need for confidence in adaptation and resilience, and the gap between stated definitions of adaptation and/or resilience and their actual measurement.

By John Matthews, Executive Director, AGWA

Over the past few months, a team of us has been reading reports, talking with programs, and soliciting reviews (including from many of you) about how we define and measure adaptation and resilience. Strangely, this work does not seem to be occurring in a vacuum. Over the past six weeks alone, I’ve personally been solicited for comments on at least seven different complementary initiatives. These have ranged from work targeting businesses, investments (such as bonds or portfolio evaluations), and WASH to policy frameworks reaching up to the global level.

I am reminded a little of a brief period in the late 2000s when there seemed to be an efflorescence of “adaptation principles” that emerged like flowers and grasses after a desert rain. I contributed at least two publications to that era myself!

My hunch is that we have a slow-moving crisis of confidence in adaptation and resilience: are we doing what we say we are doing? Can we ensure we are being effective? Are different institutions, or programs within institutions, or even individuals within the same program, working in ways that are measurable and replicable? Do we mean what we say we mean when it comes to policy and implementation, or are adaptation and resilience rhetorical flourishes, or perhaps just old definitions recycled into new vessels?

If you work on water, this debate really matters, because odds are that water resources are being measured in some way and/or water hazards are being carefully watched.

For our project, we’ve seen three interesting patterns. First, the number of definitions for adaptation and resilience has plummeted but not necessarily converged. We’ve gone from an archipelago the size of Indonesia to something more like the Hawaiian archipelago, with just a few main islands (and some tiny idiosyncratic reefs exposed at low tide). The IPCC’s definitions of the two terms are the most common, but other credible versions exist.

Second, an active debate is occurring among practitioners and policy groups about the differences between adaptation and resilience. Are they elegant variations of the same concept? Different concepts? Does one flow into the other? My sense is that the momentum is behind diverging definitions, which are moving apart rapidly, and that the climate policy universe is driving the conversation more than technical groups such as engineers or scientists. The private sector and finance seem to be lagging behind. The biggest divide right now seems to be between simplistic climate de-risking (mostly in the finance community and the private sector more generally) and “transformational adaptation,” the concept of an ecosystem that passes a tipping point and becomes fundamentally reorganized (e.g., a desert becomes a forest). Another way to phrase this tension: are we worried most about really big problems or small ones? These operational differences spark quite strong reactions between institutions. Note that the climate policy community, at least in middle- and low-income countries, seems very worried about the challenge of imminent or in-process transformation.

Third, we have found that a major gap often exists between stated definitions of adaptation and/or resilience and what is actually being measured. Sometimes two institutions share definitions, but if you look at what they measure for adaptation and/or resilience, their implied definitions are actually completely different. (This debate reminds me a little of “discussions” I had with my mother when she asked me to clean the kitchen after dinner, and her ire when she saw what I had actually done; we both wanted the kitchen clean, but my operational indicators differed from hers. Happily, our indicators have merged over time, and all is peace and happiness when I visit. Her kitchen is cleaner too.) Thus, an institution may define adaptation as a circular, ongoing process of identifying risks and opportunities (surely the most common diagram of adaptation is a circle!) but actually measure and target quite fixed endpoints. Similarly (and also quite commonly), indicators such as water quantity and quality may be measured as “climate variables” without considering the nonlinear, climate-driven influences on those variables. In that case, the operational definition of adaptation and resilience may not actually contain any climate information. I would argue that what we measure is actually our most important and functional definition of these terms.

Why so much variation?

I suspect that, at core, we are seeing a deep tension between:

1. A transition from stationary (linear, predictive) to non-stationary (nonlinear, uncertain, systems-based) thinking in programs. Climate change is often not gradual, and sometimes not very predictable. The transition to non-stationary thinking is difficult, and the inertia to keep measuring the same things you did 20 years ago is strong. Swapping definitions without updating metrics is easy (a minimal sketch of this measurement gap follows the list).
2. Seeing adaptation and resilience (and water) as intensely place-based programming, driven only by local stakeholders and conditions, versus wanting to show broad patterns of efficacy and coherence and to develop programs that can be aligned at higher levels. Local targets are essential. But we should also see meaningful convergence across projects and programs. Looking up, looking down, and looking sideways.
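
To make the first tension concrete, here is a minimal sketch of the stationary-versus-non-stationary measurement gap. It is illustrative only: the data are synthetic, and the indicator, trend, and noise values are assumptions made up for the example, not AGWA methodology or real records.

```python
# Minimal sketch: the same water indicator, measured two ways.
# All data here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1984, 2024)

# Synthetic annual streamflow: a slow decline buried in noise.
flow = 100.0 - 0.5 * (years - years[0]) + rng.normal(0.0, 8.0, years.size)

# Stationary indicator: one fixed long-term mean. This is the
# "keep measuring what you measured 20 years ago" view, and it
# carries no climate information at all.
stationary_mean = flow.mean()

# Non-stationary indicator: fit a linear trend, so the metric
# responds to change (slope in flow units per year).
slope, intercept = np.polyfit(years, flow, 1)

print(f"Long-term mean flow:  {stationary_mean:6.1f}")
print(f"Fitted trend:         {slope:+6.2f} per year")
print(f"Implied flow in 2030: {slope * 2030 + intercept:6.1f}")
```

Both numbers could be reported under the label “climate indicator,” but only the second would ever register that conditions are changing; that is the definitional gap the metrics debate is really about.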

The best of the methodologies we’ve seen try to balance these issues, and they often include an explicit step to pair specialized, project-specific indicators that match local needs with more general performance metrics that can be used at higher levels. That seems like a good trend, and one I hope we can continue to support.

Source: AGWA Updates, March 2024