Monitoring Indicators & Metrics

To ensure continuous monitoring of your indicators, you must design and implement a reliable, efficient monitoring process that makes the best use of both human and financial resources. Monitoring your indicator set can be labor-intensive and involve additional costs, but you can take steps to minimize the additional funding and staff effort your monitoring activities require. See the simplified cost matrix below.

[Figure: Simplified cost matrix]

Funding for Monitoring

In the previous step of the process, as you narrowed down your list of potential indicators, you took into account the capacity of those involved in the adaptation process—both the people directly responsible for leading and implementing your adaptation process and your partners. Your capacity assessment may have revealed free sources of data, expertise, and technical know-how; staff capacity and volunteer interest; and even newly discovered funding sources.

As with all adaptation funding, the ultimate solution is to include your monitoring costs in the adaptation line item in your organization’s annual budget. You can minimize the incremental funding needed for ongoing monitoring and evaluation by deploying any of the following strategies, also explained in this Job Aid.

  • Use government data: Identify existing data-monitoring efforts that produce data of interest (e.g., the Census, Bureau of Labor Statistics, federal or state-owned nature reserve monitoring, or other government/taxpayer-funded data collection).
  • Use academic/expert data: Draw on long-term monitoring by academic or non-profit institutions (e.g., long-term ecological research stations or ongoing monitoring of protected land).
  • Use citizen science: Data collected through citizen science (e.g., the Audubon Society’s bird counts) can be a rich source of information.
  • Use local science institutions: Aquariums, science centers, and other local science institutions collect—and may be willing to share—potentially useful data.
  • Forge partnerships: Partner with teachers and researchers at local high schools, colleges, and universities. Find partners who share your interests and have the necessary expertise, data collection equipment, and eager students or volunteers to conduct the data collection. Many students find applied work that makes a real-world contribution exciting and meaningful.
  • Repurpose existing data and reporting: Modify or add to data collection for existing monitoring and reporting systems (e.g., annual reporting, post-workshop surveys, performance reviews).
  • Keep it simple: Keep data collection as simple and straightforward as possible. Collect data online or electronically to streamline compilation, integration, analysis, and display—but don’t shy away from pen and paper when they are the tools at hand! (A minimal compilation sketch follows this list.)
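
For illustration, here is a minimal Python sketch of what electronic compilation can look like, assuming indicator readings are exported to a CSV file; the column names ("indicator", "date", "value") are hypothetical, and your own export format will differ.

    # Minimal sketch: compile per-indicator averages from a CSV export.
    # Column names ("indicator", "date", "value") are assumptions.
    import csv
    from collections import defaultdict
    from statistics import mean

    def summarize_indicators(csv_path):
        """Return the average reading for each indicator in the file."""
        values = defaultdict(list)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                values[row["indicator"]].append(float(row["value"]))
        return {name: mean(vals) for name, vals in values.items()}

    # Example use at a quarterly review:
    # for name, avg in sorted(summarize_indicators("readings.csv").items()):
    #     print(f"{name}: {avg:.2f}")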

It is notoriously difficult to secure long-term funding for data monitoring. You can lower the hurdle by adopting a straightforward monitoring process and integrating data collection and staff training into your fundraising strategy.

Responsibilities for Monitoring

Assigning responsibility for monitoring is an effective way to ensure it gets done. Experience from our adaptation projects, along with ideas from other sources, illustrates several ways to assign responsibility.

  • Reporting requirements and mandates: Some funding, regulatory, and public health programs mandate reporting on such indicators as air quality, water quality, and incidence of disease. These indicators were not originally designed for adaptation, but they are already being tracked and, as climate change worsens, they can point to important outcomes related to public health and environmental conditions.
  • Policy and budget directives: When adaptation efforts are formalized in a directive, policy, plan, or budget, organizational leadership typically expects progress reporting on a regular basis (e.g., quarterly, annually, event-driven). With that clear expectation, associated tracking of indicators and metrics becomes part of daily work.
  • Incentive-based approaches (“carrots and sticks”): Insurance programs, credit rating systems, and other incentive-based programs encourage tracking with “carrots and sticks.” By engaging in resilience-building measures and/or achieving certain standards or levels of protection, entities qualify for rewards like lower costs, lower insurance premiums, and reputational benefits.
  • Payment: By allocating a budget line or paying an external partner/consultant to collect data, decision-makers can ensure indicators are tracked. Additional benefits come from engaging an external partner—efforts can be billed as “independent/external monitoring and evaluation” and thus add to the entity’s reputation for transparency and accountability.
  • Appointing dedicated staff/volunteers: Monitoring is more likely to get done within an organization when the responsibility for data collection is clarified and assigned explicitly to staff or volunteers as part of their regular job responsibilities.

Timing, Frequency and Duration of Tracking

When, how often, and for how long you track an indicator will vary by both the indicator itself and how you expect to use it.

For example, continuous states of a system—air quality, water quality, ecosystem biodiversity, etc.—may require frequent, nearly continuous long-term tracking. By contrast, a fundraising effort for a one-time infrastructure upgrade needs to be tracked only through the effort’s completion.
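
One way to make these choices explicit is to record them in a simple tracking plan, one entry per indicator. The sketch below is illustrative only; the indicator names, frequencies, and durations are assumptions to be replaced with your own.

    # Minimal sketch: an explicit tracking plan per indicator.
    from dataclasses import dataclass

    @dataclass
    class TrackingPlan:
        indicator: str
        frequency: str   # e.g., "continuous", "quarterly", "once"
        duration: str    # e.g., "indefinite", "until goal achieved"

    # Illustrative entries; substitute your own indicator set.
    plans = [
        TrackingPlan("air quality index", "continuous", "indefinite"),
        TrackingPlan("upgrade fundraising total", "monthly", "until goal achieved"),
    ]

    for p in plans:
        print(f"{p.indicator}: track {p.frequency}, {p.duration}")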

See our related Job Aid for an in-depth discussion of timing, frequency, and duration.

[Figure: Frequency table]

Tracking can feel less urgent and more tedious than other priority tasks. Completing your tracking together with others can reduce the burden and increase transparency, accountability, and cross-organizational coordination. In addition, scheduling your tracking at regular intervals (quarterly, annually, or during designated M&E days), or immediately after an event, can help ensure you complete it—despite competing priorities.
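
If it helps to automate those reminders, a few lines of Python can generate the schedule; the calendar-quarter cadence below is just one assumption, and event-driven tracking would be triggered differently.

    # Minimal sketch: generate quarterly tracking due dates for a year.
    from datetime import date

    def quarterly_due_dates(year):
        """First day of each calendar quarter, usable as M&E reminders."""
        return [date(year, month, 1) for month in (1, 4, 7, 10)]

    for due in quarterly_due_dates(2025):
        print(due.isoformat())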

The length of time you monitor a given indicator depends on how long it remains of interest to those involved in your adaptation process. Some indicators may be tracked just once, while others are tracked indefinitely. Indicators related to systems—such as human well-being, safety, or economic opportunity for small businesses in a community—need to be tracked until those systems are no longer relevant to those affected by your adaptation efforts.

Some indicators must be tracked over several years before their effectiveness can be fully evaluated. For example, it can take years for a species to demonstrate stable, independent survival after habitat restoration or assisted relocation, and that outcome may be nearly impossible to confirm in a continuously changing climate; long-term monitoring would be required.

Other indicators may need close tracking only until a major goal is achieved. After that, monitoring can occur less frequently and may involve only “spot-checking” to ensure the achievement lasts. For example, a community might decide to relocate a significant number of residences and businesses out of a high-risk area, devising annual targets over ten years until all structures are relocated and the area is restored as natural habitat. Annual relocations and associated restoration efforts would be tracked closely for the first ten years or until the goal is achieved. After that, the community may track the ecological health of the restored habitat only every five years, at the same time reassessing changing climate risks to determine whether additional structures should be moved to safety.
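
To make that example concrete, the sketch below tracks cumulative relocations against a fixed annual target; the target and the yearly figures are invented for illustration, and real values would come from your own records.

    # Minimal sketch: annual relocation totals vs. a fixed yearly target.
    # The target and actuals are hypothetical illustration values.
    ANNUAL_TARGET = 12
    actual_by_year = [10, 14, 12, 11, 15]  # structures relocated, years 1-5

    cumulative = 0
    for year, actual in enumerate(actual_by_year, start=1):
        cumulative += actual
        status = "on track" if cumulative >= ANNUAL_TARGET * year else "behind"
        print(f"Year {year}: {cumulative} structures relocated ({status})")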

Appropriate Methodologies to Collect, Analyze and Interpret Data

Monitoring adaptation indicators and metrics may involve multiple methods, and a number of considerations go into selecting them.

In research parlance, a “method” is a research tool. For example, automated weather stations, survey instruments, interview protocols, and different approaches to cost-benefit analysis are all research methods. Your method is the “what” and “how” of your research.

A “methodology,” by contrast, is the justification for your choice of method(s). Your methodology is the perspective from which your research is undertaken. Methodology involves value judgments and reasoned arguments to support such choices as how often to sample, how many people to interview and how, and why to do a participatory study.

Unfortunately, some data tracking is done without much thought about method or methodology. Some tracking does not seem to demand a very “scientific” approach, and other tracking can be set up in simple, easy ways. (For example, see the simple action-tracking tool used by the Wells NERR.) Yet no good data have ever come from sloppy work, thoughtlessness, inappropriate use of methods and technologies, or—when data collection relates to human subjects—unethical research protocols.

In our projects, we have found that practitioners are most comfortable when monitoring is completed by or with the advice of an experienced colleague or trusted internal or external partner. This speaks to the benefits of knowledge co-production and the efficiency of everyone doing best what they have been trained for and are most familiar with.

Some fields or agencies have established stringent monitoring protocols that are replicated at different sites to ensure comparability and reliability. Often (but not always) this involves quantitative data. The resulting data may seem incontestable, but in fact much expert debate and subjective judgment go into creating those data collection protocols (i.e., methodologies).

In other fields, monitoring and data collection protocols are not nearly as strict, and different researchers are guided by varying standards of what is “adequate” or “good.” Surveys used online or for in-person interviewing are a case in point. A good survey question is focused, unambiguous, and easy to answer—yet achieving that requires hard work. Subtle differences in wording can lead to very different interpretations. Moreover, people are less and less willing to spend much time on surveys, so asking a small (or just big enough) number of powerful questions is increasingly important.

The dividing line here is not between quantitative and qualitative data collection protocols. For example, cost-benefit analysis to assess the cost-effectiveness of an adaptation action—while a well-established tool in economics using and producing quantitative data—can be done in a variety of ways that involve several judgment calls (e.g., which discount rates to use, what to include/exclude as costs and as benefits).
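
A short calculation makes the point. The sketch below computes net present value (NPV) for the same hypothetical adaptation project under two commonly used discount rates; all cash flows are invented for illustration.

    # Minimal sketch: how the discount-rate judgment call shifts a
    # cost-benefit result. All cash flows are illustrative assumptions.
    def npv(rate, cash_flows):
        """Net present value of year-0..N cash flows at a discount rate."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    # Year 0: upfront cost; years 1-5: assumed avoided losses.
    flows = [-100_000, 25_000, 25_000, 25_000, 25_000, 25_000]

    for rate in (0.03, 0.07):
        print(f"Discount rate {rate:.0%}: NPV = {npv(rate, flows):,.0f}")

At a 3 percent rate this hypothetical project looks comfortably worthwhile (NPV of roughly +14,500); at 7 percent it is only marginal (roughly +2,500), even though nothing about the project itself changed.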

Understanding the subtle differences between methodologies involves considerable learning for non-experts, and grasping the implications of methodological choices requires ongoing interaction between experts and the users of the information. Collaborative co-design of monitoring processes and data interpretation will make the monitored metrics more meaningful, useful, and actionable.

Resources for Monitoring Indicators