Funding Loss for Climate Resilience Science Center: What Led to CR2’s Disqualification?

The Climate Science and Resilience Center (CR2) will lose its funding starting March 1, 2026, a situation that has raised concern among the national and international scientific community, users of the databases and platforms developed at the center, and public officials across various government agencies.

Author: El Ciudadano

Original article: Adjudicación de Centros de Interés Nacional: ¿Cómo perdió su financiamiento el CR2?


By the CR2 Board and the CR2.0 Renewal Proposal Directorate

In October 2024, the National Agency for Research and Development (ANID) announced a call for applications for a new funding instrument for National Interest Research and Development Centers of Excellence.

These centers are a new category under the National Centers Plan established by the Ministry of Science, Technology, Knowledge, and Innovation during President Gabriel Boric’s government (MinCiencia, 2023), replacing the National Funds in Priority Areas (FONDAP).

Through the FONDAP instrument, the state had financed 12 research centers, including the Climate Science and Resilience Center (CR2), in areas designated as national priorities, building capacities over 13 years of operation (2012-2025).

CR2, a collaborative initiative involving researchers from the University of Chile (the sponsoring institution), the University of Concepción, and the Universidad Austral de Chile, along with other academic institutions nationwide, submitted a renewal proposal aimed at advancing climate change science, focusing on its societal impacts, enhancing resilience, and generating scientific evidence for decision-making.

On December 26, 2025, ANID announced the results of the competition, awarding funding to 11 national interest centers for a total of approximately $77 million over the next five years.

Although the individual scores were not disclosed, CR2’s proposal placed fifth on the waiting list, behind the 11 awarded proposals and four others prioritized for being located outside the Metropolitan Region.

This outcome means CR2 will lose ANID funding beginning March 1, 2026, a situation that has raised significant concern among the national and international scientific community, users of the databases and platforms developed at the center, public officials, and the media.

This article explains the process that led to CR2’s loss of funding and the actions taken in response.

Evaluation of the New National Interest Centers: Two Scientific Stages and a National Panel of Experts

Unlike the FONDAP instrument, this competition did not define specific priority areas for the centers to address. Instead, each proposal had to justify the relevance of its research area. The evaluation included a National Panel of experts tasked with assessing this dimension, among other aspects.

The competition evaluation was structured into three stages:

  1. The written proposal evaluation, conducted by scientific peers (weighted at 30%).
  2. The evaluation by an International Panel of 14 scientists from various fields, based on an interview with the team leading each proposal (35%).
  3. The evaluation by a National Panel of 7 experts who, based on the same interview (which also included the International Panel) and “considering all available evidence,” prepared a final assessment according to the criteria set out in the guidelines (35%).

It is important to clarify that both the National and International Panels participated in the same interview and assessed similar criteria established by the guidelines (though with different weights), including the scientific relevance of the proposal and its contribution to national interest and public policy.
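For reference, the sketch below shows how the three stage weights would combine, assuming the final score is a plain weighted sum on the 0-to-5 scale (the guidelines’ exact aggregation rule is not reproduced here, and all the numbers in the example are hypothetical).

```python
# Minimal sketch of the three-stage scoring scheme described above,
# assuming a plain weighted sum; the call's exact aggregation rule
# may differ. All example scores are hypothetical.

def final_score(written: float, international: float, national: float) -> float:
    """Combine the three stage scores (each on a 0-5 scale) using the
    weights stated in the guidelines: 30% written peer review, 35%
    International Panel interview, 35% National Panel assessment."""
    return 0.30 * written + 0.35 * international + 0.35 * national

# Hypothetical example: strong scientific stages, lower National Panel score.
print(f"{final_score(written=4.8, international=4.9, national=4.17):.2f}")  # 4.61
```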

Competition Results and the Role of the National Panel in Selecting Awardees

Figure 1 illustrates the competition results, detailing the scientific evaluation (stages 1 and 2) and the final score for each proposal obtained after the National Panel’s assessment. These results show a pattern where the National Panel’s evaluation significantly influences the position of proposals relative to the awarding threshold.

Figure 1. The blue and light blue bars display the results of the scientific evaluation (stages 1 and 2, accounting for 65% of the total score). The black lines, accompanied by a red or gray dot, correspond to the final score (after the National Panel’s evaluation) for selected and non-selected centers, respectively. The National Panel scores behind these final scores are detailed in Figure 2 below. Each evaluation stage produces a score on a 0-to-5 scale, with decimals allowed. The y-axis indicates each proposal’s ranking according to its scientific assessment and its final ranking after the National Panel’s evaluation. The dashed red line marks the cutoff that defines the eleven centers awarded in the competition. The data cover all awarded centers, all centers on the waiting list (with two exceptions), and some non-awarded centers. The information was obtained through transparency requests and/or provided by the project teams themselves.
Figure 2. The National Panel scores underlying each proposal’s final score.

Rather than a positive correlation between the final score and the scientific evaluation, with only moderate and traceable adjustments, the data show that for a subset of proposals the National Panel stage significantly altered their relative positions with respect to the awarding threshold.
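One way to make this pattern checkable is to quantify the agreement between the two orderings. The sketch below, which substitutes hypothetical rankings for the actual competition data (only partially public), uses Spearman’s rank correlation for that purpose.

```python
# A sketch of how the reordering visible in Figure 1 can be quantified:
# compare each proposal's rank after the scientific stages (1 and 2)
# with its final rank after the National Panel. The rankings below are
# hypothetical placeholders, not the actual competition data.
from scipy.stats import spearmanr

scientific_rank = [1, 2, 3, 4, 5, 6, 7, 8]    # after stages 1 and 2
final_rank      = [1, 4, 12, 2, 3, 16, 5, 6]  # after the National Panel

rho, p_value = spearmanr(scientific_rank, final_rank)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A rho close to 1 would mean the final ranking largely preserves the
# scientific one; values well below 1 indicate substantial reordering.
```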

The National Panel’s influence reorders the prioritization produced by the scientific evaluation stages, moving some proposals below the cutoff score (a shift to the left on the graph) and others above it (a shift to the right). In particular, we highlight the following points:

– A decisive drop for CR2 from position 3 to position 12 (and subsequently to position 16). The center achieved outstanding scientific performance, ranking third overall in the scientific evaluation out of 44 proposals and second in the interviews with the International Panel, yet after the National Panel stage it fell just below the cutoff score, by a small margin relative to the last awarded project in eleventh position. Additionally, due to regionalization criteria not applied to the first 11 centers, CR2 was moved to sixteenth position, fifth on the waiting list.

– A similar pattern is observed in centers such as CRHIAM, IDEAL, and CIIR, which, after securing promising positions in the first two scientific evaluation stages, also ended up below the threshold once the National Panel’s evaluation was included.

– Positive movements are seen in proposals that were not among the top 11 according to scientific evaluations but crossed the awarding threshold after undergoing the National Panel evaluation. This movement does not undermine the quality of the awarded projects; rather, it highlights the critical role the National Panel stage plays in defining final outcomes (for more details, see Figure 2).

While the National Panel may legitimately apply its own criteria, distinct from those used in the previous stages, and is entitled to assign 35% of the total score, the magnitude and direction of the observed adjustments call for a clear and verifiable justification of the criteria and procedures actually applied. To date, no such explanation has been provided.

Contradiction Between Qualitative Evaluation and Numerical Scores from the National Panel

It is entirely possible for the National Panel to arrive at a grounded assessment that differs from those made in prior stages.

However, in a sound evaluation process, coherence between the qualitative judgments and the scores the panel assigns is to be expected. In CR2’s case, the justification provided by the National Panel, which is mostly laudatory, seems incompatible with the low numerical score of 4.17 that left the proposal below the cutoff:

(The National Panel’s evaluation, originally written in English): “CR2.0 presents a compelling and highly relevant research proposal, with a well-defined scientific framework, an outstanding team, and strong institutional backing. The proposal is particularly strong in addressing national climate challenges and connecting with public policies, while also demonstrating a clear commitment to outreach and societal engagement. The integration of gender equality and inclusiveness is another positive aspect of the proposal, although it could benefit from a higher level of detail.”

“The slight reduction in the score is due to some areas where the proposal could be more specific or developed further. These include clarifying the mechanisms for interdisciplinary collaboration, improving the articulation of private sector involvement, providing more specific strategies for long-term financial sustainability, and offering a more detailed gender equality action plan with measurable outcomes.”

“Despite these minor gaps, overall the proposal is highly impressive and well-aligned with the needs of national and global climate resilience. Therefore, the score reflects a highly effective project for addressing key scientific, public policy, and social challenges related to climate change.”

We have been able to compare the National Panel’s evaluation of CR2 with those of other awarded centers and observe that CR2’s written justification is as laudatory as, or more laudatory than, those of some funded proposals. In that context, the assigned numerical score, and the subsequent disqualification, are even harder to explain in terms of the applied criteria and their traceability.

Current Status

More than a month after the competition results were announced, with the 11 National Interest Centers already awarded and in the implementation phase, a fundamental question remains: Why wasn’t CR2’s proposal funded despite the excellent evaluations received in the scientific stages and the National Panel’s own positive justification, which rated the proposal as “highly impressive”?

Following the announcement of the competition results and adhering to the administrative pathway established in the guidelines, CR2’s proposal director, Roberto Rondanelli, filed a request for review to clarify these discrepancies. On Tuesday, February 10, 2026, ANID rejected this request.

Concerning the discrepancy between the justification and the score assigned to CR2’s proposal by the National Panel, ANID responded as follows:

“Regarding the differences expressed by Mr. Rondanelli related to the application of the evaluation scale defined in the competition guidelines, contained in Affected Resolution No. 8 of 2024, it should be noted that this scale is structured into categories on a 0-to-5 scale, such as Excellent (5.0) or Very Good (4.0–4.9), among others, which allow panels to weigh, within a range, both the positive aspects and the gaps identified in each criterion.”

“The score assigned by evaluators and panels derives from the evaluation and the gaps identified by the experts, who consider strengths as well as observations, risks, and elements for improvement within the same criterion. Thus, evaluative expressions such as “impressive,” “very effective,” or “minor gaps” do not, by themselves, impose an obligation to assign a score of 5.0, since the “Very Good” category (4.0 to 4.9) does admit high-merit proposals that present minor areas for improvement justifying lower scores.”

“Therefore, the report from the National Panel provided to the applicant consistently shows that, for each criterion, alongside the positive elements, specific weaknesses are identified that prevent the proposal from reaching the “Excellent (5.0)” category. The score assigned falls within the range allowed by the scale and reflects the existence of both strengths and aspects to improve, revealing the process of analysis and evaluation by experts and panels without implying the contradiction or inconsistency claimed by Dr. Rondanelli.”

Each reader can judge how convincing this explanation is. In our view, ANID’s central argument amounts to saying that, since the proposal belongs to the “Very Good” category, any score within the 4.0–4.9 range would be defensible. But therein lies the issue: acknowledging a possible range does not explain a specific decision. What remains missing is the traceability that concretely and verifiably links the “gaps” mentioned to a particular value within that range, and especially to a score so close to the lower limit of the category.

If the report itself describes high performance with “minor” gaps, what identifiable factor, and with what weight, justifies a score of 4.17 rather than, say, a 4.8 or 4.9? As long as that connection is left unspecified, the fundamental question remains open, and the discrepancy between the tenor of the justification and the magnitude of the assigned score is not genuinely resolved.
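To make the stakes of that question concrete, the sketch below works through the arithmetic, assuming the plain weighted-sum aggregation used earlier and holding the other stage scores fixed; only the 4.17 score and the 35% weight come from the text, and everything else is hypothetical.

```python
# Back-of-the-envelope sensitivity of the final score to the National
# Panel's 35%-weighted score. Only the 4.17 figure quoted in the text
# is real; the comparison value 4.8 is hypothetical.

WEIGHT_NATIONAL = 0.35  # the National Panel's stated weight

def final_delta(panel_a: float, panel_b: float) -> float:
    """Change in the final score if the National Panel had assigned
    panel_b instead of panel_a, with the other stages held fixed."""
    return WEIGHT_NATIONAL * (panel_b - panel_a)

# Moving from 4.17 to 4.8 within the same "Very Good" category would
# raise the final score by roughly 0.22 points on the 0-to-5 scale:
print(f"{final_delta(4.17, 4.8):.2f}")  # 0.22
```

Given that CR2 reportedly missed the cutoff by a small margin, a shift of roughly two tenths of a point within the same qualitative category could plausibly have changed the outcome, which is precisely why the traceability of the specific value matters.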

The outcome of the competition, now consolidated after the response to the review request, leaves Chile without a research center specifically dedicated to studying climate change and resilience against its impacts.

This occurs in a country particularly vulnerable to climate change, where communities and ecosystems increasingly face threats and extreme events such as heat waves, wildfires, storms, and floods, and where the law mandates that adaptation to climate change be based on the best available evidence.

None of the points we raise here is meant to question the quality or merit of the awarded proposals. All of them address issues of high relevance for national interests (adolescence, renewable energy, transportation, disasters, among others) and are supported by highly competent scientific teams. Our observation is systemic in nature and concerns the aggregate outcome of the process: the awarding resulted in an unusually high concentration across several dimensions.

Seven centers are linked to the same main institution; four of them are within the same faculty; no centers were awarded to public universities; much of the portfolio falls outside the social sciences and humanities; and ten centers are located in the Metropolitan Region, among other patterns.

Such an outcome can occur in an open and competitive contest, especially when resources are scarce. However, due to the significance of this instrument and its long-lasting implications for the national scientific system, a particularly robust, traceable, and verifiable justification was required, consistent with a program designed to guide evidence-based public policies for the coming decade.

In our case, even after the review request, no explanation has been provided that makes it possible to understand why the National Panel assigned a score that does not align with its own justification, placing CR2 just below the award threshold, nor why the institutional response fails to substantively address the arguments presented.

In a country that needs to strengthen its scientific capacity and trust in institutions, decisions of this magnitude can only be supported by public, clear, and verifiable reasons. That is the explanation that citizens—and the scientific system itself—have the right to know.

By the CR2 Board and the CR2.0 Renewal Proposal Directorate.
