GiveWell’s Uncertainty Problem

Noah A. Haber
ORCiD: 0000-0002-5672-1769

Code and data repository
Modified model sheet for PSA demo
Web app: Two programs with uncertainty
Web app: Conceptual demonstration of overall selection problem
Web app: Selection rules for individual candidate programs


Update, Dec 15, 2022: This post won one of two first place prizes in GiveWell’s “Change our Mind Contest.” See GiveWell’s description, the other winners’ essays, and commentary here.


GiveWell’s key decision point for its models is whether the intervention being modeled provides at least 3x as much expected benefit per dollar spent as a direct cash transfer (edit Jan 3, 2023: this rule was officially changed to 10x as of July 2022; see discussion in the comments), using the expected modeled effectiveness of GiveDirectly as a proxy. The key value of interest is the “best guess” point estimate for both the inputs and outputs of the model. While GiveWell’s models include some subjective assessments and adjustments dealing with uncertainty, these are relatively rare, suggesting that GiveWell’s assessments are designed to be fairly risk (or uncertainty) neutral, or at least very risk tolerant.

The driving assumption is that this neutrality would give the best possible mix of recommendations. Some recommended interventions would have underestimated cost-effectiveness, some overestimated, but as a group of recommendations would be nearly as good as it gets with the information available.

Unfortunately, the particular circumstances of GiveWell’s decision-making process yield some major unintended – but fixable – consequences. Specifically, GiveWell’s decision process advantages programs that have more unreliable evidence, are overestimated, and have weaker underlying cost-effectiveness.

This essay proceeds as follows:

  • A set of stylized simulation-based demonstrations that show how:
    • Individual programs with unreliable evidence have a selection advantage
    • Individual programs that are worse have a (limited) selection advantage
    • The selected group of programs will be strongly biased toward more unreliable and weaker programs, at the expense of better programs
  • A real implementation of a miniature probabilistic sensitivity analysis (PSA) demonstrating that:
    • Uncertainty is a serious and largely ignored problem in GiveWell’s models
    • That probabilistic sensitivity analysis is implementable within GiveWell’s existing workflow
  • A set of simulation-based demonstrations of alternative decision rules based on a PSA infrastructure

The major impact of these suggestions is that they change how GiveWell decides which programs are cost-effective and by what measures. That, in turn, is expected to change the mix of programs selected. In particular, if these suggestions are taken, it is likely that:

  • GiveWell will be able to recommend some of the programs it would otherwise reject due to improved uncertainty-forward metrics for evaluation.
  • GiveWell will no longer recommend some of the programs in its recommendation portfolio due to the evidence being too unreliable.

While it is unclear exactly how many programs would change status with an uncertainty-forward decision-making process, it is almost certain that at least one program will change status in both directions (not recommended to recommended and recommended to not recommended). It is plausible that many interventions would have their statuses switched. Even one proposed intervention switching statuses would have a major impact on GiveWell’s recommendations.

Conceptual underpinnings

In the following section, I generate a number of simulation-based conceptual demonstrations. While these closely resemble GiveWell’s decision rules, they are here to develop conceptual intuition on the mechanics of the problem(s) addressed. Regarding technical language, see disclaimer. I recommend opening the provided apps in a second window as you read; they are there for you to play with.

GiveWell’s decision-making framework

GiveWell’s decision-making framework, like many cost-effectiveness-based frameworks, is based on a threshold, where programs are considered highly cost-effective if the estimated CE value is ≥ 3. However, these estimates are made with some amount of uncertainty, where we cannot know the “true” cost-effectiveness but can get a rough idea of how much uncertainty there may be.

Warm up exercise

To build some intuition on where problems start to arise, consider two comparable hypothetical programs being reviewed by GiveWell. Program A is fairly cost-effective and well-measured, with a true value of 2.8, but where what we measure by chance alone ranges by a normal distribution with an SD of 0.5. Program B is slightly worse overall (true value = 2.5), but is more poorly measured such that there is a wider range of values by which we might have measured it by chance (SD = 1.5).

As shown in the above app, 34% of the time Program A would randomly be measured as being ≥ 3, while Program B would randomly be measured as being ≥ 3 and selected 37% of the time. In other words, despite Program B being overall worse and more poorly measured, GiveWell is more likely to include Program B in its selected portfolio. As a general rule, the more uncertainty there is, the closer program selection comes to a coin flip. If most programs evaluated are less cost-effective than our threshold, then the more uncertainty in a program’s evaluation, the more likely it is to be selected.
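The two selection probabilities above can be checked directly with the normal survival function. A minimal sketch in Python using only the standard library, with the program parameters as given in the example:

```python
from math import erfc, sqrt

def p_measured_at_least(threshold, true_value, sd):
    """P(measured CE >= threshold) when the measured value is normally
    distributed around the true value with the given SD."""
    z = (threshold - true_value) / (sd * sqrt(2))
    return 0.5 * erfc(z)

# Program A: true CE = 2.8, well measured (SD = 0.5)
p_a = p_measured_at_least(3, 2.8, 0.5)
# Program B: true CE = 2.5, poorly measured (SD = 1.5)
p_b = p_measured_at_least(3, 2.5, 1.5)

print(f"P(select A) = {p_a:.0%}")  # ~34%
print(f"P(select B) = {p_b:.0%}")  # ~37%
```

The wider distribution pushes more probability mass above the threshold even though Program B’s mean is lower.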

As a result, GiveWell’s decision rule isn’t just “risk neutral” or “expected value maximizing,” it selects and biases FOR uncertainty. This is a major issue by itself; some programs will get lucky and be anomalously overestimated (i.e. the “winner’s curse”). Unfortunately, as demonstrated in subsequent sections, the problem becomes more serious in aggregate, and much more serious when we consider how uncertainty and program quality are connected.

Arguably, one of the reasons that GiveWell’s rule is 3x GiveDirectly in the first place is as a partial hedge against uncertainty. In other words, if we were very sure of GiveWell’s estimates, the rule should be 1x. However, because the threshold rule operates only on the point estimate and ignores uncertainty, the threshold rule has to hedge against both the mean (which it does well) and the amount of uncertainty (which it does poorly). As will be shown later, a rule that incorporates the uncertainty directly lets us relax that tension.

Sources of uncertainty in cost-effectiveness modeling

In this essay, I am going to be somewhat vague about what kinds of uncertainty I am dealing with. The clearest is statistical uncertainty from sampling, which is typically what the literature provides in the form of confidence intervals, standard errors, p-values, etc. However, there are many more forms of uncertainty in modeling, including but not limited to:

  • Internal identification and study design issues in the studies from which model parameters are taken
  • External validity/generalizability issues in source studies relative to the target population of interest
  • Design choices and parameterizations in CE models
  • Unknown unknowns

For the sake of brevity, I am putting it all under the label “uncertainty” and keeping it vague. Deciding which kinds of uncertainty to consider, and how, is an exercise in and of itself, and one that must be undertaken very carefully.

Importantly, uncertainty in models grows rapidly as more assumptions are required, more estimates are made, parameters interact with each other, etc. Total uncertainty in a model is much larger than the sum of its parts, as discussed later.

Selection in aggregate

Baseline scenario

The app above generates 2,000 programs from which GiveWell could hypothetically select. As a starting assumption, we assume that on average programs have a true relative cost-effectiveness of 1 (i.e. below our selection threshold), randomly drawn from a normal distribution with an SD of 1.5. This creates 2,000 programs, where we know their true cost-effectiveness. Then, we add measurement error to our estimate of how cost-effective these programs are, simulating uncertainty in measurement and assessment. Finally, we apply GiveWell’s threshold-based decision rule to those programs and observe which programs we select for inclusion into GiveWell’s portfolio and which ones we reject.

For the purposes of this demonstration, we are interested in the degree to which we get our decisions right, using whether the true cost-effectiveness of the program is better than or worse than cash transfers (i.e. a relative CE value of 1) as a proxy for whether we made the right decision.

                      Rejected (negative)   Accepted (positive)
Negative (CE < 1)     True rejection        False acceptance
Positive (CE ≥ 1)     False rejection       True acceptance
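The baseline scenario can be sketched in a few lines. A hypothetical Python re-implementation, where the measurement-error SD of 1.0 and the seed are illustrative assumptions rather than the app’s actual parameters:

```python
import random

random.seed(1)
N = 2_000
THRESHOLD = 3.0   # GiveWell's point-estimate decision rule
MEASURE_SD = 1.0  # illustrative measurement-error SD (assumption)

# True cost-effectiveness relative to cash transfers (CE = 1)
true_ce = [random.gauss(1.0, 1.5) for _ in range(N)]
# Measured CE = true CE plus random measurement error
measured = [t + random.gauss(0.0, MEASURE_SD) for t in true_ce]

selected = [i for i in range(N) if measured[i] >= THRESHOLD]
false_acceptances = sum(1 for i in selected if true_ce[i] < 1.0)
bias_selected = sum(measured[i] - true_ce[i] for i in selected) / len(selected)

print(f"selected: {len(selected)} of {N}")
print(f"false acceptances (true CE < 1): {false_acceptances}")
print(f"mean bias among selected: {bias_selected:+.2f}")  # positive
```

Because selection conditions on the measured value clearing a high threshold, the measurement errors of selected programs are positive on average: exactly the selection bias shown in the app.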

In the first tab (“True vs false rejections”), we see the distribution of programs we accept and reject compared with whether or not they were truly better or worse than cash transfers. As expected, we generate many false rejections, due largely to the decision threshold of 3 being a “hedge” of sorts. More importantly, we observe false positives (i.e. programs that got “lucky”).

In the “Selection bias” tab, we see that there is a substantial bias in the programs that we select. Programs that we select have a large positive bias on average. If it were the case that being “risk-neutral” yields unbiased results, we would expect equal amounts of bias among the programs that we select and reject.

The fundamental reason for this is shown in the third tab (“Uncertainty vs bias”), where we see the relative amount of uncertainty on the x axis and the amount of bias on the y axis. Because we are selecting on a threshold and ignoring uncertainty, we are more likely to select programs in the upper right quadrant (i.e. those that are truly better and/or luckier and more biased).

What happens if the amount of uncertainty and the true value of the program are related?

For the previous demonstration, we assumed that there is no relationship between the quality of the program and the amount of uncertainty. However, it is likely that programs that are truly better or more promising will also be better measured (i.e. less uncertain). There are many reasons to believe this is the case in general:

  • The more likely a program is to be effective, the more likely it is that researchers and policy makers will invest in rolling it out, testing it, and retesting it.
  • Higher quality, stronger, more direct, and therefore more expensive tests will usually be reserved for programs more likely to yield strong results.
  • Regression to the mean-type effects will tend to temper expectations for good programs more efficiently than for poor programs.
  • Publication biases and related adverse selection forces in academia will tend to overselect strongly biased results and more rarely measured interventions.

The “Program certainty reduction factor” in the app reduces more of the overall amount of uncertainty for programs that are truly more cost-effective. Crank it up to 1, and see what happens: we get even more false positives and false negatives and more overall bias, despite this factor reducing the overall uncertainty in our program measurement. While the specific numbers here do not represent reality, this is the most likely scenario we are in.

While there is no way to completely eliminate this issue, there are relatively low-lift ways of substantially mitigating it. One option, discussed in more detail in the section on “Alternative decision rules” below, uses a lower bound of the uncertainty distribution for each program measured, compared against a slightly more generous (i.e. lower) decision threshold. Using this alternative decision rule, we get fewer false rejections, fewer false acceptances, and reduced bias. This uncertainty-forward rule is particularly important and bias-reducing in the case where better programs have less uncertainty.

Selection bias with default decision-rule (left) and CI-based decision-rule (right) when more cost-effective programs are also better measured

In the app, I recommend flipping between the default and CI-based rules in different simulation scenarios to gain some intuition.


  • When a decision is made on whether a point estimate exceeds a threshold and ignores uncertainty, programs that are more uncertain will tend to be more likely to be selected, leading to biased selections.
  • When there is an inverse relationship between the value of a program and the amount of uncertainty involved in predicting its true value, programs that are both more uncertain and overall worse will tend to be even more likely to be selected over better-evaluated programs.
  • The more uncertainty and more pronounced the relationship between certainty and value, the worse the problem is, to the point where it can completely overwhelm the value of the selection exercise.

Is this a problem for GiveWell’s models? A review of uncertainty in GiveWell models and demonstration of probabilistic sensitivity analysis

In short: yes, this is a major issue for GiveWell’s models.

Ideally, I would estimate the extent of the uncertainty by replicating all of GiveWell’s models and reassessing the certainty involved in every relevant decision and parameter. Unfortunately, doing that for even one model would take more time than is available. Instead, rather than try to give a reliable estimate of the extent of the true uncertainty, I show what happens with a small amount of prodding, as a preview of what might happen with a full accounting of uncertainty. Code, data, and the modified workbook are available.

For the Deworm the World and Malaria Consortium models assessed, I:

  1. Looked for a handful of key parameters pertaining to the overall effectiveness of the program and prevalence of the issue being addressed
  2. Traced those parameter values back to the original data, and located the statistical sampling uncertainty provided
  3. Decided on a conservative, reasonable distribution of uncertainty that reflects the statistical uncertainty identified
  4. Using those data, ran a script that randomly varies those parameters in the model by the uncertainty distributions identified (i.e. ran a very limited probabilistic sensitivity analysis)
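The four steps above amount to: draw each uncertain parameter from its distribution, recompute the model’s bottom line, and repeat many times. A stripped-down Python sketch, in which the toy model and every distribution are invented purely for illustration (the real exercise varies parameters inside the Google Sheets model itself):

```python
import random
import statistics

random.seed(42)

def cost_effectiveness(prevalence, effect_size, cost_per_person):
    """Toy stand-in for a spreadsheet model's bottom line:
    benefit per dollar relative to cash transfers (invented units)."""
    return (prevalence * effect_size) / cost_per_person

draws = []
for _ in range(10_000):
    # Step 4: vary each parameter by its uncertainty distribution
    prevalence = random.betavariate(20, 60)   # prevalence around 25%
    effect_size = random.gauss(12.0, 4.0)     # effect, with sampling SE
    cost = random.lognormvariate(0.0, 0.25)   # cost around $1/person
    draws.append(cost_effectiveness(prevalence, effect_size, cost))

draws.sort()
point = statistics.median(draws)
lb_80 = draws[int(0.20 * len(draws))]  # 20th percentile = 80% CI lower bound

print(f"median CE: {point:.2f}, 80% CI lower bound: {lb_80:.2f}")
```

The output of the PSA is not a number but a distribution; every decision rule discussed later is a summary of this same set of draws.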

Important note: The results below are an *underestimate* of the amount of uncertainty, reflect an arbitrarily decided set of parameters, and are not designed for any kind of comparison between models. The more parameters added to the PSA (as would be done in a real exercise), the more decisions modeled, the more thorough the investigation, and the more systematic the uncertainty estimation, the more uncertainty would be found. In real life, each of these models is MUCH more uncertain than what is shown here, but by an unknown amount.

The results for the two models above give a flavor for what might be found with a full accounting of uncertainty, noting that the total amount of uncertainty is likely to be much larger in the real exercise. In the Deworm the World intervention, we see that allowing a very small number of variables to vary within their statistical sampling uncertainty yields a very large range of potential outcomes, including some that yield worse returns than cash transfers or are actively harmful. The Malaria Consortium intervention, on the other hand, is not quite as sensitive in this exercise (again noting that this is an underestimate of the amount of uncertainty, and that identifying which interventions are more uncertain or better is not feasible from this exercise). Should a full exercise yield similar certainty in the results, we might be more sure about this particular intervention, and weigh it more highly.

Uncertainty compounds rapidly and complicatedly

One of the key reasons that these models are so sensitive is that they layer and multiply uncertainty onto uncertainty. A highly uncertain parameter (such as the prevalence of a disease) multiplied by another highly uncertain parameter (such as the effectiveness of the intervention) produces much more uncertainty than the sum of its parts. The more steps needed to get to effectiveness, the larger the assumptions, the less direct the evidence, and the more uncertainty there is in the parameters, the more rapidly that uncertainty grows. There is no escaping this problem, only addressing it head on.
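A quick illustration of that compounding (all numbers invented): two independent parameters, each measured with about 20% relative uncertainty, yield a product with markedly more relative uncertainty than either input.

```python
import random
import statistics

random.seed(0)

def rel_sd(xs):
    """Relative spread: standard deviation divided by the mean."""
    return statistics.stdev(xs) / statistics.mean(xs)

# Two independent parameters, each with ~20% relative uncertainty
a = [random.gauss(1.0, 0.20) for _ in range(100_000)]
b = [random.gauss(1.0, 0.20) for _ in range(100_000)]
product = [x * y for x, y in zip(a, b)]

print(f"rel. SD of a:   {rel_sd(a):.2f}")       # ~0.20
print(f"rel. SD of b:   {rel_sd(b):.2f}")       # ~0.20
print(f"rel. SD of a*b: {rel_sd(product):.2f}")  # wider than either input
```

With more multiplied-together parameters, as in a real CE model, the spread grows faster still.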

So, to answer the key question here: Is there enough uncertainty in these models that we should be seriously worried about making weak decisions due to ignored uncertainty? The answer is definitely “yes.”

Instead of sidestepping the problem of uncertainty, GiveWell’s modeling imposes extremely strong assumptions on its estimates, resulting in estimates that are biased and less reliable than they could be. It also misses the cases where interventions are measured with a high level of reliability.

One-way sensitivity analyses (i.e. varying only one parameter at a time) should also be performed to identify the areas where it is most valuable to invest time resolving uncertainties; they can easily be incorporated into a sensitivity analysis workflow.

A brief infrastructure and workflow interlude

GiveWell uses Google Sheets as the primary way it designs, develops, and shares its models. While PSA is much easier and more efficient in more fit-to-task software such as R, Google Sheets is sufficient and accessible to a wider range of people. Rather than completely upend GiveWell’s existing infrastructure, the PSA here is based on a nearly exact copy of GiveWell’s existing models, with the only modifications being some moved cell references for convenience and added sheets defining inputs and outputs. In this case, the code for running the PSA interfaces with the Google Sheets directly from R; a comparable approach would require little to no change in GiveWell’s model design workflow and maintains the accessible openness of public Google Sheets documents. All code and files are, of course, available on my GitHub with no restrictions.

In other words, the PSA implementation here is designed to drop neatly into GiveWell’s workflow and infrastructure. In theory, GiveWell researchers could remix the code and design from this essay to implement their own PSA procedures and PSA-based decision rules (implemented and discussed in the next section).

The very lightly modified data sheet for performing this PSA is here for perusing.

Potential methods and frameworks for addressing uncertainty

To review briefly, in the previous sections we have shown 1) the problem with using the point estimate alone in combination with a threshold, 2) that this is a serious issue for GiveWell’s models, and 3) how to generate a distribution of uncertainty around the CE estimates using PSA. In this section, we explore alternative decision rules and frameworks to mitigate these issues based on PSA.

Option 1) Just look at the uncertainty

The most important part of this exercise is to look at and understand the uncertainty involved in these models and escape from viewing only the point estimate. While informal, simply going through the exercise and being confronted with uncertainty may be enough on its own to impact decisions. It is likely to have knock-on effects on study design as well.

Option 2) Use a lower bound of the uncertainty interval

Rather than using the mean value of the uncertainty distribution (i.e. the point estimate), researchers can use a lower bound (e.g. the lower bound on the 80% CI). The lower bound is impacted both by the expected value of the impact AND the distribution of uncertainty, where the LB will be lower the more uncertainty there is. In practice, the 80% CI lower bound is simply the 20th percentile value of the distribution of outcomes generated from the PSA.

While using the LB instead of the point estimate would inherently lead to more conservative decisions, we can compensate by also changing the threshold. As discussed earlier, the existing threshold is doing too much work to make up for the fact that it does not include uncertainty, so it needs to be very large. When we include uncertainty directly through a lower bound, we can relax that necessity and use a lower threshold (e.g. 2x GiveDirectly). By doing these simultaneously, GiveWell can be less conservative (i.e. accept more programs), have fewer false positives among the programs it selects, and hold a portfolio with much less bias. It is truly a win-win.
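Given PSA output, this rule is a few lines of code. A sketch, assuming 10,000 PSA draws per program and the relaxed 2x threshold discussed above; the two example programs are hypothetical:

```python
import random

random.seed(7)

def lb_rule(psa_draws, quantile=0.20, threshold=2.0):
    """Accept a program if the lower bound of its CE uncertainty
    distribution (the 20th percentile, i.e. the 80% CI lower bound)
    clears a relaxed threshold (here 2x cash transfers)."""
    draws = sorted(psa_draws)
    lower_bound = draws[int(quantile * len(draws))]
    return lower_bound >= threshold

# Two hypothetical programs with the same mean CE (~3.5) but
# very different amounts of uncertainty:
tight = [random.gauss(3.5, 0.5) for _ in range(10_000)]
wide = [random.gauss(3.5, 2.5) for _ in range(10_000)]

print(lb_rule(tight))  # True: lower bound ~3.1 clears 2x
print(lb_rule(wide))   # False: lower bound ~1.4 falls short
```

Identical point estimates, opposite decisions: the rule penalizes uncertainty directly rather than leaving the 3x threshold to hedge against it.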

Option 3) Use a probability-based threshold with a discrete comparator

There is an alternative framework to think in: instead of using a single value of the distribution of cost-effectiveness values, we can change the question to “What is the probability that this intervention is better than 1x (i.e. cash transfers)?” We can set a critical value for that threshold (e.g. we accept programs that we are 90% sure are better than cash transfers). As above, that value comes straight out of the distribution from our PSA: it’s simply the proportion of outcome results from our PSA-generated distribution which are ≥1.
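This quantity also falls straight out of the PSA draws. A hypothetical sketch, where the program’s distribution and the 90% critical value are illustrative:

```python
import random

random.seed(3)

def prob_better_than_cash(psa_draws, comparator=1.0):
    """Proportion of PSA draws at or above the comparator (cash = 1x)."""
    return sum(d >= comparator for d in psa_draws) / len(psa_draws)

# Hypothetical program: mean CE of 2, with substantial uncertainty
draws = [random.gauss(2.0, 0.7) for _ in range(10_000)]

p = prob_better_than_cash(draws)
accept = p >= 0.90  # critical value: 90% sure it beats cash transfers
print(f"P(CE >= 1) = {p:.2f}, accept = {accept}")
```

Note that the program’s point estimate (2) never enters the decision; only the probability of beating the comparator does.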

If that sounds a lot like p-value-based null hypothesis significance testing, that’s because it is. Here our “null” is 1 (i.e. the ratio of the program of interest’s estimate to the cost-effectiveness of cash transfers), and our alpha is our choice of threshold cutoff. While null hypothesis significance testing is often problematic and maligned in many circles of science, this is a case where it is ideally suited for the problem at hand. The tradeoff is that we stop caring about the actual number value of how cost-effective the program is, but instead how sure we are that it’s better than cash transfers.

But we can potentially do a bit better.

Option 4) Use a probability-based threshold with a distributional comparator

Up until this point, we have been considering the cost-effectiveness of cash transfers themselves as being known and fixed. In reality, our estimate of the cost-effectiveness of cash transfers ALSO has uncertainty. And we can do to the cash transfers model exactly what we did for all the other models to get a *distribution* of cost-effectiveness values to compare to. In practice, the question becomes “What is the probability that a random draw from the distribution of uncertainty for our program of interest is ≥ a random draw from the distribution of uncertainty for cash transfers?”

This version makes the idea that we are operating under uncertainty explicit. We are not 100% sure about the cost-effectiveness of these interventions, and that is ok!
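With a PSA distribution for cash transfers as well, the comparison becomes draw-against-draw. A sketch with invented distributions for both:

```python
import random

random.seed(11)

# Hypothetical PSA output for the program of interest and for cash transfers
program = [random.gauss(2.5, 1.0) for _ in range(10_000)]
cash = [random.gauss(1.0, 0.3) for _ in range(10_000)]

# P(random program draw >= random cash-transfer draw),
# pairing up independent draws from the two distributions
p = sum(p_i >= c_i for p_i, c_i in zip(program, cash)) / len(program)
print(f"P(program beats cash) = {p:.2f}")
```

The only change from Option 3 is that the fixed comparator of 1 is replaced by draws from GiveDirectly’s own uncertainty distribution.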

Alternative options

There are a plethora of alternative frameworks, models, and decisions. Chief among them is adopting a Bayesian framework for decision-making under uncertainty. Mechanically, these require a somewhat larger departure from GiveWell’s existing framework and may impede public communications and accessibility, so they are not presented here. GiveWell should, however, strongly consider these options, and it has in the past. Notably, this document shows how the blog post’s assumption of unbiasedness isn’t particularly reliable either.

Which option is best depends on what question(s) you want to answer

While we can see that GiveWell’s existing methods yield unreliable and biased results, choosing which of the above options comes down to deciding: “What question(s) am I, a donor, most interested in?” The two probability-based thresholds imply a decision-tradeoff: faced with the choice of giving to Organization A or giving cash, how sure am I that I should give to Organization A instead of GiveDirectly? The downside to this question is that it makes the decision a binary; while you can rank programs by that probability, all very good programs will have a probability close to 1. That means that after some point, we can’t tell which programs are the best.

If we want to know “what is the *best* charity I can give to?” we have to incorporate the level of cost-effectiveness as well. While I have shown how ignoring uncertainty altogether leads to biased and unreliable answers to this question, there is a bit of tuning and tradeoffs to be made. Option 2)’s lower bound option can be considered a balance of both questions.

While tuning decision thresholds and choosing which options to use takes a bit of work, the good news is that they all come from the same PSA exercise. In some sense, the choice among these options matters much less than incorporating uncertainty through any of them (or others).

Additional benefits of uncertainty-forward models

Relationship to other common EA critiques

One common and extremely important critique of the EA community is a general level of overconfidence, exacerbated by the social and physical distance between EA researchers and the communities they impact. For example, the “moral value weights” reflect the moral values of the GiveWell researchers, which are not necessarily those of the communities they impact. An uncertainty-forward approach lets researchers express that they are unsure of what those values should be.

While there is no way to fully fix this problem without directly incorporating impacted communities into the decision-making process, uncertainty-forward approaches are a halfway measure toward relieving this tension.

How an uncertainty-forward modeling framework changes model decision-making

Up until this point, we have been assuming that the models themselves would be relatively unaffected by this change in framing. However, an uncertainty-forward framing also changes how you construct and evaluate a model. For example, it may lead researchers to focus on the variables with the greatest range of plausible uncertainty that can impact results. In some cases, the reliability of a model may come down to a single parameter that would not have been noticed had researchers been looking only at the point estimate. With this information, researchers can add their own subjective assessments about model reliability beyond the rote 3x rule.

One of the most difficult tasks as a modeler is deciding which parameters to include, how to model specific issues, etc. This is a time-consuming process, and at present GiveWell’s workflow results in one parameterization per model. Rather than choosing just one, an uncertainty-forward modeling approach lets GiveWell researchers relax that choice; they can (and should!) include several options if and when they are unsure about which is the “right” answer. Since we no longer care so much about the single best guess value, we don’t have to make quite as many assumptions.

Error-checking opportunities

No model is without error. Sensitivity analyses incorporated into the modeling process can help identify these issues earlier and more efficiently, providing an additional semi-automated sanity check of model construction. If varying a parameter yields little change in outcomes, a change in an unexpected direction, too much change, etc., that can indicate a model error. Relatedly, see the note about replicability.

Clearer communication with donors about the reliability of estimates

As a donor, I don’t want to only know what my best guess is, but also how sure I can be of that guess and decide accordingly. If I can do so as part of the normal process of model development, all the better. GiveWell has a responsibility to provide this information to donors, especially as in many cases it could result in donors being more sure of the organizations to which their donations go.

Beyond responsibility, uncertainty estimates are beneficial for GiveWell’s long-term credibility. Failure to communicate the uncertainty in estimates is a perennial problem. COVID-19 prediction models, for example, broadly advertised their “best guess” estimates, but rarely communicated the extent and nature of the uncertainty involved. As a result, nearly all of them failed to provide any useful information. GiveWell may be facing a similar problem; if GiveWell’s models are found to be less reliable than donors expect, they may (reasonably) trust GiveWell’s models less. That problem is avoidable by communicating and incorporating uncertainty by default.


In this essay, I have demonstrated that:

  1. Without an uncertainty framework, GiveWell’s models are
    1. Overconfident
    2. Biased toward more uncertain interventions
    3. Biased toward weaker interventions
  2. The uncertainty problem is large, serious, and cannot be ignored
  3. A probabilistic sensitivity analysis-based decision process addresses these issues
  4. A probabilistic sensitivity analysis is very doable within GiveWell’s workflow without enormous additional burden or changes to model building infrastructure. As part of this essay, much of that infrastructure work has already been done.

I have two strong recommendations: First, GiveWell should develop a framework to systematically engage with the uncertainty in its models, and to document it. Second, GiveWell should revise its decision-making rules to explicitly include that uncertainty, such as by using an 80% lower bound of the distribution of cost-effectiveness or setting the threshold as a certain probability of exceeding GiveDirectly’s cost-effectiveness.

After developing the initial draft of the outline and apps, a colleague pointed out another essay that is extraordinarily similar in structure and recommendations to this one. No doubt that other entries are also likely similar. The reason for this is relatively straightforward: these are standard problems in cost-effectiveness, with relatively standard approaches to these problems. In fields that rely heavily on cost-effectiveness, PSA-based cost-effectiveness analysis or similar are performed by default, and are often required by the organizations publishing and using cost-effectiveness. Fortunately, that ubiquity means that there is a wealth of experience GiveWell can draw from in modifying its own procedures and models.

Making good decisions under uncertainty requires understanding the nature of the uncertainty. Some important topic areas are inherently more expensive or difficult to generate evidence for, and understanding how much and why can help us make better decisions. Importantly, uncertainty does not inherently mean inaction. Decisions do not need to be strictly rule-based, but exploring the uncertainty and applying some rules gives us a starting point. That first accounting step is necessary to dispel the fallacy that avoiding uncertainty makes decisions risk neutral. Uncertainty-forward modeling will likely move decisions away from programs that are inherently difficult to measure but could be enormously effective, relative to where GiveWell’s decisions are at this moment; however, at this moment GiveWell’s models are skewed toward more uncertain, less effective programs. There are balances and tradeoffs.

While there is unquestionably a cost in time and effort involved with changing to an uncertainty-forward framing, the benefits of doing so are likely to be extraordinarily high. With experience, good guidance, and tuning, the long run additional time it would take to bring uncertainty into GiveWell’s models is likely to be relatively modest, and the value gained in strength of inference, reliability, and credibility are large.

In order to improve the reliability of GiveWell’s modeling, it must embrace uncertainty.


The following people provided constructive comments, suggestions, criticism, etc: 

Josh Ehrenberg, Alyssa Bilinski, Joshua Blake, Ryan Briggs, Caitlin Tulloch, Theiss Bendixen, Jonas Kristoffer Lindeløv, Richard Nerland, Paul Whaley, Christopher Boyer, Tanae Rao, and Karthik Tadepalli. If any errors are present, they are my own.

Critique, comments, disagreements, and everything in between are very welcome; after all, even this essay about uncertainty contains uncertainty.

13 Replies to “GiveWell’s Uncertainty Problem”

  1. Great essay and congratulations on winning the contest!

    You make a great case for incorporating uncertainty. However, if I understand correctly, the bias in the current decision making process occurs only when ignoring uncertainty AND using a threshold to make binary fund/don’t fund decisions. What would happen if GiveWell simply allocated funding to the programme with the highest cost effectiveness point estimate, until that programme had no additional room for funding (or until diminishing returns reduced its cost effectiveness below that of other programmes), and then began allocating funding to the programme with the next highest cost effectiveness point estimate? In my mind this would not suffer from the same bias as you have outlined above, and would maximise expected value?


    1. Hi Matt,

      Thanks for the suggestion! Unfortunately, this strategy suffers from roughly the same problem. It is true that when you are comparing 2 programs, the one with the best true effect will (most of the time) have a higher point estimate than the more uncertain, worse program. However, the greater the uncertainty in either or both programs, the closer that comes to a 50/50 chance.

      More importantly, consider the aggregate case where you have more than 2 programs to choose from. Let’s say we have 20, and are picking the top 5. If you line up all the candidate programs by their point estimates, the ones on top will most likely be the ones with the greatest uncertainty (i.e., the greatest opportunity to randomly land very high point estimates because they were the worst measured). By choosing the “top” programs, we mostly just choose the most uncertain ones.

      It’s worth remembering that we can never know how much the point estimate we see is influenced by chance. There really is no way to deal with the biases caused by uncertainty without directly addressing the uncertainty itself.
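      To make that concrete, here is a small illustrative simulation (my own sketch, separate from the essay’s repository; all parameters are made up): 20 candidate programs with identical true cost-effectiveness, differing only in measurement noise, from which we select the top 5 point estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_programs, n_top, n_sims = 20, 5, 2000

# All candidate programs share the same true cost-effectiveness;
# only the measurement noise (standard error) differs.
true_ce = np.full(n_programs, 3.0)
noise_sd = np.linspace(0.1, 3.0, n_programs)  # program 0 is best measured

high_noise_share = 0.0
for _ in range(n_sims):
    estimates = rng.normal(true_ce, noise_sd)
    top = np.argsort(estimates)[-n_top:]  # select top 5 by point estimate
    high_noise_share += np.mean(noise_sd[top] > np.median(noise_sd))
high_noise_share /= n_sims

# With identical true effects, an unbiased selection would draw from the
# noisier half of programs ~50% of the time; point-estimate ranking draws
# from it far more often.
print(f"Share of selected programs from the noisier half: {high_noise_share:.0%}")
```

      Despite identical true effects, the “top” programs come disproportionately from the noisier half.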

  2. Great article Noah!
    I’m curious what your thoughts are on using a bayesian adjustment to point estimates. In general terms, if each measurement = benefit + error, then E[error|measurement=m] = Integral[e*P(measurement=m|error=e)*P(error=e), {e,-Inf,Inf}] / P(measurement=m). Then the adjusted measurement could be calculated as E[benefit|measurement=m] = E[measurement-error|measurement=m] = m – E[error|measurement=m]. Obviously this would require an assumed error distribution, but I’m curious whether you think that’s an approach worth pursuing?

    1. Hi Kalev,

      Thanks! Bayesian-style adjustments are great, and there are some clear linkages between the freq-style PSA-based reasoning and a more purely Bayesian one. In practice, I tend to think that folks are a bit more uncomfortable with setting priors and that Bayesian-based solutions can be overly formal. To me, what matters most is more that the uncertainty is being engaged with much more than what specific solution you use.
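      As a concrete (and purely hypothetical) version of the kind of adjustment Kalev describes, here is the standard normal-normal shrinkage formula; the prior and error parameters below are illustrative assumptions, not values from any GiveWell model:

```python
def shrunk_estimate(m, prior_mean, prior_sd, error_sd):
    """Posterior mean of benefit given a measurement m, assuming
    benefit ~ N(prior_mean, prior_sd^2) and error ~ N(0, error_sd^2).
    (Standard normal-normal conjugate update; parameters illustrative.)"""
    w = prior_sd**2 / (prior_sd**2 + error_sd**2)  # weight placed on the data
    return w * m + (1 - w) * prior_mean

# A noisy 8x-cash point estimate gets pulled sharply toward the 1x prior...
print(shrunk_estimate(8.0, prior_mean=1.0, prior_sd=2.0, error_sd=4.0))  # ≈ 2.4
# ...while a precisely measured 8x barely moves.
print(shrunk_estimate(8.0, prior_mean=1.0, prior_sd=2.0, error_sd=0.5))  # ≈ 7.59
```

      The same 8x point estimate shrinks dramatically when measured noisily and barely at all when measured precisely, which is exactly the behavior that counteracts the selection problem.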

  3. Congratulations Noah!

    Some comments:

    1. The way I think about the Optimizer’s Curse, the fundamental issue is that expected value does not commute with max. In symbols, for N distributions X_n: E(max(X_n)) >= max(E(X_n)) >= E(X_m). In our context, each X_n corresponds to some intervention. The LHS is the naive estimate of the cost-effectiveness of the naively best intervention, which will exceed the middle term, the true CE of the true best intervention; that in turn will usually exceed the RHS, the true CE of the naively best intervention X_m. (Mathematically, the first inequality is a consequence of the convexity of max and the second is a trivial consequence of the definition of max.)

    The bigger N is, the bigger the gaps will tend to be (of course it also depends on how concentrated the X_n are, and also how correlated they are). I think this matters, because the implicit N will vary by intervention. For example, I would argue for deworming N is roughly 3X what it is for SMC. If you thought about mass deworming of school age children from scratch and had to guess what the main benefit was, it probably wouldn’t be long-term economic benefits without much if any measurable short-term benefit to health or education. In contrast, for SMC the estimated primary benefit — lives saved — is the first, most direct one you would think of, not arguably the third as for deworming. If you accept that argument, then threshold based approaches (like the three you discussed) should have a more punitive threshold for deworming than SMC.

    2. Thresholding is a good, common sense approach. And in practical terms, it might be the way to go. But as you know of course we really care about the full distribution. If hypothetically after correcting from bias from the selection approach, we think intervention A has a 50% chance of being 0.95x cash and a 50% chance of being 20x cash, whereas intervention B has a 20% chance of being 0x cash and an 80% chance of being 2x cash, we’d clearly prefer A (even though B would win on cutoffs between 50% and 80%). Ultimately, what we would like (assuming we are risk neutral anyway) is an EV of CE, after correcting the distribution for the bias introduced by the selection procedure. But that would be hard! For one thing, the full selection procedure isn’t that well-defined, especially at the top of the funnel.

    3. My general feeling is GiveWell tends to err a little bit on the conservative side in estimating parameters of its CE models, which may informally help partially correct for OC.

    1. To state #1 more succinctly and without formulas: The more options you optimize over, the worse the optimizer’s curse and therefore the bigger the correction — i.e. the stricter threshold — you need. An example is deworming vs most other GW funded interventions (e.g. SMC). On this analysis deworming gets knocked down in cost-effectiveness for two reasons: the higher uncertainty and (the point I’m making here) the larger implicit space of alternative hypotheses it’s drawn from.
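    The chain of inequalities in #1 is easy to verify by simulation (an illustrative sketch with made-up parameters, not GiveWell’s numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 20_000

# Three hypothetical interventions (true CE and measurement noise invented).
true_ce = np.array([2.0, 2.5, 3.0])
noise_sd = np.array([3.0, 1.0, 0.5])

estimates = rng.normal(true_ce, noise_sd, size=(n_sims, 3))
winner = estimates.argmax(axis=1)        # the "naive best" in each simulation

lhs = estimates.max(axis=1).mean()       # E[max X_n]: naive estimate of the winner
mid = true_ce.max()                      # max E[X_n]: true CE of the true best
rhs = true_ce[winner].mean()             # E[X_m]:     true CE of the naive winner

print(f"{lhs:.2f} >= {mid:.2f} >= {rhs:.2f}")
```

    With these parameters the naive estimate of the winner overshoots the true best intervention, which in turn beats the true CE of the naively chosen intervention, matching E(max(X_n)) >= max(E(X_n)) >= E(X_m).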

  4. I think another point related to uncertainty in analysis that can be made here is that relying on many small, more unreliable measures can be worse than using coarser, aggregate values. IIRC, they broke a lot of estimates up into sub-populations based on small demographic groupings, thereby increasing the uncertainty and bias in their estimates versus what they could get from coarser but more reliable measures.

  5. Thanks, nice argument (and great interactive figures – one nitpick is that the threshold doesn’t update in the figure when I change it on the slider in ‘Selection rules for individual candidate programs’).
    This doesn’t affect the overall case, but I am interested in your repeated use of this 3x cash threshold. I don’t remember seeing 3x anywhere, do you have a source? In fact, if this is the most recent update, it looks like it is 10x. I realise this is minor, but I am surprised given your confident-sounding claim in the first sentence of the piece that “GiveWell’s key decision point for its models are whether the intervention being modeled provides at least 3x as much expected benefit per dollar spent as a direct cash transfer”.
    Overall I think your proposed approach would address nicely, in a more principled and general way, the issue I raised specifically regarding uncertainty in discount rates:

    1. Hi Oscar,
      Thanks for the comment! Re: figure updating based on the slider, can you tell me what specifically you are seeing that isn’t changing as you expect? It appears to be working as expected on my end, but who knows!

      Re the threshold rule: Firstly, I should probably have put a note in the article to clarify this (and will edit one in shortly), but GiveWell changed their threshold to 10x in July 2022; prior to that, the rule was 2-3x. However, as of the writing of this article, the rule had not yet been fully implemented in the models and recommendations (though I am sure it will be soon). Secondly, fortunately the actual threshold number is arbitrary in these conceptual simulation-based demonstrations (there is a brief note about that in a mouseover). Whether the threshold is 0.5 or 100, the same concepts apply so long as the range of uncertainty is at least in the same ballpark as the threshold (think orders of magnitude rather than specific numbers). But either way this should be explained better, thanks!

  6. OK thanks, that makes sense that the precise cost-effectiveness multiple does not matter for the argument to carry.
    Ah yes, sorry, I did not provide enough detail; the glitch seems very limited in scope. When I choose the ‘Probability CE > cash transfer distribution’ option and drag the ‘Minimum acceptable probability that candidate CE > cash transfer’s CE’ slider away from the default value, after the graph reloads I still see ‘Threshold: 80%’ in the top left. (The Result still changes appropriately.)

  7. I added a few comments and highlights as public comments you can check. If you are willing to adjust this essay (‘latest version’) there are ways you might make it a little more clear (it’s already very well written though, IMO).

    In particular:

    1. Help understanding the first view of the graphs
    – “Selected” contains only programs where estimated CE is > 3
    – “Rejected” are all other programs (estimated CE ≤ 3)

    2. This seems to be done in the file sensitivity analysis.R, pulling parameters from the linked Gsheet

    3. I don’t understand the logic behind “regression to the mean-type effects will tend to temper expectations for good programs more efficiently than for poor programs”

    Anyways, great job. I’d love to see and maybe be part of more work on an explicit Bayesian framework for decision-making under uncertainty here, perhaps in a way that enables a range of moderately sophisticated users to enter their own goals, moral parameters, etc.

Thoughts and comments welcome