EA - Are there diseconomies of scale in the reputation of communities? by Lizka
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are there diseconomies of scale in the reputation of communities?, published by Lizka on July 29, 2023 on The Effective Altruism Forum.

Summary: in what we think is a mostly reasonable model, the amount of impact a group has increases as the group gets larger, but so do the risks of reputational harm. Unless we believe that, as a group grows, the likelihood of scandals grows slowly (at most as quickly as a logarithmic function), this model implies that groups have an optimal size beyond which further growth is actively counterproductive - although this size is highly sensitive to uncertain parameters. Our best guesses for the model's parameters suggest that it's unlikely that EA has hit or passed this optimal size, so we reject this argument for limiting EA's growth. (And our prior, setting the model aside, is that growth for EA continues to be good.)

You can play with the model (insert parameters that you think are reasonable) here.

Epistemic status: reasonable-seeming but highly simplified model built by non-professionals. We expect that there are errors and missed considerations, and would be excited for comments pointing these out.

Overview of the model

Any group engaged in social change is likely to face reputational issues from wrongdoing by members, even if it avoids actively promoting harmful practices, simply because its members will commit wrongdoing at rates in the ballpark of the broader population.

Wrongdoing becomes a scandal for the group if it becomes prominently known by people inside and outside the group, for instance if it's covered in the news (this is more likely if the person committing the wrongdoing is prominent themselves).

Let's pretend that "scandals" are all alike (and that this is the primary way by which a group accrues reputational harm).

Reputational harm from scandals diminishes the group's overall effectiveness (via things like it being harder to raise money).

Conclusion of the model: if the reputational harm accrued by the group grows more quickly than the benefits (impact not accounting for reputational harm), then at some point, growth of the group would be counterproductive. If that's the case, the exact point past which growth is counterproductive would depend on things like how likely and how harmful scandals are, and how big coordination benefits are.

To understand whether a point like this exists, we should compare the rates at which reputational harm and impact grow with the size of the group.
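One hedged way to make that comparison concrete (our own notation, not anything taken from the post or its linked interactive model) is to suppose benefits and expected reputational costs both follow power laws in the group's size n:

B(n) = a n^{\alpha}, \qquad C(n) = b n^{\beta}, \qquad V(n) = B(n) - C(n).

If \beta > \alpha > 0, setting V'(n) = a\alpha n^{\alpha-1} - b\beta n^{\beta-1} = 0 gives an optimal size

n^{*} = \left(\frac{a\alpha}{b\beta}\right)^{1/(\beta-\alpha)},

beyond which further growth reduces net value; if instead \beta \le \alpha (for instance, if scandal risk grows only logarithmically relative to impact), no finite optimum exists. This is only a toy illustration of why the growth rates matter; the actual model linked above may use different functional forms.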
Both might grow greater than linearly.

Reputational harm accrued by the group in a given period of time might grow greater than linearly with the size of the group, because:
- The total reputational harm done by each scandal probably grows with the size of the group (because more people are harmed).
- The number of scandals per year probably grows roughly linearly with the size of the group, because there are simply more people who each might do something wrong.
These things add up to greater-than-linear growth in expected reputational damage per year as the number of people involved grows.

The impact accomplished by the group (not accounting for reputational damage) might also grow greater than linearly with the size of the group (because more people are doing what the group thinks is impactful, and because something like network effects might help larger groups more).

Implications for EA

If costs grow more quickly than benefits, then at some point, EA should stop growing (or should shrink); additional people in the community will decrease EA's positive impact.

The answer to the question "when should EA stop growing?" is very sensitive to parameters in the model; you get pretty different answers based on plausible parameters (even if you buy the setup of the model).

However, it seems hard to choose parameters that imply that ...
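To see how sensitive the conclusion is to the parameters, here is a minimal Python sketch in the same spirit as the model described above; the functional forms and every numerical value are our own illustrative assumptions, not the authors' actual model or parameters.

# A toy numerical version of the model sketched above. All functional forms
# and parameter values are illustrative assumptions, not the post's.

def impact(n, a=1.0, alpha=1.1):
    # Impact before reputational costs; alpha > 1 encodes network/coordination
    # benefits that make larger groups more than proportionally effective.
    return a * n ** alpha

def expected_harm(n, scandal_rate=0.001, harm_scale=1.0, gamma=0.5):
    # Scandals per year scale roughly linearly with size (scandal_rate * n),
    # and the harm per scandal grows with size (harm_scale * n ** gamma),
    # so expected harm grows like n ** (1 + gamma): faster than linear.
    return (scandal_rate * n) * (harm_scale * n ** gamma)

def net_value(n):
    return impact(n) - expected_harm(n)

if __name__ == "__main__":
    sizes = [10 ** k for k in range(2, 9)]  # 100 people up to 100 million
    for n in sizes:
        print(f"n={n:>11,}  impact={impact(n):>15,.0f}  "
              f"harm={expected_harm(n):>15,.0f}  net={net_value(n):>15,.0f}")
    best = max(sizes, key=net_value)
    print(f"Best size among those sampled: {best:,}")

With these made-up exponents (impact growing like n^1.1 and expected harm like n^1.5), the net value peaks somewhere around ten million people; nudging the exponents or the scandal rate moves that peak by orders of magnitude, which mirrors the point above that the answer is very sensitive to plausible parameter choices.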