EA - Project ideas: Epistemics by Lukas Finnveden
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project ideas: Epistemics, published by Lukas Finnveden on January 4, 2024 on The Effective Altruism Forum.

This is part of a series of lists of projects. The unifying theme is that the projects are not targeted at solving alignment or engineered pandemics but still targeted at worlds where transformative AI is coming in the next 10 years or so. See here for the introductory post.

If AI capabilities keep improving, AI could soon play a huge role in our epistemic landscape. I think we have an opportunity to affect how it's used: increasing the probability that we get great epistemic assistance and decreasing the extent to which AI is used to persuade people of false beliefs.

Before I start listing projects, I'll discuss:
- Why AI could matter a lot for epistemics. (Both positively and negatively.)
- Why working on this could be urgent. (And not something we should just defer to the future.) Here, I'll separately discuss:
  - That it's important for epistemics to be great in the near term (and not just in the long run) to help us deal with all the tricky issues that will arise as AI changes the world.
  - That there may be path-dependencies that affect humanity's long-run epistemics.

Why AI matters for epistemics

On the positive side, here are three ways AI could substantially increase our ability to learn and agree on what's true.

Truth-seeking motivations. We could be far more confident that AI systems are motivated to learn and honestly report what's true than is typical for humans. (Though in some cases, this will require significant progress on alignment.) Such confidence would make it much easier and more reliable for people to outsource investigations of difficult questions.

Cheaper and more competent investigations. Advanced AI would make high-quality cognitive labor much cheaper, thereby enabling much more thorough and detailed investigations of important topics. Today, society has some ability to converge on questions with overwhelming evidence. AI could generate such overwhelming evidence for much more difficult topics.

Iteration and validation. It will be much easier to control what sort of information AI has and hasn't seen. (Compared to the difficulty of controlling what information humans have and haven't seen.) This will allow us to run systematic experiments on whether AIs are good at inferring the right answers to questions that they've never seen the answer to.

For one, this will give supporting evidence to the above two bullet points. If AI systems systematically get the right answer to previously unseen questions, that indicates that they are indeed honestly reporting what's true without significant bias and that their extensive investigations are good at guiding them toward the truth. In addition, on questions where overwhelming evidence isn't available, it may let us experimentally establish what intuitions and heuristics are best at predicting the right answer.[1]
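To make that experimental setup concrete, here is a minimal sketch of how such a held-out evaluation might be scored. None of this is from the post: `ask_model`, the example questions, and the metrics are illustrative assumptions standing in for a real model interface and a real bank of questions with verified answers.

```python
# Minimal sketch of the "iteration and validation" idea: score an AI system on
# questions whose verified answers were withheld from it.
# `ask_model` is a hypothetical stand-in for the system under evaluation.

def ask_model(question: str) -> tuple[bool, float]:
    """Hypothetical model call: returns (yes/no answer, confidence in it).
    Here it's a dummy baseline that always answers True at 50% confidence."""
    return True, 0.5

# Held-out items: questions the model has never seen answered, paired with
# ground truth known to the evaluators but not the model. (Illustrative.)
HELD_OUT = [
    ("Did study X replicate?", True),
    ("Did forecast Y resolve correctly?", False),
]

def evaluate(items):
    correct = 0
    brier = 0.0
    for question, truth in items:
        answer, confidence = ask_model(question)
        correct += int(answer == truth)
        # Brier score: squared gap between the implied probability of "True"
        # and the actual outcome. Lower is better; 0.25 is chance level.
        p_true = confidence if answer else 1.0 - confidence
        brier += (p_true - float(truth)) ** 2
    n = len(items)
    return {"accuracy": correct / n, "brier": brier / n}

print(evaluate(HELD_OUT))  # e.g. {'accuracy': 0.5, 'brier': 0.25}
```

The post's further suggestion, testing which intuitions and heuristics best predict held-out answers, would amount to running the same loop over variants of the system and comparing their scores.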
On the negative side, here are three ways AI could reduce the degree to which people have accurate beliefs.

Super-human persuasion. If AI capabilities keep increasing, I expect AI to become significantly better than humans at persuasion. Notably, on top of high general cognitive capabilities, AI could have vastly more experience with conversation and persuasion than any human has ever had. (Via being deployed to speak with people across the world and being trained on all that data.) With very high persuasion capabilities, people's beliefs might (at least directionally) depend less on what's true and more on what AI systems' controllers want people to believe.

Possibility of lock-in. I think it's likely that people will adopt AI personal assistants for a great number of tasks, including helping them select and filter the information they get exposed to. While this could be crucial for defending aga...