EA - Hiring: hacks + pitfalls for candidate evaluation by Cait Lion
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Hiring: hacks + pitfalls for candidate evaluation, published by Cait Lion on September 14, 2023 on The Effective Altruism Forum.

This post collects some hacks - cheap things that work well - and common pitfalls I've seen in my experience hiring people for CEA.

Hacks

Sharing customised-generic feedback

Rejected candidates often, very reasonably, want feedback. Sometimes you don't have the capacity to tailor feedback to each candidate, particularly at earlier stages of the process. If you have some brief criteria describing what stronger applications or stronger trial task submissions should look like, and if those criteria are borne out in your decisions about who to progress, I suggest writing out a quick description of the abilities, traits, or competencies the successful candidates tended to demonstrate. This might be as quick as: "Candidates who progressed to the next stage tended to demonstrate strong attention to detail in the trial task, a clear and direct writing style, and professional experience in operations." This shouldn't take more than a few minutes to generate, and my impression is that it's a significant improvement for candidates over a fully generic response.

Consider borrowing assessment materials

Not sure how to test a trait for a given role? Other aligned organisations might have already created evaluation materials tailored to the competencies you want to evaluate. If so, that organisation might let you use their trial task for your recruitment round.

Ideally, you can do this in a way that's win-win for both orgs (e.g. Org A borrows a trial task from Org B; Org A then asks its candidates to agree that, should they ever apply to Org B, Org A will send over the results of the assessment). I have done this in the past and it worked out well.

Beta test your trial tasks!

I'm a huge proponent of beta testing new evaluation materials. Testing your materials before sending them to candidates can save you a world of frustration down the road by helping you catch unclear instructions, inappropriate time limits, and a whole host of other pitfalls.

Mistakes

Taken from our internal hiring resources, here are some mistakes we've made in the past with our evaluation materials.

Trial tasks or tests that are laborious to grade

Some types of work tests take a long time to grade effectively. Possible issues include a large amount to read, multiple sources or links to check for information, or a complicated, difficult-to-apply rubric. Every extra minute a task takes to grade is multiplied by the number of submissions. The ideal work sample test is quick and clear to grade.

Possible solutions:
- Think backwards from grading when you create the task.
- Where appropriate, be willing to sacrifice some assessment accuracy for grading speed.
- Beta test!

Tasks that require multiple interactions from the grader

Some versions of trial tasks we used in the past had the candidate submit something to which the grader had to respond before the candidate could complete the next step. This turned out to be inefficient and frustrating. Solution: avoid this, particularly at early stages.

Too broad

Some work tests look for generalist ability but waste the opportunity to test a job-specific skill. The more specific you can make the task to the role, the more information you get. If fast, clear email drafting is critical, test that instead of generically testing communication skill.

Too hard / too easy

If you don't feel like anyone is giving you a reasonable performance on your task, you may have made it too hard. A common driver of this failure mode is assuming context the candidate won't have, or underrating the advantage conferred by context possessed by your staff but not by (most?) of your candidates.

Ceiling effects are perhaps a larger problem. If everyone is doing well...