Date: Wednesday, July 2, 2025
Hi! I’m Staci Haag, an evaluator who has worked in international development and foreign assistance. In this post, I reflect on a familiar challenge: how to create evaluation products that are actually used.
Being meaningful and useful as an evaluator isn’t just about asking interesting questions. Most evaluators can produce a well-researched product. But they aren’t often trained to create a well-used one.
The question is—why?
As an evaluator, I don’t want to hear only that something was well-written or that people found it interesting. Evaluations should be all those things as a matter of practice. What I want to hear, and work to create, is a product that people see as useful. That when they read it, they say things like…
But how do we get to this concept of usefulness?
The simple answer is that the report shouldn’t be the end of the evaluation. It should be the midpoint of a longer process that brings stakeholders into discussions about the findings and recommendations, and about which ones to prioritize for implementation.
The longer answer is: lead with the end in mind. Start the evaluation planning process with a robust discussion of how clients, evaluators, and other stakeholders will want to use the results and data in the evaluation report. Then reconfirm alignment with those goals throughout the evaluation process.
Start with a facilitated collaboration session for clients and relevant stakeholders, with the goal of leaving the session with a set of agreed-upon expectations for how they plan to use the data, findings, and recommendations—as well as who else may have a use for them. Follow up with a readout that includes concrete actions and deadlines for everyone in attendance. Engage with clients to understand their time constraints and their interest in engaging. Meet them where they are, and don’t be afraid to let them into the evaluation process a little more deeply than you may be used to.
Periodically check in with the client to confirm that the direction still meets those intended end-use needs. Provide updates, including drafts of the questions you’ll be asking and highlights from desk research. Don’t be afraid to share early trends and initial insights as they emerge, especially if they will be surprising or the client may need time to process them. Schedule a discussion BEFORE you submit the report and use it to receive early feedback on the findings and recommendations, to ensure that what you’re proposing is feasible to implement.
Hold a formal close-out after everyone has had time to read the evaluation report, to discuss the findings and who else needs to see the report. Identify additional, smaller products such as one-page infographics. Once the longer research product is created, these smaller products can greatly extend the shelf life and usability of the evaluation, and they should be relatively easy to produce.
There isn’t a one-size-fits-all approach—but there are a lot of good ideas out there. One resource I have gone back to over and over is Michael Quinn Patton’s “.” It provides a roadmap for a facilitated process that encourages collaboration and engagement.
The American Evaluation Association is hosting Gov’t Eval TIG Week with our colleagues in the Government Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our Gov’t Eval TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association and/or any/all contributors to this site.