This is Part 3 of 3 in this series; my previous two posts (Part 1 and Part 2) began to describe our process of selecting grantee metrics.
Finally, once we had our draft set of grantee metrics, we shared them with our client, advisory board, other experts, and grantees for their feedback. Grantee feedback is especially important: we wanted to make sure that our grantees would be comfortable collecting the data, and we asked them to flag any items that would be difficult to collect. We also wanted to confirm that grantees realistically believed they could achieve impacts within the grant's time frame. For example, if the grant period is one year and a given outcome takes at least five years to observe, then it makes no sense at this point to collect that outcome data.
We also allowed grantees to omit certain metrics that were too cumbersome to collect, as long as we discussed and understood why the information would be too challenging for them to gather. In those cases, if it made sense, we asked the grantee to instead provide proxy data items that they could collect easily.
Once we had our final set of metrics, we created a visually appealing one-page dashboard, which we refresh annually to demonstrate the fund's impact to our stakeholders.
Taking a step back to consider reporting and metrics more generally, I believe we need to think carefully about why we are collecting this grantee information and whether we really need it. If it's mainly to hold our grantees accountable for doing their work, would a site visit suffice instead? Or a phone call? I think a lot about the Whitman Institute's trust-based philanthropy model.
If there’s a need to present metrics to our board of directors, can we streamline the number of metrics that are really needed? And have an honest conversation with the board about the burdens placed on grantees?
As an evaluator, I have seen people go a bit overboard with data and metrics; in some cases, more is not necessarily better! We might collect 100 metrics but only really use 5 of them to inform our decision making. Or, to think about it another way: if we make a $20K grant and it costs $10K to collect the desired data, would society have been better off if we had simply used that additional $10K to serve more people?
If we apply an equity lens to monitoring and reporting, we need to acknowledge that data is NOT free, and that if we want it, we should compensate our grantees fairly for it. For example, as mentioned in a previous post, I found out that one of my grantees had spent 40+ hours collecting data for a $25K grant (for a grant that size, data collection should take more on the order of 10 hours or less), and I ended up giving them another $2K to compensate them for their time.
We invite you to share your thoughts and ideas in the comments section below.
For links to our resources including our checklist of recommendations to incorporate DEI in grantmaking practice, our suggested dashboard of DEI metrics to track, our Stanford Social Innovation Review article, and a video presentation of our work, please go to our homepage.