Opinion

Closing the feedback loop: what is the value of perception surveys?

Elias Sagmeister • 13 December 2018

Source: Isabella Leyh, Ground Truth Solutions

Closing the feedback loop is notoriously challenging, as the UNHCR’s Innovation Service rightly concludes in a recent post. That is why at Ground Truth Solutions we have made it our mission to help aid agencies do better: to use feedback from affected people to improve the relevance and quality of aid, and to foster a meaningful dialogue with people affected by crisis. Perception surveys are a vital tool to this end, which is why we’d like to offer our two cents’ worth in response to this recent commentary.

Nobody said it was easy…

It is great to see UNHCR and others debate the difficulties of being responsive to people’s feedback and – more importantly – how a combination of existing tools can inform more responsive humanitarian action. Based on our experience in some 20 countries, we agree there is no quick-fix, one-size-fits-all, or silver bullet solution – as if such things were possible in the complex situations humanitarians navigate. We also agree that no agency should only do perception surveys, and that third-party surveys cannot replace common sense, good programming practice, or a range of valuable community engagement approaches.

The challenge of listening and reacting to feedback remains a big one, as our evidence from over 10,000 interviews across multiple countries confirms. By and large, people served by major aid agencies say their opinions are not taken into account. That is why “listening to everybody and providing a timely response” – rather than relying on third-party perception surveys – is something we would applaud. Ground Truth Solutions, along with many others, will celebrate the day aid agencies succeed in listening and reacting to each and every individual affected by crisis.

Until then, aid agencies seek out our expertise, along with the advice of other specialised actors, to help them become better listeners and identify actionable findings to optimise their programmes. We are never called in to replace what our partners are already doing, but to complement and strengthen their understanding of people’s views with expertise and an independent perspective. Besides donors and the agencies we work with, affected people themselves also appreciate the chance to talk to someone not directly involved with the organisation serving them. This explains much of the traction that perception research has gained in the humanitarian sector.

Because no one can talk to everyone (not even via recording devices or chatbots), sampling for a smaller number of face-to-face interviews is our method of choice. We design a tailored strategy for each project with the aim of getting a representative sample of the entire population of interest. One approach is to sample several communities or clusters to represent that population and then sample randomly inside those communities. We have found that this is a good way to sample in logistically challenging contexts. However, cluster sampling has its limitations: if the communities you choose to sample are not representative of the full population, or if all communities in the population are very different from each other, cluster sampling can overlook important information. That’s why we take great care to consider factors that might influence the experience of affected communities – are some communities, for example, living in urban areas and others in rural ones? Do some people live in camps, whereas others do not? Does the ethnic make-up vary across camps?
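The two-stage approach described above – first sample clusters, then sample respondents within them – can be sketched in a few lines of code. This is a minimal illustration, not our actual sampling tooling; the cluster names and population sizes are invented for the example.

```python
import random

def two_stage_cluster_sample(clusters, n_clusters, n_per_cluster, seed=42):
    """Stage 1: randomly select clusters from the population.
    Stage 2: randomly sample respondents within each chosen cluster.

    clusters: dict mapping cluster name -> list of respondent IDs.
    Returns a dict of cluster name -> sampled respondent IDs.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    sample = {}
    for name in chosen:
        members = clusters[name]
        sample[name] = rng.sample(members, min(n_per_cluster, len(members)))
    return sample

# Hypothetical population, stratified along the factors mentioned above
# (urban vs rural, camp vs non-camp)
population = {
    "urban_camp": [f"u{i}" for i in range(150)],
    "rural_camp": [f"r{i}" for i in range(60)],
    "urban_host_community": [f"h{i}" for i in range(200)],
    "rural_village": [f"v{i}" for i in range(40)],
}

sample = two_stage_cluster_sample(population, n_clusters=2, n_per_cluster=10)
total = sum(len(ids) for ids in sample.values())
print(total)  # 20 respondents, drawn from 2 clusters
```

The sketch also makes the limitation visible: if the two chosen clusters happen to be unrepresentative of the other two, nothing in the procedure itself corrects for that – which is why the stratifying factors have to be considered before the clusters are drawn.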

Sure, bigger samples would be better. But fortunately, we typically get to do multiple rounds of data collection, and a growing number of organisations collect their own data using similar perceptual indicators. Third-party perception data can then serve as a benchmark against which other actors compare their performance. Whatever the design chosen, no one should rely on a single perspective alone. This is especially important when influential local representatives act as gatekeepers to their communities rather than as neutral messengers – and could well be part of the problem.
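Benchmarking of this kind amounts to comparing an agency’s mean score on a shared perceptual indicator against the third-party mean. A minimal sketch, with entirely hypothetical responses on a 1–5 scale:

```python
from statistics import mean

def benchmark_gap(agency_scores, benchmark_scores):
    """Difference between an agency's mean score and the benchmark mean,
    both measured on the same 1-5 perceptual indicator.
    Negative values mean the agency scores below the benchmark."""
    return round(mean(agency_scores) - mean(benchmark_scores), 2)

# Hypothetical 1-5 responses to an indicator such as
# "Do you feel your opinions are taken into account?"
agency_scores = [2, 3, 2, 4, 3, 2, 3]
benchmark_scores = [3, 3, 4, 2, 3, 4, 3]

gap = benchmark_gap(agency_scores, benchmark_scores)
print(gap)
```

In practice such comparisons should account for sample sizes and uncertainty, but even this simple difference makes it easy to see where an agency sits relative to its peers on a shared indicator.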

But isn’t perception research too limited to be useful?

Surveys will never provide all the answers, and topline findings almost always require follow-up research or additional, more qualitative assessments to be truly useful. In fact, that is the point of most surveys: whether the follow-up happens through focus group discussions, user interviews, or other participatory approaches, a survey should be treated as the beginning of a dialogue, not a substitute for one.

To say that perception surveys are “inherently extractive,” however, is too simplistic a conclusion. Well-trained enumerators can provide information to respondents and make individual referrals where appropriate. At a higher level, survey data can show what is missing – and very often things can be fixed quite quickly. In a recent post-hurricane survey in the Caribbean, for example, most people said they lacked information on how to get support to rebuild their homes. We sent an SMS blast to respondents with the local hotline number for the International Organisation for Migration and shared more information about available support. The hotline registered a spike in calls as a result, with many callers saying they had heard about it through the survey.

Finally, and most importantly, a survey in itself is no more extractive than any other list of questions posed to anyone – and using surveys to better understand and serve people doesn’t have to be extractive at all. In our work with an INGO in Kenya, results were shared with all respondents via SMS. In Lebanon, those surveyed were kept involved via mobile communications. In some countries, Facebook has been used to share results and continue the conversation, whereas in other contexts agencies themselves organise community meetings to discuss the findings. Myriad ways to say “we hear you” exist, but humanitarian field staff are typically best placed to continue the dialogue. Whatever the situation, no single approach to closing the loop is sufficient on its own.

So, to summarise using our typical 5-point scale:

Should aid agencies throw out the listening they are already doing and rely on third-party perception surveys instead?

Should we base our practice on lessons learned about different feedback mechanisms to combine, experiment, and gradually improve our work and become more responsive to the people we aim to serve?

And should every expert in the field continue to challenge perception surveys, just like any other approach, and critically assess how and where they can offer the most value? You guessed it...

As is the case with almost every survey, we look forward to the ensuing dialogue on this topic. 
