Use case

While questions may have restricted scopes (e.g., Supervisor, Hidden), validations cannot; instead, validations are visible to all user roles. There are several reasons why this may not be entirely desirable:
- Concerns that valid answers are not correct ones. Survey managers may be concerned that interviewers react to validations by changing data so that validation errors disappear; in other words, interviewers have an incentive to fabricate valid data.
- Workflows may require supervisors to check issues not visible to interviewers. Many survey protocols require that field supervisors investigate survey responses flagged by a field-based validation system whose results are visible only to supervisors and HQ (e.g., see the process described in sections 5.1.2 and 5.1.3 for Feed the Future here, a protocol shared with DHS and MICS surveys).
More changes may be required, but here is a compilation of suggested ones:
- Add a `scope` field to validations in Designer, with `scope == Interviewer` as the default
- Evaluate validations depending on scope (e.g., Supervisor-scope validations on Supervisor; HQ-scope validations on HQ; Interviewer-scope validations on Interviewer, Supervisor, and HQ)
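The evaluation rule above can be sketched as follows. This is a hypothetical illustration of the proposed behavior, not an existing SuSo API: the `Scope` enum and `visible_scopes` function are invented names for this sketch.

```python
from enum import Enum

# Hypothetical role/scope levels for the proposed `scope` field;
# SuSo does not currently expose these.
class Scope(Enum):
    INTERVIEWER = 1
    SUPERVISOR = 2
    HQ = 3

def visible_scopes(role: Scope) -> set:
    """Return the validation scopes evaluated for a given role.

    A validation fires for a role when the validation's scope is at
    or below the role's level: Interviewer-scope validations fire for
    everyone, while HQ-scope validations fire only on HQ.
    """
    return {s for s in Scope if s.value <= role.value}

# A Supervisor sees Interviewer- and Supervisor-scope validations,
# but not HQ-scope ones.
assert visible_scopes(Scope.SUPERVISOR) == {Scope.INTERVIEWER, Scope.SUPERVISOR}
```

With this rule, the default `scope == Interviewer` preserves today's behavior, since Interviewer-scope validations remain visible to all roles.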
There are three incomplete work-arounds that I see:
- Add validations to supervisor questions that flag issues. This restricts the visibility of the validation to supervisor roles and higher. It requires adding one or more supervisor questions that trigger the validation and, through their message, indicate the location and nature of the problem.
- Develop a validation system outside of SuSo that identifies problems. This achieves the desired workflow, but is only practically available to HQ. This solution requires programming a set of scripts, user actions to run the system's code, and a computing environment where this system can be run. Practically speaking, this can only be done on a laptop at HQ with good internet access. Consequently, this work-around does not satisfy the needs of a field supervisor.
- Monitor for suspected fabrication of valid data. Using the paradata and user-written scripts, one could look for either questions with frequent answer changes or interviewers with frequent answer changes. The user would need to know which questions are apt to be changed to satisfy validation conditions.
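The third work-around could be sketched roughly as below. This is a minimal illustration, not a ready-made monitoring script: it assumes the paradata has already been read into dictionaries with `event`, `responsible`, and `parameters` keys, and that `parameters` begins with the variable name followed by `||` (adjust these assumed names to the actual paradata export).

```python
from collections import Counter

def answer_change_counts(paradata_rows):
    """Count AnswerSet events per question and per interviewer.

    Each row is assumed to be a dict with keys "event", "responsible",
    and "parameters" (variable name first, "||"-delimited). Questions
    or interviewers with unusually high counts are candidates for
    review, since repeated answers to the same question may indicate
    revision to clear a validation error.
    """
    by_question = Counter()
    by_interviewer = Counter()
    for row in paradata_rows:
        if row.get("event") == "AnswerSet":
            variable = row.get("parameters", "").split("||")[0]
            by_question[variable] += 1
            by_interviewer[row.get("responsible", "")] += 1
    return by_question, by_interviewer

# Hypothetical rows: the same interviewer answers "age" twice.
rows = [
    {"event": "AnswerSet", "responsible": "int1", "parameters": "age||25"},
    {"event": "AnswerSet", "responsible": "int1", "parameters": "age||26"},
    {"event": "Completed", "responsible": "int1", "parameters": ""},
]
by_q, by_i = answer_change_counts(rows)
# by_q["age"] == 2 and by_i["int1"] == 2
```

As the work-around notes, raw counts alone are not conclusive: the analyst still needs to know which questions carry validations that interviewers might be tempted to satisfy by revising answers.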