One way to monitor incoming interviews is to export all interview data and then analyze it.
This can be time consuming for larger surveys (not to mention censuses).
With GraphQL this becomes a much faster and leaner operation. There are quite a few things you can access with the “interviews” query, such as error counts and unanswered questions. Also, responsibleName and the identifying variables let you analyze things by enumerator, province, district, etc.
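To make this concrete, here is a minimal sketch of such a query posted from Python. The field names (errorsCount, notAnsweredCount, responsibleName, identifyingData) follow the Survey Solutions GraphQL schema as I understand it, and the server URL and workspace name are placeholders; verify everything against your own server’s GraphQL playground before relying on it:

```python
# Sketch: fetch basic monitoring fields for incoming interviews via the
# Survey Solutions GraphQL endpoint (typically https://<server>/graphql).
# Field and argument names are my reading of the schema -- please verify.
import json
import urllib.request

INTERVIEWS_QUERY = """
query {
  interviews(workspace: "primary", take: 100) {
    nodes {
      id
      key
      status
      responsibleName
      errorsCount
      notAnsweredCount
      identifyingData { entity { variable } value }
    }
  }
}
"""

def fetch_interviews(server: str) -> dict:
    """POST the query to the GraphQL endpoint and return the parsed JSON.
    Authentication (e.g. basic auth with an API user) must be added to the
    request headers according to your server's configuration."""
    req = urllib.request.Request(
        f"{server}/graphql",
        data=json.dumps({"query": INTERVIEWS_QUERY}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```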
This is very nice, but it would be much more powerful if you could also retrieve selected (non-identifying) variables, such as GPS readings or the interview result (conducted or rejected, and why rejected).
For example, enumerators with many rejected interviews should be detected early: they may be skipping distant households out of laziness, or lack the politeness or persuasiveness to convince people to be interviewed.
Enumerators with many identical GPS readings might be working from home or some other location which isn’t the place of interview.
So my suggestion would be to include a node “interviewData” in “interviews”, analogous to the existing “identifyingData” node, which would let you request selected variables.
I tried out exposed variables when they were announced, but I did not (and still don’t) see any reference to them in GraphQL queries.
In the GraphQL “interviews” query I don’t see a node for returning exposed variables, and “identifyingData” does not return them.
I’m sorry to insist on this topic.
If @vitalii’s suggestion regarding exposed variables works, this would be a very powerful way to access some key variables for monitoring incoming interviews.
I have exposed some variables on the server and can use them interactively for dynamic filtering.
However, I cannot find a way to make the “interviews” GraphQL query return them.
If anyone from the developer team has information how to do this, please let us know.
If exposed variables cannot be returned, I would like to suggest improving the “interviews” query to this effect.
This will return interviews (key and id fields) for ANY questionnaire that satisfies the condition that there is an exposed variable called “name” with the value “Jane Doe”. To narrow down the filter, you may want to add other conditions, such as workspace or questionnaire id:
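A sketch of how such a narrowed filter might look. The exact shape of the “where” filter (the and/some/eq operators and the way variable conditions are expressed) is my best reconstruction and should be checked against your server’s schema; the questionnaire id is a placeholder:

```python
# Sketch: filter interviews by a variable condition AND a questionnaire id.
# The "where" filter syntax below is an assumption -- verify it in the
# GraphQL playground of your server before use.
FILTERED_QUERY = """
query {
  interviews(
    workspace: "primary"
    where: {
      and: [
        { questionnaireId: { eq: "11111111-2222-3333-4444-555555555555" } }
        { identifyingData: {
            some: {
              entity: { variable: { eq: "name" } }
              value: { eq: "Jane Doe" }
            }
        } }
      ]
    }
  ) {
    nodes { id key }
  }
}
"""
```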
Better late than never
Thank you, this is useful information; however, what I am looking for is to receive all identifying and exposed variables (names and values), not to use them for filtering.
This query returns the identifying variables for all interviews:
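A sketch of a query along those lines, assuming the “identifyingData” node exposes the variable name under entity and the answer under value (verify the field names against your server’s schema):

```python
# Sketch: return the identifying variables (name and value) for all
# interviews in a workspace. Field names are assumptions to verify.
IDENTIFYING_QUERY = """
query {
  interviews(workspace: "primary") {
    nodes {
      key
      identifyingData {
        entity { variable }
        value
      }
    }
  }
}
"""
```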
Ah, indeed it would be great to have GraphQL-style access to the whole interview data, so that one could pick not only exposed but any variable data… have to admit that this is also on my wish list
Until then, a somewhat less elegant solution is to loop over the interview ids and call the ‘old’ REST-based endpoint at /api/v1/interviews/[interview_id], which will give you data for (almost) all questions …
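That loop can be sketched as follows. The server URL is a placeholder and the basic-auth scheme assumes an API user account; adjust both to your installation:

```python
# Sketch: pull each interview's answers from the 'old' REST endpoint
# /api/v1/interviews/{id}. SERVER and the credentials are placeholders.
import base64
import json
import urllib.request

SERVER = "https://demo.mysurvey.solutions"  # hypothetical server URL

def get_interview(interview_id: str, user: str, password: str) -> dict:
    """Fetch one interview's data, authenticating with HTTP basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{SERVER}/api/v1/interviews/{interview_id}",
        headers={"Authorization": f"Basic {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The interview ids would come from the GraphQL "interviews" query
# discussed earlier in the thread:
# for iid in interview_ids:
#     data = get_interview(iid, "apiuser", "secret")
```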
@klaus, I wanted to try how this idea would work and put together a small experiment. Before getting into live data access issues (currently you still have to trigger export generation) I wanted to see whether such an API would indeed be useful and, if so, what other features (filtering, etc.) would be needed or possible…
@zurab1, yes, this would be the holy grail of monitoring! Filtering as existing for the interviews query (including identifying and exposed variables) would be perfect.
I’m not sure about the experiment; I have not set it up, as I have never used Docker before. Is this the way you plan to implement the final query, or just a way of playing with it for now?
In my opinion, the ideal solution would be to extend the existing interviews query.
I am surprised that no one else is chiming in on this topic. Several people mention in other posts that they are using the API for survey monitoring…
This (experiment) is a separate application that accesses the Survey Solutions database and serves the data through a different API. It is written in Python, and for simplicity I packaged it as a Docker image. So, if you prefer/have Python, you could install and run it directly instead of using Docker…
I see pros and cons both ways, having it as a separate app vs. integrating it into the core, but until we’re sure that the approach (accessing all data in this form) is feasible, I wouldn’t consider any integration, of course. My goal in sharing it with you and any other alpha-testers was to see how it works with your data/workflow and whether you’re able to write scripts and extract data as efficiently as we hoped.
I understand. Okay, I will find out how to load the Docker image and test it on current survey data.
By looking at your examples I can already say: yes, this is what I was looking for. It remains to compare the response time with the time to create and download export files. Not filling up the server with hundreds of export files will alone be a big plus.
I’ll provide feedback as soon as I have done some testing.
Thank you for already implementing this functionality experimentally.