Do you have your variable URBANC inside a roster?
Offset is the offset of the timezone of event origin relative to UTC. This should not be a culprit.
Looks like a bug.
Please provide the information to the support email based on the GREEN checklist:
No, URBANC is calculated at the very start of the interview, out of any roster.
I just sent you the green checklist report. Let me know if anything is missing and if there is anything I can do to solve this issue. Thanks a lot.
Sorry for the multiple messages. Could the issue be related to the server version? Our server currently runs 19.08 but a new server version seems available (19.08.1). Please ignore if this does not make sense.
Thank you to everyone who contributed descriptions of symptoms and helped trace the events regarding the above issue.
The problem has been investigated and was traced to changes in the external library code that we are using.
Our developers are working on a fix for this issue.
At this time there is nothing that the affected users can do. However, if your cloud server time is expiring, please submit a request to extend the server. You will need to re-export the data with the hotfixed version (see #5 below).
The data entered by the enumerators is not lost and should reappear (selection restored) for the supervisors/HQ users once the hotfix is applied. Correspondingly, it will also appear in the exported data.
Any questionnaire using randomization will need to be re-exported after the hotfix in order to have the correct state and the correct values of the random value in the sssys_irnd variable.
This is good news indeed. Thanks for the clarifications. Would it be possible to keep us informed about when the hotfixed version is released? Our data collection is still ongoing, and for now I have told our QC team to ignore any issue related to the questions where randomization is involved (simply because they cannot know whether a discrepancy comes from the bug or from an actual interviewer’s mistake). Thanks again.
Version 19.08.2, which is currently being pushed to the servers, will prevent the issue with the random value being evaluated differently on the tablet and on the server from appearing again.
New interviews collected with this version should not suffer from this problem.
Older interviews collected with the previous versions still suffer from the same problem. With 19.08.2 the users can:
a) supervisor changes the language of the interview, then reverts it back to the previously set language (if the questionnaire is multilingual);
b) supervisor/HQ answers a supervisor question (if the questionnaire contains one);
c) supervisor rejects the interview back to the tablet; the interviewer synchronizes and completes it again on the tablet (all data survives, just click the Complete button again), then resubmits the completed interview (synchronizes).
This doesn’t deliver bullet #4 here yet. We are now searching for a fully automatic solution that automates c) without involving the enumerators, for surveys where there are no supervisor questions or multiple languages.
Will keep you posted here.
Ok Sergiy, that is helpful for now, thank you. I am testing your solution c) right now so we do not lose too much time on QC while waiting for the automated fix. But I am not sure I understand the process well: to get the data back to normal, do we need to apply the three steps a) to c) in succession? In our situation, we have no supervisor question, but we do have multiple languages, so the process I plan to implement is:
- reject the questionnaire from HQ back to the supervisor;
- supervisor receives the rejected case and changes the language of the interview (is this step necessary? Interviews were conducted in Khmer -> should the supervisor then switch to English, and then back to Khmer?);
- supervisor rejects the case back to interviewer;
- interviewer synchronizes, receives the case and only needs to complete it again, and synchronizes once more to send it back to supervisor;
- supervisor checks if the data is now correct.
Does that look correct to you?
A, B, and C above are alternatives: any one of them should achieve the same result. A and B are more attractive, since they can be undertaken centrally, e.g. by the supervisors, without having to coordinate with the interviewers or consume mobile network traffic. Yet they require particular features (a supervisor question for A, a multilingual questionnaire for B), which may not be available, hence C is the universal recipe.
In your case the HQ or supervisor can switch the language for an affected interview (one collected with an earlier version than the hotfixed 19.08.2). That should immediately reveal the data exactly as the interviewer saw it (the only difference being the language). Switching the language back would cancel out the language difference as well and retain the correct state of the interview.
There is hardly any visual indication of which interviews have already been recovered that way and which haven’t. So you can either track them somewhere, attach a comment, or use another method that has worked for you.
For questionnaires with random selection of a respondent for an extended interview, the situation is usually simpler: you will likely see a large number of unanswered questions, which is a pretty obvious signal that the interview has not been recovered yet.
I believe the language approach should work regardless of whether you are an HQ or supervisor user, as long as you can open the interview to view its content.
We tried solution C yesterday and it worked well. But after reading your message this morning, I tried solution B (language switch) from admin, HQ, and supervisor accounts, and it does not seem to work for any of these account types; I am not sure why. I’ll stick to solution C then.
No problem for me to identify the interviews with random number issues, as I have a variable storing the random value (called “RANDOM”, which stores the random value after it changed) and other variables depending on this random value. In the Stata file, “sssys_irnd” corresponds to the initial random value (assigned to the case when it was created, before the change). Therefore, I can easily identify in the exported Stata file which cases had their dependent variables affected by the random value change, reject them back to the interviewers, and implement solution C as you proposed yesterday.
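For reference, this kind of mismatch check can be sketched in Python with pandas. The column names RANDOM and sssys_irnd come from our questionnaire and the export; the interview keys and values below are made-up stand-ins for the real Stata file (in practice you would load it with pd.read_stata):

```python
import pandas as pd

# Hypothetical mini-export standing in for the real Stata file
# (in practice: df = pd.read_stata("export.dta")).
df = pd.DataFrame({
    "interview__key": ["11-11", "22-22", "33-33"],
    "sssys_irnd": [0.12, 0.55, 0.90],   # initial random value, as exported
    "RANDOM":     [0.12, 0.47, 0.90],   # random value stored by the questionnaire
})

# Interviews whose stored random value no longer matches the exported one
# are the candidates for rejecting back to the interviewers (solution C).
affected = df.loc[df["RANDOM"].ne(df["sssys_irnd"]), "interview__key"].tolist()
print(affected)  # → ['22-22']
```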
All of our interviewers are using v19.08.2 now, and I hope that this issue will not occur again. I will run regular checks to monitor this (by exporting and checking the data as described above) and will get back to you if we encounter any further issues.
I will check what may be the reason that B didn’t work. Benjamin, please send to the support email the name of the server and the interview key where strategy B didn’t work when you tried it.
The approach described by Benjamin is valid. After the hotfix, the variable sssys_irnd will be exported correctly for all interviews. If the questionnaire contains a variable that preserved the original value returned by Quest.IRnd() before it was distorted on the server, the two will differ, and this will point to the necessity of rejecting and resubmitting that interview. Unfortunately, not all user questionnaires/surveys store such a value, hence we are still in the process of determining a mechanical procedure that would recalculate all the interviews without having to involve the interviewers (one that can be executed solely on the server).
I see that there is a new Interviewer app version available. Does it mean that the above issue will now be automatically fixed? Thanks,
When you check the data on your server, you should find the issue fixed in all new interviews and in all old interviews, except the ones rejected back to and received by the interviewers. The interviewers will find all the entered data in them, and once they resubmit them (they can still correct any other issues as usual), the supervisors will also see the data in its correct form.
Excellent, thanks a lot!
Status update: The issue should now be fixed in all the surveys on all the cloud servers that are managed by our team: https://*.mysurvey.solutions
Hi, I think there is an issue with this code: the Round() cuts the density of the marginal numbers (here 1 and 8) in half (see the mini simulation in the attached PDF). I think the code should be: (int)Math.Floor((9-1)*Quest.IRnd() + 1)
randomization.pdf (113.1 KB)
Could you please be more specific? What is the issue?
In other words, the code that was provided does not seem to select the numbers 1 to 8 with equal probability: 1 and 8 (the smallest and largest) get half the probability of the other numbers.
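For what it’s worth, here is a minimal Python sketch of that simulation, assuming the original formula was the Round variant, round((8-1)*IRnd() + 1), and using random.random() as a stand-in for Quest.IRnd():

```python
import math
import random
from collections import Counter

# Stand-in for Quest.IRnd(): uniform draws on [0, 1).
N = 200_000
rng = random.Random(123)
draws = [rng.random() for _ in range(N)]

# Round variant (assumed original): round((8-1)*r + 1) maps r in [0, 1/14)
# to 1 and r in [13/14, 1) to 8, so the two endpoints each get only half
# the probability (1/14) of the interior values 2..7 (1/7 each).
rounded = Counter(round((8 - 1) * r + 1) for r in draws)

# Floor variant (proposed fix): floor((9-1)*r + 1) assigns equal mass 1/8
# to each of the values 1..8.
floored = Counter(math.floor((9 - 1) * r + 1) for r in draws)

print({k: rounded[k] for k in sorted(rounded)})
print({k: floored[k] for k in sorted(floored)})
```

The printed counts make the halved endpoint density under Round, and the flat distribution under Floor, immediately visible.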
Cool. Where in the above the same probability was requested?