Description
In multiple quiz sessions over several days, not all participants see the same number of questions during the answering phase of a quiz:
- Some participants see no questions.
- Some see only a few questions (e.g. 2, 7, or 8).
- Others see all questions.
However:
- All participants were able to enter questions during the question phase.
- All questions were available and visible during curation (teacher view).
- The statistics view shows answers for the questions that were answered.
The issue seems to appear only from the second use of Recapp onwards in a given teaching setting. The first quiz in that setting works correctly in all phases for all participants; later quizzes show inconsistent numbers of visible questions in the answering phase.
A full reload of the quiz page does not restore missing questions for affected participants.
Expected behavior
During the answering phase of a quiz, all participants should see the same full set of curated questions.
Actual behavior
In some quizzes:
- Different participants see different numbers of questions (including zero).
- Participants can answer the questions they do see.
- The statistics page shows results for the answered questions, but some participants apparently never saw certain questions that others could see.
Additional context from classroom usage
- Approx. 20 participants were active in the affected quizzes (somewhat fewer in the third quiz).
- Devices were a mix of phones and laptops (more phones), so the issue does not seem tied to a specific device type.
- The three affected sessions used three separate quizzes (not the same quiz reused).
- In all three sessions, the teacher started the answering phase before any student joined. In the first quiz this worked fine; in later quizzes, some participants saw only a subset of questions.
- There was no phase switching or quiz restart before the problem occurred. When the problem appeared, students tried refreshing the page and re-logging in, but this did not resolve the missing questions.
- Affected participants saw different numbers of questions (e.g. 2, 7, …); it was not just their own questions, and there was no obvious pattern to which questions were missing.
- Across days, there was a large overlap of students (many of the same people participated in multiple quizzes), but not all participants were identical from day to day.
- The teacher suspects that the problem might occur mainly for students who have already used Recapp on a previous day, and that most students participated without logging in (anonymous usage via cookie/UUID).
1. Per-user answering state & actor/subscription flow – most likely
Recapp uses actors and subscriptions for the answering phase. The list of questions a participant sees is therefore not just a raw DB query, but the result of stateful backend logic plus the way the frontend subscribes to it.
The fact that:
- different participants see different numbers of questions, and
- a full page reload does not restore missing questions
suggests that some per-user or per-connection answering state in the backend/actors (or the subscription initialisation) can end up incomplete and then remain incomplete for that participant.
Possible failure modes under this hypothesis:
- A per-user/per-run actor or state store builds a question list once (or incrementally) and, for some participants, ends up with only a subset of curated questions.
- The frontend subscribes to an actor stream that only emits changes after subscription, without ensuring a full initial snapshot for every new connection/reconnection.
- On reload or rejoin, the client may re-attach to an existing actor/run with already truncated state instead of triggering a fresh initialisation.
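The second and third failure modes above can be illustrated with a minimal sketch. This is hypothetical code, not Recapp's actual implementation: a publish/subscribe hub that only forwards changes made *after* a client subscribes. A participant who attaches after some questions were already published never receives them, which would match the observed "subset of questions" symptom.

```python
# Hypothetical sketch of a delta-only subscription (assumed names, not Recapp code).
class QuestionHub:
    def __init__(self):
        self.questions = []    # full curated question list on the backend
        self.subscribers = []  # one callback per connected client

    def publish(self, question):
        self.questions.append(question)
        for cb in self.subscribers:
            cb(question)       # incremental update only

    def subscribe_delta_only(self, cb):
        # Suspected bug: no initial snapshot is sent on (re)subscription.
        self.subscribers.append(cb)

    def subscribe_with_snapshot(self, cb):
        # Safe variant: replay the current state before streaming deltas.
        for q in self.questions:
            cb(q)
        self.subscribers.append(cb)

hub = QuestionHub()
hub.publish("Q1")
hub.publish("Q2")

late_client = []                            # joins after Q1/Q2 were published
hub.subscribe_delta_only(late_client.append)
hub.publish("Q3")
print(late_client)                          # ['Q3'] — Q1 and Q2 are missing

fixed_client = []
hub.subscribe_with_snapshot(fixed_client.append)
print(fixed_client)                         # ['Q1', 'Q2', 'Q3']
```

Note that reloading the page does not help the delta-only client: every resubscription starts empty again, and previously published questions are never replayed.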
Things to check:
- Backend:
- Which actor(s) or services are responsible for the answering phase and per-user quiz runs.
- How they compute and store the set of questions for a participant (initial snapshot vs incremental updates).
- Whether per-user/per-run state is reused across phases or across quizzes.
- Frontend:
- How the answering view subscribes to the quiz/quiz-run actor (initial load + subscription lifecycle).
- Whether every (re)subscription reliably triggers a full “current state” snapshot, not just incremental updates.
- What happens when a user reloads the page or joins after the answering phase has already started.
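The "re-attach to truncated state" variant from the checklist can be sketched too. Again purely hypothetical names: a per-user quiz run that captures the question list once at creation and is then reused on every reload, so a participant who joined early keeps a stale subset forever.

```python
# Hypothetical sketch: per-user run state frozen at join time (assumed names).
CURATED = ["Q1", "Q2", "Q3", "Q4"]   # questions after curation

runs = {}  # backend-side per-user run state, keyed by user id

def join(user_id, visible_at_join):
    # Suspected bug: the run is created once with whatever questions were
    # visible at that moment and is never refreshed afterwards.
    if user_id not in runs:
        runs[user_id] = {"questions": list(visible_at_join)}
    return runs[user_id]

run = join("alice", CURATED[:2])  # alice joined while only 2 questions existed
run = join("alice", CURATED)      # page reload: re-attaches to the stale run
print(run["questions"])           # ['Q1', 'Q2'] — reload does not restore them
```

If something like this is happening, the observed behaviour (reload and re-login do not bring the missing questions back) follows directly, because the truncated state lives on the backend side of the connection.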
2. Cross-quiz or cross-session state not properly scoped to a quiz
If some state is keyed only by user/fingerprint (or another global key) and not by quiz ID, then state from one quiz might leak into another. The pattern “first quiz in a teaching setting works, later quizzes (new quizzes) have missing questions” fits a scenario where:
- The first quiz initialises some per-user state, and
- Later quizzes accidentally reuse that state instead of starting fresh.
This seems less likely than (1), since one would expect it to show up more consistently across settings, but it is still plausible.
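The keying problem described here is easy to demonstrate in isolation. The following is a hypothetical sketch (assumed names, not Recapp's schema): per-user state keyed only by fingerprint, so the second quiz silently reuses the run created for the first one.

```python
# Hypothetical sketch of a state key that omits the quiz ID (assumed names).
state = {}

def get_run_bad(fingerprint, quiz_id, questions):
    # Suspected bug: key omits quiz_id, so quiz B reuses quiz A's state.
    key = fingerprint
    if key not in state:
        state[key] = {"quiz": quiz_id, "questions": list(questions)}
    return state[key]

def get_run_good(fingerprint, quiz_id, questions):
    key = (fingerprint, quiz_id)  # state properly scoped per quiz
    if key not in state:
        state[key] = {"quiz": quiz_id, "questions": list(questions)}
    return state[key]

r1 = get_run_bad("uuid-1", "quizA", ["A1", "A2"])
r2 = get_run_bad("uuid-1", "quizB", ["B1", "B2", "B3"])
print(r2["questions"])  # ['A1', 'A2'] — quiz B shows quiz A's state

r3 = get_run_good("uuid-1", "quizB", ["B1", "B2", "B3"])
print(r3["questions"])  # ['B1', 'B2', 'B3']
```

This matches the "first quiz works, later quizzes are broken" pattern, because the first quiz is the one that creates the leaking state.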
Things to check:
- Any actors, collections, or caches that store state per user/fingerprint without including quizId (or equivalent) in their key.
- Any “current quiz/run” pointer on the user/fingerprint side that might not be cleared when leaving a quiz.
- Whether there is any global “answering session” concept that is not strictly tied to a single quiz ID.
3. Differences between logged-in and anonymous participants / cookie handling
Anonymous users are identified via a random UUID stored in a cookie (30-day vs session cookie). If logged-in users and anonymous users follow different backend/frontend code paths for the answering phase, there could be subtle differences in how quiz runs or per-user state are created and linked.
Currently there is no concrete indication that this is the cause, so this hypothesis is considered the least likely of the three. It is still worth verifying the following.
Things to check:
- Logged-in vs anonymous participants use the same logic to fetch their question list for answering.
- Differences in cookie lifetime or fingerprint logic do not cause participants to be associated with stale quiz runs or old per-user answering state when they join a new quiz.
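The stale-association risk in the second check point can be sketched as follows. This is a hypothetical model (assumed names): an anonymous participant is identified by a long-lived UUID cookie, and the backend keeps a "current run" pointer per UUID that is never cleared, so a returning student on day 2 is linked to day 1's run.

```python
# Hypothetical sketch of a never-cleared per-UUID run pointer (assumed names).
import uuid

cookie_store = {}  # models the browser cookie jar (30-day anonymous cookie)
current_run = {}   # backend: anonymous UUID -> run id, never cleared (suspected bug)

def visit(browser):
    # Reuse the long-lived anonymous UUID if present, mint one otherwise.
    if browser not in cookie_store:
        cookie_store[browser] = str(uuid.uuid4())
    return cookie_store[browser]

def join_quiz(anon_id, quiz_id):
    # Suspected bug: an existing pointer wins over the newly joined quiz.
    if anon_id not in current_run:
        current_run[anon_id] = f"run-for-{quiz_id}"
    return current_run[anon_id]

anon = visit("student-phone")
day1_run = join_quiz(anon, "quiz-day1")
anon = visit("student-phone")            # same cookie, next day
day2_run = join_quiz(anon, "quiz-day2")
print(day1_run, day2_run)                # both point at quiz-day1's run
```

If this is what happens, it would also explain the teacher's observation that the problem seems to hit mainly students who already used Recapp on a previous day, while first-time anonymous users (fresh UUIDs) are unaffected.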