Good question! It is a general model of recognition, not specific to Stall Catchers or to this task. It derives from psychophysics research showing that if you show the same person exactly the same thing 100 times, they will never see it exactly the same way, due to noise and other influences in our perceptual system. This model underlies our crowd science approach both for interpreting individual answers and for combining answers to the same movie from many different people.
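That trial-to-trial variability can be illustrated with a toy signal-detection sketch, where repeated viewings of the identical stimulus produce different internal responses because of perceptual noise (all numbers here are illustrative, not fitted to any Stall Catchers data):

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

def repeated_viewings(stimulus_strength, noise_sd=1.0, threshold=0.5, trials=100):
    """Simulate showing the same stimulus to the same observer many times.

    Each trial adds independent Gaussian noise to the internal response,
    so the identical stimulus can land on either side of the decision
    threshold on different viewings.
    """
    detections = sum(
        1 for _ in range(trials)
        if random.gauss(stimulus_strength, noise_sd) > threshold
    )
    return detections  # how many of the trials were reported as "seen"

# A borderline stimulus: some viewings detect it, some do not.
print(repeated_viewings(0.5))
```

Even though the stimulus never changes, the detection count falls somewhere between 0 and 100, never all-or-nothing, which is the behavior the recognition model is built around.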
Some of the movies are certainly overexposed, and we have discussed ways to better regulate movie quality in future datasets. In those cases (when no clear gaps are visible), paying attention to whether the texture is clearly moving in one direction or not can be useful in making a flowing/stalled determination. We have also considered adding "tuning knobs" to allow participants to adjust image quality parameters such as brightness and contrast. Another popular request in this vein (no pun intended) is a zoom capability. These features are all part of our ongoing discussion, and the ongoing challenge for us is how best to prioritize feature development.
This is exactly the kind of question that human computation (crowdsourcing science) research seeks to address. Though we call methods related to answering these questions "consensus algorithms", in some sense it is a misnomer, as consensus implies agreement among all voters. In reality, it is more like a "quorum algorithm" - seeking a majority vote. But the approach is more involved because each participant has a natural bias toward answering flowing or stalled, and each participant demonstrates a degree of sensitivity in discriminating between flowing and stalled. So we do our best to factor in these individual differences. And because humans are involved, we cannot guarantee perfect accuracy, but through validation studies, we can guarantee that a certain accuracy is achieved with a certain likelihood, and those guarantees are sufficient to support the research requirements for data quality.
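One simple way to picture combining votes while factoring in individual differences is a weighted majority, where each participant's vote counts in proportion to the log-odds of their estimated reliability. This is a minimal sketch of that general idea, not the actual Stall Catchers algorithm, and the accuracy values are hypothetical (in practice they would come from validation movies with known answers):

```python
import math

def weighted_vote(votes):
    """Combine one vote per participant into a flowing/stalled call.

    votes: list of (answer, accuracy) pairs, where answer is "stalled"
    or "flowing" and accuracy is that participant's estimated
    probability of being correct (hypothetical values here).
    """
    score = 0.0
    for answer, accuracy in votes:
        # Clamp so the log-odds weight stays finite.
        accuracy = min(max(accuracy, 0.01), 0.99)
        weight = math.log(accuracy / (1 - accuracy))
        score += weight if answer == "stalled" else -weight
    return "stalled" if score > 0 else "flowing"

# Two reliable "stalled" votes outweigh one less-reliable "flowing" vote.
print(weighted_vote([("stalled", 0.9), ("stalled", 0.85), ("flowing", 0.6)]))
# → stalled
```

Under this scheme a few highly reliable participants can outvote a larger number of unreliable ones, which is why a raw majority count is only a starting point.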
Thanks for the great dialog, as always, Michael.