This paper, Crump et al. 2013, is an extensive investigation of how classic cognitive psychology paradigms, including the SHJ category learning experiments, work on the web. We’ll definitely need to take a close look at this.
Do you know if Crump et al. (2013) explicitly referred to “rules” in their instructions? I hope not.
Right – that’s not clear. Their discussion in general doesn’t show sufficient appreciation of how fragile the II > IV ordering is. It would definitely be worth trying to replicate Kurtz et al.’s Exp. I more exactly on the web.
I appreciate that Crump et al. (2013) thoroughly explained all of the steps they took in this series of experiments, including the missteps. It’s very helpful to know, for example, that they were not obtaining results they expected until they added a check for comprehension of instructions.
While I question the ethics of having different incentive-level conditions (the low-incentive group received a mere $0.75 for 15-20 minutes of work), this manipulation illustrated that there is apparently a subpopulation of Turkers who will complete HITs for almost no compensation. Of course, researchers should always strive to offer what they believe to be fair pay, but I appreciate that these authors tested the bounds of how much, or how little, people would accept as payment for doing identical work.