
SurveyMan is a language and runtime for designing, debugging, and deploying surveys on the web at scale. For more information, see surveyman.org.

On calculating survey entropy

I’ve been spending the past two weeks converting analyses that were implemented in Python and Julia into Clojure. The OOPSLA Artifact Evaluation deadline is June 1 and moving these into Clojure means that the whole shebang runs on the JVM (and just one jar!).

One of the changes I really wanted to make to the artifact we submit was a lower upper bound on survey entropy. Upper bounds on entropy can be useful in a variety of ways: in these initial runs we did for the paper, I found them useful for comparing across different surveys. The intuition is that surveys with similar max entropies have similar complexity, similar runtimes, similar costs, and similar tolerance to bad behavior. Furthermore, if the end-user were to use the simulator in a design/debug/test loop, they could use max entropy to guide their survey design.

We’ve iterated on our calculation of the max entropy. Each improvement has lowered the upper bound for some class of surveys.

Max option cardinality Our first method for calculating the maximum entropy of a survey was the one featured in the paper: we find the question with the largest number of options and say that the entropy of the survey can be no greater than that of a survey with the same number of questions, where every question has this maximum number of answer options. Each option has equal probability of being chosen. For some $$survey$$ having $$n$$ questions, the maximum entropy would then be $$\lceil n \log_2 (\max ( \lbrace \lvert \lbrace o : o \in options(q) \rbrace \rvert : q \in questions(survey) \rbrace ) ) \rceil$$.
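
As a rough illustration, here is a minimal Clojure sketch of this bound. The representation is assumed for the example (a survey as a map with a :questions vector, each question with an :options collection); it is not SurveyMan’s actual data model.

```clojure
;; Illustrative sketch; assumes {:questions [{:options [...]} ...]},
;; not SurveyMan's real survey representation.
(defn log2 [x] (/ (Math/log x) (Math/log 2)))

(defn max-cardinality-entropy
  "Upper bound: treat every question as if it had as many options as
   the largest question, with every option equally likely."
  [survey]
  (let [qs       (:questions survey)
        max-opts (apply max (map (comp count :options) qs))]
    (long (Math/ceil (* (count qs) (log2 max-opts))))))
```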

The above gives a fairly tight bound on surveys such as the phonology survey. For surveys that have more variance in the number of options proffered to the respondent, it would be better to have a tighter bound.

Total survey question max entropy We’ve had a calculation for total survey question max entropy implemented in Clojure for a few weeks now. For any question having at least one answer option, we calculate the entropy of that question, and sum up all those bits. For some $$survey$$ having $$n$$ questions, where each question $$q_i$$ has $$m_i$$ options, the maximum entropy would then be $$\lceil \sum_{i=1}^n \mathbf{1}_{\mathbb{N}^+}(m_i)\log_2(m_i)\rceil$$
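
Under the same illustrative representation (and reusing log2 from the previous sketch), the per-question bound might look like this; questions with no options contribute nothing, matching the indicator function above.

```clojure
;; Illustrative sketch; zero-option questions (e.g. instructions) are skipped.
(defn total-question-max-entropy
  [survey]
  (long (Math/ceil
          (reduce + 0
                  (for [q (:questions survey)
                        :let [m (count (:options q))]
                        :when (pos? m)]
                    (log2 m))))))
```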

While the total survey question max entropy gives a tighter bound on surveys with higher variance, it is still a bit too high for surveys with branching. Consider the wage survey. In Sara’s initial formulation of the survey (i.e. not the one we ran), the question with the greatest number of answer options was one asking for the respondent’s date of birth. The answer options were years ranging from 1900 to 1996, for 97 options in all. Most of the remaining questions had about 4 options each:

#/Options   #/Questions       #/Options   #/Questions
2           8                 7           2
3           5                 8           1
4           16                9           1
5           2                 10          1
6           2                 97          1

Clearly in this case, using max option cardinality would not give much information about the entropy of the survey. The max cardinality maximum entropy calculation gives 258 bits, whereas the total survey question max entropy gives 80 bits.
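
Working through the numbers in the table (which sum to 39 questions), the two bounds are:

$$\lceil 39 \cdot \log_2 97 \rceil = \lceil 257.4 \rceil = 258$$

$$\left\lceil 8\log_2 2 + 5\log_2 3 + 16\log_2 4 + 2\log_2 5 + 2\log_2 6 + 2\log_2 7 + \log_2 8 + \log_2 9 + \log_2 10 + \log_2 97 \right\rceil = \lceil 79.4 \rceil = 80$$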

This lower upper bound still has shortcomings, though — it doesn’t consider surveys with branching. For many surveys, branching is used to ask one additional question, to help refine answers. For these surveys, many respondents answer every question in the survey. However, some surveys are designed so that no respondent answers every question; branching may be used to re-route respondents along a particular path. We used branching in this way when we actually deployed Sara’s wage survey. The translated version of Sara’s survey has two 39-question paths, with a 2-option branch question to start the survey and a zero-option instructional question to end it. This version of the survey has a max cardinality maximum entropy of $$\lceil 80 \log_2 97 \rceil = 528$$ bits and a total survey question max entropy of 160 bits (without the ceiling operator, this is approximately equal to twice the entropy of the previous version, plus one bit for the introductory branch question).

The maximum number of bits needed to represent this survey approximately doubled from one version to the next. This isn’t quite right — we know that the longest path through the survey is 41 questions, not 80. In this case, accounting for branching makes a significant difference in the upper bound.

Max path maximum entropy Let’s instead compute the maximum entropy over distinct paths through the survey. We’ve previously discussed the computational complexity of computing distinct paths through surveys. In short, randomization significantly increases the number of possible paths through the survey; if we focus on paths through blocks instead, the analysis becomes more tractable. Rather than thinking about paths through the survey as distinct lists of questions, where equivalent paths have equivalent lengths and orderings, we can instead think about them as unique sets of questions. This perspective aligns nicely with the invariants we preserve.

Our new maximum entropy calculation will compute the entropy of each unique set of questions and select the maximum over those sets. Some questions to consider are:

  1. Are joined paths the same path?
  2. If we are computing empirical entropy, should we also consider breakoff? That is, do we need the probability of answering a particular question?

We consider paths that join to be distinct from each other; if we don’t consider breakoff, the probabilities of answering the question at the join, summed over the joining paths, will be one. As for breakoff, for now let’s ignore it. If we need to compute the empirical entropy over the survey (as opposed to the maximum entropy), then we will use the subset relation to determine which questions belong to which path. That is, if we have a survey with paths $$q_1 \rightarrow q_2 \rightarrow q_4$$ and $$q_1 \rightarrow q_3 \rightarrow q_4$$, then a survey response with only $$q_1$$ answered will be used to compute the path frequencies and answer option frequencies for both paths. The maximum entropy is then computed as $$\lceil \max(\lbrace -\sum_{q\in p} \sum_{o \in ans(q)} \mathbb{P}(o \cap p) \log_2 \mathbb{P}(o \cap p) : p \in paths \rbrace) \rceil$$.
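
With every answer option on a path equally likely, the entropy of a path is just the sum of $$\log_2$$ of each question’s option count, and the survey-level bound is the maximum over paths. A rough Clojure sketch, reusing log2 and the illustrative question maps from the earlier sketches (a path here is just a collection of questions):

```clojure
;; Illustrative sketch: a path is a set (or seq) of question maps.
(defn path-max-entropy [path]
  (reduce + 0
          (for [q path
                :let [m (count (:options q))]
                :when (pos? m)]
            (log2 m))))

(defn max-path-max-entropy [paths]
  (long (Math/ceil (apply max (map path-max-entropy paths)))))
```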

There are two pieces of information we need before actually computing the maximum entropy path. First, we need the set of paths. Since paths are unique over blocks, we can define a function that returns the set of blocks along each path. The key insight here is that for blocks that have the NONE or ONE branch paradigm, every question in that block is answered. For the branch ALL paradigm, every question is supposed to be “the same,” so they will all have the same number of answer options. Furthermore, since the ordering of floating (randomizable) top-level blocks doesn’t matter, and since we prohibit branching from or to these blocks, we can compute the DAG over the totally ordered blocks and then just concatenate the floating blocks onto the unique paths through those ordered blocks.
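
A sketch of that path-enumeration step, under the same assumptions. Here a block is taken to be a map with a :questions collection, and `ordered-block-paths` is a hypothetical stand-in for whatever produces the unique block sequences through the totally ordered top-level blocks; neither is SurveyMan’s actual API.

```clojure
;; Illustrative sketch: blocks as {:questions [...]}; ordered-block-paths is a
;; hypothetical source of the unique routes through the non-floating blocks.
(defn unique-question-paths
  [ordered-block-paths floating-blocks]
  (for [block-path ordered-block-paths]
    ;; the position of floating blocks doesn't change which questions are
    ;; answered, so tack their questions onto every path
    (set (mapcat :questions (concat block-path floating-blocks)))))
```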

The second thing we need to compute is $$\mathbb{P}(o \cap p)$$. The easiest way to do this is to take a survey response and determine which unique path(s) it belongs to. If we count the number of times we see option $$o$$ on path $$p$$, the probability we’re estimating is $$\mathbb{P}(o | p)$$. We can compute $$\mathbb{P}(o \cap p)$$ from $$\mathbb{P}(o | p)$$ by noting that $$\mathbb{P}(o \cap p) = \mathbb{P}(o | p)\mathbb{P}(p)$$. This quantity is computed as $$\frac{\# \text{ of } o \text{ on path } p}{\#\text{ of responses on path } p}\times\frac{\#\text{ of responses on path } p}{\text{total responses}}$$, which reduces to $$\frac{\# \text{ of } o \text{ on path } p}{\text{total responses}}$$. It should be clear from this derivation that even if two paths join, the entropy contributed by the joined subpath is the same as when we treat the paths separately.
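
A sketch of the empirical calculation under the same assumptions. A response is taken to be a map from question to chosen option, and `on-path?` stands in for the subset-relation test described above; both are illustrative, not SurveyMan internals.

```clojure
;; Illustrative sketch: P(o AND p) is estimated as
;; (# of times o is seen among responses on path p) / (total responses).
(defn empirical-path-entropy
  [path responses on-path?]
  (let [total (double (count responses))
        pairs (mapcat seq (filter #(on-path? path %) responses))] ; [q o] pairs
    (- (reduce + 0
               (for [[_ c] (frequencies pairs)
                     :let [pr (/ c total)]]
                 (* pr (log2 pr)))))))

(defn empirical-max-entropy [paths responses on-path?]
  (long (Math/ceil
          (apply max (map #(empirical-path-entropy % responses on-path?) paths)))))
```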

The maximum entropy for the max path in the wage survey, computed using the current implementation of SurveyMan’s static analyses, is 81 bits — equivalent to the original version of the survey, plus one extra bit for the branching.

Hack the system

The Java/Clojure QC of SurveyMan has a RandomRespondent built in. This RandomRespondent class generates answers to surveys on the basis of some policy. The policies currently available are uniform random, first option, last option, and Gaussian. I’ve been thinking about some other adversary models I could add to the mix:

  1. Christmas Tree: This is a variant of the uniform random respondent, where the respondent zigzags down the survey in the form of a “Christmas Tree” (a sketch of one possible implementation follows this list).
  2. Memory Bot: This is more like an augmentation of one of the existing policies, where the questions and answers are cached, and for each question, the bot checks whether it has answered something like it before. We know that sometimes researchers repeat questions, or have similarly worded questions with the same answers (e.g. year of birth). The goal of this bot would be to identify similarly worded questions and try to give consistent answers.
  3. IR Bot: Alternatively, we could search Google for answers and use those answers as solutions.
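
Here is a sketch of one possible Christmas Tree policy, using the same illustrative question representation as the earlier sketches. The particular zigzag (a triangle wave over option positions as the respondent moves down the survey) is just one way to realize the pattern, not the RandomRespondent’s actual implementation.

```clojure
;; Illustrative sketch: walk down the questions, bouncing the chosen
;; option position from leftmost to rightmost and back (a zigzag).
(defn christmas-tree-answers [questions]
  (map-indexed
    (fn [i q]
      (let [opts   (vec (:options q))
            n      (count opts)
            period (max 1 (* 2 (dec n)))   ; length of one full zigzag cycle
            k      (mod i period)
            idx    (if (< k n) k (- period k))]
        (when (pos? n) (nth opts idx))))
    questions))
```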

It’s fairly trivial to write some JavaScript to answer our kind of surveys. Since we now have automated browser testing set up, we should also be able to test collusion in the context of the full pipeline. W00t!

Pricing, for real this time

Reader be warned: I began this draft several weeks ago, so there might be a lack of coherence…

A few weeks ago I posted some musings on pricing. In that post I was mainly concerned with the modeling problem for pricing. Here I’d like to discuss some research questions I’ve been bandying about with Sara Kingsley, and outline the experiments we’d like to run.

The Problem

Pricing is intricately tied to quality control; we use pricing algorithms to help ensure quality. When we previously outlined our adversaries, we took a traditional approach with the assumption that an actor is either good or bad. There are several key features that this model ignores:

  1. Some workers are better at some types of tasks than other types of tasks.
  2. The quality of the task has an impact on the quality of the worker’s output.
  3. Design decisions in services such as AMT can make it difficult to tease apart the two effects above.

(1) is well known and is solved by conditioning the classification on the task. Plenty of work on assessing the quality of AMT workers incorporates task difficulty or task type. Task type discretizes the space, making classification clean and easy. Task difficulty is harder to model, since it can be highly subjective. I’m also not sure it’s entirely discrete, and I have not come across a compelling paper on the subject (though I haven’t looked thoroughly, so please post any in the comments!).

(2) seems to be better-known among social scientists than computer scientists. AMT workers have online forums where they post information about HITs. Typically if a HIT is good, they post very little additional information. If a HIT is bad, they will review it.

Poorly designed or unclear HITs incur a high cost for the workers and the requesters. Literature on crowdworkers’ behavior suggests that they aim for a particular hourly rate. On AMT, a worker can return a HIT at any time. However, if a worker returns a HIT, they will not be compensated for any work whatsoever, and no information about the abandonment is returned to the requester. Consequently, as a worker makes their way through a HIT, they must weigh the cost of completion against the cost of abandonment. Even workers who are highly skilled at a particular task may perform poorly if a HIT is poorly designed. If workers do not reach out to requesters, or if requesters do not search for their own HITs on forums, requesters may never know that workers are abandoning the work, or, if they do know, why.

Quality of the work is clearly tied to quality of the task. In SurveyMan, we address quality of the task in a more principled way than just best practices. It would also stand to reason that quality of the work would be tied to price. One might hypothesize that a higher price for a task would translate into higher quality work. However (according to what Sara’s told me), this is not the case: work quality does not appear to respond to price. We believe that this result is a direct consequence of the AMT shortcoming detailed above — prohibiting workers from submitting early enforces a discontinuity in the observed quality/utility function.

How to address pricing

There are two main research questions we would like to address:

  1. Does the design of SurveyMan change worker behavior?
  2. Can we find a general function for determining the price/behavior tradeoff, and implement this as part of the SurveyMan runtime system?

The impetus for these particular questions came from the results we found when running Sara’s wage survey. There were two differences in the deployment of this survey: (1) I did not run this survey with a breakoff notice, and (2) this was the first survey launched over a weekend.

So-called “time of day effects” are a known problem with AMT. Since AMT draws primarily on workers from the US and India, there are spikes in participation during times when these workers are awake and engaged. Many workers perform HITs while employed at another job. It wouldn’t be a stretch to claim that sub-populations have activity levels that can be expressed as a function of the day of the week. This could explain some of the behavior we observed with Sara’s wage survey. However, the survey ran for almost a week before expiring. I believe that (1) had a strong influence on workers’ behavior.

Is SurveyMan the Solution?

Sara had mentioned some work in economics that found that changing the price paid for a HIT on AMT had no impact on the quality of the work. I had read some previous work that discussed the impact of price on attracting workers, but discussed quality control as a function of task design, rather than pricing. I suspect that the observed absence of difference between price points is related to the way the AMT system is designed.

AMT does not pay for partial work. When a worker accepts a HIT, they can either complete the HIT and submit it for payment (which is not guaranteed), or they return the HIT and receive no payment. Since requester review sites exist, the worker can use the requester’s reputation as a proxy for the likelihood that they’ll be paid for their work and as a proxy for the quality of the HIT.

Consider the case where the HIT is designed so that the worker has complete information about the difficulty of the task. In the context of SurveyMan, this would be a survey whose contents are displayed all on the same page. We know that there will be surveys where this approach is simply not feasible – one example that comes to mind is experimental surveys that require measuring the difference in a respondent’s responses over two different stimuli. In any case, if the user is able to see the entire survey, they will be able to gauge the amount of effort required to complete the task, and make an informed decision about whether or not to continue with the HIT.

This design has several drawbacks. There’s the aforementioned restriction over the types of surveys we can analyze. There’s also a problem with our ability to measure breakoff. Since we display one question at a time, in a more or less randomized order, we can tell the difference between questions that cause breakoff and length-related breakoff. When the respondent is allowed to skip around and answer questions in any order, we lose this power. We also lose any inferences we might make about question order, and generally have a more muddied analysis.

Displaying questions one at a time was always part of our design. However, we decided to allow users to submit early as a way of handling this issue with AMT and partial work. Since we couldn’t get any information about returned HITs, we decided to discourage users from returning them and instead allow them to submit their work early. Since we figured that we would need to provide users with an incentive to continue answering questions, we displayed a notice at the beginning of a survey that told the user that they would be paid a bonus commensurate with the amount and quality of the work they submitted. We decided against telling the user (a) how the bonus would be calculated and (b) how long the survey would be.

I initially thought we would court bots by allowing users to submit after answering the first question. This was absolutely not the case for the phonology surveys. Anecdotally it seems that AMT has been cracking down on bots, but I had a really hard time believing that we had no bots. It wasn’t until I posted the wage survey that I began to see this behavior. I believe that it is related to the lack of a breakoff notice.

It would be interesting to test some of these hypotheses on a different crowdsourcing platform, especially one that allows tracking of partial work. Even a system with a different payment setup would be a good point of comparison.

Possible Experiments

We set up the wage survey to run a fully randomized version and a control version at the same time. I really liked this setup, since it meant that any given respondent had a 50% chance of seeing one of the two, effectively giving us random assignment.

Experiment 1 To start with, I would like to run another version that randomly displays the breakoff notice on each version. One potentially confounding problem might be the payment of bonuses, since this has been our practice in the past, and may be known to the workers. The purpose of this experiment is to test whether showing the breakoff notice changes the quality of responses.

Experiment 2 Another parameter that needs more investigation is the base pay. We recently started computing base pay from the federal minimum wage, an estimated time per question, and the length of the max or average path through the survey (whether to use max or average is still up for debate). I’ve seen very low base pay, with the promise of bonuses, successfully attract workers. It isn’t clear to me how the base pay is related to the number of responses or the quality of workers.
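
For concreteness, here is roughly how such a base pay might be computed, assuming the US federal minimum wage of $7.25/hour and a purely illustrative estimate of 10 seconds per question (the constants SurveyMan actually uses may differ):

```clojure
;; Illustrative constants, not SurveyMan's actual settings.
(def min-wage-per-hour 7.25)   ; US federal minimum wage, in USD
(def seconds-per-question 10)  ; assumed time to answer one question

(defn base-pay
  "Base pay in USD for a path of `path-length` questions."
  [path-length]
  (* min-wage-per-hour
     (/ (* path-length seconds-per-question) 3600.0)))

;; e.g. (base-pay 41) => ~0.83 for a 41-question max path
```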