Category Archives: Research

Posts about research.

Reproducibility and Privacy

What would it take to have an open database for various scientific experiments? An increasing number of researchers are posting data online, and many are willing (and often required) to share their data if you ask for it. This is fine for a single experiment, but what if you’d like to reuse data from two different studies?

There is a core group of AMT respondents who are very active. Sometimes AMT respondents contact requesters, at which point they are no longer anonymous. My colleague Dan Barowy received an email from a respondent thanking him for the quality of the HIT. I asked him the respondent’s name and, as it turned out, they had contacted me when I was running my experiments as well.

So if we have the general case of trying to pair similar pieces of data into a unit (i.e. person) and the specific case of AMT workers who are definitely the same people (they have unique identifiers), how can we combine this information in a way that’s meaningful? In the case of the AMT workers, we will need to obfuscate some information for the sake of privacy. For other sources of data, could we take specific data, infer something about the population, and build a statistical “profile” of that population to use as input to another test? Clearly we can use standard techniques to learn summary information about a population, but could we take pieces of data and unify them into a single entity and say with high probability these measurements are within some epsilon of a “true” respondent? How would we use the uncertainty inherent in this unification to ensure privacy?

Is it possible to unify data in such a way that an experimenter could execute a query asking for samples of some observation, and get a statistically valid Frankenstein version of that sample? I’m sure there’s literature out there on this. Might be worth checking into…
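The pairing idea above can be sketched as a toy record-linkage pass: given records from two studies that share some fields, link the pairs whose field agreement clears a threshold. Everything here (the field names, the records, the 0.6 threshold) is invented for illustration; real linkage would use probabilistic weights per field rather than a flat fraction.

```python
def agreement(rec_a, rec_b, fields):
    """Fraction of shared fields on which two records agree exactly."""
    return sum(rec_a.get(f) == rec_b.get(f) for f in fields) / len(fields)

def link_records(study_a, study_b, fields, threshold):
    """Pair records across studies when field agreement clears the threshold."""
    return [(a["id"], b["id"], agreement(a, b, fields))
            for a in study_a for b in study_b
            if agreement(a, b, fields) >= threshold]

# Invented example records from two hypothetical studies.
study_a = [{"id": "a1", "age": 34, "state": "MA", "edu": "BA"}]
study_b = [{"id": "b7", "age": 34, "state": "MA", "edu": "MS"},
           {"id": "b9", "age": 61, "state": "TX", "edu": "BA"}]

# a1 and b7 agree on 2 of 3 fields; with a 0.6 threshold they are linked.
print(link_records(study_a, study_b, ["age", "state", "edu"], threshold=0.6))
```

The agreement score itself is the "epsilon" in the question above: a low-confidence link is exactly the uncertainty one might exploit to protect privacy in the unified record.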

On Experiments vs Surveys (Ramble Time!)

“Big data hubris” is the often implicit assumption that big data are a substitute for, rather than a supplement to, traditional data collection and analysis.

The core challenge is that most big data that have received popular attention are not the output of instruments designed to produce valid and reliable data amenable for scientific analysis.
Lazer et al.

I wrote previously about the range of experimental paradigms available to us on the web. In that post, we looked at observational studies, surveys, quasi-experiments, and experiments on axes that captured the range of control available to us. We might want to consider other axes as well:

Exploratory vs. Confirmatory

Observational studies and surveys can be considered more exploratory, whereas quasi- and true experiments are more confirmatory. This follows from the level of control we expect to have. Observational studies, as we’ve said before, are a lot like data mining. The environment that we are observing is rich with potential features, but may also be sparse with respect to observations on those features. Suppose we were interested in improving a recommendation system. We might start by looking for correlations between products. That is, we might observe that people who watched The Wizard of Oz also watched It’s a Wonderful Life on Netflix, or people who bought Bishop’s Machine Learning book also bought Hastie et al’s Elements of Statistical Learning on Amazon. We would quickly find that while there are a few strong correlations, we would have a long tail of single observations. Furthermore, our observations do not differentiate between a lack of information and negative training data. That is, we do not know if someone hasn’t watched 2 Fast 2 Furious because they didn’t know it existed or because they already know they won’t like it. Many of these recommender systems have user-provided ratings to help differentiate these two categories, but they cannot cover all the cases, since ratings are still opt-in.
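The ambiguity above is easy to see in code. A minimal co-occurrence count over invented watch histories produces only positive evidence; a user who never watched a title contributes nothing, so "unknown" and "rejected" are indistinguishable in the counts:

```python
from itertools import combinations
from collections import Counter

# Invented watch histories; each set is one user's viewed titles.
histories = [
    {"The Wizard of Oz", "It's a Wonderful Life"},
    {"The Wizard of Oz", "It's a Wonderful Life"},
    {"The Wizard of Oz"},
    {"2 Fast 2 Furious"},
]

# Count how often each pair of titles is watched by the same user.
pair_counts = Counter()
for h in histories:
    for pair in combinations(sorted(h), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))
# Users who never watched "2 Fast 2 Furious" contribute no pair at all:
# the table carries no negative evidence, only positive co-occurrence.
```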

While I am personally not particularly interested in recommender systems, they do provide a nice example of how different instruments correspond to exploratory vs. confirmatory data analysis. We could start by extracting interesting features from our user and product base. By “interesting” we mean features that have sufficient observations and that are as close to orthogonal as possible. The researcher should use domain knowledge to enumerate features.

Slight tangent: One thing I want to point out at this point is that enumerating features is the first step in generating a hypothesis we might want to test. In a lot of ways, features are the basic building blocks of the hypotheses we’re testing. Emery and I recently discussed the idea that hypotheses are like a probabilistic grammar over these features. Typically in machine learning research, the model is the subject of interest. The model is a representation of the variables of interest. What is the relationship between the features and the variables? Features are either directly observed or computed from the data. Personally, I would say that “computed from the data” here should have some kind of bound on it. Clearly in the case of text classification, surface string suffixes are valid features. But what about parts of speech? If you are performing co-reference resolution or doing some kind of entity identification, then parts of speech could be a useful feature. But what if your task *is* part of speech tagging? Would it be fair to use the tags from one part of speech tagger as input to another? What if you did a first pass with your tagger, and then used that information as input on another run (that is, your tagger can take its own output as input)? Now what are the differences between variables in your model and features? /endrant

We would begin our exploratory analyses by looking at the features and the raw data and seeing what we find there. We can use this information or our own domain knowledge to construct a survey. The purpose of this survey is to collect data we might not otherwise have access to. For example, we might explicitly ask respondents who use Netflix about their viewing preferences. As discussed in my earlier post, this helps us fill in some data we might not have already.

A difference we generally see between traditional surveys and what we’ll call experimental questionnaires is the presence of a hypothesis to test. We’ll get into this in a later blog post, but generally in surveys we do not have the notion of a null hypothesis, nor an experimental control. I would say that surveys have their origins in much more qualitative goals. While they are used to guide decision-making, they are also widely used to help tell a narrative. This isn’t to say that they can’t be or aren’t being used rigorously. However, they can be manipulated easily if they are not designed well, and it can be difficult to differentiate between well-designed surveys and poorly designed surveys. This susceptibility to manipulation strongly motivated our work on SurveyMan.

We noticed that there were considerable differences in the objectives and designs of the surveys our collaborators were using. For example, our collaborators in linguistics used surveys in a more experimental context — the questions in their surveys bore strong similarity to each other and their differences could be subtle to detect. Many of the questions belonged to discrete categories (what we might call “factors” in experimental design). There wasn’t much variability between questions. Those surveys were very focused on acquiring enough information to answer a particular question, or test a particular hypothesis.

The wage survey we ran was quite different. The questions were structurally heterogeneous. None of the questions were “redundant” (that is, they did not appear to represent categories of questions that are equivalent in some way). The nature of survey design will always inject some bias for generating hypotheses — for example, the wage survey did not ask questions about favorite foods or whether the respondents had a family history of breast cancer. The authors of the wage survey were clearly interested in the relationships between education, employment, attitudes toward the AMT marketplace, and willingness to negotiate wages. However, the survey was not encoded as an experiment. It was more exploratory. We observed some interesting behavior in the context of randomization and breakoff, and would like to use this information in a future study.

Missing Data

In a survey, if we have missing data, it’s due to breakoff. We are faced with some choices: if breakoff occurs more frequently at a particular question, we ought to look at the question and see what’s going on — is it worded poorly? Are there insufficient options to cover potential responses? Is the question potentially offensive? If breakoff occurs most frequently at a position, this might indicate that the survey is too long, or that there’s a jarring shift between blocks in the survey. We would generally use this information to help debug the survey.
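The debugging step above amounts to tallying where respondents stop. A toy sketch (the response data and the four-question survey are invented): record the last question each abandoned respondent answered, and flag the question that accumulates the most breakoff.

```python
from collections import Counter

# Each response is the ordered list of question ids actually answered.
responses = [
    ["q1", "q2", "q3"],          # broke off after q3
    ["q1", "q2", "q3"],          # broke off after q3
    ["q1", "q2", "q3", "q4"],    # completed the 4-question survey
    ["q1", "q2"],                # broke off after q2
]

SURVEY_LENGTH = 4

# Count breakoff by last question answered, ignoring completed responses.
breakoffs = Counter(r[-1] for r in responses if len(r) < SURVEY_LENGTH)
worst_question, count = breakoffs.most_common(1)[0]
print(worst_question, count)
```

With randomized question order (as in SurveyMan), the same tally can be kept by position as well as by question id, which separates "this question is bad" from "the survey is too long".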

In an experiment, it’s not quite clear that the researcher would use the same information in the same way. Since we expect experiments to have some redundancy built in, breakoff at a particular question might tell us less than breakoff at a question type. That is, we might suspect there to be a latent variable influencing breakoff. Our analysis will have more hypotheses to consider when diagnosing the cause of breakoff.

Confounding variables

A major threat to validity in experiments that is not typically addressed (so far as I can tell) is the failure to model confounding variables. This is where we might use surveys, or a survey-like section of an experiment, to help identify these variables. We’ve done this sort of thing before — in the linguistics surveys, we had a demographics section. We could expand this and ask more questions in an attempt to capture these other variables. In any case, there is something qualitatively different about the demographic questions, when compared with the core questions of interest. In both surveys and experiments, we can view demographic questions as a path to stratifying the population. However, there seems to me to be a clearer divide between how these questions are used in surveys versus experiments.

Correlation vs. Causation

With surveys, we are looking primarily for two things: the raw data (e.g. 35% of Netflix subscribers have viewed The Wizard of Oz) and correlations (viewing It’s a Wonderful Life is positively correlated with having viewed The Wizard of Oz more than once). In experiments, we are also interested in the raw data and correlations, but we are also looking for causal relationships and patterns in the latent variables. If we are lucky, we might be able to find causal relationships in a dataset, but the permutations required to infer a causal relationship may be too sparse for us to discover them through data analysis alone. Instead, we will need to design experiments to ensure coverage of the space. This will require some changes to the specification of the survey language, to capture the abstractions we need.
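For the correlation half of the story, here is the kind of summary a survey yields, sketched with invented counts: the phi coefficient over a 2x2 table of "viewed The Wizard of Oz more than once" against "viewed It's a Wonderful Life". Note that nothing in this number says anything about causal direction.

```python
import math

# Invented 2x2 contingency table:
#   rows = watched Oz more than once (yes/no)
#   cols = watched It's a Wonderful Life (yes/no)
a, b = 30, 10   # Oz>1 & Life,   Oz>1 & no Life
c, d = 15, 45   # Oz<=1 & Life,  Oz<=1 & no Life

# Phi coefficient: the Pearson correlation for two binary variables.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 3))  # positive association, but direction-free
```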

What does it mean for a survey to be “correct”? — Part I.

One of the arguments we keep making about SurveyMan is that our approach allows us to debug surveys. We talk about the specific biases (order, wording, sampling) that skew the results, but one thing that we don’t really emphasize in the tech report is what it really means for a survey to be correct. Here I’d like to tease out various notions of correctness in the context of survey design.

Response Set Invariant

Over the past few months, I’ve been explaining the central concept of correctness as one where we’re trying to preserve an invariant: the set of answers returned by a particular respondent. The idea is that if this universe were to split and you were not permitted to break off, the you in Universe A would return the same set of answers as the you in Universe B. That is, your set of answers should be invariant with respect to ordering, branching, and sampling.


These guys should give the same responses to a survey, up to the whole timeline split thing.
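The invariant reads naturally as a property test (this is a toy illustration, not SurveyMan's actual machinery): a simulated respondent with fixed preferences should produce the same *set* of answers under any question ordering.

```python
# Hypothetical fixed preferences for a simulated respondent.
preferences = {"q1": "yes", "q2": "blue", "q3": "often"}

def take_survey(question_order):
    """Answer each question according to the fixed preferences,
    returning the (question, answer) set — order is deliberately lost."""
    return {(q, preferences[q]) for q in question_order}

order_a = ["q1", "q2", "q3"]   # Universe A's ordering
order_b = ["q3", "q1", "q2"]   # Universe B's shuffled ordering

# The response set is invariant with respect to ordering.
assert take_survey(order_a) == take_survey(order_b)
print("response set invariant holds")
```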

Invalidation via biases

Ordering and sampling can be flawed, and when they are, they lead to our order bias and wording bias bugs. Since we randomize order, we want to be able to say that your answer to some question is invariant with respect to the questions that precede it. Since we sample possible wordings for variants, we want to be able to say that each question in an ALL block is essentially “the same.” We denote “sameness” by treating each question wording variant as exchangeable and aligning the answer options. The prototypicality survey best illustrates variants, both in the question wording and the option wording:

BLOCK | QUESTION | OPTIONS
_1._1 | How good an example of an odd number is the number 3? | Bad; Somewhat bad; Somewhat good; Good
_1._1 | How well does the number 3 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._1 | How odd is the number 3? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._1 | How odd is the number 3? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._2 | How good an example of an odd number is the number 241? | Bad; Somewhat bad; Somewhat good; Good
_1._2 | How well does the number 241 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._2 | How odd is the number 241? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._2 | How odd is the number 241? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._3 | How good an example of an odd number is the number 2? | Bad; Somewhat bad; Somewhat good; Good
_1._3 | How well does the number 2 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._3 | How odd is the number 2? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._3 | How odd is the number 2? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._4 | How good an example of an odd number is the number 158? | Bad; Somewhat bad; Somewhat good; Good
_1._4 | How well does the number 158 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._4 | How odd is the number 158? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._4 | How odd is the number 158? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._5 | How good an example of an odd number is the number 7? | Bad; Somewhat bad; Somewhat good; Good
_1._5 | How well does the number 7 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._5 | How odd is the number 7? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._5 | How odd is the number 7? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._6 | How good an example of an odd number is the number 4? | Bad; Somewhat bad; Somewhat good; Good
_1._6 | How well does the number 4 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._6 | How odd is the number 4? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._6 | How odd is the number 4? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._7 | How good an example of an odd number is the number 465? | Bad; Somewhat bad; Somewhat good; Good
_1._7 | How well does the number 465 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._7 | How odd is the number 465? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._7 | How odd is the number 465? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._8 | How good an example of an odd number is the number 396? | Bad; Somewhat bad; Somewhat good; Good
_1._8 | How well does the number 396 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._8 | How odd is the number 396? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._8 | How odd is the number 396? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._9 | How good an example of an odd number is the number 1? | Bad; Somewhat bad; Somewhat good; Good
_1._9 | How well does the number 1 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._9 | How odd is the number 1? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._9 | How odd is the number 1? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._10 | How good an example of an odd number is the number 463? | Bad; Somewhat bad; Somewhat good; Good
_1._10 | How well does the number 463 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._10 | How odd is the number 463? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._10 | How odd is the number 463? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._11 | How good an example of an odd number is the number 6? | Bad; Somewhat bad; Somewhat good; Good
_1._11 | How well does the number 6 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._11 | How odd is the number 6? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._11 | How odd is the number 6? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._12 | How good an example of an odd number is the number 730? | Bad; Somewhat bad; Somewhat good; Good
_1._12 | How well does the number 730 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._12 | How odd is the number 730? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._12 | How odd is the number 730? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._13 | How good an example of an odd number is the number 9? | Bad; Somewhat bad; Somewhat good; Good
_1._13 | How well does the number 9 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._13 | How odd is the number 9? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._13 | How odd is the number 9? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._14 | How good an example of an odd number is the number 8? | Bad; Somewhat bad; Somewhat good; Good
_1._14 | How well does the number 8 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._14 | How odd is the number 8? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._14 | How odd is the number 8? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._15 | How good an example of an odd number is the number 827? | Bad; Somewhat bad; Somewhat good; Good
_1._15 | How well does the number 827 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._15 | How odd is the number 827? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._15 | How odd is the number 827? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
_1._16 | How good an example of an odd number is the number 532? | Bad; Somewhat bad; Somewhat good; Good
_1._16 | How well does the number 532 represent the category of odd numbers? | Poorly; Somewhat poorly; Somewhat well; Well
_1._16 | How odd is the number 532? | Not very odd; Somewhat not odd; Somewhat odd; Very odd
_1._16 | How odd is the number 532? | Not at all odd; Somewhat not odd; Somewhat odd; Completely odd
2 | What is your native tongue? | American English; British English; Indian English; Other

Invalidation via runtime implementation

Clearly ordering and sampling are typical, well-known biases that we are framing as survey bugs. These bugs violate our invariant. We have also put restrictions on the survey language in order to preserve this invariant. I discussed these restrictions previously, but didn’t delve into their relationship with correctness. In order to understand the correctness of a survey program, we will have to talk a little about semantics.

Generally when we talk about language design in the PL community, we have two sets of rules for understanding a language: the syntax and the semantics. As with natural language, the syntax describes the rules for how components may be combined, whereas the semantics describes what they “really mean”. The more information we have encoded in the syntax, the more we can check at compile time — that is, before actually executing the program. At its loosest, the syntax describes a valid surface string in the language, whereas the semantics describes the results of an execution.

When we want “safer” languages, we generally try to encode more and more information in the syntax. A common example of this is type annotation. We can think of this extra information baked into the language’s syntax as some additional structure to the program. Taking the example of types, if we say some variable foo has type string and then try to use it in a function that takes type number, we should fail before we actually load data into foo — typically before we begin executing the program.

In SurveyMan, these rules are encoded in the permitted values of the boolean columns, the syntax of the block notation, and the permitted branch destinations. Let’s first look at the syntax rules for SurveyMan.

SurveyMan Syntax : Preliminaries

Before we define the syntax formally, let us define two functions we’ll need to express our relaxed orderings over questions, blocks, and column headers:

$$ \sigma_n : \forall (n : int), \text{ list } A \times n \rightarrow \text{ list } A \times n$$

This is our random permutation. Actually, it isn’t *really* a random permutation, because we are not using the randomness as a resource. Instead, we will be using it more like a nondeterministic permutation (hence the subscript “n”). We will eventually want to say something along the lines of “this operator returns the appropriate permutation to match your input, so long as your input is a member of the set of valid outputs.” $$\sigma_n$$ takes a list and its length as arguments and returns another list of the same length. We can do some hand-waving and say that this is equivalent to a variable-arity function whose output has the same number of elements as its input, so that

$$ \sigma_n([\text{apples ; oranges ; pears}], 3) \equiv \sigma_n(\text{apples, oranges, pears})$$

and the set of all valid outputs for this function would be

$$ \lbrace \text{[apples ; oranges ; pears], [apples ; pears ; oranges], [oranges ; pears ; apples],}$$
$$\quad \text{[oranges ; apples ; pears], [pears ; apples ; oranges], [pears ; oranges ; apples]} \rbrace$$.

We are going to abuse notation a little bit more and say that in our context, the output is actually a string where each element is separated by a newline. So, if we have for example

$$\sigma_n(\text{apples, oranges, pears}) \mapsto [\text{oranges ; pears ; apples}]$$,

then we can rewrite this as

$$\sigma_n(\text{apples, oranges, pears}) \mapsto \text{oranges}\langle newline \rangle \text{pears}\langle newline \rangle \text{apples}$$,

where $$\langle newline \rangle$$ is the appropriate newline symbol for the program/operating system.

Why not encode this directly in the typing of $$\sigma_n$$? The initial definition makes the syntax confined and well-understood for the (PL-oriented) reader. Since we’re defining a specific kind of permutation operator, we want its type to be something easy to reason about. The “true” syntax is just some sugar, so we have some easy-to-understand rewrite rules to operate over, rather than having to define the function as taking variable arguments and returning a string whose value is restricted to some type. Note that I haven’t formalized anything like this before, so there might be a “right way” to express this kind of permutation operator in a syntax rule. If someone’s formalized spreadsheets or databases, I’m sure the answer lies in that literature.
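Operationally, $$\sigma_n$$ can be read as a membership check: an input parses if it is *some* permutation of the expected elements. A direct sketch of that reading (mine, not SurveyMan's implementation):

```python
from itertools import permutations

def valid_outputs(elements):
    """The set of all orderings that sigma_n may map the input to."""
    return {tuple(p) for p in permutations(elements)}

expected = ["apples", "oranges", "pears"]

# The newline-separated "sugared" form from the text, split back into a list.
observed = "oranges\npears\napples".split("\n")

assert tuple(observed) in valid_outputs(expected)
print(len(valid_outputs(expected)))  # 3! = 6 valid outputs
```

Enumerating all $$n!$$ permutations is obviously only for exposition; a real parser would check multiset equality instead.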

The above permutation operator handles the case where we can use some kind of equality operator to match up inputs and outputs that are actually equal. Consider this example: suppose there are two rules in our syntax:

$$\langle grocery\_list \rangle ::= \langle fruit\rangle \mid \sigma_n(\langle fruit\rangle,…,\langle fruit\rangle)$$
$$\langle fruit \rangle ::= \text{ apple } \mid \text{ orange } \mid \text{ pear } \mid \text{ banana } \mid \text{ grape }$$

Suppose we store our grocery list in a spreadsheet, which we export as a simple csv:

apple
banana
orange

Then we just want to be able to match this with the $$grocery\_list$$ rule.

Next we need to introduce a projection function: a function to select one element from an ordered, fixed-size list (i.e. a tuple). This will actually be a family of functions, parameterized by the natural numbers. The traditional projection function selects elements from a tuple and is denoted by $$\pi$$. If we have some tuple $$(3,2,5)$$, then we define a family of projection functions $$\pi_1$$, $$\pi_2$$, and $$\pi_3$$, where $$\pi_1((3,2,5))=3$$, $$\pi_2((3,2,5))=2$$, and $$\pi_3((3,2,5))=5$$. Our projection function will have to operate over a list, rather than a tuple, since it will be used to extract values from columns and we will not know the total number of columns until we read the first line of the input. The indices correspond to the column indices, but since columns may appear in any order, we must use the traditional permutation operator to denote the correct index:

$$\pi_{\sigma(\langle column \rangle)}(\langle col\_data \rangle_{(\sigma^{-1}(1))}, \langle col\_data \rangle_{(\sigma^{-1}(2))},…,\langle col\_data \rangle_{(\sigma^{-1}(n))})$$

Now we have some set of projection functions, $$\pi_{\sigma(\langle column \rangle)}$$, the size and values of which we do not know until we try to parse the first row of the csv. However, once we parse this row, the number of columns ($$n$$ above) and their values are fixed. Furthermore, we will also have to define a list of rules for the valid data in each column, but for column headers recognized by the SurveyMan parser, these are known ahead of time. All other columns are treated as string-valued. We parameterize the column data rules, using the subscripts to find the appropriate column name/rule for parsing.
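Concretely, parsing the header row fixes the permutation $$\sigma$$, and $$\pi_{\sigma(\langle column\rangle)}$$ then selects that column's cell from each subsequent row. A sketch with an invented two-row csv:

```python
import csv
import io

# Invented input: the recognized columns appear in an arbitrary order.
raw = "OPTIONS,QUESTION,BLOCK\nBad,How odd is 4?,_1._6\n"
rows = list(csv.reader(io.StringIO(raw)))

# Parsing the first row fixes n and the column -> index map (sigma).
header = rows[0]
sigma = {name: i for i, name in enumerate(header)}

def project(column, row):
    """pi over a list: select the named column's cell from a row."""
    return row[sigma[column]]

print(project("QUESTION", rows[1]), project("BLOCK", rows[1]))
```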

SurveyMan Syntax : Proper

$$\langle survey \rangle ::= \langle block \rangle \mid \sigma_n(\langle block \rangle, …,\langle block \rangle)$$

$$\langle block \rangle ::= \langle question \rangle \mid \sigma_n(\langle block \rangle, …, \langle block \rangle)$$
$$\quad\quad\mid \; \sigma_n(\langle question \rangle,…,\langle question \rangle) \mid \sigma_n(\langle question \rangle, … ,\langle block \rangle,…)$$

$$\langle row \rangle ::= \langle col\_data \rangle_{(\sigma^{-1}(1))},…,\langle col\_data \rangle_{(\sigma^{-1}(n))} $$

$$\langle question \rangle ::= \langle other\_columns \rangle\; \pi_{\sigma(\mathbf{QUESTION})}(\langle row \rangle)$$
$$\quad\quad\quad\quad\quad\;\langle other\_columns\rangle\;\pi_{\sigma(\mathbf{OPTIONS})}(\langle row \rangle) \;\langle other\_columns \rangle$$
$$\quad\quad\mid\;\langle question \rangle \langle newline\rangle \langle option \rangle$$

$$\langle option \rangle ::= \langle empty\_question\rangle \; \langle other\_columns\rangle \; \pi_{\sigma(\mathbf{OPTIONS})}(\langle row \rangle) \; \langle other\_columns \rangle$$
$$\quad\quad\mid\;\langle other\_columns\rangle \;\langle empty\_question\rangle \; \pi_{\sigma(\mathbf{OPTIONS})}(\langle row \rangle) \; \langle other\_columns \rangle$$
$$\quad\quad\mid\;\langle other\_columns\rangle \; \pi_{\sigma(\mathbf{OPTIONS})}(\langle row \rangle) \; \langle empty\_question\rangle \;\langle other\_columns \rangle$$
$$\quad\quad\mid\;\langle other\_columns\rangle \; \pi_{\sigma(\mathbf{OPTIONS})}(\langle row \rangle) \; \langle other\_columns\; \rangle\langle empty\_question\rangle \;$$

$$\langle empty\_question \rangle ::= \langle null \rangle$$

$$\langle other\_columns \rangle ::= \langle null \rangle \mid \langle other\_column \rangle \; \langle other\_columns \rangle$$

$$\langle other\_column \rangle ::= \langle repeatable\_column \rangle$$
$$\quad\quad \mid\; \langle nonrepeatable\_column \rangle $$
$$\quad\quad \mid \;\langle changable\_column\rangle$$

$$\langle nonrepeatable\_column \rangle ::= \;, \;\pi_{\sigma(\mathbf{FREETEXT})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{FREETEXT})}(\langle row \rangle)$$

$$\langle repeatable\_column \rangle ::= \; , \;\pi_{\sigma(\mathbf{BLOCK})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{BLOCK})}(\langle row \rangle)$$
$$\quad\quad\mid\; , \;\pi_{\sigma(\mathbf{EXCLUSIVE})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{EXCLUSIVE})}(\langle row \rangle)$$
$$\quad\quad\mid\; , \;\pi_{\sigma(\mathbf{ORDERED})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{ORDERED})}(\langle row \rangle)$$
$$\quad\quad\mid\; , \;\pi_{\sigma(\mathbf{RANDOMIZE})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{RANDOMIZE})}(\langle row \rangle)$$
$$\quad\quad\mid\; , \;\pi_{\sigma(\mathbf{CORRELATION})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{CORRELATION})}(\langle row \rangle)$$

$$\langle changable\_column \rangle ::= , \;\pi_{\sigma(\mathbf{BRANCH})}(\langle row \rangle) \mid \pi_{\sigma(\mathbf{BRANCH})}(\langle row \rangle)$$
$$\quad\quad\mid\; , \;\pi_{\sigma(\langle user\_defined \rangle)}(\langle row \rangle) \mid \pi_{\sigma(\langle user\_defined \rangle)}(\langle row \rangle)$$

$$\langle col\_data \rangle_{(\mathbf{FREETEXT})} ::= \mathbf{TRUE} \mid \mathbf{FALSE} \mid \mathbf{YES} \mid \mathbf{NO} \mid \langle string \rangle \mid \#\;\lbrace\; \langle string \rangle \; \rbrace$$
$$\langle col\_data \rangle_{(\mathbf{BLOCK})} ::= (\_|[a-z])?[1-9][0-9]*(\setminus .\_?[1-9][0-9]*)*$$
$$\langle col\_data \rangle_{(\mathbf{BRANCH})} ::= \mathbf{NEXT} \mid [1-9][0-9]*$$
$$\langle col\_data \rangle_{(\mathbf{EXCLUSIVE})} ::= \mathbf{TRUE} \mid \mathbf{FALSE} \mid \mathbf{YES} \mid \mathbf{NO}$$
$$\langle col\_data \rangle_{(\mathbf{ORDERED})} ::= \mathbf{TRUE} \mid \mathbf{FALSE} \mid \mathbf{YES} \mid \mathbf{NO}$$
$$\langle col\_data \rangle_{(\mathbf{RANDOMIZE})} ::= \mathbf{TRUE} \mid \mathbf{FALSE} \mid \mathbf{YES} \mid \mathbf{NO}$$
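The BLOCK and BRANCH column grammars above are genuinely regular, so they can be checked directly with an off-the-shelf regex engine. A sketch of that check (the sample values are invented):

```python
import re

# Regular expressions transcribed from the col_data rules above.
BLOCK_RE = re.compile(r"(_|[a-z])?[1-9][0-9]*(\._?[1-9][0-9]*)*\Z")
BRANCH_RE = re.compile(r"(NEXT|[1-9][0-9]*)\Z")

assert BLOCK_RE.match("_1._16")      # floating subblock notation: valid BLOCK
assert not BLOCK_RE.match("0")       # block ids may not start with 0
assert BRANCH_RE.match("NEXT")       # valid BRANCH
assert BRANCH_RE.match("3")          # branch to top-level block 3
assert not BRANCH_RE.match("_1.2")   # BRANCH may not name a subblock
print("column grammars ok")
```

Note how rules (1) and (2) from the branching discussion below fall out of the BRANCH regex alone: subblock and floating-block notation simply fail to match.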

Notes :

  • The last match for the $$\langle block\rangle$$ rule should really be four rules — one for one question and one block, one for one question and many blocks, one for one block and many questions, and one for many blocks and many questions, but I thought that would be cumbersome to read, so I shortened it.
  • All $$\langle col\_data \rangle$$ rules not explicitly listed are unconstrained strings.
  • Remember, the subscripted $$n$$ in $$\sigma_n$$ is for nondeterminism; the $$n$$ indexing the column data denotes the number of input columns.
  • We also accept json as input; the json parser assigns a canonical ordering to the columns, so generated ids and such are equivalent.
  • What’s worth noting here is which things we intend to check on the first pass of the parser, compared with those that require another iteration over the structure we’ve built. We would like to be able to push as much into the syntax as possible, since failing earlier is better than failing later. On the other hand, we want the syntax of the language to be clear to the survey writer; if the survey writer struggles to just get past the parser, we have a usability problem on our hands.

    A good example of the limitations of what can be encoded where is the BRANCH column. Consider the following rules for branching:

    1. Branch only to top-level blocks.
    2. Branch only to static (i.e. non-floating) blocks.
    3. Only branch forward.
    4. Either branch to a specific block, or go to the next block (may be floating).

    (1) and (2) are encoded in the grammar — note that the regular expression does not include the subblock notation used in the BLOCK column’s regular expression. Furthermore, the BRANCH rule will not match against floating blocks. Any survey that violates these rules will fail early.

    (3) is not encoded directly in the grammar because we must build the question and top-level containing block first. Since it cannot be verified in the first pass, we perform this check after having parsed the survey into its internal representation.

    (4) cannot be determined until runtime. This is an example of where our syntax and static checks reach their limits. $$\mathbf{NEXT}$$ is typically used when we want to select exactly one question from a block and continue as usual. A user could conceivably use $$\mathbf{NEXT}$$ as a branch destination in the usual branching context; the only effect this could have would be the usual introduction of constraints imposed by having that block contain a “true” branch question.

    This leads into my next post, which will (finally!) be about the survey program semantics.
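Rule (3) above (branch only forward) is the one that needs the second pass: only after parsing does each branch question know the index of its containing top-level block. A sketch of that check, over an invented representation where a branch is just a (source block index, destination block index) pair:

```python
def check_forward_branches(branches):
    """Second-pass check: every branch must point strictly forward.
    branches: list of (source_block_index, destination_block_index)."""
    errors = []
    for src, dest in branches:
        if dest <= src:  # backward or self branch violates rule (3)
            errors.append((src, dest))
    return errors

# Block 2 branches ahead to block 5 (ok); block 4 branches back to block 1.
print(check_forward_branches([(2, 5), (4, 1)]))
```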