Organizing Virtual CUNY 2020 @ UMass Amherst

On March 8th, the CUNY organizing committee made the decision to cancel the in-person CUNY conference, due to the global COVID-19 outbreak. With 11 days until the planned start date, we had a choice: we could cancel completely, or we could try to virtualize. We chose the latter, and set out to try something new by hosting CUNY in a purely virtual format. There were good reasons to try this, we thought, even with the very short turnaround time it would require. First and foremost: CUNY is an important event for early career researchers and students in our community, and by the time the decision to cancel was made, many of them had invested a significant amount of time into preparing their talks. We wanted to honor their commitments, and find a way for them to reap at least some of the professional benefits. Second, we thought that everyone who was planning to attend would still like to hear about the latest and most exciting research in the world of psycholinguistics, even if it was from the comfort and safety of their homes. Third, we had the sense that many conferences would end up virtualizing, and that CUNY could serve as a sort of pilot experiment to see how this goes for a conference of our size. We are experimentalists, after all!

The purpose of this post is to sketch out how we made this transition, and to report the results of that experiment.

Decisions, decisions

The first two questions we tackled were: How should we present the talks, and how should we present the posters?

For the talks, we had already decided on a schedule for the 3 days of the conference a few weeks prior. The registered CUNY attendees had already planned to attend for those three days, so we knew we had a captive audience then. Similarly, presenters knew whether they would be presenting talks or posters, and those giving talks knew at what time they would do so. Because our conference attendees had been planning on these dates and times for so long, we thought we should stick to this schedule. So we decided to hold a synchronous virtual conference with the originally planned dates and times.

We’ll start by detailing how we ran the talks at CUNY.

Talks

We contacted speakers and let them know that talks would take place when they were scheduled in the original program, but that we were flexible about this. We realized that there were now going to be speakers giving presentations from different time zones, and from places with unreliable internet and distractions. Even more worrisome was the possibility that speakers might not be in a position to give a talk at all, given the very difficult climate that the pandemic had created.

So we decided that our number one priority was allowing as many people to present as possible. We abandoned any strong commitment to the thematic sessions we had organized and moved speakers around when doing so made it possible for them to present synchronously. Speakers who could not make their scheduled time for any reason also had the option to pre-record their presentation and upload the video to OSF. Overall, we had 34 planned talks for physical CUNY (7 invited presentations, and 27 papers). In the end, 4 of these talks were given asynchronously, and a small number were rearranged to make it easier on the presenters.

Zoom

We presented the synchronous CUNY talks in a series of Zoom Webinars. The Webinar capacity is not included in the basic free version of Zoom, so we paid for a subscription. We upgraded to the Business Plan with the Webinar and Cloud Recording Add-Ons, which in all costs $457 a month. Zoom webinars differ from meetings in that conference attendees have their video and mic disabled upon joining the webinar. The only participants with video and microphone access enabled were hosts and panelists. Our plan allowed a maximum of 500 attendees to view the conference at a time, and the webinars were also recorded by the host and stored in the cloud for people to be able to view after the fact. These were automatically captioned by Zoom.

We thought 500 was a satisfactory limit for our conference, and this ended up being the case: The maximum concurrent attendance at any CUNY Webinar reached 355 participants. Our backup option in case we had overflow was that we would stream the Webinars over YouTube live.

Organization of a CUNY session

As mentioned, each session was organized as its own separate Zoom Webinar. We maintained a single link on the website that participants could click to join the webinar, but behind the scenes, we were updating that link so that it pointed to the current CUNY session at any given moment.

Within each session, each webinar panelist was given one of three roles:

Host: The host was a local UMass CUNY member who was responsible for running the technical side of the Zoom session. The host was the one who ran the Webinar on their computer; this was all done at a machine with a wired ethernet connection. The host also had primary responsibility for setting the parameters of each Zoom webinar and managing all the technical aspects of the session. We had a host checklist that detailed these responsibilities: You can see it here.

Session Chair: The session chair was a specialist in the area thematically represented in the given session. The Session Chair was responsible for introducing the speakers, giving the speakers a 5 minute warning, and managing the question and answer period. The checklist for the session chair responsibilities can be found here.

Speakers: The speakers’ main responsibility was giving their talk! The only things that were really different for the speakers in the virtual format were getting used to Zoom and to how we had structured the sessions. Prior to CUNY, we gave our speakers a brief introduction to Zoom, helped them set up their screen sharing the way they wanted it to look, and gave them a brief orientation to what the experience would be like, including the technical aspects (e.g. screen-sharing) and the experiential aspects (e.g. don’t feel awkward when you don’t hear lots of people clapping, and try pinning a video of a fellow panelist if having someone nodding along is helpful).

Organization of the Q and A period

Asking questions of presenters is one of the key components of a conference and this is something we definitely wanted to preserve. We used the following strategy:

  • Rather than making use of the “raise hand” or “chat” features in Zoom, attendees could type in a question while the talk was ongoing. This question was invisible to all other attendees, and was sent only to the host and the panelists. Questions arrived in the order they were asked, tagged with the name of the question-asker. We removed the options for attendees to raise hands or chat, in an attempt to minimize confusion over how to ask a question.
  • During the talk, the host’s responsibility was to dismiss questions that were not viable questions. Non-viable questions fell into a couple of categories: i) comments (e.g. ‘Good talk!’), ii) early questions that were unambiguously answered later in the talk, and iii) questions that violated the CUNY code of conduct. The goal of this process was to ensure that at the end of the talk, all of the remaining questions in the Q&A panel were questions that the session chair could choose from.
  • At the end of the talk, it was then the role of the session chair to select a question from those that had come in. They were asked to prioritize questions asked by students, postdocs and non-tenured faculty (determined from the attendee’s name), and to be mindful of gender balance in questions taken. Session chairs were invited to select the questions they thought would contribute the most to a constructive, engaging discussion.
  • When the session chair selected a question, they would read the name aloud; behind the scenes, the host had responsibility for listening to the session chair’s cues, and would unmute that attendee to ask their question to the speaker directly. Afterwards, the host would remove the attendee’s mic permissions.
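The Q&A triage described in the bullets above can be sketched in a few lines of Python. This is purely illustrative: we ran the process by hand inside Zoom’s Q&A panel, and the field names and career-stage labels below are invented for the example.

```python
# Illustrative sketch of the Q&A triage described above (hypothetical
# field names; the real process was manual, inside Zoom's Q&A panel).
CAREER_PRIORITY = {"student": 0, "postdoc": 1, "untenured": 2, "tenured": 3}

def triage_questions(questions):
    """Drop questions the host dismissed, then order the rest with
    earlier-career askers first; sort stability preserves arrival
    order within each career stage."""
    viable = [q for q in questions if q["viable"]]
    return sorted(viable, key=lambda q: CAREER_PRIORITY[q["career_stage"]])
```

In the actual sessions, of course, the chair also weighed gender balance and the likely value of the discussion, which no simple sort key captures.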

Behind the scenes, random tips and tricks

Overall, the CUNY sessions required a fairly demanding amount of coordination between the host, the session chair, and the panelists. The session chair was essentially the ringleader of a session: The host and the speakers had to monitor the chair’s verbal cues, and the host managed Zoom to make sure the transitions between talks, between question answers, and between sessions went smoothly. Here are some observations about this process that might be helpful for future organizers:

  • Practice was key, especially for hosts and session chairs. We practiced several times leading up to the conference, so that the host duties (muting / unmuting questioners, spotlighting speakers, and so on) became second nature.
  • Hosting and session chair duties are cognitively demanding. In fact, the really serious demands placed on the host were the primary reason that we split CUNY into different sessions; that allowed us to switch hosts and give them a chance to relax between sessions.
  • We tried to avoid dead air to the extent possible, and came up with a couple of little tricks to ensure this. For example, session chairs got in the habit of announcing the name of the question asker first, and then following it with 20-30 seconds of additional content (like announcing their affiliation). Announcing the name first, followed by a little buffer, allowed the host to find and unmute the attendee so that there was only a minimal amount of dead air. Similarly, the session chair prepared a very brief introduction to all presenters (name, title, and affiliation), which they could read out while the speakers got their screen ready.
  • There were often several questions ready at the end of a talk, which made things easy. But this wasn’t always the case: sometimes there were no questions submitted at the point when a talk ended. And in the virtual format, we couldn’t tell if people were just in the process of typing in questions, so we had no way of calibrating how long it would be before a question came in. To avoid dead air, the session chairs were instructed to ask one of their own questions first in this case, giving people time to type in questions. This seemed to work: In every talk where this happened, by the time the session chair had finished asking their first question, the Q&A box had filled up with more questions and we were able to move on to audience questions.
  • Zoom provides very good captions for videos, but they are available only at a delay. We were not able to do live captions in our Zoom Webinars, because we did not have the budget for this. Lina Hou and Savi Namboodiripad were gracious enough to lend their expertise and help us set up a workaround solution that relied on Otter.ai to provide live captions of the meeting. 

Posters

The poster sessions were organized as mixed synchronous / asynchronous presentation sessions. The key elements were:

  • use of OSF to archive the posters, and to host comments;
  • suggested brief ‘poster presentation’ videos to accompany the PDF posters, also uploaded to OSF;
  • three poster sessions during which conference attendees were encouraged to interact with posters.

We had accepted approximately 270 posters for physical CUNY, organized into three distinct poster sessions of 90 posters each. We decided early on that we did not want to hold all-synchronous poster sessions, because we thought that given our likely attendee-to-presenter ratio, poster presenters would largely find themselves alone during the poster period.

OSF

Before any of this, we set up an OSF Meeting for CUNY 2020. This was absolutely free! We had planned this as our repository for posters (and talks) for “offline” viewing. We built upon this aspect of physical CUNY to host the poster sessions. Presenters were able to create OSF projects on their own by following the instructions that OSF provides. These OSF projects can host any kind of file from poster pdfs to mp4 recordings of presenters’ spiels.

As far as we know, OSF is accessible from just about anywhere. This makes it the perfect repository for broad usage. Another reason we went with OSF for this is the commenting functionality. Those with OSF accounts can comment publicly on the projects of others to ask questions and engage with both the authors and the audience. This creates a space for discussion to ensue.

Poster Sessions

We kept “Poster Sessions” in the schedule of the conference. And we encouraged all attendees to visit posters and interact with poster presenters as they would at a physically-held CUNY.

As stated earlier, posters were stored on OSF, and authors uploaded them themselves. We additionally encouraged poster presenters to make full use of OSF’s capabilities to make this as useful a poster session as it could be. This meant they could upload a 5-minute video of themselves talking through their poster as they would if it were up on a board and they were standing in front of it. They could also use the wiki of their project to promote a private Zoom meeting of their making, or their Skype handle, if they were willing to set aside the poster session to chat in real time with attendees. We estimate that between 1/4 and 1/3 of the poster presenters in a given session offered an option to meet with them live in the form of a Zoom meeting or similar.

Social Zoom Meetings

Another main aspect of the physical CUNY conference is the opportunity to socialize with researchers, meet new people and reconnect with old colleagues in the social functions scattered in and about the conference. On the first day of the conference, it became clear that our attendees were self-organizing social engagement in interesting and fun ways: They were already using Zoom (or similar services) for poster sessions, but other attendees were setting up Zoom for purely social purposes, such as organizing meet-ups around scientific themes (e.g. an open Zoom meet and greet for researchers interested in bilingual sentence processing), lab meet-ups for current students and alumni, and even just purely social happy hour-type Zoom events.

To try and facilitate access to all these different, organic opportunities that popped up, we created an open Google sheet for those who wanted to host socially-oriented Zoom meetings. It turns out that when you turn over editing privileges to hundreds of CUNY attendees, they will pretty quickly self-organize. Within an hour or so, the sheet had different tabs for each day of the conference, and different tabs for poster discussion sessions and social events.

The CUNY open Zoom sign-up sheet can be seen here.

How’d we do? The post-CUNY poll

After CUNY, we sent out a poll to everyone who attended the conference. 343 people responded to at least one question in the poll. Of the respondents, 157 (45.8%) had been planning to attend physical CUNY, 179 (52.2%) indicated that they had not been planning to attend physical CUNY, and the remainder had not made up their minds when CUNY was cancelled.

Here are the results, and some of the main takeaways that we think they suggested.

Talks

  1. Talks were easy to access using the Zoom webinar format.
  2. There could have been more clarity about question procedures.
  3. The level of engagement with virtual talks was, on balance, similar to that for in-person talks.
  4. People prefer to watch talks in real time.

Posters

  1. There needs to be more work to figure out the best way to encourage and facilitate engagement with virtual posters.

Social Engagement

  1. Most attendees did not engage socially with other attendees.

Virtual conferences

  1. Most people report that they would submit their work to future virtual conferences.
  2. People would especially like the option to present virtually at conferences on other continents…
  3. … but they would prefer to travel to a conference on their own continent.
  4. If they can be reimbursed, most people would be willing to pay ~$100 for a virtual conference.
  5. If they cannot be reimbursed, most people would be willing to pay <$50 to attend.

What other conferences / workshops can follow this model?

We suspect that several of the incidental features of the CUNY conference and community were important for the success it enjoyed. Some further thoughts on this that might be relevant for people considering virtualizing their own conferences:

Size: CUNY drew a crowd of roughly 1,100 unique participants across the three days, with about 350 attendees at any one time. With this kind of traffic, we had no trouble on our side and no complaints of poor quality from the attendees. But we could imagine conference size mattering. A smaller conference may have difficulty emulating our engagement during the question-asking period, for instance. Similarly, a conference with a much larger audience might need a different solution for the Q&A period, and would have to pay for a webinar package that allows more users at a time or live stream it to YouTube or Facebook.

Non-Parallel Sessions: CUNY has never had parallel sessions, by design: There is only one talk at a time. This made our job quite a bit simpler. To run parallel sessions, we think you would need double the people-power, and the organizers and volunteers would all have to be pretty proficient in Zoom. Given the amount of time and effort it took us to get a sufficient number of organizers conference-ready, this seems like a challenging task.

Community: The community of CUNY attendees is pretty tight-knit, with many of them annual attendees. Because of that, there was a shared investment in making this work. We think that without this strong sense of community, we would not have had such enthusiastic session chairs and presenters, and the engagement and discussions we were able to foster would have been markedly diminished.

Traveling to Amherst

Amherst, Massachusetts is a lovely town situated in the Pioneer Valley. But Amherst is also quite a bit smaller than other recent CUNY sites (Boulder, CO; Davis, CA; Boston, MA; Gainesville, FL). In fact, Amherst is arguably the smallest, most rural venue for CUNY since our conference was founded, with Chapel Hill in 2008 coming in second place.

It’s adorable here. We love our little New England hamlet. But our location comes with some logistical challenges for out-of-town folk who need to travel to join us. This post is dedicated to the finer points of travel and lodging in the Pioneer Valley.

Travel

Information about travel options to Amherst is documented in detail at this link. We have two airports that are accessible to Amherst: Bradley Airport in Hartford CT, and Logan Airport in Boston MA (Albany International Airport in NY is a possible third option). Bradley is about a 45 minute drive from Amherst, and Logan is approximately a 90 minute drive.

Peter Pan operates a bus that goes from Logan Airport to Amherst MA: it is a 4 and a half hour ride and costs approximately $36 at the time of writing. Another transportation option is Valley Transporter, which operates a shuttle service to Amherst from both Logan and Bradley airports. These shuttles must be booked in advance,  and the price ranges from $57 for a single passenger to as low as ~$30 per passenger if you book with 8 people at once. So if you are traveling with a group, booking a Valley Transporter together is a good cost-saving option. It is also possible to get a Lyft or an Uber from the airport, though it can become quite costly.

Alternatively, you might find it convenient to rent a car while you are here, to drive from the airport to Amherst.

Finally, there is Amtrak train service to nearby Northampton (10 minute drive) and Springfield (30 minute drive), so you might wish to consider train travel as well.

If you have questions about how to get yourself from the airport to Amherst, MA, please don’t hesitate to be in touch with us at cunyumass@gmail.com.

Lodging

CUNY is being hosted on the UMass Campus, in the UMass Campus Center. This means that there is one hands-down winner for the most convenient lodging option: Hotel UMass. Hotel UMass is situated above the conference venue, in the same building. However, space at Hotel UMass is limited, so if walkability to the conference site is valuable to you, please book early. We have reserved a block of rooms with a reduced rate; the reduced rate is available until 2/19/2020 with the code CCH20C. Please note that the Hotel UMass website will tell you there is no availability during the CUNY dates unless you put the conference code into the drop-down menu under ‘rate options’. Once you do this, you will be able to see any remaining rooms in the CUNY block (which is nearly the whole hotel).

Do not delay in booking Hotel UMass if you want to be close to the conference and the party! It is almost certain to sell out.

Other than Hotel UMass, what other options are there? Here is our rundown of walkable lodging options, along with walk times to the conference site. Other options that are accessible to the conference site, but not easily walkable, are listed at the bottom of this post.

Inn on Boltwood (28 minute walk to conference venue). The Inn on Boltwood is a higher-end lodging option situated right on the town green. It’s got a lovely restaurant with a noted mixologist tending bar, and lots of nice amenities (including in-room fireplaces!). You can walk to campus, but it takes about half an hour; there is also a bus that regularly shuttles between UMass and Amherst College, stopping right by the Inn’s entrance.

University Lodge (16 minute walk to conference venue). The University Lodge is a relatively inexpensive option situated between the campus and downtown. While there is no public transportation from the University Lodge to campus, it is a relatively short walk to both the conference venue and downtown Amherst.

Allen House Inn (35 minute walk to the conference venue). While a bit further away, the Allen House Victorian Bed and Breakfast is perhaps the housing option with the most local character. A good option if you’d like some Victorian digs for your visit to Amherst!

AirBnB is also a solid option for Amherst; if you are looking on AirBnB, we recommend trying to find lodging near the center of downtown Amherst if possible. Downtown is both walkable to the conference venue (~30 minutes) and close to restaurants and nightlife in the evening.

One final option that we recommend considering is the Hotel Northampton. Hotel Northampton is not walkable to the conference site, but there are convenient buses from near the hotel to UMass (http://www.pvta.com/schedules/B43.pdf). Northampton is Amherst’s slightly larger sister city, and Hotel Northampton is located right downtown; staying here will give visitors easy access to Northampton’s restaurant and nightlife scene. AirBnB is also a good option in Northampton, and we again recommend staying closer to downtown.

These are the options recommended for folks who will not have cars with them when they come. If you are coming with a car, then there are many more options open to you! See a full listing here.

Pioneer Valley

OK, so you’ve made the long trip to Amherst, learned a lot about psycholinguistics, and are looking to have a little fun with the rest of your time here. What is there to do? Here is our list of favorite day trip options if you have a day free in the Pioneer Valley in March:

Sugar shacks (cabanes à sucre): March is prime sugar season in MA, when the sugar maples are tapped to make maple syrup. If you’d like to get fresh maple syrup ladled over a stack of pancakes, head over to one of the local sugar shacks. You can find a map of area sugar shacks here.

Mass MOCA (https://massmoca.org/) is one of the country’s largest museums for contemporary art, located in North Adams, about a 1 hour drive from Amherst. Housed in a renovated lightbulb factory, it is a sprawling, immersive museum experience and well worth the trip if you have the time.

Deciding on a program

Notifications have been sent out, and authors are now putting the finishing touches on their abstracts for the final CUNY 2020 program. We had 430 abstracts submitted to CUNY this year. The vast majority of submissions received three reviews, while a few received four, and a few received only two.

In our program, we had room for 27 talks and 270 posters.  This meant that 133 abstracts were rejected outright.  This rejection rate (~31%) was slightly higher than in the last few years, and higher than we would have liked, but we were pinched between space constraints on the number of posters we could accept and the high number of submissions that came in.  There was a lot of good work that we couldn’t accept for this year’s CUNY, and it wasn’t fun to make that call.

How did we decide what got a talk? The short answer: we rolled up our sleeves, read lots of abstracts and reviews, and talked it out. It took two full days of meetings with the full group to hash it out. This was easily the hardest thing we’ve had to do so far, both in terms of the overall effort and the difficulty of the decisions we had to make.

First off, we had a number of criteria that we agreed on to guide the process:

  • High ratings from the reviewers, weighted equally across all of the rating criteria.
  • Appropriateness for the special session
  • Diversity of topics, speakers, institutions, and languages

Equipped with our criteria, our process was simple: We worked our way down the list of abstracts, in descending order of their average rating across reviewers, and considered each in turn for a talk by reading the abstract and all of the reviews, until we got to our target number of talks. We used the quantitative portion of the reviews to help us focus our attention on the abstracts that most consistently generated enthusiasm among the reviewers, but we did not use the quantitative ratings as a hard filter on what was and wasn’t a talk; the scales were used too differently across reviewers for that to be a reasonable strategy.

Instead, the qualitative content of the reviews in conjunction with our own read of the abstract was what was critical in making our decisions. In general, the more detail provided in a review, the more that review influenced our judgment. A set of all 6’s or all 2’s without any comment or with only general/short comments received less weight than a review accompanied by clear justification. Comments like ‘I think this abstract is exceptionally novel and interesting because of their use of method X or because of the novel theoretical insight Y, and it deserves to be heard as a talk’ or ‘I think this abstract is not yet ready for a talk because of concern Z so I recommend a follow-up to address Z before this would be successful as a platform presentation’ were generally quite useful. A persuasive and concrete review often helped us make the decision one way or another.

The ‘sample from the highly ranked abstracts and listen to our colleagues’ model ran into multiple snags along the way, as you might expect! For instance, a simple application of this strategy led to more than one talk from the same first author, which seemed to us to be in conflict with our ‘diversity of speakers’ goal; in that situation, we made a judgment call about which of the two candidate talks would be a better fit in the overall program. In a similar vein, we sought to avoid too many talks on the same narrow topic, and too many talks from a single institution. To come up with our list of 27 talks, we ended up sampling and considering roughly the top 20% of abstracts, with significantly more consideration given to the top end of that distribution.
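As a rough illustration, the core of this selection pass could be sketched as below. It is a simplification of what was ultimately a group discussion: the field names are invented, and the hard one-talk-per-first-author rule stands in for a judgment call we made case by case.

```python
# Hypothetical sketch of the talk-selection pass: walk down the
# abstracts in descending order of mean reviewer rating, skipping a
# second talk by the same first author.
def select_talks(abstracts, n_talks=27):
    ranked = sorted(
        abstracts,
        key=lambda a: sum(a["ratings"]) / len(a["ratings"]),
        reverse=True,
    )
    selected, first_authors = [], set()
    for abstract in ranked:
        if len(selected) == n_talks:
            break
        if abstract["first_author"] in first_authors:
            continue  # 'diversity of speakers': at most one talk per first author
        selected.append(abstract)
        first_authors.add(abstract["first_author"])
    return selected
```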

Decisions about rejections were made largely on the basis of reviewer scores, averaged across all reviewers and questions, but with some tweaks. In particular, we inspected the variance of the ratings on the submissions that received low average ratings, looking for submissions that had, say, two high scores and one low one. We then took a second look at each of those abstracts individually for the main program, based on the contents of the reviews.
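That ‘second look’ filter might look something like the sketch below. The thresholds (what counts as a high or low score) are invented for illustration; in practice we eyeballed the rating variance rather than running a script.

```python
# Hypothetical sketch of the rescue filter: flag submissions whose
# reviewers disagreed, with at least two high scores and one low one.
def flag_for_second_look(scores, high=5, low=3):
    return sum(s >= high for s in scores) >= 2 and any(s <= low for s in scores)
```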

Assigning reviews

CUNY reviews are due soon, so we thought it might be interesting to describe the process by which we assigned the abstracts to reviewers. We tried to distill this complicated process down to a single (long) day, which was made possible with a little organization and pre-planning. When the big day came, we booked a room in the Linguistics department from morning until night, ordered a whole bunch of Dunkin’ Donuts, and hunkered down:

We were lucky enough to have a ton of psycholinguistic brainpower to apply to the task. We had support from our lovely and talented students, who graciously took time out of their breaks to pitch in. Special thanks go to Anissa Neal, Bethany Dickerson, Alexander Göbel, Erika Mayer, Michael Wilson, and Kuan-Jung Huang. And faculty from all over our university and others pitched in: Mara Breen, Charles Clifton, Brian Dillon, Lyn Frazier, Jennie Mack, Shota Momma, and Adrian Staub. We even had little baby Toma Negishi (seen on Shota’s lap) helping us out!

We wanted to keep the actual abstract assignment process to a single day, and to do this, we did a good amount of prep work before the big day so that we were ready to hit the ground running. A week before our meeting date, we created a master Google Sheet that contained information and links to all 430 abstracts, which we were able to download from our conference management software (SoftConf). Our goal was to have an ‘abstract manager’ from our group for each abstract. Given the number of people pitching in, that worked out to around 33 abstracts each. In advance of the meeting, everyone got a chance to claim the abstracts they were most interested in, with students getting first dibs. This way, everyone involved was responsible for assigning reviewers to abstracts that fell close to their areas of interest, the idea being that this would put the abstract manager in the best possible position to assign reviewers from the reviewer list.

The actual reviewer assignment process relied on a central Google Spreadsheet that we all worked on in parallel, so that everyone could see everyone else’s reviewer assignments. Our sheet tracked the overall assignment load to each reviewer so that people could work in parallel, but still see how many abstracts were being assigned to any given reviewer. If you would like to use our Google Sheet set-up for your own purposes, you can find a pared down version of it here. It had a couple of bells and whistles that facilitated things, such as a pivot table that tracks the number of abstracts assigned to reviewers, and regular expressions plus conditional formatting in Google Sheets to automatically identify if an author was assigned to their own abstract.
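For anyone adapting this setup, here is a rough Python equivalent of that self-assignment check. (The actual sheet did this with a Google Sheets regular-expression formula plus conditional formatting; the row structure below is invented for illustration.)

```python
import re

# Hypothetical sketch: given (author string, assigned reviewers) rows,
# return the indices of rows where a reviewer appears among the authors.
def self_assignment_conflicts(rows):
    conflicts = []
    for i, (authors, reviewers) in enumerate(rows):
        for reviewer in reviewers:
            # case-insensitive whole-name match against the author string
            if re.search(r"\b" + re.escape(reviewer) + r"\b", authors, re.IGNORECASE):
                conflicts.append(i)
                break
    return conflicts
```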

Over the course of a (long) day, we hung out and assigned reviewers together by jointly editing the sheet. Everyone entered their proposed reviewers from a master list of 173 reviewers into the spreadsheet to assign them to the abstracts. We continuously edited the proposed assignments in order to ensure an equitable distribution of abstracts to reviewers. We found it especially helpful to do this all at once, together in the room. Not only was it more fun, but we were able to draw on each other’s knowledge when we weren’t sure if a given reviewer was a good fit for a given abstract. One thing we found very difficult was avoiding conflicts of interest beyond the obvious ones; the last couple of hours involved a close edit of all the assignments in an attempt to ensure that people from the same lab were not reviewing each other’s abstracts (and that, e.g., someone wasn’t reviewing an abstract by their former postdoc).

Lastly, Mara Breen deserves special acknowledgement for managing all of the abstracts that had UMass authors associated with them. This was something we initially had no idea how to handle. The solution we came up with was to give Mara responsibility for all of the UMass-affiliated abstracts. Mara worked on a separate sheet to assign reviewers to those, so that the authors of those abstracts remained unaware of who was reviewing their work. This was done to make sure that the reviewers assigned to a UMass abstract could expect the same anonymity afforded to all other reviewers, which is an important part of the process.

Choosing reviewers

One of the major projects behind the scenes of organizing CUNY is getting the review process organized and running. One major, early part of this is to decide who gets to review abstracts. The CUNY tradition in this regard has been informal; organizers have simply kept tabs on who is an active, contributing member of the community, and included them as part of a long list of reviewers who are tapped year after year to review abstracts. This list tends to get passed down from organizer to organizer, with small modifications along the way.

This year, we decided to expand the set of reviewers in a more systematic way. The main reason for this was a Twitter discussion started by a number of junior, post-PhD researchers in the CUNY community around this time last year. The question that they asked was: “How do you become a CUNY reviewer?” It became clear to us that this process was totally mysterious from their perspective: you were never asked until one day, you were. We had a chance to talk to a number of junior researchers who had not been reviewing for CUNY, despite having been active members of the community, and to get their thoughts on this.

As a result of these talks, we decided to make some changes to reviewer recruitment. We had several constraints we tried to respect. The list of reviewers could be expanded only so much while keeping the review process manageable, and we wanted our process of selecting new reviewers to be objective and transparent, rather than relying on personal connections. In light of these constraints, the newly-invited reviewers for this year’s CUNY will include:

  • The existing list of CUNY reviewers, passed down from previous years, and
  • All post-PhD researchers who were first author on one (non-invited) talk in the last five years.

This will add about 50 new potential reviewers to the list of 167 that we have inherited from last year’s CUNY.

We want to be clear that in expanding the list in just this way, we do not mean to imply that only the people who presented a talk in the last five years are valuable, contributing members of the community. That is obviously false. In fact, our major problem in trying to decide on reviewers was that there are too many valuable members of our community! When we considered a number of different objective criteria that we could imagine for making this cut (e.g., by including first authors on recent posters), we consistently ran into the problem of increasing the reviewer base by several hundred reviewers, to a completely unwieldy size. Our reviewer selection criterion was a compromise that allowed us to objectively expand this pool, while keeping the overall number of reviewers manageable.

Review criteria

Submissions for CUNY are open, so we thought it would be a good time to talk about the review criteria for submissions, and how we decided on what those would be. The conference organizers have a good degree of freedom in deciding the criteria that reviewers are asked to address. This is largely passed down unchanged year to year. But when we sat down and discussed it, we realized we were interested in changing some of the review criteria slightly. We thought it would be useful to announce these changes in advance of the submission deadline, so that researchers planning submissions can bear them in mind when writing their abstracts. We also thought folks might appreciate knowing what our reasoning behind these decisions was.

We are asking reviewers to rate each abstract numerically on four dimensions. These are:

  • Is the work methodologically sound? 1 means unsound; 7 means impeccable.
  • Does the work have substantial theoretical implications for psycholinguistics, linguistics, or psychology? 1 means little or unclear theoretical import; 7 means exceptionally notable theoretical import.
  • Is the work methodologically or theoretically innovative? 1 means not original and not innovative; 7 means exceptionally original and innovative.
  • Overall score. Please take into account the originality and quality of the work as well as its relevance to the field. Use the following scale: 1 (poor), 2 (fair), 3 (“meh”), 4 (good), 5 (very good), 6 (excellent), 7 (outstanding). A rating of “7” should be reserved for submissions that are innovative, methodologically impeccable, and theoretically significant. Please use the entire scale.

The first question is exactly the same as in previous years. The second question was elaborated a bit, because ‘theoretically significant’ is likely to mean different things to different reviewers. The third question was previously framed as “Is the work timely and innovative?” We decided we did not want to ask reviewers to rate the ‘timeliness’ of the work: focusing on innovation, rather than trendiness, seemed more important to evaluating the abstracts. Our goal was to promote more consistent criteria in the responses to this question across reviewers.

Another question that has been asked in previous years is ‘Is the work of broad interest to the CUNY community?’ We have dropped this question entirely, for two reasons. First, it was not obvious to us in discussion that this is a great criterion for deciding (e.g.) what should be a talk, and what should be a poster. We wanted to develop a reviewing process whereby innovative, sound, impactful, and non-mainstream work had a good opportunity to rise to the top. Second, it seemed to us that reviewers varied widely in how they responded to this question, in ways that seemed orthogonal to our goal of providing a solid, scientifically engaging program. For example, we heard from many reviewers in past years who admitted that they were not happy giving honest low marks on this question in response to outstanding but non-mainstream work, knowing that it might ruin the chances of this work getting a talk.

Finally, as in previous years, reviewers will be asked to provide detailed narrative comments to back up, and elaborate on, their ratings.

Hello!

Organizing a conference, it turns out, involves a lot of moving pieces. We’ve been lucky enough to get advice, support, and close guidance from many past organizers. These folks have been invaluable in helping us navigate this complex process, and we couldn’t be more grateful for this support (especially Fernanda Ferreira and Al Kim, organizers of the two preceding CUNY conferences). One thing that we wanted to contribute to this tradition was a running record of our organizing process, in the hopes that it might be useful for future CUNY organizers (or organizers of similar conferences). But in addition to this, it struck us that the community might be interested in seeing how our CUNY conference came together ‘behind the scenes,’ to take some of the mystery out of the organizing process and answer burning questions like ‘who are these linguists and what are they doing to CUNY?’ We felt that we could make some progress towards both of these goals by maintaining a blog that keeps a record of our process. We were directly inspired to do so by Emily Bender and Leon Derczynski (we highly recommend you check out their very interesting COLING 2018 PC blog).

Over the next 10 months, we will blog about our organizing progress, our decision-making process, and all the other fun parts that go into organizing a conference of this size. Our hope is to create a stable resource that future organizers might draw on, but also give the community a sense of how our vision for the conference evolved over the course of planning the event.

What can you expect to hear about here? Well, we anticipate we’ll have lots to say about the process of

  • Choosing a venue and a space
  • Deciding on our special session topic
  • Securing funding for CUNY
  • Recruiting reviewers
  • Reviewing abstracts and deciding on a program
  • … and more!

Keep your eye on this space going forward. In the meantime, let’s introduce the team of CUNY web denizens behind the blog and twitter account:

Brian Dillon

Brian Dillon is an Associate Professor in the Linguistics department, and one of the faculty organizers of CUNY 2020, along with Adrian Staub, Lyn Frazier, and John Kingston.

Anissa Neal

Anissa Neal is a second year graduate student in the Linguistics department. She is interested in syntactic theory, psycholinguistics, the processing of filler-gap dependencies, and African American English. In her research she uses eye-tracking-while-reading and behavioral methods.

Jon Burnsky

Jon Burnsky is a third year graduate student in Psychological and Brain Sciences. He is interested in thematic role processing, lexical prediction, and the formatting of lexical representations. In his research he uses eye-tracking-while-reading, EEG/ERPs, and behavioral methods.