Sunday, June 9, 2019

Be the CFP review you want to be reviewed by

There are lots of infosec conferences, which means lots of CFPs and lots of talks to review. I participate in several and figured I'd share some of the lessons I've learned.  A caveat: this is highly opinionated.  It's my experience, so it probably doesn't apply to everyone.  I mostly review for small, specialized tracks and conferences, so I'm reviewing dozens of talks, not hundreds.

The CFP

Set yourself up for success.  There are probably 5 things you need to ask for in addition to the speaker info.  If you don't ask for them up front, you'll end up asking for them later:
  1. A title
  2. An abstract. Make it clear you'll be printing the abstract!
  3. A bulleted outline. If you don't ask for it in the CFP, you'll end up asking the submitters who don't supply one anyway.
  4. What attendees will gain.  This could be processes, tools, or knowledge.  It's the second most common question I have to ask, after asking for an outline.  It also helps distinguish between vendor pitches and useful talks.  Vendors will often speak about how _they_ did something but not necessarily how attendees can do it.
  5. An attachment field.  This will let people share slides, longer outlines, detailed explanations of the talk, etc.  It's important for people who want to answer your specific questions but feel they have more they need to share.

The rating

Set your raters up for success.  You can ask your reviewers to answer lots of questions about talks, but the reality is only a few will be used.  I'd recommend three (stolen from bsidesNash):
  1. Content (0-5). How good is the content, and how likely is the speaker to deliver it well?
  2. Applicability (0-5). How applicable is the content to the conference/track/interests of attendees/etc.?
  3. Comments/notes to submitters.
Most other questions are likely just another way of asking all or part of question 1 or question 2.  For example, asking "Has this speaker done a good job in previous talks?" is really just a question to help predict the quality of the content.

Questions 1 and 2 could be combined into a single accept-reject range of 0-5.  I prefer keeping them separate, as neither I nor the other raters I've worked with have had trouble answering both questions for every talk.  They're also orthogonal, with very little effect of one on the other.

I also recommend 0-5.  Honestly, it can be 0 to anything.  The goal is simply to have a range that normalizes to 0%-100% easily.  1-5 does not.  Is 1-5 20%, 40%, 60%, 80%, and 100%?  Is it 0%, 25%, 50%, 75%, 100%?  It's unclear how it maps out.  Terms are even worse. "Really bad", "bad", "ok", "good", "really good"?  Is that 0%/25%/50%/75%/100%?  If so, just use those numbers.  0-5 maps easily to 0/20/40/60/80/100%.  You could also simply provide a slider from 0 to 1 to let people provide the granularity they want.
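If it helps to see the ambiguity spelled out, here's a minimal sketch (plain Python; the function names are just for illustration).  A 0-5 scale has exactly one sensible mapping to a percentage, while 1-5 has at least two defensible readings:

  def normalize_0_to_5(score):
      # Unambiguous: 0 -> 0%, 3 -> 60%, 5 -> 100%
      return score / 5 * 100

  # A 1-5 scale has at least two defensible readings, which is the problem:
  def normalize_1_to_5_as_fifths(score):
      # One reading: 1 -> 20%, 3 -> 60%, 5 -> 100%
      return score / 5 * 100

  def normalize_1_to_5_rescaled(score):
      # Another reading: 1 -> 0%, 3 -> 50%, 5 -> 100%
      return (score - 1) / 4 * 100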

Every rater should leave a note that can be passed to the submitter.  The notes may be passed along directly, summarized, or aggregated, but you'll need them.

Each rater will probably also keep their own notes that don't get shared with the submitter.  It's honestly never clear to raters which comment fields in the online review system will or won't be seen by the submitter, so you might as well have a single field that will be shared and tell raters to keep private comments offline.  It also helps the raters think about how to communicate their feedback positively.

I'd also recommend making raters provide a rating before seeing the submitter.  Even if they can go change their score after the fact, it helps remove implicit bias based on the submitter.  It's ok if a rater rates something, sees the submitter, and updates their opinion based on additional information (about the org, previous talks by the speaker, etc.) that they can clearly articulate.  But you don't want information about the submitter, their company, experience, or other submissions influencing the rating implicitly, and you don't want submitter ethnicity, gender, sexual orientation, etc. influencing it at all.
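If your review tool doesn't support blind rating natively, one way to approximate it is to strip identifying fields from the submission export before the first rating pass.  A minimal sketch, assuming submissions come out as dictionaries; the field names here are hypothetical:

  # Hypothetical field names; adjust to whatever your CFP tool actually exports.
  SUBMITTER_FIELDS = {"speaker_name", "email", "company", "bio", "twitter"}

  def blind_copy(submission):
      # Return a copy with submitter-identifying fields removed, so raters
      # score the content before seeing who submitted it.
      return {k: v for k, v in submission.items() if k not in SUBMITTER_FIELDS}

  # 'submissions' here stands in for the raw export from your review system.
  # blinded = [blind_copy(s) for s in submissions]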

Pre-rating

There are two things you should do as soon after CFP submission closes as possible, even before rating the talks.
  1. Identify talks that should be moved to another track/reviewer.  
  2. Identify talks where you need to ask the submitter a question to accurately review the talk.
These two things are impossible to accomplish late in the review process.  The first only really applies if you have multiple tracks with multiple raters.  But if you wait to move a submission, more than likely the receiving rater will already be done and won't be interested in another talk.  

For questions, it often only takes minutes, hours or a day to get an answer back, but if the review team is all on the phone making selections, that answer will be too late.  Even if it's to ask for an outline, a more detailed explanation of the submission, or what attendees can expect to learn, most submitters have an answer and can get it to you quickly.

Try to do a pass through the submissions before reviewing and identify any submissions that fall into either category.  Addressing them up front will lead to better outcomes for everyone at review time.

The review

After the ratings are in, it's time to review them to pick the talks:
  1. Start with some mathematical analysis of your talks (see the sketch at the end of this section).  I do it with two scores in this blog, but it works just as easily with a single rating per talk.  Being able to visually check a talk's scores is strikingly helpful.  I've watched it save CFPs that were completely off track, take review meetings that were going nowhere and turn them around, and halve the time reviewing takes.
  2. Start with the talks that everyone rated perfect or near perfect.  If everyone agreed they're good, don't waste time rehashing it.  Mark these "accept".
  3. Then go to the bottom of the list and work your way up.  Basically, if no one is willing to fall on their sword for a talk, "reject" it or mark it on the bubble.  (We tend to use "bubble up" or "bubble down": up for talks you'd accept if you could, down for talks you'd only take if you had to.)
  4. At some point you'll get to talks that people liked but that had some flaw.  Raters will be saying "I liked this one, but..."  That means you're now into the middle section of the talks.  Go back to the top, just after the talks you've already accepted, and work your way down marking "accept", "reject", "bubble up", or "bubble down".  Be biased against accepting: it's easier to pull talks up from the bubble than to accept more talks than you can take and have to cut again.
  5. Identify backup speakers.  How many is up to you, but I like one per track per day.  (Add at least one extra if international speakers are accepted, as many things can prevent them from making it.)  I also like to identify someone on staff who will 'just be there', can be easily found, and can give a talk (rather than leaving an empty room) if anything goes wrong.
Also, we tend to give reviewers one veto each, usually for a talk they absolutely want, that they can use to override the prevailing opinion of the group.
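As a sketch of what that first mathematical pass can look like (Python; the talk names and data shape are made up): normalize each rater's pair of 0-5 scores, sort best-first, and use the spread to flag talks the raters disagreed on.

  from statistics import mean, stdev

  # Hypothetical data shape: per-talk lists of (content, applicability) ratings, each 0-5.
  ratings = {
      "Talk A": [(5, 4), (4, 5), (5, 5)],
      "Talk B": [(2, 3), (3, 1), (3, 2)],
      "Talk C": [(5, 5), (1, 1), (3, 3)],
  }

  def summarize(scores):
      combined = [(c + a) / 10 * 100 for c, a in scores]  # normalize each rating to 0-100%
      return {
          "avg": round(mean(combined), 1),
          "spread": round(stdev(combined), 1) if len(combined) > 1 else 0.0,
      }

  # Best-first list for the review call.  A large spread means the raters disagreed,
  # which flags the talks worth discussing rather than the unanimous ones.
  for talk, scores in sorted(ratings.items(), key=lambda kv: summarize(kv[1])["avg"], reverse=True):
      s = summarize(scores)
      print(f"{talk}: {s['avg']}% (spread {s['spread']})")

The near-perfect, low-spread talks are your quick accepts; the low, low-spread talks are your quick rejects; the high-spread ones are where the meeting time should go.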

The notification

Now the part no CFP organizer likes: notifying people (particularly the non-accepts).  This happens in a few stages:
  1. Notify all of the accepts.  You need all of them to confirm that they can still make it.  Until they confirm, you don't have a talk.  That said, this normally happens pretty quickly.  Accepted people are excited and generally respond fast.
  2. Notify the bottom three-quarters of the non-accepts.  You can't notify all of them yet, because some accepts may no longer be able to make it, and so some of the non-accepts may turn into accepts.
  3. Once all of the accepts have confirmed, notify the backups and get their confirmations.  (Note that if some of your accepts didn't confirm, you may need to move a backup to an accept and a bubble-up to a backup.)
  4. Finally notify any non-accepts that have not been notified.
All non-accepts deserve some feedback on why they weren't accepted.  It could be that the content wasn't the right fit, or that the talk felt too complex or not complex enough.  It could be that the reviewers felt attendees wouldn't take much away from the talk.  It could be that there were grammatical errors in the abstract.  It could simply be that there wasn't enough information for raters to be confident it would be a good talk.  But all non-accepts deserve to hear from you.

And the rest of it

At this point, it turns into a speaker management job: making sure speakers have everything they need and know where to be and what to do.  That lasts until the speaker has completed their talk, but that's a subject for another post.