Reflections from the second Simons workshop

The workshop has been an interesting meeting of minds. The talks varied considerably in quality, and I will reflect on some of the themes below. In general I find the standard lecture-based setup of this kind of workshop a bit sleepy (especially after a dozen talks) and not conducive to deep interactions.

What if we organized it as follows: a few hour-long plenaries by people like Les, Haussler, Eric Siggia, and Christos, where each talk is 35 minutes of material with a 10-minute halftime discussion and 10 minutes of discussion at the end, plus plenty of questions mixed in; a few hour-long chalk talks (at most 2 slides); and a bunch of “lightning talks”, each lasting ~5 minutes. This leaves time for standard coffee breaks plus two hours each day for speed dating. The idea is that a pair of researchers meets for 30 minutes to discuss, so each person meets 4 others every day. Each participant submits some research interests, and the organizers pair up people who might have interesting discussions, ideally a mix of people from your own field and from other fields, with priority given to people who don’t already know each other. In a workshop of 70 participants, each person meets 20 strangers over five days, a sizable fraction of the attendees. The point is to spur new interactions outside existing cliques, and especially between juniors and seniors.
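
To make the pairing step concrete, here is a minimal sketch (in Python, using networkx) of how an organizer might compute one round; all names, fields, and scoring weights below are hypothetical illustrations, not part of the proposal itself. The observation is simply that “pair people who might have interesting discussions, preferring strangers from other fields” is a maximum-weight matching problem.

```python
import networkx as nx

def pair_round(participants, fields, already_met,
               cross_field_bonus=2, met_penalty=10):
    """Compute one 30-minute round of speed-dating pairings.

    participants: list of names
    fields: dict mapping each name to a research field
    already_met: set of frozensets {a, b} for pairs who already
        know each other or were paired in an earlier round
    """
    g = nx.Graph()
    for i, a in enumerate(participants):
        for b in participants[i + 1:]:
            # Every pair is feasible; reward cross-field pairs and
            # penalize pairs who already know each other.
            w = 1
            if fields[a] != fields[b]:
                w += cross_field_bonus
            if frozenset((a, b)) in already_met:
                w -= met_penalty
            g.add_edge(a, b, weight=w)
    # Maximum-weight matching pairs almost everyone off (one person
    # sits out if the count is odd) while maximizing the total score.
    return nx.max_weight_matching(g, maxcardinality=True)

# Toy example with four hypothetical participants.
people = ["Ada", "Bo", "Cy", "Di"]
fields = {"Ada": "TCS", "Bo": "biology", "Cy": "TCS", "Di": "physics"}
met = {frozenset(("Ada", "Cy"))}
print(pair_round(people, fields, met))
# e.g. {('Ada', 'Bo'), ('Cy', 'Di')}
```

Running four such rounds a day, updating already_met after each, would produce the schedule above; in practice the submitted research interests would also feed into the edge weights.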

On to the content. This workshop raises several important questions: how does theory make an impact? What is the role of worst-case analysis? How can TCS contribute to understanding evolution? Does evolution need more understanding? Is this a potentially impactful intersection?

In theoretical/modeling research it is very easy to become detached from science and data. It is tempting to: 1. make up problems to solve; 2. make up metaphors; 3. work on very particular extensions of existing models. I believe all of these are traps that can derail a research agenda from having real scientific and social impact. There are many intellectually interesting problems, so with a few exceptions I don’t consider pure mental stimulation to be a main priority.

To avoid these traps, it’s useful to have a checklist to evaluate a potential modeling idea:

  1. Is there a real scientific, empirical puzzle? It’s best if there is a concrete conflict in the data begging for an explanation. Short of this, there should be an empirical (read: data-driven) phenomenon that’s interesting and not well understood.
  2. The approach and model should be new. I firmly believe that in theory and in science the bulk of the impact comes from conceptualizing a new framework and getting the first results. After that, the work tends to get less interesting and much more technical. A double whammy.
  3. If you are still working on a model developed by Fisher more than 50 years ago, there had better be a significant new element, such as new data or new structure; otherwise the results are, in the worst case, merely incremental. In general it’s useful to ask: what am I bringing to this problem that’s unique?

I think a lot of the work in theoretical evolution, modeling, and evolutionary algorithms falls into one of these three traps. For people coming from the physics/math side, traps 1 and 3 are more common; for CS folks, trap 2 is the main danger.
