
Last week I had the privilege of attending the 2025 Progress Conference, which brought together a diverse cadre of people working on progress, metascience, artificial intelligence, and related fields. I was surprised by how optimistic the median attendee was about AI for science.

I don’t write about my non-working life much on the Internet; my online presence has been pretty closely tied to Rowan, and I try to adhere to some level of what Mary Harrington calls “digital modesty” regarding my family. Still, some basic demographics are helpful for context. I have two kids (aged 2 and 4) and one more due in December.

Inspired by related writings from Scott Alexander, who is funnier than me. TW: fiction, but barely. AlphaProteinStructure-2 is a deep learning model that can predict the structure of mesoscale protein complexes like amyloid fibrils. AlphaProteinStructure-2 is free for academic use.

In center-right tech-adjacent circles, it’s common to reference René Girard. This is in part owing to the influence of Girard’s student Peter Thiel. Thiel is one of the undisputed titans of Silicon Valley: co-founder of PayPal, Palantir, and Founders Fund; early investor and board member at Facebook; creator of the Thiel Fellowship;

(This post is copied, almost without modification, from some notes I gave to our summer interns at Rowan. Hopefully people outside Rowan find this useful too!) This is a brief and opinionated guide on how to give a research talk to an external audience. Some initial points of clarification: this guide is for a research talk, not a sales call or a VC pitch. Research talks have their own culture and norms;

The past is powerful evidence for arguments about the present. Since the time of Livy and Tacitus, it’s been common to cite history to advance some ideological, cultural, or political idea. While there’s nothing wrong with this in principle, these lines of discourse often break down because (1) people know very little history and (2) what little history they do know is usually wrong.

Expanded from a post on X, which I felt didn’t do a good job expressing all of what I meant. The past few years of “AI for life science” have been all about the models: AlphaFold 3, neural-network potentials, protein language models, binder generation, docking, co-folding, ADME/tox prediction, and so on. But Chai-2 (and lots of related work) shows us that the vibes are shifting. Models themselves are becoming just a building block;

“You dropped a hundred and fifty grand on a f***** education you coulda got for a dollar fifty in late charges at the public library.” — Good Will Hunting. I was a user of computational chemistry for years, but one with relatively little understanding of how things actually worked below the input-file level.

Recently, Kelsey Piper shared that o3 (at time of writing, one of the latest reasoning models from OpenAI) could guess where outdoor images were taken with almost perfect accuracy. Scott Alexander and others have since verified this claim.