While looking over papers from the past year, one theme in particular stood out to me: meta-optimization, or optimizing how we optimize things.
Note: old versions of this post lacked a discussion of SN2. I've added an appendix which remedies this. In “The Rate-Limiting Span,” I discussed how thinking in terms of the span from ground state to transition state, rather than in terms of elementary steps, can help prevent conceptual errors. Today, I want to illustrate why this is important in the context of a little H/D KIE puzzle.
Last January, I aimed to read 50 books in 2022. I got through 32, which is at least more than I read in 2021. There’s been a bit of discourse around whether setting numerical reading goals for oneself is worthwhile or counterproductive.
A technique that I’ve seen employed more and more in computational papers over the past few years is to calculate Boltzmann-weighted averages of some property over a conformational ensemble. This is potentially very useful because most complex molecules exist in a plethora of conformations, and so just considering the lowest energy conformer might be totally irrelevant.
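To make the idea concrete, here is a minimal sketch of Boltzmann weighting in plain Python (this is illustrative only, not cctk's API; the conformer energies and property values are made up):

```python
import math

R = 0.0019872  # gas constant in kcal/(mol·K)
T = 298.15     # temperature in K

# Hypothetical conformer data: relative energies (kcal/mol, vs. the
# lowest-energy conformer) and some computed property for each conformer.
energies = [0.0, 0.5, 1.2]
properties = [7.1, 6.4, 8.0]

# Boltzmann weight of each conformer, and the partition function Z
weights = [math.exp(-e / (R * T)) for e in energies]
Z = sum(weights)

# Weighted average of the property over the ensemble
avg = sum(w * p for w, p in zip(weights, properties)) / Z
print(round(avg, 2))
```

Note that because the weights fall off exponentially with energy, conformers more than a few kcal/mol above the minimum contribute almost nothing at room temperature, which is why low-energy conformers dominate the average.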
Today I want to engage in some shameless self-promotion and highlight how cctk, an open-source Python package that I develop and maintain with Eugene Kwan, can make conformational searching easy. Conformational searching is a really crucial task in computational chemistry, because pretty much everything else you do depends on having the correct structure in the computer.
Since my previous “based and red pilled” post seems to have struck a nerve, I figured I should address some common objections people are raising. Although this is obvious, I wanted to preface all of this by saying: this is my opinion, I'm not some expert on systems of science, and many of the criticisms come from people with much more scientific and institutional expertise than me. It's very possible that I'm just totally wrong here!
¹³C NMR is, generally speaking, a huge waste of time. This isn’t meant to be an attack on carbon NMR as a scientific tool; it’s an excellent technique, and gives structural information that no other methods can. Rather, I take issue with the requirement that the identity of every published compound be verified with a ¹³C NMR spectrum. Very few ¹³C NMR experiments yield unanticipated results.
One of the more thought-provoking pieces I read last year was Alex Danco’s post “Why the Canadian Tech Scene Doesn’t Work,” which dissects the structural and institutional factors that make Silicon Valley so much more effective at spawning successful companies than Toronto. I’ll briefly summarize the piece’s key arguments here, connect it to some ideas from Zero to One, and finish by drawing some conclusions for academia.
Modeling ion-pair association/dissociation is an incredibly complex problem, and one that's often beyond the scope of conventional DFT-based techniques.
Last week, I posted a simple Lennard–Jones simulation, written in C++, that models the behavior of liquid Ar in only 1561 characters. By popular request, I'm now posting a breakdown/explanation of this program, both to explain the underlying algorithm and to illustrate how it can be compressed.
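For readers who haven't seen the model before, the heart of any such simulation is the Lennard–Jones pair potential and its derivative. A minimal Python sketch in reduced units (not the compressed C++ code from the post) might look like:

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*[(sigma/r)^12 - (sigma/r)^6]."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Magnitude of the radial force, -dV/dr."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r

# The potential reaches its minimum of -epsilon at r = 2^(1/6) * sigma,
# where the attractive and repulsive terms balance and the force vanishes.
r_min = 2 ** (1 / 6)
print(lj_potential(r_min))  # → -1.0
```

A full simulation then just sums these pairwise forces over all particle pairs and integrates Newton's equations of motion (e.g. with velocity Verlet), which is what the C++ program does in compressed form.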