Background
I started in astrophysics, where the work was computational from the beginning. My PhD wasn't about pointing telescopes — it was about building models of radio emission from expanding nova ejecta, reconstructing images from interferometric data, and wrangling observations across ten or more instruments with inconsistent formats, units, and metadata. The interesting problem was always the same: how do you extract a real signal from noisy, imperfect, heterogeneous data?
From there I moved into roles where I was building things for other people, not just for my own research. At Macalester, I co-created a computational physics course from scratch — it's now a permanent part of the major. At Michigan, I founded and secured funding for a professional development program and redesigned a quarter of the department's lab curriculum. At Michigan State, I ran a multi-section course with a team of 25+ faculty and TAs, using performance data to drive iterative improvements. At Carleton, I led 30 applied machine learning projects across a dozen domains, translating vague questions into concrete implementations. The pattern shifted: I was still building tools, but now the tools were for teams, for students, for institutions — and I was learning to own what I built rather than move on to the next paper.
The Open Nova Catalog is where it all comes together. I took a dataset I've known since grad school and decided to build it right — not as a research prototype, but as a production system designed to persist. Serverless architecture, contract-first backend, 30+ architectural decision records written before code, an immutable release model designed for safe rollback. It's the project that best represents how I work now: scientific rigor, real engineering, and a conviction that if you're going to build something, you owe it to the next person to build it right.
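The immutable release model can be sketched in miniature: each release is a frozen snapshot, the release history is append-only, and "current" is just a movable pointer, so rollback never rewrites published data. The class and field names below are illustrative assumptions, not the catalog's actual implementation.

```python
from dataclasses import dataclass
from types import MappingProxyType

@dataclass(frozen=True)
class Release:
    """One immutable catalog release: a version tag plus a read-only snapshot."""
    version: str
    records: MappingProxyType  # hypothetical mapping, e.g. {nova_id: record}

class Catalog:
    """Append-only release history; rollback moves a pointer, never deletes."""
    def __init__(self):
        self._releases: list[Release] = []
        self._current: int = -1  # index of the live release

    def publish(self, version: str, records: dict) -> Release:
        # Freeze the snapshot so nothing can mutate a shipped release.
        release = Release(version, MappingProxyType(dict(records)))
        self._releases.append(release)
        self._current = len(self._releases) - 1
        return release

    def rollback(self, version: str) -> Release:
        # Safe rollback: repoint "current" to a prior release; history stays intact.
        for i, release in enumerate(self._releases):
            if release.version == version:
                self._current = i
                return release
        raise KeyError(f"no such release: {version}")

    @property
    def current(self) -> Release:
        return self._releases[self._current]
```

Because every published release survives untouched, rolling forward again after a rollback is just another pointer move — the property that makes the model safe to operate.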