Welcome to our latest interview with the author of a recent(ish) interesting paper. Today’s interview is with theoretical ecologist Chuliang Song. I asked Chuliang about Song & Levine 2025 Nat Ecol Evol, which proposes a powerful new technique (well, new to ecologists) for testing ecological models such as predator-prey models using time series data. I also asked Chuliang some questions about how the conversation around theory in ecology has changed over the years, as the field has become both more quantitative and more applied. Chuliang is a super interview subject: he’s got a really interesting point of view and he’s great at coining phrases. Seriously, you don’t want to miss this one! Reading this is going to be way more entertaining and thought-provoking than whatever you were planning to spend the next 15 minutes doing.
The interview was conducted by email.
Summarize Song and Levine (2025) for our readers. The paper proposes a new analytical approach (well, new to ecology). What’s that approach and why is it useful?
Ecology has a model hoarding problem. We have way too many models and, at the same time, way too little confidence in any of them. Stefano Allesina once joked in a Quanta Magazine interview that physics textbooks stay roughly the same length over time, because experiments mercilessly execute old theories as fast as new ones are born. Ecology textbooks, on the other hand, just keep getting fatter. We’re fantastic at proposing models, but we’re notoriously terrible at retiring them. Just for predator-prey interactions alone, there are over 40 competing models for how predators eat. There’s a saying I’m fond of: if you keep your mind too open, your brain falls out.
So how do you actually discipline these models? That’s been a central research obsession of mine. We have another project in progress—on what we call label invariance—that’s also about getting better at retiring models. But this paper focuses specifically on what you can squeeze out of time series data.
What we did was borrow a tool from a completely different field. The idea comes from queueing theory—you know, the branch of mathematics that deals with how long you’ll wait in line at Disneyland. Biophysicists, notably Andreas Hilfinger and Johan Paulsson, later borrowed it to study gene expression in cells. The key insight is simple. Every model of population dynamics splits the world into “gain processes”—births, immigration, things making more of you—and “loss processes”—deaths, emigration, things making less of you. It turns out that for any given model, there’s a precise, inescapable mathematical relationship that must hold between how these gain and loss rates covary with the observed population abundance over time. We call this the “covariance criteria.”
This relationship holds no matter what else is going on in the messy real ecosystem you haven’t bothered to model. Indirect interactions? Doesn’t matter. Environmental chaos? Doesn’t matter, as long as things are roughly stationary. You often don’t even need to know the parameter values. So it’s a genuine lie detector test for your model’s structure, not just a check of whether someone is clever enough to twist knobs until the curve looks right.
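To give a flavor of the logic (this is my own illustrative sketch, not the exact criteria derived in the paper): for a deterministic model dN/dt = gain − loss with a bounded, roughly stationary trajectory, the long-run time average of N·dN/dt vanishes (it’s the boundary term of N²/2 divided by a long time window), which forces the time covariance of N with the gain rate to approximately equal its covariance with the loss rate. Here’s a quick numerical check on classic Lotka-Volterra prey dynamics, where the prey’s gain rate is r·N and its loss rate is a·N·P; parameter values and initial conditions are arbitrary choices for the demo:

```python
# Illustrative check: for bounded, roughly stationary dynamics
# dN/dt = gain - loss, time-averaging N * dN/dt = d(N^2/2)/dt ~ 0
# implies Cov_t(N, gain) ~ Cov_t(N, loss). Demo on Lotka-Volterra.

def rk4_step(state, dt, deriv):
    """One fourth-order Runge-Kutta step for a small ODE system."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def lotka_volterra(state, r=1.0, a=1.0, b=1.0, m=1.0):
    """Classic predator-prey model: dN/dt = rN - aNP, dP/dt = baNP - mP."""
    N, P = state
    return [r * N - a * N * P, b * a * N * P - m * P]

def cov(xs, ys):
    """Plain time-series covariance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Simulate a long trajectory, recording the prey's gain and loss rates.
dt, steps = 0.01, 50_000
state = [2.0, 1.0]  # initial prey and predator abundances
N_series, gain_series, loss_series = [], [], []
for _ in range(steps):
    N, P = state
    N_series.append(N)
    gain_series.append(1.0 * N)       # prey births: r * N
    loss_series.append(1.0 * N * P)   # predation losses: a * N * P
    state = rk4_step(state, dt, lotka_volterra)

cov_gain = cov(N_series, gain_series)
cov_loss = cov(N_series, loss_series)
print(cov_gain, cov_loss)  # the two covariances nearly coincide
```

If you instead recorded the gain rate of a structurally wrong model (say, a different functional response) against the same abundance series, the equality would generally fail, which is the sense in which the criterion interrogates model structure rather than fitted parameter values.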
We let this loose on three classic ecological headaches: the decades-long functional response debate (is predation prey-dependent or ratio-dependent?), figuring out how to model rapid evolution, and the hunt for higher-order species interactions. When the biophysicists first used this method in their field, it went on a brutal killing spree, falsifying almost every published model of gene expression. We fully expected a similar bloodbath in ecology. But much to our surprise, the classic Lotka-Volterra model—the one that textbooks routinely dismiss as too simple and unrealistic—actually survived the gauntlet. So the method is a genuinely useful tool—not just for retiring bad models, but for proving that some of our oldest approximations actually know what they’re doing.
Can you say a bit about how that paper came about? It’s an unusual paper, in that it’s taking a methodological approach originally developed in one field—physics—and applying it to a very different field (ecology).
I did my PhD in an engineering department, so there weren’t many theoretical ecologists to grab coffee with. I ended up just hanging out with the biophysics community in Boston instead. One day, a friend casually mentioned some work coming out of Johan Paulsson’s group at Harvard—they had just built this mathematical method to audit models of stochastic gene expression. Honestly, it was never meant to be a research project. Like any easily distracted grad student, I was just curious to see how the mathematical gears turned under the hood, so I sat down and worked through the equations for fun.
Then I did absolutely nothing with it for years. Classic academia. It wasn’t until my postdoc at Princeton that the idea came back to life. My advisor, Jonathan Levine, gave me complete freedom to chase whatever I found interesting. I pitched this idea to him and we started working on it together. But the translation was far from straightforward. Biophysics deals with discrete counts of individual mRNA and protein molecules inside cells—the math is built on master equations that track every single reaction event. Ecology works with continuous variables like biomass and density. On top of that, ecological time series are typically shorter and noisier than what biophysicists get to play with.
So Jonathan and I spent a lot of time figuring out exactly when and why the math survives the border-crossing into ecology. We wanted to be very careful on that front. And I want to acknowledge that the editor and reviewers at Nature Ecology & Evolution were incredibly rigorous and constructive about it too—you can actually read our lengthy back-and-forth in the published peer review file.
I’ll say something more general here. There’s a seductive illusion in interdisciplinary work that you can just grab a method from field A and slap it onto field B. That rarely works and honestly shouldn’t work. The intellectual challenge—and where the real value lives—is in the translation: understanding precisely when the mathematics transfers, where it breaks down, and what genuinely new insight the translation reveals about your field. Think of “complex network science”—it was initially pitched as a universal theory that would unify everything from the internet to ecosystems to the brain. Two decades later, the most productive work in network science is deeply discipline-specific.
The approach described in Song & Levine (2025) looks like it could be quite powerful. And it can be applied to observational time series data; ecologists already have a lot of time series data. Do you worry that that makes the approach ripe for abuse? Or maybe a better way to phrase the question is: when you’re proposing a new approach that you hope others will use, how do you find the right balance between “selling” the approach—highlighting its strengths—and making its limitations and weaknesses clear?
That is the eternal tightrope walk of methods development, isn’t it? You want people to use your new tool, but you also definitely don’t want to watch them accidentally hit their thumbs with it.
When it came to “selling” the approach, we were very deliberate about transparency. Instead of burying the limitations in a supplementary file, we put them front and center in the main text as guardrails. We explicitly spelled out that the method is for systems driven by deterministic forces, like Lotka-Volterra predation cycles, not just stochastic noise jittering around an equilibrium position. We also highlighted that the test is better at catching bad models than at guaranteeing good ones: a “pass” should strictly be read as “promising, let’s investigate further,” not “case closed.”
Of course, being candid can be risky—I’ve certainly had my share of rejections. But I think it’s absolutely worth it. In my experience, the ecology community largely rewards honesty about limitations. And if a reviewer does penalize you for admitting what your method can’t do, well, that’s a trade-off I can live with.
The “use and abuse” part of your question is a harder nut to crack. On the one hand, we want to make the method as accessible as possible—we built an R package to standardize the entire statistical workflow so people can apply it correctly out of the box. On the other hand, accessibility inevitably invites misuse. The history of quantitative ecology is littered with sophisticated techniques that were initially met with hype, then applied carelessly, and eventually discredited—not because the math was wrong, but because the field’s enthusiasm outran its caution. I don’t pretend to have a magic bullet for that. To paraphrase the famous quote, I’d add that all methods are abusable, but some are worth the risk. At the end of the day, you just have to be radically honest about the boundaries of your method and hope the community meets you halfway.
Your bachelor’s degree is in mathematics and your PhD is in civil and environmental engineering. And you have numerous papers besides Song & Levine (2025) that take ideas from physics and engineering and apply them to ecology. There’s of course a long and proud history, going all the way back to AJ Lotka and Vito Volterra, of people with training in mathematics, physical sciences, and engineering bringing ideas and techniques from those fields into ecology. There’s also a long history of resistance to those ideas and techniques from some other ecologists. What’s your sense of where things stand today? Is the conversation around the application of physics and engineering ideas to ecology different than it was 20 or 50 or 100 years ago, and if so, how?
I’m no historian and still fairly junior as a researcher, so take this with a generous grain of salt. But my sense is that the conversation has shifted enormously, and mostly for the better.
Robert May, who was trained as a physicist, once described the cultural shock he experienced upon entering ecology in the 1960s. The equations ecologists were using, he said, were “in some important ways different from the more familiar ones of physics.” What struck him wasn’t that ecology was less mathematical—it was that the relationship between the math and the empirical claims was looser, more tentative, less disciplined. I think what’s changed since then isn’t so much that ecology has become more quantitative—it has, obviously—but that the ambition of the quantitative work has shifted. We’ve gone from borrowing individual equations to importing entire intellectual frameworks—and then doing the hard work of figuring out what they reveal about ecology specifically.
Let me give you two concrete examples. The first is what I consider the most successful application of physics to ecology in recent years: the cavity method. It’s a technique from statistical physics, originally developed to understand disordered magnets—spin glasses, if you want the jargon. Guy Bunin, a biophysicist by training, had the key insight that the same mathematics could calculate how many species survive when you assemble a large, random ecological community. Since then, a serious community of mathematicians and theoretical physicists has built on that foundation, extending and refining it. Before the cavity method, our understanding of multi-species coexistence was mostly simulation-based or limited to the assumption that all species must coexist. After it, we could map out entire phase diagrams of ecological communities—and the method has already inspired new experimental designs and uncovered new patterns hiding in old data. That’s a profound upgrade, and it came directly from physics.
The second is very much a work in progress, but it’s something I’m genuinely excited about: working with biophysicist collaborators to bring the concept of irreversibility—a fundamentally non-equilibrium idea—into ecology. And here’s where the history gets deliciously ironic. Almost exactly a century ago, Alfred Lotka himself—the Lotka of Lotka-Volterra, arguably the founding figure of mathematical ecology—tried to do precisely this. He published a paper in Science arguing that irreversibility was central to understanding living systems. If you read the first few chapters of his classic Elements of Physical Biology (1925), you’ll find hardly any math, just philosophical pondering on why irreversibility is the fundamental feature of life. It was prescient, it was profound, and it was almost completely ignored. That Science paper has been cited only a handful of times in a century.
So Lotka planted this seed in the 1920s, the soil wasn’t ready, and now—armed with tools from non-equilibrium statistical physics that didn’t exist in his lifetime—we’re finally in a position to grow it. I personally find it enormously encouraging. It suggests that the real barrier to physics ideas in ecology was never intellectual resistance per se. It was a lack of the right infrastructure—both mathematical and empirical—to make those ideas precise and testable. As that infrastructure develops, ideas that were once dismissed as too abstract or too foreign keep turning out to be exactly what we needed.
Two long-term trends in the field of ecology are that it’s becoming a more applied field, and that it’s becoming a more quantitative field, with an increasing emphasis on quantitative methods development. How do those trends affect the sort of research you do and how you present it to others (in talks, in papers, in grant proposals, etc.)? Because on the one hand, your work is quite quantitative and proposes lots of new quantitative methods, but on the other hand your work often concerns quite abstract and fundamental research questions.
Here’s what I think makes ecology genuinely unusual compared to other branches of biology: the field still has a deep soft spot for theory. If you talk to theorists in molecular or cell biology, they often feel completely decoupled from the empirical work happening around them. But in ecology, an enormous amount of empirical research is still explicitly framed around testing or extending theoretical predictions. That’s a rarity in modern life science, and it’s a big part of why I consider myself incredibly lucky to be doing this right now.
That said, the job market has definitely noticed the trend toward applied and quantitative work. If you scroll through the eco-evo job boards nowadays, finding a posting for a “pure theoretician” is like spotting an ivory-billed woodpecker—it’s practically extinct. The market overwhelmingly wants “quantitative ecologists” or “computational biologists,” titles that carry a strong implication of being closely tethered to data and immediate application. My PhD advisor gave me a friendly warning early on in grad school that getting hired as a professor as a pure theorist was an uphill battle bordering on a cliff climb.
Given that tension, I actually consider myself incredibly lucky to enjoy both sides. On the one hand, I love the abstract, fundamental questions—the “pure” theory—because that’s where you figure out the fundamental rules of the game. On the other hand, I genuinely love working with data. If my work can’t eventually shake hands with messy empirical reality, it’s just a glorified math puzzle.
Do I think we still need space for pure theory? Absolutely—and I want to be clear, this isn’t a knock on quantitative ecology at all. The rise of data-driven, computational approaches has been fantastic for the field and has produced tons of genuinely important work. But a field that only ever solves today’s applied problems eventually runs out of the fundamental insights needed to solve tomorrow’s. I think ecology departments should be actively making room for theoreticians—even the unapologetically pure ones—because the payoff may not be immediate, but it compounds. Of course, I am biased.



