[syndicated profile] dynamicecology_feed

Posted by Jeremy Fox

Welcome to our latest interview with the author of a recent(ish) interesting paper. Today’s interview is with theoretical ecologist Chuliang Song. I asked Chuliang about Song & Levine 2025 Nat Ecol Evol, which proposes a powerful new technique (well, new to ecologists) for testing ecological models such as predator-prey models using time series data. I also asked Chuliang some questions about how the conversation around theory in ecology has changed over the years, as the field has become both more quantitative and more applied. Chuliang is a super interview subject: he’s got a really interesting point of view, and he’s great at coining phrases. Seriously, you don’t want to miss this one! Reading this is going to be way more entertaining and thought-provoking than whatever you were planning to spend the next 15 minutes doing. 🙂

The interview was conducted by email.

Summarize Song and Levine (2025) for our readers. The paper proposes a new analytical approach (well, new to ecology). What’s that approach and why is it useful?

Ecology has a model hoarding problem. We have way too many models and, at the same time, way too little confidence in any of them. Stefano Allesina once joked in a Quanta Magazine interview that physics textbooks stay roughly the same length over time, because experiments mercilessly execute old theories as fast as new ones are born. Ecology textbooks, on the other hand, just keep getting fatter. We’re fantastic at proposing models, but we’re notoriously terrible at retiring them. Just for predator-prey interactions alone, there are over 40 competing models for how predators eat. There’s a saying I’m fond of: if you keep your mind too open, your brain falls out.

So how do you actually discipline these models? That’s been a central research obsession of mine. We have another project in progress—on what we call label invariance—that’s also about getting better at retiring models. But this paper focuses specifically on what you can squeeze out of time series data.

What we did was borrow a tool from a completely different field. The idea comes from queueing theory—you know, the branch of mathematics that deals with how long you’ll wait in line at Disneyland. Biophysicists, notably Andreas Hilfinger and Johan Paulsson, later borrowed it to study gene expression in cells. The key insight is simple. Every model of population dynamics splits the world into “gain processes”—births, immigration, things making more of you—and “loss processes”—deaths, emigration, things making less of you. It turns out that for any given model, there’s a precise, inescapable mathematical relationship that must hold between how these gain and loss rates covary with the observed population abundance over time. We call this the “covariance criteria.”

This relationship holds no matter what else is going on in the messy real ecosystem you haven’t bothered to model. Indirect interactions? Doesn’t matter. Environmental chaos? Doesn’t matter, as long as things are roughly stationary. You often don’t even need to know the parameter values. So it’s a genuine lie detector test for your model’s structure, not just a check of whether you’re clever enough to twist knobs until the curve looks right.
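To give a taste of where such a constraint comes from (a minimal illustration of the flavor, not the actual criteria derived in the paper): write the dynamics of an abundance x(t) as a total gain rate G minus a total loss rate L. If the trajectory is stationary, both x and x², and hence x·dx/dt = d(x²/2)/dt, have vanishing average rates of change, which forces

```latex
\frac{dx}{dt} = G - L, \qquad
\overline{\left(\frac{dx}{dt}\right)} = 0 \;\Rightarrow\; \langle G \rangle = \langle L \rangle, \qquad
\overline{\left(x\,\frac{dx}{dt}\right)} = 0 \;\Rightarrow\; \langle x G \rangle = \langle x L \rangle .
```

Together these give Cov(x, G) = Cov(x, L): whatever unmodeled processes generated the fluctuations, a model’s gain and loss terms must covary with abundance in exactly the same way. The criteria in the paper are richer than this sketch, but they share that character: bookkeeping identities that any structurally correct model has to satisfy.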

We let this loose on three classic ecological headaches: the decades-long functional response debate (is predation prey-dependent or ratio-dependent?), figuring out how to model rapid evolution, and the hunt for higher-order species interactions. When the biophysicists first used this method in their field, it went on a brutal killing spree, falsifying almost every published model of gene expression. We fully expected a similar bloodbath in ecology. But much to our surprise, the classic Lotka-Volterra model—the one that textbooks routinely dismiss as too simple and unrealistic—actually survived the gauntlet. So the method is a genuinely useful tool—not just for retiring bad models, but for proving that some of our oldest approximations actually know what they’re doing.

Can you say a bit about how that paper came about? It’s an unusual paper, in that it’s taking a methodological approach originally developed in one field—physics—and applying it to a very different field (ecology).

I did my PhD in an engineering department, so there weren’t many theoretical ecologists to grab coffee with. I ended up just hanging out with the biophysics community in Boston instead. One day, a friend casually mentioned some work coming out of Johan Paulsson’s group at Harvard—they had just built this mathematical method to audit models of stochastic gene expression. Honestly, it was never meant to be a research project. Like any easily distracted grad student, I was just curious to see how the mathematical gears turned under the hood, so I sat down and worked through the equations for fun.

Then I did absolutely nothing with it for years. Classic academia. It wasn’t until my postdoc at Princeton that the idea came back to life. My advisor, Jonathan Levine, gave me complete freedom to chase whatever I found interesting. I pitched this idea to him and we started working on it together. But the translation was far from straightforward. Biophysics deals with discrete counts of individual mRNA and protein molecules inside cells—the math is built on master equations that track every single reaction event. Ecology works with continuous variables like biomass and density. On top of that, ecological time series are typically shorter and noisier than what biophysicists get to play with.

So Jonathan and I spent a lot of time figuring out exactly when and why the math survives the border-crossing into ecology. We wanted to be very careful on that front. And I want to acknowledge that the editor and reviewers at Nature Ecology & Evolution were incredibly rigorous and constructive about it too—you can actually read our lengthy back-and-forth in the published peer review file.

I’ll say something more general here. There’s a seductive illusion in interdisciplinary work that you can just grab a method from field A and slap it onto field B. That rarely works and honestly shouldn’t work. The intellectual challenge, and where the real value lives, is in the translation: understanding precisely when the mathematics transfers, where it breaks down, and what genuinely new insight the translation reveals about your field. Think of “complex networks science”—it was initially pitched as a universal theory that would unify everything from the internet to ecosystems to the brain. Two decades later, the most productive work in network science is deeply discipline-specific.

The approach described in Song & Levine (2025) looks like it could be quite powerful. And it can be applied to observational time series data; ecologists already have a lot of time series data. Do you worry that that makes the approach ripe for abuse? Or maybe a better way to phrase the question is: when you’re proposing a new approach that you hope others will use, how do you find the right balance between “selling” the approach—highlighting its strengths—and making its limitations and weaknesses clear?

That is the eternal tightrope walk of methods development, isn’t it? You want people to use your new tool, but you also definitely don’t want to watch them accidentally hit their thumbs with it.

When it came to “selling” the approach, we were very deliberate about transparency. Instead of burying the limitations in a supplementary file, we put them front and center in the main text as guardrails. We explicitly spelled out that the method is for systems driven by deterministic forces, like Lotka-Volterra predation cycles, not just stochastic noise jittering around an equilibrium position. We also highlighted that the test is better at catching bad models than at guaranteeing good ones: a “pass” should strictly be read as “promising, let’s investigate further,” not “case closed.”

Of course, being candid can be risky—I’ve certainly had my share of rejections. But I think it’s absolutely worth it. In my experience, the ecology community largely rewards honesty about limitations. And if a reviewer does penalize you for admitting what your method can’t do, well, that’s a trade-off I can live with.

The “use and abuse” part of your question is a harder nut to crack. On the one hand, we want to make the method as accessible as possible—we built an R package to standardize the entire statistical workflow so people can apply it correctly out of the box. On the other hand, accessibility inevitably invites misuse. The history of quantitative ecology is littered with sophisticated techniques that were initially met with hype, then applied carelessly, and eventually discredited—not because the math was wrong, but because the field’s enthusiasm outran its caution. I don’t pretend to have a magic bullet for that. To paraphrase the famous quote, I’d add that all methods are abusable, but some are worth the risk. At the end of the day, you just have to be radically honest about the boundaries of your method and hope the community meets you halfway.

Your bachelor’s degree is in mathematics and your PhD is in civil and environmental engineering. And you have numerous papers besides Song & Levine (2025) that take ideas from physics and engineering and apply them to ecology. There’s of course a long and proud history, going all the way back to AJ Lotka and Vito Volterra, of people with training in mathematics, physical sciences, and engineering bringing ideas and techniques from those fields into ecology. There’s also a long history of resistance to those ideas and techniques from some other ecologists. What’s your sense of where things stand today? Is the conversation around the application of physics and engineering ideas to ecology different than it was 20 or 50 or 100 years ago, and if so, how?

I’m no historian and still fairly junior as a researcher, so take this with a generous grain of salt. But my sense is that the conversation has shifted enormously, and mostly for the better.

Robert May, who was trained as a physicist, once described the cultural shock he experienced upon entering ecology in the 1960s. The equations ecologists were using, he said, were “in some important ways different from the more familiar ones of physics.” What struck him wasn’t that ecology was less mathematical—it was that the relationship between the math and the empirical claims was looser, more tentative, less disciplined. I think what’s changed since then isn’t so much that ecology has become more quantitative—it has, obviously—but that the ambition of the quantitative work has shifted. We’ve gone from borrowing individual equations to importing entire intellectual frameworks—and then doing the hard work of figuring out what they reveal about ecology specifically.

Let me give you two concrete examples. The first is what I consider the most successful application of physics to ecology in recent years: the cavity method. It’s a technique from statistical physics, originally developed to understand disordered magnets—spin glasses, if you want the jargon. Guy Bunin, a biophysicist by training, had the key insight that the same mathematics could calculate how many species survive when you assemble a large, random ecological community. Since then, a serious community of mathematicians and theoretical physicists has built on that foundation, extending and refining it. Before the cavity method, our understanding of multi-species coexistence was mostly simulation-based or limited to the assumption that all species must coexist. After it, we could map out entire phase diagrams of ecological communities—and the method has already inspired new experimental designs and uncovered new patterns hiding in old data. That’s a profound upgrade, and it came directly from physics.

The second is very much a work in progress, but it’s something I’m genuinely excited about: working with biophysicist collaborators to bring the concept of irreversibility—a fundamentally non-equilibrium idea—into ecology. And here’s where the history gets deliciously ironic. Almost exactly a century ago, Alfred Lotka himself—the Lotka of Lotka-Volterra, arguably the founding figure of mathematical ecology—tried to do precisely this. He published a paper in Science arguing that irreversibility was central to understanding living systems. If you read the first few chapters of his classic Elements of Physical Biology (1925), you’ll find hardly any math, just philosophical pondering of why irreversibility is the fundamental feature of life. It was prescient, it was profound, and it was almost completely ignored. That Science paper has been cited only a handful of times in a century.

So Lotka planted this seed in the 1920s, the soil wasn’t ready, and now—armed with tools from non-equilibrium statistical physics that didn’t exist in his lifetime—we’re finally in a position to grow it. I personally find it enormously encouraging. It suggests that the real barrier to physics ideas in ecology was never intellectual resistance per se. It was a lack of the right infrastructure—both mathematical and empirical—to make those ideas precise and testable. As that infrastructure develops, ideas that were once dismissed as too abstract or too foreign keep turning out to be exactly what we needed.

Two long-term trends in the field of ecology are that it’s becoming a more applied field, and that it’s becoming a more quantitative field, with an increasing emphasis on quantitative methods development. How do those trends affect the sort of research you do and how you present it to others (in talks, in papers, in grant proposals, etc.)? Because on the one hand, your work is quite quantitative and proposes lots of new quantitative methods, but on the other hand your work often concerns quite abstract and fundamental research questions.

Here’s what I think makes ecology genuinely unusual compared to other branches of biology: the field still has a deep soft spot for theory. If you talk to theorists in molecular or cell biology, they often feel completely decoupled from the empirical work happening around them. But in ecology, an enormous amount of empirical research is still explicitly framed around testing or extending theoretical predictions. That’s a rarity in modern life science, and it’s a big part of why I consider myself incredibly lucky to be doing this right now.

That said, the job market has definitely noticed the trend toward applied and quantitative work. If you scroll through the eco-evo job boards nowadays, finding a posting for a “pure theoretician” is like spotting an ivory-billed woodpecker—it’s practically extinct. The market overwhelmingly wants “quantitative ecologists” or “computational biologists,” titles that carry a strong implication of being closely tethered to data and immediate application. My PhD advisor gave me a friendly warning early on in grad school that getting hired as a professor as a pure theorist was an uphill battle bordering on a cliff climb.

Given that tension, I actually consider myself incredibly lucky to enjoy both sides. On the one hand, I love the abstract, fundamental questions—the “pure” theory—because that’s where you figure out the fundamental rules of the game. On the other hand, I genuinely love working with data. If my work can’t eventually shake hands with messy empirical reality, it’s just a glorified math puzzle.

Do I think we still need space for pure theory? Absolutely—and I want to be clear, this isn’t a knock on quantitative ecology at all. The rise of data-driven, computational approaches has been fantastic for the field and has produced tons of genuinely important work. But a field that only ever solves today’s applied problems eventually runs out of the fundamental insights needed to solve tomorrow’s. I think ecology departments should be actively making room for theoreticians—even the unapologetically pure ones—because the payoff may not be immediate, but it compounds. Of course, I am biased 🙂

[syndicated profile] dynamicecology_feed

Posted by Jeremy Fox

The question in the post title is one I’ve long been meaning to post on, but I never got around to it, because I never got around to doing the background research to answer my own question. So instead, I’m just going to throw the question out there and suggest an answer off the top of my head, in the confidence that commenters will improve on my answer. 🙂

First, a bit of throat clearing: I think it’s fairly rare for fundamental research in ecology, or any other field of science, to have direct practical applications. I think a lot of fundamental research does end up getting applied somehow, but usually in indirect and diffuse ways that are hard to steer (or even predict, or even trace in retrospect). I also think that people who do fundamental research are pretty good at fooling others, and often themselves, into thinking that fundamental research has direct applications, even when it doesn’t.* Conversely, I think a lot of the most directly relevant applied ecological research isn’t all that interesting from a fundamental perspective, often because it’s very species- or location-specific. For instance, if you need to parameterize an integral projection model for some species of conservation concern, as a step towards modeling possible management interventions, it’s unlikely that your work is going to speak to any interesting fundamental conceptual issues in population ecology, or lead to the development of novel, broadly-applicable statistical methods. So I think it’s pretty rare for any ecological research to be of both great fundamental interest and direct practical importance. But surely there are a few examples?

The first one that comes to mind is trophic cascades research, especially in lakes, and its applications for managing algal blooms. But I don’t actually know anything about how algal blooms are managed, so maybe I’m wrong about this one?

The second example that comes to mind is simple stochastic population growth models and their applications to managing species at risk. Correct me if I’m wrong, but I vaguely recall that, in some countries, some of the official legal standards for designating species as “at risk” or “endangered”, entitling them to legally-mandated management intervention, are based on “rules of thumb” derived from simple stochastic population growth models?

The third example I can think of off the top of my head isn’t a single example, it’s a type of example: system-specific, applied case studies that became textbook illustrations of broadly applicable, interesting fundamental concepts. Stuff like applied research on various pest and disease organisms (e.g., larch budmoth, measles), that ended up providing textbook illustrations of cyclic population dynamics.

Ok, over to you! Looking forward to your comments, as always.

*I’m thinking for instance of Peter Adler’s old guest post, taking fundamental ecologists–including himself–to task for bullshitting about how fundamental research in ecology will improve forecasting in a way that will be useful to land managers and other ecological decision-makers.

That indie game money

Mar. 7th, 2026 03:52 pm
[syndicated profile] atrivialknot_feed

Posted by Siggy

If a game is on Steam, it’s possible for a public observer to estimate how much money it made. The thing to look at is the number of reviews. There’s a fairly predictable ratio of sales to Steam reviews, about 30:1. Then you can multiply by the game price (accounting for discounts). Subtract 30% for Steam’s cut (or a smaller cut if the game was profitable enough). And if the game made under $1000, subtract $100 for Steam’s listing fee.
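Here’s that heuristic as a few lines of Python (a sketch of the back-of-envelope method above; the function name and defaults are mine, and the 30:1 ratio and 15% discount are rough assumptions, not exact figures):

```python
def estimate_net_revenue(reviews, price, avg_discount=0.15,
                         steam_cut=0.30, listing_fee=100):
    """Rough net revenue estimate for a Steam game, from its review count."""
    sales = reviews * 30                        # ~30 sales per review
    gross = sales * price * (1 - avg_discount)  # revenue after discounts
    net = gross * (1 - steam_cut)               # after Steam's cut
    if gross < 1000:                            # small games never recoup
        net -= listing_fee                      # the $100 listing fee
    return net

# Hollow Knight: Silksong (big enough that Steam's cut drops to 20%)
print(estimate_net_revenue(394_000, 20, steam_cut=0.20))  # ~ $160M
# Blightseed
print(estimate_net_revenue(11, 7))                        # roughly $1300-$1400
```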

Let’s go through an example. Hollow Knight: Silksong currently has 394,000 reviews. That implies about 12M sales on Steam alone. Each sale is $20, and we’ll assume an average discount of 15%. In total that’s $200M revenue. For such a large game, Steam only takes a 20% cut, leaving the developers with $160M. Now, divide that among three developers over the course of 7 years of development, and the implied annual salary of each dev is $7.7M.

Of course, some of the work is done by contractors outside of the three main devs; for example, Silksong has a separate composer. But also, I’m leaving out future sales, and sales on other gaming platforms.

The Silksong devs make good money. But obviously Silksong is something of an outlier. Let’s look at another game.

Blightseed is a game I played last month, a conlang shoot ’em up. Very obscure, made by a solo dev, a bit of a buggy mess, but it’s completely unique and worth playing. Blightseed has 11 reviews at time of writing, which implies about 300 sales. Each sale is $7, and we’ll assume an average discount of 15%. That’s about $1800. Subtract Steam’s 30% cut, and that’s about $1300. I’m not sure how long it took to make this game, but I’d guess a single person could make this over the course of a year in their free time. Let’s say 250 hours, possibly an underestimate. So, about $5/hour.

That’s pretty decent for a hobby, considering most hobbies don’t pay any money at all. But as a job, that’s not good. Consider that making a game is somewhat technical, and perhaps a dev has the option of becoming a software engineer instead, paying (on the low end) around $100k/year, or $50/hour. In comparison, the $5/hour might as well round down to zero.

Now, you’ve heard a lot more about hit indie games like Silksong than about obscure indie games like Blightseed. But of course, this is selection bias. Blightseed is much closer to the typical case. In fact, Blightseed is above the median. Using similar methods, analysts have estimated that 40% of Steam games in 2025 didn’t make back the $100 listing fee, and 60% made under $1000.

So what about all those games that make negative profit? What’s their story? I can think of a few possibilities. a) These are failed games. They’re losing lottery tickets. A swing and a miss. b) Steam is functioning as a vanity press. The devs are accomplishing what they set out to do, which was publishing games, not making money. c) They’re making money elsewhere, such as selling DLC.

When talking about whether a game is successful, we have to consider how many people worked on it for how long, and what their goals were. If you’re a professional developer, $5/hour doesn’t really cut it. But if you’re doing it on the side, with the hope of breaking into the career, it might be a decent starting point.

And when you’re a hobbyist, the money barely matters. Even if you completely lose $100 to the Steam listing fee, it’s still cheaper per hour to develop video games than it is to play them. What matters to the hobbyist is whatever they find emotionally satisfying.

Bad Puzzles

Mar. 4th, 2026 03:25 pm
[syndicated profile] atrivialknot_feed

Posted by Siggy

What is the difference between a puzzle and a real world problem? A puzzle is devised by someone, generally with the intent of making a pleasant experience for the solver. In contrast, a real world problem is not guaranteed to have a solution, not guaranteed to have a feasible path towards a solution, and is not guaranteed to be pleasant to solve.

Here is a simple math puzzle. Can you design two six-sided dice whose sum follows the same probability distribution as 2D6, but with different numbers (all positive integers) on their faces? Classic, totally possible.
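(Spoiler ahead: the classic answer is the Sicherman dice. If you’d rather check the pair than trust the algebra, a few lines of Python will do it:)

```python
from collections import Counter
from itertools import product

standard = [1, 2, 3, 4, 5, 6]
sicherman_a = [1, 2, 2, 3, 3, 4]   # the classic Sicherman pair
sicherman_b = [1, 3, 4, 5, 6, 8]

def sum_distribution(die1, die2):
    """Count how often each sum occurs over all 36 face pairings."""
    return Counter(a + b for a, b in product(die1, die2))

# Same distribution of sums as two standard dice:
assert sum_distribution(standard, standard) == sum_distribution(sicherman_a, sicherman_b)
```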

Here’s a simple real world physics problem: Can you estimate Earth’s equatorial bulge from its rotation speed and gravity? I thought I could estimate this using geometrical considerations, but that gives the wrong answer. The correct solution must account for the gravitational field of the bulge itself, which can be calculated by decomposing it into spherical harmonics. Nobody wants to do that.
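For concreteness, here’s my guess at the naive geometric estimate and how far off it lands (a back-of-envelope sketch, assuming the usual argument that the surface is an equipotential of point-mass gravity plus the centrifugal term):

```python
# Naive equipotential estimate of Earth's flattening, ignoring the
# bulge's own gravity (a sketch of the geometric argument above).
omega = 7.292e-5   # rotation rate, rad/s
R = 6.371e6        # mean radius, m
g = 9.81           # surface gravity, m/s^2

q = omega**2 * R / g       # centrifugal-to-gravitational acceleration ratio
naive = q / 2              # flattening from a point-mass equipotential
print(f"naive:    1/{1/naive:.0f}")   # about 1/580
print("observed: 1/298")              # actual flattening, roughly twice as large
```

The observed flattening is roughly twice the naive value; the missing factor is the bulge’s own gravitational pull, which is what the spherical harmonics are for.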

Puzzles do not always succeed at being enjoyable. Sometimes you waste a lot of time on a puzzle, and then when you look up the solution you think, “I was never going to get that one.” For example, one time I picked up a puzzle box on a friend’s shelf, despite my friend’s insistence that the puzzle was stupid. After messing around a bit, he showed me how to open it: he slammed it hard on the table to shake a magnet loose. I was never going to solve that one, because I happen to have reservations about slamming potentially delicate objects that do not belong to me.


Puzzles can fail for a variety of reasons. Sometimes there is a mismatch between the puzzle and the solver. If you aren’t good with probabilities, you may not enjoy the dice problem, and that’s okay. I am not familiar with the conventions of cryptic crosswords, so throwing one at me is unfair.

And of course, maybe the puzzle designer just didn’t do a good job. Often puzzle designers work in isolation, and they don’t really know what the solving experience is like. They just follow some intuition about what feels good or clever. But there’s a fundamental difference between what the designer knows and what the solver knows, so puzzle designers’ intuitions are frequently wrong. For example, red herrings often dominate the solving experience, while being completely invisible to the designer.

There are certain puzzle design values that can work against designers. For instance, if you value “think outside the box”, this can turn into an arms race of cleverness. Every puzzle must break expectations, do something totally new and different. Eventually the designer gets what they wanted, a solution that nobody would ever think of. And puzzle solvers respond, “I never would have thought of that solution.”  It turns out that solvers don’t like that, go figure.

There’s also the sort of puzzle that celebrates excessive difficulty. For example, consider the sort of puzzle that prompts a nation-wide puzzle hunt, offering a money prize to the first solver. This sort of thing used to be much more fashionable; see this video for an example from 1984. The solution involved picking Carl Sagan’s Cosmos, first edition, flipping to Chapter 6, and mapping each number to the first letter of the corresponding word, with some rules for dealing with hyphenated words, acronyms, etc. The appeal in this puzzle is the thrilling fantasy of being the first solver, however impossible that may be. But as a puzzle, that’s just unfair.

Difficult puzzles require some degree of trust. As a solver, I want some assurance that the time investment will pay off. I need to have some faith in the competence of the puzzle designer, some indication that they’re not trying to out-clever or out-difficult themselves.


Puzzle video games generally benefit from the norm that games should be playtested. At least someone tried these puzzles, and gave feedback to the designer. But even this is not perfect.

I play a lot of puzzle games. As a result, I tend to be good at them. I’m not the best, but I do alright with many of the hardest puzzle games. Nonetheless, I have relatively low trust as a solver. From my broad experience, I know there are lots of puzzles out there that aren’t very good, or at the very least don’t mesh with me. They’re not worth my time. I am not interested in gritting my teeth through a bad puzzle. I do not need to “up” my puzzle skill level; at this level, puzzle solving is not a useful real life skill. Puzzles are not so precious that I need to savor every single one.

In my opinion, less-experienced puzzle gamers tend to be higher in trust. After all, being less experienced means you’ve played relatively few puzzle games, typically the greatest hits. If you’re only playing the greatest hits, you have little reason to mistrust.

To illustrate my perspective, let’s consider what is sometimes called the most difficult video game puzzle of all time: The Fez Monolith puzzle. To solve the puzzle, you must be in a particular room and give an exact series of inputs. The series of inputs was discovered by brute force. Nobody knows how the inputs were intended to be discovered. A variety of theories have been proposed, but none are things that a player plausibly could have figured out from the beginning, without working backwards from the real answer.

The monolith puzzle commands a certain degree of reverence from players. But to me, that’s just a bad puzzle. This puzzle was designed with the intent of being very challenging, and was likely never playtested. It’s the modern equivalent of those puzzles from the 80s with money prizes. To the extent that people appreciate the monolith puzzle, they do not appreciate it as a puzzle, they appreciate it as a communal mystery.

Here’s another very difficult puzzle from a more obscure game. Hack ‘n’ Slash is a hacking puzzle game made by Double Fine in 2014. At first you hack objects using a clean gamey interface. But eventually you can navigate the game’s actual code, and that was something of a difficulty cliff. It also feels distinctly like the game was abandoned part way through its development. I played this game, and solved most of it, but there was one puzzle that required you to input a password. There was no getting around the password; the password was an encryption key for a file that was actually encrypted. Nobody knew how to solve the puzzle.

I found out, while researching this, that someone finally solved the puzzle in 2022. The password was apparently a phrase that appeared in the loading screen. The solver used brute force, as well as a hint from the devs to narrow it down. Solving the puzzle gave you access to some dev commentary (although I’m afraid that a decade later, it’s all meaningless to me).

Was this a good puzzle? No. As an optional challenge, I suspect the designers did not meaningfully playtest the puzzle. When they picked a passphrase that appeared on the loading screen, I think they did not realize just how many equally likely passwords there would be from the solver perspective. It’s just a classic case of puzzle designers failing to respect the difference in knowledge between puzzle solvers and designers.

But, if you get good at solving bad puzzles, such a skill may have practical value. After all, the universe is one of the worst puzzle designers around.

Origami: Horses

Mar. 1st, 2026 05:08 pm
[syndicated profile] atrivialknot_feed

Posted by Siggy

Persian Horse, designed by Peter Engel

Here’s a horse I folded at a conference many years ago.  It’s meant to stand on its hind legs, although you’d really need to attach it to a stand.

Earlier I was talking to someone who sold stands to origamists, and he observed that origamists are really cheap because they can get hours of entertainment from a sheet of paper.  Yeah… I can’t really see myself buying a stand for this.

Horse Red Pocket, designed by Don Leung

Bonus horse!  I folded this at another conference.  It’s meant to function as a hongbao, those red envelopes with money handed out to kids on New Years.

I should say, we didn’t much celebrate when I was growing up.  We got red envelopes and maybe there was a family dinner, but nothing like the two week celebrations you get in East Asia.

We also had tikoy, but I honestly can’t remember if that was like a New Years thing or if it was whenever my mom felt like making it.  I can make tikoy now but my husband finds it too weird for him.

[syndicated profile] dynamicecology_feed

Posted by Jeremy Fox

This week: c’mon, find something else to talk about besides US politics, Beowulf, the canary province in the coal mine, Humphreys opacity, Stewart Brand, Stephen Heard vs. Scrooge McDuck, and more.

A reader of The Ecology of Ecologists just posted a very positive review on Story Graph. Remember: you can get a free signed bookplate if you tell the world–or even just me–what you think of my book. 🙂

An argument that, rather than replacing genuine expertise, AI will make it more valuable.

On Stewart Brand. I don’t really know much about Brand or the various broader movements he’s participated in or influenced over the past several decades. FWIW, just based on the bits and pieces I’ve picked up over the years, he seems like an interesting figure. He combines ideas and attributes that usually are seen as opposing. For instance, he was a towering figure in the environmental movement–but he’s also a pro-technology optimist who admires Elon Musk. I’m always interested in people who resist easy categorization.

The editorial board and reviewer pool at the Journal of Applied Ecology have both become more gender-balanced and geographically diverse over the past 20 years. The trends in reviewers reflect changes in both the composition and behavior of editors: the editorial board is not only more gender-balanced and geographically diverse than it used to be, but both men and women editors have been inviting more women reviewers.

Scholars of Old English focus disproportionately on just a few canonical texts in the entire corpus of Old English writing, although to a lesser extent than they used to decades ago. This may sound like a boring blog post, but it was actually really interesting–and it has graphs!

Political economist Chris Blattman shares his Claude-based workflow, which he swears by for everything from managing his inbox, to grant proposal drafting, to trip planning. Includes instructions, including for some bits that should work “straight out of the box” with no customization needed and no need to install Claude Code.

Cosma Shalizi’s unsolicited opinions (basically, ideas for blog posts he’s never going to get around to writing).

Stephen Heard on the strange mix of feelings prompted by correcting the proofs for the third edition (!) of The Scientist’s Guide to Writing. Speaking as a book author whose book would be doing very well if it sold 1/10 as many copies as Stephen’s, I was left wondering: what about the strange feeling of sleeping on a mattress full of cash from book royalties, huh? Tell us what that feels like, Stephen! 😉

New Brunswick is looking to slash higher education spending…somehow. Rather than looking to cut spending, they should instead raise revenue by appropriating Stephen Heard’s book royalties. Boom, problem solved. 😉

Artist uses frozen Alberta lake as his canvas.

[syndicated profile] dynamicecology_feed

Posted by Jeremy Fox

My post drafts folder contains a bunch of ideas for unwritten posts. Many of them are years old, so there’s basically no chance I’ll ever actually write them. Others are quite new, so there’s a good chance I’ll write them. My post ideas tend to get written up either fairly quickly, or not at all.

So here are a bunch of posts I haven’t written yet, some of which I’ll never write. But they’re all mixed up, so you’ll have to guess which is which. 🙂

When should you just keep doing what you’re doing? When does that demonstrate clear-eyed focus/an admirable resistance to faddishness/etc., and when does it amount to becoming lazy/self-indulgent/stuck in a rut/deadwood?

Are recent TT hires at R1 unis less likely to have ever been VAPs than TT hires at less research-intensive unis?

Something about Keynes’ line in Economic Possibilities for our Grandchildren about how economists should become like dentists. The thing about dentistry is that it’s super-useful but also super-boring. It’s routine, not creative. It doesn’t have unsolved puzzles or new ideas. Nor does it involve clashing values, mediating conflicts between stakeholders, etc. Could, or should, ecologists become more like dentists?

A post on my failure to replicate a classic protist microcosm experiment by Leo Luckinbill

Is it true that Ivy League unis and elite SLACs tend to hire their own alums? Could compile data on this from department websites, use other R1s and non-elite SLACs as a control group.

Maybe we should stop studying stabilizing vs. equalizing mechanisms?

The modern coexistence theory backlash has begun

What’s the strongest trade-off in ecology? Are there any that always show up in the data, without any need to control for the organism’s health/quality/environment?

How much does meta-analysis improve on unweighted averaging of effect sizes?

Have ecology faculty job seekers’ collective preferences to live in certain places, or for certain types of jobs, gotten stronger over time? Look at predictors of # applied from ecoevojobs.net, broken down by year. Problem: would need to compile a lot of new data on job-level predictors, not just state-level predictors as I did in a previous post.

Best examples of ecological research that’s of great fundamental interest AND has important, direct applications?

Read some old articles on what ecology is and how to do it, and then post about them. Taylor 1936 Ecology. Stiling 1994 ESA Bulletin. Bergman and Adams 1993 ESA Bulletin.

Ecology is full of variation. So why do ecologists’ hypotheses about the drivers of that variation so rarely pan out?

Is it more common these days than it was 10 years ago for TT ecology hires to have MSc degrees? Relatedly, are TT ecology hires these days spending longer from bachelor’s degree to getting hired into their first faculty job, because they’re spending longer in grad school due to more frequently completing an MSc before a PhD?

Should you go into admin? How should you think about the pluses and minuses? How can you maintain your research program while you’re a dept chair, or dean, or whatever? Problem: I don’t know anything about this topic.

What’s the most substantively I’ve ever been cited? Or more broadly, the most substantively I’ve ever influenced anyone else’s research? Has anyone besides my own collaborators ever published a paper that couldn’t have been written without my work, or that would’ve looked very different without my work? Or has my work ever prevented someone from publishing what would’ve been a major paper?

Ashera Oleatus Review

Mar. 3rd, 2026 10:15 pm
[syndicated profile] mount_ink_feed

Posted by Kelli McCown

I reviewed the Ashera Aeon in Holly last year, so I was excited when I found out that Ashera had released a new fountain pen model, the Oleatus. Ashera was very kind and sent me an Oleatus in Yew Burl with a 14k gold medium nib to try. The Oleatus is available in 8 different woods. This material/nib combo retails for $870.

The pen comes in a gorgeous wood box with the Ashera name engraved.

The finish is oiled rather than lacquered, which gives the pen a fabulous feel in hand.

This is a very large pen when compared to other popular pen models.

The yew burl material is so pretty!

The cap is a twist-on, but it only takes one twist to remove, rather than the multiple turns some pen caps take.

The 14k medium nib is very juicy and slightly bouncy.

Yes, the nib is very dirty, but that just makes me like it more. She’s pretty, juicy, bouncy goodness. Sailor Black is the first ink I usually put in a pen and this combo did not disappoint.

You can see the ink on my fingers from where I made a mess when I refilled the pen.

More close-ups of the pretty nib. I love that it doesn’t have a ton of engraving or logos on it.

She’s definitely classy, but with plenty of personality.

The nib is just a little bit wider on the downstroke than the sidestroke.

The pen works well with both cursive and print writing.

Overall, this is a fabulous pen. It writes well, feels great in the hand, and looks gorgeous.

Disclaimer: This pen was provided by Ashera for the purpose of this review. All photos and opinions are my own. This post is not sponsored, and does not contain affiliate links.


Carnival of Aces: Second Glance

Mar. 2nd, 2026 04:08 pm
[syndicated profile] asexualagenda_feed

Posted by Siggy

The Carnival of Aces for January/February 2026 has been posted by Blue Ice-Tea.  The theme was “Second Glance“.  Please take a look!

Although the Carnival of Aces has been inactive as of late, it’s still open for anyone to volunteer to host, if there’s a topic you’d like to see people talk about.  If you would like to volunteer, please see the masterpost for instructions.

[syndicated profile] dynamicecology_feed

Posted by Jeremy Fox

Writing in Plos Biology, Carl Bergstrom and Kevin Gross model meltdown of the peer review system.

I find these sorts of sociological models interesting to think about, but I’m often unsure how seriously to take them. I feel like there are lots of different plausible-ish models of the peer review system, and the behavior of the (many!) people involved in it, that lead to quite different predictions. I say this as someone who was arguing more than 15 years ago for urgent, fundamental changes to the incentives to do peer review, on the grounds that the peer review system was in imminent danger of meltdown. Those changes did not come to pass, and yet the peer review system seems to be chugging along more or less as it was 15 years ago. I was wrong in thinking that the peer review system was in imminent danger of meltdown unless something radical changed. So if you say that the incentives associated with the peer review system will make it melt down eventually, my question is: when?

Related: my old review of The Bet, a book about a famous prediction of imminent ecological and societal meltdown.

[syndicated profile] dynamicecology_feed

Posted by Jeremy Fox

This week: bird papers still have the best figures, metascience observatory, against metascience observatories, AI vs. particle physics, AI vs. everybody’s jobs, and more.

Years ago, Meghan asked whether bird papers have the best figures. The answer was “yes,” and it’s still “yes.” Check out Fig. 1-2! 🙂

Related to a recent link on how environmental NGOs went off the rails, here’s how NGOs in general went off the rails. Makes for an interesting comparison with that recent link; there’s a lot of overlap even though the previous link was by someone coming from a politically leftwing perspective while today’s link is by people coming from the political center or slightly right-of-center. But there are some differences too. Obviously, I’m not qualified to weigh in on this topic myself, but I found both links interesting.

Commenter Dan Elton sends us to the Metascience Observatory project. They’re trying to compile and synthesize a comprehensive database of all replications in all fields of science, and make it available on the web with an interactive dashboard. I’ve only had a quick glance so far, but what I saw impressed me.

Sticking with metascience, here’s Jessica Hullman on how we’re on the cusp of living the metascience dream–or nightmare. Thanks to LLMs, it’s about to get a lot easier to do certain kinds of checks for robustness and replicability that many people have been calling for for years. But is this a case of needing to be careful what you wish for because you might get it? Very good post. It’s from a computer science perspective, but most of the points generalize. For instance, this very interesting remark about how it’s not useful to think of “researcher degrees of freedom” as sampling analyses from some statistical population of possible analyses:

Treating model variants as if they form a random sample turns analytic flexibility into an uninterpretable frequency. This is why robustness testing at scale does not guarantee insight: we still have to figure out how to interpret the results, and that is hard.

Still sticking with metascience: are we really so sure that science in the past was broken, or that it was broken in ways that can be fixed by doing more close replications? I have mixed feelings about this. Related: techniques aren’t powerful, scientists are. Also related: transparency vs. trust.

No, AI is not going to lead to mass unemployment, come on. I mean, how is that even supposed to work?

Writing in Science, Philip Ball reviews Maria Popova’s new book Traversal. I wish the review could’ve been longer; sounds like an unusual book in ways both good and bad.

Cory Doctorow on how he uses LLMs (scroll down). Includes some spicy pushback against blanket ethical objections to LLM use. Not that Cory Doctorow (or anyone) is necessarily correct, obviously. But FWIW, he’s someone who’s thought hard about this stuff, and has gone to greater lengths than most people to live by his publicly stated principles regarding technology use. A couple of quotes to give you the flavor:

You know what’s better than refusing to use a technology because you hate its creators? Seizing that technology and making it your own.

AI is like the dotcom bubble, awash in sin and inflicting untold misery, but it will leave something useful behind

Here’s how some physicists are using LLMs: to do real physics research. Apparently, the latest version of ChatGPT appreciably simplified some crucial mathematical expressions, and proved they were correct, within minutes to hours–expressions that human physicists had been trying and failing to simplify and prove for months. Link goes to a Science news article.

The owner of the former campus of Green Mountain College wants to give it away. I link to this purely because it got me wondering: what would you do with a college campus, besides operate a college? The linked article gives one possible answer–the owner wants to give it away to a Catholic mission-based organization. But what are the other possible answers? And whatever you do, you have to either gut or abandon the lab buildings, right?
