Reframing the debate: more than just data vs. simulation
In the natural resources sector, debates often surface between those who champion monitoring and those who rely on modelling. This tension has always puzzled me.
Long-term, well-designed monitoring programs are essential. They allow us to observe how natural systems behave and change, and provide the raw data that underpins our understanding.
Models translate this understanding into structured, testable, and repeatable predictions. Models can fill data gaps, explore “what if” scenarios, and assess the likely impact of management actions.
A world-leading example is the Paddock to Reef Monitoring, Modelling and Reporting program, which integrates long-term monitoring with sophisticated modelling to inform on-ground investment programs.
So why the tension? For model detractors, it’s often not about the models themselves but how they’re used, especially when they’re stretched (or, more often, shrunk) beyond their intended scale. And scale matters. For critics of monitoring, limited data that supports a hypothesis is treated as ‘proof’, while a hypothesis the data fails to support is explained away by the data’s limitations.
Understanding natural systems is a matter of scale
Spatial scale
Natural systems operate across vast and often overlapping spatial scales. What we measure at a single monitoring site reflects an integration of geological, hydrological, and ecological processes unfolding over billions of years.
Take water quality. It’s influenced not just by today’s land use, but by:
- erosion and deposition driven by ancient climatic cycles
- past sea-level shifts
- tectonic changes and landscape evolution
Yet we often frame today’s management as if landscapes were shaped solely by recent human activity.
This matters for both monitoring and modelling. A sensor at the end of a paddock tells us something about that paddock—but little about the wider catchment. Conversely, monitoring at the mouth of the river will tell you little about what is happening on any one paddock. Similarly, a model designed to predict average catchment-wide effects won’t reliably tell you what’s happening on a single farm. To understand and manage natural systems effectively, we need to both measure and model at the scale of decision-making, and be clear in distinguishing the roles of anthropogenic and natural drivers.
Temporal scale
Traditional monitoring often involves sampling at regular intervals—but nature rarely changes on a schedule. Most critical changes happen during episodic, infrequent events, not gradually during “average” conditions.
For instance, erosion-driven water quality impacts are determined by just a handful of major storm events, which may occur only once in several years. The same is true for ecological systems shaped by overlapping combinations of long droughts, invasive species outbreaks, sudden habitat destruction, occasional bushfires, or the effects of a changing climate.
Relying solely on periodic measurements risks missing the real drivers of change. Models, by contrast, can simulate these episodic events and their interactions, helping us to separate management effects from natural variability.
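To make the episodic point concrete, here is a minimal sketch (all numbers hypothetical, not calibrated to any real catchment) in which daily loads follow a heavy-tailed lognormal distribution standing in for storm-driven events. A handful of days carries a wildly disproportionate share of a decade’s total load, while a fixed monthly sampling schedule sees only a small fraction of days and so will usually miss them:

```python
import random

random.seed(42)

# Hypothetical illustration: a decade of daily sediment loads, with a
# heavy-tailed lognormal distribution standing in for episodic storms.
days = 10 * 365
loads = [random.lognormvariate(0, 2.5) for _ in range(days)]

total = sum(loads)
top5 = sum(sorted(loads, reverse=True)[:5])
print(f"share of the decade's load carried by the 5 biggest days: {top5 / total:.0%}")

# A fixed monthly schedule samples roughly 1 day in 30, so it will
# usually miss the few days that dominate the total.
sampled = loads[::30]
print(f"fraction of days seen by monthly sampling: {len(sampled) / days:.1%}")
```

The exact share depends on the assumed distribution, but the qualitative result—a few events dominating the total—is characteristic of erosion-driven loads.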
Monitoring: long-term patience vs short-term pitfalls
Monitoring natural systems is rarely quick or easy. Long-term data collection can feel slow and unrewarding, but it is this persistence that reveals meaningful trends—and provides the evidence needed to fine-tune our understanding.
Short-term monitoring, particularly at a single site, often falls short. These programs can generate plenty of data but little real insight. Some environmental crediting schemes have fallen into this trap, relying on short-term, single-site monitoring to justify significant environmental claims. In reality, this approach is unlikely to deliver much more than an expensive weather report, because a short-term, single-site record cannot reliably disentangle the response to anthropogenic impacts from natural drivers.
Modelling: powerful, but prone to misinterpretation
A well-designed model is built for a purpose, with a specific spatial and temporal scale in mind. It typically simplifies or “lumps” the background conditions (like climate and soil type) and natural responses to focus on the variables we want to explore (e.g. land management changes).
Scaling up
Models often operate by simulating many small units (paddocks, plots) and then aggregating the results. Stochastic errors at the small unit scale even out through aggregation. Problems arise when we extract and interpret predictions for those small units out of context—the model was never built for that, and it shows.
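The averaging-out effect can be shown with a toy simulation (the paddock loads and the 50% error level are illustrative assumptions): each paddock-scale prediction carries a large unbiased error, yet the catchment total is accurate because the errors cancel on aggregation:

```python
import random

random.seed(1)

# Hypothetical sketch: a model predicts each paddock's true load with
# unbiased random error (sd = 50% of the true value). The load range
# is illustrative, not calibrated.
n_paddocks = 1000
true_loads = [random.uniform(5, 15) for _ in range(n_paddocks)]
predicted = [t + random.gauss(0, 0.5 * t) for t in true_loads]

# Per-paddock relative error is large...
per_paddock_err = sum(abs(p - t) / t for p, t in zip(predicted, true_loads)) / n_paddocks
# ...but the errors cancel when aggregated to the catchment total.
catchment_err = abs(sum(predicted) - sum(true_loads)) / sum(true_loads)
print(f"mean per-paddock error: {per_paddock_err:.0%}")
print(f"catchment-total error:  {catchment_err:.0%}")
```

Reading a single paddock’s prediction out of this model means accepting the large per-unit error; only the aggregate enjoys the cancellation.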
Predicting change over time
Short-term predictions of anthropogenic impact? Often not very useful, because weather dominates short-term responses. But by modelling many seasons (e.g. over decades, to capture the range of climatic conditions), we can tease out the variability driven by weather. That’s when the quantifiable impacts of land management practices emerge from the weather noise. Models that run over long timeframes can more reliably reveal these long-term effects.
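A small Monte Carlo sketch (illustrative numbers only) shows why record length matters. Assume a true 10% reduction in mean annual load sitting under large lognormal weather noise, and estimate the apparent reduction by comparing a ‘before’ period with an ‘after’ period of equal length:

```python
import random
import statistics

random.seed(3)

def apparent_reduction(years):
    # 'Before' and 'after' periods experience independent weather draws,
    # as they would in a real before/after comparison.
    before = [100 * random.lognormvariate(0, 0.8) for _ in range(years)]
    after = [90 * random.lognormvariate(0, 0.8) for _ in range(years)]  # true 10% cut
    return 1 - sum(after) / sum(before)

spreads = {}
for years in (3, 30):
    estimates = [apparent_reduction(years) for _ in range(2000)]
    spreads[years] = statistics.stdev(estimates)
    print(f"{years:>2}-year record: spread (sd) of apparent reduction = {spreads[years]:.1%}")
```

Under these assumptions, the spread of estimates from a short record dwarfs the 10% effect itself; only long records begin to pull the management signal out of the weather noise.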
Why relativity matters more than accuracy
When managing natural systems, relative comparisons often matter more than absolute values.
For example, we may not know the exact biodiversity score of a landscape, but we can estimate which intervention is likely to yield better long-term biodiversity outcomes.
Models excel at answering these relative questions—especially when supported by robust, long-term monitoring data. Conversely, monitoring struggles to show relative improvements against the backdrop of weather noise and long lags in environmental response.
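This is where a model’s ability to hold climate inputs fixed pays off. In the hypothetical sketch below, intervention B is truly 5% better than A, yet annual loads swing far more than 5% with the weather. Because the model runs both interventions over the same climate record, the weather term cancels exactly in the relative comparison—something two separately monitored sites can never guarantee:

```python
import random

random.seed(11)

# Hypothetical sketch: intervention B is truly 5% better than A, but
# weather swings annual loads by far more than 5%.
def annual_loads(effect, weather):
    return [100 * effect * w for w in weather]

weather = [random.lognormvariate(0, 0.8) for _ in range(20)]  # 20 shared years

# The model runs both interventions over the *same* climate record,
# so weather noise cancels exactly in the relative comparison.
a = sum(annual_loads(1.00, weather))
b = sum(annual_loads(0.95, weather))
relative_gain = 1 - b / a
print(f"modelled relative difference: {relative_gain:.1%}")
```

Two field sites monitored under different weather would instead see this 5% signal buried under year-to-year variability of tens of percent.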
Final thoughts: how to use monitoring and modelling together
If your goal is to understand how the environment works, invest in long-term, spatially broad monitoring.
If your goal is to compare the likely outcomes of different management decisions, use a well-calibrated model, supported by robust monitoring data that refines it and builds confidence over time.
We need both.