Friday, 24 April, 2026
Multi-Hypothesis Comparison
An isolated simulation produces a result, not a proof. The question is never simply "what does the model predict?" but rather "which models consistent with the data predict outcomes different enough to influence the decision?". IsoFind's multi-hypothesis comparison module embodies this approach by running several scenarios on the same scene and comparing their outputs side by side. This page explains the mechanics and illustrates their use with cases from previous pages.
Principle
A simulation scenario is saved as a complete snapshot: input parameters, subsurface model, boundary conditions, target compound, source, duration, and calculated results. The comparison module stores up to ten snapshots within a single scene. Each can be toggled on or off individually, and multiple scenarios can be displayed simultaneously in the 3D scene or side-by-side in separate thumbnails.
Storage is handled locally within the project, not in volatile memory. A scenario recorded today can be retrieved in a future session with the exact same parameters and results. Comparisons can thus be conducted across time—for instance, to benchmark an older model against new field measurements.
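The snapshot-and-retrieve behavior described above can be sketched as follows. This is a minimal illustration, not IsoFind's actual storage format: the class, field names, and JSON layout are assumptions chosen for clarity.

```python
import json
from dataclasses import dataclass, field, asdict
from pathlib import Path

# Hypothetical snapshot structure; field names are illustrative,
# not IsoFind's real schema.
@dataclass
class ScenarioSnapshot:
    name: str
    description: str = ""
    parameters: dict = field(default_factory=dict)  # inputs, boundary conditions, source, duration...
    results: dict = field(default_factory=dict)     # calculated outputs

def save_snapshot(snapshot: ScenarioSnapshot, project_dir: Path) -> Path:
    """Persist a snapshot inside the project directory, not volatile memory."""
    path = project_dir / f"{snapshot.name}.json"
    path.write_text(json.dumps(asdict(snapshot), indent=2))
    return path

def load_snapshot(path: Path) -> ScenarioSnapshot:
    """Retrieve a snapshot in a later session, parameters and results intact."""
    return ScenarioSnapshot(**json.loads(path.read_text()))
```

Because the snapshot lives on disk with the project, a scenario saved today round-trips unchanged into a future session, which is what makes cross-time benchmarking possible.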
Creating a Scenario
The typical workflow involves defining a reference simulation, running it, saving it, and then creating variants by modifying one parameter at a time. This "all other things being equal" (ceteris paribus) approach is the most effective way to identify the sensitivity of each parameter.
Simulation > Configure > Run > Save as Scenario > Name > Duplicate for Variant > Modify Parameter > Rerun and Save
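The "duplicate for variant, modify one parameter" step can be sketched as a small helper. This is an illustrative sketch, not IsoFind's API; the function and parameter names are assumptions.

```python
import copy

# Hypothetical helper: derive a one-parameter variant from a reference
# configuration ("all other things being equal").
def make_variant(reference: dict, name: str, parameter: str, value) -> dict:
    """Deep-copy the reference, then change exactly one parameter."""
    variant = copy.deepcopy(reference)
    variant["name"] = name
    variant["parameters"][parameter] = value
    return variant

reference = {"name": "Reference",
             "parameters": {"reduction_rate": 0.02, "source_desorption": 1.0}}
no_reduction = make_variant(reference, "No Reduction", "reduction_rate", 0.0)

assert no_reduction["parameters"]["reduction_rate"] == 0.0
# The reference is untouched and every other parameter is unchanged:
assert reference["parameters"]["reduction_rate"] == 0.02
assert no_reduction["parameters"]["source_desorption"] == 1.0
```

The deep copy matters: a shallow copy would let the variant silently mutate the reference's parameter dictionary, defeating the ceteris paribus discipline.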
Each scenario includes a custom name and an optional description. Parameters that differ from the reference are highlighted in the scenario list, preventing silent duplicates and facilitating later review.
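The highlighting of differing parameters boils down to a dictionary diff against the reference. A minimal sketch, with illustrative parameter names (not IsoFind's internals):

```python
# Hypothetical diff behind the highlighted scenario list.
def changed_parameters(reference: dict, variant: dict) -> dict:
    """Map each differing parameter to its (reference, variant) value pair."""
    keys = set(reference) | set(variant)
    return {k: (reference.get(k), variant.get(k))
            for k in sorted(keys) if reference.get(k) != variant.get(k)}

ref = {"reduction_rate": 0.02, "duration_years": 20}
var = {"reduction_rate": 0.0, "duration_years": 20}
print(changed_parameters(ref, var))        # {'reduction_rate': (0.02, 0.0)}
print(changed_parameters(ref, dict(ref)))  # {} → a silent duplicate, worth flagging
```

An empty diff is exactly the "silent duplicate" case the scenario list guards against.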
Comparative Display Modes
Four display modes allow scenarios to be compared according to the type of difference being analyzed.
| Mode | Visual Representation | Typical Usage |
|---|---|---|
| Synchronous Thumbnails | Multiple identical small 3D scenes, each showing a different scenario | Global visual comparison of plume shapes |
| Transparent Overlay | Isosurfaces from different scenarios displayed on the same scene | Measuring the footprint gap between scenarios |
| Difference Map | 3D field of absolute or relative difference between two scenarios | Identifying where sensitivity is highest |
| Temporal Curves | Evolution of a value at a specific point for all scenarios | Comparing future projections at a specific borehole |
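The Difference Map mode amounts to a cell-by-cell subtraction of two scenario fields, in absolute or relative terms. A minimal sketch, using flat lists as stand-ins for the 3D grid; function and argument names are assumptions:

```python
# Illustrative "Difference Map": per-cell difference between two scenario fields.
def difference_map(field_a, field_b, relative=False, eps=1e-12):
    """Absolute difference b - a per cell; relative mode divides by the
    reference value (eps avoids division by zero in empty cells)."""
    diffs = []
    for a, b in zip(field_a, field_b):
        d = b - a
        if relative:
            d /= (abs(a) + eps)
        diffs.append(d)
    return diffs

ref = [10.0, 30.0, 250.0]   # e.g. concentrations in the Reference scenario
alt = [12.0, 30.0, 5.0]     # the same cells under another hypothesis
print(difference_map(ref, alt))  # absolute: [2.0, 0.0, -245.0]
```

Cells with large values in the output are precisely where the two hypotheses disagree most, i.e. where sensitivity is highest.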
Example: Cr(VI) Case Study
Using the Cr(VI) scene presented on a previous page, four scenarios (a reference and three variants) can be quickly constructed and compared. This sequence illustrates how plausible hypotheses produce significantly different forecasts and how comparison helps identify the parameters that truly matter.
| Scenario | Difference from Reference | Plume Footprint at 20 Years | Concentration at Downgradient Borehole |
|---|---|---|---|
| Reference | Standard settings, reduction calibrated to measurements | 400 m | 30 µg/L |
| No Reduction | Reduction rate disabled | 700 m | 250 µg/L |
| ZVI Barrier at 300 m | Reducing zone injected perpendicularly | 320 m | 5 µg/L |
| Persistent Source | Slower residual desorption | 450 m | 50 µg/L |
Reviewing this table reveals two contrasting sensitivities. Whether natural reduction is active radically changes the prognosis, which validates the effort spent calibrating this parameter against data. Conversely, uncertainty about source desorption alters the projection only moderately. If a project manager must prioritize further investigation, quantifying natural reduction is therefore more cost-effective than refining the residual source stock estimate.
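The prioritization argument above can be made explicit by ranking scenarios by their relative deviation from the reference borehole concentration. The values below are the ones reported in the table; the code itself is only an illustration of the reasoning:

```python
# Rank the table's scenarios by relative deviation from the reference
# borehole concentration (30 µg/L).
reference_conc = 30.0  # µg/L at the downgradient borehole
scenarios = {
    "No Reduction": 250.0,
    "ZVI Barrier at 300 m": 5.0,
    "Persistent Source": 50.0,
}

deviation = {name: abs(c - reference_conc) / reference_conc
             for name, c in scenarios.items()}
ranked = sorted(deviation, key=deviation.get, reverse=True)
print(ranked)  # ['No Reduction', 'ZVI Barrier at 300 m', 'Persistent Source']
```

The reduction hypothesis dominates the ranking while the persistent-source variant sits last, which is the quantitative form of the prioritization stated above.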
Dedicated Report Block
Multi-hypothesis comparisons can be included in reports via a dedicated block that takes the list of scenarios and one or more display modes as input. The block automatically generates thumbnails, curves, and summary tables, along with the metadata required for traceability (parameters for each scenario, calculation dates, author).
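The block's inputs and traceability metadata could be assembled along these lines. This is a hypothetical shape, not IsoFind's actual report schema; every key and field name here is an assumption:

```python
# Hypothetical comparison report block; keys are illustrative only.
def build_comparison_block(scenarios, display_modes, author):
    """Bundle the scenario list, display modes, and the traceability
    metadata the text lists (per-scenario parameters, dates, author)."""
    return {
        "type": "multi_hypothesis_comparison",
        "scenarios": [s["name"] for s in scenarios],
        "display_modes": display_modes,
        "metadata": {
            "parameters": {s["name"]: s["parameters"] for s in scenarios},
            "calculation_dates": {s["name"]: s["run_date"] for s in scenarios},
            "author": author,
        },
    }

block = build_comparison_block(
    [{"name": "Reference", "parameters": {"reduction_rate": 0.02}, "run_date": "2026-04-20"},
     {"name": "No Reduction", "parameters": {"reduction_rate": 0.0}, "run_date": "2026-04-21"}],
    display_modes=["thumbnails", "temporal_curves"],
    author="J. Doe",
)
```

Keeping parameters and calculation dates inside the block itself is what makes the generated report self-documenting and auditable after the fact.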
This block is one of the key differentiators of the reports module: it implements an "honest science" approach (presenting alternatives and their consequences) rather than the single-assertion stance often found in traditional expert reports.
When Not to Multiply Scenarios
It can be tempting to explore every possible variant of a configuration. Several safeguards prevent the comparison from turning into useless clutter.
- Every variant must answer a specific question. If the question cannot be stated in one sentence, the variant is likely unnecessary.
- Varying multiple parameters simultaneously makes the comparison uninterpretable. The "one parameter at a time" rule should be strictly followed, except for explicit factorial sensitivity studies.
- If two scenarios produce nearly identical results, keep only one. Comparison is not for documenting the obvious.
- Beyond five or six scenarios, the reading becomes confusing. A preliminary sort should eliminate low-information variants before the final presentation.
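The "keep only one of two near-identical scenarios" rule from the list above can be sketched as a pre-presentation filter. The tolerance, result key, and function name are all assumptions chosen for the example:

```python
# Illustrative pre-presentation sort: drop variants whose headline result
# is nearly identical to one already retained.
def prune_scenarios(scenarios, key="footprint_m", tolerance=0.05):
    """Keep a scenario only if its result differs by more than `tolerance`
    (relative) from every scenario already kept."""
    kept = []
    for s in scenarios:
        if all(abs(s[key] - k[key]) / max(abs(k[key]), 1e-12) > tolerance
               for k in kept):
            kept.append(s)
    return kept

candidates = [
    {"name": "Reference", "footprint_m": 400},
    {"name": "Minor tweak", "footprint_m": 405},   # <5% from Reference → dropped
    {"name": "No Reduction", "footprint_m": 700},
]
print([s["name"] for s in prune_scenarios(candidates)])
# → ['Reference', 'No Reduction']
```

Filtering on a single headline result is deliberately crude; it is only meant to flag low-information variants for the analyst to review, not to replace that judgment.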
Multi-hypothesis comparison is not a substitute for proper initial parameterization. Multiplying scenarios with arbitrary parameters to "cover" uncertainty merely dilutes information. Every selected scenario must be plausible and technically justifiable, not just different.
Technical Limitations
- Storage is limited to 10 scenarios per project. Beyond this, obsolete scenarios must be archived or deleted.
- Transparent overlay comparison becomes difficult to read beyond 3 simultaneous scenarios.
- Scenarios saved before a major change to the scene (change in footprint or grid) are no longer directly comparable to the new setup. They remain accessible in read-only mode.
- Each scenario calculation uses the same machine; scenarios are not automatically launched in parallel, which increases total time for exhaustive comparisons.
Learn More
- Cr(VI) Case Study: Starting point for this page's comparative example.
- Interpreting Results: Coherent reading of a set of scenarios.
- FAIH Philosophy: Multi-hypothesis comparison as a concrete implementation of report honesty.
- Export and Sharing: Extracting comparisons for external presentation.