# Code Architecture {#architecture}

This page explains how MFC's source code is organized, how data flows through the solver, and where to find things. Read this before diving into the source.

## Three-Phase Pipeline

MFC runs as three separate executables that communicate via binary files on disk:

```
pre_process ──> simulation ──> post_process
 (grid +         (time          (derived
  initial         advance)       quantities +
  conditions)                    visualization)
```

| Phase | Entry Point | What It Does |
|-------|-------------|--------------|
| **Pre-Process** | `src/pre_process/p_main.f90` | Generates the computational grid and initial conditions from patch definitions. Writes binary grid and state files. |
| **Simulation** | `src/simulation/p_main.fpp` | Reads the initial state and advances the governing equations in time. Periodically writes solution snapshots. |
| **Post-Process** | `src/post_process/p_main.fpp` | Reads snapshots, computes derived quantities (vorticity, Schlieren, etc.), and writes Silo/HDF5 files for VisIt or ParaView. |

Each phase is an independent MPI program. The simulation phase is where nearly all compute time is spent.

## Directory Layout

```
src/
  common/         Shared modules used by all three phases
  pre_process/    Pre-process source (grid generation, patch construction)
  simulation/     Simulation source (solver core, physics models)
  post_process/   Post-process source (derived quantities, formatted I/O)
```

Shared modules in `src/common/` include MPI communication, derived types, variable conversion, and utility functions. They are compiled into each phase.

## Key Data Structures

Two arrays carry the solution through the entire simulation:

| Variable | Contents | When Used |
|----------|----------|-----------|
| `q_cons_vf` | **Conservative** variables: \f$\alpha\rho\f$, \f$\rho u\f$, \f$\rho v\f$, \f$\rho w\f$, \f$E\f$, \f$\alpha\f$ | Storage, time integration, I/O |
| `q_prim_vf` | **Primitive** variables: \f$\rho\f$, \f$u\f$, \f$v\f$, \f$w\f$, \f$p\f$, \f$\alpha\f$ | Reconstruction, Riemann solving, physics |

Both are `vector_field` types (defined in `m_derived_types`), which are arrays of `scalar_field`. Each `scalar_field` wraps a 3D real array `sf(0:m, 0:n, 0:p)` representing one variable on the grid.
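The shape of these types can be pictured with the minimal sketch below. It is not the literal definition from `m_derived_types` (the real types carry additional members and GPU attributes); the member name `vf` and the example sizes are assumptions made for illustration.

```
! Minimal, illustrative sketch of the field types described above.
module m_field_sketch
    implicit none

    ! One variable on the local grid: a 3D array over 0:m x 0:n x 0:p cells
    ! (ghost cells extend these bounds in the real code).
    type scalar_field
        real(kind(0d0)), allocatable :: sf(:, :, :)
    end type scalar_field

    ! A bundle of scalar fields, e.g. all sys_size conservative variables.
    type vector_field
        type(scalar_field), allocatable :: vf(:)
    end type vector_field
end module m_field_sketch

program demo
    use m_field_sketch
    implicit none
    type(vector_field) :: q_cons_vf
    integer :: i, m, n, p, sys_size

    m = 63; n = 63; p = 0; sys_size = 7   ! example sizes only

    allocate (q_cons_vf%vf(1:sys_size))
    do i = 1, sys_size
        allocate (q_cons_vf%vf(i)%sf(0:m, 0:n, 0:p))
        q_cons_vf%vf(i)%sf = 0d0
    end do

    ! Variable i at cell (j, k, l) is q_cons_vf%vf(i)%sf(j, k, l):
    print *, q_cons_vf%vf(1)%sf(0, 0, 0)
end program demo
```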

The index layout within `q_cons_vf` depends on the flow model:

```
For model_eqns == 2 (5-equation, multi-fluid):

  Index:    1 .. num_fluids | num_fluids+1 .. num_fluids+num_vels | E_idx  | adv_idx
  Meaning:  alpha*rho_k     | momentum components                 | energy | volume fractions
```

Additional variables are appended for bubbles, elastic stress, magnetic fields, or chemistry species when those models are enabled. The total count is `sys_size`.
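As a concrete example of reading this layout, the sketch below unpacks the mixture density, velocity, and total energy at one cell, reusing the illustrative types from the previous sketch. The index arithmetic (`E_idx = num_fluids + num_vels + 1`) follows the table above; the real bookkeeping lives in `m_global_parameters`, and the subroutine name is hypothetical.

```
! Hedged sketch: unpacking conservative variables at cell (j, k, l) for
! model_eqns == 2, following the index layout shown above.
subroutine s_unpack_cell_sketch(q_cons_vf, j, k, l, num_fluids, num_vels, rho, vel, energy)
    use m_field_sketch   ! the illustrative scalar_field/vector_field types above
    implicit none
    type(vector_field), intent(in) :: q_cons_vf
    integer, intent(in) :: j, k, l, num_fluids, num_vels
    real(kind(0d0)), intent(out) :: rho, vel(num_vels), energy
    integer :: i, E_idx

    ! Mixture density: sum of the partial densities alpha_k * rho_k
    rho = 0d0
    do i = 1, num_fluids
        rho = rho + q_cons_vf%vf(i)%sf(j, k, l)
    end do

    ! Velocity components: conserved momentum divided by mixture density
    do i = 1, num_vels
        vel(i) = q_cons_vf%vf(num_fluids + i)%sf(j, k, l)/rho
    end do

    ! Total energy sits immediately after the momentum block
    E_idx = num_fluids + num_vels + 1
    energy = q_cons_vf%vf(E_idx)%sf(j, k, l)
end subroutine s_unpack_cell_sketch
```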

## The Simulation Loop

The simulation advances the solution through this call chain each time step:

```
p_main (time-step loop)
 └─ s_perform_time_step
     ├─ s_compute_dt              [adaptive CFL time step]
     └─ s_tvd_rk                  [Runge-Kutta stages]
         ├─ s_compute_rhs         [assemble dq/dt]
         │   ├─ s_convert_conservative_to_primitive_variables
         │   ├─ s_populate_variables_buffers   [MPI halo exchange]
         │   └─ for each direction (x, y, z):
         │       ├─ s_reconstruct_cell_boundary_values  [WENO]
         │       ├─ s_riemann_solver                    [HLL/HLLC/HLLD]
         │       ├─ s_compute_advection_source_term     [flux divergence]
         │       └─ (physics source terms: viscous, bubbles, etc.)
         ├─ RK update: q_cons = weighted combination of stages
         ├─ s_apply_bodyforces       [if enabled]
         ├─ s_pressure_relaxation    [if 6-equation model]
         └─ s_ibm_correct_state      [if immersed boundaries]
```

### What happens at each stage

1. **Conservative → Primitive**: Convert stored `q_cons_vf` to `q_prim_vf` (density, velocity, pressure) using the equation of state. This is done by `m_variables_conversion`.

2. **MPI Halo Exchange**: Ghost cells at subdomain boundaries are filled by communicating with neighbor ranks. Handled by `m_mpi_proxy`.

3. **WENO Reconstruction** (`m_weno`): For each coordinate direction, reconstruct left and right states at cell faces from cell-average primitives using high-order weighted essentially non-oscillatory stencils.

4. **Riemann Solver** (`m_riemann_solvers`): At each cell face, solve the Riemann problem between left and right states to compute intercell fluxes. Available solvers: HLL, HLLC, HLLD.

5. **Flux Differencing** (`m_rhs`): Accumulate the RHS as \f$\partial q / \partial t = -\frac{1}{\Delta x}(F_{j+1/2} - F_{j-1/2})\f$ plus source terms (viscous stress, surface tension, bubble dynamics, body forces, etc.).

6. **Runge-Kutta Update** (`m_time_steppers`): Combine the RHS with the current state using TVD Runge-Kutta coefficients (1st, 2nd, or 3rd order SSP); the third-order stage weights are sketched below.
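The stage combinations in step 6 follow the standard strong-stability-preserving (SSP) Runge-Kutta forms. Below is a schematic of the third-order variant written for a single flat array `q` and an RHS function `rhs`, both hypothetical stand-ins for `q_cons_vf` and `s_compute_rhs`; MFC applies the same weights to every `scalar_field`.

```
! Schematic SSP-RK3 step (Shu-Osher form) acting on a flat array q.
subroutine s_ssp_rk3_sketch(q, dt, rhs)
    implicit none
    real(kind(0d0)), intent(inout) :: q(:)
    real(kind(0d0)), intent(in) :: dt
    interface
        function rhs(q) result(dqdt)            ! dq/dt = L(q)
            real(kind(0d0)), intent(in) :: q(:)
            real(kind(0d0)) :: dqdt(size(q))
        end function rhs
    end interface
    real(kind(0d0)), dimension(size(q)) :: q1, q2

    ! Stage 1: forward Euler predictor
    q1 = q + dt*rhs(q)

    ! Stage 2: weighted combination of q^n and the stage-1 result
    q2 = 0.75d0*q + 0.25d0*(q1 + dt*rhs(q1))

    ! Stage 3: final combination gives q^{n+1}
    q = (1d0/3d0)*q + (2d0/3d0)*(q2 + dt*rhs(q2))
end subroutine s_ssp_rk3_sketch
```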

## Module Map

MFC has ~80 Fortran modules organized by function. Here is where to look for what:

<!-- MODULE_MAP -->

## MPI Parallelization

The computational domain is decomposed into subdomains via `MPI_Cart_create`. Each rank owns a contiguous block of cells in (x, y, z). Ghost cells of width `buff_size` surround each subdomain and are filled by halo exchange before each RHS evaluation.
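The decomposition step can be pictured with the hedged sketch below: `MPI_Dims_create` picks a balanced rank grid, `MPI_Cart_create` builds the Cartesian communicator, and each rank reads its block coordinates from it. Variable names are illustrative, not MFC's.

```
! Hedged sketch of a 3D Cartesian domain decomposition with MPI.
program cart_decomp_sketch
    use mpi
    implicit none
    integer :: ierr, num_ranks, rank, comm_cart
    integer :: proc_dims(3), coords(3)
    logical :: periodic(3), reorder

    call MPI_Init(ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, num_ranks, ierr)

    ! Let MPI pick a balanced rank grid, e.g. 8 ranks -> 2 x 2 x 2
    proc_dims = 0
    call MPI_Dims_create(num_ranks, 3, proc_dims, ierr)

    periodic = .false.   ! periodicity follows the case's boundary conditions
    reorder = .true.
    call MPI_Cart_create(MPI_COMM_WORLD, 3, proc_dims, periodic, reorder, comm_cart, ierr)

    ! Each rank's (x, y, z) block index; halo-exchange neighbors come from
    ! MPI_Cart_shift on comm_cart.
    call MPI_Comm_rank(comm_cart, rank, ierr)
    call MPI_Cart_coords(comm_cart, rank, 3, coords, ierr)

    call MPI_Finalize(ierr)
end program cart_decomp_sketch
```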

On GPUs, the same domain decomposition applies. GPU kernels operate on the local subdomain, with explicit host-device transfers for MPI communication (unless GPU-aware MPI / RDMA is available).
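The host-device traffic around communication looks schematically like the sketch below, written with OpenACC-style directives purely as an illustration of the pattern; buffer, rank, and subroutine names are hypothetical. With GPU-aware MPI the two `update` directives are skipped and device pointers are handed to MPI directly (e.g., via `!$acc host_data use_device`).

```
! Illustrative halo exchange when GPU-aware MPI is NOT available:
! outgoing halos are staged on the host, exchanged, then pushed back.
subroutine s_halo_exchange_sketch(send_buf, recv_buf, comm_cart, left, right)
    use mpi
    implicit none
    real(kind(0d0)), intent(inout) :: send_buf(:), recv_buf(:)
    integer, intent(in) :: comm_cart, left, right
    integer :: ierr

    !$acc update host(send_buf)      ! device -> host copy of outgoing halo

    call MPI_Sendrecv(send_buf, size(send_buf), MPI_DOUBLE_PRECISION, right, 0, &
                      recv_buf, size(recv_buf), MPI_DOUBLE_PRECISION, left, 0, &
                      comm_cart, MPI_STATUS_IGNORE, ierr)

    !$acc update device(recv_buf)    ! host -> device copy of incoming halo
end subroutine s_halo_exchange_sketch
```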

## Adding New Physics

To add a new source term or physics model:

1. **Create a module** in `src/simulation/` (e.g., `m_my_model.fpp`)
2. **Add initialization/finalization** subroutines called from `m_start_up`
3. **Add RHS contributions** called from the dimensional loop in `m_rhs:s_compute_rhs`
4. **Add parameters** to `m_global_parameters` and input validation to `m_checker`
5. **Add a module-level brief** (enforced by the linter in `lint_docs.py`)
6. **Add the module to `docs/module_categories.json`** so it appears in this page

Use existing modules such as `m_body_forces` (simple) or `m_viscous` (more involved) as templates; a minimal skeleton is sketched below.
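The skeleton covers steps 1–3 only. All subroutine and argument names are illustrative rather than a prescribed interface, and the Fypp preprocessor layer of real `.fpp` files is omitted.

```
!> @brief Hedged skeleton for a new physics source-term module
!> (illustrative names; not a prescribed interface).
module m_my_model

    use m_derived_types    !< scalar_field / vector_field definitions

    implicit none
    private
    public :: s_initialize_my_model_module, &
              s_compute_my_model_rhs, &
              s_finalize_my_model_module

contains

    !> Allocate work arrays; called once from m_start_up.
    subroutine s_initialize_my_model_module
    end subroutine s_initialize_my_model_module

    !> Accumulate this model's source terms into the RHS; called from
    !> m_rhs:s_compute_rhs.
    subroutine s_compute_my_model_rhs(q_prim_vf, rhs_vf)
        type(vector_field), intent(in) :: q_prim_vf
        type(vector_field), intent(inout) :: rhs_vf
        ! ... add contributions to rhs_vf%vf(:)%sf here ...
    end subroutine s_compute_my_model_rhs

    !> Release work arrays; called once at shutdown.
    subroutine s_finalize_my_model_module
    end subroutine s_finalize_my_model_module

end module m_my_model
```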