<li><b>Add the module to <span class="tt">docs/module_categories.json</span></b> so it appears in this page</li>
</ol>
<p>Follow the pattern of existing modules like <span class="tt">m_body_forces</span> (simple) or <span class="tt">m_viscous</span> (more involved) as a template.</p>
<p>The table lists formatted database output parameters. These parameters select which variables are written out during the simulation, the file types and formats of the output data, and options for post-processing.</p>
<ul>
<li><span class="tt">format</span> specifies the file format of the data files output by MFC via an integer: <span class="tt">format = 1</span> selects the Silo-HDF5 format and <span class="tt">format = 2</span> the binary format. Both formats are supported by <span class="tt">./mfc.sh viz</span> (see <a class="el" href="visualization.html" title="Flow visualization">Flow Visualization</a>). Silo-HDF5 requires the h5py Python package; binary has no extra dependencies.</li>
<li><span class="tt">precision</span> specifies the floating-point format of the data files output by MFC via an integer: <span class="tt">precision = 1</span> selects single precision and <span class="tt">precision = 2</span> double precision.</li>
<li><span class="tt">parallel_io</span> activates parallel input/output (I/O) of data files. It is highly recommended to activate this option in a parallel environment. With parallel I/O, MFC inputs and outputs a single file throughout pre-process, simulation, and post-process, regardless of the number of processors used. Parallel I/O enables the use of different numbers of processors in each process (e.g., simulation data generated using 1000 processors can be post-processed using a single processor).</li>
<li><span class="tt">file_per_process</span> deactivates shared-file MPI-IO and activates file-per-process MPI-IO. The default behavior is to use a shared file. File-per-process is useful when running on >10K ranks. If <span class="tt">file_per_process</span> is true, then pre_process, simulation, and post_process must be run with the same number of ranks.</li>
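<p>For illustration, the output options above might appear in a case file roughly as follows. This is a sketch, not a complete case file: only the output-related keys discussed above are shown, and a real MFC Python case file defines many more parameters.</p>

```python
# Sketch of the formatted-database output block of an MFC Python case file.
# Only the output-related keys discussed above are shown; a real case file
# defines many more parameters (domain, fluids, time stepping, ...).
case = {
    "format": 1,          # 1: Silo-HDF5, 2: binary
    "precision": 2,       # 1: single precision, 2: double precision
    "parallel_io": "T",   # recommended in a parallel environment
    # 'file_per_process' left at its default (shared-file MPI-IO)
}

# Basic sanity checks mirroring the constraints described above.
assert case["format"] in (1, 2)
assert case["precision"] in (1, 2)
```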
<p>💡 <b>Tip:</b> If you encounter a validation error, check the relevant section above or review <a href="https://github.com/MFlowCode/MFC/blob/master/toolchain/mfc/case_validator.py"><span class="tt">case_validator.py</span></a> for complete validation logic.</p>
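<p>As a rough sketch of the kind of check such a validator performs — the function name and error messages below are illustrative, not MFC's actual code:</p>

```python
# Illustrative validation of the output parameters described above;
# names and messages are hypothetical, not taken from case_validator.py.
def validate_output_params(params: dict) -> list:
    """Return a list of human-readable validation errors (empty if valid)."""
    errors = []
    if params.get("format") not in (1, 2):
        errors.append("format must be 1 (Silo-HDF5) or 2 (binary)")
    if params.get("precision") not in (1, 2):
        errors.append("precision must be 1 (single) or 2 (double)")
    return errors

# Example: an invalid 'format' value is reported.
print(validate_output_params({"format": 3, "precision": 1}))
```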
<p>You can navigate Docker entirely from the command line. From a bash-like shell, pull from the <a href="https://hub.docker.com/r/sbryngelson/mfc">sbryngelson/mfc</a> repository and run the latest MFC container: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p><b>Selecting OS/ARCH:</b> Docker selects a compatible architecture by default when pulling and running a container. If something seems to go wrong (Docker may suggest this itself), you can specify your platform manually. For example, <span class="tt">linux/amd64</span> covers most x86-64 *nix systems, and <span class="tt">linux/arm64</span> covers Apple Silicon and other Arm-based *nix devices. You can specify it like this: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash --platform linux/amd64 sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p><b>What's Next?</b></p>
<p>Once a container has started, the primary working directory is <span class="tt">/opt/MFC</span>, and all necessary files are located there. You can check out the usual MFC documentation, such as the <a class="el" href="examples.html" title="Example Cases">Example Cases</a>, to get familiar with running cases. Then, review the <a class="el" href="case.html" title="Case Files">Case Files</a> to write a custom case file.</p>
<p>Let's take a closer look at running MFC within a container. Kick off a CPU container: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p> Or start a GPU container: </p><div class="fragment"><div class="line">docker run -it --rm --gpus all --entrypoint bash sbryngelson/mfc:latest-gpu</div>
</div><!-- fragment --><p><b>Shared Memory</b></p>
<p>If you run a job with multiple MPI ranks, you may encounter <em>MPI memory binding errors</em>. These can manifest as failed tests (launched via <span class="tt">./mfc.sh test</span>) or failures when running cases with <span class="tt">./mfc.sh run -n X <path/to/case.py></span> where <span class="tt">X > 1</span>. To avoid this issue, increase the shared memory size so MPI keeps working: </p><div class="fragment"><div class="line">docker run -it --rm --entrypoint bash --shm-size=<e.g., 4gb> sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p> or avoid MPI altogether via <span class="tt">./mfc.sh <your commands> --no-mpi</span>.</p>
<p>On the source machine, pull and save the image: </p><div class="fragment"><div class="line">docker pull sbryngelson/mfc:latest-cpu</div>
<div class="line">docker save -o mfc:latest-cpu.tar sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><p> On the target machine, load and run the image: </p><div class="fragment"><div class="line">docker load -i mfc:latest-cpu.tar</div>
<div class="line">docker run -it --rm --entrypoint bash sbryngelson/mfc:latest-cpu</div>
</div><!-- fragment --><h2 class="doxsection"><a class="anchor" id="autotoc_md127"></a>
Using Supercomputers/Clusters via Apptainer/Singularity</h2>
<p>On the source machine, pull and translate the image into <span class="tt">.sif</span> format: </p><div class="fragment"><div class="line">apptainer build mfc:latest-gpu.sif docker://sbryngelson/mfc:latest-gpu</div>
</div><!-- fragment --><p> On the target machine, load and start an interactive shell: </p><div class="fragment"><div class="line">apptainer shell --nv --fakeroot --writable-tmpfs --bind "$PWD":/mnt mfc:latest-gpu.sif</div>
</div><!-- fragment --><h3 class="doxsection"><a class="anchor" id="autotoc_md130"></a>
Slurm Job</h3>
<p>Below is an example Slurm batch job script. Refer to your machine's user guide for instructions on properly loading and using Apptainer. </p><div class="fragment"><div class="line">#!/bin/bash</div>
</div><!-- fragment --><h3 class="doxsection"><a class="anchor" id="autotoc_md135"></a>
Architecture Support</h3>
<p>You can specify your architecture with <span class="tt">--platform <os>/<arch></span>, typically either <span class="tt">linux/amd64</span> or <span class="tt">linux/arm64</span>. If you are unsure, Docker automatically selects the image compatible with your system architecture. If native support isn't available, QEMU emulation is used for the following architectures, albeit with degraded performance. </p><div class="fragment"><div class="line">linux/amd64</div>