4 changes: 0 additions & 4 deletions docs/getting_started/index.md
@@ -9,7 +9,3 @@ Getting started with parcels is easy; here you will find:
 🎓 Output tutorial <tutorial_output.ipynb>
 📖 Conceptual workflow <explanation_concepts.md>
 ```
-
-```{note}
-TODO: Add links to Reference API in quickstart tutorial and concepts explanation
-```
26 changes: 9 additions & 17 deletions docs/getting_started/tutorial_quickstart.md
@@ -41,8 +41,8 @@ ds_fields
 As we can see, the reanalysis dataset contains eastward velocity `uo`, northward velocity `vo`, potential temperature
 (`thetao`) and salinity (`so`) fields.
 
-These hydrodynamic fields need to be stored in a `parcels.FieldSet` object. Parcels provides tooling to parse many types
-of models or observations into such a `parcels.FieldSet` object. Here, we use `FieldSet.from_copernicusmarine()`, which
+These hydrodynamic fields need to be stored in a {py:obj}`parcels.FieldSet` object. Parcels provides tooling to parse many types
+of models or observations into such a `parcels.FieldSet` object. Here, we use {py:obj}`FieldSet.from_copernicusmarine()`, which
 recognizes the standard names of a velocity field:
 
 ```{code-cell}
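The code cell above is truncated in this diff. As a rough sketch of the step it describes, assuming `FieldSet.from_copernicusmarine()` accepts the opened xarray dataset (the diff does not show the actual call):

```python
import parcels

# Sketch only: assumes from_copernicusmarine() takes the xarray.Dataset
# (`ds_fields`) opened earlier in the tutorial; the real cell is not shown here.
fieldset = parcels.FieldSet.from_copernicusmarine(ds_fields)
```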
@@ -61,10 +61,10 @@ velocity = ds_fields.isel(time=0, depth=0).plot.quiver(x="longitude", y="latitud
 Now that we have created a `parcels.FieldSet` object from the hydrodynamic data, we need to provide our second input:
 the virtual particles for which we will calculate the trajectories.
 
-We need to create a `parcels.ParticleSet` object with the particles' initial time and position. The `parcels.ParticleSet`
+We need to create a {py:obj}`parcels.ParticleSet` object with the particles' initial time and position. The `parcels.ParticleSet`
 object also needs to know about the `FieldSet` in which the particles "live". Finally, we need to specify the type of
-`parcels.Particle` we want to use. The default particles have `time`, `z`, `lat`, and `lon`, but you can easily add
-other `Variables` such as size, temperature, or age to create your own particles to mimic plastic or an [ARGO float](../user_guide/examples/tutorial_Argofloats.ipynb).
+{py:obj}`parcels.ParticleClass` we want to use. The default particles have `time`, `z`, `lat`, and `lon`, but you can easily add
+other {py:obj}`parcels.Variable`s such as size, temperature, or age to create your own particles to mimic plastic or an [ARGO float](../user_guide/examples/tutorial_Argofloats.ipynb).
 
 ```{code-cell}
 # Particle locations and initial time
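The particle-creation cell is likewise truncated here. A minimal sketch, assuming `ParticleSet` takes `fieldset`, `lon`, `lat`, and `time` keywords as in earlier Parcels releases (the exact v4 call may differ):

```python
import numpy as np
import parcels

# Sketch only: release three particles at one initial time.
# Keyword names are assumptions based on earlier Parcels releases.
lon = [3.0, 3.2, 3.4]     # illustrative longitudes (degrees East)
lat = [52.0, 52.1, 52.2]  # illustrative latitudes (degrees North)
time = np.datetime64("2022-01-01")

pset = parcels.ParticleSet(fieldset=fieldset, lon=lon, lat=lat, time=time)
```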
@@ -90,13 +90,9 @@ ax.scatter(lon,lat,s=40,c='w',edgecolors='r');
 ## Compute: `Kernel`
 
 After setting up the input data and particle start locations and times, we need to specify what calculations to do with
-the particles. These calculations, or numerical integrations, will be performed by what we call a `Kernel`, operating on
+the particles. These calculations, or numerical integrations, will be performed by what we call a {py:obj}`parcels.Kernel`, operating on
 all particles in the `ParticleSet`. The most common calculation is the advection of particles through the velocity field.
-Parcels comes with a number of standard kernels, from which we will use the Runge-Kutta advection kernel `AdvectionRK2`:
-
-```{note}
-TODO: link to a list of included kernels
-```
+Parcels comes with a number of common {py:obj}`parcels.kernels`, from which we will use the Runge-Kutta advection kernel {py:obj}`parcels.kernels.AdvectionRK2`:
 
 ```{code-cell}
 kernels = [parcels.kernels.AdvectionRK2]
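Custom calculations can be added alongside the built-in kernels. A hypothetical sketch, assuming the v3-style `(particle, fieldset, time)` kernel signature, which this diff does not confirm for v4:

```python
# Hypothetical kernel sketch; the exact v4 kernel signature may differ.
def SampleTemperature(particle, fieldset, time):
    # Sample potential temperature at the particle position and store it
    # in a custom particle Variable (which must be added to the Particle).
    particle.theta = fieldset.thetao[time, particle.z, particle.lat, particle.lon]

# Kernels run in list order for every particle at every timestep.
kernels = [parcels.kernels.AdvectionRK2, SampleTemperature]
```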
@@ -105,7 +101,7 @@ kernels = [parcels.kernels.AdvectionRK2]
 ## Prepare output: `ParticleFile`
 
 Before starting the simulation, we must define where and how frequent we want to write the output of our simulation.
-We can define this in a `ParticleFile` object:
+We can define this in a {py:obj}`parcels.ParticleFile` object:
 
 ```{code-cell}
 output_file = parcels.ParticleFile("output-quickstart.zarr", outputdt=np.timedelta64(1, "h"))
@@ -117,18 +113,14 @@ the `outputdt` argument so that it captures the smallest timescales of our inter

 ## Run Simulation: `ParticleSet.execute()`
 
-Finally, we can run the simulation by _executing_ the `ParticleSet` using the specified list of `kernels`.
+Finally, we can run the simulation by _executing_ the `ParticleSet` using the specified list of `kernels`. This is done using the {py:meth}`parcels.ParticleSet.execute()` method.
 Additionally, we need to specify:
 
 - the `runtime`: for how long we want to simulate particles.
 - the `dt`: the timestep with which to perform the numerical integration in the `kernels`. Depending on the numerical
 integration scheme, the accuracy of our simulation will depend on `dt`. Read [this notebook](https://github.com/Parcels-code/10year-anniversary-session2/blob/8931ef69577dbf00273a5ab4b7cf522667e146c5/advection_and_windage.ipynb)
 to learn more about numerical accuracy.
-
-```{note}
-TODO: add Michaels 10-years Parcels notebook to the user guide
-```
 
 ```{code-cell}
 :tags: [hide-output]
 pset.execute(
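The `pset.execute(` call is cut off above. An illustrative completion; the keyword names `runtime`, `dt`, and `output_file` follow the surrounding prose but are assumptions, not the tutorial's exact cell:

```python
import numpy as np

# Sketch only: values and keyword names are illustrative.
pset.execute(
    kernels,
    runtime=np.timedelta64(5, "D"),  # simulate for five days
    dt=np.timedelta64(10, "m"),      # 10-minute integration timestep
    output_file=output_file,         # written every `outputdt` (1 hour)
)
```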
4 changes: 0 additions & 4 deletions docs/user_guide/index.md
@@ -17,10 +17,6 @@ The tutorials written for Parcels v3 are currently being updated for Parcels v4.

 ## How to
 
-```{note}
-TODO: Add links to Reference API throughout
-```
-
 ```{note}
 **Migrate from v3 to v4** using [this migration guide](v4-migration.md)
 ```
1 change: 1 addition & 0 deletions src/parcels/_core/basegrid.py
@@ -60,6 +60,7 @@ def search(self, z: float, y: float, x: float, ei=None) -> dict[str, tuple[int,
 - Unstructured grid: {"Z": (zi, zeta), "FACE": (fi, bcoords)}
 
 Where:
+
 - index (int): The cell position of the particles along the given axis
 - barycentric_coordinates (float or np.ndarray): The coordinates defining
   the particles positions within the grid cell. For structured grids, this
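As a hypothetical illustration of consuming the mapping that `search()` returns, using the unstructured-grid form quoted in the docstring (variable names here are made up, not part of the diff):

```python
# Hypothetical usage sketch of BaseGrid.search() for an unstructured grid,
# matching the {"Z": (zi, zeta), "FACE": (fi, bcoords)} form in the docstring.
result = grid.search(z=5.0, y=52.0, x=3.0)

zi, zeta = result["Z"]        # vertical cell index and position within it
fi, bcoords = result["FACE"]  # face index and barycentric coordinates
```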
6 changes: 2 additions & 4 deletions src/parcels/_datasets/__init__.py
@@ -11,8 +11,7 @@

 Developers, note that you should only add functions that create idealised datasets to this subpackage if they are (a) quick to generate, and (b) only use dependencies already shipped with Parcels. No data files should be added to this subpackage. Real world data files should be added to the `Parcels-code/parcels-data` repository on GitHub.
 
-Parcels Dataset Philosophy
--------------------------
+**Parcels Dataset Philosophy**
 
 When adding datasets, there may be a tension between wanting to add a specific dataset or wanting to add machinery to generate completely parameterised datasets (e.g., with different grid resolutions, with different ranges, with different datetimes etc.). There are trade-offs to both approaches:
 
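To make the "machinery to generate parameterised datasets" option concrete, a generator might look like the following sketch: a hypothetical helper, not part of this diff, assuming only numpy and xarray, which Parcels already depends on:

```python
import numpy as np
import xarray as xr

# Hypothetical sketch of a parameterised dataset generator; not part of this
# diff. It trades the reviewability of a hard-coded dataset for flexibility.
def make_uniform_flow(nx: int = 10, ny: int = 10, u0: float = 1.0) -> xr.Dataset:
    """Create an idealised dataset with a uniform eastward flow."""
    lon = np.linspace(0.0, 10.0, nx)
    lat = np.linspace(45.0, 55.0, ny)
    shape = (ny, nx)
    return xr.Dataset(
        {
            "U": (("lat", "lon"), np.full(shape, u0)),  # eastward velocity
            "V": (("lat", "lon"), np.zeros(shape)),     # northward velocity
        },
        coords={"lon": lon, "lat": lat},
    )
```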
@@ -31,8 +30,7 @@

 Sometimes we may want to test Parcels against a whole range of datasets varying in a certain way - to ensure Parcels works as expected. For these, we should add machinery to create generated datasets.
 
-Structure
---------
+**Structure**
 
 This subpackage is broken down into structured and unstructured parts. Each of these have common submodules:
