2 changes: 1 addition & 1 deletion docs/intro/installation.rst
@@ -110,7 +110,7 @@ set this for the current terminal session:
.. code-block:: console

$ export DYLD_FALLBACK_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_FALLBACK_LIBRARY_PATH"
export DYLD_FALLBACK_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_FALLBACK_LIBRARY_PATH"


Latest Stable Release
---------------------
57 changes: 42 additions & 15 deletions docs/tutorials/case_a.rst
@@ -42,17 +42,17 @@ The source code can be found in the ``tutorials/case_a`` folder or in `GitHub <
└── config.yml


* The testing region ``region.txt`` consists of a grid with two 1ºx1º bins, defined by its bottom-left nodes. The grid spacing is obtained automatically. The nodes are:
* The testing region ``region.txt`` consists of a grid with two 1ºx1º bins, defined by their bottom-left nodes (see :doc:`pycsep:concepts/regions` in **pyCSEP**). The grid spacing is obtained automatically. The nodes are:

.. literalinclude:: ../../tutorials/case_a/region.txt
:caption: tutorials/case_a/region.txt

* The testing catalog ``catalog.csep`` contains only one event and is formatted in the :meth:`~pycsep.utils.readers.csep_ascii` style (see :doc:`pycsep:concepts/catalogs`). Catalog formats are detected automatically
* The testing catalog ``catalog.csep`` contains only one event and is formatted in the :meth:`~pycsep.utils.readers.csep_ascii` style (see :doc:`pycsep:concepts/catalogs` in **pyCSEP**). Catalog formats are detected automatically.

.. literalinclude:: ../../tutorials/case_a/catalog.csep
:caption: tutorials/case_a/catalog.csep

* The forecast ``best_model.dat`` to be evaluated is written in the ``.dat`` format (see :doc:`pycsep:concepts/forecasts`). Forecast formats are detected automatically (see :mod:`floatcsep.utils.file_io.GriddedForecastParsers`)
* The forecast ``best_model.dat`` to be evaluated is written in the ``.dat`` format (see :doc:`pycsep:concepts/forecasts` in **pyCSEP**). Forecast formats are detected automatically (see :mod:`floatcsep.utils.file_io.GriddedForecastParsers`).

.. literalinclude:: ../../tutorials/case_a/best_model.dat
:caption: tutorials/case_a/best_model.dat
@@ -61,12 +61,12 @@ The source code can be found in the ``tutorials/case_a`` folder or in `GitHub <
Configuration
-------------

The experiment is defined by time-, region-, model- and test-configurations, as well as a catalog and a region. In this example, they are written together in the ``config.yml`` file.


.. important::
.. warning::

Every file path (e.g., of a catalog) specified in the ``config.yml`` file should be relative to the directory containing the configuration file.



@@ -100,7 +100,7 @@ Region
Catalog
~~~~~~~

It is defined in the ``catalog`` inset. This should only make reference to a catalog **file** or a catalog **query function** (e.g. :func:`~csep.query_comcat`). **floatCSEP** will automatically filter the catalog to the experiment time, spatial and magnitude frames:
It is defined in the ``catalog`` inset. This should reference only a catalog **file** or a catalog **query function** (see catalog loaders in :mod:`csep`). **floatCSEP** will automatically filter the catalog to the experiment's time, spatial and magnitude frames:

.. literalinclude:: ../../tutorials/case_a/config.yml
:caption: tutorials/case_a/config.yml
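The included ``config.yml`` lines are not visible in this diff. As a minimal sketch (assuming the inset holds only a file path, relative to the configuration file), the catalog inset could read:

```yaml
# Minimal sketch of the catalog inset; the path is relative to config.yml
catalog: catalog.csep
```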
@@ -109,7 +109,7 @@ Catalog

Models
~~~~~~
The model configuration is set in the ``models`` inset with a list of model names, which specify their file paths (and other attributes). Here, we just set the path as ``best_model.dat``, whose format is automatically detected.
The model configuration is set in the ``models`` inset as a list of model names, each specifying its file path (and other attributes). Here, we just set the path as ``best_model.dat``, whose format is automatically detected (see `Working with conventional gridded forecasts <https://docs.cseptesting.org/concepts/forecasts.html#working-with-conventional-gridded-forecasts>`_ in **pyCSEP**).

.. literalinclude:: ../../tutorials/case_a/config.yml
:caption: tutorials/case_a/config.yml
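As a sketch of this inset (the model name is illustrative; the actual ``config.yml`` may carry further attributes):

```yaml
# Sketch of the models inset; "Best Model" is a placeholder name
models:
  - Best Model:
      path: best_model.dat
```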
@@ -124,11 +124,22 @@ Evaluations
~~~~~~~~~~~
The experiment's evaluations are defined in the ``tests`` inset: a list of test names, each referencing its evaluation function and plotting function. These can be taken from **pyCSEP** (see :doc:`pycsep:concepts/evaluations`) or defined manually. Here, we use the Poisson consistency N-test: its function is :func:`poisson_evaluations.number_test <csep.core.poisson_evaluations.number_test>`, with the plotting function :func:`plot_poisson_consistency_test <csep.utils.plots.plot_poisson_consistency_test>`.

.. literalinclude:: ../../tutorials/case_a/config.yml
:caption: tutorials/case_a/config.yml
:language: yaml
:lines: 21-24
.. literalinclude:: ../../tutorials/case_a/config.yml
:caption: tutorials/case_a/config.yml
:language: yaml
:lines: 21-24
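The referenced lines of ``config.yml`` could look roughly like the following sketch (assuming the ``func``/``plot_func`` key names used throughout these tutorials):

```yaml
# Sketch of the tests inset; key names assumed from the tutorial files
tests:
  - Poisson N-test:
      func: poisson_evaluations.number_test
      plot_func: plot_poisson_consistency_test
```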

.. important::

See all available `Evaluation Functions <https://floatcsep.readthedocs.io/en/latest/guide/evaluation_config.html#evaluations-functions>`_, along with their corresponding `Plotting Functions <https://floatcsep.readthedocs.io/en/latest/guide/evaluation_config.html#plotting-functions>`_.

.. note::

For further details on how to configure an experiment, models and evaluations, see:

- :ref:`experiment_config`
- :ref:`model_config`
- :ref:`evaluation_config`

Running the experiment
----------------------
@@ -160,9 +171,25 @@ Results
* The complete results are summarized in ``results/report.md``


Advanced
~~~~~~~~
pyCSEP under the hood
---------------------

This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects.

**Classes and functions used in this tutorial**

- Catalog: :py:class:`csep.core.catalogs.CSEPCatalog`

- :func:`csep.load_catalog`

- Region: :py:class:`csep.core.regions.CartesianGrid2D`
- Forecast class: :py:class:`csep.core.forecasts.GriddedForecast`
- Test functions: :py:func:`csep.core.poisson_evaluations.number_test`
- Result plotting functions: :py:func:`csep.utils.plots.plot_poisson_consistency_test`

The experiment run logic can be seen in the file ``case_a.py``, which executes the same example in Python source code. The run logic of the terminal commands ``run``, ``plot`` and ``reproduce`` can be found in :mod:`floatcsep.commands.main` and can be customized by creating a script similar to ``case_a.py``.

**Where to learn more about pyCSEP:**

- Catalogs: :doc:`pycsep:concepts/catalogs`
- Regions: :doc:`pycsep:concepts/regions`
- Forecasts: :doc:`pycsep:concepts/forecasts`
- Evaluations: :doc:`pycsep:concepts/evaluations`
49 changes: 47 additions & 2 deletions docs/tutorials/case_b.rst
@@ -88,10 +88,19 @@ Evaluations
:caption: tutorials/case_b/tests.yml

.. note::
Plotting keyword arguments can be set in the ``plot_kwargs`` option - see :func:`~csep.utils.plots.plot_poisson_consistency_test` and :func:`~csep.utils.plots.plot_comparison_test` -.
Plotting keyword arguments can be set in the ``plot_kwargs`` option (see :func:`~csep.utils.plots.plot_poisson_consistency_test` and :func:`~csep.utils.plots.plot_comparison_test`).

.. important::
Comparison tests (such as the ``paired_t_test``) requires a reference model, whose name should be set in ``ref_model`` at the given test configuration.
Comparison tests (such as :py:func:`poisson_evaluations.paired_t_test <csep.core.poisson_evaluations.paired_t_test>`) require a reference model, whose name should be set in ``ref_model`` within the given test configuration. See all available `Evaluation Functions <https://floatcsep.readthedocs.io/en/latest/guide/evaluation_config.html#evaluations-functions>`_ and `Plotting Functions <https://floatcsep.readthedocs.io/en/latest/guide/evaluation_config.html#plotting-functions>`_.
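As a hypothetical sketch of such an entry (model and test names are illustrative; key names assumed from the tutorial files):

```yaml
# Hypothetical comparison-test entry in tests.yml; "Model A" is a placeholder
tests:
  - Paired T-test:
      func: poisson_evaluations.paired_t_test
      plot_func: plot_comparison_test
      ref_model: Model A
```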

.. note::

For further details on how to configure an experiment, models and evaluations, see:

- :ref:`experiment_config`
- :ref:`model_config`
- :ref:`evaluation_config`


Running the experiment
----------------------
@@ -106,3 +115,39 @@ The experiment can be run by simply navigating to the ``tutorials/case_b`` folder
This will automatically set all the file paths of the calculation (testing catalogs, evaluation results, figures) and will display a summarized report in ``results/report.md``.


pyCSEP under the hood
---------------------

This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects.

**Classes and functions used in this tutorial**

- Catalog: :py:class:`csep.core.catalogs.CSEPCatalog`

- :meth:`CSEPCatalog.write_json() <csep.core.catalogs.CSEPCatalog.write_json>`
- :meth:`CSEPCatalog.load_json() <csep.core.catalogs.CSEPCatalog.load_json>`

- Region: :py:class:`csep.core.regions.CartesianGrid2D`
- Forecast class: :py:class:`csep.core.forecasts.GriddedForecast`

- :meth:`floatcsep.utils.file_io.GriddedForecastParsers.csv`

- Test functions:

- :py:func:`csep.core.poisson_evaluations.number_test`
- :py:func:`csep.core.poisson_evaluations.spatial_test`
- :py:func:`csep.core.poisson_evaluations.magnitude_test`
- :py:func:`csep.core.poisson_evaluations.conditional_likelihood_test`
- :py:func:`csep.core.poisson_evaluations.paired_t_test`

- Result plotting functions:

- :py:func:`csep.utils.plots.plot_poisson_consistency_test`
- :py:func:`csep.utils.plots.plot_comparison_test`

**Where to learn more about pyCSEP:**

- :doc:`pycsep:concepts/catalogs`
- :doc:`pycsep:concepts/regions`
- :doc:`pycsep:concepts/forecasts`
- :doc:`pycsep:concepts/evaluations`
42 changes: 41 additions & 1 deletion docs/tutorials/case_c.rst
@@ -78,7 +78,12 @@ Evaluations

.. note::

Plot arguments (title, labels, font sizes, axes limits, etc.) can be passed as a dictionary in ``plot_args`` (see the arguments details in :func:`~csep.utils.plots.plot_poisson_consistency_test`)
Plot arguments (title, labels, font sizes, axes limits, etc.) can be passed as a dictionary in ``plot_args`` (see the argument details in :func:`~csep.utils.plots.plot_poisson_consistency_test`)

.. important::

See all available `Evaluation Functions <https://floatcsep.readthedocs.io/en/latest/guide/evaluation_config.html#evaluations-functions>`_ and their corresponding `Plotting Functions <https://floatcsep.readthedocs.io/en/latest/guide/evaluation_config.html#plotting-functions>`_.


Results
-------
@@ -99,3 +104,38 @@ now creates the result path tree for all time windows.
The report shows the temporal evaluations for all time-windows, whereas the discrete evaluations are shown only for the last time window.



pyCSEP under the hood
---------------------

This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects.

**Classes and functions used in this tutorial**

- Catalog: :py:class:`csep.core.catalogs.CSEPCatalog`

- :func:`csep.load_catalog`
- :meth:`csep.core.catalogs.CSEPCatalog.write_json`

- Region: :py:class:`csep.core.regions.CartesianGrid2D`
- Forecast class: :py:class:`csep.core.forecasts.GriddedForecast`

- :meth:`floatcsep.utils.file_io.GriddedForecastParsers.csv`

- Test functions:

- :py:func:`csep.core.poisson_evaluations.spatial_test`
- :py:func:`floatcsep.utils.helpers.sequential_likelihood`
- :py:func:`floatcsep.utils.helpers.sequential_information_gain`

- Result plotting functions:

- :py:func:`csep.utils.plots.plot_poisson_consistency_test`
- :py:func:`floatcsep.utils.helpers.plot_sequential_likelihood`

**Where to learn more about pyCSEP:**

- :doc:`pycsep:concepts/catalogs`
- :doc:`pycsep:concepts/regions`
- :doc:`pycsep:concepts/forecasts`
- :doc:`pycsep:concepts/evaluations`
37 changes: 36 additions & 1 deletion docs/tutorials/case_d.rst
@@ -61,7 +61,7 @@ Once the catalog and models have been downloaded, the experiment structure will
└── tests.yml

.. note::
In this experiment no region file is needed, because the region is encoded in the forecasts themselves (QuadTree models, see https://zenodo.org/record/6289795 and https://zenodo.org/record/6255575 ).
In this experiment no region file is needed because the region is encoded in the forecasts themselves, which are based on the QuadTree description (see `Working with quadtree-gridded forecasts <https://docs.cseptesting.org/concepts/forecasts.html#working-with-quadtree-gridded-forecasts>`_, and the Zenodo repositories https://zenodo.org/record/6289795 and https://zenodo.org/record/6255575).

Configuration
-------------
@@ -120,3 +120,38 @@ Running the experiment



pyCSEP under the hood
---------------------

This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects.

**Classes and functions used in this tutorial**

- Catalog: :py:class:`csep.core.catalogs.CSEPCatalog`

- :func:`csep.load_catalog`
- :meth:`csep.core.catalogs.CSEPCatalog.write_json`

- Region: :py:class:`csep.core.regions.QuadtreeGrid2D`
- Forecast class: :py:class:`csep.core.forecasts.GriddedForecast`

- :meth:`floatcsep.utils.file_io.GriddedForecastParsers.quadtree`

- Test functions:

- :py:func:`csep.core.poisson_evaluations.spatial_test`
- :py:func:`csep.core.poisson_evaluations.paired_t_test`
- :py:func:`floatcsep.utils.helpers.vector_poisson_t_w_test`

- Result plotting functions:

- :py:func:`csep.utils.plots.plot_poisson_consistency_test`
- :py:func:`csep.utils.plots.plot_comparison_test`
- :py:func:`floatcsep.utils.helpers.plot_matrix_comparative_test`

**Where to learn more about pyCSEP:**

- :doc:`pycsep:concepts/catalogs`
- :doc:`pycsep:concepts/regions`
- :doc:`pycsep:concepts/forecasts`
- :doc:`pycsep:concepts/evaluations`
45 changes: 42 additions & 3 deletions docs/tutorials/case_e.rst
@@ -97,9 +97,6 @@ Models

The forecasts are defined in ``[Earthquakes / 10-years]``, which is specified with the ``forecast_unit`` option (the default is ``forecast_unit: 1``).
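As a hypothetical sketch of a model entry with a 10-year forecast unit (the model name and path are placeholders, not from this tutorial):

```yaml
# Hypothetical model entry; "Some Model" and its path are illustrative
models:
  - Some Model:
      path: forecasts/some_model.xml
      forecast_unit: 10
```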

.. note::

The ``use_db`` flag allows ``floatcsep`` to transform the forecasts into a database (HDF5), which speeds up the calculations.

Post-Process
~~~~~~~~~~~~
@@ -144,6 +141,48 @@ Plot command

and re-run with the ``plot`` command. A forecast figure will re-appear in ``results/{window}/forecasts`` with a different colormap. Additional forecast and catalog plotting options can be found in the **pyCSEP** functions :func:`csep.utils.plots.plot_spatial_dataset` and :func:`csep.utils.plots.plot_catalog`.
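A hypothetical sketch of such a post-process inset (key names are assumed, not verbatim from the tutorial; check :ref:`postprocess` for the actual schema):

```yaml
# Hypothetical postprocess inset; key names assumed, cmap passed to pyCSEP plotting
postprocess:
  plot_forecasts:
    cmap: magma
```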

.. note::

For further details on how to configure the **post-process** of an experiment, see:

- :ref:`postprocess`


pyCSEP under the hood
---------------------

This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects.

**Classes and functions used in this tutorial**

- Catalog: :py:class:`csep.core.catalogs.CSEPCatalog`

- :meth:`csep.core.catalogs.CSEPCatalog.load_json`
- :meth:`csep.core.catalogs.CSEPCatalog.write_json`

- Region: :py:class:`csep.core.regions.italy_csep_region`
- Forecast class: :py:class:`csep.core.forecasts.GriddedForecast`

- :meth:`floatcsep.utils.file_io.GriddedForecastParsers.xml`

- Test functions:

- :py:func:`csep.core.poisson_evaluations.spatial_test`
- :py:func:`floatcsep.utils.helpers.sequential_likelihood`

- Result plotting functions:

- :py:func:`csep.utils.plots.plot_poisson_consistency_test`
- :py:func:`floatcsep.utils.helpers.plot_sequential_likelihood`


**Where to learn more about pyCSEP:**

- :doc:`pycsep:concepts/catalogs`
- :doc:`pycsep:concepts/regions`
- :doc:`pycsep:concepts/forecasts`
- :doc:`pycsep:concepts/evaluations`


.. _case_e_references:
