diff --git a/docs/intro/installation.rst b/docs/intro/installation.rst index f50a20e..b929e55 100644 --- a/docs/intro/installation.rst +++ b/docs/intro/installation.rst @@ -110,7 +110,7 @@ set this for the current terminal session: .. code-block:: console $ export DYLD_FALLBACK_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_FALLBACK_LIBRARY_PATH" -export DYLD_FALLBACK_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_FALLBACK_LIBRARY_PATH" + Latest Stable Release --------------------- diff --git a/docs/tutorials/case_a.rst b/docs/tutorials/case_a.rst index 36fe5aa..6d5010c 100644 --- a/docs/tutorials/case_a.rst +++ b/docs/tutorials/case_a.rst @@ -42,17 +42,17 @@ The source code can be found in the ``tutorials/case_a`` folder or in `GitHub < └── config.yml -* The testing region ``region.txt`` consists of a grid with two 1ºx1º bins, defined by its bottom-left nodes. The grid spacing is obtained automatically. The nodes are: +* The testing region ``region.txt`` consists of a grid with two 1ºx1º bins, defined by its bottom-left nodes (see :doc:`pycsep:concepts/regions` in **pyCSEP**). The grid spacing is obtained automatically. The nodes are: .. literalinclude:: ../../tutorials/case_a/region.txt :caption: tutorials/case_a/region.txt -* The testing catalog ``catalog.csep`` contains only one event and is formatted in the :meth:`~pycsep.utils.readers.csep_ascii` style (see :doc:`pycsep:concepts/catalogs`). Catalog formats are detected automatically +* The testing catalog ``catalog.csep`` contains only one event and is formatted in the :meth:`~pycsep.utils.readers.csep_ascii` style (see :doc:`pycsep:concepts/catalogs` in **pyCSEP**). Catalog formats are detected automatically .. literalinclude:: ../../tutorials/case_a/catalog.csep :caption: tutorials/case_a/catalog.csep -* The forecast ``best_model.dat`` to be evaluated is written in the ``.dat`` format (see :doc:`pycsep:concepts/forecasts`). 
Forecast formats are detected automatically (see :mod:`floatcsep.utils.file_io.GriddedForecastParsers`) +* The forecast ``best_model.dat`` to be evaluated is written in the ``.dat`` format (see :doc:`pycsep:concepts/forecasts` in **pyCSEP**). Forecast formats are detected automatically (see :mod:`floatcsep.utils.file_io.GriddedForecastParsers`) .. literalinclude:: ../../tutorials/case_a/best_model.dat :caption: tutorials/case_a/best_model.dat @@ -61,12 +61,12 @@ The source code can be found in the ``tutorials/case_a`` folder or in `GitHub < Configuration ------------- -The experiment is defined by a time-, region-, model- and test-configurations, as well as a catalog and a region. In this example, they are written together in the ``config.yml`` file. + The experiment is defined by time-, region-, model- and test-configurations, as well as a catalog and a region. In this example, they are written together in the ``config.yml`` file. -.. important:: + .. warning:: - Every file path (e.g., of a catalog) specified in the ``config.yml`` file should be relative to the directory containing the configuration file. + Every file path (e.g., of a catalog) specified in the ``config.yml`` file should be relative to the directory containing the configuration file. @@ -100,7 +100,7 @@ Region Catalog ~~~~~~~ - It is defined in the ``catalog`` inset. This should only make reference to a catalog **file** or a catalog **query function** (e.g. :func:`~csep.query_comcat`). **floatCSEP** will automatically filter the catalog to the experiment time, spatial and magnitude frames: + It is defined in the ``catalog`` inset. This should only make reference to a catalog **file** or a catalog **query function** (see catalog loaders in :mod:`csep`). **floatCSEP** will automatically filter the catalog to the experiment time, spatial and magnitude frames: .. 
literalinclude:: ../../tutorials/case_a/config.yml :caption: tutorials/case_a/config.yml @@ -109,7 +109,7 @@ Catalog Models ~~~~~~ - The model configuration is set in the ``models`` inset with a list of model names, which specify their file paths (and other attributes). Here, we just set the path as ``best_model.dat``, whose format is automatically detected. + The model configuration is set in the ``models`` inset with a list of model names, which specify their file paths (and other attributes). Here, we just set the path as ``best_model.dat``, whose format is automatically detected (see `Working with conventional gridded forecasts `_ in **pyCSEP**). .. literalinclude:: ../../tutorials/case_a/config.yml :caption: tutorials/case_a/config.yml @@ -124,11 +124,22 @@ Evaluations ~~~~~~~~~~~ The experiment's evaluations are defined in the ``tests`` inset. It should be a list of test names making reference to their function and plotting function. These can be either from **pyCSEP** (see :doc:`pycsep:concepts/evaluations`) or defined manually. Here, we use the Poisson consistency N-test: its function is :func:`poisson_evaluations.number_test ` with a plotting function :func:`plot_poisson_consistency_test ` -.. literalinclude:: ../../tutorials/case_a/config.yml - :caption: tutorials/case_a/config.yml - :language: yaml - :lines: 21-24 + .. literalinclude:: ../../tutorials/case_a/config.yml + :caption: tutorials/case_a/config.yml + :language: yaml + :lines: 21-24 + + .. important:: + + See here all available `Evaluation Functions `_, along with their corresponding `Plotting Functions `_. + +.. 
note:: + For further details on how to configure an experiment, models and evaluations, see: + + - :ref:`experiment_config` + - :ref:`model_config` + - :ref:`evaluation_config` Running the experiment ---------------------- @@ -160,9 +171,25 @@ Results * The complete results are summarized in ``results/report.md`` -Advanced -~~~~~~~~ +pyCSEP under the hood +--------------------- + + This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects. + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :func:`csep.load_catalog` -The experiment run logic can be seen in the file ``case_a.py``, which executes the same example but in python source code. The run logic of the terminal commands ``run``, ``plot`` and ``reproduce`` can be found in :mod:`floatcsep.commands.main`, and can be customized by creating a script similar to ``case_a.py``. + - Region: :py:class:`csep.core.regions.CartesianGrid2D` + - Forecast class: :py:class:`csep.core.forecasts.GriddedForecast` + - Test functions: :py:func:`csep.core.poisson_evaluations.number_test` + - Result plotting functions: :py:func:`csep.utils.plots.plot_poisson_consistency_test` + **Where to learn pyCSEP further:** + - Catalogs: :doc:`pycsep:concepts/catalogs` + - Regions: :doc:`pycsep:concepts/regions` + - Forecasts: :doc:`pycsep:concepts/forecasts` + - Evaluations: :doc:`pycsep:concepts/evaluations` diff --git a/docs/tutorials/case_b.rst b/docs/tutorials/case_b.rst index 7a34ba1..48cdb61 100644 --- a/docs/tutorials/case_b.rst +++ b/docs/tutorials/case_b.rst @@ -88,10 +88,19 @@ Evaluations :caption: tutorials/case_b/tests.yml .. note:: - Plotting keyword arguments can be set in the ``plot_kwargs`` option - see :func:`~csep.utils.plots.plot_poisson_consistency_test` and :func:`~csep.utils.plots.plot_comparison_test` -. 
+ Plotting keyword arguments can be set in the ``plot_kwargs`` option (see :func:`~csep.utils.plots.plot_poisson_consistency_test` and :func:`~csep.utils.plots.plot_comparison_test`). .. important:: - Comparison tests (such as the ``paired_t_test``) requires a reference model, whose name should be set in ``ref_model`` at the given test configuration. + Comparison tests (such as the :py:func:`poisson_evaluations.paired_t_test `) require a reference model, whose name should be set in ``ref_model`` at the given test configuration. See all available `Evaluation Functions `_ and `Plotting Functions `_. + +.. note:: + + For further details on how to configure an experiment, models and evaluations, see: + + - :ref:`experiment_config` + - :ref:`model_config` + - :ref:`evaluation_config` + Running the experiment ---------------------- @@ -106,3 +115,39 @@ The experiment can be run by simply navigating to the ``tutorials/case_b`` folde This will automatically set all the file paths of the calculation (testing catalogs, evaluation results, figures) and will display a summarized report in ``results/report.md``. +pyCSEP under the hood +--------------------- + + This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects. 
+ + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`CSEPCatalog.write_json() ` + - :meth:`CSEPCatalog.load_json() ` + + - Region: :py:class:`csep.core.regions.CartesianGrid2D` + - Forecast class: :py:class:`csep.core.forecasts.GriddedForecast` + + - :meth:`floatcsep.utils.file_io.GriddedForecastParsers.csv` + + - Test functions: + + - :py:func:`csep.core.poisson_evaluations.number_test` + - :py:func:`csep.core.poisson_evaluations.spatial_test` + - :py:func:`csep.core.poisson_evaluations.magnitude_test` + - :py:func:`csep.core.poisson_evaluations.conditional_likelihood_test` + - :py:func:`csep.core.poisson_evaluations.paired_t_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_poisson_consistency_test` + - :py:func:`csep.utils.plots.plot_comparison_test` + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` diff --git a/docs/tutorials/case_c.rst b/docs/tutorials/case_c.rst index 5b58033..aa46509 100644 --- a/docs/tutorials/case_c.rst +++ b/docs/tutorials/case_c.rst @@ -78,7 +78,12 @@ Evaluations .. note:: - Plot arguments (title, labels, font sizes, axes limits, etc.) can be passed as a dictionary in ``plot_args`` (see the arguments details in :func:`~csep.utils.plots.plot_poisson_consistency_test`) + Plot arguments (title, labels, font sizes, axes limits, etc.) can be passed as a dictionary in ``plot_args`` (see the argument details in :func:`~csep.utils.plots.plot_poisson_consistency_test`) + + .. important:: + + See here all available `Evaluation Functions `_ and their corresponding `Plotting Functions `_. + Results ------- @@ -99,3 +104,38 @@ now creates the result path tree for all time windows. The report shows the temporal evaluations for all time-windows, whereas the discrete evaluations are shown only for the last time window. 
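The per-window result tree mentioned above is derived from the experiment's time intervals. A minimal sketch of how such window paths could be enumerated (the start date, interval count and step are hypothetical, not case C's actual configuration):

```python
from datetime import date, timedelta

# Hypothetical yearly windows; floatCSEP derives the real ones from config.yml.
start, intervals, step = date(2010, 1, 1), 3, timedelta(days=365)
windows = [(start + i * step, start + (i + 1) * step) for i in range(intervals)]
# One result sub-tree per time window, e.g. forecasts under results/{window}/forecasts.
paths = [f"results/{a.isoformat()}_{b.isoformat()}/forecasts" for a, b in windows]
print(paths[0])  # results/2010-01-01_2011-01-01/forecasts
```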
+ +pyCSEP under the hood +--------------------- + + This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects. + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :func:`csep.load_catalog` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.CartesianGrid2D` + - Forecast class: :py:class:`csep.core.forecasts.GriddedForecast` + + - :meth:`floatcsep.utils.file_io.GriddedForecastParsers.csv` + + - Test functions: + + - :py:func:`csep.core.poisson_evaluations.spatial_test` + - :py:func:`floatcsep.utils.helpers.sequential_likelihood` + - :py:func:`floatcsep.utils.helpers.sequential_information_gain` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_poisson_consistency_test` + - :py:func:`floatcsep.utils.helpers.plot_sequential_likelihood` + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` diff --git a/docs/tutorials/case_d.rst b/docs/tutorials/case_d.rst index b23d991..7ab98bb 100644 --- a/docs/tutorials/case_d.rst +++ b/docs/tutorials/case_d.rst @@ -61,7 +61,7 @@ Once the catalog and models have been downloaded, the experiment structure will └── tests.yml .. note:: - In this experiment no region file is needed, because the region is encoded in the forecasts themselves (QuadTree models, see https://zenodo.org/record/6289795 and https://zenodo.org/record/6255575 ). + In this experiment no region file is needed because the region is encoded in the forecasts themselves, which are based on the QuadTree description (see `Working with quadtree-gridded forecasts `_, and the Zenodo repositories https://zenodo.org/record/6289795 and https://zenodo.org/record/6255575). 
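The QuadTree description mentioned in the note above encodes each spatial cell as a quadkey string, where every character refines the parent tile into four quadrants. A schematic sketch of the idea (an illustration only, not pyCSEP's actual implementation):

```python
# Each quadkey digit (0-3) selects one quadrant of its parent tile, so a
# longer key means a deeper refinement level and a smaller cell: the
# longitudinal span halves with every added character.
def cell_lon_span(quadkey: str, full_span: float = 360.0) -> float:
    return full_span / (2 ** len(quadkey))

print(cell_lon_span("0"))     # 180.0 degrees at level 1
print(cell_lon_span("0231"))  # 22.5 degrees at level 4
```

Because cell sizes follow directly from the key lengths, a forecast file listing quadkeys fully determines its own grid, which is why no separate region file is needed.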
Configuration ------------- @@ -120,3 +120,38 @@ Running the experiment +pyCSEP under the hood +--------------------- + + This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects. + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :func:`csep.load_catalog` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.QuadtreeGrid2D` + - Forecast class: :py:class:`csep.core.forecasts.GriddedForecast` + + - :meth:`floatcsep.utils.file_io.GriddedForecastParsers.quadtree` + + - Test functions: + + - :py:func:`csep.core.poisson_evaluations.spatial_test` + - :py:func:`csep.core.poisson_evaluations.paired_t_test` + - :py:func:`floatcsep.utils.helpers.vector_poisson_t_w_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_poisson_consistency_test` + - :py:func:`csep.utils.plots.plot_comparison_test` + - :py:func:`floatcsep.utils.helpers.plot_matrix_comparative_test` + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` diff --git a/docs/tutorials/case_e.rst b/docs/tutorials/case_e.rst index 7b76925..7ffb1df 100644 --- a/docs/tutorials/case_e.rst +++ b/docs/tutorials/case_e.rst @@ -97,9 +97,6 @@ Models The forecasts are defined in ``[Earthquakes / 10-years]``, which is specified with the ``forecast_unit`` option (The default is `forecast_unit: 1`). - .. note:: - - The ``use_db`` flag allows ``floatcsep`` to transform the forecasts into a database (HDF5), which speeds up the calculations. Post-Process ~~~~~~~~~~~~ @@ -144,6 +141,48 @@ Plot command and re-run with the ``plot`` command. A forecast figure will re-appear in ``results/{window}/forecasts`` with a different colormap. 
Additional forecast and catalog plotting options can be found in the :func:`csep.utils.plots.plot_spatial_dataset` and :func:`csep.utils.plots.plot_catalog` ``pycsep`` functions. + .. note:: + + For further details on how to configure the **post-process** of an experiment, see: + + - :ref:`postprocess` + + +pyCSEP under the hood +--------------------- + + This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects. + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`csep.core.catalogs.CSEPCatalog.load_json` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.italy_csep_region` + - Forecast class: :py:class:`csep.core.forecasts.GriddedForecast` + + - :meth:`floatcsep.utils.file_io.GriddedForecastParsers.xml` + + - Test functions: + + - :py:func:`csep.core.poisson_evaluations.spatial_test` + - :py:func:`floatcsep.utils.helpers.sequential_likelihood` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_poisson_consistency_test` + - :py:func:`floatcsep.utils.helpers.plot_sequential_likelihood` + + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` + .. _case_e_references: diff --git a/docs/tutorials/case_f.rst b/docs/tutorials/case_f.rst index 433f0be..d1a52ae 100644 --- a/docs/tutorials/case_f.rst +++ b/docs/tutorials/case_f.rst @@ -46,7 +46,7 @@ The source files can be found in the ``tutorials/case_e`` folder or in `the Git ├── models.yml └── tests.yml -* The model to be evaluated (``etas``) is a collection of daily forecasts from ``2016-11-14`` until ``2016-11-21``. +* The model to be evaluated (``etas``) is a collection of daily forecasts from ``2016-11-14`` until ``2016-11-21``. 
The forecasts are `Catalog-Based `_, which are composed of multiple individual simulations (see `Working with catalog-based forecasts `_) .. important:: The forecasts must be located in a folder ``forecasts`` inside the model folder. This is meant for consistency with models based on source codes (see subsequent tutorials). @@ -55,7 +55,7 @@ The source files can be found in the ``tutorials/case_e`` folder or in `the Git Model ----- -The time-dependency of a model is manifested here by the provision of different forecasts, i.e., statistical descriptions of seismicity, for different time-windows. In this example, the forecasts were created from an external `ETAS model `_ (:ref:`Mizrahi et al. 2021 `), with which the experiment has no interface. This means that we use **only the forecast files** and no source code. We leave the handling of a model source code for subsequent tutorials. + The time-dependency of a model is manifested here by the provision of different `Catalog-Based Forecasts `_, i.e., stochastic descriptions of seismicity, for different time-windows. In this example, the forecasts were created from an external `ETAS model `_ (:ref:`Mizrahi et al. 2021 `), with which, in this case, the experiment has no interface. This means that we use **only the forecast files** and no source code. We leave the handling of a model source code for tutorial :ref:`case_h`. @@ -66,7 +66,7 @@ Configuration Time ~~~~ - The configuration is analogous to time-independent models with multiple time-windows (e.g., case C) with the exception that a ``horizon`` could be defined instead of ``intervals``, which is the forecast time-window length. The experiment's class should now be explicited as ``exp_class: td``. + The configuration is analogous to time-independent models with multiple time-windows (e.g., :ref:`case_c`), with the exception that a ``horizon`` (the forecast time-window length) can be defined instead of ``intervals``. 
The experiment's class should now be made explicit as ``exp_class: td``. .. literalinclude:: ../../tutorials/case_f/config.yml :caption: tutorials/case_f/config.yml @@ -95,11 +95,12 @@ Models :language: yaml :lines: 1-4 -.. note:: - For consistency with time-dependent models that will create forecasts from a source code, the ``path`` should point to the folder of the model, which itself should contain a sub-folder named ``{path}/forecasts`` where the files are located. + .. warning:: For consistency with time-dependent models that will create forecasts from a source code, the ``path`` should point to the folder of the model, which itself should contain a sub-folder named ``{path}/forecasts`` where the files are located. For format descriptions, see `Working with catalog-based forecasts `_. + + .. important:: Note that for catalog-based forecast models, the number of catalog simulations (``n_sims``) must be specified, because a forecast may contain zero-event synthetic catalogs, so the stored events alone do not determine the total number of simulated catalogs. -.. important:: - Note that for catalog-based forecast models, the number of catalog simulations (``n_sims``) must be specified – because a forecast may contain synthetic catalogs with zero-event simulations and therefore does not imply the total number of simulated synthetic catalogs. Tests ~~~~~ @@ -114,6 +115,14 @@ Tests It is possible to assign two plotting functions to a test, whose ``plot_args`` and ``plot_kwargs`` can be placed indented beneath. +.. note:: + + For further details on how to configure an experiment, models and evaluations, see: + + - :ref:`experiment_config` + - :ref:`model_config` + - :ref:`evaluation_config` + Running the experiment ---------------------- @@ -126,6 +135,44 @@ Running the experiment This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and will create a summarized report in ``results/report.md``. 
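The ``n_sims`` requirement discussed under Models can be illustrated in plain Python: a catalog-based forecast file only stores event rows, so simulations that produced no events leave no trace, and the simulation count cannot be recovered from the file alone (the rows below are made up for illustration):

```python
# Hypothetical event rows; each carries the id of the simulated catalog it belongs to.
rows = [{"catalog_id": 0}, {"catalog_id": 0}, {"catalog_id": 3}]
visible = {row["catalog_id"] for row in rows}
print(len(visible))  # 2 -- only two simulations left events in the file

# Suppose the model actually ran four simulations (ids 1 and 2 were empty):
# the mean event rate is wrong unless n_sims is declared explicitly.
n_sims = 4
mean_event_rate = len(rows) / n_sims
print(mean_event_rate)  # 0.75, not the 1.5 implied by the visible catalogs alone
```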
+ +pyCSEP under the hood +--------------------- + + This tutorial uses *floatCSEP* as the orchestrator, but relies on *pyCSEP* for functions and objects. + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`csep.core.catalogs.CSEPCatalog.load_json` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.nz_csep_region` + - Forecast class: :py:class:`csep.core.forecasts.CatalogForecast` + + - :meth:`csep.load_catalog_forecast` + - :meth:`floatcsep.utils.file_io.CatalogForecastParsers.csv` + + - Test functions: + + - :py:func:`csep.core.catalog_evaluations.number_test` + - :py:func:`csep.core.catalog_evaluations.spatial_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_number_test` + - :py:func:`csep.utils.plots.plot_consistency_test` + + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` + + .. _case_f_references: References diff --git a/docs/tutorials/case_g.rst b/docs/tutorials/case_g.rst index b443cf7..c69a92c 100644 --- a/docs/tutorials/case_g.rst +++ b/docs/tutorials/case_g.rst @@ -37,7 +37,7 @@ Here, we set up a time-dependent model from its **source code** for an experimen Experiment Components --------------------- -The example folder contains also, along with the already known components (configurations, catalog), a sub-folder for the **source code** of the model `pymock `_. The components of the experiment (and model) are: +The example folder contains also, along with the already known components (configurations, catalog), a sub-folder for the **source code** of the model `pymock `_. 
The components of the experiment (and model) are: :: ├── custom_plot_script.py └── tests.yml -* The model to be evaluated (``pymock``) is a source code that generates forecasts for multiple time windows. +* The model to be evaluated (``pymock``) is a source code that generates `Catalog-Based Forecasts `_ for multiple time windows. * The testing catalog ``catalog.csv`` works also as the input catalog, by being filtered until the testing ``start_date`` and allocated in `pymock/input` dynamically (before each time the model is run) @@ -158,6 +158,12 @@ Models For these tutorials, we use ``venv`` sub-environments, but we recommend ``Docker`` to set up real experiments. + .. note:: + + For further details on how to configure time-dependent models, see: + + - :ref:`model_config` + Tests ~~~~~ @@ -200,6 +206,13 @@ Custom Post-Process In this way, the plot function can use all the :class:`~floatcsep.experiment.Experiment` attributes/methods to access catalogs, forecasts and test results. The script ``tutorials/case_g/custom_plot_script.py`` can also be viewed directly in `the GitHub repository `_, where it is exemplified how to access the experiment data at runtime. + .. note:: + + For further details on how to configure post-processing, see: + + - :ref:`postprocess` + + Running the experiment ---------------------- @@ -209,5 +222,44 @@ Running the experiment $ floatcsep run config.yml - This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and will create a summarized report in ``results/report.md``. + This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and will create a summarized report in ``results/report.md`` and ``results/report.pdf``. + + To view the results in a dashboard, type: + + .. 
code-block:: console + + $ floatcsep view config.yml + +pyCSEP under the hood +--------------------- + + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`csep.load_catalog` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.italy_csep_region` + - Forecast class: :py:class:`csep.core.forecasts.CatalogForecast` + + - :meth:`csep.load_catalog_forecast` + - :meth:`floatcsep.utils.file_io.CatalogForecastParsers.csv` + + - Test functions: + + - :py:func:`csep.core.catalog_evaluations.number_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_number_test` + - :py:func:`csep.utils.plots.plot_consistency_test` + + + **Where to learn pyCSEP further:** + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` \ No newline at end of file diff --git a/docs/tutorials/case_h.rst b/docs/tutorials/case_h.rst index f5ca656..e2fb02a 100644 --- a/docs/tutorials/case_h.rst +++ b/docs/tutorials/case_h.rst @@ -76,12 +76,12 @@ As in :ref:`Tutorial G`, each **Model** requires to build and execute a .. note:: The ``models.yml`` will define how to interface **floatCSEP** to each Model, implying that a Model should be developed, or adapted to ensure the interface requirements specified below. -1. The repository URL of each model and their specific versions (e.g., commit hash, tag, release) are specified as: +1. The repository URL of each model and, if needed, its specific version (e.g., commit hash, tag, release) are specified as: .. literalinclude:: ../../tutorials/case_h/models.yml :caption: tutorials/case_h/models.yml :language: yaml - :lines: 1-3, 11-13, 21-23 + :lines: 1-3, 12-13, 22-23 2. A ``path`` needs to be indicated for each model, to both download the repository contents therein and from where the source code will be executed. 
@@ -100,7 +100,8 @@ As in :ref:`Tutorial G`, each **Model** requires to build and execute a .. literalinclude:: ../../tutorials/case_h/models.yml :caption: tutorials/case_h/models.yml :language: yaml - :lines: 5 + :emphasize-lines: 6 + :lines: 1-6 :lineno-match: .. note:: @@ -131,7 +132,7 @@ As in :ref:`Tutorial G`, each **Model** requires to build and execute a .. literalinclude:: ../../tutorials/case_h/models.yml :caption: tutorials/case_h/models.yml :language: yaml - :lines: 1,6,11,15,21,25 + :lines: 1,7,12,15,22,25 .. important:: Please refer to :ref:`Tutorial G` for example of how to set up ``func`` for the model and interface it to **floatCSEP**. @@ -141,13 +142,13 @@ As in :ref:`Tutorial G`, each **Model** requires to build and execute a .. literalinclude:: ../../tutorials/case_h/models.yml :caption: tutorials/case_h/models.yml :language: yaml - :lines: 21, 26 + :lines: 12,16,22, 26 The experiment will read the forecasts as: .. code-block:: - {model_path}/{forecasts}/{prefix}_{start}_{end}.csv + {model_path}/forecasts/{prefix}_{start}_{end}.csv where ``start`` and ``end`` follow either the ``%Y-%m-%dT%H:%M:%S.%f`` - ISO861 FORMAT, or the short date version ``%Y-%m-%d`` if the windows are set by midnight. @@ -156,8 +157,11 @@ As in :ref:`Tutorial G`, each **Model** requires to build and execute a .. literalinclude:: ../../tutorials/case_h/models.yml :caption: tutorials/case_h/models.yml :language: yaml - :lines: 11,17-20,21,27-31 + :lines: 12,18-21,22,28-33 +.. note:: + + For further details on how to configure time-dependent models, see :ref:`model_config` Time ~~~~ @@ -172,7 +176,7 @@ Time Catalog ~~~~~~~ - The catalog was obtained *prior* to the experiment using ``query_bsi``, but it was filtered from 2006 onwards, so it has enough data for the model calibration. + The catalog was obtained *prior* to the experiment using :func:`query_bsi `, but it was filtered from 2006 onwards, so it has enough data for the model calibration. 
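The forecast file-naming convention shown above can be reproduced with the standard library. A small sketch (the model path, prefix and window dates are hypothetical):

```python
from datetime import datetime

def forecast_filename(model_path: str, prefix: str, start: datetime, end: datetime) -> str:
    # Short date form when both window edges fall on midnight,
    # otherwise the full ISO-8601 form with microseconds.
    midnight = datetime.min.time()
    short = start.time() == midnight and end.time() == midnight
    fmt = "%Y-%m-%d" if short else "%Y-%m-%dT%H:%M:%S.%f"
    return f"{model_path}/forecasts/{prefix}_{start.strftime(fmt)}_{end.strftime(fmt)}.csv"

print(forecast_filename("models/etas", "etas", datetime(2016, 11, 14), datetime(2016, 11, 15)))
# models/etas/forecasts/etas_2016-11-14_2016-11-15.csv
```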
Tests @@ -213,6 +217,11 @@ Custom Post-Process In this way, the report function use all the :class:`~floatcsep.experiment.Experiment` attributes/methods to access catalogs, forecasts and test results. The script ``tutorials/case_h/custom_report.py`` can also be viewed directly in `the GitHub repository `_, where it is exemplified how to access the experiment artifacts. + .. note:: + + For further details on how to configure post-processing, see: + + - :ref:`postprocess` Running the experiment ---------------------- @@ -225,3 +234,42 @@ Running the experiment This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and will create a summarized report in ``results/report.md``. + To view the results in a dashboard, type: + + .. code-block:: console + + $ floatcsep view config.yml + +pyCSEP under the hood +--------------------- + + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`csep.load_catalog` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.italy_csep_region` + - Forecast class: :py:class:`csep.core.forecasts.CatalogForecast` + + - :meth:`csep.load_catalog_forecast` + - :meth:`floatcsep.utils.file_io.CatalogForecastParsers.csv` + + - Test functions: + + - :py:func:`csep.core.catalog_evaluations.number_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_number_test` + - :py:func:`csep.utils.plots.plot_consistency_test` + + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` \ No newline at end of file diff --git a/docs/tutorials/case_i.rst b/docs/tutorials/case_i.rst index d47b1ad..e285d88 100644 --- a/docs/tutorials/case_i.rst +++ b/docs/tutorials/case_i.rst @@ -218,3 +218,36 @@ Troubleshooting - **Plots not generated**: - Inspect logs under 
``results/`` for tracebacks. + +pyCSEP under the hood +--------------------- + + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`csep.load_catalog` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.italy_csep_region` + - Forecast class: :py:class:`csep.core.forecasts.CatalogForecast` + + - :meth:`csep.load_catalog_forecast` + - :meth:`floatcsep.utils.file_io.CatalogForecastParsers.csv` + + - Test functions: + + - :py:func:`csep.core.catalog_evaluations.number_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_poisson_consistency_test` + + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` \ No newline at end of file diff --git a/docs/tutorials/case_j.rst b/docs/tutorials/case_j.rst index 83ec47e..eaabc17 100644 --- a/docs/tutorials/case_j.rst +++ b/docs/tutorials/case_j.rst @@ -153,3 +153,36 @@ Running the experiment This will automatically set all the calculation paths (testing catalogs, evaluation results, figures) and will create a summarized report in ``results/report.md``. 
+ +pyCSEP under the hood +--------------------- + + + **Classes and functions used in this tutorial** + + - Catalog: :py:class:`csep.core.catalogs.CSEPCatalog` + + - :meth:`csep.load_catalog` + - :meth:`csep.core.catalogs.CSEPCatalog.write_json` + + - Region: :py:class:`csep.core.regions.nz_csep_region` + - Forecast class: :py:class:`csep.core.forecasts.GriddedForecast` + + - :meth:`csep.load_gridded_forecast` + - :meth:`floatcsep.utils.file_io.GriddedForecastParsers.dat` + + - Test functions: + + - :py:func:`csep.core.poisson_evaluations.number_test` + + - Result plotting functions: + + - :py:func:`csep.utils.plots.plot_poisson_consistency_test` + + + **Where to learn pyCSEP further:** + + - :doc:`pycsep:concepts/catalogs` + - :doc:`pycsep:concepts/regions` + - :doc:`pycsep:concepts/forecasts` + - :doc:`pycsep:concepts/evaluations` \ No newline at end of file diff --git a/tutorials/case_e/models.yml b/tutorials/case_e/models.yml index b677d2a..873b16f 100644 --- a/tutorials/case_e/models.yml +++ b/tutorials/case_e/models.yml @@ -1,12 +1,9 @@ - ALM: path: models/gulia-wiemer.ALM.italy.10yr.2010-01-01.xml forecast_unit: 10 - store_db: True - MPS04: path: models/meletti.MPS04.italy.10yr.2010-01-01.xml forecast_unit: 10 - store_db: True - TripleS-CPTI: path: models/zechar.TripleS-CPTI.italy.10yr.2010-01-01.xml forecast_unit: 10 - store_db: True \ No newline at end of file