analysis - High-level interface

Introduction

The high-level interface for Gammapy provides a Python API for the most common use cases identified in the analysis process. The classes and methods included may be used in Python scripts, notebooks or as commands within IPython sessions. The high-level interface can also be used to automate processes driven by parameters declared in a YAML configuration file that addresses the most common analysis use cases.
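
As a preview of what follows, a complete config-driven analysis session boils down to a handful of calls, assembling the methods described in the sections below (a sketch; each step is detailed later):

>>> from gammapy.analysis import Analysis, AnalysisConfig
>>> config = AnalysisConfig.read("config.yaml")
>>> analysis = Analysis(config)
>>> analysis.get_observations()         # select the observations
>>> analysis.get_datasets()             # run the data reduction
>>> analysis.read_models("model.yaml")  # attach models to the datasets
>>> analysis.run_fit()                  # fit the models
>>> analysis.get_flux_points()          # estimate flux points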

Getting started

The easiest way to get started with the high-level interface is using it within an IPython console or a notebook.

>>> from gammapy.analysis import Analysis, AnalysisConfig
>>> config = AnalysisConfig()
>>> analysis = Analysis(config)

Configuration and methods

You can inspect the configuration settings provided by default, and dump them into a file that you can edit to start a new analysis from the modified config file.

>>> print(config)
>>> config.write("config.yaml")
>>> config = AnalysisConfig.read("config.yaml")

You can also start with the built-in default analysis configuration and update it by passing values for just the parameters you want to set, using the AnalysisConfig.from_yaml method:

config = AnalysisConfig.from_yaml("""
general:
    log:
        level: warning
""")

Once you have your configuration defined you may create an analysis instance:

analysis = Analysis(config)

The hierarchical structure of the tens of parameters needed may be hard to follow. You can print your analysis config to display its format and syntax, the parameters and units allowed, as well as the sections of the config structure where they belong.

>>> print(analysis.config)

At any moment you may add or change the value of a specific parameter needed in your analysis.

>>> analysis.config.datasets.geom.wcs.skydir.frame = "galactic"

General settings

In the following you may find more detailed information on the different sections that compose the YAML-formatted nested configuration settings hierarchy. The high-level analysis commands exposed here may be reproduced within the First analysis tutorial.

The general section comprises information related to the log configuration, as well as the output folder where all file outputs and datasets will be stored, declared as the value of the outdir parameter.

# Section: general
# General settings for the high-level interface / optional
general:
    # logging settings for the session
    log:
        # choose one of the example values for level
        level: INFO            # also CRITICAL, ERROR, WARNING, DEBUG
        filename: filename.log
        filemode: w
        format: "%(asctime)s - %(message)s"
        datefmt: "%d-%b-%y %H:%M:%S"
    # output folder where files will be stored
    outdir: .
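
These settings can also be updated on the config object itself, following attribute paths that mirror the YAML structure; a minimal sketch (the folder name is just an illustration):

>>> analysis.config.general.outdir = "my-analysis"   # illustrative folder name
>>> analysis.config.general.log.level = "warning"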

Observations selection

The observations used in the analysis may be selected from a datastore declared in the observations section of the settings, using different parameters and values to compose a filter.

# Section: observations
# Observations used in the analysis / mandatory
observations:
    # path to data store where to fetch observations
    datastore: $GAMMAPY_DATA/hess-dl3-dr1/
    obs_ids: [23523, 23526]
    obs_file:   # csv file with obs_ids
    # spatial /time filters applied on the obs_ids
    obs_cone: {frame: icrs, lon: 83.633 deg, lat: 22.014 deg, radius: 3 deg}
    obs_time: {start: '2019-12-01', stop: '2020-03-01'}
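
The same selection can be declared programmatically on the config object, again following the attribute paths that mirror the YAML structure (a sketch using the values above):

>>> analysis.config.observations.datastore = "$GAMMAPY_DATA/hess-dl3-dr1/"
>>> analysis.config.observations.obs_ids = [23523, 23526]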

You may use the get_observations() method to perform the observation filtering. The selected observations are stored in the observations attribute as a list of Observation objects.

>>> analysis.get_observations()
>>> analysis.observations.ids
['23592', '23523', '23526', '23559']
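
You can then loop over the container to inspect individual runs; a minimal sketch (obs_id is a standard Observation attribute):

>>> for obs in analysis.observations:
...     print(obs.obs_id)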

Data reduction and datasets

The data reduction process requires a choice of dataset type, declared as 1d or 3d in the type parameter of the datasets section of the settings. For the estimation of the background in the 1d case, a background method is needed; related parameters such as the on_region and the exclusion FITS file may also be present. Parameters for the geometry are also declared in this section, as well as a boolean stack flag.

# Section: datasets
# Process of data reduction / mandatory
datasets:
    type: 3d   # also 1d
    stack: false
    geom:
        wcs:
            skydir: {frame: icrs, lon: 83.633 deg, lat: 22.014 deg}
            binsize: 0.1 deg
            fov: {width: 7 deg, height: 5 deg}
            binsize_irf: 0.1 deg
        axes:
            energy: {min: 0.1 TeV, max: 10 TeV, nbins: 30}
            energy_true: {min: 0.1 TeV, max: 10 TeV, nbins: 30}
    map_selection: ['counts', 'exposure', 'background', 'psf', 'edisp']
    background:
        method: ring        # also fov_background, reflected for 1d
        exclusion:          # fits file for exclusion mask
        parameters: {r_in: 0.7 deg, width: 0.7 deg} # ring
    safe_mask:
        methods: ['aeff-default', 'offset-max']
        parameters: {offset_max: 2.5 deg}
    on_region: {frame: icrs, lon: 83.633 deg, lat: 22.014 deg, radius: 3 deg}
    containment_correction: true
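
As with the other sections, individual reduction settings may be overridden on the config object before running the reduction; a sketch switching to a stacked 1d analysis:

>>> analysis.config.datasets.type = "1d"
>>> analysis.config.datasets.stack = True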

You may use the get_datasets() method to perform the data reduction. The final reduced datasets are stored in the datasets attribute. For spectrum dataset reduction, the information related to the background estimation is stored in the background property.

>>> analysis.get_datasets()
>>> print(analysis.datasets)
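
With stack: false, as in the example above, the reduction produces one dataset per observation; you may iterate over them to check their names (a minimal sketch):

>>> for dataset in analysis.datasets:
...     print(dataset.name)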

Model

For now we simply declare the model as a reference to a separate YAML file, passing the filename to the read_models method to fetch the model and attach it to your datasets.

>>> analysis.read_models("model.yaml")

If you have a Models object, or a YAML string representing one, you can use the set_models method:

>>> from gammapy.modeling.models import Models
>>> models = Models(...)
>>> analysis.set_models(models)
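
For instance, a sky model can be built programmatically and attached to the datasets; a minimal sketch, with the source name and parameter values purely illustrative:

>>> from gammapy.modeling.models import (
...     Models, SkyModel, PowerLawSpectralModel, PointSpatialModel
... )
>>> spatial = PointSpatialModel(lon_0="83.633 deg", lat_0="22.014 deg", frame="icrs")
>>> spectral = PowerLawSpectralModel(index=2.6)   # illustrative spectral index
>>> model = SkyModel(spectral_model=spectral, spatial_model=spatial, name="crab")
>>> analysis.set_models(Models([model]))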

Fitting

The parameters used in the fitting process are declared in the fit section.

# Section: fit
# Fitting process / optional
fit:
    fit_range: {min: 0.1 TeV, max: 10 TeV}

You may use the run_fit() method to perform the model fitting. The result is stored in the fit_result property.

>>> analysis.run_fit()
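
After the fit you can inspect the convergence status and the best-fit parameter values; a sketch, assuming the models attribute mirrors the models attached via read_models or set_models:

>>> print(analysis.fit_result)
>>> print(analysis.models)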

Flux points

For a spectral analysis where we aim to calculate flux points over a range of energies, we may declare the parameters needed in the flux_points section.

# Section: flux_points
# Flux estimation process /optional
flux_points:
    energy: {min: 0.1 TeV, max: 10 TeV, nbins: 30}
    source: "source"
    parameters: {}

You may use the get_flux_points() method to calculate the flux points. The result is stored in the flux_points property as a FluxPoints object.

>>> analysis.config.flux_points.source = "crab"
>>> analysis.get_flux_points()
INFO:gammapy.analysis.analysis:Calculating flux points.
INFO:gammapy.analysis.analysis:
      e_ref               ref_flux        ...        dnde_err        is_ul
       TeV              1 / (cm2 s)       ...    1 / (cm2 s TeV)
------------------ ---------------------- ... ---------------------- -----
1.4125375446227544  1.928877387452331e-11 ... 1.2505519776748809e-12 False
3.1622776601683795  7.426613493860134e-12 ...  2.106743519478604e-13 False
  7.07945784384138 1.4907957189689605e-12 ...   4.74857915062012e-14 False
>>> analysis.flux_points.peek()

You may set fine-grained optional parameters for the FluxPointsEstimator in the flux_points.params settings.

>>> analysis.config.flux_points.params["reoptimize"] = True

Residuals

For a 3D analysis we can compute a residual image to check how well the models describe the source and/or the background.

>>> analysis.datasets[0].plot_residuals()
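
In a plain Python script the figure will not display by itself; render it with matplotlib (a minimal sketch):

>>> import matplotlib.pyplot as plt
>>> analysis.datasets[0].plot_residuals()
>>> plt.show()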

Using the high-level interface

Gammapy tutorial notebooks that show examples using the high-level interface:

First analysis

Reference/API

gammapy.analysis Package

Gammapy high-level interface (analysis).

Classes

Analysis(config)

Config-driven high-level analysis interface.

AnalysisConfig

Gammapy analysis configuration.