How To
This page contains short “how to” or “frequently asked question” entries for Gammapy. Each entry is for a very specific task, with a short answer, and links to examples and documentation.
If you’re new to Gammapy, please check the Getting started section and the User guide, and have a look at the list of Tutorials. The information below supplements those pages; it is not a complete list of how to do everything in Gammapy.
Please give feedback and suggest additions to this page!
The recommended spelling is “Gammapy” as a proper name. The recommended pronunciation is [ɡæməpaɪ], where the syllable “py” is pronounced like the English word “pie”. You can listen to it here.
The DataStore provides access to a summary table of all available observations. It can be used to select observations with various criteria. You can, for instance, apply a cone search, or select observations based on other available information using the select_observations method.
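Here is a minimal sketch, assuming $GAMMAPY_DATA points at the public H.E.S.S. DL3 DR1 files (the exact selection keys may vary with your Gammapy version):

from gammapy.data import DataStore

data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")
print(data_store.obs_table)  # summary table of all observations

# cone search around the Crab nebula
selection = dict(
    type="sky_circle",
    frame="icrs",
    lon="83.633 deg",
    lat="22.014 deg",
    radius="2 deg",
)
selected = data_store.obs_table.select_observations(selection)
observations = data_store.get_observations(list(selected["OBS_ID"]))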
Gammapy offers a number of methods to explore the content of the various IRFs contained in an observation. This is usually done with their peek() methods.
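For instance, using an observation from the H.E.S.S. DL3 DR1:

from gammapy.data import DataStore

data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")
obs = data_store.obs(23523)

obs.aeff.peek()   # effective area
obs.edisp.peek()  # energy dispersion
obs.psf.peek()    # point spread function
obs.bkg.peek()    # background rate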
Observations can be grouped according to various quantities. The two methods to do so are manual grouping and hierarchical clustering. The quantity you group by can be adjusted to each science case.
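As a minimal manual-grouping sketch, assuming the observation index table provides a ZEN_PNT column (as in gadf):

import numpy as np
from gammapy.data import DataStore

data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")
obs_table = data_store.obs_table

# assign each observation to a zenith-angle band, then group
obs_table["ZEN_BAND"] = np.digitize(obs_table["ZEN_PNT"], bins=[0, 20, 40, 60])
for group in obs_table.group_by("ZEN_BAND").groups:
    print(group["OBS_ID", "ZEN_PNT"])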
The DataStore summary table can also be used to obtain various quantities for your list of Observations, such as the total livetime.
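For example, assuming the observation index table provides a LIVETIME column (as in gadf):

from gammapy.data import DataStore

data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")
total_livetime = data_store.obs_table["LIVETIME"].quantity.sum()
print(total_livetime.to("h"))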
The on-axis equivalent number of observation hours on the source can be calculated.
The resample_energy_edges function provides a way to resample the energy bins to satisfy a minimum number of counts or significance per bin.
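A hedged sketch, assuming a spectrum dataset is at hand (the set of supported condition keys may depend on your Gammapy version):

from gammapy.datasets import SpectrumDatasetOnOff
from gammapy.datasets.utils import resample_energy_edges

dataset = SpectrumDatasetOnOff.read(
    "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
)

# resample so that each new bin contains at least 20 counts
energy_edges = resample_energy_edges(dataset, conditions={"counts_min": 20})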
Units for plotting are handled with a combination of matplotlib and astropy.units. The methods ax.xaxis.set_units() and ax.yaxis.set_units() allow you to define the x and y axis units using astropy.units. Here is a minimal example:
import matplotlib.pyplot as plt
from gammapy.estimators import FluxPoints
from astropy import units as u
filename = "$GAMMAPY_DATA/hawc_crab/HAWC19_flux_points.fits"
fp = FluxPoints.read(filename)
ax = plt.subplot()
ax.xaxis.set_units(u.eV)
ax.yaxis.set_units(u.Unit("erg cm-2 s-1"))
fp.plot(ax=ax, sed_type="e2dnde")
Estimating the significance of a source, or more generally of an additional model component (such as a spectral line on top of a power-law spectrum), is done via a hypothesis test. You fit two models, with and without the extra source or component, then use the test statistic values from both fits to compute the significance or p-value. To obtain the test statistic, call stat_sum for the model corresponding to each of your two hypotheses (or take this value from the printed output when running the fit), and take the difference. Note that in Gammapy, the fit statistic is defined as S = -2 * log(L) for likelihood L, such that TS = S_0 - S_1. See Datasets (DL4) for an overview of the fit statistics used.
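A sketch, assuming datasets and the two candidate models are already defined; for nested models differing by one free parameter, the significance is approximately sqrt(TS):

from gammapy.modeling import Fit

fit = Fit()

datasets.models = model_null  # e.g. power law only
result_null = fit.run(datasets)

datasets.models = model_alt  # e.g. power law plus line
result_alt = fit.run(datasets)

ts = result_null.total_stat - result_alt.total_stat  # TS = S_0 - S_1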
There are two ways to implement the data reduction steps: either write a loop that runs the full reduction chain observation by observation, or use the DatasetsMaker, which performs the loop internally using multiprocessing tools.
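A sketch of the second approach, assuming an empty reference MapDataset and a list of observations were prepared beforehand:

from gammapy.makers import DatasetsMaker, MapDatasetMaker, SafeMaskMaker

makers = [
    MapDatasetMaker(),
    SafeMaskMaker(methods=["offset-max"], offset_max="2.5 deg"),
]

# the reduction loop runs internally, here with 4 parallel processes
datasets_maker = DatasetsMaker(makers, stack_datasets=False, n_jobs=4)
datasets = datasets_maker.run(dataset_empty, observations)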
A classical plot in gamma-ray astronomy is the cumulative significance of a source as a function of observing time. In Gammapy, you can produce it with a 1D (spectral) analysis. Once datasets are produced for a given ON region, you can access the total statistics with the info_table(cumulative=True) method of Datasets.
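For example, assuming datasets contains the per-observation 1D datasets in time order:

import matplotlib.pyplot as plt

info_table = datasets.info_table(cumulative=True)

plt.plot(info_table["livetime"].to("h"), info_table["sqrt_ts"], marker="o")
plt.xlabel("Livetime [h]")
plt.ylabel("Cumulative significance")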
Gammapy offers the flexibility of using user-defined models in an analysis. While Gammapy does not ship energy-dependent spatial models, it is possible to define such models within the modeling framework, as sketched below.
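As a sketch, a user-defined spectral model can be created by subclassing SpectralModel and providing parameters plus a static evaluate method (the class below is purely illustrative):

from gammapy.modeling import Parameter
from gammapy.modeling.models import SpectralModel

class MyCustomSpectralModel(SpectralModel):
    """A simple custom power law."""

    tag = "MyCustomSpectralModel"
    amplitude = Parameter("amplitude", "1e-12 cm-2 s-1 TeV-1", min=0)
    index = Parameter("index", 2.0)
    reference = Parameter("reference", "1 TeV", frozen=True)

    @staticmethod
    def evaluate(energy, amplitude, index, reference):
        return amplitude * (energy / reference) ** (-index)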
It is possible to combine Gammapy with astrophysical modeling codes, if they provide a Python interface. Usually this requires some glue code to be written. For example, NaimaSpectralModel is a Gammapy wrapper class around the Naima spectral model and radiation classes, which allows modeling and fitting of Naima models within Gammapy (e.g. using CTAO, H.E.S.S. or Fermi-LAT data).
Temporal models can be fit directly to available light curves, or to the reduced datasets. The latter is done through a joint fit of the datasets, one for each time bin.
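For instance, a sketch of a sky model carrying a temporal component, which can then be fit jointly to the time-binned datasets (the reference time here is arbitrary):

import astropy.units as u
from astropy.time import Time
from gammapy.modeling.models import (
    ExpDecayTemporalModel,
    PowerLawSpectralModel,
    SkyModel,
)

t_ref = Time("2020-01-01T00:00:00")
model = SkyModel(
    spectral_model=PowerLawSpectralModel(),
    temporal_model=ExpDecayTemporalModel(t0="6 h", t_ref=t_ref.mjd * u.d),
    name="flare",
)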
It can happen that a 3D fit does not converge, with warning messages indicating that the scanned positions of the model are outside the valid IRF map range. The warning message looks like:

Position <SkyCoord (ICRS): (ra, dec) in deg (329.71693826, -33.18392464)> is outside valid IRF map range, using nearest IRF defined within

This issue can occur when the position of a model has no defined range: the minimizer may scan positions outside the spatial range in which the IRFs are computed and get lost. The simple solution is to add a physically motivated range on the model’s position, e.g. within the field of view or around an excess position. Most of the time, this tip solves the issue. The documentation of the models sub-package explains how to add a validity range to a model parameter.
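For example, restricting the fitted position of a point source to a small box around its starting value:

from gammapy.modeling.models import PointSpatialModel

spatial_model = PointSpatialModel(
    lon_0="329.72 deg", lat_0="-33.18 deg", frame="icrs"
)

# allow the position to move by at most 0.5 deg during the fit
spatial_model.lon_0.min = spatial_model.lon_0.value - 0.5
spatial_model.lon_0.max = spatial_model.lon_0.value + 0.5
spatial_model.lat_0.min = spatial_model.lat_0.value - 0.5
spatial_model.lat_0.max = spatial_model.lat_0.value + 0.5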
When dealing with surveys and large sky regions, the amount of memory required might become problematic, in particular because of the default settings of the IRF maps stored in the MapDataset used for the data reduction. Several options can be used to reduce the required memory:
- Reduce the spatial sampling of the PSFMap and the EDispKernelMap using the binsz_irf argument of the create method. This will reduce the accuracy of the IRF kernels used for model counts predictions.
- Change the default IRF map axes, in particular the rad_axis argument of create. This axis is used to define the geometry of the PSFMap and controls the distribution of error angles used to sample the PSF. This will reduce the quality of the PSF description.
- If one or several IRFs are not required for the study at hand, it is possible not to build them by removing them from the list of options passed to the MapDatasetMaker.
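A sketch combining these options (geometry and axis values are illustrative):

from gammapy.datasets import MapDataset
from gammapy.makers import MapDatasetMaker
from gammapy.maps import MapAxis, WcsGeom

energy_axis = MapAxis.from_energy_bounds("0.1 TeV", "10 TeV", nbin=10)
geom = WcsGeom.create(
    skydir=(83.63, 22.01), frame="icrs", width=(5, 5), binsz=0.02, axes=[energy_axis]
)

# coarser spatial binning for the IRF maps and a coarser rad axis for the PSF
rad_axis = MapAxis.from_bounds(0, 0.66, nbin=33, unit="deg", name="rad")
dataset = MapDataset.create(geom, binsz_irf=0.5, rad_axis=rad_axis)

# skip building IRFs that are not needed, e.g. the energy dispersion
maker = MapDatasetMaker(selection=["counts", "exposure", "background", "psf"])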
To share specific data from a database, it might be necessary to create a new data storage with a limited set of observations and summary files following the scheme described in gadf. This is possible with the copy_obs method provided by the DataStore. It copies individual observation files to a given directory and builds the associated observation and HDU index tables.
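For example, assuming copy_obs accepts a list of observation IDs:

from gammapy.data import DataStore

data_store = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1")

# copy three observations and build the new index tables in "new-data-store"
data_store.copy_obs([23523, 23526, 23559], "new-data-store")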
To interpolate maps onto a different geometry, use interp_to_geom.
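A minimal sketch with a toy map:

import numpy as np
from gammapy.maps import Map, WcsGeom

m = Map.create(skydir=(83.63, 22.01), width=(2, 2), binsz=0.02)
m.data = np.random.rand(*m.data.shape)

# interpolate onto a coarser geometry covering the same region
geom_new = WcsGeom.create(skydir=(83.63, 22.01), width=(2, 2), binsz=0.1)
m_new = m.interp_to_geom(geom_new)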
In general it is not recommended to suppress warnings from code because they might point to potential issues or help debugging a non-working script. However in some cases the cause of the warning is known and the warnings clutter the logging output. In this case it can be useful to locally suppress a specific warning like so:
from astropy.io.fits.verify import VerifyWarning
import warnings

with warnings.catch_warnings():
    warnings.simplefilter('ignore', VerifyWarning)
    # do stuff here
Sometimes, upper limit values may show as nan
while running a FluxPointsEstimator
or a LightCurveEstimator
. This often arises because the range of the norm parameter
being scanned over is not sufficient. Increasing this range usually solves the problem. In some cases,
you can also consider configuring the estimator with a different Fit
backend.
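A hedged sketch: in recent Gammapy versions the scan range is configured through a norm Parameter (older releases used norm_min/norm_max arguments instead); the energy edges and source name below are illustrative:

import astropy.units as u
from gammapy.estimators import FluxPointsEstimator
from gammapy.modeling import Parameter

# widen the scanned norm range
norm = Parameter(name="norm", value=1.0, scan_min=0.01, scan_max=100, scan_n_values=30)
fpe = FluxPointsEstimator(
    energy_edges=[1, 3, 10] * u.TeV,
    source="my-source",  # name of the model component of interest
    selection_optional=["ul"],
    norm=norm,
)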
Gammapy provides the possibility of displaying a
progress bar to monitor the advancement of time-consuming processes. To activate this
functionality, make sure that tqdm
is installed and add the following code snippet
to your code:
from gammapy.utils import pbar
pbar.SHOW_PROGRESS_BAR = True
The progress bar is available within the following:
- The get_datasets method
- The get_observations method
- The run() method of the estimator classes: ASmoothMapEstimator, TSMapEstimator, LightCurveEstimator
- The stat_profile and stat_surface methods
- The progress_download method
- The run_multiprocessing method
Since the Gammapy visualisations use the matplotlib library, which provides color styles, it is possible to change the default colors of the Gammapy plots. Using the style sheets of matplotlib, add the following lines to your notebooks or scripts after the Gammapy imports:
import matplotlib.style as style
style.use('XXXX')
# with XXXX from `print(plt.style.available)`
Note that you can create your own style with matplotlib (see here and here).
The CTAO observatory released a document describing best practices for data visualisation in a way friendly to color-blind people: CTAO document. To use them, add the following lines to your notebooks or scripts after the Gammapy imports:
import matplotlib.style as style
style.use('tableau-colorblind10')
or
import matplotlib.style as style
style.use('seaborn-colorblind')
To do a pulsar analysis, one must compute the pulsar phase of
each event and put this new information in a new Observation
.
Computing pulsar phases can be done using an external library such as
[PINT](https://nanograv-pint.readthedocs.io/en/latest/) or
[Tempo2](https://www.pulsarastronomy.net/pulsar/software/tempo2). A
[Gammapy Recipe](https://gammapy.github.io/gammapy-recipes/_build/html/index.html)
showing how to use PINT within the Gammapy framework is available
[here](https://gammapy.github.io/gammapy-recipes/_build/html/notebooks/pulsar_phase/pulsar_phase_computation.html).
For brevity, the code below shows how to add a dummy phase column to a new
EventList
and Observation
.
import numpy as np
from gammapy.data import DataStore, EventList
# read the observation
datastore = DataStore.from_dir("$GAMMAPY_DATA/hess-dl3-dr1/")
obs = datastore.obs(23523)
# use the phase information - dummy in this example
phase = np.random.random(len(obs.events.table))
# create a new `EventList`
table = obs.events.table
table["PHASE"] = phase
new_events = EventList(table)
# copy the observation in memory, changing the events
obs2 = obs.copy(events=new_events, in_memory=True)
# The new observation and the new events table can be serialised independently
obs2.write("new_obs.fits.gz")
obs2.write("events.fits.gz", include_irfs=False)