SpectrumDatasetOnOff¶
-
class
gammapy.datasets.SpectrumDatasetOnOff(models=None, counts=None, counts_off=None, exposure=None, edisp=None, mask_safe=None, mask_fit=None, acceptance=None, acceptance_off=None, name=None, gti=None, meta_table=None)[source]¶ Bases: gammapy.datasets.SpectrumDataset
Spectrum dataset for on-off likelihood fitting.
The on-off spectrum dataset bundles the reduced ON counts, the OFF counts, a spectral model, the relative background efficiencies and the instrument response functions needed to compute the fit statistic for the current model and data. A minimal usage sketch is given after the parameter list below.
- Parameters
- models
Models Fit model
- counts
RegionNDMap ON Counts spectrum
- counts_off
RegionNDMap OFF Counts spectrum
- exposure
RegionNDMap Exposure
- edisp
EDispKernelMap Energy dispersion kernel
- mask_safe
RegionNDMap Mask defining the safe data range.
- mask_fit
RegionNDMap Mask to apply to the likelihood for fitting.
- acceptance
RegionNDMap or float Relative background efficiency in the ON region.
- acceptance_off
RegionNDMap or float Relative background efficiency in the OFF region.
- name : str
Name of the dataset.
- gti
GTI GTI of the observation, or union of the GTIs if it is a stacked observation.
- meta_table
Table Table listing information on the observations used to create the dataset. One line per observation for stacked datasets.
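Examples
A minimal usage sketch (it assumes the public $GAMMAPY_DATA files referenced in the stacking example further below; the power-law model is an arbitrary choice): read one ON-OFF observation from OGIP files, attach a spectral model and evaluate the fit statistic.
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel(), name="crab")
>>> print(dataset.alpha)       # exposure ratio between the ON and OFF regions
>>> print(dataset.excess)      # counts - alpha * counts_off
>>> print(dataset.stat_sum())  # total WStat for the current model parameters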
Attributes Summary
alpha: Exposure ratio between signal and background regions
background: alpha * noff
background_model
counts: A lazy FITS data descriptor.
data_shape: Shape of the counts or background data (tuple)
edisp: A lazy FITS data descriptor.
energy_range: Energy range defined by the safe mask
evaluators: Model evaluators
excess: Excess
exposure: A lazy FITS data descriptor.
geoms: Map geometries
mask: Combined fit and safe mask
mask_fit: A lazy FITS data descriptor.
mask_safe: A lazy FITS data descriptor.
mask_safe_edisp: Mask safe for edisp maps
mask_safe_image: Reduced mask safe
mask_safe_psf: Mask safe for psf maps
models: Models (Models).
name
psf: A lazy FITS data descriptor.
stat_type
tag
Methods Summary
apply_mask_safe(): Apply mask safe to the dataset
copy([name]): A deep copy.
create(e_reco[, e_true, region, …]): Create empty SpectrumDatasetOnOff.
cutout(*args, **kwargs): Returns self
downsample(factor[, axis_name, name]): Downsample map dataset.
fake(npred_background[, random_state]): Simulate fake counts for the current model and reduced IRFs.
from_dict(data, **kwargs): Create spectrum dataset from dict.
from_geoms(geom, geom_exposure, geom_psf, …): Create a MapDataset object with zero filled maps according to the specified geometries
from_hdulist(hdulist[, name]): Create map dataset from list of HDUs.
from_ogip_files(filename): Read SpectrumDatasetOnOff from OGIP files.
from_spectrum_dataset(dataset, acceptance, …): Create spectrum dataset on off from another dataset.
info_dict([in_safe_data_range]): Info dict with summary statistics, summed over energy
npred(): Predicted source and background counts
npred_background(): Background counts estimated from the marginalized likelihood estimate.
npred_off(): Predicted counts in the off region
npred_signal([model]): Model predicted signal counts.
pad(*args, **kwargs): Returns self
peek([fig]): Quick-look summary plots.
plot_counts([ax, kwargs_counts, …]): Plot counts and background.
plot_excess([ax, kwargs_excess, …]): Plot excess and predicted signal.
plot_fit([ax_spectrum, ax_residuals, …]): Plot spectrum and residuals in two panels.
plot_residuals([ax, method]): Plot spectrum residuals.
plot_residuals_spatial([ax, method, …]): Plot spatial residuals.
plot_residuals_spectral([ax, method, region]): Plot spectral residuals.
read(filename): Read from file
resample_energy_axis(energy_axis[, name]): Resample SpectrumDatasetOnOff over new reconstructed energy axis.
reset_data_cache(): Reset data cache to free memory space
residuals([method]): Compute the spectral residuals.
slice_by_energy(energy_min, energy_max[, name]): Select and slice datasets in energy range
slice_by_idx(slices[, name]): Slice sub dataset.
stack(other): Stack this dataset with another one.
stat_array(): Likelihood per bin given the current model parameters
stat_sum(): Total statistic given the current model parameters.
to_dict(filename, *args, **kwargs): Convert to dict for YAML serialization.
to_hdulist(): Convert map dataset to list of HDUs.
to_image([name]): Create images by summing over the reconstructed energy axis.
to_ogip_files([outdir, use_sherpa, overwrite]): Write OGIP files.
to_spectrum_dataset([name]): Convert a SpectrumDatasetOnOff to a SpectrumDataset. The background model template is taken as alpha * counts_off.
write(filename, overwrite): Write spectrum dataset on off to file.
Attributes Documentation
-
alpha¶ Exposure ratio between signal and background regions
-
background¶ alpha * noff
-
background_model¶
-
counts¶ A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
-
data_shape¶ Shape of the counts or background data (tuple)
-
edisp¶ A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
-
energy_range¶ Energy range defined by the safe mask
-
evaluators¶ Model evaluators
-
excess¶ Excess
-
exposure¶ A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
-
geoms¶ Map geometries
- Returns
- geoms : dict
Dict of map geometries involved in the dataset.
-
mask¶ Combined fit and safe mask
-
mask_fit¶ A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
-
mask_safe¶ A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
-
mask_safe_edisp¶ Mask safe for edisp maps
-
mask_safe_image¶ Reduced mask safe
-
mask_safe_psf¶ Mask safe for psf maps
-
name¶
-
psf¶ A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
-
stat_type= 'wstat'¶
-
tag= 'SpectrumDatasetOnOff'¶
Methods Documentation
-
apply_mask_safe()¶ Apply mask safe to the dataset
-
copy(name=None)¶ A deep copy.
-
classmethod
create(e_reco, e_true=None, region=None, reference_time='2000-01-01', name=None, meta_table=None)[source]¶ Create empty SpectrumDatasetOnOff.
Empty containers are created with the correct geometry. counts, counts_off and aeff are zero and edisp is diagonal.
The safe_mask is set to False in every bin.
- Parameters
- e_reco
MapAxis Counts energy axis. Its name must be “energy”.
- e_true
MapAxis Effective area table energy axis. Its name must be “energy-true”. If not set, the reco energy values are used. Default is None.
- region
SkyRegion Region to define the dataset for.
- reference_time
Time Reference time of the dataset. Default is “2000-01-01”.
- meta_table
Table Table listing information on the observations used to create the dataset. One line per observation for stacked datasets.
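Examples
For illustration, a sketch of creating an empty dataset; the region centre, radius, energy bounds and dataset name below are arbitrary placeholder values.
>>> import astropy.units as u
>>> from astropy.coordinates import SkyCoord
>>> from regions import CircleSkyRegion
>>> from gammapy.maps import MapAxis
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> e_reco = MapAxis.from_energy_bounds("0.1 TeV", "10 TeV", nbin=10, name="energy")
>>> region = CircleSkyRegion(
...     center=SkyCoord(83.63 * u.deg, 22.01 * u.deg, frame="icrs"), radius=0.11 * u.deg
... )
>>> empty = SpectrumDatasetOnOff.create(e_reco=e_reco, region=region, name="empty-onoff")
>>> print(empty)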
-
cutout(*args, **kwargs)¶ Returns self
-
downsample(factor, axis_name=None, name=None)¶ Downsample map dataset.
The PSFMap and EDispKernelMap are not downsampled, except if a corresponding axis is given.
- Parameters
- factor : int
Downsampling factor.
- axis_name : str
Which non-spatial axis to downsample. By default only spatial axes are downsampled.
- name : str
Name of the downsampled dataset.
- Returns
- dataset
MapDataset or SpectrumDataset Downsampled map dataset.
-
fake(npred_background, random_state='random-seed')[source]¶ Simulate fake counts for the current model and reduced IRFs.
This method overwrites the counts and off counts defined on the dataset object.
- Parameters
- npred_background
RegionNDMap Predicted background to be used in the on region.
- random_state : {int, ‘random-seed’, ‘global-rng’, RandomState}
Defines random number generator initialisation. Passed to get_random_state.
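Examples
A sketch of simulating one fake realisation, starting from a real dataset so that the IRFs and acceptances are defined; the background attribute (alpha * counts_off) serves as the background prediction here, and the model and random seed are arbitrary choices.
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> dataset.fake(npred_background=dataset.background, random_state=42)
>>> print(dataset.counts.data.sum(), dataset.counts_off.data.sum())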
-
classmethod
from_dict(data, **kwargs)[source]¶ Create spectrum dataset from dict.
- Parameters
- data : dict
Dict containing data to create dataset from.
- Returns
- dataset
SpectrumDatasetOnOff Spectrum dataset on off.
-
classmethod
from_geoms(geom, geom_exposure, geom_psf, geom_edisp, reference_time='2000-01-01', name=None, **kwargs)¶ Create a MapDataset object with zero filled maps according to the specified geometries
- Parameters
- geom
Geom geometry for the counts and background maps
- geom_exposure
Geom geometry for the exposure map
- geom_psf
Geom geometry for the psf map
- geom_edisp
Geom geometry for the energy dispersion kernel map. If geom_edisp has a migra axis, this will create an EDispMap instead.
- reference_time
Time Reference time to use in the GTI definition.
- name : str
Name of the returned dataset.
- Returns
- dataset
MapDataset or SpectrumDataset A dataset containing zero filled maps.
-
from_hdulist()¶ Create map dataset from list of HDUs.
- Parameters
- hdulist
HDUList List of HDUs.
- name : str
Name of the new dataset.
- Returns
- dataset
MapDataset Map dataset.
-
classmethod
from_ogip_files(filename)[source]¶ Read SpectrumDatasetOnOff from OGIP files.
The BKG file, ARF, and RMF must be set in the PHA header and be present in the same folder.
The naming scheme is fixed to the following, with {name} the dataset name:
PHA file is named pha_obs{name}.fits
BKG file is named bkg_obs{name}.fits
ARF file is named arf_obs{name}.fits
RMF file is named rmf_obs{name}.fits
- Parameters
- filename : str
OGIP PHA file to read
-
classmethod
from_spectrum_dataset(dataset, acceptance, acceptance_off, counts_off=None)[source]¶ Create spectrum dataset on off from another dataset.
- Parameters
- dataset
SpectrumDataset Spectrum dataset defining counts, edisp, exposure etc.
- acceptance
array or float Relative background efficiency in the on region.
- acceptance_off
array or float Relative background efficiency in the off region.
- counts_off
RegionNDMap Off counts spectrum. If the dataset provides a background model and no off counts are given, the off counts are derived from the background as background / alpha.
- Returns
- dataset
SpectrumDatasetOnOff Spectrum dataset on off.
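Examples
A sketch of the typical call, using a toy, empty spectrum dataset and an all-zero OFF spectrum on the same geometry; acceptance=1 with acceptance_off=10 corresponds to one ON region and ten equivalent OFF regions. In practice the SpectrumDataset would come from a data reduction chain (e.g. SpectrumDatasetMaker) and counts_off from an OFF-region measurement.
>>> from gammapy.maps import MapAxis, RegionNDMap
>>> from gammapy.datasets import SpectrumDataset, SpectrumDatasetOnOff
>>> e_reco = MapAxis.from_energy_bounds("1 TeV", "10 TeV", nbin=5, name="energy")
>>> spec_ds = SpectrumDataset.create(e_reco=e_reco)
>>> counts_off = RegionNDMap.from_geom(spec_ds.counts.geom)
>>> onoff = SpectrumDatasetOnOff.from_spectrum_dataset(
...     dataset=spec_ds, acceptance=1, acceptance_off=10, counts_off=counts_off
... )
>>> print(onoff.alpha.data.max())  # 0.1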
-
info_dict(in_safe_data_range=True)[source]¶ Info dict with summary statistics, summed over energy
- Parameters
- in_safe_data_range : bool
Whether to sum only in the safe energy range
- Returns
- info_dict : dict
Dictionary with summary info.
-
npred()¶ Predicted source and background counts
- Returns
- npred
Map Total predicted counts
-
npred_background()[source]¶ Background counts estimated from the marginalized likelihood estimate. See WStat : Poisson data with background measurement
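For reference, the profiled background follows the WStat derivation referenced above. Writing \(\alpha_k\) for the ON/OFF exposure ratio, \(\mu_{\mathrm{sig},k}\) for the predicted signal counts and \(n_{\mathrm{on},k}\), \(n_{\mathrm{off},k}\) for the measured counts in bin \(k\), the estimate reads (the statistics documentation defines the exact conventions):
\[C_k = \alpha_k \left(n_{\mathrm{on},k} + n_{\mathrm{off},k}\right) - \left(1 + \alpha_k\right)\mu_{\mathrm{sig},k}\]
\[D_k = \sqrt{C_k^{2} + 4\,\alpha_k \left(1 + \alpha_k\right) n_{\mathrm{off},k}\,\mu_{\mathrm{sig},k}}\]
\[\mu_{\mathrm{bkg},k} = \frac{C_k + D_k}{2\,\alpha_k \left(1 + \alpha_k\right)}\]
Here \(\mu_{\mathrm{bkg},k}\) is the background expectation in the OFF region; the corresponding background counts in the ON region are \(\alpha_k\,\mu_{\mathrm{bkg},k}\).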
-
npred_signal(model=None)¶ Model predicted signal counts.
If a model is passed, the predicted counts from that component are returned; otherwise the total signal counts are returned.
- Parameters
- model: `~gammapy.modeling.models.SkyModel`, optional
Sky model to compute the npred for. If None, the sum of all components (minus the background model) is returned.
- Returns
- npred_sig
gammapy.maps.Map Map of the predicted signal counts.
-
pad(*args, **kwargs)¶ Returns self
-
peek(fig=None)¶ Quick-look summary plots.
- Parameters
- fig
Figure Figure to add AxesSubplot on.
- Returns
- ax1, ax2, ax3
AxesSubplot Counts, effective area and energy dispersion subplots.
-
plot_counts(ax=None, kwargs_counts=None, kwargs_background=None, **kwargs)¶ Plot counts and background.
-
plot_excess(ax=None, kwargs_excess=None, kwargs_npred_signal=None, **kwargs)¶ Plot excess and predicted signal.
-
plot_fit(ax_spectrum=None, ax_residuals=None, kwargs_spectrum=None, kwargs_residuals=None)¶ Plot spectrum and residuals in two panels.
Calls plot_excess and plot_residuals.
- Parameters
- ax_spectrum
Axes Axes to plot spectrum on.
- ax_residuals
Axes Axes to plot residuals on.
- kwargs_spectrum : dict
Keyword arguments passed to plot_excess.
- kwargs_residuals : dict
Keyword arguments passed to plot_residuals.
- Returns
- ax_spectrum, ax_residuals
Axes Spectrum and residuals plots.
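Examples
A short plotting sketch, using the same example data as the stacking example below and an arbitrary power-law model:
>>> import matplotlib.pyplot as plt
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> ax_spectrum, ax_residuals = dataset.plot_fit()
>>> plt.show()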
-
plot_residuals(ax=None, method='diff', **kwargs)¶ Plot spectrum residuals.
- Parameters
- ax
Axes Axes to plot on.
- method : {“diff”, “diff/model”, “diff/sqrt(model)”}
Normalization used to compute the residuals, see SpectrumDataset.residuals.
- **kwargs : dict
Keyword arguments passed to errorbar.
- Returns
- ax
Axes Axes object.
-
plot_residuals_spatial(ax=None, method='diff', smooth_kernel='gauss', smooth_radius='0.1 deg', **kwargs)¶ Plot spatial residuals.
The normalization used for the residuals computation can be controlled using the method parameter.
- Parameters
- ax
WCSAxes Axes to plot on.
- method : {“diff”, “diff/model”, “diff/sqrt(model)”}
Normalization used to compute the residuals, see MapDataset.residuals.
- smooth_kernel : {“gauss”, “box”}
Kernel shape.
- smooth_radius : `~astropy.units.Quantity`, str or float
Smoothing width given as quantity or float. If a float is given, it is interpreted as smoothing width in pixels.
- **kwargs : dict
Keyword arguments passed to imshow.
- Returns
- ax
WCSAxes WCSAxes object.
-
plot_residuals_spectral(ax=None, method='diff', region=None, **kwargs)¶ Plot spectral residuals.
The residuals are extracted from the provided region, and the normalization used for its computation can be controlled using the method parameter.
- Parameters
- ax
Axes Axes to plot on.
- method : {“diff”, “diff/model”, “diff/sqrt(model)”}
Normalization used to compute the residuals, see SpectrumDataset.residuals.
- region : `~regions.SkyRegion` (required)
Target sky region.
- **kwargs : dict
Keyword arguments passed to errorbar.
- Returns
- ax
Axes Axes object.
-
classmethod
read(filename)[source]¶ Read from file
For now, filename is assumed to be the name of a PHA file. The BKG, ARF, and RMF file names must be set in the PHA header, and the files must be present in the same folder.
- Parameters
- filename : str
OGIP PHA file to read
-
resample_energy_axis(energy_axis, name=None)[source]¶ Resample SpectrumDatasetOnOff over new reconstructed energy axis.
Counts are summed taking into account safe mask.
- Parameters
- energy_axis
MapAxis New reconstructed energy axis
- name: str
Name of the new dataset.
- Returns
- dataset
SpectrumDataset Resampled spectrum dataset.
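Examples
A sketch of rebinning onto a coarser axis; the new edges are taken from the existing ones (first, middle and last edge) so that they line up exactly with the original binning.
>>> from gammapy.maps import MapAxis
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> edges = dataset.counts.geom.axes[0].edges
>>> coarse = MapAxis.from_edges(edges[[0, len(edges) // 2, -1]], interp="log", name="energy")
>>> rebinned = dataset.resample_energy_axis(energy_axis=coarse, name="rebinned")
>>> print(rebinned.counts.geom.axes[0].nbin)  # 2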
-
reset_data_cache()¶ Reset data cache to free memory space
-
residuals(method='diff')¶ Compute the spectral residuals.
- Parameters
- method : {“diff”, “diff/model”, “diff/sqrt(model)”}
Method used to compute the residuals. Available options are:
diff (default): data - model
diff/model: (data - model) / model
diff/sqrt(model): (data - model) / sqrt(model)
- Returns
- residuals
RegionNDMap Residual spectrum
-
slice_by_energy(energy_min, energy_max, name=None)¶ Select and slice datasets in energy range
- Parameters
- energy_min, energy_max
Quantity Energy bounds of the energy range to select.
- name : str
Name of the sliced dataset.
- Returns
- dataset
MapDataset Sliced dataset.
-
slice_by_idx(slices, name=None)[source]¶ Slice sub dataset.
The slicing only applies to the maps that define the corresponding axes.
- Parameters
- slices : dict
Dict of axis names and integers or slice object pairs. Contains one element for each non-spatial dimension. For integer indexing the corresponding axis is dropped from the map. Axes not specified in the dict are kept unchanged.
- name : str
Name of the sliced dataset.
- Returns
- map_out
Map Sliced map object.
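Examples
For instance, keeping only the first five bins of the reconstructed energy axis (same example file as in the stacking example below; the sliced dataset name is arbitrary):
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> sliced = dataset.slice_by_idx({"energy": slice(0, 5)}, name="obs23523-sliced")
>>> print(sliced.counts.geom.axes[0].nbin)  # 5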
-
stack(other)[source]¶ Stack this dataset with another one.
Safe mask is applied to compute the stacked counts vector. Counts outside each dataset safe mask are lost.
Stacking is performed in-place.
The stacking of 2 datasets is implemented as follows. Here, \(k\) denotes a bin in reconstructed energy and \(j = {1,2}\) is the dataset number
The mask_safe of each dataset is defined as:
\[\begin{split}\epsilon_{jk} =\left\{\begin{array}{cl} 1, & \mbox{if bin k is inside the energy thresholds}\\ 0, & \mbox{otherwise} \end{array}\right.\end{split}\]
Then the total counts and counts_off are computed according to:
\[\overline{\mathrm{n_{on}}}_k = \mathrm{n_{on}}_{1k} \cdot \epsilon_{1k} + \mathrm{n_{on}}_{2k} \cdot \epsilon_{2k}\]
\[\overline{\mathrm{n_{off}}}_k = \mathrm{n_{off}}_{1k} \cdot \epsilon_{1k} + \mathrm{n_{off}}_{2k} \cdot \epsilon_{2k}\]
The stacked safe_mask is then:
\[\overline{\epsilon_k} = \epsilon_{1k} \;\mathrm{OR}\; \epsilon_{2k}\]
In each energy bin \(k\), the count excess is computed taking into account the ON acceptance \(a_{\mathrm{on},k}\) and the OFF acceptance \(a_{\mathrm{off},k}\). They define the factors \(\alpha_k = a_{\mathrm{on},k} / a_{\mathrm{off},k}\) such that \(n_{\mathrm{ex},k} = n_{\mathrm{on},k} - \alpha_k \, n_{\mathrm{off},k}\). We define the stacked value \(\overline{a_{\mathrm{on}}}_k = 1\), so that:
\[\overline{a_{\mathrm{off}}}_k = \frac{\overline{\mathrm{n_{off}}}_k}{\alpha_{1k} \cdot \mathrm{n_{off}}_{1k} \cdot \epsilon_{1k} + \alpha_{2k} \cdot \mathrm{n_{off}}_{2k} \cdot \epsilon_{2k}}\]
The exposure and energy dispersion are stacked over the \(j\) datasets as follows, where \(k\) and \(l\) denote a bin in reconstructed and true energy, respectively, and \(t_j\) is the livetime of dataset \(j\):
\[\overline{t} = \sum_{j} t_j\]
\[\overline{\mathrm{aeff}}_l = \frac{\sum_{j}\mathrm{aeff}_{jl} \cdot t_j}{\overline{t}}\]
\[\overline{\mathrm{edisp}}_{kl} = \frac{\sum_{j} \mathrm{edisp}_{jkl} \cdot \mathrm{aeff}_{jl} \cdot t_j \cdot \epsilon_{jk}}{\sum_{j} \mathrm{aeff}_{jl} \cdot t_j}\]
- Parameters
- other
SpectrumDatasetOnOff The dataset to stack to the current one.
Examples
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> obs_ids = [23523, 23526, 23559, 23592]
>>> datasets = []
>>> for obs in obs_ids:
...     filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs{}.fits"
...     ds = SpectrumDatasetOnOff.from_ogip_files(filename.format(obs))
...     datasets.append(ds)
>>> stacked = datasets[0]
>>> for ds in datasets[1:]:
...     stacked.stack(ds)
>>> print(stacked)
-
stat_sum()¶ Total statistic given the current model parameters.
-
to_image(name=None)¶ Create images by summing over the reconstructed energy axis.
- Parameters
- name : str
Name of the new dataset.
- Returns
- dataset
MapDataset or SpectrumDataset Dataset integrated over non-spatial axes.
-
to_ogip_files(outdir=None, use_sherpa=False, overwrite=False)[source]¶ Write OGIP files.
If you want to use the written files with Sherpa, you have to set the use_sherpa flag. Then all files will be written in units of ‘keV’ and ‘cm2’.
The naming scheme is fixed, with {name} the dataset name:
PHA file is named pha_obs{name}.fits
BKG file is named bkg_obs{name}.fits
ARF file is named arf_obs{name}.fits
RMF file is named rmf_obs{name}.fits
- Parameters
- outdir
pathlib.Path Output directory; default is the current working directory.
- use_sherpa : bool, optional
Write Sherpa compliant files; default is False.
- overwrite : bool
Overwrite existing files?
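Examples
A sketch of writing a dataset back to OGIP files; "ogip-export" is an arbitrary output directory created beforehand.
>>> from pathlib import Path
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> outdir = Path("ogip-export")
>>> outdir.mkdir(exist_ok=True)
>>> dataset.to_ogip_files(outdir=outdir, overwrite=True)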
-
to_spectrum_dataset(name=None)[source]¶ Convert a SpectrumDatasetOnOff to a SpectrumDataset. The background model template is taken as alpha * counts_off.
- Parameters
- name: str
Name of the new dataset
- Returns
- dataset:
SpectrumDataset SpectrumDataset with Cash statistics.
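Examples
For example, converting a dataset read from OGIP files into its Cash-statistics counterpart (the dataset name is arbitrary):
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> onoff = SpectrumDatasetOnOff.from_ogip_files(filename)
>>> cash_dataset = onoff.to_spectrum_dataset(name="obs23523-cash")
>>> print(cash_dataset.stat_type)  # cash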