SpectrumDataset#

class gammapy.datasets.SpectrumDataset(models=None, counts=None, exposure=None, background=None, psf=None, edisp=None, mask_safe=None, mask_fit=None, gti=None, meta_table=None, name=None)[source]#

Bases: gammapy.datasets.spectrum.PlotMixin, gammapy.datasets.map.MapDataset

Attributes Summary

background

A lazy FITS data descriptor.

background_model

counts

A lazy FITS data descriptor.

data_shape

Shape of the counts or background data (tuple)

edisp

A lazy FITS data descriptor.

energy_range

Energy range maps defined by the mask_safe and mask_fit.

energy_range_fit

Energy range maps defined by the mask_fit only.

energy_range_safe

Energy range maps defined by the mask_safe only.

energy_range_total

Largest energy range among all pixels, defined by mask_safe and mask_fit.

evaluators

Model evaluators

excess

Observed excess: counts-background

exposure

A lazy FITS data descriptor.

geoms

Map geometries

mask

Combined fit and safe mask

mask_fit

A lazy FITS data descriptor.

mask_fit_image

Reduced mask fit

mask_image

Reduced mask

mask_safe

A lazy FITS data descriptor.

mask_safe_edisp

Mask safe for edisp maps

mask_safe_image

Reduced mask safe

mask_safe_psf

Mask safe for psf maps

models

Models set on the dataset (Models).

name

psf

A lazy FITS data descriptor.

stat_type

tag

Methods Summary

copy([name])

A deep copy.

create(geom[, energy_axis_true, migra_axis, ...])

Create a MapDataset object with zero filled maps.

cutout(*args, **kwargs)

Cutout map dataset.

downsample(factor[, axis_name, name])

Downsample map dataset.

fake([random_state])

Simulate fake counts for the current model and reduced IRFs.

from_dict(data[, lazy, cache])

Create from dicts and models list generated from YAML serialization.

from_geoms(geom[, geom_exposure, geom_psf, ...])

Create a MapDataset object with zero filled maps according to the specified geometries

from_hdulist(hdulist[, name, lazy, format])

Create map dataset from list of HDUs.

info_dict([in_safe_data_range])

Info dict with summary statistics, summed over energy

npred()

Total predicted source and background counts

npred_background()

Predicted background counts

npred_signal([model_name])

Model predicted signal counts.

pad(pad_width[, mode, name])

Pad the spatial dimensions of the dataset.

peek([figsize])

Quick-look summary plots.

plot_counts([ax, kwargs_counts, ...])

Plot counts and background.

plot_excess([ax, kwargs_excess, ...])

Plot excess and predicted signal.

plot_fit([ax_spectrum, ax_residuals, ...])

Plot spectrum and residuals in two panels.

plot_masks([ax, kwargs_fit, kwargs_safe])

Plot mask safe and mask fit

plot_residuals([ax_spatial, ax_spectral, ...])

Plot spatial and spectral residuals in two panels.

plot_residuals_spatial(*args, **kwargs)

Plot spatial residuals.

plot_residuals_spectral([ax, method, region])

Plot spectral residuals.

read(filename[, name, lazy, cache, format])

Read a dataset from file.

resample_energy_axis(energy_axis[, name])

Resample MapDataset over new reco energy axis.

reset_data_cache()

Reset data cache to free memory space

residuals([method])

Compute residuals map.

slice_by_energy([energy_min, energy_max, name])

Select and slice datasets in energy range

slice_by_idx(slices[, name])

Slice sub dataset.

stack(other[, nan_to_num])

Stack another dataset in place.

stat_array()

Statistic function value per bin given the current model parameters

stat_sum()

Total statistic function value given the current model parameters.

to_dict()

Convert to dict for YAML serialization.

to_hdulist()

Convert map dataset to list of HDUs.

to_image([name])

Create images by summing over the reconstructed energy axis.

to_masked([name, nan_to_num])

Return masked dataset

to_region_map_dataset(region[, name])

Integrate the map dataset in a given region.

to_spectrum_dataset(*args, **kwargs)

Return a ~gammapy.datasets.SpectrumDataset from on_region.

write(filename[, overwrite])

Write Dataset to file.

Attributes Documentation

background#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

background_model#
counts#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

data_shape#

Shape of the counts or background data (tuple)

edisp#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

energy_range#

Energy range maps defined by the mask_safe and mask_fit.

energy_range_fit#

Energy range maps defined by the mask_fit only.

energy_range_safe#

Energy range maps defined by the mask_safe only.

energy_range_total#

Largest energy range among all pixels, defined by mask_safe and mask_fit.

evaluators#

Model evaluators

excess#

Observed excess: counts-background

exposure#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

geoms#

Map geometries

Returns
geoms : dict

Dict of map geometries involved in the dataset.

mask#

Combined fit and safe mask

mask_fit#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

mask_fit_image#

Reduced mask fit

mask_image#

Reduced mask

mask_safe#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

mask_safe_edisp#

Mask safe for edisp maps

mask_safe_image#

Reduced mask safe

mask_safe_psf#

Mask safe for psf maps

models#

Models set on the dataset (Models).

name#
psf#

A lazy FITS data descriptor.

Parameters
cache : bool

Whether to cache the data.

stat_type = 'cash'#
tag = 'SpectrumDataset'#

Methods Documentation

copy(name=None)#

A deep copy.

Parameters
name : str

Name of the copied dataset.

Returns
dataset : Dataset

Copied dataset.
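Examples

A minimal usage sketch, assuming the HESS test file from $GAMMAPY_DATA used elsewhere on this page:

>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> copied = dataset.copy(name="obs23523-copy")  # independent deep copy under a new name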

classmethod create(geom, energy_axis_true=None, migra_axis=None, rad_axis=None, binsz_irf=None, reference_time='2000-01-01', name=None, meta_table=None, **kwargs)#

Create a MapDataset object with zero filled maps.

Parameters
geom : WcsGeom

Reference target geometry in reco energy, used for counts and background maps.

energy_axis_true : MapAxis

True energy axis used for IRF maps.

migra_axis : MapAxis

If set, this provides the migration axis for the energy dispersion map. If not set, an EDispKernelMap is produced instead. Default is None.

rad_axis : MapAxis

Rad axis for the PSF map.

binsz_irf : float

IRF map pixel size in degrees.

reference_time : Time

The reference time to use in the GTI definition.

name : str

Name of the returned dataset.

meta_table : Table

Table listing information on observations used to create the dataset. One line per observation for stacked datasets.

Returns
empty_maps : MapDataset

A MapDataset containing zero filled maps.

Examples

>>> from gammapy.datasets import MapDataset
>>> from gammapy.maps import WcsGeom, MapAxis
>>> energy_axis = MapAxis.from_energy_bounds(1.0, 10.0, 4, unit="TeV")
>>> energy_axis_true = MapAxis.from_energy_bounds(
...     0.5, 20, 10, unit="TeV", name="energy_true"
... )
>>> geom = WcsGeom.create(
...     skydir=(83.633, 22.014),
...     binsz=0.02, width=(2, 2),
...     frame="icrs",
...     proj="CAR",
...     axes=[energy_axis]
... )
>>> empty = MapDataset.create(geom=geom, energy_axis_true=energy_axis_true, name="empty")
cutout(*args, **kwargs)[source]#

Cutout map dataset.

Parameters
position : SkyCoord

Center position of the cutout region.

width : tuple of Angle

Angular sizes of the region in (lon, lat) in that specific order. If only one value is passed, a square region is extracted.

mode : {'trim', 'partial', 'strict'}

Mode option for Cutout2D, for details see Cutout2D.

name : str

Name of the new dataset.

Returns
cutout : MapDataset

Cutout map dataset.

downsample(factor, axis_name=None, name=None)#

Downsample map dataset.

The PSFMap and EDispKernelMap are not downsampled, except if a corresponding axis is given.

Parameters
factor : int

Downsampling factor.

axis_name : str

Which non-spatial axis to downsample. By default only spatial axes are downsampled.

name : str

Name of the downsampled dataset.

Returns
dataset : MapDataset or SpectrumDataset

Downsampled map dataset.
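Examples

A usage sketch, assuming the CTA-1DC test dataset used elsewhere on this page; the factor of 2 is illustrative and assumes the downsampled axis length is divisible by it:

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> downsampled = dataset.downsample(factor=2, name="downsampled")  # spatial axes only
>>> downsampled_energy = dataset.downsample(factor=2, axis_name="energy", name="energy-downsampled")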

fake(random_state='random-seed')#

Simulate fake counts for the current model and reduced IRFs.

This method overwrites the counts defined on the dataset object.

Parameters
random_state : {int, 'random-seed', 'global-rng', RandomState}

Defines random number generator initialisation. Passed to get_random_state.
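Examples

A minimal sketch, where `dataset` is a placeholder for a SpectrumDataset that already has models, exposure, background and energy dispersion set:

>>> # `dataset` is assumed to be a fully configured SpectrumDataset
>>> dataset.fake(random_state=42)  # replaces dataset.counts with a Poisson realisation of npred()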

classmethod from_dict(data, lazy=False, cache=True)#

Create from dicts and models list generated from YAML serialization.

classmethod from_geoms(geom, geom_exposure=None, geom_psf=None, geom_edisp=None, reference_time='2000-01-01', name=None, **kwargs)#

Create a MapDataset object with zero filled maps according to the specified geometries

Parameters
geom : Geom

Geometry for the counts and background maps.

geom_exposure : Geom

Geometry for the exposure map.

geom_psf : Geom

Geometry for the PSF map.

geom_edisp : Geom

Geometry for the energy dispersion kernel map. If geom_edisp has a migra axis, this will create an EDispMap instead.

reference_time : Time

The reference time to use in the GTI definition.

name : str

Name of the returned dataset.

Returns
dataset : MapDataset or SpectrumDataset

A dataset containing zero filled maps.

classmethod from_hdulist(hdulist, name=None, lazy=False, format='gadf')#

Create map dataset from list of HDUs.

Parameters
hdulist : HDUList

List of HDUs.

name : str

Name of the new dataset.

format : {"gadf"}

Format the hdulist is given in.

Returns
dataset : MapDataset

Map dataset.

info_dict(in_safe_data_range=True)#

Info dict with summary statistics, summed over energy

Parameters
in_safe_data_range : bool

Whether to sum only in the safe energy range.

Returns
info_dict : dict

Dictionary with summary info.
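Examples

A usage sketch, assuming the HESS test file used elsewhere on this page:

>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> info = dataset.info_dict(in_safe_data_range=True)  # summary statistics summed over energy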

npred()#

Total predicted source and background counts

Returns
npred : Map

Total predicted counts

npred_background()#

Predicted background counts

The predicted background counts depend on the parameters of the FoVBackgroundModel defined in the dataset.

Returns
npred_background : Map

Predicted counts from the background.

npred_signal(model_name=None)#

Model predicted signal counts.

If a model name is passed, predicted counts from that component are returned. Else, the total signal counts are returned.

Parameters
model_name : str

Name of the SkyModel component for which to compute npred. If None, the sum of all components (excluding the background model) is returned.

Returns
npred_sig : gammapy.maps.Map

Map of the predicted signal counts.
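Examples

A usage sketch of the npred methods, assuming the HESS test file and the power-law model also used in the plot_excess example below:

>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> npred_total = dataset.npred()           # predicted source plus background counts
>>> npred_bkg = dataset.npred_background()  # background component only
>>> npred_src = dataset.npred_signal()      # sum of all signal components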

pad(pad_width, mode='constant', name=None)#

Pad the spatial dimensions of the dataset.

The padding only applies to counts, masks, background and exposure.

Counts, background and masks are padded with zeros, exposure is padded with edge value.

Parameters
pad_width : {sequence, array_like, int}

Number of pixels padded to the edges of each axis.

name : str

Name of the padded dataset.

Returns
dataset : MapDataset

Padded map dataset.

peek(figsize=(16, 4))#

Quick-look summary plots.

Parameters
figsize : tuple

Size of the figure.
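Examples

A usage sketch, assuming the HESS test file used elsewhere on this page:

>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> dataset.peek(figsize=(16, 4))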

plot_counts(ax=None, kwargs_counts=None, kwargs_background=None, **kwargs)#

Plot counts and background.

Parameters
ax : Axes

Axes to plot on.

kwargs_counts : dict

Keyword arguments passed to hist for the counts.

kwargs_background : dict

Keyword arguments passed to hist for the background.

**kwargs : dict

Keyword arguments passed to both hist calls.

Returns
ax : Axes

Axes object.

plot_excess(ax=None, kwargs_excess=None, kwargs_npred_signal=None, **kwargs)#

Plot excess and predicted signal.

The error bars are computed with a symmetric assumption on the excess.

Parameters
ax : Axes

Axes to plot on.

kwargs_excess : dict

Keyword arguments passed to errorbar for the excess.

kwargs_npred_signal : dict

Keyword arguments passed to hist for the predicted signal.

**kwargs : dict

Keyword arguments passed to both plot methods.

Returns
ax : Axes

Axes object.

Examples

>>> # Creating a spectral dataset
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> p = PowerLawSpectralModel()
>>> dataset.models = SkyModel(spectral_model=p)
>>> # Plot the excess in blue and the npred in black dashed lines
>>> kwargs_excess = {"color": "blue", "markersize": 8, "marker": "s"}
>>> kwargs_npred_signal = {"color": "black", "ls": "--"}
>>> dataset.plot_excess(kwargs_excess=kwargs_excess, kwargs_npred_signal=kwargs_npred_signal)
plot_fit(ax_spectrum=None, ax_residuals=None, kwargs_spectrum=None, kwargs_residuals=None)#

Plot spectrum and residuals in two panels.

Calls plot_excess and plot_residuals_spectral.

Parameters
ax_spectrum : Axes

Axes to plot spectrum on.

ax_residuals : Axes

Axes to plot residuals on.

kwargs_spectrum : dict

Keyword arguments passed to plot_excess.

kwargs_residuals : dict

Keyword arguments passed to plot_residuals_spectral.

Returns
ax_spectrum, ax_residuals : Axes

Spectrum and residuals plots.

Examples

>>> # Creating a spectral dataset
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> p = PowerLawSpectralModel()
>>> dataset.models = SkyModel(spectral_model=p)
>>> # Optional plotting configuration
>>> kwargs_excess = {"color": "blue", "markersize": 8, "marker": "s"}
>>> kwargs_npred_signal = {"color": "black", "ls": "--"}
>>> kwargs_spectrum = {"kwargs_excess": kwargs_excess, "kwargs_npred_signal": kwargs_npred_signal}
>>> kwargs_residuals = {"color": "black", "markersize": 4, "marker": "s"}
>>> dataset.plot_fit(kwargs_residuals=kwargs_residuals, kwargs_spectrum=kwargs_spectrum)
plot_masks(ax=None, kwargs_fit=None, kwargs_safe=None)#

Plot mask safe and mask fit

Parameters
ax : Axes

Axes to plot on.

kwargs_fit : dict

Keyword arguments passed to plot_mask() for mask fit.

kwargs_safe : dict

Keyword arguments passed to plot_mask() for mask safe.

Returns
ax : Axes

Axes object.

Examples

>>> # Reading a spectral dataset
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> dataset.mask_fit = dataset.mask_safe.copy()
>>> dataset.mask_fit.data[40:46] = False  # setting dummy mask_fit for illustration
>>> # Plot the masks on top of the counts histogram
>>> kwargs_safe = {"color": "green", "alpha": 0.2}  # optional styling arguments
>>> kwargs_fit = {"color": "pink", "alpha": 0.2}
>>> ax = dataset.plot_counts()
>>> dataset.plot_masks(ax=ax, kwargs_fit=kwargs_fit, kwargs_safe=kwargs_safe)
plot_residuals(ax_spatial=None, ax_spectral=None, kwargs_spatial=None, kwargs_spectral=None)#

Plot spatial and spectral residuals in two panels.

Calls plot_residuals_spatial and plot_residuals_spectral. The spectral residuals are extracted from the provided region, and the normalization used for their computation can be controlled using the method parameter. The region outline is overlaid on the residuals map. If no region is passed, the residuals are computed for the entire map.

Parameters
ax_spatial : WCSAxes

Axes to plot spatial residuals on.

ax_spectral : Axes

Axes to plot spectral residuals on.

kwargs_spatial : dict

Keyword arguments passed to plot_residuals_spatial.

kwargs_spectral : dict

Keyword arguments passed to plot_residuals_spectral. The region is passed as an entry of this dictionary (see the example below).

Returns
ax_spatial, ax_spectral : WCSAxes, Axes

Spatial and spectral residuals plots.

Examples

>>> from regions import CircleSkyRegion
>>> from astropy.coordinates import SkyCoord
>>> import astropy.units as u
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> reg = CircleSkyRegion(SkyCoord(0, 0, unit="deg", frame="galactic"), radius=1.0 * u.deg)
>>> kwargs_spatial = {"cmap": "RdBu_r", "vmin": -5, "vmax": 5, "add_cbar": True}
>>> kwargs_spectral = {"region": reg, "markerfacecolor": "blue", "markersize": 8, "marker": "s"}
>>> dataset.plot_residuals(kwargs_spatial=kwargs_spatial, kwargs_spectral=kwargs_spectral)
plot_residuals_spatial(*args, **kwargs)[source]#

Plot spatial residuals.

The normalization used for the residuals computation can be controlled using the method parameter.

Parameters
ax : WCSAxes

Axes to plot on.

method : {"diff", "diff/model", "diff/sqrt(model)"}

Normalization used to compute the residuals, see MapDataset.residuals.

smooth_kernel : {"gauss", "box"}

Kernel shape.

smooth_radius : `~astropy.units.Quantity`, str or float

Smoothing width given as quantity or float. If a float is given, it is interpreted as smoothing width in pixels.

**kwargs : dict

Keyword arguments passed to imshow.

Returns
ax : WCSAxes

WCSAxes object.

Examples

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> kwargs = {"cmap": "RdBu_r", "vmin":-5, "vmax":5, "add_cbar": True}
>>> dataset.plot_residuals_spatial(method="diff/sqrt(model)", **kwargs) 
plot_residuals_spectral(ax=None, method='diff', region=None, **kwargs)#

Plot spectral residuals.

The residuals are extracted from the provided region, and the normalization used for its computation can be controlled using the method parameter.

The error bars are computed using the uncertainty on the excess with a symmetric assumption.

Parameters
ax : Axes

Axes to plot on.

method : {"diff", "diff/sqrt(model)"}

Normalization used to compute the residuals, see SpectrumDataset.residuals.

region : `~regions.SkyRegion` (required)

Target sky region.

**kwargs : dict

Keyword arguments passed to errorbar.

Returns
ax : Axes

Axes object.

Examples

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> kwargs = {"markerfacecolor": "blue", "markersize":8, "marker":'s'}
>>> dataset.plot_residuals_spectral(method="diff/sqrt(model)", **kwargs) 
classmethod read(filename, name=None, lazy=False, cache=True, format='gadf')#

Read a dataset from file.

Parameters
filename : str

Filename to read from.

name : str

Name of the new dataset.

lazy : bool

Whether to lazily load data into memory.

cache : bool

Whether to cache the data after loading.

format : {"gadf"}

Format of the dataset file.

Returns
dataset : MapDataset

Map dataset.

resample_energy_axis(energy_axis, name=None)#

Resample MapDataset over new reco energy axis.

Counts are summed taking into account safe mask.

Parameters
energy_axis : MapAxis

New reconstructed energy axis.

name : str

Name of the new dataset.

Returns
dataset : MapDataset or SpectrumDataset

Resampled dataset.
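Examples

A sketch of resampling to a coarser reconstructed energy axis, assuming the new bin edges are compatible with the original binning (the axis values below are illustrative only):

>>> from gammapy.maps import MapAxis
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> new_axis = MapAxis.from_energy_bounds("0.1 TeV", "100 TeV", nbin=5, name="energy")
>>> resampled = dataset.resample_energy_axis(energy_axis=new_axis, name="resampled")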

reset_data_cache()#

Reset data cache to free memory space

residuals(method='diff', **kwargs)#

Compute residuals map.

Parameters
method : {"diff", "diff/model", "diff/sqrt(model)"}

Method used to compute the residuals. Available options are:

  • "diff" (default): data - model

  • "diff/model": (data - model) / model

  • "diff/sqrt(model)": (data - model) / sqrt(model)

**kwargs : dict

Keyword arguments forwarded to Map.smooth()

Returns
residuals : gammapy.maps.Map

Residual map.
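Examples

A usage sketch, assuming the CTA-1DC test dataset used elsewhere on this page (with no extra models set, npred() reduces to the background prediction):

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> residuals = dataset.residuals(method="diff/sqrt(model)")  # (data - model) / sqrt(model)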

slice_by_energy(energy_min=None, energy_max=None, name=None)#

Select and slice datasets in energy range

Parameters
energy_min, energy_max : Quantity

Energy bounds of the selected range.

name : str

Name of the sliced dataset.

Returns
dataset : MapDataset

Sliced dataset.

Examples

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> sliced = dataset.slice_by_energy(energy_min="1 TeV", energy_max="5 TeV")
>>> sliced.data_shape
(3, 240, 320)
slice_by_idx(slices, name=None)#

Slice sub dataset.

The slicing only applies to the maps that define the corresponding axes.

Parameters
slices : dict

Dict of axes names and integers or slice object pairs. Contains one element for each non-spatial dimension. For integer indexing the corresponding axis is dropped from the map. Axes not specified in the dict are kept unchanged.

name : str

Name of the sliced dataset.

Returns
dataset : MapDataset or SpectrumDataset

Sliced dataset.

Examples

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> slices = {"energy": slice(0, 3)}  # to get the first 3 energy slices
>>> sliced = dataset.slice_by_idx(slices)
>>> print(sliced.geoms["geom"])
WcsGeom
        axes       : ['lon', 'lat', 'energy']
        shape      : (320, 240, 3)
        ndim       : 3
        frame      : galactic
        projection : CAR
        center     : 0.0 deg, 0.0 deg
        width      : 8.0 deg x 6.0 deg
        wcs ref    : 0.0 deg, 0.0 deg
stack(other, nan_to_num=True)#

Stack another dataset in place. The original dataset is modified.

Safe mask is applied to compute the stacked counts data. Counts outside each dataset safe mask are lost.

The stacking of two datasets is implemented as follows. Here, \(k\) denotes a bin in reconstructed energy and \(j \in \{1, 2\}\) labels the dataset.

The mask_safe of each dataset is defined as:

\[\begin{split}\epsilon_{jk} =\left\{\begin{array}{cl} 1, & \mbox{if bin k is inside the thresholds}\\ 0, & \mbox{otherwise} \end{array}\right.\end{split}\]

Then the total counts and model background bkg are computed according to:

\[ \begin{align}\begin{aligned}\overline{\mathrm{n_{on}}}_k = \mathrm{n_{on}}_{1k} \cdot \epsilon_{1k} + \mathrm{n_{on}}_{2k} \cdot \epsilon_{2k}\\\overline{bkg}_k = bkg_{1k} \cdot \epsilon_{1k} + bkg_{2k} \cdot \epsilon_{2k}\end{aligned}\end{align} \]

The stacked safe_mask is then:

\[\overline{\epsilon_k} = \epsilon_{1k} \mathrm{\,OR\,} \epsilon_{2k}\]
Parameters
other : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`

Map dataset to be stacked with this one. If other is an on-off dataset, alpha * counts_off is used as the background model.

nan_to_num : bool

Non-finite values are replaced by zero if True (default).
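Examples

A sketch of in-place stacking, where `dataset1` and `dataset2` are placeholders for two SpectrumDataset objects with identical geometries:

>>> # `dataset1` and `dataset2` are assumed to share the same region and energy axes
>>> stacked = dataset1.copy(name="stacked")
>>> stacked.stack(dataset2)  # modifies `stacked` in place; counts outside each safe mask are lost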

stat_array()#

Statistic function value per bin given the current model parameters

stat_sum()#

Total statistic function value given the current model parameters.
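Examples

A usage sketch of the fit statistic methods, assuming the HESS test file and the power-law model used in the plot_excess example above:

>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> per_bin = dataset.stat_array()  # statistic value for each bin
>>> total = dataset.stat_sum()      # total statistic, taking the mask into account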

to_dict()#

Convert to dict for YAML serialization.

to_hdulist()#

Convert map dataset to list of HDUs.

Returns
hdulist : HDUList

Map dataset list of HDUs.

to_image(name=None)#

Create images by summing over the reconstructed energy axis.

Parameters
name : str

Name of the new dataset.

Returns
dataset : MapDataset or SpectrumDataset

Dataset integrated over non-spatial axes.
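Examples

A usage sketch, assuming the CTA-1DC test dataset used elsewhere on this page:

>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> image = dataset.to_image(name="image")  # counts, background and exposure summed over energy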

to_masked(name=None, nan_to_num=True)#

Return masked dataset

Parameters
name : str

Name of the masked dataset.

nan_to_num : bool

Non-finite values are replaced by zero if True (default).

Returns
dataset : MapDataset or SpectrumDataset

Masked dataset

to_region_map_dataset(region, name=None)#

Integrate the map dataset in a given region.

Counts and background of the dataset are integrated in the given region, taking the safe mask into account. The exposure is averaged in the region, again taking the safe mask into account. The PSF and energy dispersion kernel are taken at the center of the region.

Parameters
region : SkyRegion

Region from which to extract the spectrum.

name : str

Name of the new dataset.

Returns
dataset : MapDataset

The resulting reduced dataset.

to_spectrum_dataset(*args, **kwargs)[source]#

Return a ~gammapy.datasets.SpectrumDataset from on_region.

Counts and background are summed in the on_region. Exposure is taken from the average exposure.

The energy dispersion kernel is obtained at the on_region center. Only regions with centers are supported.

The model is not exported to the ~gammapy.datasets.SpectrumDataset. It must be set after the dataset extraction.

Parameters
on_region : SkyRegion

The input ON region on which to extract the spectrum.

containment_correction : bool

Apply containment correction for point sources and circular ON regions.

name : str

Name of the new dataset.

Returns
dataset : SpectrumDataset

The resulting reduced dataset.
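Examples

A usage sketch, assuming the CTA-1DC test dataset; the ON region is illustrative only:

>>> import astropy.units as u
>>> from astropy.coordinates import SkyCoord
>>> from regions import CircleSkyRegion
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> on_region = CircleSkyRegion(SkyCoord(0, 0, unit="deg", frame="galactic"), radius=0.2 * u.deg)
>>> spectrum = dataset.to_spectrum_dataset(on_region=on_region, containment_correction=False, name="spectrum")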

write(filename, overwrite=False)#

Write Dataset to file.

A MapDataset is serialised using the GADF format with a WCS geometry. A SpectrumDataset uses the same format, with a RegionGeom.

Parameters
filename : str

Filename to write to.

overwrite : bool

Overwrite file if it exists.
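Examples

A sketch of a write/read round trip, where `dataset` is a placeholder for an existing SpectrumDataset and the output path is assumed writable (the filename is illustrative only):

>>> from gammapy.datasets import SpectrumDataset
>>> dataset.write("spectrum-dataset.fits.gz", overwrite=True)
>>> restored = SpectrumDataset.read("spectrum-dataset.fits.gz", name="restored")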