SpectrumDataset#
- class gammapy.datasets.SpectrumDataset(models=None, counts=None, exposure=None, background=None, psf=None, edisp=None, mask_safe=None, mask_fit=None, gti=None, meta_table=None, name=None)[source]#
Bases: gammapy.datasets.spectrum.PlotMixin, gammapy.datasets.map.MapDataset
Main dataset for spectrum fitting (1D analysis). It bundles together binned counts, background and IRFs into RegionNDMap objects (maps with only one spatial bin). A safe mask and a fit mask can be added to exclude bins during the analysis. If models are assigned to it, it can compute the predicted number of counts and the statistic function, here the Cash statistic (see cash). For more information see Datasets (DL4).
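A minimal sketch of how such a dataset is typically created from scratch; the region string and energy binning below are illustrative assumptions, not values taken from this page:
>>> from gammapy.datasets import SpectrumDataset
>>> from gammapy.maps import MapAxis, RegionGeom
>>> # Reconstructed and true energy axes (illustrative binning)
>>> energy_axis = MapAxis.from_energy_bounds("1 TeV", "10 TeV", nbin=5)
>>> energy_axis_true = MapAxis.from_energy_bounds("0.5 TeV", "20 TeV", nbin=10, name="energy_true")
>>> # Single-region geometry centred on the Crab nebula (assumed position and radius)
>>> geom = RegionGeom.create("icrs;circle(83.63, 22.01, 0.1)", axes=[energy_axis])
>>> dataset = SpectrumDataset.create(geom=geom, energy_axis_true=energy_axis_true, name="my-dataset")
>>> print(dataset)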
Attributes Summary
- background: A lazy FITS data descriptor.
- counts: A lazy FITS data descriptor.
- data_shape: Shape of the counts or background data (tuple).
- edisp: A lazy FITS data descriptor.
- energy_range: Energy range maps defined by the mask_safe and mask_fit.
- energy_range_fit: Energy range maps defined by the mask_fit only.
- energy_range_safe: Energy range maps defined by the mask_safe only.
- energy_range_total: Largest energy range among all pixels, defined by mask_safe and mask_fit.
- evaluators: Model evaluators.
- excess: Observed excess: counts - background.
- exposure: A lazy FITS data descriptor.
- geoms: Map geometries.
- mask: Combined fit and safe mask.
- mask_fit: A lazy FITS data descriptor.
- mask_fit_image: Reduced fit mask.
- mask_image: Reduced mask.
- mask_safe: A lazy FITS data descriptor.
- mask_safe_edisp: Safe mask for edisp maps.
- mask_safe_image: Reduced safe mask.
- mask_safe_psf: Safe mask for psf maps.
- models: Models set on the dataset (Models).
- psf: A lazy FITS data descriptor.
Methods Summary
- copy([name]): A deep copy.
- create(geom[, energy_axis_true, migra_axis, ...]): Create a MapDataset object with zero filled maps.
- cutout(*args, **kwargs): Cutout map dataset.
- downsample(factor[, axis_name, name]): Downsample map dataset.
- fake([random_state]): Simulate fake counts for the current model and reduced IRFs.
- from_dict(data[, lazy, cache]): Create from dicts and models list generated from YAML serialization.
- from_geoms(geom[, geom_exposure, geom_psf, ...]): Create a MapDataset object with zero filled maps according to the specified geometries.
- from_hdulist(hdulist[, name, lazy, format]): Create map dataset from list of HDUs.
- info_dict([in_safe_data_range]): Info dict with summary statistics, summed over energy.
- npred(): Total predicted source and background counts.
- npred_background(): Predicted background counts.
- npred_signal([model_names, stack]): Model predicted signal counts.
- pad(pad_width[, mode, name]): Pad the spatial dimensions of the dataset.
- peek([figsize]): Quick-look summary plots.
- plot_counts([ax, kwargs_counts, ...]): Plot counts and background.
- plot_excess([ax, kwargs_excess, ...]): Plot excess and predicted signal.
- plot_fit([ax_spectrum, ax_residuals, ...]): Plot spectrum and residuals in two panels.
- plot_masks([ax, kwargs_fit, kwargs_safe]): Plot safe mask and fit mask.
- plot_residuals([ax_spatial, ax_spectral, ...]): Plot spatial and spectral residuals in two panels.
- plot_residuals_spatial(*args, **kwargs): Plot spatial residuals.
- plot_residuals_spectral([ax, method, region]): Plot spectral residuals.
- read(filename[, name, lazy, cache, format]): Read a dataset from file.
- resample_energy_axis(energy_axis[, name]): Resample MapDataset over new reco energy axis.
- reset_data_cache(): Reset data cache to free memory space.
- residuals([method]): Compute residuals map.
- slice_by_energy([energy_min, energy_max, name]): Select and slice datasets in energy range.
- slice_by_idx(slices[, name]): Slice sub dataset.
- stack(other[, nan_to_num]): Stack another dataset in place.
- stat_array(): Statistic function value per bin given the current model parameters.
- stat_sum(): Total statistic function value given the current model parameters.
- to_dict(): Convert to dict for YAML serialization.
- to_hdulist(): Convert map dataset to list of HDUs.
- to_image([name]): Create images by summing over the reconstructed energy axis.
- to_masked([name, nan_to_num]): Return masked dataset.
- to_region_map_dataset(region[, name]): Integrate the map dataset in a given region.
- to_spectrum_dataset(*args, **kwargs): Return a ~gammapy.datasets.SpectrumDataset from on_region.
- write(filename[, overwrite]): Write Dataset to file.
Attributes Documentation
- background#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- background_model#
- counts#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- data_shape#
Shape of the counts or background data (tuple).
- edisp#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- energy_range#
Energy range maps defined by the mask_safe and mask_fit.
- energy_range_fit#
Energy range maps defined by the mask_fit only.
- energy_range_safe#
Energy range maps defined by the mask_safe only.
- energy_range_total#
Largest energy range among all pixels, defined by mask_safe and mask_fit.
- evaluators#
Model evaluators.
- excess#
Observed excess: counts-background.
- exposure#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- geoms#
Map geometries.
- Returns
- geoms : dict
Dict of map geometries involved in the dataset.
- mask#
Combined fit and safe mask.
- mask_fit#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- mask_fit_image#
Reduced fit mask.
- mask_image#
Reduced mask.
- mask_safe#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- mask_safe_edisp#
Safe mask for edisp maps.
- mask_safe_image#
Reduced safe mask.
- mask_safe_psf#
Safe mask for psf maps.
- name#
- psf#
A lazy FITS data descriptor.
- Parameters
- cache : bool
Whether to cache the data.
- stat_type = 'cash'#
- tag = 'SpectrumDataset'#
Methods Documentation
- copy(name=None)#
A deep copy.
- Parameters
- name : str
Name of the copied dataset.
- Returns
- dataset : Dataset
Copied dataset.
- classmethod create(geom, energy_axis_true=None, migra_axis=None, rad_axis=None, binsz_irf=<Quantity 0.2 deg>, reference_time='2000-01-01', name=None, meta_table=None, reco_psf=False, **kwargs)#
Create a MapDataset object with zero filled maps.
- Parameters
- geom : WcsGeom
Reference target geometry in reco energy, used for counts and background maps.
- energy_axis_true : MapAxis
True energy axis used for IRF maps.
- migra_axis : MapAxis
If set, this provides the migration axis for the energy dispersion map. If not set, an EDispKernelMap is produced instead. Default is None.
- rad_axis : MapAxis
Rad axis for the PSF map.
- binsz_irf : float
IRF map pixel size in degrees.
- reference_time : Time
The reference time to use in the GTI definition.
- name : str
Name of the returned dataset.
- meta_table : Table
Table listing information on observations used to create the dataset. One line per observation for stacked datasets.
- reco_psf : bool
Use reconstructed energy for the PSF geometry. Default is False.
- Returns
- empty_maps : MapDataset
A MapDataset containing zero filled maps.
Examples
>>> from gammapy.datasets import MapDataset
>>> from gammapy.maps import WcsGeom, MapAxis
>>> energy_axis = MapAxis.from_energy_bounds(1.0, 10.0, 4, unit="TeV")
>>> energy_axis_true = MapAxis.from_energy_bounds(
...     0.5, 20, 10, unit="TeV", name="energy_true"
... )
>>> geom = WcsGeom.create(
...     skydir=(83.633, 22.014),
...     binsz=0.02,
...     width=(2, 2),
...     frame="icrs",
...     proj="CAR",
...     axes=[energy_axis],
... )
>>> empty = MapDataset.create(geom=geom, energy_axis_true=energy_axis_true, name="empty")
- cutout(*args, **kwargs)[source]#
Cutout map dataset.
- Parameters
- position : SkyCoord
Center position of the cutout region.
- width : tuple of Angle
Angular sizes of the region in (lon, lat) in that specific order. If only one value is passed, a square region is extracted.
- mode : {'trim', 'partial', 'strict'}
Mode option for Cutout2D, for details see Cutout2D.
- name : str
Name of the new dataset.
- Returns
- cutout : MapDataset
Cutout map dataset.
- downsample(factor, axis_name=None, name=None)#
Downsample map dataset.
The PSFMap and EDispKernelMap are not downsampled, except if a corresponding axis is given.
- Parameters
- factor : int
Downsampling factor.
- axis_name : str
Which non-spatial axis to downsample. By default only spatial axes are downsampled.
- name : str
Name of the downsampled dataset.
- Returns
- dataset : MapDataset or SpectrumDataset
Downsampled map dataset.
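A hedged illustration (not part of the original docstring) of downsampling the reconstructed energy axis of a map dataset; the factor is an assumption and must divide the number of bins along the chosen axis:
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> downsampled = dataset.downsample(factor=2, axis_name="energy", name="downsampled")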
- fake(random_state='random-seed')#
Simulate fake counts for the current model and reduced IRFs.
This method overwrites the counts defined on the dataset object.
- Parameters
- random_state : {int, 'random-seed', 'global-rng', RandomState}
Defines random number generator initialisation. Passed to get_random_state.
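An illustrative sketch, assuming `dataset` is a SpectrumDataset whose exposure, energy dispersion and background have already been filled (for example by SpectrumDatasetMaker):
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel(), name="source")
>>> dataset.fake(random_state=42)  # counts are overwritten by a Poisson realisation of npred()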
- classmethod from_dict(data, lazy=False, cache=True)#
Create from dicts and models list generated from YAML serialization.
- classmethod from_geoms(geom, geom_exposure=None, geom_psf=None, geom_edisp=None, reference_time='2000-01-01', name=None, **kwargs)#
Create a MapDataset object with zero filled maps according to the specified geometries.
- Parameters
- geom : Geom
Geometry for the counts and background maps.
- geom_exposure : Geom
Geometry for the exposure map.
- geom_psf : Geom
Geometry for the PSF map.
- geom_edisp : Geom
Geometry for the energy dispersion kernel map. If geom_edisp has a migra axis, this will create an EDispMap instead.
- reference_time : Time
The reference time to use in the GTI definition.
- name : str
Name of the returned dataset.
- Returns
- dataset : MapDataset or SpectrumDataset
A dataset containing zero filled maps.
- classmethod from_hdulist(hdulist, name=None, lazy=False, format='gadf')#
Create map dataset from list of HDUs.
- Parameters
- hdulist : HDUList
List of HDUs.
- name : str
Name of the new dataset.
- format : {"gadf"}
Format the hdulist is given in.
- Returns
- dataset : MapDataset
Map dataset.
- info_dict(in_safe_data_range=True)#
Info dict with summary statistics, summed over energy.
- Parameters
- in_safe_data_range : bool
Whether to sum only in the safe energy range.
- Returns
- info_dict : dict
Dictionary with summary info.
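For example (an illustrative sketch reusing the on-off dataset file quoted in the plotting examples below; the printed keys are assumptions about the dictionary content):
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> dataset = SpectrumDatasetOnOff.read("$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits")
>>> info = dataset.info_dict()
>>> print(info["counts"], info["livetime"])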
- npred()#
Total predicted source and background counts.
- Returns
- npred : Map
Total predicted counts.
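A short sketch (not from the original docstring) of evaluating the predicted counts once a model has been assigned; it reuses the dataset file from the plotting examples below:
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> dataset = SpectrumDatasetOnOff.read("$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits")
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> npred = dataset.npred()  # map of predicted source plus background counts
>>> print(npred.data.sum())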
- npred_background()#
Predicted background counts.
The predicted background counts depend on the parameters of the FoVBackgroundModel defined in the dataset.
- Returns
- npred_background : Map
Predicted counts from the background.
- npred_signal(model_names=None, stack=True)#
Model predicted signal counts.
If a list of model names is passed, predicted counts from these components are returned. If stack is set to True, a map of the sum of all the predicted counts is returned. If stack is set to False, a map with an additional axis representing the models is returned.
- Parameters
- model_names : list of str
List of names of the SkyModel components for which to compute the npred. If None, the predicted counts from all SkyModel components are computed.
- stack : bool
Whether to stack the npred maps upon each other.
- Returns
- npred_sig : gammapy.maps.Map
Map of the predicted signal counts.
- pad(pad_width, mode='constant', name=None)#
Pad the spatial dimensions of the dataset.
The padding only applies to counts, masks, background and exposure.
Counts, background and masks are padded with zeros, exposure is padded with edge value.
- Parameters
- pad_width : {sequence, array_like, int}
Number of pixels padded to the edges of each axis.
- name : str
Name of the padded dataset.
- Returns
- dataset : MapDataset
Padded map dataset.
- peek(figsize=(16, 4))#
Quick-look summary plots.
- Parameters
- figsize : tuple
Size of the figure.
- plot_counts(ax=None, kwargs_counts=None, kwargs_background=None, **kwargs)#
Plot counts and background.
- plot_excess(ax=None, kwargs_excess=None, kwargs_npred_signal=None, **kwargs)#
Plot excess and predicted signal.
The error bars are computed with a symmetric assumption on the excess.
- Parameters
- Returns
- ax : Axes
Axes object.
Examples
>>> # Creating a spectral dataset
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> p = PowerLawSpectralModel()
>>> dataset.models = SkyModel(spectral_model=p)
>>> # Plot the excess in blue and the npred in black dotted lines
>>> kwargs_excess = {"color": "blue", "markersize": 8, "marker": "s"}
>>> kwargs_npred_signal = {"color": "black", "ls": "--"}
>>> dataset.plot_excess(kwargs_excess=kwargs_excess, kwargs_npred_signal=kwargs_npred_signal)
- plot_fit(ax_spectrum=None, ax_residuals=None, kwargs_spectrum=None, kwargs_residuals=None)#
Plot spectrum and residuals in two panels.
Calls plot_excess and plot_residuals_spectral.
- Parameters
- ax_spectrum : Axes
Axes to plot spectrum on.
- ax_residuals : Axes
Axes to plot residuals on.
- kwargs_spectrum : dict
Keyword arguments passed to plot_excess.
- kwargs_residuals : dict
Keyword arguments passed to plot_residuals_spectral.
- Returns
- ax_spectrum, ax_residuals : Axes
Spectrum and residuals plots.
Examples
>>> # Creating a spectral dataset
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> p = PowerLawSpectralModel()
>>> dataset.models = SkyModel(spectral_model=p)
>>> # Optional configurations
>>> kwargs_excess = {"color": "blue", "markersize": 8, "marker": "s"}
>>> kwargs_npred_signal = {"color": "black", "ls": "--"}
>>> kwargs_spectrum = {"kwargs_excess": kwargs_excess, "kwargs_npred_signal": kwargs_npred_signal}
>>> kwargs_residuals = {"color": "black", "markersize": 4, "marker": "s"}
>>> dataset.plot_fit(kwargs_residuals=kwargs_residuals, kwargs_spectrum=kwargs_spectrum)
- plot_masks(ax=None, kwargs_fit=None, kwargs_safe=None)#
Plot safe mask and fit mask.
- Parameters
- ax : Axes
Axes to plot on.
- kwargs_fit : dict
Keyword arguments passed to plot_mask() for the fit mask.
- kwargs_safe : dict
Keyword arguments passed to plot_mask() for the safe mask.
- Returns
- ax : Axes
Axes object.
Examples
>>> # Reading a spectral dataset
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> filename = "$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits"
>>> dataset = SpectrumDatasetOnOff.read(filename)
>>> dataset.mask_fit = dataset.mask_safe.copy()
>>> dataset.mask_fit.data[40:46] = False  # setting dummy mask_fit for illustration
>>> # Plot the masks on top of the counts histogram
>>> kwargs_safe = {"color": "green", "alpha": 0.2}  # optional arguments to configure
>>> kwargs_fit = {"color": "pink", "alpha": 0.2}
>>> ax = dataset.plot_counts()
>>> dataset.plot_masks(ax=ax, kwargs_fit=kwargs_fit, kwargs_safe=kwargs_safe)
- plot_residuals(ax_spatial=None, ax_spectral=None, kwargs_spatial=None, kwargs_spectral=None)#
Plot spatial and spectral residuals in two panels.
Calls plot_residuals_spatial and plot_residuals_spectral. The spectral residuals are extracted from the provided region, and the normalization used for their computation can be controlled using the method parameter. The region outline is overlaid on the residuals map. If no region is passed, the residuals are computed for the entire map.
- Parameters
- ax_spatial : WCSAxes
Axes to plot spatial residuals on.
- ax_spectral : Axes
Axes to plot spectral residuals on.
- kwargs_spatial : dict
Keyword arguments passed to plot_residuals_spatial.
- kwargs_spectral : dict
Keyword arguments passed to plot_residuals_spectral. The region should be passed as a dictionary key.
- Returns
Examples
>>> from regions import CircleSkyRegion
>>> from astropy.coordinates import SkyCoord
>>> import astropy.units as u
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> reg = CircleSkyRegion(SkyCoord(0, 0, unit="deg", frame="galactic"), radius=1.0 * u.deg)
>>> kwargs_spatial = {"cmap": "RdBu_r", "vmin": -5, "vmax": 5, "add_cbar": True}
>>> kwargs_spectral = {"region": reg, "markerfacecolor": "blue", "markersize": 8, "marker": "s"}
>>> dataset.plot_residuals(kwargs_spatial=kwargs_spatial, kwargs_spectral=kwargs_spectral)
- plot_residuals_spatial(*args, **kwargs)[source]#
Plot spatial residuals.
The normalization used for the residuals computation can be controlled using the method parameter.
- Parameters
- ax : WCSAxes
Axes to plot on.
- method : {"diff", "diff/model", "diff/sqrt(model)"}
Normalization used to compute the residuals, see MapDataset.residuals.
- smooth_kernel : {"gauss", "box"}
Kernel shape.
- smooth_radius : `~astropy.units.Quantity`, str or float
Smoothing width given as quantity or float. If a float is given, it is interpreted as smoothing width in pixels.
- **kwargs : dict
Keyword arguments passed to imshow.
- Returns
- ax : WCSAxes
WCSAxes object.
Examples
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> kwargs = {"cmap": "RdBu_r", "vmin": -5, "vmax": 5, "add_cbar": True}
>>> dataset.plot_residuals_spatial(method="diff/sqrt(model)", **kwargs)
- plot_residuals_spectral(ax=None, method='diff', region=None, **kwargs)#
Plot spectral residuals.
The residuals are extracted from the provided region, and the normalization used for its computation can be controlled using the method parameter.
The error bars are computed using the uncertainty on the excess with a symmetric assumption.
- Parameters
- ax : Axes
Axes to plot on.
- method : {"diff", "diff/sqrt(model)"}
Normalization used to compute the residuals, see SpectrumDataset.residuals.
- region : `~regions.SkyRegion` (required)
Target sky region.
- **kwargs : dict
Keyword arguments passed to errorbar.
- Returns
- ax : Axes
Axes object.
Examples
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> kwargs = {"markerfacecolor": "blue", "markersize": 8, "marker": "s"}
>>> dataset.plot_residuals_spectral(method="diff/sqrt(model)", **kwargs)
- classmethod read(filename, name=None, lazy=False, cache=True, format='gadf')#
Read a dataset from file.
- Parameters
- filename : str
Filename to read from.
- name : str
Name of the new dataset.
- lazy : bool
Whether to lazy load data into memory.
- cache : bool
Whether to cache the data after loading.
- format : {"gadf"}
Format of the dataset file.
- Returns
- dataset : MapDataset
Map dataset.
- resample_energy_axis(energy_axis, name=None)#
Resample MapDataset over new reco energy axis.
Counts are summed taking into account safe mask.
- Parameters
- energy_axis : MapAxis
New reconstructed energy axis.
- name : str
Name of the new dataset.
- Returns
- dataset : MapDataset or SpectrumDataset
Resampled dataset.
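A brief sketch (not part of the original docstring); squashing the dataset's own energy axis guarantees that the new bin edges coincide with existing ones:
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> energy_axis = dataset.counts.geom.axes["energy"].squash()  # collapse to a single reco energy bin
>>> resampled = dataset.resample_energy_axis(energy_axis=energy_axis, name="single-bin")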
- reset_data_cache()#
Reset data cache to free memory space.
- residuals(method='diff', **kwargs)#
Compute residuals map.
- Parameters
- method : {"diff", "diff/model", "diff/sqrt(model)"}
Method used to compute the residuals. Available options are:
"diff" (default): data - model
"diff/model": (data - model) / model
"diff/sqrt(model)": (data - model) / sqrt(model)
- **kwargs : dict
Keyword arguments forwarded to Map.smooth().
- Returns
- residuals : gammapy.maps.Map
Residual map.
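An illustrative sketch (not from the original page), assuming a model has been assigned so that the predicted counts can be evaluated:
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> dataset = SpectrumDatasetOnOff.read("$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits")
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> residuals = dataset.residuals(method="diff/sqrt(model)")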
- slice_by_energy(energy_min=None, energy_max=None, name=None)#
Select and slice datasets in energy range.
- Parameters
- energy_min, energy_max : Quantity
Energy bounds to compute the flux point for.
- name : str
Name of the sliced dataset.
- Returns
- dataset : MapDataset
Sliced dataset.
Examples
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> sliced = dataset.slice_by_energy(energy_min="1 TeV", energy_max="5 TeV")
>>> sliced.data_shape
(3, 240, 320)
- slice_by_idx(slices, name=None)#
Slice sub dataset.
The slicing only applies to the maps that define the corresponding axes.
- Parameters
- slices : dict
Dict of axes names and integers or slice object pairs. Contains one element for each non-spatial dimension. For integer indexing the corresponding axis is dropped from the map. Axes not specified in the dict are kept unchanged.
- name : str
Name of the sliced dataset.
- Returns
- dataset : MapDataset or SpectrumDataset
Sliced dataset.
Examples
>>> from gammapy.datasets import MapDataset
>>> dataset = MapDataset.read("$GAMMAPY_DATA/cta-1dc-gc/cta-1dc-gc.fits.gz")
>>> slices = {"energy": slice(0, 3)}  # to get the first 3 energy slices
>>> sliced = dataset.slice_by_idx(slices)
>>> print(sliced.geoms["geom"])
WcsGeom
	axes       : ['lon', 'lat', 'energy']
	shape      : (320, 240, 3)
	ndim       : 3
	frame      : galactic
	projection : CAR
	center     : 0.0 deg, 0.0 deg
	width      : 8.0 deg x 6.0 deg
	wcs ref    : 0.0 deg, 0.0 deg
- stack(other, nan_to_num=True)#
Stack another dataset in place. The original dataset is modified.
Safe mask is applied to compute the stacked counts data. Counts outside each dataset safe mask are lost.
The stacking of 2 datasets is implemented as follows. Here, \(k\) denotes a bin in reconstructed energy and \(j = \{1, 2\}\) is the dataset number.
The mask_safe of each dataset is defined as:
\[\begin{split}\epsilon_{jk} = \left\{\begin{array}{cl} 1, & \mbox{if bin k is inside the thresholds}\\ 0, & \mbox{otherwise} \end{array}\right.\end{split}\]
Then the total counts and model background bkg are computed according to:
\[\begin{aligned}\overline{\mathrm{n_{on}}}_k &= \mathrm{n_{on}}_{1k} \cdot \epsilon_{1k} + \mathrm{n_{on}}_{2k} \cdot \epsilon_{2k}\\ \overline{bkg}_k &= bkg_{1k} \cdot \epsilon_{1k} + bkg_{2k} \cdot \epsilon_{2k}\end{aligned}\]
The stacked safe_mask is then:
\[\overline{\epsilon_k} = \epsilon_{1k} \;\mathrm{OR}\; \epsilon_{2k}\]
- Parameters
- other : `~gammapy.datasets.MapDataset` or `~gammapy.datasets.MapDatasetOnOff`
Map dataset to be stacked with this one. If other is an on-off dataset, alpha * counts_off is used as a background model.
- nan_to_num : bool
Non-finite values are replaced by zero if True (default).
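An illustrative sketch (not part of the original docstring) stacking two on-off spectra in place; the second filename is an assumed sibling of the file used in the plotting examples:
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> path = "$GAMMAPY_DATA/joint-crab/spectra/hess/"
>>> stacked = SpectrumDatasetOnOff.read(path + "pha_obs23523.fits")
>>> other = SpectrumDatasetOnOff.read(path + "pha_obs23526.fits")  # assumed second observation file
>>> stacked.stack(other)  # modifies `stacked` in place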
- stat_array()#
Statistic function value per bin given the current model parameters.
- stat_sum()#
Total statistic function value given the current model parameters.
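A minimal sketch (not from the original page) of evaluating the total fit statistic after assigning a model; the on-off dataset is used purely for convenience and returns its own statistic rather than cash:
>>> from gammapy.datasets import SpectrumDatasetOnOff
>>> from gammapy.modeling.models import PowerLawSpectralModel, SkyModel
>>> dataset = SpectrumDatasetOnOff.read("$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs23523.fits")
>>> dataset.models = SkyModel(spectral_model=PowerLawSpectralModel())
>>> print(dataset.stat_sum())  # scalar value of the fit statistic for the current parameters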
- to_dict()#
Convert to dict for YAML serialization.
- to_image(name=None)#
Create images by summing over the reconstructed energy axis.
- Parameters
- name : str
Name of the new dataset.
- Returns
- dataset : MapDataset or SpectrumDataset
Dataset integrated over non-spatial axes.
- to_masked(name=None, nan_to_num=True)#
Return masked dataset.
- Parameters
- name : str
Name of the masked dataset.
- nan_to_num : bool
Non-finite values are replaced by zero if True (default).
- Returns
- dataset : MapDataset or SpectrumDataset
Masked dataset.
- to_region_map_dataset(region, name=None)#
Integrate the map dataset in a given region.
Counts and background of the dataset are integrated in the given region, taking the safe mask into account. The exposure is averaged in the region again taking the safe mask into account. The PSF and energy dispersion kernel are taken at the center of the region.
- Parameters
- region : SkyRegion
Region from which to extract the spectrum.
- name : str
Name of the new dataset.
- Returns
- dataset : MapDataset
The resulting reduced dataset.
- to_spectrum_dataset(*args, **kwargs)[source]#
Return a ~gammapy.datasets.SpectrumDataset from on_region.
Counts and background are summed in the on_region. Exposure is taken from the average exposure.
The energy dispersion kernel is obtained at the on_region center. Only regions with centers are supported.
The model is not exported to the ~gammapy.datasets.SpectrumDataset. It must be set after the dataset extraction.
- Parameters
- on_region : SkyRegion
The input ON region on which to extract the spectrum.
- containment_correction : bool
Apply containment correction for point sources and circular ON regions.
- name : str
Name of the new dataset.
- Returns
- dataset : SpectrumDataset
The resulting reduced dataset.
- write(filename, overwrite=False)#
Write Dataset to file.
A MapDataset is serialised using the GADF format with a WCS geometry. A SpectrumDataset uses the same format, with a RegionGeom.
- Parameters
- filename : str
Filename to write to.
- overwrite : bool
Overwrite file if it exists.
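A minimal write/read round-trip sketch; the output filename and the geometry setup are illustrative assumptions:
>>> from gammapy.datasets import SpectrumDataset
>>> from gammapy.maps import MapAxis, RegionGeom
>>> energy_axis = MapAxis.from_energy_bounds("1 TeV", "10 TeV", nbin=5)
>>> geom = RegionGeom.create("icrs;circle(83.63, 22.01, 0.1)", axes=[energy_axis])
>>> dataset = SpectrumDataset.create(geom=geom, name="my-dataset")
>>> dataset.write("my-dataset.fits", overwrite=True)
>>> restored = SpectrumDataset.read("my-dataset.fits")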