Fit#

class gammapy.modeling.Fit(backend='minuit', optimize_opts=None, covariance_opts=None, confidence_opts=None, store_trace=False)[source]#

Bases: object

Fit class.

The fit class provides a uniform interface to multiple fitting backends. Currently available: “minuit”, “sherpa” and “scipy”.

Parameters:
backend : {"minuit", "scipy", "sherpa"}

Global backend used for fitting. Default is “minuit”.

optimize_opts : dict

Keyword arguments passed to the optimizer. For the "minuit" backend see https://iminuit.readthedocs.io/en/stable/reference.html#iminuit.Minuit for a detailed description of the available options. If there is an entry ‘migrad_opts’, those options will be passed to iminuit.Minuit.migrad().

For the "sherpa" backend you can from the options:

  • "simplex"

  • "levmar"

  • "moncar"

  • "gridsearch"

Those methods are described and compared in detail at http://cxc.cfa.harvard.edu/sherpa/methods/index.html, where the available options of each optimization method are also documented.

For the "scipy" backend the available options are described in detail here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html

covariance_opts : dict

Covariance options passed to the given backend.

confidence_opts : dict

Extra arguments passed to the backend. E.g. iminuit.Minuit.minos supports a maxcall option. For the "scipy" backend, confidence_opts are forwarded to brentq. If the confidence estimation fails, the bracketing interval can be adapted by modifying the upper bound b of the interval.

store_trace : bool

Whether to store the trace of the fit.

Methods Summary

confidence(datasets, parameter[, sigma, ...])

Estimate confidence interval.

covariance(datasets[, optimize_result])

Estimate the covariance matrix.

optimize(datasets)

Run the optimization.

run(datasets)

Run all fitting steps.

stat_contour(datasets, x, y[, numpoints, sigma])

Compute stat contour.

stat_profile(datasets, parameter[, reoptimize])

Compute fit statistic profile.

stat_surface(datasets, x, y[, reoptimize])

Compute fit statistic surface.

Methods Documentation

confidence(datasets, parameter, sigma=1, reoptimize=True)[source]#

Estimate confidence interval.

Extra kwargs are passed to the backend. E.g. iminuit.Minuit.minos supports a maxcall option.

For the scipy backend kwargs are forwarded to brentq. If the confidence estimation fails, the bracketing interval can be adapted by modifying the upper bound of the interval (b) value.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

parameter : Parameter

Parameter of interest.

sigma : float, optional

Number of standard deviations for the confidence level. Default is 1.

reoptimize : bool, optional

Re-optimize the other parameters when computing the confidence region. Default is True.

Returns:
result : dict

Dictionary with keys "errp", "errn", "success" and "nfev".

covariance(datasets, optimize_result=None)[source]#

Estimate the covariance matrix.

Assumes that the model parameters are already optimised.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

optimize_result : OptimizeResult, optional

Optimization result. Can optionally be used to pass the state of the iminuit.Minuit object to the covariance estimation, which might save computation time in certain cases. Default is None.

Returns:
result : CovarianceResult

Results.

optimize(datasets)[source]#

Run the optimization.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

Returns:
optimize_result : OptimizeResult

Optimization result.

run(datasets)[source]#

Run all fitting steps.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

Returns:
fit_result : FitResult

Fit result.

stat_contour(datasets, x, y, numpoints=10, sigma=1)[source]#

Compute stat contour.

Calls iminuit.Minuit.mncontour.

This is a contouring algorithm for a 2D function which is not simply the fit statistic function. That 2D function is given at each point (par_1, par_2) by re-optimising all other free parameters, and taking the fit statistic at that point.

This method is very computationally intensive and slow.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

x, y : Parameter

Parameters of interest.

numpoints : int, optional

Number of contour points. Default is 10.

sigma : float, optional

Number of standard deviations for the confidence level. Default is 1.

Returns:
result : dict

Dictionary containing the parameter values defining the contour, with the boolean flag “success” and the information objects from mncontour.

Examples

>>> from gammapy.datasets import Datasets, SpectrumDatasetOnOff
>>> from gammapy.modeling.models import SkyModel, LogParabolaSpectralModel
>>> from gammapy.modeling import Fit
>>> datasets = Datasets()
>>> for obs_id in [23523, 23526]:
...     dataset = SpectrumDatasetOnOff.read(
...         f"$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs{obs_id}.fits"
...     )
...     datasets.append(dataset)
>>> datasets = datasets.stack_reduce(name="HESS")
>>> model = SkyModel(spectral_model=LogParabolaSpectralModel(), name="crab")
>>> datasets.models = model
>>> fit = Fit(backend='minuit')
>>> optimize = fit.optimize(datasets)
>>> stat_contour = fit.stat_contour(
...     datasets=datasets,
...     x=model.spectral_model.alpha,
...     y=model.spectral_model.amplitude,
... )
stat_profile(datasets, parameter, reoptimize=False)[source]#

Compute fit statistic profile.

The method used is to vary one parameter, keeping all others fixed. So this is taking a “slice” or “scan” of the fit statistic.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

parameter : Parameter

Parameter of interest. The specification for the scan, such as bounds and number of values, is taken from the parameter object.

reoptimize : bool, optional

Re-optimize the other parameters when computing the statistic profile. Default is False.

Returns:
results : dict

Dictionary with keys "parameter_name_scan", "stat_scan" and "fit_results". The latter is an empty list if reoptimize is False.

Notes

The progress bar can be displayed for this function.

Examples

>>> from gammapy.datasets import Datasets, SpectrumDatasetOnOff
>>> from gammapy.modeling.models import SkyModel, LogParabolaSpectralModel
>>> from gammapy.modeling import Fit
>>> datasets = Datasets()
>>> for obs_id in [23523, 23526]:
...     dataset = SpectrumDatasetOnOff.read(
...         f"$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs{obs_id}.fits"
...     )
...     datasets.append(dataset)
>>> datasets = datasets.stack_reduce(name="HESS")
>>> model = SkyModel(spectral_model=LogParabolaSpectralModel(), name="crab")
>>> datasets.models = model
>>> fit = Fit()
>>> result = fit.run(datasets)
>>> parameter = datasets.models.parameters['amplitude']
>>> stat_profile = fit.stat_profile(datasets=datasets, parameter=parameter)
stat_surface(datasets, x, y, reoptimize=False)[source]#

Compute fit statistic surface.

The method used is to vary two parameters, keeping all others fixed. So this is taking a “slice” or “scan” of the fit statistic.

Caveat: this method can be very computationally intensive and slow.

See also: Fit.stat_contour.

Parameters:
datasets : Datasets or list of Dataset

Datasets to optimize.

x, y : Parameter

Parameters of interest.

reoptimize : bool, optional

Re-optimize the other parameters when computing the statistic surface. Default is False.

Returns:
results : dict

Dictionary with keys "x_values", "y_values", "stat" and "fit_results". The latter is an empty list if reoptimize is False.

Notes

The progress bar can be displayed for this function.

Examples

>>> from gammapy.datasets import Datasets, SpectrumDatasetOnOff
>>> from gammapy.modeling.models import SkyModel, LogParabolaSpectralModel
>>> from gammapy.modeling import Fit
>>> import numpy as np
>>> datasets = Datasets()
>>> for obs_id in [23523, 23526]:
...     dataset = SpectrumDatasetOnOff.read(
...         f"$GAMMAPY_DATA/joint-crab/spectra/hess/pha_obs{obs_id}.fits"
...     )
...     datasets.append(dataset)
>>> datasets = datasets.stack_reduce(name="HESS")
>>> model = SkyModel(spectral_model=LogParabolaSpectralModel(), name="crab")
>>> datasets.models = model
>>> par_alpha = datasets.models.parameters["alpha"]
>>> par_beta = datasets.models.parameters["beta"]
>>> par_alpha.scan_values = np.linspace(1.55, 2.7, 20)
>>> par_beta.scan_values = np.linspace(-0.05, 0.55, 20)
>>> fit = Fit()
>>> stat_surface = fit.stat_surface(
...     datasets=datasets,
...     x=par_alpha,
...     y=par_beta,
...     reoptimize=False,
... )