AnalysisConfig#

class gammapy.analysis.AnalysisConfig(*, general: gammapy.analysis.config.GeneralConfig = GeneralConfig(log=LogConfig(level='info', filename=None, filemode=None, format=None, datefmt=None), outdir='.', n_jobs=1, datasets_file=None, models_file=None), observations: gammapy.analysis.config.ObservationsConfig = ObservationsConfig(datastore=PosixPath('/home/runner/work/gammapy-docs/gammapy-docs/gammapy-datasets/dev/hess-dl3-dr1'), obs_ids=[], obs_file=None, obs_cone=SpatialCircleConfig(frame=None, lon=None, lat=None, radius=None), obs_time=TimeRangeConfig(start=None, stop=None), required_irf=['aeff', 'edisp', 'psf', 'bkg']), datasets: gammapy.analysis.config.DatasetsConfig = DatasetsConfig(type='1d', stack=True, geom=GeomConfig(wcs=WcsConfig(skydir=SkyCoordConfig(frame=None, lon=None, lat=None), binsize=<Angle 0.02 deg>, width=WidthConfig(width=<Angle 5. deg>, height=<Angle 5. deg>), binsize_irf=<Angle 0.2 deg>), selection=SelectionConfig(offset_max=<Angle 2.5 deg>), axes=EnergyAxesConfig(energy=EnergyAxisConfig(min=<Quantity 1. TeV>, max=<Quantity 10. TeV>, nbins=5), energy_true=EnergyAxisConfig(min=<Quantity 0.5 TeV>, max=<Quantity 20. TeV>, nbins=16))), map_selection=['counts', 'exposure', 'background', 'psf', 'edisp'], background=BackgroundConfig(method=None, exclusion=None, parameters={}), safe_mask=SafeMaskConfig(methods=['aeff-default'], parameters={}), on_region=SpatialCircleConfig(frame=None, lon=None, lat=None, radius=None), containment_correction=True), fit: gammapy.analysis.config.FitConfig = FitConfig(fit_range=EnergyRangeConfig(min=None, max=None)), flux_points: gammapy.analysis.config.FluxPointsConfig = FluxPointsConfig(energy=EnergyAxisConfig(min=None, max=None, nbins=None), source='source', parameters={'selection_optional': 'all'}), excess_map: gammapy.analysis.config.ExcessMapConfig = ExcessMapConfig(correlation_radius=<Angle 0.1 deg>, parameters={}, energy_edges=EnergyAxisConfig(min=None, max=None, nbins=None)), light_curve: gammapy.analysis.config.LightCurveConfig = LightCurveConfig(time_intervals=TimeRangeConfig(start=None, stop=None), energy_edges=EnergyAxisConfig(min=None, max=None, nbins=None), source='source', parameters={'selection_optional': 'all'}))[source]#

Bases: gammapy.analysis.config.GammapyBaseConfig

Gammapy analysis configuration.

Create a new model by parsing and validating input data from keyword arguments.

Raises pydantic_core.ValidationError if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.
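
For example, a configuration can be built with defaults or from keyword arguments; the nested values below are illustrative, and the quantity strings ("1 TeV") assume the same string-to-Quantity parsing used by the YAML interface:

```python
from gammapy.analysis import AnalysisConfig

# All fields fall back to the defaults shown in the signature above
config = AnalysisConfig()

# Nested dicts are parsed and validated into the corresponding sub-configs
config = AnalysisConfig(
    datasets={"type": "3d", "stack": False},
    fit={"fit_range": {"min": "1 TeV", "max": "10 TeV"}},
)
```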

Attributes Summary

model_computed_fields

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_extra

Get extra fields set during validation.

model_fields

Metadata about the fields defined on the model, mapping of field names to pydantic.fields.FieldInfo.

model_fields_set

Returns the set of fields that have been explicitly set on this model instance.

Methods Summary

construct([_fields_set])

copy(*[, include, exclude, update, deep])

Returns a copy of the model.

dict(*[, include, exclude, by_alias, ...])

from_orm(obj)

from_yaml(config_str)

Create from YAML string.

json(*[, include, exclude, by_alias, ...])

model_construct([_fields_set])

Creates a new instance of the Model class with validated data.

model_copy(*[, update, deep])

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#model_copy

model_dump(*[, mode, include, exclude, ...])

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump

model_dump_json(*[, indent, include, ...])

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump_json

model_json_schema([by_alias, ref_template, ...])

Generates a JSON schema for a model class.

model_parametrized_name(params)

Compute the class name for parametrizations of generic classes.

model_post_init(_BaseModel__context)

Override this method to perform additional initialization after __init__ and model_construct.

model_rebuild(*[, force, raise_errors, ...])

Try to rebuild the pydantic-core schema for the model.

model_validate(obj, *[, strict, ...])

Validate a pydantic model instance.

model_validate_json(json_data, *[, strict, ...])

Usage docs: https://docs.pydantic.dev/2.6/concepts/json/#json-parsing

model_validate_strings(obj, *[, strict, context])

Validate an object containing string data against the Pydantic model.

parse_file(path, *[, content_type, ...])

parse_obj(obj)

parse_raw(b, *[, content_type, encoding, ...])

read(path)

Read from YAML file.

schema([by_alias, ref_template])

schema_json(*[, by_alias, ref_template])

set_logging()

Set logging config.

to_yaml()

Convert to YAML string.

update([config])

Update config with provided settings.

update_forward_refs(**localns)

validate(value)

write(path[, overwrite])

Write to YAML file.

Attributes Documentation

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'json_encoders': {<class 'astropy.units.quantity.Quantity'>: <function GammapyBaseConfig.<lambda>>}, 'use_enum_values': True, 'validate_assignment': True, 'validate_default': True}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_extra#

Get extra fields set during validation.

Returns:

A dictionary of extra fields, or None if config.extra is not set to "allow".

model_fields: ClassVar[dict[str, FieldInfo]] = {'datasets': FieldInfo(annotation=DatasetsConfig, required=False, default=DatasetsConfig(type='1d', stack=True, geom=GeomConfig(wcs=WcsConfig(skydir=SkyCoordConfig(frame=None, lon=None, lat=None), binsize=<Angle 0.02 deg>, width=WidthConfig(width=<Angle 5. deg>, height=<Angle 5. deg>), binsize_irf=<Angle 0.2 deg>), selection=SelectionConfig(offset_max=<Angle 2.5 deg>), axes=EnergyAxesConfig(energy=EnergyAxisConfig(min=<Quantity 1. TeV>, max=<Quantity 10. TeV>, nbins=5), energy_true=EnergyAxisConfig(min=<Quantity 0.5 TeV>, max=<Quantity 20. TeV>, nbins=16))), map_selection=['counts', 'exposure', 'background', 'psf', 'edisp'], background=BackgroundConfig(method=None, exclusion=None, parameters={}), safe_mask=SafeMaskConfig(methods=['aeff-default'], parameters={}), on_region=SpatialCircleConfig(frame=None, lon=None, lat=None, radius=None), containment_correction=True)), 'excess_map': FieldInfo(annotation=ExcessMapConfig, required=False, default=ExcessMapConfig(correlation_radius=<Angle 0.1 deg>, parameters={}, energy_edges=EnergyAxisConfig(min=None, max=None, nbins=None))), 'fit': FieldInfo(annotation=FitConfig, required=False, default=FitConfig(fit_range=EnergyRangeConfig(min=None, max=None))), 'flux_points': FieldInfo(annotation=FluxPointsConfig, required=False, default=FluxPointsConfig(energy=EnergyAxisConfig(min=None, max=None, nbins=None), source='source', parameters={'selection_optional': 'all'})), 'general': FieldInfo(annotation=GeneralConfig, required=False, default=GeneralConfig(log=LogConfig(level='info', filename=None, filemode=None, format=None, datefmt=None), outdir='.', n_jobs=1, datasets_file=None, models_file=None)), 'light_curve': FieldInfo(annotation=LightCurveConfig, required=False, default=LightCurveConfig(time_intervals=TimeRangeConfig(start=None, stop=None), energy_edges=EnergyAxisConfig(min=None, max=None, nbins=None), source='source', parameters={'selection_optional': 'all'})), 'observations': FieldInfo(annotation=ObservationsConfig, required=False, default=ObservationsConfig(datastore=PosixPath('/home/runner/work/gammapy-docs/gammapy-docs/gammapy-datasets/dev/hess-dl3-dr1'), obs_ids=[], obs_file=None, obs_cone=SpatialCircleConfig(frame=None, lon=None, lat=None, radius=None), obs_time=TimeRangeConfig(start=None, stop=None), required_irf=['aeff', 'edisp', 'psf', 'bkg']))}#

Metadata about the fields defined on the model, mapping of field names to pydantic.fields.FieldInfo.

This replaces Model.__fields__ from Pydantic V1.

model_fields_set#

Returns the set of fields that have been explicitly set on this model instance.

Returns:

A set of strings representing the fields that have been set, i.e. that were not filled from defaults.

Methods Documentation

classmethod construct(_fields_set: set[str] | None = None, **values: Any) → Model#
copy(*, include: AbstractSetIntStr | MappingIntStrAny | None = None, exclude: AbstractSetIntStr | MappingIntStrAny | None = None, update: Dict[str, Any] | None = None, deep: bool = False) → Model#

Returns a copy of the model.

Warning: this method is now deprecated; use model_copy instead.

If you need include or exclude, use:

```python
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```

Args:

include: Optional set or mapping specifying which fields to include in the copied model.
exclude: Optional set or mapping specifying which fields to exclude from the copied model.
update: Optional dictionary of field-value pairs to override field values in the copied model.
deep: If True, the values of fields that are Pydantic models will be deep-copied.

Returns:

A copy of the model with included, excluded and updated fields as specified.

dict(*, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → Dict[str, Any]#
classmethod from_orm(obj: Any) → Model#
classmethod from_yaml(config_str)[source]#

Create from YAML string.
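
For example (a sketch; the keys mirror the field names shown in the signature above):

```python
from gammapy.analysis import AnalysisConfig

config_str = """
datasets:
    type: 1d
    stack: true
flux_points:
    source: crab
"""
config = AnalysisConfig.from_yaml(config_str)
print(config.datasets.type)  # "1d"
```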

json(*, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Callable[[Any], Any] | None = PydanticUndefined, models_as_dict: bool = PydanticUndefined, **dumps_kwargs: Any) → str#
classmethod model_construct(_fields_set: set[str] | None = None, **values: Any) → Model#

Creates a new instance of the Model class with validated data.

Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

Args:

_fields_set: The set of field names accepted for the Model instance.
values: Trusted or pre-validated data dictionary.

Returns:

A new instance of the Model class with validated data.

model_copy(*, update: dict[str, Any] | None = None, deep: bool = False) → Model#

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#model_copy

Returns a copy of the model.

Args:

update: Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.
deep: Set to True to make a deep copy of the model.

Returns:

New model instance.
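
For example (generic Pydantic behavior, not specific to AnalysisConfig):

```python
config = AnalysisConfig()

# deep=True returns an independent copy; mutating it leaves the original intact
copied = config.model_copy(deep=True)
copied.general.n_jobs = 4
assert config.general.n_jobs == 1
```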

model_dump(*, mode: Literal['json', 'python'] | str = 'python', include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) → dict[str, Any]#

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Args:

mode: The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects.
include: A list of fields to include in the output.
exclude: A list of fields to exclude from the output.
by_alias: Whether to use the field's alias in the dictionary key if defined.
exclude_unset: Whether to exclude fields that have not been explicitly set.
exclude_defaults: Whether to exclude fields that are set to their default value.
exclude_none: Whether to exclude fields that have a value of None.
round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings: Whether to log warnings when invalid fields are encountered.

Returns:

A dictionary representation of the model.
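
A short sketch of both field-selection modes (standard Pydantic behavior):

```python
config = AnalysisConfig()

# Nested dict of all fields; in 'python' mode values such as astropy
# Angle/Quantity objects are kept as Python objects
data = config.model_dump()
print(data["datasets"]["geom"]["wcs"]["binsize"])

# Only explicitly set fields are kept; nothing was set here, so it is empty
assert config.model_dump(exclude_unset=True) == {}
```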

model_dump_json(*, indent: int | None = None, include: IncEx = None, exclude: IncEx = None, by_alias: bool = False, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: bool = True) → str#

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump_json

Generates a JSON representation of the model using Pydantic’s to_json method.

Args:

indent: Indentation to use in the JSON output. If None is passed, the output will be compact.
include: Field(s) to include in the JSON output.
exclude: Field(s) to exclude from the JSON output.
by_alias: Whether to serialize using field aliases.
exclude_unset: Whether to exclude fields that have not been explicitly set.
exclude_defaults: Whether to exclude fields that are set to their default value.
exclude_none: Whether to exclude fields that have a value of None.
round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings: Whether to log warnings when invalid fields are encountered.

Returns:

A JSON string representation of the model.

classmethod model_json_schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}', schema_generator: type[pydantic.json_schema.GenerateJsonSchema] = <class 'pydantic.json_schema.GenerateJsonSchema'>, mode: typing_extensions.Literal[validation, serialization] = 'validation') → dict[str, Any]#

Generates a JSON schema for a model class.

Args:

by_alias: Whether to use attribute aliases or not.
ref_template: The reference template.
schema_generator: To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications.
mode: The mode in which to generate the schema.

Returns:

The JSON schema for the given model class.

classmethod model_parametrized_name(params: tuple[type[Any], ...]) → str#

Compute the class name for parametrizations of generic classes.

This method can be overridden to achieve a custom naming scheme for generic BaseModels.

Args:

params: Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.

Returns:

String representing the new class where params are passed to cls as type variables.

Raises:

TypeError: Raised when trying to generate concrete names for non-generic models.

model_post_init(_BaseModel__context: Any) → None#

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

classmethod model_rebuild(*, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: dict[str, Any] | None = None) → bool | None#

Try to rebuild the pydantic-core schema for the model.

This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.

Args:

force: Whether to force the rebuilding of the model schema, defaults to False.
raise_errors: Whether to raise errors, defaults to True.
_parent_namespace_depth: The depth level of the parent namespace, defaults to 2.
_types_namespace: The types namespace, defaults to None.

Returns:

Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding was required, returns True if rebuilding was successful, otherwise False.

classmethod model_validate(obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: dict[str, Any] | None = None) → Model#

Validate a pydantic model instance.

Args:

obj: The object to validate.
strict: Whether to enforce types strictly.
from_attributes: Whether to extract data from object attributes.
context: Additional context to pass to the validator.

Raises:

ValidationError: If the object could not be validated.

Returns:

The validated model instance.
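
For example (a sketch; note that model_config above sets extra='forbid', so unknown keys fail validation):

```python
from pydantic import ValidationError

# A plain dict is validated into a full AnalysisConfig instance
config = AnalysisConfig.model_validate({"datasets": {"type": "3d"}})

# Unknown keys are rejected because extra fields are forbidden
try:
    AnalysisConfig.model_validate({"not_a_field": 1})
except ValidationError as err:
    print(err)
```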

classmethod model_validate_json(json_data: str | bytes | bytearray, *, strict: bool | None = None, context: dict[str, Any] | None = None) → Model#

Usage docs: https://docs.pydantic.dev/2.6/concepts/json/#json-parsing

Validate the given JSON data against the Pydantic model.

Args:

json_data: The JSON data to validate.
strict: Whether to enforce types strictly.
context: Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

Raises:

ValueError: If json_data is not a JSON string.

classmethod model_validate_strings(obj: Any, *, strict: bool | None = None, context: dict[str, Any] | None = None) → Model#

Validate an object containing string data against the Pydantic model.

Args:

obj: The object containing string data to validate.
strict: Whether to enforce types strictly.
context: Extra variables to pass to the validator.

Returns:

The validated Pydantic model.

classmethod parse_file(path: str | Path, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) → Model#
classmethod parse_obj(obj: Any) → Model#
classmethod parse_raw(b: str | bytes, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) → Model#
classmethod read(path)[source]#

Read from YAML file.
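
For example (a sketch; "config.yaml" is an illustrative path to a file with the same structure as produced by to_yaml or write):

```python
from gammapy.analysis import AnalysisConfig

config = AnalysisConfig.read("config.yaml")
```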

classmethod schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}') → Dict[str, Any]#
classmethod schema_json(*, by_alias: bool = True, ref_template: str = '#/$defs/{model}', **dumps_kwargs: Any) → str#
set_logging()[source]#

Set logging config.

Calls logging.basicConfig, i.e. adjusts global logging state.
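
A sketch of the intended use, assuming the level is picked up from general.log:

```python
config = AnalysisConfig()
config.general.log.level = "debug"  # validated on assignment
config.set_logging()  # applies the settings via logging.basicConfig
```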

to_yaml()[source]#

Convert to YAML string.
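
For example:

```python
config = AnalysisConfig()
print(config.to_yaml())  # the full configuration serialized as YAML text
```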

update(config=None)[source]#

Update config with provided settings.

Parameters

config : str or AnalysisConfig object, optional

Configuration settings provided in dict() syntax. Default is None.
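
A sketch, assuming the current Gammapy behavior that a YAML/dict-style string is accepted and a new, merged AnalysisConfig is returned rather than self being modified in place:

```python
config = AnalysisConfig()

# Merge extra settings on top of the current values
config_new = config.update("observations: {obs_ids: [23523]}")
```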

classmethod update_forward_refs(**localns: Any) → None#
classmethod validate(value: Any) → Model#
write(path, overwrite=False)[source]#

Write to YAML file.
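
For example (the filename is illustrative):

```python
config = AnalysisConfig()
config.write("my-config.yaml", overwrite=True)

# Round trip: read the file back into an equivalent configuration
config_again = AnalysisConfig.read("my-config.yaml")
```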