{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "
\n", "\n", "**This is a fixed-text formatted version of a Jupyter notebook**\n", "\n", "- Try online [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/gammapy/gammapy-webpage/v0.13?urlpath=lab/tree/image_analysis.ipynb)\n", "- You can contribute with your own notebooks in this\n", "[GitHub repository](https://github.com/gammapy/gammapy/tree/master/tutorials).\n", "- **Source files:**\n", "[image_analysis.ipynb](../_static/notebooks/image_analysis.ipynb) |\n", "[image_analysis.py](../_static/notebooks/image_analysis.py)\n", "
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Fitting 2D images with Gammapy\n", "\n", "Gammapy does not have any special handling for 2D images, but treats them as a subset of maps. Thus, classical 2D image analysis can be done in 2 independent ways: \n", "\n", "1. Using the sherpa pacakge, see: [image_fitting_with_sherpa.ipynb](image_fitting_with_sherpa.ipynb),\n", "\n", "2. Within gammapy, by assuming 2D analysis to be a sub-set of the generalised `maps`. Thus, analysis should proceeexactly as demonstrated in [analysis_3d.ipynb](analysis_3d.ipynb), taking care of a few things that we mention in this tutorial\n", "\n", "We consider 2D `images` to be a special case of 3D `maps`, ie, maps with only one energy bin. This is a major difference while analysing in `sherpa`, where the `maps` must not contain any energy axis. In this tutorial, we do a classical image analysis using three example observations of the Galactic center region with CTA - i.e., study the source flux and morphology.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "import astropy.units as u\n", "from astropy.coordinates import SkyCoord\n", "from regions import CircleSkyRegion\n", "from gammapy.data import DataStore\n", "from gammapy.irf import make_mean_psf\n", "from gammapy.maps import Map, MapAxis, WcsGeom\n", "from gammapy.cube import MapMaker, PSFKernel, MapDataset\n", "from gammapy.cube.models import SkyModel, BackgroundModel\n", "from gammapy.spectrum.models import PowerLaw2\n", "from gammapy.image.models import SkyPointSource\n", "from gammapy.utils.fitting import Fit" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prepare modeling input data\n", "\n", "### The counts, exposure and the background maps\n", "This is the same drill - use `DataStore` object to access the CTA observations and retrieve a list of observations by passing the observations IDs to the `.get_observations()` method, then use `MapMaker` to make the maps." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Data store:\n", "HDU index table:\n", "BASE_DIR: /Users/adonath/data/gammapy-data/cta-1dc/index/gps\n", "Rows: 24\n", "OBS_ID: 110380 -- 111630\n", "HDU_TYPE: ['aeff', 'bkg', 'edisp', 'events', 'gti', 'psf']\n", "HDU_CLASS: ['aeff_2d', 'bkg_3d', 'edisp_2d', 'events', 'gti', 'psf_3gauss']\n", "\n", "Observation table:\n", "Observatory name: 'N/A'\n", "Number of observations: 4\n" ] } ], "source": [ "# Define which data to use and print some information\n", "data_store = DataStore.from_dir(\"$GAMMAPY_DATA/cta-1dc/index/gps/\")\n", "data_store.info()" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Total observation time: 2.0 h\n" ] } ], "source": [ "print(\n", " \"Total observation time: {}\".format(\n", " data_store.obs_table[\"ONTIME\"].quantity.sum().to(\"hour\")\n", " )\n", ")" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# Select some observations from these dataset by hand\n", "obs_ids = [110380, 111140, 111159]\n", "observations = data_store.get_observations(obs_ids)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "emin, emax = [0.1, 10] * u.TeV\n", "energy_axis = MapAxis.from_bounds(\n", " emin.value, emax.value, 10, unit=\"TeV\", name=\"energy\", interp=\"log\"\n", ")\n", "geom = WcsGeom.create(\n", " skydir=(0, 0),\n", " binsz=0.02,\n", " width=(10, 8),\n", " coordsys=\"GAL\",\n", " proj=\"CAR\",\n", " axes=[energy_axis],\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that even when doing a 2D analysis, it is better to use fine energy bins in the beginning and then sum them over. This is to ensure that the background shape can be approximated by a power law function in each energy bin. The `run_images()` can be used to compute maps in fine bins and then squash them to have one bin. This can be done by specifying `keep_dims = True`. This will compute a summed counts and background maps, and a spectral weighted exposure map." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING: Tried to get polar motions for times after IERS data is valid. Defaulting to polar motion from the 50-yr mean for those. 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a typical 2D analysis, using an energy dispersion usually does not make sense. A PSF map can be made as in the regular 3D case, taking care to weight it properly with the spectrum." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# mean PSF\n", "geom2d = maps2D[\"exposure\"].geom\n", "src_pos = SkyCoord(0, 0, unit=\"deg\", frame=\"galactic\")\n", "table_psf = make_mean_psf(observations, src_pos)\n", "\n", "table_psf_2d = table_psf.table_psf_in_energy_band(\n", "    (emin, emax), spectrum=spectrum\n", ")\n", "\n", "# PSF kernel used for the model convolution\n", "psf_kernel = PSFKernel.from_table_psf(\n", "    table_psf_2d, geom2d, max_radius=\"0.3 deg\"\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, the analysis proceeds as usual. Just take care to use the proper geometry in this case." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Define a mask" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "region = CircleSkyRegion(center=src_pos, radius=0.6 * u.deg)\n", "mask = geom2d.region_mask([region])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Modeling the source\n", "\n", "This is the important point to note in this analysis: modelling and fitting in `gammapy.maps` requires a combination of a spatial and a spectral model, so we have to use a dummy power law as the spectral model and fix its index to 2. Since we are interested only in the integral flux, we use the `PowerLaw2` model, which directly fits the integral flux." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "spatial_model = SkyPointSource(lon_0=\"0.01 deg\", lat_0=\"0.01 deg\")\n", "spectral_model = PowerLaw2(\n", "    emin=emin, emax=emax, index=2.0, amplitude=\"3e-12 cm-2 s-1\"\n", ")\n", "model = SkyModel(spatial_model=spatial_model, spectral_model=spectral_model)\n", "model.parameters[\"index\"].frozen = True" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Modeling the background\n", "\n", "The Gammapy fitting framework assumes the background to be an integrated model.\n", "Thus, we will define the background as a model and freeze its parameters for now."
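] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before wrapping the background map into a model, the cell below does a small, purely illustrative sanity check (it is not part of the original tutorial): it compares the total number of observed counts with the total number of predicted background counts, to give a feeling for the expected excess before the fit." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sanity check (not part of the original analysis): compare the\n", "# total observed counts with the total predicted background counts.\n", "n_counts = maps2D[\"counts\"].data.sum()\n", "n_bkg = maps2D[\"background\"].data.sum()\n", "print(\"Counts: {:.0f}\".format(n_counts))\n", "print(\"Predicted background: {:.0f}\".format(n_bkg))\n", "print(\"Excess: {:.0f}\".format(n_counts - n_bkg))"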
] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "background_model = BackgroundModel(maps2D[\"background\"])\n", "background_model.parameters[\"norm\"].frozen = True\n", "background_model.parameters[\"tilt\"].frozen = True" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "dataset = MapDataset(\n", " model=model,\n", " counts=maps2D[\"counts\"],\n", " exposure=maps2D[\"exposure\"],\n", " background_model=background_model,\n", " mask_fit=mask,\n", " psf=psf_kernel,\n", ")" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 1.05 s, sys: 10.4 ms, total: 1.06 s\n", "Wall time: 1.06 s\n" ] } ], "source": [ "%%time\n", "fit = Fit(dataset)\n", "result = fit.run()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To see the actual best-fit parameters, do a print on the result" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SkyModel\n", "\n", "Parameters: \n", "\n", "\t name value error unit min max frozen\n", "\t--------- ---------- ----- -------- ---------- --------- ------\n", "\t lon_0 -5.363e-02 nan deg -1.800e+02 1.800e+02 False\n", "\t lat_0 -5.057e-02 nan deg -9.000e+01 9.000e+01 False\n", "\t index 2.000e+00 nan nan nan True\n", "\tamplitude 4.292e-11 nan cm-2 s-1 nan nan False\n", "\t emin 1.000e-01 nan TeV nan nan True\n", "\t emax 1.000e+01 nan TeV nan nan True\n" ] } ], "source": [ "print(model)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "Table length=9\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
name      lon_0      lat_0      index      amplitude  emin       emax       norm       tilt       reference\n",
"str9      float64    float64    float64    float64    float64    float64    float64    float64    float64\n",
"lon_0     1.014e-05  3.140e-07  0.000e+00  -9.785e-17 0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"lat_0     3.140e-07  1.060e-05  0.000e+00  -2.543e-16 0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"index     0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"amplitude -9.785e-17 -2.543e-16 0.000e+00  2.946e-24  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"emin      0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"emax      0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"norm      0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"tilt      0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n",
"reference 0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00  0.000e+00\n
" ], "text/plain": [ "\n", " name lon_0 lat_0 index ... norm tilt reference\n", " str9 float64 float64 float64 ... float64 float64 float64 \n", "--------- ---------- ---------- --------- ... --------- --------- ---------\n", " lon_0 1.014e-05 3.140e-07 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", " lat_0 3.140e-07 1.060e-05 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", " index 0.000e+00 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", "amplitude -9.785e-17 -2.543e-16 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", " emin 0.000e+00 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", " emax 0.000e+00 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", " norm 0.000e+00 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", " tilt 0.000e+00 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00\n", "reference 0.000e+00 0.000e+00 0.000e+00 ... 0.000e+00 0.000e+00 0.000e+00" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# To get the errors on the model, we can check the covariance table:\n", "result.parameters.covariance_to_table()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Todo: Demonstrate plotting a flux map" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises\n", "1. Plot residual maps as done in the `analysis_3d` notebook\n", "2. Iteratively add and fit sources as explained in `image_fitting_with_sherpa` notebook\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.0" }, "nbsphinx": { "orphan": true } }, "nbformat": 4, "nbformat_minor": 2 }