Reduction of IFU Spectral Images with PyRAF

Note

The following code has been revised and runs successfully on data from Gemini South; the only modifications needed were to the selection of the Gemini South data.

This tutorial will reduce observations from programs GS-2012B-Q-26 and GN-2012B-Q-226 (PI: M. Schirmer), IFU spectroscopy of the “green bean” galaxy, J224024.1–092748. The observations were obtained on the nights of 2012 Oct 18–19 (GMOS-N) and 2012 Aug 27 and 29 (GMOS-S), and included g-band direct (acquisition) imaging of the field of interest, IFU 2-slit spectroscopy, and supporting calibration exposures. The spectra were obtained with the following configurations:

Configurations and Spectral Coverage

Configuration        Passband     Rest Passband
B600/499 + g         429–542 nm   324–409 nm
B600/625 + r         559–688 nm   422–519 nm
R831/853 + z + CaT   847–905 nm   639–683 nm

The original science goal was to analyze the 3-D kinematics, ionization state, and physical parameters of the gas in this Type-2 Seyfert galaxy at \(z=0.326\) (Davies et al. 2015 [DST]).

IFU spectra are especially complex to reduce, and demanding science goals require careful attention to detail. For an advanced discussion and a detailed recipe for deriving high-quality science products from this program’s data, see the posting by James Turner on the Gemini Data Reduction User Forum (download the reduction script: allred.cl). For a high-level description of IFU reductions, see IFU Workflow. The sections below outline the reduction of this program’s IFU data.


Retrieve & Organize Data

The first step is to retrieve the data from the Gemini Observatory Archive (see Archive Searches). You may search the GOA yourself, or instead just cut-and-paste the direct URLs below into your browser.

# IFU data of Green Bean galaxy:
https://archive.gemini.edu/searchform/cols=CTOWEQ/GS-2012B-Q-26/notengineering/GMOS/NotFail
https://archive.gemini.edu/searchform/cols=CTOWEQ/GN-2012B-Q-226/notengineering/GMOS/NotFail

After retrieving the science data, click the Load Associated Calibrations tab on the search results page and download the associated bias and flat-field exposures. See Retrieving Data for details. Unpack all of them in a subdirectory of your working directory named ./raw, and be sure to uncompress the files.
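
If you download the exposures as a compressed bundle, unix commands along the following lines will unpack them; the bundle name here is hypothetical, so substitute whatever the archive actually delivered:

cd /path/to/work_directory
mkdir raw
tar -xf gemini_data.tar -C raw    # hypothetical bundle name
cd raw
bunzip2 *.fits.bz2                # GOA files are bzip2-compressed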

It is highly recommended to create an observing log (see Creating an Observing Log), and review the exposures to understand the observing program and the nature of the science exposures. A review of the log for this program reveals the following:

  • GCAL flat exposures were obtained contemporaneously with the science and standard star exposures, which will enable an accurate tracing for the fiber locations on the detector format.

  • The Arc exposures were obtained as dayCal observations, well separated in time from the science exposures and other Arcs. Thus, it is only useful to process one Arc for each configuration, since they cannot be combined and instrument flexure will introduce zero-point errors. A refinement to the wavelength calibration for science exposures will be necessary.

  • Exposures of standard stars were obtained, but not on the same nights as the science exposures, and no single star was observed in all configurations.

  • Multiple science exposures were obtained in each configuration, all with long durations. The emission features are weak, the cosmic rays are plentiful, and (not surprisingly) the spectral images will have to be aligned post-extraction, prior to combining.

  • Some twilight flat dayCal exposures were obtained, but are not needed or used for this IFU program.

Science & Standard Star Observations

DateObs      Configuration   Target       Used?
2012-08-27   GS-B600/625     J2240-0927   Yes
2012-08-29   GS-B600/499     EG131        No
2012-08-29   GS-B600/625     EG131        Yes
2012-10-05   GN-B600/499     BD+28_4211   Yes
2012-10-08   GN-B600/499     J2240-0927   Yes
2012-10-18   GN-R831/853     J2240-0927   Yes
2012-10-31   GS-B600/625     LTT9239      Yes
2013-04-03   GS-R831/853     Wolf1346     No

Reductions for the GMOS-S B600/625 observations listed above will be described in the following sections.

Processing Preparation

Software

Example Reduction Script

You must create an observing log database of the data in the ./raw subdirectory. Download: obslog.py to the ./raw subdirectory, and execute it from the unix prompt.

cd /path/to/work_directory/raw
python obslog.py obsLog.sqlite3

See Creating an Observing Log for details.

Also retrieve the following two python modules:

  • the selection module, which includes template SQL statements for selecting files, and functions for specifying metadata on which to perform selections.

  • the IFU data processing script for this tutorial

From an active PyRAF session:

import copy
from pyraf import iraf
from pyraf.iraf import gemini, gemtools, gmos, rv
import fileSelect as fs
import gmos_ifu_proc

The full script will take a while to run (the better part of a day on a 2010-era Mac Pro). You may find it more useful to download the script just to follow this tutorial in detail, and then use it as the basis for reducing other IFU observations.

Caution

This script may use a large amount of available storage if Google Drive is connected to your desktop. Disable syncing for the working directory before running the script; otherwise a hidden directory named ‘.tmp.driveupload’ (in the same directory as ‘vm_transfer.’) can take up 200+ GB of space.

Caution

The reduction script includes steps that should be performed interactively for best results, but the interactive options have been turned off in the script in order not to interrupt the automated processing flow.

The processing steps for this tutorial are described below, but in the interest of brevity only a subset of the commands, for B600/625, is given.

Suppressing Cosmic Rays

To suppress the cosmic rays effectively, particularly for the long-duration science exposures, install the L.A.Cosmic script (see [VD]) if you have not already done so. You may define this task in your login.cl file to avoid this step each time you start IRAF; see Setting up the Add-on Task for CR Rejection. This task, which is called by gemtools.gemcrspec, works much better than the gmos.gscrrej task, which is invoked when fl_gscrrej is set during gfreduce processing.
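
For reference, the declaration in your login.cl would look something like the following; the path is illustrative, and should point to wherever you saved the script:

task lacos_spec = "/path/to/lacos_spec.cl"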

Reference File Checklist

The table below lists the basic calibration files that will be constructed for the science data processing steps, apart from the wavelength solutions (which are written to the ./database subdirectory). Files that include an asterisk (*) wildcard character refer to any names with the specified suffix.

Required MasterCals

Type            GMOS-N                     GMOS-S
Static BPM      [N/A]                      bpm_gmos-s_EEV_1x1_3amp_full
Bias Residual   MCbiasN                    MCbiasS1 and MCbiasS2
Flat-field      eprgN20121018S0070_flat    eprgS20120829S0062_flat
                eprgN20121018S0073_flat    eprgS20120827S0069_flat
                eprgN20121005S0296_flat    eprgS20121031S0025_flat
Sensitivity     Sens_B6-499                Sens_B6-625
                Sens_R8-853

Fetch and uncompress the Static BPM MasterCal for full-frame read-outs of the GMOS-S EEV CCDs. The GMOS-N CCDs have very few cosmetic defects, so a Static BPM is not necessary.

You will need to make local copies (i.e., in your work directory) of the monochromatic magnitudes for the standard stars (see: Standards List). These standards will be used to make a common sensitivity function for GMOS-S B600/625, but the reference files reside in different directories in the IRAF system.

  • EG131: gmos$calib/eg131.dat

  • LTT9239: onedstds$ctionewcal/l9239.dat

Now take a deep breath; this is going to be a bit of a slog.

Software Initialization

Start a PyRAF session and import the essential python modules (see above). Then define the locations of the observing log database, the Static BPM MasterCal, and the L.A.Cosmic processing script:

dbFile='raw/obsLog.sqlite3'
bpm = 'bpm_gmos-s_EEV_v2_1x1_spec_MEF.fits'
iraf.task(lacos_spec='./lacos_spec.cl')

The PyRAF reduction script makes use of a simple container class, which can be populated with the names of MasterCal reference files that apply to different types of exposures. Within a PyRAF session, define the reduction class:

class ReductParam(object):
    '''Container for data reduction file lists, MasterCal names, etc.
    '''
    def __init__(self, expType, expParams, dbFile, bias=None, trace=None,
        gaps=None, flat=None, wvtran=None, sensfunc=None, target=None):
        '''Construct a data reduction container for a set of exposures.
        '''
        self.type = str(expType)
        self.qd = expParams
        self.dbFile = dbFile
        # Keep MasterCal names as given; None means "not applicable".
        # (Wrapping None in str() would yield the literal string 'None'.)
        self.MCbias = bias
        self.MCtrace = trace
        self.MCgaps = gaps
        self.MCresponse = flat
        self.MCwavtran = wvtran
        self.MCsens = sensfunc
        self.targName = target
        # Construct a list of files appropriate for the exposure type.
        qd = self.qd
        if qd is not None and self.dbFile is not None:
            self.fileList = fs.fileListQuery(self.dbFile,
                            fs.createQuery(self.type, qd), qd)
        else:
            self.fileList = []

Now create a dictionary of essential exposure parameter=value pairs. Restricting the bias exposures to within about two weeks of the target observations will yield about 60 bias exposures, which is plenty. There is a second epoch of GMOS-S B600/625 exposures, for the standard star LTT9239, so you will need a second list of biases.

qds1 = {'use_me':1,
       'CcdBin':'1 1',
       'DateObs':'2012-08-20:2012-09-01',
       'Instrument':'GMOS-S',
       'Disperser':'B600+_%',
       'AperMask':'IFU-2',
       'CentWave':625.0,
       'Object':'J2240-0927',
       'RoI':'Full'
       }
# Also make a dictionary for late-epoch GMOS-S
qds2 = copy.deepcopy(qds1)
qds2['DateObs'] = '2012-10-26:2012-11-03'
qds2['Object'] = 'LTT9239'
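
As an illustration of how the container class and the query dictionaries work together, the following hypothetical instantiation would gather the first-epoch GCAL flat exposures (the instantiations actually used in this tutorial appear in later sections):

# Hypothetical example: collect the first-epoch GCAL flats.
flatS1 = ReductParam('gcalFlat', qds1, dbFile, bias='MCbiasS1')
print "Selected files: %s" % (','.join(flatS1.fileList))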

Building MasterCals

The next steps will create the necessary MasterCal reference files that are used to calibrate the science files. Files are selected by matching specific exposure metadata in the observing log database (see GMOS Processing Keywords). Now create the Bias Residual MasterCal:

# Use primarily the default task parameters.
gemtools.gemextn.unlearn()    # Disarm a bug in gbias
gmos.gbias.unlearn()
gmos.gbias.logfile = 'biasLog.txt'
gmos.gbias.rawpath = './raw/'
gmos.gbias.fl_vardq = 'yes'
gmos.gbias.verbose = 'no'

bias = {}
# The following SQL generates the list of full-frame files to process.
SQL = fs.createQuery('bias', qds1)
bias['S1'] = fs.fileListQuery(dbFile, SQL, qds1)

# All in one statement for the second epoch.
bias['S2'] = fs.fileListQuery(dbFile, fs.createQuery('bias', qds2), qds2)

# Process bias files for each epoch of observations.
for epoch in ['S1', 'S2']:
    # The str.join() function is needed to transform a python list into
    # a string of comma-separated files that IRAF can understand.
    gmos.gbias(','.join(str(x) for x in bias[epoch]), 'MCbias' + epoch)

Calibration Processing

Generating the calibration reference files beyond the Bias Residual MasterCal is a fairly involved process. The following subsections describe the preparation of the calibration reference files that are necessary for removing the instrumental signature.

Processing with gfreduce

Many reduction steps are performed by the gfreduce task. This task has more than 75 parameters; the table below lists the defaults for the “flag” keywords—i.e., keywords with logical values to indicate whether to include a specific step in the processing.

gfreduce Processing Flag defaults

Flag           Default   Description
fl_addmdf      Yes       Append MDF extension?
fl_bias        Yes       Subtract Bias Residual?
fl_fixgaps     No        Interpolate over chip gaps after extraction?
fl_fixnc       No        Auto-correct for nod count mismatch in N&S observations?
fl_fulldq      No        Decompose DQ into constituent bits before transforming them?
fl_extract     Yes       Extract the spectra?
fl_fluxcal     Yes       Apply flux calibration?
fl_nodshuffl   No        Using one of the N&S slitmasks?
fl_gsappwave   Yes       Insert approximate dispersion solution into header?
fl_gnsskysub   No        Subtract sky from N&S images?
fl_gscrrej     Yes       Clean images of cosmic rays?
fl_inter       Yes       Perform operations interactively?
fl_novlap      Yes       Extract only non-overlapping sections of 2-slit spectra?
fl_over        Yes       Perform overscan correction?
fl_skysub      Yes       Subtract mean sky spectrum?
fl_trim        Yes       Trim overscan region?
fl_vardq       No        Propagate VAR and DQ?
fl_wavtran     Yes       Apply wavelength calibration?
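
The full parameter set, with current values, can be listed from within PyRAF for inspection:

# List all gfreduce parameters and their current values.
gmos.gfreduce.lParam()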

Processing IFU data is more complicated than for other instrument configurations, for a variety of reasons including:

  • The gfreduce task does not fully integrate some options and operations (e.g., an input parameter for the Static BPM), and frankly, it is a little cantankerous.

  • The narrow spatial extent of the IFU fibers, and the close packing of the output fiber spectra on the FPA means that effects such as instrument flexure and cosmic-ray hits have an outsized impact on the data processing, and must be addressed with care.

  • Some operations such as flat-field generation have not been broken out into a separate task, and other tasks do not fully support file lists.

The workflow for IFU processing is, therefore, somewhat iterative and requires explicit iteration over files.

Spectrum Trace

Ideally GCAL flats were obtained contemporaneously with the science exposures so that they may be used as templates for tracing the spectrum from each fiber on the science image array; that is the case for this program. A representative shift in location of the fiber traces for two different flat-field exposures is shown in the figure below, which illustrates the need for a contemporaneous GCAL flat with science exposures.

[Figure: IFU_fiberSpec.png]

A shift of about 1 pixel in the vertical position of fiber spectra in two GCAL flat-fields, taken weeks apart and at a different telescope orientation. Each fiber spectrum subtends only about 6 spatial pixels, corresponding to 0.2 arcsec in the focal plane. Wavelength increases from right to left.

The first reduction step is to process those GCAL exposures through the following:

  • bias correction

  • spectrum location and extraction for each fiber

  • identification of “gaps” in the slit where there are no fiber spectra

You will use the extraction parameters, which are written to the ./database directory, for downstream processing. To get started, we first populate some dictionaries with processing task parameters. These dictionaries provide persistent storage of parameters, and allow the calls to the tasks to be very compact.

# Set parameters.
gmos.gireduce.unlearn()
gmos.gfreduce.unlearn()
gmos.gfreduce.verbose = 'no'
traceFlags = {
    'fl_addmdf':'yes', 'fl_over':'yes', 'fl_trim':'yes', 'fl_bias':'yes',
    'fl_extract':'yes', 'fl_gsappwave':'no', 'fl_wavtran':'no',
    'fl_gscrrej':'no', 'fl_skysub':'no', 'fl_fluxcal':'no', 'fl_vardq':'yes',
    'fl_inter':'no', 'rawpath':'./raw/', 'logfile':'gfreduceLog.txt'
    }
flatFlags = {
     'fl_addmdf':'no', 'fl_over':'no', 'fl_trim':'no', 'fl_bias':'no',
     'fl_extract':'yes', 'fl_gsappwave':'yes', 'fl_wavtran':'no',
     'fl_gscrrej':'no', 'fl_skysub':'no', 'fl_fluxcal':'no', 'fl_vardq':'yes',
     'fl_inter':'no', 'rawpath':'./', 'logfile':'gfreduceLog.txt'
    }
scatsubFlags = {
    'prefix':'b', 'xorder':'5,9,5', 'yorder':'5,7,5', 'cross':'yes'
    }

The gfreduce task does not have an option for inserting a Static BPM MasterCal into the DQ extensions of the processed flats, so this must be done explicitly. Alas, the applicable task, addbpm, appears not to work properly, so the DQ extensions will be inserted in place. The bad columns will then be interpolated over so as not to corrupt downstream processing. Note that the intermediate files from each major step in the data reduction are tagged with a prefix: see File Nomenclature for a list.

prefix='rg'
gmos.gfscatsub.unlearn()
gemtools.gemfix.unlearn()
gemtools.gemfix.logfile='gemfixLog.txt'

# Basic reductions for each epoch of GCAL flat-fields.
flat = {}
flat['S1'] = fs.fileListQuery(dbFile, fs.createQuery('gcalFlat', qds1), qds1)
flat['S2'] = fs.fileListQuery(dbFile, fs.createQuery('gcalFlat', qds2), qds2)
for epoch in ['S1','S2']:
    for f in flat[epoch]:
        # The task parameters are passed as: **paramDictionary
        gmos.gfreduce (f, bias='MCbias'+epoch, **traceFlags)

        # Insert the BPM as the initial DQ extensions.
        rgf = prefix + f
        for i in ['1','2','3']:
            dq = '[DQ,'+i+']'
            dqow = '[DQ,'+i+',overwrite+]'
            iraf.imcopy(bpm+dq, rgf+'.fits'+dqow)

        # Interpolate over bad pixels
        gemtools.gemfix (rgf, 'p'+rgf, method='fit1d', bitmask=1, order=32)

        # Extract the fiber spectra to define the trace.
        prgf = 'prg' + f
        traceFile = 'e' + prgf + '_trace'
        gmos.gfreduce (prgf, outimages=traceFile, **flatFlags)

        # Identify the gaps between fiber blocks, then subtract scattered light.
        gapsFile = 'e' + prgf + '_gaps'
        gmos.gffindblocks (prgf, traceFile, gapsFile)
        gmos.gfscatsub (prgf, gapsFile, **scatsubFlags)

The *_trace extracted flat-field files generated above serve two purposes: they define the fiber extraction parameters, and they are input to the flat-field normalization. The flat-field exposures are of such short duration that cosmic rays (CRs) are not a concern. There is, however, a need to correct for scattered light in well-exposed images such as flat-fields.

Caution

The gfscatsub task has not been completely reliable during testing, sometimes producing wildly incorrect results. The scattered-light correction can be skipped, but doing so will adversely affect the subsequent normalization of the extracted science spectra.

For some detectors on GMOS-S (the EEV and the Hamamatsu CCDs) the relative QE correction between sensors should be applied. This step is skipped in this tutorial, but if implemented then the correction must be applied to both the flats and the astrophysical targets.

If the gfscatsub task is skipped, or if the QE correction is performed (in v1.13.1+), bear in mind that these steps prepend a b and a q, respectively, to the output filenames; the prefixes for the filename lists used below would then all need to be updated accordingly.
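
If you do alter the workflow, the cumulative prefix can be composed programmatically rather than edited by hand throughout the script. A minimal sketch, with hypothetical booleans for the two optional steps:

# Sketch: compose the filename prefix used in the normalization step below,
# assuming (hypothetically) that the QE correction follows gfscatsub.
do_scatsub = True     # ran gfscatsub (prepends 'b')?
do_qecorr = False     # ran the QE correction (prepends 'q')?
prefix = 'prg'
if do_scatsub:
    prefix = 'b' + prefix
if do_qecorr:
    prefix = 'q' + prefix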

Flat Normalization

The response function will be built from the background-corrected flat-field exposures. Re-run the extraction, and normalize each fiber spectrum by fitting a 1-D curve in the dispersion direction to remove the color term. The fit to the response curve should be done interactively for best results; a very high order will be needed.

gmos.gfresponse.unlearn()
gmos.gfresponse.verbose = 'no'
responseFlags = {
    'skyimage':'', 'fl_inter':'no', 'fl_fit':'no', 'sample':'1:1,30:2855,2879:2879',
    'logfile':'gfresponseLog.txt', 'function':'spline3', 'order':47
    }
flatFlags = {
     'fl_addmdf':'no', 'fl_over':'no', 'fl_trim':'no', 'fl_bias':'no',
     'fl_extract':'yes', 'fl_gsappwave':'yes', 'fl_wavtran':'no',
     'fl_gscrrej':'no', 'fl_skysub':'no', 'fl_fluxcal':'no', 'fl_vardq':'yes',
     'fl_inter':'no', 'rawpath':'./', 'logfile':'gfreduceLog.txt'
    }
prefix='bprg'
for fList in ['S1','S2']:
    for f in flat[fList]:
        outFile = prefix+f + '_flat'
        gmos.gfreduce (prefix+f, **flatFlags)
        # The re-extracted flat from the step above is named 'e'+prefix+f.
        gmos.gfresponse ('e'+prefix+f, outimage=outFile, **responseFlags)

Arcs

The wavelength calibration can be performed once the fiber trace has been defined, so the arc exposures are processed next. The traces (specified with the reference parameter) do not match the dayCal arcs perfectly, but they are adequate for wavelength calibration. The following basic reduction performs overscan correction and trimming, but does not apply the Bias Residual MasterCal, because dayCal Arc exposures are normally obtained in fast read-out mode.

The default CuAr line list (gmos$calib/CuAr_GMOS.dat) has been heavily groomed to produce good fits at low S/N; with 1500 fibers, one must either trust the default or perform only the first few wavelength solutions interactively.

gmos.gswavelength.unlearn()
waveFlags = {
    'fwidth':8, 'minsep':2.5, 'fl_inter':'no', 'logfile':'gswaveLog.txt'
    }
arcFlags = {
    'fl_addmdf':'yes', 'fl_over':'yes', 'fl_trim':'yes', 'fl_bias':'no',
    'fl_extract':'yes', 'fl_gsappwave':'yes', 'fl_wavtran':'no',
    'fl_gscrrej':'no', 'fl_skysub':'no', 'fl_fluxcal':'no', 'fl_vardq':'no',
    'fl_fixgaps':'no', 'reference':'eprgS20120827S0069_trace',
    'fl_inter':'no', 'rawpath':'./raw/', 'logfile':'gfreduceLog.txt'
    }
arcs = fs.fileListQuery(dbFile, fs.createQuery('arc', qds1), qds1)
for f in arcs:
    gmos.gfreduce (f, **arcFlags)
    gmos.gswavelength ('erg'+f, **waveFlags)
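
Should you wish to vet the line identifications rather than trust the default list, the solution for the first arc can be re-derived interactively; a sketch reusing waveFlags:

# Optional: re-fit the first arc interactively to inspect the solution.
waveFlagsInt = dict(waveFlags)
waveFlagsInt['fl_inter'] = 'yes'
gmos.gswavelength ('erg' + arcs[0], **waveFlagsInt)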

Standard Stars

The standard star exposures need to be processed all the way through aperture extraction, so that the sensitivity function can be derived. We start with basic processing, but there is a wrinkle: EG131 was obtained as a partnerCal and LTT9239 was obtained as a science exposure. Thus, the selection criteria are slightly different.

std1 = ReductParam ('std', qds1, 'raw/obsLog.sqlite3', bias='MCbiasS1',
       trace='eprgS20120829S0062_trace', flat='eprgS20120829S0062_flat',
       gaps='eprgS20120829S0062_gaps', wvtran='ergS20120829S0199')
std2 = ReductParam ('sciSpec', qds2, 'raw/obsLog.sqlite3', bias='MCbiasS2',
       trace='eprgS20121031S0025_trace', flat='eprgS20121031S0025_flat',
       gaps='eprgS20121031S0025_gaps', wvtran='ergS20120829S0199')
basicFlags = {
    'fl_addmdf':'yes', 'fl_over':'yes', 'fl_trim':'yes', 'fl_bias':'yes',
    'fl_extract':'no', 'fl_gsappwave':'no', 'fl_wavtran':'no', 'fl_fixgaps':'no',
    'fl_gscrrej':'no', 'fl_skysub':'no', 'fl_fluxcal':'no', 'fl_vardq':'yes',
    'fl_inter':'no', 'rawpath':'./raw/', 'logfile':'gfreduceLog.txt'
    }
prefix = 'rg'
for s in [std1,std2]:
    for f in s.fileList:
        gmos.gfreduce (f, bias=s.MCbias, **basicFlags)
        # Insert the BPM as the initial DQ extensions.
        rgf = prefix + f
        for i in ['1','2','3']:
            dq = '[DQ,'+i+']'
            dqow = '[DQ,'+i+',overwrite+]'
            iraf.imcopy(bpm+dq, rgf+'.fits'+dqow)

Reject Artifacts

Cleaning the exposures of cosmic rays is important for the standard stars; about 4 iterations are typically necessary. Note that the gemcrspec task requires that the lacos_spec.cl script be installed (see Add-on Task for CR Rejection). This task will run for quite a while, perhaps several hours on a desktop machine.

gemtools.gemcrspec.unlearn()
gemtools.gemcrspec.fl_vardq='yes'
gemtools.gemcrspec.verbose='yes'
gemtools.gemcrspec.logfile='gemcrspecLog.txt'

for f in std1.fileList+std2.fileList:
    xFile = 'x' + prefix + f
    gemtools.gemcrspec (prefix+f, xFile, sigfrac=0.32, niter=4)
    gemtools.gemfix (xFile, 'p'+xFile, method='fixpix')

Note that L.A.Cosmic does not completely clean the CRs, although they are marked in the BPM, and it does not deal well with bad columns. Thus the final interpolation over bad pixels with gemfix, above, is necessary, and very helpful for downstream processing.

Remove Scattered Light

Standard stars are not usually as well exposed as flats, but scattered light can still be significant. We remove it here for good measure.

prefix = 'pxrg'
for s in [std1,std2]:
    for f in s.fileList:
        fName = prefix + f
        gmos.gfscatsub (fName, s.MCgaps)

Note

Again, this is the place where the QE correction would appear in the processing workflow for GMOS-S CCDs.

Standard Star Spectral Processing

Resume basic processing of the standard stars with gfreduce to apply the flat-field and wavelength calibrations, followed by sky subtraction and aperture summation. The wavelength transformation parameters below select image columns in the dispersion direction that are common to all of the fibers in the undistorted frame. (The specific parameters were adopted from inspecting a prior reduction run.) Recall that there are different trace references for the standard stars, so they are processed separately.
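
As a quick sanity check on the adopted sampling (the values appear in stdFlags below), the red limit of the trimmed wavelength grid follows directly from w1, dw, and nw:

# Red limit of the output grid: w1 + dw*(nw - 1)
w1, dw, nw = 5618.0, 0.4622, 2822
print "Red limit: %.1f Angstroms" % (w1 + dw*(nw - 1))   # about 6921.9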

stdFlags = {
    'fl_addmdf':'no', 'fl_over':'no', 'fl_trim':'no', 'fl_bias':'no',
    'fl_extract':'yes', 'trace':'no', 'recenter':'no', 'fl_gscrrej':'no',
    'fl_fixgaps':'yes', 'fl_gsappwave':'yes', 'fl_wavtran':'yes',
    'w1':5618., 'w2':'INDEF', 'dw':0.4622, 'nw':2822,
    'fl_skysub':'yes', 'sepslits':'yes', 'fl_fluxcal':'no',
    'fl_vardq':'yes', 'fl_inter':'no', 'rawpath':'./'
    }
prefix = 'bpxrg'

for s in [std1,std2]:
    for f in s.fileList:
        print "  -Processing file: %s" % (prefix+f)
        gmos.gfreduce (prefix+f, reference=s.MCtrace, response=s.MCresponse,
                       wavtraname=s.MCwavtran, **stdFlags)

Aperture Sum

All the fibers from the object field are combined with gfapsum into a 1-D spectrum, using the MDF extension (which was inserted into each file) to select the fibers. We process the stars in separate loops to:

  • specify a more convenient name for the summed spectrum of EG131

  • loop over multiple exposures for LTT9239.

gmos.gfapsum.unlearn()
gmos.gfapsum.fl_inter="no"
gmos.gfapsum.logfile="gfapsumLog.txt"
prefix = 'stebpxrg'

# EG131:
for f in std1.fileList:
    gmos.gfapsum (prefix+f, outimages='EG131sum_B6-625', lthreshold=0.)
# LTT9239:
for f in std2.fileList:
    gmos.gfapsum (prefix+f, lthreshold=0.)

The three sequential extracted spectra for LTT9239 can be combined and given a more convenient name.

gemtools.gemcombine.unlearn()
gemtools.gemcombine.logfile="gemcombineLog.txt"
prefix = 'astebpxrg'

gemtools.gemcombine (','.join(prefix+str(x) for x in std2.fileList),
               'LTT9239sum_B6-625', reject='none', scale='exposure')

Sensitivity Function

The next step is to derive the sensitivity function from the standard star spectra. To use the Mauna Kea extinction function for GMOS-N data, download: mk_extinct.txt and place it in your work directory. If you have not already done so, make a local copy of the monochromatic magnitudes reference files for the standards (see Reference File Checklist).

copy onedstds$ctionewcal/l9239.dat ./
copy gmos$calib/eg131.dat ./

Now create the sensitivity function MasterCal, using both standard star spectra.

gmos.gsstandard.unlearn()
gsstdFlags = {
    'caldir':'./', 'extinction':'onedstds$ctioextinct.dat', 'order':11,
    'observatory':'Gemini-South', 'logfile':'gsstandardLog.txt',
    'fl_inter':'yes'
    }
gmos.gsstandard ('EG131sum_B6-625,LTT9239sum_B6-625', 'std_B6-625',
                 'Sens_B6-625', starname='eg131,l9239', **gsstdFlags)

The sensitivity curves for the separate standards agree very well, once a grey shift is applied.

[Figure: IFU_B6-625_sens.png]

Fit to the sensitivity function for B600/625 after grey-shifting the two contributing standard stars, EG131 and LTT9239. One artificial point (blue) was added beyond the end of the reddest passband to avoid an otherwise unphysical decline in the derived sensitivity.

Science Processing

Almost done. With the calibrations in place, proceed with processing the science exposures.

Basic Science Processing

Since GCAL flats were taken contemporaneously with the science exposures, the flat obtained closest in time will be used for the trace reference.

prefix = 'rg'
sci = ReductParam ('sciSpec', qds1, 'raw/obsLog.sqlite3', bias='MCbiasS1',
            trace='eprgS20120827S0069_trace', flat='eprgS20120827S0069_flat',
            wvtran='ergS20120829S0199', sensfunc='Sens_B6-625')

for f in sci.fileList:
    gmos.gfreduce (f, bias=sci.MCbias, **basicFlags)
    # Insert the BPM as the initial DQ extensions.
    rgf = prefix + f
    for i in ['1','2','3']:
        dq = '[DQ,'+i+']'
        dqow = '[DQ,'+i+',overwrite+]'
        iraf.imcopy(bpm+dq, rgf+'.fits'+dqow)

Reject Artifacts

Cleaning the exposures of cosmic rays is essential for the science exposures; perhaps 4 or 5 iterations will be necessary. This task will run for quite a while, probably hours on a desktop machine.

for f in sci.fileList:
    print "  -CR rejecting file: %s" % (prefix+f)
    xFile = 'x' + prefix + f
    gemtools.gemcrspec (prefix+f, xFile, sigfrac=0.32, niter=4)
    gemtools.gemfix (xFile, 'p'+xFile, method='fixpix')

Note

Again, this is the place where the scattered light and QE corrections would appear in the processing workflow. A scattered light correction would have only a minor effect on the science exposures.

Spectral Processing

Note that the wavelength range has been trimmed slightly (using the w1, w2, dw, and nw parameters), in order to exclude ends of the spectrum that are not covered in some fibers once the distortion correction has been applied. This truncation is not strictly necessary, but can be convenient for display and downstream processing.

sciFlags = {
    'fl_addmdf':'no', 'fl_over':'no', 'fl_trim':'no', 'fl_bias':'no',
    'fl_extract':'yes', 'trace':'no', 'recenter':'no', 'fl_gscrrej':'no',
    'fl_fixgaps':'yes', 'fl_gsappwave':'yes', 'fl_wavtran':'yes',
    'w1':5618., 'w2':'INDEF', 'dw':0.4622, 'nw':2822,
    'fl_skysub':'yes', 'sepslits':'yes', 'fl_fluxcal':'yes',
    'fl_vardq':'yes', 'fl_inter':'no', 'rawpath':'./'
    }
prefix = 'pxrg'
for f in sci.fileList:
    gmos.gfreduce (prefix+f, reference=sci.MCtrace, response=sci.MCresponse,
                   wavtraname=sci.MCwavtran, sfunction=sci.MCsens, **sciFlags)
# ...and similarly for other grating settings.

This concludes basic processing and calibration, though some refinements and advanced data products are discussed below.

Advanced Processing

Wavelength Alignment

Since a dayCal Arc exposure was used to determine the wavelength scale, the separate science exposures will in general not have identical wavelength zero-points owing to instrument flexure and the time-dependent barycentric correction. To determine the wavelength offsets, load the rv package and run the rvidlines task on the background aperture (i.e., the [SKY] extension). Reference night sky emission lines will be adopted from Osterbrock et al. (1996, PASP, 108, 277). Download: skylines.txt and place it in your work directory.

rv.rvidlines.unlearn()
rv.observatory='gemini-south'
rvFlags = {
    'coordlist':'skylines.txt', 'ftype':'emission', 'nsum':1, 'threshold':7.,
    'maxfeatures':10, 'fwidth':10., 'cradius':10., 'minsep':5.,
    'logfile':'rvLog.txt', 'autowrite':'yes'
    }
rv.rvidlines ('stepxrgS20120827S0066.fits[SKY,inherit+]', **rvFlags)
# ...and so on for the other target exposures.

In the interactive plot type:

  • l to identify the night sky lines

  • f to fit for the mean velocity

  • q to write the parameters to the log file and quit the task

The output will include the mean shift in velocity and the corresponding redshift z. Apply the shift to the WCS reference wavelength to compute the corrected zero-point:

\[\lambda_0' = \lambda_0 - z\,\bar{\lambda}\]

where \(\lambda_0\) is taken from the file header (\(\mathtt{CRVAL1} = 5618\), the zero-point specified during the gfreduce calibration processing), \(z\) is the redshift output from the task, and \(\bar{\lambda}\) is the central wavelength of the configuration (6250 Å). The output from this same analysis by James Turner produced the shifted wavelengths used below to update the file headers.
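
A minimal sketch of this arithmetic, using an illustrative value of z (the measured value for each exposure comes from the rvidlines log):

# Corrected reference wavelength; the z value here is illustrative only.
lam0 = 5618.0     # CRVAL1 from the header
lamCen = 6250.0   # central wavelength of the B600/625 configuration
z = -2.62e-5      # hypothetical redshift reported by rvidlines
print "Corrected CRVAL1: %.3f" % (lam0 - z*lamCen)   # 5618.164 for this z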

waveCorr = {
    'cstepxrgS20120827S0066':5618.164,
    'cstepxrgS20120827S0067':5618.268,
    'cstepxrgS20120827S0068':5618.387,
    'cstepxrgS20120827S0070':5618.560,
    'cstepxrgS20120827S0071':5618.667,
    'cstepxrgS20120827S0072':5618.787,
    }
iraf.hedit.unlearn()
iraf.hedit.update = 'yes'
iraf.hedit.verify = 'no'

for file,w0 in waveCorr.iteritems():
    iraf.hedit (file+'.fits[sci]', 'CRVAL1', w0)

The calibrated spectral images are now on the same wavelength scale, but are not yet perfectly aligned spatially.

Data Cube

One optional but, for this program, important output product is a datacube of the science exposures: a 3-dimensional representation of the IFU observations. The datacube is an \((x,y,\lambda)\) 3-space that can be used to visualize the observations, or to facilitate the extraction of spatially summed spectra or monochromatic, 2-dimensional images. The gfcube task resamples the image when the atmospheric dispersion correction is enabled, and can optionally reject pixels flagged in the BPM (but recall that they were interpolated over in prior processing). We also elect to convert the brightness units to flux per square arcsec.

gmos.gfcube.unlearn()
cubeFlags = {
    'fl_atmdisp':'yes', 'fl_flux':'yes', 'fl_var':'yes', 'fl_dq':'yes',
    'logfile':'gfcubeLog.txt'
}
prefix = 'cstepxrg'
for f in sci.fileList:
    gmos.gfcube (prefix+f, **cubeFlags)

Spatial Alignment

The separate exposures should be combined to maximize the S/N for analysis, but they must first be aligned spatially. This can be accomplished with the PyFU datacube mosaicking package, posted by James Turner to the Gemini Data Reduction User Forum. Use of this package is beyond the scope of this tutorial, but the results show that spatial shifts affect the final 3 science exposures in the sequence, obtained just after the GCAL flat. Edit the WCS keywords in the headers of these calibrated spectra:

iraf.hedit ('dcstepxrgS20120827S0070.fits[sci]', 'CRVAL1', 65.48042)

iraf.hedit ('dcstepxrgS20120827S0071.fits[sci]', 'CRVAL1', 65.47042)
iraf.hedit ('dcstepxrgS20120827S0071.fits[sci]', 'CRVAL2', 0.045000)

iraf.hedit ('dcstepxrgS20120827S0072.fits[sci]', 'CRVAL1', 65.50042)
iraf.hedit ('dcstepxrgS20120827S0072.fits[sci]', 'CRVAL2', 0.035000)

Combine Cubes

Now the datacubes may be combined with, e.g., the PyFU package. Here we use the imcombine task instead, which will suffice for a quick look but will not propagate the variance array. No rejection is enabled, for simplicity.

iraf.imcombine.unlearn()
iraf.imcombine ('dcstepxrgS20120827S00*.fits[SCI]', 'j2240-0927_cube.fits')

The datacube may be visualized in an appropriate tool, such as SAOImage DS9.

[Figure: J2240_6645.png]

Combined, false-color image of the galaxy J224024.1–092748 obtained with grating B600/625 in the rest frame of [O III] \(\lambda5007\). This narrow-band (about 28 km/s) image is one spatial frame of a combined data cube, with north up and east to the left.

Extracting spectra from the datacube is straightforward using apsum or, e.g., stsdas.improject.
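
For example, IRAF image sections can pull a single-spaxel spectrum or a thin wavelength slab out of the combined cube; the pixel coordinates below are purely illustrative:

# Extract a 1-D spectrum at one spaxel (x=25, y=30; illustrative):
iraf.imcopy ('j2240-0927_cube.fits[25,30,*]', 'j2240_spaxel.fits')
# Extract a thin wavelength slab for a quick-look image (planes illustrative):
iraf.imcopy ('j2240-0927_cube.fits[*,*,2210:2230]', 'j2240_OIII_slab.fits')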

[Figure: j2240_spec.png]

Extracted spectra in the vicinity of [O III] \(\lambda5007\) from the nucleus (left) and the nearby ionized region (right). These spectra correspond to the left- and right-hand sides, respectively, of the color image above.

A great deal of science analysis is possible with these data products, as the [DST] paper demonstrated.