Reduction of Long-Slit, Nod&Shuffle Spectra with PyRAF¶
Note
The following code was revised and runs successfully on data from the October epoch. The only modifications to the code involve specifying the epoch.
This tutorial will use observations from program GN-2003B-Q-14, longslit spectra of supernovae in the ESSENCE survey, taken in Nod-and-Shuffle (N&S) mode on 11 nights from 2003-Oct-28 through 2004-Jan-30.
The spectra were obtained with the NS0.75arcsec slit and the R400 grating, centered at 740 nm.
The original science goal was to classify the spectral types of the supernovae.
Comparison arcs were obtained contemporaneously for each target; GCAL flats were also obtained at each DTA-X offset position.
Two flux calibration standards were included in the observing plan.
Retrieve & Organize Data¶
The first step is to retrieve the data from the Gemini Observatory Archive (see Archive Searches). You may search the GOA yourself, or simply paste the direct URLs below into your browser. Oddly, the standard stars that were taken contemporaneously with this program do not show up in the archive search for the program itself, so a second search is necessary.
# longslit nod&shuffle data:
https://archive.gemini.edu/searchform/NodAndShuffle/cols=CTOWDEQ/GN-2003B-Q-14/notengineering/GMOS/NOTAO/NotFail/LS
# longslit standard stars
https://archive.gemini.edu/searchform/exposure_time=1-300/cenwlen=0.740/disperser=R400/cols=CTOWDEQb/NOTAO/notengineering/GMOS-N/LS/20031028-20040130/NotFail/OBJECT
After retrieving the science data, click the Load Associated Calibrations tab on the search results page and download the associated bias and flat-field exposures.
Unpack all of the raw data files in a subdirectory of your working directory (called /raw throughout this tutorial).
Be sure to uncompress the files.
See Retrieving Data for details.
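If the files arrived compressed, something like the following will uncompress them in place. This is a minimal sketch, assuming the GOA delivered bzip2-compressed FITS files into ./raw; adjust the paths to match your download.
import bz2
import glob
import os
# Uncompress every *.fits.bz2 file in ./raw, then remove the archive.
for bz2Name in glob.glob('raw/*.fits.bz2'):
    fitsName = bz2Name[:-4]   # strip the '.bz2' suffix
    with bz2.BZ2File(bz2Name, 'rb') as fin, open(fitsName, 'wb') as fout:
        fout.write(fin.read())
    os.remove(bz2Name)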
Caution
This script may consume a large amount of available storage if Google Drive is connected to your desktop. Disable syncing for the working directory before running the script; otherwise a hidden directory named ‘.tmp.driveupload’ (in the same directory as ‘vm_transfer.’) can take up 200+ GB of space.
Caution
If flux calibration is important to the selected program, it is always a good idea to perform an explicit search for applicable standard star exposures in the event they are not associated in the Archive with the science program. Note also that standard star observations are never performed in N&S mode.
Processing Preparation¶
Reference Files¶
The required MasterCals are:
Bias Residual
Nod-and-Shuffle Dark (not required for Hamamatsu CCDs)
GCAL Flat-field (for each target and DTAX position)
Wavelength calibrations (from CuAr comparison arcs, for each target)
Flux calibration (from the standard stars BD \(+28^{\circ}4211\) and Hiltner 600)
All of them will be constructed in this tutorial. Note that only the Darks and the Science exposures were taken in N&S mode.
Software¶
You must create an observing log, in the form of an SQLite database, from the header metadata of the exposures in the ./raw subdirectory.
Download:
obslog.py
Place it in the ./raw subdirectory and execute it from the Unix prompt.
python obslog.py obsLog.sqlite3
The second argument is the name of the output database. See Creating an Observing Log for details.
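It can be useful to confirm that the database was built before proceeding. Here is a quick sketch using the sqlite3 module; the table and column names are whatever obslog.py created, so inspect rather than assume them.
import sqlite3
db = sqlite3.connect('raw/obsLog.sqlite3')
# List the tables that obslog.py created from the header metadata.
print(db.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
db.close()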
Also retrieve the Python file selection module, which includes template SQL statements for selecting files and functions for specifying metadata on which to perform selections.
Download:
fileSelect.py
Place this module in your working directory; it is used by the reduction script (below). You can perform all of the processing steps for this tutorial by downloading the Longslit Nod & Shuffle Tutorial Python script.
Download:
gmos_lsns_proc.py
From an active PyRAF session:
import copy
import sys
import numpy as np
from pyraf import iraf
from pyraf.iraf import gemini, gemtools, gmos
import fileSelect as fs
Copy-and-paste the commands from the code blocks below. While it is possible to import the script directly into your PyRAF session:
import gmos_lsns_proc
you may find it more informative to use it as a guide to follow this tutorial in detail; it should also be useful as the basis for reducing other longslit N&S observations.
Caution
The reduction script includes steps that should be performed interactively for best results, but most interactive options have been disabled in the script in order not to interrupt the automated processing flow.
Building MasterCals¶
The next steps will create the necessary MasterCal reference files that are used to calibrate the science files. Files are selected by matching selected exposure metadata in the observing log database (see GMOS Processing Keywords).
# Observing log database
dbFile='raw/obsLog.sqlite3'
# From the work_directory:
# Create the query dictionary of essential parameter=value pairs.
# Select bias exposures within a few days of the Oct epoch observations:
qd = {'Oct':
{'use_me':1,'Instrument':'GMOS-N','CcdBin':'2 1','RoI':'Full',
'Disperser':'R400+_%','CentWave':740.0,'AperMask':'NS0.75arcsec',
'Object':'%.w%',
'DateObs':'2003-10-20:2003-11-02'}
}
# Make additional copies for the other observing epochs;
# this will yield ~50 bias exposures per epoch.
qd['Nov'] = copy.deepcopy(qd['Oct'])
qd['Nov']['DateObs'] = '2003-11-22:2003-11-25'
qd['Dec'] = copy.deepcopy(qd['Oct'])
qd['Dec']['DateObs'] = '2003-11-17:2003-12-26'
qd['NovDec'] = copy.deepcopy(qd['Oct'])
qd['NovDec']['DateObs'] = '2003-11-22:2003-12-26'
qd['Jan'] = copy.deepcopy(qd['Oct'])
qd['Jan']['DateObs'] = '2004-01-20:2004-02-16'
qd['All'] = copy.deepcopy(qd['Oct'])
qd['All']['DateObs'] = '2003-11-22:2004-02-16'
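Before building the MasterCals, it is worth verifying that each epoch's query actually matches files. The following sanity check is a sketch that uses the same fileSelect calls as the processing loops below.
# Preview the number of bias exposures matched for each epoch.
for e in ['Oct','NovDec','Jan']:
    biasFiles = fs.fileListQuery(dbFile, fs.createQuery('bias', qd[e]), qd[e])
    print('%s: %d bias exposures' % (e, len(biasFiles)))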
Create the Bias Residual MasterCal files, one for each epoch of observations. The task parameters are stored in a python dictionary for convenience and (in some cases) reuse.
# Use primarily the default task parameters.
gemtools.gemextn.unlearn() # Disarm a bug in gbias
gmos.gbias.unlearn()
biasPars = {
'logfile':'biasLog.txt','rawpath':'./raw/','fl_vardq':'yes','verbose':'no'
}
epochs = ['Oct','NovDec','Jan']
for e in epochs:
# The following SQL generates the list of bias files to process.
SQL = fs.createQuery('bias', qd[e])
biasFiles = fs.fileListQuery(dbFile, SQL, qd[e])
    # The str.join() function is needed to transform a python list into a
    # comma-separated string of filenames that IRAF can understand.
gmos.gbias(','.join(str(x) for x in biasFiles), 'MCbias'+e, **biasPars)
Now create the Dark MasterCal file from the dark exposures obtained in the timeframe of all observing epochs in this program. Darks are necessary to ameliorate the effects of detector defects when the charge is shuffled; they are not necessary for the Hamamatsu CCDs. See the discussion of Defining a N&S observation on the Gemini website.
Caution
While the darks are fairly stable over periods of months, it is important to use dark exposures with the same exposure time, number of N&S cycles, and shuffle distance as the science target exposures. Regrettably, matching darks for most targets in this program are not available in the Archive: the science exposure parameters do not match those of the darks routinely obtained as Baseline Calibrations.
gmos.gnsdark.unlearn()
darkPars = {
'fl_over':'yes','fl_trim':'yes','fl_bias':'yes','bias':'MCbiasNovDec',
'rawpath':'./raw/','fl_vardq':'yes','fl_inter':'no',
'logfile':'gnsdarkLog.txt','verbose':'no'
}
darkFiles = fs.fileListQuery(dbFile, fs.createQuery('dark', qd['All']),
qd['All'])
gmos.gnsdark(','.join(str(x) for x in darkFiles), 'MCdark', **darkPars)
# Fudge the number of N&S cycles to match Dark MasterCal.
iraf.hedit('MCdark.fits[0]', 'NODCOUNT', 10, add='no', update='yes',
verify='no')
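You can verify that the keyword edit took effect with imgets; this is a minimal check, not part of the calibration itself.
# Print the updated value of NODCOUNT from the primary header.
iraf.imgets('MCdark.fits[0]', 'NODCOUNT')
print(iraf.imgets.value)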
Basic Processing¶
The following steps include processing raw exposures with gsreduce, which has nearly 70 parameters. The table below lists the defaults for the processing flag keywords, i.e., the keywords with logical values that indicate whether to perform an operation. For the most part you can use the default parameter values.
Flag | Default | Description
---|---|---
fl_bias | Yes | Subtract bias residual?
fl_cut | Yes | Cut MOS SCI extensions into slitlets?
fl_dark | No | Subtract scaled dark image?
fl_fixpix | Yes | Interpolate across chip gaps?
fl_flat | Yes | Apply flat-field correction?
fl_fulldq | No | Decompose DQ extensions into component bits?
fl_gmosaic | Yes | Mosaic the image extensions?
fl_gsappwave | Yes | Insert approximate wavelength WCS keywords into header?
fl_gscrrej | No | Single-frame CR rejection?
fl_inter | No | Fit overscan levels interactively?
fl_over | Yes | Perform overscan correction?
fl_oversize | Yes | Scale slit lengths by x1.05 when cutting?
fl_title | Yes | Put MOS objID into title for each extension?
fl_trim | Yes | Trim overscan region?
fl_vardq | No | Propagate VAR and DQ?
Wavelength Calibration¶
The Arc exposures were taken contemporaneously with each science target to ameliorate wavelength zero-point errors introduced by instrument flexure.
The arc exposures only really require overscan correction; the Bias Residual correction, which is specific to an observing epoch, is not critical to wavelength calibration (but we’ll do it anyway).
The Arc image extensions are then trimmed and mosaiced together.
Following basic processing, a dispersion solution is derived (you should be able to achieve an RMS of about 0.2 pix) and the results stored in the /database subdirectory.
Below we set the task parameters and process all the Arcs in a loop:
arcPars = {
'fl_over':'yes','fl_trim':'yes','fl_bias':'yes','fl_gscrrej':'no',
'fl_dark':'no','fl_flat':'no','fl_gmosaic':'yes','fl_fixpix':'no',
'fl_gsappwave':'yes','fl_oversize':'no','fl_vardq':'no','fl_fulldq':'no',
'rawpath':'./raw','fl_inter':'no','logfile':'gsreduceLog.txt','verbose':'no'
}
# Wavelength solution might be improved slightly if done interactively.
gmos.gswavelength.unlearn()
wavePars = {
'coordlist':'gmos$data/CuAr_GMOS.dat','fwidth':6,'nsum':50,
'function':'chebyshev','order':4,
'fl_inter':'no','logfile':'gswaveLog.txt','verbose':'no'
}
for e in epochs:
arcFiles = fs.fileListQuery(dbFile, fs.createQuery('arc', qd[e]), qd[e])
if len(arcFiles) > 0:
gmos.gsreduce(','.join(str(x) for x in arcFiles),
bias='MCbias'+e, **arcPars)
gmos.gswavelength(','.join('gs'+str(x) for x in arcFiles),
**wavePars)
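If you want to check the quality of the solutions without re-running the task interactively, the fit statistics are recorded in the log. The following sketch assumes the RMS values appear on lines containing “RMS” in gswaveLog.txt.
# Scan the gswavelength log for the reported RMS of each fit.
with open('gswaveLog.txt') as log:
    for line in log:
        if 'RMS' in line.upper():
            print(line.rstrip())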
Spectral Flat-fields¶
Multiple GCAL spectral flat-fields were obtained for each science target to account for flexure in the instrument, and for the small offsets along the slit that were applied to ameliorate the effects of charge traps and other detector artifacts. Thus, every flat-field exposure must be reduced and normalized individually.
Ordinarily the fit to the flat-field response should be done interactively. For this tutorial the best fit function and order have been determined. If you do the fits interactively, at the prompt (below a blank plotting window) enter a middle row (e.g., 2300 for a full-frame RoI) to perform the fit; enter <CR> (return/enter) when finished with a fit. Note that for this epoch of flats, emission lines must be masked.
Caution
GCAL flats obtained prior to August 2004 (GMOS-N) and September 2004 (GMOS-S) suffer from emission lines near 692.6 nm and 694.1 nm, originating from a fluorescent surface inside the Gemini calibration system. The affected columns must be identified and set to 1.0 in the normalized Flat-field MasterCal.
The flat-field exposures are never taken in N&S mode, but the science data were. Since the flat-field will be applied to the exposure before sky subtraction, the flats must in effect be “doubled” (and further trimmed) to apply to the science data by setting the fl_double flag.
gmos.gireduce.unlearn()
gmos.gsflat.unlearn()
sciFlatFlags = {
'fl_over':'yes','fl_trim':'yes','fl_bias':'yes','fl_dark':'no',
'fl_fixpix':'no','fl_oversize':'no','fl_vardq':'yes','fl_fulldq':'no',
'rawpath':'./raw','fl_inter':'no','fl_detec':'yes',
'fl_double':'yes','nshuffle':1536,
'function':'spline3','order':'5,5,17',
'logfile':'gsflatLog.txt','verbose':'no'
}
# Response curve fitting should be done interactively.
for e in epochs:
flatFiles = fs.fileListQuery(dbFile,fs.createQuery('gcalFlat',qd[e]),qd[e])
for f in flatFiles:
outFile = 'fgs' + f
gmos.gsflat(f, outFile, bias='MCbias'+e, **sciFlatFlags)
# Mask emission lines.
ext = outFile + '.fits[sci,2]'
if e == 'Oct':
iraf.imreplace(ext+'[830:852,*]', 1.)
else:
iraf.imreplace(ext+'[840:865,*]', 1.)
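A quick way to confirm the masking worked is to compute statistics over the replaced columns, which should now be exactly 1.0. This sketch checks the last flat processed: ext is left over from the loop above, and [840:865] is the region masked for the non-Oct epochs.
# The mean, min, and max of the masked columns should all equal 1.
iraf.imstatistics(ext + '[840:865,*]', fields='image,mean,min,max')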
Standard Stars¶
The standard stars we will use for sensitivity calibration in this tutorial are BD \(+28^{\circ}4211\) and Hiltner 600; multiple exposures were obtained for each.
Note
Although one standard star was observed during each epoch, not all exposures had the same RoI, slit size, or central wavelength settings as the science targets. One standard, BD \(+17^{\circ}4708\), is problematic because it is variable and a close binary. It is left as an exercise to incorporate other standard star data into the sensitivity calibration.
Flat-fields for Standards¶
The exposures of standard stars are seldom obtained in N&S mode, so the flats created above for the science targets are not suitable. However, the basic reductions have already been performed, so all that is necessary is to use a couple of flats near the epoch of each standard star to create appropriate MasterCal Flat-fields. The following dictionary associates the target names (in the headers) with the applicable Arc, the magic “iraf name”, and the applicable flat-field exposures.
stdPars = {
'BD+28d4211':{'arc':'gsN20031028S0152','irafName':'bd284211',
'flat':['N20031028S0153','N20031028S0164']},
'Hiltner600':{'arc':'gsN20040130S0126','irafName':'hilt600',
'flat':['N20040128S0048','N20040129S0115']}
}
The gsflat parameters are set below to skip basic processing, and proceed with combining multiple flat-field exposures and normalizing. Note again the need to mask emission lines in the GCAL flat: see Spectral Flat-fields above for details.
stdFlatFlags = {
'fl_over':'no','fl_trim':'no','fl_bias':'no','fl_dark':'no',
'fl_fixpix':'no','fl_oversize':'no','fl_vardq':'yes','fl_fulldq':'no',
'rawpath':'./','fl_inter':'yes','fl_detec':'yes',
'function':'spline3','order':'5,5,17',
'logfile':'gsflatLog.txt','verbose':'no'
}
for targ,p in stdPars.items():
flatFiles = p['flat']
outFile = targ + '_flat'
gmos.gsflat(','.join('gs'+str(x) for x in flatFiles), outFile,
**stdFlatFlags)
    # Mask emission lines (the region depends on the epoch of the flats).
    ext = outFile + '.fits[sci,2]'
    if targ == 'BD+28d4211':   # flats from the Oct epoch
        iraf.imreplace(ext+'[830:852,*]', 1.)
    else:                      # flats from the Jan epoch
        iraf.imreplace(ext+'[840:865,*]', 1.)
Basic Processing: Std Stars¶
The individual standard star exposures are first processed through overscan, bias, and flat-field corrections.
qd['Std'] = {'use_me':1,
'Instrument':'GMOS-N','CcdBin':'2 1','RoI':'Full',
'Disperser':'R400+_%','CentWave':740.0,'AperMask':'NS0.75arcsec',
'Object':'BD+28d4211',
'DateObs':'*',
}
gmos.gsreduce.unlearn()
stdFlags = {
'fl_over':'yes','fl_trim':'yes','fl_bias':'yes','fl_gscrrej':'no',
'fl_dark':'no','fl_flat':'yes','fl_gmosaic':'yes','fl_fixpix':'yes',
'fl_gsappwave':'yes','fl_oversize':'no',
'fl_vardq':'no','fl_fulldq':'no','rawpath':'./raw',
'fl_inter':'no','logfile':'gsreduceLog.txt','verbose':'no'
}
for e in epochs:
    bias = 'MCbias'+e
    for targ in stdPars.keys():
        # Use a copy of the epoch query so qd[e] is not altered for later steps.
        qs = copy.deepcopy(qd[e])
        qs['Object'] = targ
        flat = targ + '_flat'
        stdFiles = fs.fileListQuery(dbFile, fs.createQuery('sciSpec', qs),
                                    qs)
if len(stdFiles) > 0:
gmos.gsreduce(','.join(str(x) for x in stdFiles), bias=bias,
flat=flat, **stdFlags)
The paired Standard exposures are offset 10 arcsec along the slit from each other, as indicated by the YOFFSET keyword values. Thus, the spectra must be wavelength calibrated and extracted before they can be combined. Note that gsextract will locate the spectrum automatically and fit the trace along the dispersion direction with a low-order function, although this is best done interactively.
gmos.gstransform.unlearn()
transPars = {
'fl_vardq':'no','interptype':'linear','fl_flux':'yes',
'logfile':'gstransLog.txt'
}
gmos.gsextract.unlearn()
extrPars = {
'apwidth':3.,'fl_inter':'no','find':'yes',
'trace':'yes','tfunction':'spline3','torder':'5','tnsum':20,
'background':'fit','bfunction':'chebyshev','border':2,
'long_bsample':'-700:-150,150:700',
'fl_vardq':'no','logfile':'gsextrLog.txt'
}
gemtools.gemcombine.unlearn()
combPars = {
'combine':'average','reject':'none','logfile':'combLog.txt',
'verbose':'no'
}
for targ,p in stdPars.items():
qs = qd['Std']
qs['Object'] = targ
stdFiles = fs.fileListQuery(dbFile, fs.createQuery('sciSpec', qs), qs)
    gmos.gstransform(','.join('gs'+str(x) for x in stdFiles),
wavtraname=p['arc'], **transPars)
gmos.gsextract(','.join('tgs'+str(x) for x in stdFiles), **extrPars)
gemtools.gemcombine(','.join('etgs'+str(x) for x in stdFiles), targ,
**combPars)
Sensitivity Calibration¶
Now both stars may contribute to the sensitivity calibration. You should use the improved extinction function for Mauna Kea: download mk_extinct.txt to your working directory. It is also preferable to use the more finely sampled monochromatic magnitudes for BD \(+28^{\circ}4211\) from ESO: see Standards List for details. If you download this calibration file, you will also want to make a local copy of the monochromatic magnitudes for Hiltner 600, as the gsstandard task requires both files to reside in the same directory.
stars = ','.join(str(x['irafName']) for s,x in stdPars.items())
sensPars = {
'fl_inter':'yes','starname':stars,'caldir':'./',
'observatory':'Gemini-North','extinction':'./mk_extinct.txt',
'function':'chebyshev','order':6,'verbose':'no','logfile':'gsstdLog.txt'
}
gmos.gsstandard(','.join(str(x) for x in stdPars.keys()), 'std.txt', 'sens',
                **sensPars)
The result of the sensitivity calibration is shown below.
Science Targets¶
All of the many science targets are faint, so multiple long exposures were obtained for each one, with small spatial offsets along the slit to ameliorate detector artifacts. Thus, a relationship must be established to associate each exposure with its custom flat-field, and also with the Arc exposure for each target. There is no automatic, robust way to do this: just inspect the observing log to define the relationships.
The following dictionary captures the associations for two of the science targets, although a small container class would work at least as well. A more advanced approach would be to store these associations in a new table in the observing log database, as sketched after the dictionary below.
sciPars = {'d034.waa7_10': {'arc':'N20031028S0152',
'sci':['N20031028S0154','N20031028S0157'],
'flat':['N20031028S0153','N20031028S0158']}}
# Add another target to the dictionary:
sciPars['f017.wdd9_10'] = {'arc':'N20031220S0192',
'sci':['N20031220S0194','N20031220S0196','N20031220S0199'],
'flat':['N20031220S0193','N20031220S0197','N20031220S0198']}
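The sketch below illustrates the more advanced approach mentioned above: persisting these associations in the observing log database. The table name assoc and its columns are our own choice, not something created by obslog.py.
import sqlite3
db = sqlite3.connect(dbFile)
db.execute('''CREATE TABLE IF NOT EXISTS assoc
              (obj TEXT, arc TEXT, sci TEXT, flat TEXT)''')
for targ, p in sciPars.items():
    # Pair science and flat exposures by index, as the reduction loop does.
    for sci, flat in zip(p['sci'], p['flat']):
        db.execute('INSERT INTO assoc VALUES (?,?,?,?)',
                   (targ, p['arc'], sci, flat))
db.commit()
db.close()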
Basic Processing¶
Now perform basic processing, except for flat-field, on epochs that include the science targets of interest.
gmos.gsreduce.unlearn()
sciFlags = {
'fl_over':'yes','fl_trim':'yes','fl_bias':'yes','fl_gscrrej':'no',
'fl_dark':'yes','fl_flat':'no','fl_gmosaic':'no','fl_fixpix':'no',
'fl_gsappwave':'no','fl_cut':'no','fl_oversize':'no',
'fl_vardq':'yes','fl_fulldq':'no','ovs_flinter':'no',
'rawpath':'./raw','fl_inter':'no','logfile':'gnsreduceLog.txt','verbose':'no'
}
epochs = ['Oct','NovDec']
for e in epochs:
    bias = 'MCbias'+e
    qs = qd[e]
sciFiles = fs.fileListQuery(dbFile, fs.createQuery('sciSpec', qs), qs)
gmos.gsreduce(','.join(str(x) for x in sciFiles), bias=bias,
dark='MCdark', **sciFlags)
Advanced Processing¶
Next, loop through each target and each science exposure to apply the associated flat-field, determine the relative shifts and combine the offset exposures, then mosaic the extensions and apply the wavelength calibration.
Note
Regrettably, gnscombine reads image offsets from a file rather than from the image header. The y-value can be derived from the DTAX keywords in an exposure sequence (which are captured in the observing log), where the value for the first exposure must be subtracted from that of each exposure. For example, if a three-exposure sequence had offset values of 100, 90, and 110, the y-offsets written to the file would be 0, -10, and +10.
First, set the parameters for each of the tasks:
qd['Sci'] = {'use_me':1,
             'Instrument':'GMOS-N','CcdBin':'2 1','RoI':'Full',
             'Disperser':'R400+_%','CentWave':740.0,'AperMask':'NS0.75arcsec',
             'Object':'%.%',
             'DateObs':'*',
            }
# Use the generic science query for the target loop below.
qs = qd['Sci']
sciFlags.update({'fl_over':'no','fl_trim':'no','fl_bias':'no','fl_dark':'no',
'fl_flat':'yes','rawpath':'./','outpref':'f'})
sciFlags2 = {
'fl_over':'no','fl_trim':'no','fl_bias':'no','fl_gscrrej':'yes',
'fl_dark':'no','fl_flat':'no','fl_gmosaic':'yes','fl_fixpix':'yes',
'fl_gsappwave':'yes','fl_cut':'yes','fl_oversize':'no',
'fl_vardq':'yes','fl_fulldq':'no','fl_inter':'no','outpref':'m',
'rawpath':'./','logfile':'gnsreduceLog.txt','verbose':'no'
}
gmos.gnscombine.unlearn()
gnscombPars = {
'fl_vardq':'yes','logfile':'gemcombineLog.txt','verbose':'no'
}
gmos.gstransform.unlearn()
transPars = {
'fl_vardq':'yes','interptype':'linear','fl_flux':'yes',
'logfile':'gstransLog.txt'
}
The following loop will process each exposure, combine those that correspond to a single target, and apply the remaining calibrations.
for targ,pars in sciPars.items():
qs['Object'] = targ + '%'
sciFiles = fs.fileListQuery(dbFile, fs.createQuery('sciSpec', qs), qs)
shifts = fs.offsetQuery(dbFile, fs.createQuery('offset',qs), qs)
if len(sciFiles) > 0:
# Now we have a sequence of exposures for a single target.
# Compute & write to a file the Y-offsets relative to the first exposure.
shft0 = shifts[0][1]
shiftFile = targ+'_shft.txt'
        with open(shiftFile, 'w') as o:
for s in shifts:
line = '0 %d %s\n' % (int(np.rint(s[1]-shft0)), s[0])
o.write(line)
# Apply the matched (by index) flat-field.
for f in sciFiles:
flat = 'fgs' + pars['flat'][pars['sci'].index(f)]
gmos.gsreduce('gs'+f, flat=flat, **sciFlags)
targFile = targ + '_comb.fits'
# Combine, mosaic, wavelength calibrate, extract, flux calibrate.
gmos.gnscombine(','.join('fgs'+str(x) for x in sciFiles), shiftFile,
targFile, **gnscombPars)
gmos.gsreduce(targFile, **sciFlags2)
gmos.gstransform('m'+targFile, wavtraname='gs'+pars['arc'], **transPars)
Now to extract the spectra. This must be done twice, for the “positive” and the “negative” spectra, since gsextract does not support extraction from multiple apertures. (We will use the output name prefix to distinguish the two: ep for positive extractions and en for negative.) The automated trace option has been turned off because these objects are faint and the task is a little unstable. After extraction, the two spectra must be combined and calibrated.
gmos.gsextract.unlearn()
extrPars = {
'apwidth':4.,'find':'yes','trace':'no',
'tfunction':'chebyshev','torder':2,'tnsum':100,'tstep':100,
'background':'none',
'fl_inter':'yes','fl_vardq':'no','logfile':'gsextrLog.txt'
}
gemtools.gemarith.unlearn()
gemarithPars = {
'fl_vardq':'no','verbose':'no','logfile':'gemarithLog.txt'
}
gmos.gscalibrate.unlearn()
calibPars = {
'extinction':'./mk_extinct.txt','fl_ext':'yes','fl_scale':'no',
'sfunc':'sens','fl_vardq':'no','logfile':'gscalibLog.txt'
}
for targ in sciPars.keys():
targFile = targ + '_comb'
outPos = 'eptm'+ targFile
outNeg = 'entm'+ targFile
    # Extract interactively: mark the positive trace for the first
    # extraction and the negative trace for the second.
    gmos.gsextract('tm'+targFile, outimages=outPos, **extrPars)
    gmos.gsextract('tm'+targFile, outimages=outNeg, **extrPars)
gemtools.gemarith(outPos, '-', outNeg, 'etm'+targFile, **gemarithPars)
gmos.gscalibrate('etm'+targFile, **calibPars)
A spectrum of target f017.wdd9_10 is shown below.
The resulting spectrum could be improved with some tweaks to the reduction:
Perform the extraction using variance weighting. Unfortunately the variance is derived from a noise model by the underlying task, rather than using the actual VAR extension.
Derive and apply a correction for the telluric absorption.
Condition the spectrum in the regions of night sky lines, where the background subtraction leaves a noisy residual. This must be done on a target-by-target basis to avoid corrupting the signal from the source.
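To examine a final spectrum yourself, display it with splot. This is a minimal sketch, assuming gscalibrate's default output prefix c and the target names defined in sciPars above.
# Load the onedspec package and plot one calibrated spectrum.
iraf.noao(_doprint=0)
iraf.onedspec(_doprint=0)
iraf.splot('cetmf017.wdd9_10_comb.fits[sci,1]')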