Documentation
EchoLocation manages analyses, curates and displays the results, and distributes them to the world. There is a lot going on under the hood, so let's dig in.
Trigger Reports:
What is a trigger report?
A trigger report is a summary of the results of the analysis of a single trigger. It is the main product of EchoLocation, and the primary way that the community interacts with its results. The trigger report hosts a variety of products, both search results and plots, used to interpret possible signals and their provenance in the context of the spacecraft, its environment, and data quality. We will explain each product below, and provide some commentary on the scope of their applicability and interpretation.
The Information Sidebar
Source Information:
- Trigger ID: Every trigger is uniquely identified by its ID number. This number is the timestamp of the trigger, in units of Swift spacecraft time, also known as Mission Elapsed Time (MET). It is not corrected for spacecraft clock drift, and reflects the time that the astrophysical trigger occurred according to the spacecraft clock (see the MET-to-UTC sketch after this list).
- Time: The time of the trigger, in UTC.
- Trigger Inst: The instrument that triggered the GUANO system to analyze this trigger, via an alert or otherwise. This is usually the instrument that detected the astrophysical signal. Any triggers that occur within the same second are collated into the same trigger object in EchoLocation, so there can be more than one triggering instrument.
- Name: If there is a name associated with this trigger, it is listed here. For the same reason as above there can be more than one name.
- Position: If the external trigger has an associated sky localization in the form of an uncertainty circle, we report the celestial coordinates of its center here. If the external trigger has a sky localization that cannot be approximated with a circle, the localization is not reported here, but can be visualized in the sky map (explained later).
- Error: If the coordinates of the sky localization are reported, the corresponding radial error is shown here.
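As a quick illustration of the Trigger ID convention, here is a minimal sketch of converting an uncorrected MET to an approximate UTC time with astropy, assuming the standard Swift MET epoch of 2001-01-01 00:00:00 UTC; since no clock-drift (UTCF) correction is applied, the result is only approximate.

from astropy.time import Time, TimeDelta

# Swift MET counts seconds since the 2001-01-01T00:00:00 UTC epoch.
# The Trigger ID is an uncorrected MET, so this conversion is only
# approximate (no clock-drift/UTCF correction is applied).
SWIFT_MET_EPOCH = Time("2001-01-01T00:00:00", scale="utc")

def met_to_utc_approx(met_seconds):
    """Approximate UTC time for a Swift MET / Trigger ID."""
    return SWIFT_MET_EPOCH + TimeDelta(met_seconds, format="sec")

print(met_to_utc_approx(700000000.0).isot)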
GUANO Information:
For every trigger on EchoLocation, GUANO will determine whether to attempt to command the spacecraft to save the event-level data around this time. This will almost always be attempted, except for cases where the trigger arrives too late (>30 minutes post trigger time), or the location of the associated signal is confidently known to be below the Earth limb with respect to the spacecraft at trigger time (i.e., no flux could reach Swift). Most triggers result in attempted GUANO commands, and the current success rate in retrieving the event-level data is >90%, though commands can fail for a variety of reasons.
- Status: This lists the status of the GUANO command/data. It can display the following values:
- No Data: there is no data, either because no command was attempted, or because it failed.
- Executed: the command was executed onboard the spacecraft, but the data is not yet available.
- Data Received: the data is available on the ground for analysis.
- Obs ID: If the data has been received, the Observation ID associated with the data, indicating where it can be found at the Swift Data Center, is shown here.
- Exposure: If the data has been received, the exposure time of the data will be shown here. This will typically be 90 s or 200 s, and centered around the trigger time, but can be shorter or longer in some circumstances.
BAT Observability:
This panel lists information relevant to determining what area of the sky BAT was sensitive to at the trigger time.
- BAT Coverage: Not Currently Used
- Boresight RA/Dec: Not Currently Used
- Boresight Roll: Not Currently Used
- Geocenter RA/Dec: Not Currently Used
- Earth Radius: Not Currently Used
Raw binned time-series
BAT produces various data products. Among them are the raw binned time-series: the summed counts across the entire detector array in each of 4 energy channels, binned into 64 ms and 1.6 s time-series.
These are produced onboard, and therefore are not cleaned of noisy detectors, glitches, or other artifacts. They are also not background subtracted.
For these reasons they can often exhibit large fluctuations that can be similar to GRBs. They are typically not suitable for analysis without deep expertise in the BAT instrument.
They are provided here for completeness, and because they are most often the first data product that is available.
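As a rough illustration only, a raw binned time-series could be inspected with astropy as below. The file name and column layout here are assumptions for the sketch, not the actual product specification; check the real file headers for the correct layout.

from astropy.io import fits
import matplotlib.pyplot as plt

# Hypothetical file name and column names, for illustration only.
with fits.open("bat_raw_rates_64ms.lc") as hdul:
    data = hdul[1].data
    time = data["TIME"]      # assumed: bin times in MET seconds
    counts = data["COUNTS"]  # assumed: shape (n_bins, 4), one column per energy channel

# Plot each channel; remember these counts are not background
# subtracted and are not cleaned of noisy detectors or glitches.
for chan in range(counts.shape[1]):
    plt.plot(time - time[0], counts[:, chan], label=f"channel {chan}")
plt.xlabel("Time since start of file (s)")
plt.ylabel("Counts per bin")
plt.legend()
plt.show()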
Spacecraft Position History
This map provides context for the local environment around Swift at the trigger time, as this can affect the background levels, and interpretation of possible signals in the data.
The Swift orbital track is shown in blue, with the spacecraft position at the trigger time marked with a Swift icon. The map is color-coded by the McIlwain L-number (L-shell), which roughly corresponds to the local particle flux environment at that position.
In red we show the South Atlantic Anomaly region, where the particle flux interacting with the spacecraft is highest. BAT typically does not record data while passing through this region, and on approach to and exit from it, background levels are strongly elevated.
Skyplot
This skymap is used to determine the area of the sky, and of the signal region, that BAT was sensitive to at the trigger time.
The region shaded blue is the area of the sky that was occulted by the Earth, from the perspective of Swift, at the trigger time. In grey we indicate the BAT coded field of view at the trigger time, which is the area of highest sensitivity and within which arcminute localization of signals is possible. However, signals can be detected from anywhere on the unocculted sky.
If a HEALPix skymap describing the localization area associated with the external trigger is available, this will also be plotted, along with containment contours.
Attitude history
This plot shows the attitude history of the spacecraft, which is the orientation of the spacecraft with respect to the sky. This is important for determining the area of the sky that BAT was sensitive to at the trigger time.
In addition, an unstable attitude can cause the background levels to fluctuate, and can cause artifacts in the data that can be mistaken for signals.
Further, times of unstable attitude (typically due to a slew, ~15% of the time) are not suitable for analysis with the NITRATES likelihood analysis, and will therefore not have NITRATES results or data products.
NITRATES results and data products:
The NITRATES analysis and data products are too complex to describe comprehensively here. We will provide a brief overview of the most important products and their interpretation. The full details can be found in the NITRATES paper.
Pipeline Status
This panel shows the status of the NITRATES analysis pipeline. This analysis is computationally heavy, and can take up to 500 CPU hours per trigger.
It consists of many stages, and this panel helps to track the progress of the analysis, and determine when the results may be available, or if there are any issues.
- Data Available: The UTC timestamp at which NITRATES acquires the data and is therefore ready to begin the analysis.
- Time Bins: The number of time bins that pass the seeding stage, and will be analyzed. This will be described more in the Full Rate Results below.
- Square Seeds: The number of spatial positions inside the FOV (IFOV) that pass the seeding stage, and will be analyzed. This will be described more in the Split Rate Results below.
- Total Seeds: The total number of seeds that will pass to the full analysis.
- OFOV: The fraction (in %) of the out of FOV jobs that have completed.
- IFOV: The fraction (in %) of the in FOV jobs that have completed.
- Last Updated: The timestamp of when the NITRATES analysis last communicated with and updated the EchoLocation.
Full Rate Results
NITRATES searches in a ±20 s window around the trigger time. Within this window it searches the data on 8 different timescales ([0.128, 0.256, 0.512, 1.024, 2.048, 4.096, 8.192, 16.384] seconds), with the bins slid across the data in steps of 1/4 of the bin duration.
This results in >1000 different time bins that are searched. Performing the full NITRATES likelihood analysis on all of these time bins is computationally prohibitive, so a seeding stage is performed first.
The full rate panel shows a waterfall plot, with the time relative to the trigger time on the x axis and the timescale of each time bin on the y axis. All time bins that pass our threshold are highlighted, with the color reflecting the SNR of each bin. A significant candidate, like a GRB pulse, typically appears in the full rate plot as a cluster of rectangles in the shape of an upside-down triangle (hence "waterfall"). The table directly below lists all the time bins that pass the threshold; these are subsequently used for the split rate analysis.
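A minimal sketch of the time-bin enumeration described above (the exact edge handling inside NITRATES may differ):

import numpy as np

# Durations (s) of the 8 search timescales; each is slid across the
# +/-20 s window in steps of 1/4 of the bin duration.
durations = [0.128, 0.256, 0.512, 1.024, 2.048, 4.096, 8.192, 16.384]
t_start, t_stop = -20.0, 20.0

time_bins = []
for dur in durations:
    step = dur / 4.0
    starts = np.arange(t_start, t_stop - dur + step / 2.0, step)
    time_bins.extend((t0, dur) for t0 in starts)

print(f"{len(time_bins)} candidate time bins")  # a few thousand, consistent with >1000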
Split Rate Results
After NITRATES identifies the temporal seeds, they are fed to the Split Detector Rates Analysis, which identifies the spatial seeds, sampling the IFOV positions more finely and the OFOV positions more coarsely. Once the seeding is complete, NITRATES initiates the full likelihood analysis, performed over a 3x3 grid of spectral parameters: the photon index gamma and the peak energy E_peak. Each seed has an associated likelihood ratio test statistic Lambda.
For the IFOV likelihood search we show a sky map in which each point is a spatial seed and the color is the value of sqrt(Delta Lambda), where Delta Lambda is the difference with respect to the seed with maximum Lambda (see the paper for the full definition of Lambda). Darker colors indicate points closer to the peak of the likelihood. Similarly, for the OFOV likelihood search we show a sky map in which each point is a spatial seed and the color is the value of log_10(2 Delta LLH) (see the paper for the full definition of Delta LLH). Here too, darker colors indicate points closer to the maximum of the likelihood, with the maximization restricted to the OFOV seeds. The table below each sky plot lists the seeds and their associated information, ranked in descending order of test statistic.
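The plotted color values follow from the per-seed statistics in a simple way; here is a hedged sketch using placeholder arrays (see the NITRATES paper for the precise definitions of Lambda and Delta LLH):

import numpy as np

# Placeholder per-seed statistics, for illustration only.
lambda_ifov = np.array([61.7, 60.1, 55.2, 40.3])  # IFOV likelihood ratio statistic Lambda
llh_ofov = np.array([-101.9, -102.4, -110.6])     # OFOV log-likelihoods

# IFOV map color: sqrt(Delta Lambda), the drop relative to the best IFOV seed.
delta_lambda = lambda_ifov.max() - lambda_ifov
ifov_color = np.sqrt(delta_lambda)  # 0 (darkest) at the best seed

# OFOV map color: log10(2 * Delta LLH), the drop relative to the best OFOV seed.
delta_llh = llh_ofov.max() - llh_ofov
ofov_color = np.log10(2.0 * np.clip(delta_llh, 1e-3, None))  # floor avoids log10(0)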
Top Results
The final table reports, among all the analyzed IFOV and OFOV seeds, the most significant ones, ranked according to the test statistic TS. From the likelihood analysis, three important quantities are reported here:
- TS: the split rate test statistic. The larger the value, the more significantly the signal rises above the background.
- DLLH_peak: an indicator of how peaked the likelihood is for that time bin. Higher values indicate higher confidence in the sky position of the candidate.
- DLLH_out: an indicator of how confident we are that the candidate is located inside or outside the FOV.
The top results table also reports the RA and Dec of each top bin, the optimal spectral parameters, the duration of the bin (dur), and the start time of the bin relative to the trigger time (dt).
Interpreting the 'Top Results' table of NITRATES candidates:
- TS > 8: confident detection of a signal.
- DLLH_out < 0: The source is confidently outside the BAT FOV.
- DLLH_out > 10 and DLLH_peak > 10: The source is confidently inside the BAT FOV, localized with ~5 arcminute precision. In the best-case scenario, all the top results have the same coordinates.
- DLLH_out > 10 and DLLH_peak > 5: The source is inside the BAT FOV, with mild confidence in the position.
- DLLH_out > 0 and DLLH_peak < 5: Inconclusive; neither in-FOV nor out-of-FOV can be claimed, nor an arcminute position.
Note: Always pay attention to the classification column. The candidate can be a significant signal, but not of astrophysical origin or not associated with the external trigger. If TS > 8 and the classification is not set, please contact the admin.
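The rules above can be collected into a small helper for a first-pass reading of a 'Top Results' row; this is just a restatement of the thresholds listed here, not part of the NITRATES pipeline itself:

def interpret_top_result(ts, dllh_out, dllh_peak):
    """First-pass reading of a 'Top Results' row using the thresholds above."""
    if ts <= 8:
        return "no confident detection (TS <= 8)"
    if dllh_out < 0:
        return "detection; source confidently outside the BAT FOV"
    if dllh_out > 10 and dllh_peak > 10:
        return "detection; confidently inside the BAT FOV, ~5 arcmin localization"
    if dllh_out > 10 and dllh_peak > 5:
        return "detection; inside the BAT FOV, mild confidence in the position"
    if dllh_out > 0 and dllh_peak < 5:
        return "detection; in/out of FOV and arcminute position inconclusive"
    return "detection; intermediate case, inspect the full results and classification"

print(interpret_top_result(ts=25.3, dllh_out=14.2, dllh_peak=11.8))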
Listening for GCN Kafka notices and writing probability skymap
Largely stolen from
https://emfollow.docs.ligo.org/userguide/tutorial/receiving/gcn.html#receiving-and-parsing-notices
from base64 import b64decode
from io import BytesIO
import json
from astropy.io import fits
from gcn_kafka import Consumer
def parse_notice(record):
    record = json.loads(record)

    # Parse and write out the sky map, if one is attached
    if 'healpix_file' in record:
        skymap_str = record.pop('healpix_file')
        # Decode the base64 payload, parse the sky map, and write it to file
        fname = 'skymap.fits'
        skymap_bytes = b64decode(skymap_str)
        skymap = fits.open(BytesIO(skymap_bytes))
        skymap.writeto(fname)
        print('Skymap written to', fname)

    # Report the coordinates, if there is a single point localization
    if 'ra' in record:
        ra = record.get('ra')
        dec = record.get('dec')
        err = record.get('ra_dec_error')
        print('Has single point localization')
        print(f'ra, dec: {ra}, {dec}')
        print(f'error = {err} deg')

consumer = Consumer(client_id='fill me in', client_secret='fill me in')
consumer.subscribe(['gcn.notices.swift.bat.guano'])

while True:
    for message in consumer.consume():
        parse_notice(message.value())
Reading a skymap from a URL or file and calculating basic statistics
mhealpy is used in this example but ligo.skymap can also be used for MOC skymaps
import mhealpy as mhp
import numpy as np
from astropy import units as u
def moc_prob_dens2cred_levs(moc_skymap):
    # Takes in an mhealpy MOC probability density map and returns an
    # mhealpy MOC map giving the credible level at each pixel
    pdens_map = np.copy(moc_skymap.data)
    inds_sort = np.argsort(pdens_map)[::-1]
    cl_map = np.zeros_like(pdens_map)
    cl_map[inds_sort] = np.cumsum((pdens_map * moc_skymap.pixarea())[inds_sort])
    cl_moc = mhp.HealpixMap(data=cl_map, uniq=moc_skymap.uniq)
    return cl_moc

fname = 'skymap.fits'  # file name or URL
prob_dens_moc = mhp.HealpixMap.read_map(fname, field=1, density=True)

# find the credible level at each pixel
cl_moc = moc_prob_dens2cred_levs(prob_dens_moc)

# find the credible level at a given coordinate
ra, dec = 180.0, 0.0
pix = cl_moc.ang2pix(ra, dec, lonlat=True)
cl = cl_moc[pix]

# find the 90% credible area
area_90 = np.sum(cl_moc.pixarea()[cl_moc.data < 0.9])
print(area_90.to(u.deg**2))

# find the integrated probability in a circle on the sky
ra_cent, dec_cent = 180.0, 0.0
circ_rad = np.radians(1.0)  # 1 deg circle
vec = mhp.ang2vec(ra_cent, dec_cent, lonlat=True)
pixels = prob_dens_moc.query_disc(vec, circ_rad)
prob = np.sum((prob_dens_moc.data * prob_dens_moc.pixarea())[pixels] / u.steradian)