Documentation

The EchoLocation manages analyses, curates and displays the results, and distributes them to the world. There is a lot going on under the hood and along the way, so let's dig in.

An overview of the GUANO ecosystem:

  1. Another experiment/telescope (e.g., LVK, CHIME, IceCube, Fermi) detects a candidate signal and distributes a prompt alert, either publicly to the world or privately to GUANO. The minimum requirement is that this signal has a timestamp associated with it, although other information is often also promptly sent. This alert must be prompt (within ~10 minutes of the trigger time) in order to successfully save the coincident data.
  2. The GUANO listener, at the Swift Mission Operations Center (MOC), receives the alert and takes two actions:
    1. GUANO autonomously commands the Swift spacecraft in low-latency, telling the Burst Alert Telescope (BAT) to dump the data around this signal time out of its ring buffer, save it to the onboard recorder, and mark it for high-priority downlink to the ground.
    2. It sends the alert to the EchoLocation, which adds a series of analysis jobs for this trigger to the job queue.
  3. Standing processes at two High-Performance Compute (HPC) clusters, Penn State's ROAR and NASA's Discover, receive assignments from the job queue and wait for the data to become available.
  4. Once the data is available, the analysis jobs are launched. These analyses take ~300-500 CPU hours per trigger and are parallelized on the clusters, so typical completion times are <2 hours (see latencies here).
  5. The clusters continuously communicate with the EchoLocation via API, updating status and results in real-time. The results are saved to a database, and the EchoLocation displays them on the web interface.
  6. If the results indicate a signal passing certain checks and thresholds, they trigger alerts distributed to the community, to the LVK, and Target of Opportunity (ToO) triggers to repoint Swift and other telescopes.
This entire loop is triggered >10 times per day, with EchoLocation managing many analyses simultaneously.

Trigger Reports:

What is a trigger report?

A trigger report is a summary of the results of the analysis of a single trigger. It is the main product of the EchoLocation and the primary way that the community interacts with its results. The trigger report hosts a variety of products, both search results and plots, used to interpret possible signals and their provenance in the context of the spacecraft, its environment, and data quality. We explain each product below and provide some commentary on the scope of its applicability and interpretation.

The Information Sidebar

Source Information:
  1. Trigger ID: Every trigger is uniquely identified by its ID number. This number is the timestamp of the trigger in units of Swift spacecraft time, also known as Mission Elapsed Time (MET). It is not corrected for spacecraft clock drift and reflects the time at which the astrophysical trigger occurred according to the spacecraft clock; a rough MET-to-UTC conversion is sketched after this list.
  2. Time: The time of the trigger, in UTC.
  3. Trigger Inst: The instrument that triggered the GUANO system to analyze this trigger, via an alert or otherwise. This is usually the instrument that detected the astrophysical signal. Any triggers that occur within the same second are collated into the same trigger object in EchoLocation, so there can be more than one triggering instrument.
  4. Name: If there is a name associated with this trigger, it is listed here. For the same reason as above there can be more than one name.
  5. Position: If the external trigger has an associated sky localization in the form of an uncertainty circle, we report the celestial coordinates of its center here. If the external trigger has a sky localization that cannot be approximated by a circle, it is not reported here, but it can be visualized in the sky map (explained later).
  6. Error: If the coordinates of the sky localization are reported, the corresponding radial error is shown here.
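
For convenience, the trigger ID (MET seconds) can be converted to an approximate UTC time. Below is a minimal sketch using astropy, assuming the standard Swift MET reference epoch of 2001-01-01 00:00:00 UTC; it neglects spacecraft clock drift and leap-second subtleties, so it is good only to within a few seconds, and the trigger ID value is made up.

from astropy.time import Time, TimeDelta

# Rough MET -> UTC conversion (sketch only; neglects spacecraft clock drift
# and leap-second subtleties). The trigger ID below is an arbitrary example.
trigger_id = 700000000.0  # Swift MET, seconds since the reference epoch
swift_epoch = Time('2001-01-01T00:00:00', scale='utc')
approx_utc = swift_epoch + TimeDelta(trigger_id, format='sec')
print(approx_utc.isot)
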
GUANO Information:
For every trigger on EchoLocation, GUANO determines whether to attempt to command the spacecraft to save the event-level data around this time. This is almost always attempted, except when the trigger arrives too late (>30 minutes post trigger time) or the location of the associated signal is confidently known to be below the Earth limb with respect to the spacecraft at trigger time (i.e., no flux could reach Swift). Most triggers result in attempted GUANO commands, and currently the success rate in retrieving the event-level data is >90%; however, failures can occur for a variety of reasons.
  1. Status: This lists the status of the GUANO command/data. It can display the following values:
    1. No Data: there is no data, either because no command was attempted, or because it failed.
    2. Executed: the command was executed onboard the spacecraft, but the data is not yet available.
    3. Data Received: the data is available on the ground for analysis.
  2. Obs ID: If the data has been received, the Observation ID associated with the data, indicating where it can be found at the Swift Data Center, is shown here.
  3. Exposure: If the data has been received, the exposure time of the data is shown here. This will typically be 90 s or 200 s, centered on the trigger time, but can be shorter or longer in some circumstances.
BAT Observability:
This panel lists information relevant to determining what area of the sky BAT was sensitive to at the trigger time.
  1. BAT Coverage: Not Currently Used
  2. Boresight RA/Dec: Not Currently Used
  3. Boresight Roll: Not Currently Used
  4. Geocenter RA/Dec: Not Currently Used
  5. Earth Radius: Not Currently Used

Raw binned time-series

BAT produces various data products. Among them are the raw binned time-series: the summed counts across the entire detector in each of 4 energy channels, binned into 64 ms and 1.6 s time-series. These are produced onboard and therefore are not cleaned of noisy detectors, glitches, or other artifacts, nor are they background subtracted. For these reasons they can often exhibit large fluctuations that can mimic GRBs. They are typically not suitable for analysis without deep expertise in the BAT instrument. They are provided here for completeness, and because they are most often the first data product that is available.

Spacecraft Position History

This map provides context for the local environment around Swift at the trigger time, as this can affect the background levels and the interpretation of possible signals in the data. The Swift orbital track is shown in blue, with the spacecraft position at the trigger time marked with a Swift icon. The map is color-coded by the McIlwain L-number (L-shell), which roughly corresponds to the local particle flux environment at that position. In red we show the South Atlantic Anomaly (SAA) region, where the particle flux interacting with the spacecraft is highest. BAT typically does not record data when passing through this region, and on approach to and exit from it, background levels will be strongly elevated.

Skyplot
This skymap is used to determine the area of the sky, and of the signal region, that BAT was sensitive to at the trigger time.
The region shaded blue is the area of the sky that was occulted by the Earth, from the perspective of Swift, at the trigger time. In grey we indicate the BAT coded field of view at the trigger time: the area of highest sensitivity, within which arcminute localization of signals is possible. However, signals can be detected from anywhere on the unocculted sky. If a HEALPix skymap describing the localization area associated with the external trigger is available, it is also plotted, along with containment contours.

Attitude history
This plot shows the attitude history of the spacecraft, i.e., its orientation with respect to the sky. This is important for determining the area of the sky that BAT was sensitive to at the trigger time. In addition, an unstable attitude can cause the background levels to fluctuate and can produce artifacts in the data that can be mistaken for signals. Further, times of unstable attitude (typically due to a slew, ~15% of the time) are not suitable for the NITRATES likelihood analysis and will therefore not have NITRATES results or data products.


NITRATES results and data products:

The NITRATES analysis and data products are too complex to describe comprehensively here. We will provide a brief overview of the most important products, and their interpretation. The full details can be found in the NITRATES paper.

Pipeline Status

This panel shows the status of the NITRATES analysis pipeline. This analysis is computationally heavy and can take up to 500 CPU hours per trigger. It consists of many stages, and this panel helps to track the progress of the analysis and to determine when the results may be available or whether there are any issues.
  1. Data Available: The UTC timestamp at which NITRATES acquired the data and was therefore ready to begin the analysis.
  2. Time Bins: The number of time bins that pass the seeding stage and will be analyzed. This is described further in the Full Rate Results below.
  3. Square Seeds: The number of spatial positions inside the FOV (IFOV) that pass the seeding stage and will be analyzed. This is described further in the Split Rate Results below.
  4. Total Seeds: The total number of seeds that will pass to the full analysis.
  5. OFOV: The fraction (in %) of the out of FOV jobs that have completed.
  6. IFOV: The fraction (in %) of the in FOV jobs that have completed.
  7. Last Updated: The timestamp of when the NITRATES analysis last communicated with and updated the EchoLocation.

Full Rate Results

NITRATES searches a ±20 s window around the trigger time. Within this window it searches the data on 8 different timescales [0.128, 0.256, 0.512, 1.024, 2.048, 4.096, 8.192, 16.384] seconds, with bins slid across the data in steps of 1/4 of the bin duration. This results in >1000 different time bins being searched. Performing the full NITRATES likelihood analysis on all of these time bins is computationally prohibitive, so a seeding stage is performed first. The full rate panel shows a waterfall plot, where the x axis is the time relative to the trigger time and the y axis is the timescale of each time bin. All time bins that pass our threshold are highlighted in color, where the color scale reflects the SNR of each bin. Typically, a significant candidate, like a GRB pulse, appears in the full rate plot as a cluster of rectangles in the shape of an upside-down triangle (which resembles a waterfall). The table directly below lists all the time bins that pass the threshold; these are subsequently used for the split rate analysis.
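
As an illustration of the binning scheme described above, here is a minimal sketch (not pipeline code; the variable names are ours) that enumerates the (start time, duration) pairs searched:

import numpy as np

# Enumerate the searched time bins: 8 durations, slid across a +/-20 s window
# around the trigger in steps of 1/4 of the bin duration (illustrative only).
timescales = [0.128, 0.256, 0.512, 1.024, 2.048, 4.096, 8.192, 16.384]  # seconds
t_min, t_max = -20.0, 20.0

time_bins = []
for dur in timescales:
    step = dur / 4.0
    starts = np.arange(t_min, t_max - dur + step / 2.0, step)
    time_bins.extend((t0, dur) for t0 in starts)

print(f'{len(time_bins)} candidate time bins')  # well over 1000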

Split Rate Results

After NITRATES identifies the temporal seeds, these are fed to the Split Detector Rates Analysis, which identifies the spatial seeds, using a finer sampling of the IFOV positions and a coarser one for the OFOV positions. Once the seeding is complete, NITRATES initiates the full likelihood analysis, performed on a 3x3 grid of spectral parameters consisting of the photon index gamma and the peak energy E_peak. Each seed has an associated likelihood ratio test statistic Lambda. For the IFOV likelihood search we show a sky map where each point is a spatial seed and its color is the value of sqrt(Delta Lambda), where Delta Lambda is the difference with respect to the point with maximum Lambda (see the NITRATES paper for the full definition of Lambda). Darker colors indicate points closer to the peak of the likelihood. Similarly, for the OFOV likelihood search we show a sky map where each point is a spatial seed and its color is the value of log_10(2 Delta LLH), where Delta LLH is defined in the NITRATES paper. Again, darker colors indicate points closer to the maximum of the likelihood, with the maximization restricted to the OFOV seeds. The table below each sky plot lists the seeds and related information, ranked in descending order of test statistic.
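
As an illustration of the IFOV color scale described above (a sketch only; the Lambda values here are made up):

import numpy as np

# Color each spatial seed by sqrt(Delta Lambda), the square root of the
# difference between the maximum Lambda over all seeds and its own Lambda.
lam = np.array([105.2, 103.9, 98.7, 92.1])  # illustrative per-seed Lambda values
delta_lam = lam.max() - lam
color_vals = np.sqrt(delta_lam)
print(color_vals)  # seeds with the smallest values lie closest to the likelihood peak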

Top Results

The final table reports, among all the analyzed IFOV and OFOV seeds, the most significant ones, ranked according to the test statistic TS. From the likelihood analysis, three important quantities are reported here:
  1. TS: the split rate test statistic. The larger the value, the more significantly the signal rises above the background.
  2. DLLH_peak: an indicator of how peaked the likelihood is for that time bin. Higher values indicate higher confidence in the sky position of the candidate.
  3. DLLH_out: an indicator of how confident we are that the candidate is located inside rather than outside the FOV.
The top results table also reports the RA and Dec of each top bin, as well as the best-fit spectral parameters, the duration of the bin (dur), and the start time of the bin (dt) with respect to the trigger time.

Interpreting the 'Top Results' table of NITRATES candidates:

  1. TS > 8: confident detection of a signal.
  2. DLLH_out < 0: The source is confidently outside the BAT FOV.
  3. DLLH_out > 10 and DLLH_peak > 10: The source is confidently inside the BAT FOV, localized to ~5 arcminute precision. In the best case, all the top results will have the same coordinates.
  4. DLLH_out > 10 and DLLH_peak > 5: inside the BAT FOV, with mild confidence in the position.
  5. DLLH_out > 0 and DLLH_peak < 5: inconclusive; neither an in-FOV nor an out-of-FOV origin, nor an arcminute position, can be claimed.

Note: Always pay attention to the classification column. The candidate can be a significant signal, but not of astrophysical origin or not associated with the external trigger. If TS > 8 and the classification is not set, please contact the admin.
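
As an illustration, the rules of thumb above can be encoded in a small helper (a sketch only; the function name and return strings are ours and are not part of the NITRATES pipeline):

def classify_top_result(ts, dllh_peak, dllh_out):
    # Encode the interpretation thresholds listed above (illustrative only;
    # combinations not covered by the rules of thumb fall through to the
    # inconclusive case).
    if ts <= 8:
        return 'no confident detection'
    if dllh_out < 0:
        return 'confident detection, outside the BAT FOV'
    if dllh_out > 10 and dllh_peak > 10:
        return 'confident detection, inside the BAT FOV, arcminute localization'
    if dllh_out > 10 and dllh_peak > 5:
        return 'confident detection, inside the BAT FOV, position less certain'
    return 'confident detection, FOV side and position inconclusive'

print(classify_top_result(ts=12.3, dllh_peak=14.0, dllh_out=25.0))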


Probability Skymaps

If a significant time window is found in the analysis, then inference will be performed to calculate the probability density of the source location across the sky in the form of a Multi-Order Coverage (MOC) skymap. The URL where the skymap can be found is presented in the Probability Skymap table. The skymap will also be distributed inside a GCN Kafka notice. Code snippets can be found below in order to receive, parse, and make basic calculations with these skymaps. The localization areas of these skymaps can vary greatly, spanning from a single circle of a few arcminutes to over 10,000 square degrees. Many skymaps will have probability distributed both inside and outside the coded field of view of BAT. Due to the coded mask, inside the coded field of view (FoV) the probability density will fluctuate on the scale of the PSF (~22 arcminutes), forming many separate peaks across the FoV. Outside the FoV, the probability density will fluctuate much more slowly, with the most accurately localized bursts that originate from outside the coded FoV having 90% areas of ~200 square degrees.

Listening for GCN Kafka notices and writing probability skymap

Largely stolen from https://emfollow.docs.ligo.org/userguide/tutorial/receiving/gcn.html#receiving-and-parsing-notices
        
from base64 import b64decode
from io import BytesIO
import json

from astropy.io import fits
from gcn_kafka import Consumer

def parse_notice(record):
    record = json.loads(record)
    # Parse sky map if there
    if 'healpix_file' in record.keys():
        skymap_str = record.pop('healpix_file')
        # Decode, parse skymap, and write to file
        fname = 'skymap.fits'
        skymap_bytes = b64decode(skymap_str)
        skymap = fits.open(BytesIO(skymap_bytes))
        skymap.writeto(fname, overwrite=True)
        print('Skymap written to ', fname)
    # Get coordinates if single location
    if 'ra' in record.keys():
        ra = record.get('ra')
        dec = record.get('dec')
        err = record.get('ra_dec_error')
        print('Has single point localization')
        print(f'ra, dec: {ra}, {dec}')
        print(f'error = {err} deg')
    


consumer = Consumer(client_id='fill me in', client_secret='fill me in')
consumer.subscribe(['gcn.notices.swift.bat.guano'])

while True:
    for message in consumer.consume():
        parse_notice(message.value())


Reading skymap from url or file and calculating basic statistics

mhealpy is used in this example but ligo.skymap can also be used for MOC skymaps
            
import mhealpy as mhp
import numpy as np
from astropy import units as u

def moc_prob_dens2cred_levs(moc_skymap):
    # Takes in an mhealpy MOC probability density map (per steradian)
    # and returns an mhealpy MOC map with the credible level at each pixel.

    pdens_map = np.copy(moc_skymap.data)
    # Probability contained in each pixel = density * pixel area (sr)
    prob_map = pdens_map * moc_skymap.pixarea().to_value(u.sr)
    # Rank pixels from highest to lowest density and accumulate probability
    inds_sort = np.argsort(pdens_map)[::-1]
    cl_map = np.zeros_like(pdens_map)
    cl_map[inds_sort] = np.cumsum(prob_map[inds_sort])
    cl_moc = mhp.HealpixMap(data=cl_map, uniq=moc_skymap.uniq)

    return cl_moc


fname = 'skymap.fits' # file_name or url    
prob_dens_moc = mhp.HealpixMap.read_map(fname, field=1, density=True)
# find credible level at each pixel
cl_moc = moc_prob_dens2cred_levs(prob_dens_moc)
# find credible level at given coord
ra, dec = 180.0, 0.0
pix = cl_moc.ang2pix(ra, dec, lonlat=True)
cl = cl_moc[pix]
# find 90% area
area_90 = np.sum(cl_moc.pixarea()[cl_moc.data<0.9])
print(area_90.to(u.deg**2))
# find integrated probability in a circle on the sky
ra_cent, dec_cent = 180.0, 0.0
circ_rad = np.radians(1.0) # 1 deg circle
# unit vector toward the circle center (RA/Dec in degrees)
ra_rad, dec_rad = np.radians(ra_cent), np.radians(dec_cent)
vec = np.array([np.cos(dec_rad) * np.cos(ra_rad),
                np.cos(dec_rad) * np.sin(ra_rad),
                np.sin(dec_rad)])
pixels = prob_dens_moc.query_disc(vec, circ_rad)
# integrated probability = sum over pixels of density * pixel area (sr)
prob = np.sum((prob_dens_moc.data * prob_dens_moc.pixarea().to_value(u.sr))[pixels])
print(f'Integrated probability within 1 deg: {prob:.3f}')