
eSDO Phase A Report

This document can be viewed as a PDF.
Title: eSDO Phase A Report: Deliverables
Date: 30 September 2005
Authors: E.Auden, W. Chaplin, L. Culhane, Y. Elsworth, A. Fludra, V. Graffagnino, R. Harrison, M. Smith, M. Thompson, T. Toutain, L. van Driel-Gesztelyi, S. Zharkov

I. Phase A Summary Report

eSDO Phase A Summary Report

This document is also available as a PDF.
eSDO Summary Report
Elizabeth Auden
30 September 2005

Introduction

PPARC has funded the eSDO project to make data and algorithms from the Solar Dynamics Observatory (SDO) available to the UK solar community through the virtual observatory. The project is funded for three years, beginning on 1 October 2004 and ending on 30 September 2007. Elizabeth Auden is the eSDO technical lead / project manager, and the four developers are Vito Graffagnino, Mike Smith, Thierry Toutain, and Sergei Zharkov. The project is advised by seven scientists: Len Culhane, Bill Chaplin, Yvonne Elsworth, Andrzej Fludra, Richard Harrison, Michael Thompson, and Lidia van Driel-Gesztelyi.

The Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) instruments on board SDO will produce ~ 2 TB of raw data per day, and the high-level pipeline data products will be approximately one tenth of that volume. Four UK Solar Physics groups have Co-Investigator involvement in the SDO AIA and HMI investigations: MSSL, Rutherford Appleton Laboratory, and the Universities of Birmingham and Sheffield. The huge SDO data volume requires special measures to ensure effective data handling:

  • Local and global helioseismology specialist algorithms and feature recognition procedures for AIA and HMI images
  • Development of visualization techniques and summary data to allow high speed searches of the SDO databases
  • Implementation of AstroGrid software and close coordination with the Virtual Solar Observatory at the US SDO data centre hosted at Stanford University
  • Deployment of a UK SDO data centre to provide access to science data products, catalogues and thumbnail images

The eSDO Phase A ends on 30 September 2005; the deliverables generated in this initial year of research describe how solar algorithms, data centres, and data visualization will be achieved in two years of development during Phase B. The six deliverables achieved at the end of Phase A are outlined below. At the end of this summary report is a synopsis of workpackages for corresponding Phase B deliverables, followed by hyperlinks to the Phase A formal deliverables. The remainder of this report presents a broad overview of the conclusions reached during the Phase A period of the eSDO grant on how best to achieve the deliverables of Phase B.

  • eSDO 1111: List of solar algorithms that eSDO institutions will develop and deploy as grid services. Completed 1 April 2005
  • eSDO 1121: Proposed solutions to each of the 11 algorithms detailed in eSDO 1111. Completed 30 September 2005.
  • eSDO 1131: Plan for integrating algorithms as grid services with AstroGrid, the JSOC pipeline, and SolarSoft.
  • eSDO 1211: Plans for quicklook products and visualization techniques including catalogues, thumbnail, gallery and movie generation, and the SDO streaming tool. Completed 30 September 2005
  • eSDO 1311: Design plans for implementing the UK data centre and integrating resource requests with the US data centre. Completed 1 April 2005
  • eSDO 1321: Design plans for integrating the UK data centre and, where possible, the US data centre with AstroGrid. Completed 30 September 2005

Algorithms

Solar Algorithms

The four institutions of the eSDO project will design and implement 11 solar algorithms covering three disciplines: image and feature recognition, global helioseismology, and local helioseismology. A full list of these algorithms with institutional responsibilities can be viewed as deliverable eSDO 1111: Solar Algorithm List. Individual technical write-ups for each algorithm constitute the second deliverable, eSDO 1121: Solar Algorithm Proposed Solutions.

MSSL and RAL will develop image and feature recognition algorithms for use with AIA and HMI data. These algorithms include coronal loop recognition, non-linear magnetic field extrapolation, helicity computation, small event detection, differential emission measure (DEM) computation, and coronal mass ejection (CME) dimming region recognition. The universities of Birmingham and Sheffield will concentrate on global and local helioseismology respectively; the Birmingham group will implement mode frequency analysis and mode asymmetry analysis algorithms while the Sheffield group will develop subsurface flow analysis, perturbation map generation and computation of local helioseismology inversion algorithms.

Algorithm Distribution

Solar algorithms developed by eSDO will be made available to UK users in three ways. First, each algorithm will be deployed as an AstroGrid CEA web service hosted in the UK. AstroGrid CEA web services can be accessed through the AstroGrid portal, workbench and workflow systems. Second, all suitable algorithms will be wrapped in IDL for SolarSoft distribution through the MSSL gateway. Initial trials for wrapping C modules as IDL procedures have been documented at WrappingCInIDL. Third, algorithms will be deployed in the Joint Science Operations Center (JSOC) pipeline systems at Stanford University and Lockheed Martin. The JSOC team will designate some pipeline modules to run automatically, and other modules will be invoked by user requests. UK SDO co-investigators and their teams will be able to access the JSOC pipeline directly through accounts at Stanford University. The eSDO project will investigate installation of a CEA application that will allow authorized, registered AstroGrid users to execute JSOC pipeline commands. For more detail, please view deliverable eSDO 1131: Algorithm Integration.

Data Visualization

The huge volume of SDO data collected every day makes it imperative to search the data archive efficiently. Visualization techniques such as streaming tools, catalogues, thumbnail extraction and movie generation aid scientists in archive navigation. More details about quicklooks and visualization are available in deliverable eSDO 1211: Quicklook and Visualization Plan.

Quicklook Products

Users will have access to three types of quicklook products: image thumbnails, catalogues, and movies. Software developed in conjunction with Rick Bogart at Stanford University will extract thumbnail images from AIA and HMI FITS files, add labels and instrument metadata to the images, and save the images as GIFs. These images can then be stored in the US or UK. A thumbnail catalogue will store basic scientific metadata about each extracted image, such as observation time, wavelength, and product type: full disk AIA image, tracked active region AIA image, HMI line-of-sight magnetogram, HMI 20-minute averaged filtergram, or HMI dopplergram. Users will interact with a web browser GUI to specify a start time, end time, cadence and data product; image galleries or MPEG movies will then be generated on the fly and displayed in the web browser.
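To illustrate the thumbnail extraction step, the following is a minimal sketch, assuming Python with numpy, astropy and Pillow available; the FITS keywords (DATE-OBS, WAVELNTH), the percentile scaling and the output size are illustrative placeholders rather than the actual eSDO or JSOC conventions.

  # Minimal thumbnail-extraction sketch (assumed libraries: numpy, astropy, Pillow).
  # File names, FITS keywords and output size are illustrative placeholders.
  import numpy as np
  from astropy.io import fits
  from PIL import Image

  def make_thumbnail(fits_path, out_path="thumb.gif", size=(256, 256)):
      with fits.open(fits_path) as hdul:
          data = hdul[0].data.astype(float)
          header = hdul[0].header

      # Simple byte scaling between the 1st and 99th percentiles.
      lo, hi = np.percentile(data, [1, 99])
      scaled = np.clip((data - lo) / (hi - lo), 0, 1)
      img = Image.fromarray((scaled * 255).astype(np.uint8), mode="L")
      img = img.resize(size)
      img.save(out_path)                        # GIF thumbnail for the catalogue

      # Basic scientific metadata for a thumbnail catalogue entry.
      return {
          "obs_time": header.get("DATE-OBS"),   # observation time (keyword assumed)
          "wavelength": header.get("WAVELNTH"), # wavelength (keyword assumed)
          "thumbnail": out_path,
      }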

Catalogues

In addition to the thumbnail catalogue, two science catalogues will be generated from eSDO algorithms. One catalogue will store statistical information about small solar events and CME dimming regions; it will be updated continuously as the two relevant algorithms automatically process AIA data in the UK. A second catalogue will provide monthly helioseismology information generated from the mode parameters analysis algorithm; this catalogue will be generated in the US using eSDO software. Both catalogues will be searchable through the AstroGrid system via instances of DSA.

Visualization Techniques

The SDO streaming tool, originally conceptualized by Phil Scherrer, will allow users to view HMI and AIA science products in a web browser and then interact with a GUI to pan and zoom in both space and time. A user will open a web browser, navigate to the SDO streaming tool, and then specify a start time, stop time, cadence, and data product. Three types of SDO products will be available: HMI line-of-sight magnetograms, HMI continuum maps, and AIA images from 10 channels. The user will be able to zoom in spatially from a full disk, low resolution image to a full resolution display of a solar area. Zoomed images will be able to be panned in eight directions. Similarly, once a user has selected a cadence (for instance, 1 image per hour), data products matching that cadence will be displayed on the screen; users will be able to "zoom" in time by increasing or decreasing this cadence, while rewind, fast forward and pause facilities will allow users to "pan" in time.

Development of this tool will be based on wavelet compression algorithms developed by Rasmus Larsen at Stanford University. This type of compression allows data to be streamed efficiently because only the pixels pertaining to the user's requested resolution are sent over the network. Pixels around the image boundary are also held in a buffer so that the user can pan smoothly in space. The streaming tool will be refined by intelligently streaming data from the disc cache closest to the user.
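The resolution-on-demand idea can be illustrated with a simple image pyramid. The sketch below (Python, numpy assumed) uses block averaging as a stand-in for the wavelet compression scheme; the tile size and number of levels are arbitrary.

  # Simplified multi-resolution tile selection sketch (numpy only).
  # Block-averaging pyramid used as a stand-in for wavelet compression.
  import numpy as np

  def build_pyramid(image, levels=4):
      """Return a list of images, each half the resolution of the previous."""
      pyramid = [image]
      for _ in range(levels - 1):
          im = pyramid[-1]
          h, w = (im.shape[0] // 2) * 2, (im.shape[1] // 2) * 2
          im = im[:h, :w]
          # Average 2x2 blocks to halve the resolution.
          pyramid.append(0.25 * (im[0::2, 0::2] + im[1::2, 0::2] +
                                 im[0::2, 1::2] + im[1::2, 1::2]))
      return pyramid

  def get_tile(pyramid, level, row, col, tile=256):
      """Return only the pixels for the requested resolution level and tile."""
      im = pyramid[level]
      return im[row * tile:(row + 1) * tile, col * tile:(col + 1) * tile]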

Data Centres

The primary SDO data centre will be based in the US, and users can access the data directly through the Virtual Solar Observatory (VSO). A second data centre will be established in the UK. Rather than providing a full data mirror, this data centre will cache recent data products and popular data requests. This local data cache will provide UK scientists with more rapid access to SDO data and allow SDO data searches to be included in AstroGrid workflows.

UK Data Centre

The UK data centre will provide searchability and fast access to SDO data for users within the UK. The UK data centre will require three software components to enable this functionality. A MySQL database will hold searchable metadata for SDO science products, and an instance of the AstroGrid DataSet Access (DSA) module will make the database searchable through AstroGrid workflows. An instance of an AstroGrid Common Execution Architecture (CEA) application will transfer data from the UK data centre to the user's virtual storage area or local storage. More details of the UK data centre implementation can be found in deliverable eSDO 1311: Data Centre Implementation Plan.

The eSDO development and science advisory teams investigated several data centre storage models with input from the JSOC and UK solar community. A "light footprint" has been chosen for development; this data centre model will use a ~30 TB disc cache to store a 60 day rolling cache of the most recent AIA science products along with a "true cache" of HMI and older AIA science products requested by UK users through the AstroGrid system. When a UK user requests an HMI or AIA science product through the AstroGrid system, the request will be sent to the UK data centre. If the data is not available there, the request will be redirected through an AstroGrid / VSO interface, and the relevant data products will be returned to the UK in export format (FITS, JPEG, VOTable, etc). The data will be cached in the UK data centre when a copy is transferred to the user. A fuller explanation of this data centre model along with descriptions of other models investigated is provided in deliverable eSDO 1321: Data Centre Integration Plan.
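The request flow of the "light footprint" model can be sketched as a simple cache lookup. The following Python sketch is illustrative only; CACHE_DIR, fetch_product and fetch_from_vso are hypothetical names, not part of the AstroGrid or VSO interfaces.

  # Sketch of the UK data centre cache lookup (all names hypothetical).
  import os, shutil

  CACHE_DIR = "/data/esdo_cache"   # placeholder path for the ~30 TB disc cache

  def fetch_product(product_id, fetch_from_vso):
      """Return a local path to the requested science product.

      fetch_from_vso is a callable standing in for the AstroGrid / VSO
      interface; it downloads the product and returns a temporary path.
      """
      cached = os.path.join(CACHE_DIR, product_id)
      if os.path.exists(cached):              # cache hit: serve the UK copy
          return cached
      tmp_path = fetch_from_vso(product_id)   # cache miss: redirect to the US centre
      shutil.copy(tmp_path, cached)           # cache the product as it is delivered
      return cached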

US Data Centre

The US data centre will be maintained by the JSOC team at Stanford University. VSO will be the front end for user requests to the JSOC data centre, so eSDO development of an AstroGrid / VSO interface will permit AstroGrid users in any country to include VSO searches of SDO data in AstroGrid workflows. Colleagues at Stanford are investigating three AstroGrid components to aid grid integration of SDO data and tools: DSA for data searching, CEA for data transfers and access to pipeline commands, and the JES workflow engine to drive pipeline execution flows. eSDO developers will advise and support the installation and configuration of these modules; during Phase A, a reference implementation of all AstroGrid modules was completed on the eSDO server at MSSL. The AstroGrid installation was documented along with a science user guide as resources for the UK solar community and JSOC scientists. Please see links to these documents in the appendix.

Network latency tests will be undertaken between Stanford University, MSSL, UCL and RAL early in Phase B. The results of these tests will inform development of the AstroGrid / VSO bridge, configuration of the CEA application that will transfer AIA and HMI data from the JSOC data centre to the UK data centre, and the way data will be streamed by the SDO visualization tool described in the Data Visualization section above. Details of eSDO involvement with the US data centre can be read in deliverable eSDO 1321: Data Centre Integration.

Community Support

The needs of the UK solar community drive the eSDO project. Development and grid accessibility of data centres, solar algorithms and visualization techniques will be refined as feedback from the community shapes the requirements of these three areas of work. During Phase A, community feedback has been solicited through an article and questionnaire in UK Solar News, interviews with solar research groups, meetings with the JSOC team and SDO co-investigators, and collaboration with the AstroGrid development team. In addition, eSDO development plans have been disseminated to the UK solar community, the US solar community and the UK grid community through conferences and workshops. A full list of eSDO attendances can be viewed online at http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/ConferencesIn2004 and http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/ConferencesIn2005.

At the beginning of Phase B, the full Phase A report will be distributed to key members of the UK solar community to elicit final feedback on the eSDO research phase. This report, along with all eSDO documents, is permanently available online at http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO. As Phase B progresses, interaction with the community will be maintained in a number of ways. First, data centre development will continue in conjunction with the JSOC team, the ATLAS storage facility, and the AstroGrid project. Second, the eSDO team will attend an SDO algorithm workshop in February 2006 with JSOC developers; a number of parallel sessions will allow algorithm developers from the US, UK and elsewhere in Europe to collaborate on complex problems such as DEM computation and magnetic field extrapolation. The high performance computing centre at UCL will assist in enabling grid accessibility for algorithms that require parallel processing, and rigorous scientific testing of algorithms with test data from Solar-B and other instruments will occur in late 2006. Finally, eSDO developers will work closely with Stanford University scientists on the development of the SDO streaming tool.

Future Plans

The eSDO Phase B development stage will last from 1 October 2005 to 30 September 2007, and a post-launch support proposal will be completed by the eSDO consortium in early 2007. The following workpackages have been identified:

  • 2100 Solar Algorithms
    • 2110 Algorithm Coding
      • 2111 Completed code and grid interfaces for algorithm applications 30/06/06
      • 2112 Test script for scientific testing of algorithms 01/09/06
      • 2113 Completed tests with scientists' comments 22/12/06
      • 2114 Completed, refined code satisfying scientific testing results 30/03/07
    • 2120 Algorithm Grid Integration
      • 2121 Completed integration of solar algorithms with AstroGrid 31/05/07
      • 2122 Completed integration of solar algorithms with JSOC 31/07/07
      • 2123 Completed integration of solar algorithms with SolarSoft 30/09/07
  • 2200 Quicklook and Visualization
    • 2210 Quicklook Products
      • 2211 Completed application to extract labelled thumbnail images from FITS files 31/03/06
      • 2212 Completed web browser application to generate image galleries on the fly 30/06/06
      • 2213 Completed web browser application to generate movies on the fly 31/07/06
    • 2220 Catalogues
      • 2221 Completed database and DSA instance for thumbnail catalogue 31/08/06
      • 2222 Completed database and DSA instance for small event / CME dimming region catalogue 30/09/07
      • 2223 Completed database and DSA instance for helioseismology mode parameters catalogue 30/09/07
    • 2230 SDO Streaming Tool
      • 2231 Rapid prototype for streaming tool 30/06/06
      • 2232 Evaluation of rapid prototype functionality 31/07/06
      • 2233 Completed functionality to stream data from multiple caches 22/12/06
      • 2234 Completed streaming tool 30/06/07
  • 2300 Data Centres
    • 2310 UK Data Centre
      • 2311 Completed implementation of data centre on eSDO development server 22/12/06
      • 2312 Completed integration of SDO data centre with AstroGrid 30/09/07
    • 2320 US Data Centre
      • 2321 Completed network latency tests 31/03/06
      • 2322 Completed AstroGrid / VSO interface 30/09/06
      • 2323 Completed support for AstroGrid modules installed at JSOC data centre 30/09/07

Web References

Formal Deliverables

  1. eSDO 1111: Solar Algorithm List, http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/SolarAlgorithms1111
  2. eSDO 1121: Solar Algorithm Proposed Solutions, http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/SolarAlgorithms1121
  3. eSDO 1131: Algorithm Integration Plan, http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/AlgorithmIntegration1131
  4. eSDO 1211: Quicklook and Visualization Plan, http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/QuicklookVisualization1211
  5. eSDO 1311: Data Centre Implementation Plan, http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/DataCentreImplementation1311
  6. eSDO 1321: Data Centre Integration plan, http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/DataCentreIntegration1321

Appendices and Other Documents

  1. Phase A Report (Full) - http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/PhaseAReport
  2. Phase A Summary Report - http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/PhaseASummaryReport
  3. Phase B workpackages - http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/PhaseBWorkpackages
  4. AstroGrid User’s Tutorial - http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/AstroGridTutorials
  5. AstroGrid Installation / Configuration Tutorial - http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/AstrogridInstallationV11
  6. Wrapping C modules in IDL - http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/CallingCfromIDL

II. eSDO Algorithms

eSDO 1111: Solar Algorithm List

This document can be viewed as a PDF. Deliverable eSDO-1111
E.Auden, W. Chaplin, L. Culhane, Y. Elsworth, A. Fludra, R. Harrison, M. Thompson, L. van Driel-Gesztelyi
31 March 2005

Introduction

The following solar algorithms will be written and implemented as grid services. Each of the four institutions in the eSDO consortium will be responsible for two to four algorithms. A full description of each algorithm, including inputs, outputs, science and technical use cases, interfaces, and proposed solutions, will be included with deliverable eSDO-1121.

List of Algorithms

  1. Mode Parameters Analysis
  2. Mode Asymmetry Analysis
  3. Subsurface Flow Analysis
  4. Perturbation Map Generation
  5. Local Helioseismology Inversion Workbench
  6. Loop Recognition
  7. Magnetic Field Extrapolation
  8. Helicity Computation
  9. CME Dimming Region Recognition
  10. DEM Computation
  11. Small Event Detection
  12. Solar Rotation Profiling (if time permits)
  13. Oscillation Sources Analysis (if time permits)

Algorithm Detail

  1. Mode Parameters Analysis: (renamed from "mode frequency analysis" 23/09/05)
    • Primary Responsibility: University of Birmingham, Co-I Yvonne Elsworth and Bill Chaplin
    • Description: The mode frequency analysis algorithm will calculate mode parameters such as frequency, power, line width and variability. The analysis procedure will filter HMI time series data to isolate single oscillation modes or a family of modes. After a Fourier analysis is applied to these modes, fitting procedures will determine the mode parameters.
  2. Mode Asymmetry Analysis
    • Primary Responsibility: University of Birmingham, Co-I Yvonne Elsworth and Bill Chaplin
    • Description: This algorithm will measure the departure of the mode from a symmetric profile by applying simultaneous fitting over several modes to identify the noise contributions.
  3. Subsurface Flow Analysis
    • Primary Responsibility: University of Sheffield, Co-I Michael Thompson
    • Description: Using HMI tracked dopplergrams as input, this algorithm will measure subsurface flows in the upper convection zone, producing travel-time difference maps for different skip distances and orientations, subsurface flow maps under the tracked region obtained by inversion, and combinations of flow maps under tracked regions into synoptic maps. These data products will aid understanding and prediction of active region evolution and the evolution of atmospheric magnetic structures.
  4. Perturbation Map Generation
    • Primary Responsibility: University of Sheffield, Co-I Michael Thompson
    • Description: This algorithm will measure wavespeed anomalies in the upper convection zone and, in particular, under active regions to understand active region and sunspot subsurface structures and flux emergence. This analysis will generate travel-time anomaly maps for different skip distances, subsurface wavespeed anomalies under the tracked region obtained by inversion, and combinations of wavespeed anomaly maps under tracked regions into synoptic maps.
  5. Local Helioseismology Inversion Workbench
    • Primary Responsibility: University of Sheffield, Co-I Michael Thompson
    • Description: The workbench will provide a front-end GUI for specifying and launching inversions of helioseismic data on local or remote machines. Subsurface reconstructions of flows or wavespeed anomalies will be obtained by inversion. The workbench will maintain queueing systems, data indices and inversion results; in addition, the user will be able to retrieve and display inversion results.
  6. Loop Recognition
    • Primary Responsibility: Mullard Space Science Laboratory, Co-I Len Culhane and Lidia van Driel-Gesztelyi
    • Description: Coronal loops observed by SDO will be identified in AIA images. Using an iterative approach, identified coronal loops will be matched with magnetic field line extrapolations.
  7. Magnetic Field Extrapolation
    • Primary Responsibility: Mullard Space Science Laboratory, Co-I Len Culhane and Lidia van Driel-Gesztelyi
    • Description: Using SDO magnetograms as the photospheric boundary condition, a fast Fourier transform will be used to calculate magnetic field strength and directionality at each point of an arbitrarily defined computational box. The algorithm will compute magnetic connectivities between opposite magnetic polarities and draw characteristic field lines.
  8. Helicity Computation
    • Primary Responsibility: Mullard Space Science Laboratory, Co-I Len Culhane and Lidia van Driel-Gesztelyi
    • Description: Using a series of SDO magnetograms as input, this procedure will compute the flux of magnetic helicity through the photospheric boundary. Magnetic field extrapolations and the derived magnetic connectivities will be used to improve the helicity flux results.
  9. CME Dimming Region Recognition
    • Primary Responsibility: Rutherford Appleton Laboratory, Co-I Richard Harrison and Andrzej Fludra
    • Description: Time series of difference images in coronal lines will be examined for changes of brightness (dimming). Visual detection of coronal dimming has proved successful in the past, but this automated algorithm will detect dimming regions over varying spatial scales, from approximately 45 heliographic degrees down to a fraction of an active region area.
  10. DEM Computation
    • Primary Responsibility: Rutherford Appleton Laboratory, Co-I Richard Harrison and Andrzej Fludra
    • Description: Calculation of the differential emission measure (DEM)1 requires calibrated line intensities and line emissivities as a function of temperature, or G(T) functions. The solar community has developed several DEM algorithms that use either iterative methods or gradient optimization methods with smoothing constraints. One of these existing algorithms, such as the DEM program included in the CHIANTI package, will be selected for eSDO based on current availability of software and ease of adaptation.
  11. Small Event Detection
    • Primary Responsibility: Rutherford Appleton Laboratory, Co-I Richard Harrison and Andrzej Fludra
    • Description: The algorithm will detect small-scale brightenings in AIA images of the corona, transition region and photosphere. Based on the EIT brightness detection work of Berghmans, Clette, and Moses2, this procedure will define a reference background emission, identify events through a scan of light curves of all pixels, and determine event dimensions in the temporal and spatial domains.
  12. Solar Rotation Profiling (if time permits)
    • Primary Responsibility: University of Sheffield
    • Description: HMI data will be used to develop solar rotation profiles as a function of depth and latitude for the internal Sun between the solar surface and the convective zone.
  13. Oscillation Sources Analysis (if time permits)
    • Primary Responsibility: University of Birmingham, Co-I Yvonne Elsworth and Bill Chaplin
    • Description: Beginning with the simple assumption that the mode asymmetry in velocity can be related to the location of the oscillation source, this algorithm will augment the oscillation source profile using noise analysis in both velocity and intensity.

References

  1. Withbroe, 1975, Solar Phys., 45, 301.
  2. Berghmans, Clette and Moses, 1998, A&A, 336, 103.

eSDO 1121: Solar Algorithm Proposed Solutions

This document can be viewed as a PDF.

Deliverable eSDO-1121
E.Auden, W. Chaplin, L. Culhane, Y. Elsworth, A. Fludra, R. Harrison, M. Thompson, L. van Driel-Gesztelyi
24 August 2005

The following documents will comprise eSDO deliverable 1121, formal algorithm descriptions.

eSDO 1121: Mode Parameters Analysis

This document can be viewed as a PDF.
Deliverable eSDO-1121: Mode Frequency Analysis
T. Toutain, Y. Elsworth, W. Chaplin
28 June 2005

Description

The aim of the algorithm is to apply to HMI data a helioseismic data analysis technique, the so-called optimal-mask technique1, developed by T. Toutain and A. Kosovichev (a member of the MDI and HMI teams). This technique makes it possible to clean a p-mode power spectrum around a given target mode, making the determination of its parameters (frequency, linewidth, power amplitude) more reliable. The usual techniques based on spherical-harmonic masks are known to produce "mode leakage" around the target mode, making determination of the parameters more difficult.

Once a cleaned power spectrum is obtained around the target mode, a standard likelihood-minimization fitting method3 with a Lorentzian profile model is applied to extract the parameters of the target mode. The Lorentzian profile is:

L(ν) = H / (1 + (ν − ν₀)² / (Γ/2)²)

The parameters are ν₀, the mode central frequency; Γ, the mode linewidth; and H, the power height. The central frequency is given by the position of the Lorentzian profile in the Fourier power spectrum, with a typical unit of mHz. The linewidth is given by the width of the Lorentzian profile at half height in the Fourier power spectrum; its unit is μHz. The power height is given by the height of the Lorentzian profile in the Fourier power spectrum; its typical unit for velocity observations is cm²/μHz.

The frequency range in the power spectrum for which mode central frequencies will be calculated is 1.0 - 5.0 mHz.
The algorithm increases in numerical complexity, and therefore decreases in stability, as the degree of the p-modes increases. At first, only low-degree modes will be targeted, so the range of degrees for which mode parameters will be calculated is between 0 and 5. This range can be extended to higher degrees as the algorithm proves stable and fast.
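To illustrate the Lorentzian model and the likelihood minimization described above, the following is a minimal sketch in Python (numpy and scipy assumed). The exponential-statistics likelihood follows the standard approach of Anderson et al. (reference 3 in the Support Information below); the background term, starting guesses and optimizer choice are illustrative assumptions.

  # Sketch of a maximum-likelihood Lorentzian fit to a p-mode power spectrum.
  # Assumes numpy and scipy; background model and starting guesses are illustrative.
  import numpy as np
  from scipy.optimize import minimize

  def lorentzian(nu, nu0, gamma, height):
      return height / (1.0 + ((nu - nu0) / (gamma / 2.0)) ** 2)

  def neg_log_likelihood(params, nu, power):
      nu0, gamma, height, background = params
      model = lorentzian(nu, nu0, gamma, height) + background
      # Power spectrum bins follow exponential (chi-squared, 2 d.o.f.) statistics,
      # so the negative log-likelihood is sum(ln M + P/M).
      return np.sum(np.log(model) + power / model)

  def fit_mode(nu, power, guess):
      """guess = (nu0, gamma, height, background) starting values."""
      result = minimize(neg_log_likelihood, guess, args=(nu, power),
                        method="Nelder-Mead")
      nu0, gamma, height, background = result.x
      return {"frequency": nu0, "linewidth": gamma,
              "height": height, "background": background}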

In real data the mode parameters are not known very precisely, so it might be difficult to quantify how well this method performs compared to the existing method implemented in the MDI peak-bagging pipeline. It is therefore useful, as a first step, to implement an algorithm to produce artificial helioseismic time series. It will then be possible to check how well the outputs from both the existing fitting routine and the new one compare to the parameters put into the artificial time series. We have now finished the development of such an algorithm, and artificial time series for modes of degree up to l = 5 have been produced. These time series will be tested in the MDI peak-bagging pipeline.

Because of the similarities between MDI and HMI data the algorithm will be "validated" on the existing MDI data.

Inputs

  • HMI dopplergrams.

Outputs

  • FITS file: table containing frequency, linewidth, and amplitude for each p mode of low degree and their associated error bars.

Test Data

  • Artificial helioseismic time series.
  • MDI helioseismic time series.

Tool Interface

  • commandline:
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK; users can call the web service to process datasets on the grid. Due to computational intensity, access to this service may be restricted to registered solar users of AstroGrid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the command line or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

  1. The user identifies a period of observation during which p-mode parameters will be calculated.
  2. The user obtains HMI dopplergrams covering this period of observation and constructs time series using optimal masks.
  3. Next, the mode frequency analysis algorithm is applied to the time series.
  4. The algorithm runs and returns a list of p-modes with the frequency, line width and amplitude for each mode (including error bars for each parameter).

Technical Use Case

  1. Apply the Optimal Mask technique to an HMI dopplergram:
    • First, divide the dopplergram into a number of bins (for example the binning of the LOI-proxy as defined for MDI data).
    • Next, model the signal of each image bin produced by a specific mode; average the signals coming from the Nk CCD pixels in each bin.
    • Finally, choose the optimal mask vector to maximize the target mode's signal while minimizing signals from other nearby modes, using the singular value decomposition (SVD)2 method to minimize contamination or leakage of other modes onto the target mode. This consists of the following steps:
      • Construct a local optimal mask: identify a window around the target mode, and filter out modes whose frequencies fall in this window.
      • Add a regularization term to model the noise contribution.
  2. Apply the preceding method to all the dopplergrams in the period of observation defined by the user, obtaining a time series for the target mode.
  3. Apply a discrete Fourier transform to convert the time series to a power spectrum. The resulting power spectrum should reflect the frequency of the selected mode while minimizing contributions from nearby modes.
  4. Call the mode frequency analysis algorithm with the target mode time series data as input. The mode parameter determination is based on a standard likelihood-minimization technique using a Lorentzian profile as a model for the mode profile in the power spectrum.
  5. Return the frequency, linewidth and amplitude of the target mode.

Quicklook Products

  1. Artificial time series (FITS format)
    These artificial time series are made using the following steps (a minimal sketch follows the list):
    • Make independent complex spectra for each simulated mode. Modes of degrees l = 0-20 and azimuthal orders |m| ≤ l are modelled using observed solar p-mode frequencies. Their linewidths and amplitudes are obtained by fitting a curve to existing measurements of these parameters.
    • Inverse Fourier transform each spectrum, obtaining a time series of complex amplitude for each mode.
    • Multiply, at each instant t, the complex amplitude of each mode by its corresponding spherical harmonic pattern of velocity projected onto the CCD pixels, and sum over all modes.
    • To obtain an artificial time series for a given mode (l, m), multiply the signal on each CCD pixel by the value of the complex conjugate of the corresponding spherical harmonic, and sum over all pixels.
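A minimal sketch of the first two steps above for a single mode is given below (Python, numpy assumed); the mode parameters, sampling and random-phase excitation model are illustrative, and the spherical-harmonic projection steps are omitted.

  # Sketch: build a Lorentzian-shaped complex spectrum with random phases for one
  # mode and inverse transform it to obtain an artificial time series.
  import numpy as np

  def artificial_mode_timeseries(nu0=3.0e-3, gamma=1.0e-6, n=86400, dt=60.0, seed=0):
      rng = np.random.default_rng(seed)
      freqs = np.fft.rfftfreq(n, dt)                       # frequency grid in Hz
      profile = 1.0 / (1.0 + ((freqs - nu0) / (gamma / 2.0)) ** 2)
      # Random-phase, Gaussian complex amplitudes give exponential power statistics.
      spectrum = np.sqrt(profile / 2.0) * (rng.standard_normal(freqs.size)
                                           + 1j * rng.standard_normal(freqs.size))
      return np.fft.irfft(spectrum, n)                     # time series of length n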

Support Information

  1. Toutain, T.; Kosovichev, A. G., 2000, "Optimal Masks for Low-Degree Solar Acoustic Modes", The Astrophysical Journal, Volume 534, Issue 2, pp. L211-L214.
  2. Kosovichev, A. G. 1986, Bull. Crimean Astrophys. Obs., 75, 19
  3. Anderson, E.R.; Duvall,T.L.; Jefferies, S.M., 1990, "Modeling of Solar Oscillation Spectra", The Astrophysical Journal, Volume 364, pp. 699-705

eSDO 1121: Mode Asymmetry Analysis

This document can be viewed as a PDF.
Deliverable eSDO-1121: Mode Asymmetry Analysis
T. Toutain, Y. Elsworth, W. Chaplin
28 June 2005

Description

The algorithm developed here is based on work done by the eSDO Birmingham group which has been submitted to MNRAS. It is based on a modification of the usual formula describing an asymmetrical p-mode profile, accounting for c, the so-called correlated-noise coefficient. This coefficient describes to what extent the excitation of a mode can be described by the solar background noise. We assume that the excitation function of a p-mode is the same as the background noise component having the same spatial pattern as the mode. In that case, the p-mode line profile in the power spectrum is no longer modelled with a Lorentzian profile but instead with the following formula:

P(ν) = L(ν)·[1 + 2c·√(n/H)] + n(ν)

where L is the usual Lorentzian profile:
L(ν) = H / (1 + (ν − ν₀)² / (Γ/2)²)

H, ν₀, Γ and n(ν) are the mode height, frequency, linewidth and background noise, respectively.
c is related to a, the usual asymmetry parameter as defined by Nigam and Kosovichev (see reference 1 below), by:
a = c·√(n/H)

Note: the asymmetry and the correlated-noise coefficient are parameters which define the shape of the p-mode profile, as do the usual parameters (frequency, linewidth, etc.). Their determination therefore requires the other parameters to be determined simultaneously. Hence "mode asymmetry analysis" is in effect part of "mode frequency analysis", which means that both have similar inputs and outputs, as described below.
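For reference, the modified profile above can be expressed directly as a small function; the sketch below (Python, numpy assumed) simply transcribes the formula as given, treating n as the background noise level at the frequencies of interest.

  # The asymmetric p-mode profile as defined above (sketch; numpy assumed).
  import numpy as np

  def asymmetric_profile(nu, nu0, gamma, height, noise, c):
      """P(nu) = L(nu) * (1 + 2*c*sqrt(n/H)) + n, with L the usual Lorentzian."""
      lorentzian = height / (1.0 + ((nu - nu0) / (gamma / 2.0)) ** 2)
      return lorentzian * (1.0 + 2.0 * c * np.sqrt(noise / height)) + noise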

Inputs

  • HMI dopplergrams.

Outputs

  • FITS file: table containing frequency, mode asymmetry for each p mode of low degree and their associated error bars.

Test Data

  • Artificial helioseismic time series.
  • MDI helioseismic time series.

Tool Interface

  • commandline:
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK; users can call the web service to process datasets on the grid. Due to computational intensity, access to this service may be restricted to registered solar users of AstroGrid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the command line or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

  1. The user identifies a period of observation during which p-mode parameters will be calculated.
  2. The user obtains HMI dopplergrams covering this period of observation and constructs time series using optimal masks.
  3. Next, the mode frequency analysis algorithm is applied to the time series.
  4. The algorithm runs and returns a list of p-modes with the frequency and asymmetry for each mode (including error bars for each parameter).

Technical Use Case

  1. Apply the Optimal Mask technique to an HMI dopplergram:
    • First, divide the dopplergram into a number of bins (for example the binning of the LOI-proxy as defined for MDI data).
    • Next, model the signal of each image bin produced by a specific mode; average the signals coming from the Nk CCD pixels in each bin.
    • Finally, choose the optimal mask vector to maximize the target mode's signal while minimizing signals from other nearby modes, using the singular value decomposition (SVD)2 method to minimize contamination or leakage of other modes onto the target mode. This consists of the following steps:
      • Construct a local optimal mask: identify a window around the target mode, and filter out modes whose frequencies fall in this window.
      • Add a regularization term to model the noise contribution.
  2. Apply the preceding method to all the dopplergrams in the period of observation defined by the user, obtaining a time series for the target mode.
  3. Apply a discrete Fourier transform to convert the time series to a power spectrum. The resulting power spectrum should reflect the frequency of the selected mode while minimizing contributions from nearby modes.
  4. Call the mode frequency analysis algorithm with the target mode time series data as input. The mode parameter determination is based on a standard likelihood-minimization technique using a Lorentzian profile as a model for the mode profile in the power spectrum.
  5. Return the frequency and asymmetry of the target mode.

Quicklook Products

  1. Artificial time series (FITS format)
    These artificial time series are made using the following steps:
    • Make independent complex spectra for each simulated mode. Modes of degrees l = 0-20 and azimuthal orders |m| ≤ l are modelled using observed solar p-mode frequencies. Their linewidths and amplitudes are obtained by fitting a curve to existing measurements of these parameters. Asymmetry is included via a correlation between the background noise and the excitation function of the mode.
    • Inverse Fourier transform each spectrum, obtaining a time series of complex amplitude for each mode.
    • Multiply, at each instant t, the complex amplitude of each mode by its corresponding spherical harmonic pattern of velocity projected onto the CCD pixels, and sum over all modes.
    • To obtain an artificial time series for a given mode (l, m), multiply the signal on each CCD pixel by the value of the complex conjugate of the corresponding spherical harmonic, and sum over all pixels.

Support Information

  1. Nigam, R.; Kosovichev, A. G., "Measuring the Sun's Eigenfrequencies from Velocity and Intensity Helioseismic Spectra: Asymmetrical Line Profile-fitting Formula ", 1998, Astrophysical Journal Letters v.505, p.L51

eSDO 1121: Subsurface Flow Analysis

This document can be viewed as a PDF.
Deliverable eSDO-1121: Subsurface Flow Analysis
S. Zharkov, M. Thompson
28 June 2005

eSDO 1121: Perturbation Map Generation

This document can be viewed as a PDF.
Deliverable eSDO-1121: Perturbation Map Generation
S. Zharkov, M. Thompson
28 June 2005

Description

The purpose of this algorithm is to measure and interpret the travel times of waves between any two locations on the solar surface in terms of the subsurface wave speed between them. An anomaly in the mean travel time contains the seismic signature of the wave speed perturbation within the proximity of the ray path. The wave speed perturbation is obtained by solving the inverse problem.

Inputs

  • HMI tracked and remapped Dopplergrams of rectangular regions of solar disk.

Outputs

  • Travel-time anomaly maps for different skip-distances
  • Subsurface wave speed maps under the tracked region
  • Synoptic wave speed anomaly maps

Tool Interface

  • commandline: input of HMI tracked and remapped Dopplergrams, output of FITS files containing maps and statistical data.
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK; users can call the web service to process datasets on the grid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the command line or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

The aim of the package is to measure sound speed perturbations in the upper convection zone, for use in understanding and predicting active region (AR) evolution and the evolution of atmospheric magnetic structures.

  1. First, the input tracked datacube is pre-processed and centre-to-annulus travel-time measurements are extracted by computing the temporal cross-covariance of the signal at a point on the solar surface with the signal at another point.
  2. The mean travel times are then obtained from the one-way travel times. These contain information about the solar interior wave speed perturbation, which is extracted by solving the equation
    http://twiki.mssl.ucl.ac.uk/twiki/pub/SDO/PhaseAPerturbationMapGeneration/formula1.GIF
  3. This is done by building wave speed sensitivity kernels in Rytov's approximation
    http://twiki.mssl.ucl.ac.uk/twiki/pub/SDO/PhaseAPerturbationMapGeneration/formula2.GIF
    and solving the first equation using the Multichannel Deconvolution method.
  4. The package will provide three main outputs: mean travel times, sensitivity kernels and inversion results.
  5. Travel times and sensitivity kernels can be used as input for Local Helioseismology Inversion package to refine the inversion. The travel time data could also be used on its own with sensitivity kernels and inversion routines generated or provided by the user.

Technical Use Case

The problem consists of three stages: data interpretation via filtering and cross-correlation, with estimation of travel times and travel-time means; building a forward model of the Sun to tie the surface data to subsurface features; and solution of the resulting inversion problem to recover the sound speed perturbation.

Data Interpretation:

  1. The input Doppler tracked and remapped datacube is Fourier transformed and filtered by applying a high-pass filter to remove convective motions, an f-mode filter (removing the f-mode ridge), and then a phase-speed filter to select waves that travel similar skip distances.
  2. From the filtered signal compute the cross-covariance function, suitably averaging to increase the signal-to-noise ratio.
  3. Travel-times of the waves travelling in each direction are obtained by fitting the averaged cross-covariance function with a smooth cross-covariance function computed from a solar model or from quiet Sun data. Travel-time means are then computed.
  4. The noise covariance matrix is estimated by measuring the rms travel time within a quiet Sun region.
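A minimal sketch of the filtering and cross-covariance steps for a single pair of surface points is given below, assuming Python with numpy; the cut-off frequency is a placeholder, and the f-mode and phase-speed filters of step 1 are omitted for brevity.

  # Sketch of steps 1-2 for one pair of surface points (numpy assumed).
  # Only a crude high-pass filter is shown; the f-mode and phase-speed filters
  # would be applied in the full 3D Fourier domain.
  import numpy as np

  def high_pass(signal, dt, cutoff_hz=1.5e-3):
      """Suppress low-frequency (convective) power in a single-pixel signal."""
      freqs = np.fft.rfftfreq(signal.size, dt)
      spectrum = np.fft.rfft(signal)
      spectrum[freqs < cutoff_hz] = 0.0
      return np.fft.irfft(spectrum, signal.size)

  def cross_covariance(signal_a, signal_b):
      """Temporal cross-covariance of two filtered signals, computed via FFT."""
      fa = np.fft.rfft(signal_a - signal_a.mean())
      fb = np.fft.rfft(signal_b - signal_b.mean())
      cc = np.fft.irfft(fa * np.conj(fb), signal_a.size)
      return np.fft.fftshift(cc) / signal_a.size   # ordered by time lag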

Forward Model: building travel time sensitivity kernels for the sound speed perturbation using the Rytov approximation

  1. Input: solar model, spatial resolution, skip-distance.
  2. For every pair of points in the output data cube, calculate ray paths and theoretical travel times for rays of the given frequency travelling to and from the surface points via the depth point. The horizontal translational invariance of the background model greatly reduces the amount of computing required.
  3. Using the ray travel times and ray path lengths, calculate approximate sensitivity kernels for each of the skip-distances in the Rytov approximation.
  4. Output: sound speed perturbation sensitivity kernels, 3D data cube.

Inversion:

To infer the sound speed perturbation from the observations, we invert the travel-time differences using the travel-time sensitivity kernels and a multichannel deconvolution algorithm.

  1. Input: Mean travel time for various skip distances, corresponding sensitivity kernels, Solar model, mean travel time error covariance matrix
  2. Perform 2D Fourier transforms of the input mean travel time perturbations and sensitivity kernels
  3. Calculate weight matrices for model vector using error covariance matrix and chosen trade-off parameter
  4. Calculate the Fourier transform of the estimated soundspeed perturbation
  5. Apply a layer-by-layer inverse Fourier transform to obtain the sound speed perturbation estimate.
  6. Output: sound speed perturbation as a function of depth and position, covariance matrix of the estimated model
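As an illustration of the Fourier-domain inversion steps above, the following sketch (Python, numpy assumed) solves a small regularized least-squares problem independently at each horizontal wavenumber; the array shapes and the simple Tikhonov weighting are illustrative assumptions rather than the eSDO implementation.

  # Sketch of a multichannel-deconvolution style inversion (numpy assumed).
  # At each horizontal wavenumber the skip-distance travel times are related to
  # the perturbation at each depth by a small linear system, solved here with
  # simple Tikhonov regularization.
  import numpy as np

  def mcd_invert(tt_maps, kernels_k, trade_off):
      """tt_maps: (n_skip, ny, nx) travel-time anomaly maps.
      kernels_k: (ny, nx, n_skip, n_depth) Fourier-transformed kernels.
      Returns an (n_depth, ny, nx) perturbation estimate."""
      n_skip, ny, nx = tt_maps.shape
      n_depth = kernels_k.shape[-1]
      d_k = np.fft.fft2(tt_maps, axes=(1, 2))          # data in the Fourier domain
      model_k = np.zeros((n_depth, ny, nx), dtype=complex)
      for j in range(ny):
          for i in range(nx):
              A = kernels_k[j, i]                      # (n_skip, n_depth)
              rhs = A.conj().T @ d_k[:, j, i]
              lhs = A.conj().T @ A + trade_off * np.eye(n_depth)
              model_k[:, j, i] = np.linalg.solve(lhs, rhs)
      # Layer-by-layer inverse transform gives the perturbation in real space.
      return np.real(np.fft.ifft2(model_k, axes=(1, 2)))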

Other methods considered: Regularised Least Squares, Optimally Localised Averages, LSQR, and Singular Value Decomposition.

Quicklook Products

none

Support Information

  1. Gizon, L., Birch, A.C., Local helioseismology, Living Reviews of Solar Physics, 2005
  2. Giles, P.M., Time-distance Measurements of Large Scale Flows in the Solar Convection Zone (Ph.D. Thesis)
  3. J.M. Jensen and F.P. Pijpers, Sensitivity kernels for time-distance inversion based on the Rytov approximation, Astronomy & Astrophysics, 412, 257-265 (2003)
  4. J.M. Jensen, Helioseismic Time-Distance Inversion, (Ph.D. thesis), 2001

eSDO 1121: Local Helioseismology Inversion

This document can be viewed as a PDF.
Deliverable eSDO-1121: Local Helioseismology Inversion
S. Zharkov, M. Thompson
28 June 2005

Description

The aim of this package is to provide a front-end interface for specifying and launching inversions of helioseismic data on local or remote machines; to maintain queueing systems and indices of data and inversion results; and to retrieve inversion results and allow their display and local storage.

Inputs

  • Travel times (mean/difference)
  • corresponding travel time sensitivity kernels
  • noise covariance matrix
  • regularisation option
  • trade-off parameter

Outputs

  • Solar interior inversion, error estimates, resolution estimates

Tool Interface

  • commandline: input of travel times, sensitivity kernels and inversion options, output of FITS files containing inversion results and error estimates.
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK; users can call the web service to process datasets on the grid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the command line or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

In general, the inverse problem in local helioseismology is ill posed and thus requires some "a priori" knowledge which is used to select the proper solution from the set of possible solutions given by the measurements. This normally consists of choosing an additional constraint which causes the chosen solution to be "smooth" in some way. The relative influence of such regularization is controlled by "regularization trade-off parameter". In general, model solutions are computed for a range of different values, and then the "best" model is chosen.

In addition, the degree to which any inversion method succeeds depends on the accuracy of the forward model, the noise level in the travel times, the depth and spatial scales of real variations in the Sun, the number and type of travel times available as input to the inversion, and the accuracy of the travel time covariance matrix. This package will provide the user with tools to apply different inversion methods with varying trade-off parameters to the data, enabling the user to choose the "best" results.

For the Perturbation Map Generation and Subsurface Flow Analysis algorithms, the choice of regularisation and trade-off parameter during the inversion step will be made automatically as a first approximation. The user will be able to take the travel times, error covariance matrix and travel time sensitivity kernels available as additional output from those algorithms and use them as input to further local helioseismology inversions to refine the inversion results.

  1. The user begins by inputting travel times, travel time sensitivity kernels, a regularisation option and the noise covariance matrix to the local helioseismology inversion.
  2. Trade-off parameter is chosen automatically for first approximation.
  3. Inversion is calculated and returned to the user with error estimates.
  4. Inversion can then be re-calculated with different trade-off parameters to refine inversion results.

Technical Use Case

Inversion methods considered for implementation: Multi channel deconvolution, Regularized Least Squares, Singular Value Decomposition, Subtractive Optimally Localized Averages.

Option 1: Multi Channel Deconvolution (Priority for implementation)

  1. Input: Travel times, corresponding sensitivity kernels, Solar model, noise covariance matrix, trade-off parameter, regularization operator
  2. Perform 2D Fourier transforms of the input mean travel time perturbations and sensitivity kernels
  3. Calculate weight matrices for model vector using chosen regularization and trade-off parameter.
  4. Calculate the Fourier transform of the inverse operator using the weight matrices and the Fourier transformed model data.
  5. Calculate the Fourier transform of the estimated sound speed perturbation.
  6. The sound speed perturbation estimate is found by inverse Fourier transforming the result of the previous step layer by layer.
  7. Calculate the covariance matrix of the estimated model using inverse operator and data error covariance matrix
  8. Calculate the resolution matrix using inverse operator and Fourier transformed data.
  9. output: Solar interior inversion, error estimates, resolution estimates

Option 2: Subtractive Optimally Localized Averages

  1. Input: travel times, corresponding sensitivity kernels, noise covariance matrix, trade-off parameter
  2. From the sensitivity kernels, noise covariance matrix and trade-off parameter, calculate the mode kernel cross-correlation matrix and calculate its inverse.
  3. Choose the Gaussian target function parameters, i.e. spatial location in 3D and spatial horizontal and vertical extent.
  4. For each data point calculate the cross-correlation vector of the mode kernels with the target function with an extra column used for constraint.
  5. From 2. and 4. calculate weight coefficients at each data point.
  6. Calculate the solar interior parameters using input data and weight coefficients.
  7. Calculate error bars and spatial resolution attained.

Option 3: Generalized Singular Value Decomposition

Here we are essentially minimizing the expression

||Ax − b||² + λ·||Lx||²,

where λ is the trade-off parameter, L is the regularization operator, and ||·|| denotes the L2 norm.

  1. Input: travel times, corresponding sensitivity kernels, Solar model, noise covariance matrix, trade-off parameter
  2. Using sensitivity kernels and top hat functions as basis calculate matrix A.
  3. Solve the generalized SVD for A and the regularization operator L, obtaining the generalized singular values of (A, L), the rank of A, and the decomposition matrices U, V and W⁻¹.
  4. Calculate W and filter factors.
  5. Using U, V, W, filter factors and non-zero singular values calculate the inversion from input data at each data point.
  6. Calculate resolution matrix and error estimates.

Regularization operator choice

The choice of the regularization operator will be the zero-th, first or second derivative of the model.
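A minimal sketch of minimizing the expression above for a fixed trade-off parameter is given below (Python, numpy assumed). It uses the first-derivative regularization operator mentioned above and solves the augmented least-squares system directly; the full implementation would use the generalized SVD as described.

  # Sketch: solve min ||Ax - b||^2 + lambda ||Lx||^2 by stacking the system
  # (numpy assumed; L is the first-derivative regularization operator).
  import numpy as np

  def first_derivative_operator(n):
      """Simple forward-difference matrix of shape (n-1, n)."""
      L = np.zeros((n - 1, n))
      for i in range(n - 1):
          L[i, i], L[i, i + 1] = -1.0, 1.0
      return L

  def regularized_lstsq(A, b, trade_off):
      n = A.shape[1]
      L = first_derivative_operator(n)
      A_aug = np.vstack([A, np.sqrt(trade_off) * L])
      b_aug = np.concatenate([b, np.zeros(L.shape[0])])
      x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
      return x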

Quicklook Products

none

Support Information

  1. B.H. Jacobsen, I. Moller, J.M. Jensen and F. Efferso, Multichannel Deconvolution, MCD, in Geophysics and Helioseismology, Phys. Chem. Earth (A), Vol. 24, No. 3, 215-220, 1999
  2. Pijpers, F. P., Thompson, M. J., The SOLA method for helioseismic inversion, A&A, 1994,
  3. Christensen-Dalsgaard, J., Hansen, P. C., Thompson, M. J., Generalized Singular Value Decomposition Analysis of Helioseismic Inversions, 1993
  4. Tikhonov A N and Arsenin V Ya, Solution of Ill-Posed Problems, 1977

eSDO 1121: Loop Recognition

This document can be viewed as a PDF.
Deliverable eSDO-1121: Loop Recognition
M. Smith, E.Auden, L. van Driel-Gesztelyi
28 June 2005

Description

Coronal loops observed by SDO will be identified in AIA images over multiple wavelengths. Three avenues of development will be explored: the primary investigation will concentrate on the use of Raghupathy's [3] 'Improved Curve Tracing' method (based on the work of Steger [2]), which will be independent of magnetic field information. If this method proves unreliable, then the secondary approach will extend automated loop recognition work developed by Lee and Gary [1]. Using an iterative approach, coronal loops identified by the oriented connectivity method will be matched with magnetic field line extrapolations. If this method, too, proves unsatisfactory, then curvelet analysis will be used to identify coronal loops.

Inputs

  • Multi-wavelength flatfield AIA filtergrams
  • HMI vector magnetograms (if OCM with magnetic field line extrapolation approach is used)

Outputs

  • Coronal Loops FITS file
    • Image extension - AIA flatfield image superimposed with recognized loops. Loops will be shown in different colours for specific temperatures.
    • Tabular extension - footprints for each terminus of recognized loops. A footprint will be described in terms of solar coordinates, radius, and temperature.

Test Data

  • TRACE images

Tool Interface

  • commandline: input of AIA images, output of FITS files containing images and statistical data.
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK; users can call the web service to process datasets on the grid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the command line or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

The user wants to identify coronal loops in multiple wavelengths over a given time period. In addition to viewing recognized loops superimposed on a filtergram image, the user also wants a description of each loop's footprint that includes solar coordinates, radius, and temperature.

  1. The user identifies flatfield images from one or more of the 10 AIA channels taken during the specified time period.
  2. The user inputs the flatfield AIA images to the automated loop recognition algorithm. (Currently there are no user-specified variables to define for the loop recognition algorithm, whether distributed through SolarSoft or deployed as an AstroGrid CEA service.)
  3. The algorithm runs and returns a FITS file to the user.
  4. The user can view an image within the FITS file displaying recognized loops superimposed in colour over the original AIA flatfield image. Loops are colour-coded by temperature.
  5. The user can also view a table of footprints for each terminus of a recognized loop. The footprints are described in terms of solar coordinates, radius, and temperature.

Technical Use Case

Steger and Raghupathy's 'Improved Curve Tracing' algorithm (first choice)

  1. The automated loop recognition algorithm receives an AIA flatfield image as input.
  2. Iteratively apply Steger's curve tracing algorithm to identify loops, limiting large changes of angle to improve curve following at junctions.
  3. Calculate inner products based on average curve orientation to detect fading curves.
  4. Apply lookahead to fading curves to detect possible re-emergence of the curve.
  5. Identify loop terminating pixels. Use ancillary SDO pointing files to establish corresponding solar coordinates.
  6. Superimpose loops onto original AIA flatfield image. Write to a FITS file with an image extension for the loop recognition image and a tabular extension containing footprint information, including solar coordinates, radius of footprint, and temperature.
  7. Return FITS file.
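As a simplified illustration of the ridge detection underlying steps 2-4, the sketch below (Python, numpy and scipy assumed) marks pixels where the Gaussian-smoothed image has strong negative curvature across a bright ridge. This Hessian-based measure is only a stand-in for Steger's detector; the junction handling, lookahead and coordinate steps are not shown, and the threshold is arbitrary.

  # Simplified ridge detection sketch (numpy and scipy.ndimage assumed).
  # Marks bright curvilinear pixels via the Hessian of a smoothed image; a
  # stand-in for Steger / Raghupathy curve tracing, not the full algorithm.
  import numpy as np
  from scipy.ndimage import gaussian_filter

  def ridge_map(image, sigma=2.0, threshold=0.5):
      # Second derivatives of the Gaussian-smoothed image.
      ixx = gaussian_filter(image, sigma, order=(0, 2))
      iyy = gaussian_filter(image, sigma, order=(2, 0))
      ixy = gaussian_filter(image, sigma, order=(1, 1))
      # Most negative Hessian eigenvalue; bright ridges give strong negative
      # curvature across the ridge direction.
      trace = ixx + iyy
      det_term = np.sqrt((ixx - iyy) ** 2 + 4.0 * ixy ** 2)
      lambda_min = 0.5 * (trace - det_term)
      strength = np.maximum(-lambda_min, 0.0)
      return strength > threshold * strength.max()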

OCM with dependence on magnetic extrapolation (second choice)

  1. The automated loop recognition algorithm receives an AIA flatfield image as input.
  2. Clean the image with median filtering, unsharp masking, and linear filtering.
  3. Apply the Strous algorithm to identify loop pixels.
  4. Find the HMI magnetogram corresponding to the time coverage of the AIA image and call the magnetic field extrapolation algorithm, which returns field azimuths to the loop recognition algorithm.
  5. Create a weighted pixel matrix using magnetic field azimuths, pixel intensity, angular information and proximity to previously identified loops. Apply the matrix iteratively over all pixels to enhance loop connectivity (oriented connectivity method); a sketch of the weighting follows this list.
  6. Smooth loop curves with a B-spline filter.
  7. Link disconnected loop subsections using an edge-linking algorithm such as the Hough Transform.
  8. Smooth the edge-linked loops with a second B-spline filter.
  9. Identify loop terminating pixels. Use ancillary SDO pointing files to establish corresponding solar coordinates.
  10. Superimpose loops onto original AIA flatfield image. Write to a FITS file with an image extension for the loop recognition image and a tabular extension containing footprint information, including solar coordinates, radius of footprint, and temperature.
  11. Return FITS file.
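
A minimal sketch of the connectivity weighting in step 5 is given below; the combination of terms and the coefficients c1-c4 are assumptions for illustration, not the weights used by Lee and Gary:

    /* Sketch of the per-pixel connectivity weight used when growing a loop
       (step 5). The functional form and coefficients are placeholders. */
    #include <math.h>

    typedef struct {
        double intensity;     /* normalised pixel intensity                  */
        double field_azimuth; /* extrapolated magnetic field azimuth (rad)   */
        double step_azimuth;  /* direction from the current loop pixel (rad) */
        double dist_to_loop;  /* distance to the nearest identified loop     */
    } PixelInfo;

    double connectivity_weight(const PixelInfo *p,
                               double c1, double c2, double c3, double c4)
    {
        /* Alignment between the candidate step and the local field azimuth. */
        double align = cos(p->step_azimuth - p->field_azimuth);
        /* Proximity term: pixels closer to existing loops are favoured.     */
        double prox = exp(-c4 * p->dist_to_loop);
        return c1 * p->intensity + c2 * align + c3 * prox;
    }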

Curvelet analysis (third choice)

  1. The automated loop recognition algorithm receives an AIA flatfield image as input.
  2. Clean image with median filtering, unsharp masking, and linear filtering
  3. Iteratively apply curvelet transform to identify loops.
  4. Smooth loop curves with a B-spline filter.
  5. Link disconnected loop subsections using an edge-linking algorithm such as the Hough Transform.
  6. Smooth the edge-linked loops with a second B-spline filter.
  7. Identify loop terminating pixels. Use ancillary SDO pointing files to establish corresponding solar coordinates.
  8. Superimpose loops onto original AIA flatfield image. Write to a FITS file with an image extension for the loop recognition image and a tabular extension containing footprint information, including solar coordinates, radius of footprint, and temperature.
  9. Return FITS file.

Quicklook Products

  • FITS file containing two extensions:
    • AIA image with superimposed recognized loops
    • Table listing footprint coordinates, intensities and other statistical data in the multiple wavelengths observed by different AIA channels.

Support Information

  1. Lee, J.K., Gary, G.A., Newman, T.S., American Astronomical Society, SPD meeting #34, 02/2003, 2003SPD....34.0305L
  2. Steger, C., 'An Unbiased Detector of Curvilinear Structures'.
  3. Raghupathy, K., Parks, T.W., 'Improved Curve Tracing in Images'.

eSDO 1121: Magnetic Field Extrapolation

This document can be viewed as a PDF.
Deliverable eSDO-1121: Magnetic Field Extrapolation
M. Smith, E.Auden, L. van Driel-Gesztelyi
10 August 2005

Description

The magnetic field extrapolation process will calculate magnetic field lines between solar active regions using vector and line-of-sight magnetograms from the SDO HMI instrument. The extrapolated fields will be shown as an overlay on the magnetogram, and their footpoints will be displayed in tabular form.

The primary investigation will concentrate on Wiegelmann's [1] improved 'Optimization' method of extrapolation, which uses the nonlinear magnetic model. The nonlinear model provides the most accurate representation of magnetic fields, particularly around active regions. The Optimization technique, originally developed by Wheatland [2], computes the magnetic field over a predefined volume, or box, using full-disk vector magnetogram data for the extrapolation. The magnetogram is also used to determine the bottom boundary conditions of the box, while the lateral and top boundaries are computed from a Potential Field model, which provides a valid approximation of magnetic field activity high in the Sun's corona.

Wiegelmann's Optimization method improves on other nonlinear methods by introducing the concepts of a boundary layer and weighting function. These diminish the influence of the lateral and top boundary conditions on the calculation of the magnetic field over the region of interest.

The vector magnetogram data will be preprocessed in two stages before it is submitted to the Optimization method for extrapolation. These are (in order):

  • Azimuth Disambiguation
  • Wiegelmann, Inhester and Sakurai's preprocessing procedure

Azimuth Disambiguation

'Azimuth Disambiguation' (Georgoulis [3]) is a new technique designed to resolve the Pi ambiguity, one of several problems inherent in modern vector magnetograms. The Pi ambiguity describes the case where the azimuth angle of the transverse component of the magnetic field has two equally likely values 180 degrees apart. Azimuth Disambiguation will typically add a 1-2 minute processing time penalty to the overall extrapolation computation on an average desktop computer.

Wiegelmann, Inhester and Sakurai's preprocessing procedure

Another drawback of currently available vector magnetograms is the level of noise that is sometimes apparent in the transverse components. Such noisy data can violate the force-free assumptions used in the Optimization extrapolation and invalidate the results. Wiegelmann, Inhester and Sakurai [4] have recently developed a numerical technique that detects any data lying outside force-free constraints and 'smooths' them into a force-free condition. The amount of additional processing time incurred by this technique is unknown at present.

Should the Optimization method prove too computationally expensive, the six nonlinear force-free algorithms discussed at the Lockheed Martin NLFFF modelling meeting in May 2005 will be reviewed, and the algorithm that provides the best balance between speed and accuracy will be deployed.

Again, if none of these is acceptably fast, then the simpler (but less accurate) Potential Field model will be used.

Inputs

  • HMI active region vector magnetogram.
  • HMI line-of-sight magnetogram.

Outputs

  • FITS file containing extrapolated fields overlaid on the original magnetogram.
  • FITS file tabular extension containing footprints for each extrapolated field. A footprint will be described in terms of solar coordinates and radius.

Test Data

  • SOHO MDI vector magnetograms, migrating to Solar-B vector magnetograms

Tool Interface

  • commandline:
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK that users can call to process datasets on the grid. Note: due to the computational intensity of this algorithm, access to the CEA service may be restricted to recognized solar users registered with AstroGrid.
    2. SolarSoft routine: if computational intensity is not deemed too high, the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the commandline to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

The user wants to view extrapolated magnetic fields superimposed on a vector magnetogram image, and also a description of each field's footpoint that includes solar coordinates and radius.

  1. The user identifies the full-disk HMI magnetogram taken during the specified time period.
  2. The user inputs the full-disk HMI magnetogram to the magnetic field extrapolation algorithm. (Currently no user-specified parameters are defined for the extrapolation algorithm, whether it is distributed through SolarSoft or deployed as an AstroGrid CEA service.)
  3. The algorithm is run and returns a FITS file to the user.
  4. The user can view an image within the FITS file displaying extrapolated magnetic fields superimposed in colour over the original HMI magnetogram image.
  5. The user can also view a table of footprints for each of the fields identified. The footprints are described in terms of solar coordinates and radius.

Technical Use Case

Optimization

  1. The Magnetic Field Extrapolation algorithm receives a full-disk vector magnetogram as input.
  2. Georgoulis's Azimuth Disambiguation technique is applied and used to remove the Pi Ambiguity of the transverse components of the vector magnetogram data.
  3. Wiegelmann, Inhester and Sakurai's preprocessing procedure is applied to check for and 'smooth' magnetogram data that lie outside force-free constraints.
  4. A computational box is defined, i.e. a bounded volume which represents the region in which the magnetic field will be calculated.
  5. A physical domain is defined inside the box which represents the volume in which the nonlinear magnetic field will be calculated using vector magnetogram data.
  6. A boundary layer is defined which stretches from the edge of the physical domain to the computational box boundary.
  7. The measured normal component of the magnetic field, Bz, is used to calculate a potential magnetic field over the whole box using Seehafer's method [5]; this potential field defines the lateral and top boundary conditions.
  8. The vector magnetogram data is used to determine the bottom boundary (photosphere) conditions of the box.
  9. The nonlinear magnetic field is computed from vector magnetogram data using the Landweber iteration method [6]. A weighting cosine function is applied which is set to unity within the physical domain, decreases within the boundary layer and falls to zero at the lateral and top boundaries of the computational box.
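
A minimal sketch of such a weighting function on a uniform grid is shown below; the cosine roll-off across an nd-cell boundary layer follows the description in step 9, but the grid indexing and the treatment of the bottom (photospheric) boundary are assumptions rather than the published implementation:

    /* Sketch of the optimization weighting function: w = 1 inside the
       physical domain, a cosine roll-off across the nd-cell boundary
       layer, and 0 at the lateral and top edges of the computational box. */
    #include <math.h>
    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    static double weight_1d(int i, int n, int nd)
    {
        if (i < nd) {                              /* lower boundary layer */
            double t = (double)(nd - i) / nd;      /* 1 at box edge, ~0 at domain edge */
            return 0.5 * (1.0 + cos(M_PI * t));    /* rolls from 1 down to 0 */
        }
        if (i >= n - nd) {                         /* upper boundary layer */
            double t = (double)(i - (n - nd - 1)) / nd;
            return 0.5 * (1.0 + cos(M_PI * t));
        }
        return 1.0;                                /* physical domain */
    }

    /* Full 3-D weight: product of the 1-D profiles in x and y and the top
       part of z; the bottom (photospheric) boundary keeps w = 1 because it
       is prescribed by the magnetogram. */
    double weight_3d(int i, int j, int k, int nx, int ny, int nz, int nd)
    {
        double wz = (k >= nz - nd)
            ? 0.5 * (1.0 + cos(M_PI * (double)(k - (nz - nd - 1)) / nd))
            : 1.0;
        return weight_1d(i, nx, nd) * weight_1d(j, ny, nd) * wz;
    }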

Quicklook Products

none

Support Information

  1. T.Wiegelmann, 'Optimization code with weighting function for the reconstruction of coronal magnetic fields', Solar Physics, 219, 87-108 (2004)
  2. M.S.Wheatland, P.A.Sturrock and G.Roumeliotis, Astrophysical Journal, 540, 1150-1155 (2000).
  3. M.Georgoulis, 'A New Technique for a Routine Azimuth Disambiguation of Solar Vector Magnetograms', Astrophysical Journal, 629, L69-L72 (2005).
  4. T. Wiegelmann, B.Inhester, T.Sakurai, 'Preprocessing of vector magnetograph data for a nonlinear force-free magnetic field reconstruction', Solar Physics, In Press (2005).
  5. N.Seehafer, Solar Physics, 58, 215 (1978).
  6. A.K.Louis, 'Inverse und schlecht gestellte Probleme', Teubner Studienbuecher, ISBN 3-519-02085-X (1989) (discusses Landweber's iteration method).

eSDO 1121: Helicity Computation

This document can be viewed as a PDF.
Deliverable eSDO-1121: Helicity Computation
M. Smith, E.Auden, L. van Driel-Gesztelyi
28 June 2005

Description

Magnetic helicity is a measure of the magnetic chirality, or "handedness", of the shear and twist that the solar magnetic field has undergone. Over the last decade, researchers have investigated observations that solar features such as coronal loops and active regions tend to exhibit clockwise patterns if they are located in the Sun's southern hemisphere, while those in the Sun's northern hemisphere exhibit counter-clockwise patterns. Solar physicists continue to search for a mechanism that causes this hemispheric tendency towards clockwise or counter-clockwise patterns. Measuring the helicity of emerging solar features will help to elucidate the relationship between the Sun's magnetic field and solar behaviour [1].

Relative magnetic helicity can be defined by solving the volume integral of the scalar product of the magnetic vector potential A with the magnetic field B [3]:

∆Hm = ∫Ω (A-A0) · (B-B0) dV

Most studies of magnetic helicity have used a simple linear force-free model of the solar magnetic field; because this approach requires only the observed horizontal magnetic component, line-of-sight magnetograms from space and ground based telescopes can be used. However, the use of increasingly complex nonlinear magnetic models for helicity computation now looks promising as high resolution vector magnetograms become available. Régnier and Amari [2] have demonstrated a technique for extrapolating a nonlinear force-free magnetic field model using the Grad-Rubin technique, and they use this 3-D magnetic field and associated solar volume to calculate helicity. The eSDO helicity computation will use a similar helicity computation method, but the nonlinear force-free magnetic field will be calculated using the optimization method in the MagneticFieldExtrapolation algorithm.

Inputs

  • The 3-D nonlinear magnetic field and computational volume produced by the MagneticFieldExtrapolation algorithm (or an HMI vector magnetogram to be processed through that algorithm first; see the Technical Use Case below)

Outputs

  • Variable (double) returned in G² cm⁴

Tool Interface

  • commandline:
    1. AstroGrid CEA web service: this algorithm depends on the magnetic field extrapolation algorithm. Once that algorithm has been deployed as a CEA service hosted in the UK, the helicity algorithm can be accessed at the same time.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the commandline or GUI to process locally held data. However, due to the computational intensity of the magnetic field extrapolation algorithm, the helicity computation method may need to be accessed with the assumption that the user can input a datacube containing the extrapolated magnetic field.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

  1. A user wishes to calculate the helicity of a region in addition to its nonlinear magnetic field.
  2. The user inputs an HMI vector magnetogram into the MagneticFieldExtrapolation algorithm and specifies that helicity shall be calculated as well.
  3. The MagneticFieldExtrapolation algorithm defines a volume over a solar region, calculates the nonlinear magnetic field over the volume, and then uses the resulting 3-D magnetic field to calculate a helicity value for the given volume.
  4. The value for helicity in G² cm⁴ is returned to the user along with the magnetic field.
  5. The user can now compare the chirality or "handedness" of the solar region with the helicity of similar solar structures.

Technical Use Case

  1. The MagneticFieldExtrapolation algorithm begins with an HMI vector magnetogram as input.
  2. The algorithm defines a volume over which the magnetic field will be calculated.
  3. Using the optimization method, the algorithm calculates the 3-D nonlinear magnetic field over the volume.
  4. The helicity algorithm is now started with the 3-D nonlinear magnetic field and volume as inputs.
  5. The relative magnetic helicity integral defined by Berger and Field [3] is solved over the volume:

∆Hm = ∫Ω (A-A0) · (B-B0) dV

  6. A value for helicity is returned in units of G² cm⁴.
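
A minimal numerical sketch of the integral in step 5 is given below, assuming the field B, the reference field B0 and the corresponding vector potentials A and A0 are available as flat arrays over the extrapolation datacube; the array layout and function signature are illustrative only:

    /* Sketch of the relative helicity integral evaluated as a direct sum
       over the extrapolation datacube of n = nx*ny*nz cells, each of
       volume dV. Placeholder layout, not the eSDO module interface. */
    double relative_helicity(const double *Ax, const double *Ay, const double *Az,
                             const double *A0x, const double *A0y, const double *A0z,
                             const double *Bx, const double *By, const double *Bz,
                             const double *B0x, const double *B0y, const double *B0z,
                             long n, double dV)
    {
        double h = 0.0;
        for (long i = 0; i < n; i++) {
            double dax = Ax[i] - A0x[i], day = Ay[i] - A0y[i], daz = Az[i] - A0z[i];
            double dbx = Bx[i] - B0x[i], dby = By[i] - B0y[i], dbz = Bz[i] - B0z[i];
            h += dax * dbx + day * dby + daz * dbz;   /* (A - A0) . (B - B0) */
        }
        return h * dV;   /* G² cm⁴ if the fields are in G and dV is in cm³ */
    }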

Test Data

  • Test magnetic field calculated by the MagneticFieldExtrapolation algorithm using SOHO-MDI vector magnetograms (and later Solar-B vector magnetograms)

Quicklook Products

  • None

Support Information

  1. van Driel-Gesztelyi, L.; Démoulin, P.; Mandrini, C. H. 2003, Advances in Space Research, 32, 10, 1855
  2. Régnier, S. & Amari, T. 2004, Astronomy and Astrophysics, 425, 345
  3. Berger, M. A. & Field, G. B. 1984, Journal of Fluid Mechanics, 147, 133
  4. De Moortel, I., 2005, Phil Trans R. Soc A, Vol 363, 2743 - 2760 (full paper)
  5. DeVore, C, 2000, ApJ, 539, 944 - 953

eSDO 1121: CME Dimming Region Recognition

This document can be viewed as a PDF.
Deliverable eSDO-1121: CME Dimming Region Recognition
V. Graffagnino, A. Fludra
07 September 2005

Description

A coronal mass ejection results in a ‘dimming’ of the hot (EUV and X-ray) corona as a result of the opening of magnetic field lines, the expansion of coronal material and the resulting density reduction that occur during a CME. In attempting to identify the source region of CME events, a number of studies have characterised the phenomenon of coronal dimming (see for example the reviews of Hudson and Webb 1997, Harrison and Lyons 2000). These studies have been based on two types of difference images in which dimmed regions appear as dark features with a reduced intensity. The two types are:

  1. Running difference images, which are obtained by subtracting the preceding image from the current one. These emphasise changes in the brightness, localisation and structure of sources that have occurred during the interval between successive images. However, artefacts such as spurious dimmings can occur if the intensity of a bright feature decreases.
  2. Fixed difference images, which are produced by subtracting a single pre-event image from all subsequent images. Any changes that occur during an event are particularly clear. However, unless solar rotation is taken into account, spurious dimmings and brightenings can be produced, since the pre-event reference image may have been taken hours before subsequent frames.

Previous coronal dimming investigations have made use of wide-band soft X-ray imaging, narrowband EUV filters and EUV spectra; the AIA narrow-band filters are therefore expected to allow the detection of such events. The identification of a CME signature such as coronal dimming is critical to fully investigate the underlying physics of the CME onset. The goal of this algorithm is therefore to identify regions of coronal dimming in order to carry out follow-up investigations of subsequent CME events.

Inputs

  • Multi-wavelength AIA images - 10 channels of full disk / low resolution images, full disk / high resolution images, and region-of-interest / high resolution images.

Outputs

  • Processed images indicating CME Dimming Region locations in each of the filter wavelengths;
  • The output will be entered into the CME Dimming Region Catalogue where statistical data extracted from the images will be presented. This statistical data includes:
    • Size of the dimming region
    • Duration of the dimming
    • Intensity variations during the dimming
    • Wavelength in which the event has been detected
  • FITS file produced with table extensions.
  • Catalogue access, possibly via web service / AstroGrid

Test Data

  • TRACE and SOHO EIT images in multiple wavelengths

Tool Interface

  • commandline: input of AIA images, output of FITS files containing images and statistical data.
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK. The algorithm will run continuously to generate the CME dimming region recognition catalogue, and users can call the web service to process datasets on the grid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the commandline or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

The user would like to obtain physical and statistical information on the properties of possible CME dimming regions observed in the AIA images at a number of wavelengths. The goal of the algorithm is to identify these dimming regions, both on disk and above the limb. Follow-up investigation of these regions, resulting in the actual identification of a CME event, is left to the user.

  1. The user identifies a series of images from one or more of the AIA channels taken during the specified time period.
  2. The user inputs the flatfield AIA images to one or more of the automated CME Dimming Region recognition algorithms.
  3. The user specifies the constraints used to identify the level of dimming
  4. The algorithm runs and returns a FITS file to the user.
  5. The user can view an image within the FITS file displaying the location of the CME Dimming region.
  6. A FITS table will also be available to allow standard statistical analysis to be carried out and to allow the user to perform further analysis on each event as desired.

Technical Use Case

The technical aspect of the CME dimming region recognition procedure has much in common with the Small Event Detection algorithms being investigated, so much code will be reusable. Both algorithms will use the methods of Berghmans et al. (1998) and Aschwanden et al. (2000); the primary difference is that while the small event detection algorithm will look for short-lived increases in localised areas, the CME dimming region recognition procedure will identify drops in intensity over large areas that last for tens of minutes up to several hours.

In comparison to the small event detection algorithm, the CME dimming region recognition code will also need to group together a much larger number of pixels that undergo simultaneous dimming to find the extent of the dimming region. The CME dimming region recognition processing can also use fixed difference AIA images in order to clearly identify regions of dimming. For on-disk areas, corrections to take into account solar rotation will also have to be applied to these difference images in order to reduce the number of spurious dimming regions that might be identified.

Modified Method of Berghmans et al. (1998)

  1. The automated CME Dimming Region recognition algorithm receives a series of AIA flatfield images and fixed difference AIA images as input.
  2. An average light curve for each pixel over a set time period is derived to define a background reference emission.
  3. CME dimming events are defined as pixels whose intensity drops near-simultaneously and significantly below this background value (a sketch of this test follows the list).
  4. The spatial and temporal extent of the event is then defined.
  5. Relevant data are extracted for each event and tabulated.
  6. A FITS file is produced and returned.
  7. In automated runs, the CME dimming region recognition catalogue is updated.
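
As a rough illustration of steps 2-3, the per-pixel background estimate and dimming test might look like the following; the light curve layout, the use of a simple mean and standard deviation, and the nsigma threshold are assumptions for the sketch rather than the published method:

    /* Sketch of the per-pixel background reference and dimming test.
       lightcurve[t] holds the intensity of one pixel at nt time steps. */
    #include <math.h>

    int is_dimming_pixel(const double *lightcurve, int nt, double nsigma)
    {
        double mean = 0.0, var = 0.0;
        for (int t = 0; t < nt; t++) mean += lightcurve[t];
        mean /= nt;
        for (int t = 0; t < nt; t++) {
            double d = lightcurve[t] - mean;
            var += d * d;
        }
        double sigma = sqrt(var / nt);

        /* Flag the pixel if its intensity drops more than nsigma standard
           deviations below the background reference at any time step. */
        for (int t = 0; t < nt; t++)
            if (lightcurve[t] < mean - nsigma * sigma) return 1;
        return 0;
    }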

Modified Method of Aschwanden et al. (2000)

In this method, a spatio-temporal pattern recognition code is used which extracts events with significant variability. This is achieved as follows:

  1. The automated CME Dimming Region recognition algorithm receives a series of AIA flatfield images and fixed difference AIA images as input.
  2. Each full image is rebinned into a number of macropixels.
  3. An event is then spatially defined to include a number of neighbouring pixels that undergo coherent time variability, within a certain tolerance limit.
  4. For each macropixel the time series is examined; the maximum and minimum fluxes and their corresponding times are extracted, and a flux variability is defined as the difference between them.
  5. These difference values are then ordered.
  6. The macropixel with the largest flux decrease is then chosen and its neighbouring pixels examined for variability and for temporal coincidence of the light curve minimum within the tolerance limit. This continues until no further neighbouring pixel is found that meets the tolerance limits.
  7. This collection of pixels is then defined as an event and these pixels are marked so as not to be included in subsequent events searches.
  8. The remaining pixels are resorted in order of flux variability and the process is repeated. In this way a number of dimming events are defined, although it is expected that simultaneous, spatially independent dimming events will occur rarely.
  9. The relevant statistical information is derived.
  10. An output FITS file is produced and returned.
  11. In automated runs, the CME dimming region recognition catalogue is updated.

Quicklook Products

  • Entry in CME Dimming Region Recognition catalogue:
    • low resolution full disk image of the Sun indicating positions of dimming regions
    • table entry with relevant statistical data

Support Information

  1. K.P. Dere, G.E. Brueckner, R.A. Howard et al. Solar Physics. 175, 601-612 (1997).
  2. I.M. Chertok and V.V. Grechnev, Astronomy Reports. 47, 139-150 (2003).
  3. I.M. Chertok and V.V. Grechnev, Astronomy Reports, 47, 934-945 (2003).
  4. D. Berghmans, F. Clette, and D. Moses. Astron.Astrophys. 336,1039-1055 (1998)
  5. M.J. Aschwanden, R.W. Nightingale, T.D. Tarbell, and C.J. Wolfson, Ap.J. 535,1027-1046 (2000)
  6. Hudson, H.S., Webb, D.F., in: Coronal Mass Ejections, Geophys. Monograph Ser., AGU. 1, (1997).
  7. Harrison, R.A., Lyons, M., A&A, 358, 1097, (2000)

eSDO 1121: DEM Computation

This document can be viewed as a PDF.
Deliverable eSDO-1121: DEM Computation
V. Graffagnino, A. Fludra
08 Sep 2005

Description

Although the AIA is an imaging and not a spectroscopic instrument, the fact that images will be produced in 7 narrowband EUV filters provides a means of obtaining information on thermal structure via the production of a Differential Emission Measure (DEM) distribution – a measure of the amount of plasma at a given temperature T. A comparison of a number of algorithms used to model the differential emission measure was presented in the report RAL-91-092, “Intensity Integral Inversion Techniques: a Study in Preparation for the SOHO Mission,” edited by Richard Harrison and Alan Thompson in December 1991. The report concludes that all methods have three commonalities: 1) the intensity integral is discretised, 2) a form of smoothing is applied, and 3) the degree of smoothing must be chosen carefully.

Whether a DEM is produced for each pixel is of course dependent on how computationally intensive the chosen algorithms are. The use of parallel computing techniques and / or genetic algorithms may be of use here to improve computation time.

A number of packages are currently employed by the solar community, but for the purpose of eSDO it is suggested that the Arcetri method (Landi and Landini, 1997), available through the CHIANTI package, should provide the main general-purpose algorithm due to its ready availability, whether as a standalone program or as part of the SolarSoft package. Two other options are an adaptive smoothing method (Thompson, 1990), currently used in the ADAS package, and an iterative multiplicative method (Withbroe 1975; Sylwester et al. 1980).

Inputs

  • AIA images in 7 EUV narrowband filters – These can be FDLR images for the case of the global DEM or HDHR images for the user wishing to obtain more specific DEMs.

Outputs

  • DEM curve for each pixel
  • DEM curve for each local area of interest selected by user
  • Global DEM – useful for studies of the Sun as a star as has been done by Reale et al (1998)

The required DEMs will be output as FITS files, with multiple DEMs contained within a datacube format.

Test data

  • TRACE and / or SOHO EIT images in multiple wavelengths

Tool Interface

  • commandline:
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK that users can call to process datasets on the grid. Due to computational intensity, this service may be restricted to recognized solar community users registered with AstroGrid.
    2. SolarSoft routine: if computational intensity is not too high, the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the commandline or GUI to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

The user would like to obtain the differential emission measure distribution derived from the AIA images taken at the 7 EUV filter wavelengths. The differential emission measure is defined as the squared density integrated over the column depth along the line of sight for any given temperature:

Φ(T) dT = ∫ Ne² dh

In general, this distribution is derived using a wide range of EUV/X-ray line intensities. For AIA, the spectral line intensities, I, are substituted with filter intensities, and line emissivities G(T) are substituted with the temperature response of each filter. The intensity observed in each filter can be calculated from the integral:

I = ∫ G(T) Φ(T) dT   [photons cm⁻² s⁻¹ sr⁻¹]

The filter responses G(T) are in units of electrons per pixel per second as a function of plasma temperature for unit emission measure. They are calculated by convolving the effective area of each telescope as a function of wavelength with the theoretical EUV spectrum calculated for the entire range of expected coronal temperatures. The calculation of G(T) can be done prior to the DEM analysis, or it can be recalculated each time the DEM algorithm is executed with the user-defined atomic data and elemental abundances.
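
For reference, the discretised form of the intensity integral that the inversion methods below work with can be sketched as a simple trapezoidal sum in C; the array layout and units handling are illustrative assumptions rather than the eSDO interface:

    /* Sketch of the forward problem: the intensity a filter would record,
       obtained by discretising I = ∫ G(T) Φ(T) dT on a temperature grid
       T[0..nT-1] with the trapezoidal rule. */
    double filter_intensity(const double *G,    /* filter response G(T[i])   */
                            const double *dem,  /* DEM estimate Φ(T[i])      */
                            const double *T, int nT)
    {
        double I = 0.0;
        for (int i = 0; i < nT - 1; i++) {
            double dT = T[i + 1] - T[i];
            I += 0.5 * (G[i] * dem[i] + G[i + 1] * dem[i + 1]) * dT;
        }
        return I;
    }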

The following description is for the case where the user selects an area of interest and wants to obtain a DEM for that region. The methodology will be similar for the automated process of obtaining a DEM for each of the whole image pixels.

The procedure involves the following:

  1. The user identifies the local region of interest for which a DEM is required
  2. The user inputs the flatfielded AIA images (one for each filter) – the intensities are calculated for each filter.
  3. Pre-calculated G(T) functions are read
  4. Optional: The user inputs his choice of elemental abundances and ionisation equilibrium data.
  5. Optional: The appropriate atomic data (whether from Chianti or ADAS) is chosen by the user
  6. Optional: The G(T) function, which depends on the temperatures and densities, is then calculated.
  7. The DEM is calculated.
  8. A FITS output file is produced.

Technical Use Case

The DEM C code should be available both as a standalone program and wrapped as an IDL application that can be distributed through the SolarSoft gateway at MSSL. The technical aspects of the procedure involve the following:

  • The intensities of the pixels in region of interest are summed for each filter.
  • A number of algorithms were investigated in the RAL report, which concluded that all of the methods were capable of satisfactorily deriving DEMs. Each method employed integral discretisation together with some form of smoothing; in a few methods positivity constraints were also imposed. The following algorithms are recommended for eSDO:

Log-T Expansion Method (Arcetri Code)

  1. Integral discretisation – trapezoidal method
  2. Iterative correction of DEM using a correction factor based on the first term of a power series expansion as a function of log(T)
  3. Goodness-of-fit criterion: chi-squared

Adaptive Smoothing Method (Glasgow Code)

  1. Integral discretisation – product integration.
  2. Smoothing – regularisation
  3. Degree of smoothing controlled by a smoothing parameter

Iterative Multiplicative Algorithm (Wroclaw Code):

  1. Integral discretisation – 3rd order spline interpolation
  2. Smoothing – iterative processing from flat to final solution in a pre-set number of iterations (maintains positivity).
  3. Goodness-of-fit criteria – chi-squared and a parameter ‘sigma’
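
A minimal sketch of one iteration of a Withbroe/Sylwester-style multiplicative correction is shown below; the G-weighted ratio and rectangle-rule forward integration are simplifications of the published scheme, used only to illustrate how positivity is maintained:

    /* Sketch of one multiplicative DEM update: each DEM bin is scaled by a
       weighted average of observed-to-computed intensity ratios, so a
       positive starting solution remains positive. G is stored flat, with
       G[j*nT + i] the response of filter j at temperature bin i. */
    #include <stdlib.h>

    void dem_multiplicative_update(double *dem, const double *T, int nT,
                                   const double *G, const double *I_obs,
                                   int nfilters)
    {
        /* Forward-compute the intensity each filter would record for the
           current DEM estimate. */
        double *I_calc = malloc(nfilters * sizeof *I_calc);
        for (int j = 0; j < nfilters; j++) {
            I_calc[j] = 0.0;
            for (int k = 0; k < nT - 1; k++)
                I_calc[j] += G[j * nT + k] * dem[k] * (T[k + 1] - T[k]);
        }

        /* Scale each DEM bin by a G-weighted average of the ratios. */
        for (int i = 0; i < nT; i++) {
            double num = 0.0, den = 0.0;
            for (int j = 0; j < nfilters; j++) {
                num += G[j * nT + i] * (I_obs[j] / I_calc[j]);
                den += G[j * nT + i];
            }
            if (den > 0.0)
                dem[i] *= num / den;
        }
        free(I_calc);
    }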

In particular, the Arcetri code algorithm is now employed within the CHIANTI package and thus has been regularly used and tested by the solar community. In the case of a fully automated DEM extraction code for each of the 4096 x 4096 pixels, a check on the DEM being derived should be incorporated to make sure that the values are consistent with the region being examined (e.g. whether the pixels are from an active region).

The DEM algorithm design will be revised following the DEM workshop with AIA collaborators to be held in February 2006.

Quicklook Products

  • none

Support Information

  1. ADAS - http://adas.phys.strath.ac.uk/
  2. CHIANTI atomic database - http://www.damtp.cam.ac.uk/user/astro/chianti/chianti.html
  3. DEM Computation with AIA - http://lasp.colorado.edu/sdo/meetings/session_1_2_3/presentations/session1/1_05_Golub.pdf
  4. Intensity integral inversion techniques: A Study in preparation for the SOHO mission, Editors: R A Harrison, A M Thompson. RAL internal report - Please see DemComputationRalReport
  5. Coronal Thermal Structure from a Differential Emission Measure Map of the Sun. J.W.Cook, J.S.Newmark, and J.D.Moses. 8th SOHO Workshop: Plasma Dynamics and Diagnostics in the Solar Transition Region and Corona. Proceedings of the Conference held 22-25 June 1999
  6. EIT User's Guide http://umbra.nascom.nasa.gov/eit/eit_guide/
  7. A. Fludra, J.T. Schmelz, 1995, Astrophys. J., 447, 936.
  8. M.Gudel, E.F.Guinan, R.Mewe, J.S.Kaastra, and S.L.Skinner. Astrophysical Journal. 479, 416-426 (1997).
  9. J.S.Kaastra, R.Mewe, D.A.Liedahl, K.P.Singh, N.E.White, and S.A.Drake. Astron. Astrophys. 314, 547-557 (1996).
  10. V.Kashyap and J.J.Drake. Astrophysical Journal. 503, 450-466 (1998).
  11. E. Landi and M. Landini, 1997, A&A, 327, 1230
  12. Sylwester, J., Schrijver, J., and Mewe, R., 1980, Sol. Phys., 67, 285
  13. A.M. Thompson, 1990, A&A, 240, 209
  14. Withbroe, G., 1975, Sol. Phys., 45, 301

eSDO 1121: Small Event Detection

This document can be viewed as a PDF.
Deliverable eSDO-1121: Small Event Detection
V. Graffagnino, A. Fludra
07 September 2005

Description

Small-scale transient brightenings are ubiquitous in the EUV images of the solar corona and transition region. These small events observed by SDO will be identified in AIA images over multiple wavelengths.

Three algorithms will be evaluated for their ability to distinguish actual small events from stochastic fluctuations and noise, whilst at the same time trying not to ‘lose’ events within the extraction and analysis process. A hybrid of the algorithms may be required, as each algorithm has its own strengths and weaknesses.

Inputs

  • Multi-wavelength AIA images – both low resolution full disk images and higher resolution images will be available as input

Outputs

  • Processed images indicating event locations in each wavelength;
  • Statistical data extracted from the images including:
    • Total number of events
    • Size distributions
    • Duration distributions
    • Peak intensity distributions
    • Wavelength in which the event has been detected
  • Catalogue access, possibly via web service / AstroGrid
  • FITS file produced with table extensions.

Test Data

  • TRACE and SOHO EIT images in multiple wavelengths

Tool Interface

  • commandline:
    1. AstroGrid CEA web service: this algorithm will be deployed as a CEA service hosted in the UK. The algorithm will run continuously to generate the small event catalogue, and users can call the web service to process datasets on the grid.
    2. SolarSoft routine: the C module will be wrapped in IDL and distributed through the MSSL SolarSoft gateway. Users with access to a SolarSoft installation can call the routine from the commandline to process locally held data.
    3. JSOC module: the C module will be installed in the JSOC pipeline. Users can access the routine through pipeline execution to operate on data local to the JSOC data centre.

Science Use Case

The user would like to obtain full statistical information on the properties of small transient brightenings observed in the AIA images at the various wavelengths. The goal of the algorithm is to identify all small events regardless of the type (i.e. whether it is a blinker, micro-flare, bright point etc.). Identification of the type of event is left to the user.

The actual mechanics of use will be similar regardless of whether the tool is applied individually to a series of images by the user or whether the process is automated via a pipeline whose generated results are catalogued and then accessed by the user via a web service/AstroGrid. The user identifies a series of images from one or more of the AIA channels taken during the specified time period.

  1. The user inputs the flatfield AIA images to one or more of the automated small event detection algorithms.
  2. The algorithm runs and returns a FITS file to the user.
  3. The user can view an image within the FITS file displaying the location of the detected small events.
  4. A fits table will also be available to allow standard statistical analysis to be carried out and also allows the user to carry out further analysis on each event(s) as desired.

Technical Use Case

Detection method of Berghmans et al. (1998)

  1. The automated small event detection algorithm receives a series of AIA flatfield images as input.
  2. An average light curve for each pixel over a set time period is derived to define a background reference emission.
  3. Pixels with intensity peaks significantly above this background value (i.e. greater than a pre-defined number of standard deviations above the background value) are then defined as small events.
  4. The spatial and temporal extent of the event is then defined, by examining surrounding pixels which experience increased intensity greater than another threshold level above the background value, a level which differs from the peak threshold value by one standard deviation.
  5. These pixels are then flagged so as to be excluded from subsequent analysis. This prevents the same brightening from being counted again in a neighbouring pixel's light curve.
  6. Relevant data are extracted for each event and tabulated.
  7. A FITS file is produced and returned.
  8. In automated runs, the small event catalogue is updated.

Detection method of Aschwanden et al. (2000)

In this method, a spatio-temporal pattern recognition code is used which extracts events with significant variability. This is achieved as follows:

  1. The automated small event detection algorithm receives a series of AIA flatfield images as input.
  2. Each full image is rebinned into a number of macropixels (a sketch of the rebinning follows this list).
  3. An event is then spatially defined to include a number of neighbouring pixels that undergo coherent time variability. This is defined as the temporal coincidence of peak flux within a certain tolerance limit.
  4. For each macropixel the time series is examined; the maximum and minimum fluxes and their corresponding times are extracted, and a flux variability is defined as the difference between them. These variability values are then ordered.
  5. The macropixel with the largest flux variability is then chosen and neighbouring pixels examined for variability and coincidence of peak time within the tolerance limit. This continues until no further neighbouring pixel is found which corresponds to the appropriate tolerance limits.
  6. This collection of pixels is then defined as an event and these pixels are marked so as not to be included in subsequent events searches.
  7. The remaining pixels are resorted in order of flux variability and the process is repeated. In this way a number of events are defined.
  8. The relevant statistical information is derived.
  9. An output FITS file is produced and returned.
  10. In automated runs, the small event catalogue is updated.
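
The macropixel rebinning in step 2 can be sketched as a simple block average; the row-major float image layout and the assumption that the image dimensions are multiples of the macropixel size are illustrative only:

    /* Sketch of rebinning a full image into macropixels by averaging
       m x m blocks. img is nx x ny (row-major); out is (nx/m) x (ny/m). */
    void rebin_macropixels(const float *img, int nx, int ny, int m, float *out)
    {
        int mx = nx / m, my = ny / m;
        for (int J = 0; J < my; J++) {
            for (int I = 0; I < mx; I++) {
                double sum = 0.0;
                for (int j = 0; j < m; j++)
                    for (int i = 0; i < m; i++)
                        sum += img[(J * m + j) * nx + (I * m + i)];
                out[J * mx + I] = (float)(sum / (m * m));
            }
        }
    }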

Quicklook Products

  • FITS file containing a low resolution full disk image of the Sun indicating the positions of areas of interest.
  • Table entry in small event catalogue

Support Information

  1. D.Berghmans, F.Clette, and D.Moses. Astron.Astrophys. 336,1039-1055 (1998)
  2. M.J.Aschwanden, R.W.Nightingale, T.D.Tarbell, and C.J.Wolfson. Ap.J. 535,1027-1046 (2000)

eSDO 1131: Algorithm Integration with the Grid

This document can be viewed as a PDF.
Deliverable eSDO-1131
E. Auden
23 August 2005

Introduction

AstroGrid CEA

Description

The AstroGrid Common Execution Architecture (CEA) module allows scientists to deploy commandline applications as grid-accessible web services. AstroGrid users can then call these applications as part of AstroGrid workflows, thus automating the process of searching, retrieving, and processing data. The algorithms developed by eSDO will be hosted on a UK machine as commandline applications, and instances of CEA will allow each algorithm to be registered with AstroGrid.

Integration Work

The CEA application module will be deployed on the eSDO server at MSSL. Each algorithm will be registered with AstroGrid as a CEA application. Eventually, the algorithms and CEA instances will be hosted on live servers in the UK. It may be prudent to restrict access to computationally expensive algorithms; this can be done by checking a user's AstroGrid login details for permission to execute the application, using the process described for JSOC modules below.

JSOC Modules

Description

The JSOC pipeline is a set of libraries and executables that will be used to process HMI and AIA data. Many routines will be called automatically whenever the pipeline runs; others will be called by users "on-the-fly" to generate specific high level data products. The pipeline may be distributed between Stanford University and Lockheed Martin. The majority of modules will be written in C. A more detailed description of the pipeline may be viewed at http://hmi.stanford.edu/doc/SOC_GDS_Plan/JSOC_GDS_Plan_Overview_CDR.pdf.

Integration Work

Algorithm modules developed by the eSDO project for AIA and HMI will be placed in the JSOC pipeline. Depending on user interest and other available code, the JSOC team will earmark suitable algorithms to be executed automatically during pipeline runs; others will be executed following user requests. Documentation for each algorithm shall be provided along with the relevant C modules.

The JSOC team and eSDO developers will investigate making these algorithms available to grid users via CEA applications. There are two differences in the approach taken to JSOC pipeline CEA applications versus the UK-hosted CEA applications detailed above: security and data location. Because the pipeline will run on computers reserved for the use of the JSOC team, co-investigators and other designated users, unauthorized grid users should not be able to execute pipeline commands "just to see what will happen". Therefore, the CEA application that executes pipeline commands will first check the grid user's AstroGrid login details to see if the user is a recognized collaborator. UK solar researchers may apply to the JSOC team to have their names added to the list of collaborators. Then, to minimize large data transfers, the CEA application will obtain its input data from the JSOC data archive instead of receiving input from MySpace or a remote URL.

In addition to algorithm code and documentation, there are three elements of work. First, CEA must be deployed at the JSOC pipeline center, and an application should be built that can execute JSOC pipeline commands. Second, the CEA application should be enabled to check AstroGrid login details to ensure that only recognized collaborators can invoke the pipeline commands. Third, the CEA input should be modified to access the necessary datasets from the JSOC system instead of MySpace or remote URLs.

SolarSoft

Description

SolarSoft is a distributed set of libraries, routines and ancillary data that allow scientists to process solar data in IDL. For suitable algorithms, the C modules developed by the eSDO project will be wrapped in IDL for distribution through the MSSL SolarSoft gateway. Each eSDO algorithm placed into SolarSoft will be accessible as a commandline routine rather than a GUI. These algorithms will assume that a user has access to the relevant SDO data on a local system. Certain algorithms, such as magnetic field extrapolation or the mode asymmetry analysis, may not be suitable for distribution through SolarSoft due to the computationally intensive processing that will be required.

During the Phase A research period, some experimentation in wrapping C modules in IDL was carried out. Documentation of a SolarSoft installation may be viewed at http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/SolarSoftInstallation. In addition, an example of calling a Mandelbrot set C module as an IDL procedure is available at http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/CallingCfromIDL.
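
As an illustration of the wrapping approach, a C module called from IDL with CALL_EXTERNAL receives its arguments as an array of pointers; the entry-point name and argument layout below are hypothetical, not the eSDO interface:

    /* Sketch of the portable CALL_EXTERNAL entry-point convention: IDL
       passes the argument count and an array of pointers to the data.
       Argument layout (image, dimensions, output) is illustrative only. */
    int esdo_module_entry(int argc, void *argv[])
    {
        if (argc != 4) return -1;

        float *image  = (float *) argv[0];   /* input image (IDL FLTARR)    */
        int    nx     = *(int *)  argv[1];   /* image width  (IDL LONG)     */
        int    ny     = *(int *)  argv[2];   /* image height (IDL LONG)     */
        float *result = (float *) argv[3];   /* pre-allocated output array  */

        /* ... call the underlying eSDO algorithm on image[0 .. nx*ny-1],
               writing its output into result ... */
        (void)image; (void)nx; (void)ny; (void)result;
        return 0;
    }

From the IDL side such a routine would be invoked with CALL_EXTERNAL, along the lines of status = CALL_EXTERNAL('libesdo.so', 'esdo_module_entry', image, nx, ny, result), where the shared library and entry-point names are again hypothetical and the dimensions are passed as LONGs.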

Integration Work

SolarSoft distribution will be the final development step for suitable algorithms. Each algorithm will be evaluated for SolarSoft based on computational requirements. For those deemed suitable, the C module will be wrapped in IDL following the instructions and example at CallingCfromIDL. Next, the IDL procedure will be tested with local data using the SolarSoft installation on the eSDO server at MSSL. Then standardized SolarSoft comments and documentation will be added to the procedure. The procedure and associated C code will be uploaded to the MSSL SolarSoft gateway.

III. eSDO Visualization

eSDO 1211: Quicklook and Visualization Plan

This document can be viewed as a PDF.
Deliverable eSDO-1211
E. Auden, T. Toutain
23 August 2005

Introduction

The SDO mission will produce 2 TB of raw data every day. Although low and high level science products will only comprise a tenth of that volume, 200 GB per day represents a vast amount of information for a person to process. The eSDO quicklook and visualization plans have two goals: firstly, to help scientists search for and identify the SDO data they require, and secondly, to use this targeted search strategy to reduce the number of large data requests transferred across the network. Quicklook and visualization products will primarily take the form of thumbnail images, movies, and catalogues.

Visualization Tool

Description

The eSDO project will work with the JSOC team to develop a powerful SDO visualization tool, which shall be referred to as the SDO streaming tool in this document. Users may be familiar with web-based mapping tools such as Google Maps or MultiMap that allow the user to open a street map or satellite map in a web browser, zoom in and out, and pan in eight directions. The SDO streaming tool will provide a similar facility for HMI line-of-sight magnetograms, HMI continuum maps, and 10 channels of AIA images. However, this visualization tool will not only allow researchers to zoom and pan in space, but they will also be able to switch between science products, “pan” in time by rewinding or fast forwarding to other products in a data series, and even “zoom” in time by increasing or decreasing the cadence at which data products are displayed.

Science Use Case

  1. A solar researcher wishes to identify solar events that have occurred in the past 24 hours in order to download the relevant datasets from an SDO data centre.
  2. The user opens a web browser and navigates to the SDO streaming tool.
  3. The user defines a start time, stop time, cadence, and SDO data product in the web browser GUI.
  4. The GUI displays the full disk / low resolution SDO data product that most closely matches the user’s defined start time.
    1. Zoom in space with static time: the user can click the image and zoom in through four levels of resolution. All images are displayed in a 512 by 512 window:
      • 1st level: full disk, low resolution image
      • 2nd level: 512 by 512 extract from data product at 1024 by 1024 resolution
      • 3rd level: 512 by 512 extract from data product at 2048 by 2048 resolution
      • 4th level: 512 by 512 extract from data product at 4096 by 4096 resolution
    2. Pan in space with static time: at any level of resolution, the user can pan left, right, up, down, and in four diagonal directions.
    3. Pan in time with static space: By pressing a “play” button, the user can view SDO data products in the same or nearby series to the original data product. The images displayed to the user will correspond to the chosen instrument cadence; for instance, a user could choose to view 1 line-of-sight magnetogram from every hour, every 10 minutes, or every 45 seconds of HMI observation. The user can then rewind and fast forward to data products available between the defined start and end times.
    4. Zoom in time with static space: If the user has chosen a cadence that is lower than the instrument cadence for a given data products (for example, 45 seconds for a line-of-sight magnetogram or 18 seconds for a specific AIA channel image), then the user can “zoom in” to the instrument cadence or “zoom out” to a lower cadence such as one data product per hour or per day. Data products corresponding to the chosen cadence will be played in the GUI like a movie.
    5. Zoom and pan in time and space: The user will ultimately be able to explore different spatial resolutions and locations of SDO data products at many cadences.
    6. Switch between data products: At any time, the user can switch between HMI line-of-sight magnetograms, HMI continuum maps, and any of the 10 channels of AIA images.

Mock up of SDO streaming tool

Technical Case

Technical development for the SDO streaming tool will involve three elements of work. First, a tool to stream HMI and AIA data products in multiple resolutions must be developed; this reduces the amount of data streamed across the network by only sending the piece of image data requested by the user, plus a buffer on all sides to allow quick spatial panning. Second, the tool must be able to stream data from different SDO data centres. The tool should select the nearest SDO data centre to the user’s location that can provide a cached version of the HMI or AIA data product. If the user pans spatially or zooms temporally to a product not held in the user’s local SDO data centre cache, the tool should stream the data from another cache with minimal interruption to the user. Finally, the tool requires a web browser GUI that will allow the users to define input parameters, stream and display the images, and provide facilities to pan and zoom.

The eSDO team will develop a prototype with Rasmus Larsen at Stanford University during the early part of Phase B.

  1. Web browser streaming tool is invoked with start time, end time, cadence, and SDO data product.
  2. Nearest SDO data product corresponding to start time is streamed from nearest data centre cache as multi-resolution data - only full disk, low resolution image is sent to user.
  3. User zooms in space - next level of resolution (plus buffer containing surrounding spatial edge) is streamed.
  4. User pans in space - next set of spatial edges is streamed and held in buffer; if product doesn't exist in local cache, product is streamed from US cache
  5. User zooms in time - next data product in series is streamed; if product doesn't exist in local cache, product is streamed from US cache
  6. User pans in time - next or previous data product series is streamed; if product doesn't exist in local cache, product is streamed from US cache
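
The parameters such a tile request might carry are sketched below; the field names and types are assumptions for illustration rather than a defined eSDO/JSOC interface:

    /* Illustrative sketch of a streaming-tool tile request. */
    typedef struct {
        char   product[32];    /* e.g. HMI line-of-sight magnetogram, AIA channel */
        char   obs_time[32];   /* requested observation time (ISO 8601)           */
        int    level;          /* resolution level 1 (full disk) to 4 (full res)  */
        int    x0, y0;         /* pixel offset of the 512 x 512 display window    */
        int    buffer;         /* extra margin (pixels) streamed for quick panning*/
        double cadence;        /* seconds between frames when playing in time     */
    } TileRequest;

The buffer field reflects the strategy described above of streaming a margin around the visible window so that small spatial pans can be served from data already held by the client.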

Quicklook Products

Images

SDO processing software will extract images from low level AIA and HMI science products archived in the US data centre. Images will be converted to 200 by 200 pixel thumbnail GIFs of approximately 100 kB each. They may be held in either the US or UK data centre (to be determined) for the duration of the SDO mission. Relevant metadata such as observation time, wavelength or magnetic data, and other science product information will be written to a searchable thumbnail catalogue. Thumbnail images will be produced for the following science products:

  • HMI line-of-sight magnetograms
  • HMI dopplergrams
  • HMI 20 minute averaged filtergrams (intensitygrams)
  • AIA low resolution, full disk images in 8 wavelengths
  • AIA high resolution, tracked active region images in 8 wavelengths (for ~1 - 10 active regions, depending on solar activity)

Movies

Movies shall be generated from thumbnail images on the fly in response to user requests. Users will specify the start time, end time and science products with which they wish to generate a movie; the relevant thumbnail images will be retrieved from the UK data centre and encoded as an MPEG file using the Berkeley MPEG encoder1, currently in use with AstroGrid solar tools. Movies will be cached along with start / stop time and science product metadata, but they will not be permanently archived.

Catalogues

Three catalogues will be generated to aid data searches: a thumbnail catalogue, a small event / coronal mass ejection (CME) dimming region catalogue, and a mode parameters catalogue. The thumbnail catalogue will contain information about each thumbnail image generated and archived in the UK data centre. Users will be able to search for thumbnail images based on observation time, science product type, wavelength (for AIA images) and NOAA active region number (for AIA tracked active region images). The small event / CME dimming region catalogue will contain observation time, coordinate information, AIA science product name and other data for small solar events and CME dimming regions identified by the small event detection2 and CME dimming region recognition3 tools processing AIA data on a UK machine. Finally, the mode parameters catalogue will provide a table of frequencies, linewidths, amplitudes and powers for a range of low-l degrees.

User Access

User access to thumbnail images, generated movies and catalogues will all be available through the AstroGrid portal. The thumbnail, small event / CME dimming region, and mode parameters catalogues will be searchable through an instance of the AstroGrid DSA4 software, while thumbnail images will be retrieved from the archive using an AstroGrid CEA5 tool and placed in the user's home computer or AstroGrid MySpace6 area.

In addition, a JSP page will be made available through the eSDO portal that will use the AstroGrid DSA and CEA webservices to display thumbnail images, movies, and catalogue results directly to a webpage. This page will provide simple forms that will allow users to first delineate start time, stop time, and science product type and then search the two quicklook catalogues, view thumbnails, or generate movies.

Technical Requirements

Software

Five types of software are required for the quicklook and visualization strategy described:

  1. Software to extract images from SDO science products and convert them into thumbnail GIFs
  2. The Berkeley MPEG encoder for movie generation
  3. A database such as MySQL to store tabular information for the thumbnail and small event / CME dimming region catalogues
  4. AstroGrid DSA and CEA modules
  5. SDO streaming tool and associated web browser GUI

Hardware

The UK data centre will require a staging machine to pull data from the US data centre and load it into the UK archive (please see DataCentre1321 for more details). To avoid additional data transfers, this machine can also host the image extraction / thumbnail conversion software; this facility may be hosted in the US or UK. The SDO streaming tool may have multiple instances hosted in the US, the UK, and elsewhere in Europe. A machine will also be required to host the MPEG encoder, database, and AstroGrid software. Because these three pieces of software will not require large data transfers, they can be hosted on either the staging machine or at a second location.

References

  1. Devadhar S., Krumbein C., Liu K., Smoot S. and Eugene H. The Berkeley MPEG Encoder. Berkeley Multimedia Research Company. http://bmrc.berkeley.edu/frame/research/mpeg/mpeg_encode.html. Last viewed 23/08/05.
  2. Please see PhaseASmallEventDetection for more information.
  3. Please see PhaseACMEDimmingRegionRecognition for more information.
  4. Astrogrid DSA, Data Set Access (formerly Publishers Astrogrid Library). http://www.astrogrid.org/maven/docs/1.0.1/pal/index.html. Last viewed 23/08/05.
  5. Astrogrid CEA, Common Execution Architecture. http://www.astrogrid.org/maven/docs/1.0.1/applications/index.html. Last viewed 23/08/05.
  6. Astrogrid MySpace (software component known as FileManager), http://www.astrogrid.org/maven/docs/1.0.1/filemanager/index.html. Last viewed 23/08/05.

IV. eSDO Data Centres

eSDO 1311: Data Centre Implementation Plan

This document can be viewed as a PDF.
Deliverable eSDO-1311
E. Auden, R. Bogart, L. Culhane, A.Fludra, M. Smith, K. Tian, M. Thompson
Mullard Space Science Laboratory
31 March 2005

Introduction

The Solar Dynamic Observatory will produce approximately 2 TB of raw data per day from two instruments on board, the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager (HMI). The data from these instruments will be of interest to the helioseismology and non-helioseismology solar communities in both the US and the UK. Making SDO data readily available through the virtual observatory will allow scientists access to AIA and HMI observations as soon as possible.

Raw data from the satellite will initially be downloaded to the NASA White Sands facility. The data will then be transferred to the Joint Science Operations Committee (JSOC) data facility at Stanford University as well as an off-site storage facility in line with NASA’s data security policy.

Data Products

US Data Centre

Four levels of data products will be produced by the JSOC pipeline. A full list of data products is in development by the JSOC, but the current assessment of data levels and products is below:

  • Level 0 data products will be filtergrams (32 MB each) and telemetry data (~ 1.7 kB) from AIA and HMI. These low level products will be archived and made accessible through a tape library, but most will not be held online.
  • Level 0 data will be processed into level 1 data products, such as flatfield calibrations, low resolution summary images, full resolution NOAA active region image sets, and image sets of notable features and events. Most level 1 data products will be held online for 3 months, after which time they will be archived and made accessible through the tape library. Some level 1 products will be held online for longer than 3 months (to be determined).
  • Level 2 data, such as deconvolution products, temperature maps, irradiance curves, and field line models, will be generated on the fly from user requests. The majority of these products will not be archived or held online.
  • Level 3 products, such as internal rotation images, internal sound speed images, full disk velocity and sound speed maps, Carrington synoptics, and other custom products will also be generated on the fly and not held in an archive.

Level 0 and level 1 archived products will be stored internally in a proprietary format with keywords and header information stored in ASCII files. Users will have a choice of export formats: FITS, JPEG, VOTable, HDF, and other formats to be determined. Although level 1 and some level 0 and level 2 products will be held in an online disk cache for limited time periods, the time delay in retrieving products archived in the tape library will not be prohibitive. Searchable metadata will be stored in an Oracle database; although JSOC users and Co-Investigator groups will have access to this database, it is unlikely that grid users will be allowed to directly interact with it. Instead, it is likely that a second copy of metadata searchable by grid users will be held in either a second instance of an Oracle database or in an XML database such as eXist.

UK Data Centre

The UK data centre will have two primary responsibilities: first, to facilitate access to low and high level data products for the UK solar community during periods of high data demand; second, to store a sample of AIA and HMI filtergrams over the lifetime of the SDO mission to enable visualisation of solar activity evolution during the period of SDO operation.

It is anticipated that the non-helioseismology community will experience periods of high data demand following solar events such as flares, coronal mass ejections, and coronal waves. To reduce both the request load placed on the US data centre and the volume of transatlantic data transfers following solar events, the UK data centre will automatically request transfer of low level SDO data. In addition, a request will be placed for pipeline generation of on-the-fly high level data products, and these products will also be transferred to the UK data centre. These low and high level products will be stored in the UK data centre for approximately 10 days. When UK users request SDO data through the AstroGrid portal, their queries will be automatically redirected to the UK data centre first. If the UK data centre does not hold copies of the requested data, the queries will be transferred to the US data centre. This process will be transparent to the user. If all relevant low and high level data products are stored uncompressed for the duration of a solar event, it is estimated that ~3 TB of storage will be required for every 24 hour period of coverage.
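
A minimal sketch of this redirection logic is given below. It is illustrative only: the object and method names (uk_centre.search, us_centre.retrieve, uk_centre.cache) are placeholders for functionality that will be implemented with the AstroGrid DSA and CEA web services in Phase B, not existing interfaces.

# Sketch of the UK-first / US-fallback query redirection (placeholder interfaces only).
UK_CACHE_LIFETIME_DAYS = 10   # event data held in the UK centre for approximately 10 days

def fetch_sdo_products(query, uk_centre, us_centre):
    """Return SDO data products, trying the UK data centre before the US centre."""
    products = uk_centre.search(query)           # metadata search of the UK holdings
    if products:
        return uk_centre.retrieve(products)      # served directly from the UK cache
    # Cache miss: redirect transparently to the US (JSOC) data centre, cache the
    # returned data in the UK for subsequent requests, then hand it to the user.
    products = us_centre.search(query)
    data = us_centre.retrieve(products)
    uk_centre.cache(data, lifetime_days=UK_CACHE_LIFETIME_DAYS)
    return data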

The second UK data centre activity will provide immediate access to a sample of AIA and HMI filtergrams over the lifetime of the SDO mission; this service will be available to both UK solar researchers and the larger grid community. One filtergram from both AIA and HMI will be stored every 1000 seconds. If the filtergrams are stored in their 32 MB raw image formats, the storage required is approximately 2 TB per year, or roughly 12.5 TB for the stated SDO mission duration of April 2008 to July 2014.
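
As a back-of-envelope check of these figures (using only the 32 MB filtergram size and the 1000 second sampling interval quoted above):

# One 32 MB filtergram from each of AIA and HMI every 1000 seconds.
filtergram_mb = 32
instruments = 2
images_per_day = 86400 / 1000                       # one image per 1000 s per instrument

gb_per_day = filtergram_mb * instruments * images_per_day / 1024
tb_per_year = gb_per_day * 365 / 1024
mission_years = (2014 + 7 / 12) - (2008 + 4 / 12)   # April 2008 to July 2014

print(round(tb_per_year, 2), round(tb_per_year * mission_years, 1))
# Gives ~1.93 TB per year and ~12 TB over the mission, consistent with the
# ~2 TB per year and ~12.5 TB quoted above.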

Phase B of eSDO will address a number of technical solutions:

  • What triggers can be used to automatically request transfer of low level data and pipeline processing / transfer of high level data following solar events?
  • How concentrated will UK demand for low and high level data products be following solar events? Which solar events will trigger high demand?
  • Will periods of high data demand be experienced by the helioseismology community as well as the non-helioseismology community?
  • Can the UKLight point of access at UCL be harnessed to speed transfer of data through transatlantic pipelines?
  • Which compression algorithms should be used for the most efficient storage of scientifically useable data?
  • What visualisation plug-ins will be available from the UK grid community that could enhance access to the UK archive of SDO filtergrams?
  • Will data be most readily accessible through an add-on to the Solar-B storage facility at the Mullard Space Science Laboratory or through the Rutherford Appleton Laboratory large storage facility? (This question will also be addressed in deliverable eSDO-1321, Data Centre Grid Integration.)

Data Centre Software

DSA

Both the US and UK data centres will be accessible through the DataSet Access (DSA) software developed by the AstroGrid project. DSA is a set of java libraries that allow databases to be queried through web services using the Astronomical Data Query Language (ADQL). Queries are made to observational metadata stored in a database. DSA can translate ADQL queries to different flavours of SQL for a number of relational databases or to XQuery for XML databases. The DSA software can be downloaded as a war file and deployed with the Tomcat servlet container. While queries can be transmitted directly to DSA through the software’s JSP front end, the software is well-suited to acting as a backend system accessed through its web service interface.
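
As a purely illustrative example of the kind of query DSA handles, the short ADQL statement below (written here as a Python string) selects one day of metadata from a hypothetical sdo_metadata table; the table and column names are assumptions for illustration only, not the actual eSDO or JSOC schema.

# Illustrative ADQL of the kind DSA translates to SQL or XQuery.
# The table and column names are hypothetical placeholders.
adql_query = """
SELECT obs_start, product_type, file_url
FROM   sdo_metadata
WHERE  obs_start BETWEEN '2009-01-01T00:00:00' AND '2009-01-02T00:00:00'
AND    product_type = 'magnetogram'
"""
# In practice this string would be submitted through the DSA JSP front end or its
# web service interface, which translates it into the SQL dialect of the underlying
# database (for example MySQL or Oracle) or into XQuery for an XML database.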

Databases

Once an instance of DSA has been registered with a virtual observatory (VO) such as AstroGrid, users can submit queries through that VO’s portal. The DSA is configured to access a specific database during installation. In the case of the UK data centre, SDO metadata will most likely be stored in a MySQL database, while the data itself will most likely be stored in a file system either added to the MSSL Solar-B storage facility or included in the RAL large storage facility. For the US data centre, it is likely that grid user queries will be permitted to a secondary database at the JSOC facility; this database is likely to be a clone of the primary JSOC Oracle database or an instance of the eXist XML database. Data accessible to grid users will be stored at a Stanford University facility.

Implementation Tasks

Demonstration level data centres are currently in development at both MSSL and Stanford University. The eSDO Phase A research period will be used to document experience of DSA configuration, grid user accessible database design, and VO integration that will allow a smooth deployment of production level data centres in Phase B. The Phase A data centres and documentation are being made accessible through the eSDO wiki at http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/DataCentre. The SDO and eSDO researchers primarily involved in data centre deployment are Elizabeth Auden and Mike Smith at MSSL along with Rick Bogart and Karen Tian at Stanford University.

UK Data Centre

  • Completed implementation of data centre on test server: deliverable eSDO-2211, 22 December 2006
  • Completed integration of data centre with VO: deliverable eSDO-2212, 30 September 2007

  1. Establish data centre facility (proposed site: ATLAS)
  2. Configure a MySQL database to hold metadata for cached SDO data products
  3. Install and configure DSA and CEA
  4. Register DSA and CEA with AstroGrid.
  5. Enable data queries and data transfers.

US Data Centre

  • Completed network latency tests: deliverable eSDO-2221, 31 March 2006
  • Completed AstroGrid / VSO interface: deliverable eSDO-2222, 30 September 2006
  • Completed support for AstroGrid modules installed at JSOC data centre: deliverable eSDO-2223, 30 September 2007

  1. Perform network latency tests to establish data transfer methods between Stanford and UK
  2. Complete AstroGrid / VSO interface
  3. Register AstroGrid / VSO interface with AstroGrid
  4. Help install / configure DSA if required.
  5. Help install / configure CEA if required.
  6. Help install / configure workflow engine if required.

eSDO 1321: Data Centre Integration Plan

This document can be viewed as a PDF.
Deliverable eSDO-1321
E. Auden
23 August 2005

UK Data Centre

The UK data centre will host some data products from the Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA). Helioseismologists will be interested in HMI data collected for long periods of time. By contrast, AIA data will cater to solar physicists analysing events such as flares and coronal mass ejections; this audience will primarily be interested in recent data. Low and high level data products from both instruments will be permanently available through the US data centre, so the UK data centre holdings will focus on providing fast access to data of the most interest to solar scientists across the UK.

System

Three architecture models have been investigated for the UK data centre: one with a light footprint and two with heavy footprints. In the light footprint model, we assume that the role of the UK data centre will be to provide fast access to cached SDO export data products. The two heavier models give the UK data centre an active global role in the JSOC's Data Resource Management System (DRMS) and Storage Unit Management System (SUMS). At the end of the eSDO project's Phase A, other global DRMS and SUMS instances are being considered in the US and possibly in Germany.

Architecture Model 1: Light Footprint

The "light footprint" approach will provide a 30 TB online disc cache that will store HMI and AIA data products in export format (FITS, JPEG, VOTable, etc) only. Export files will be retrieved from a major SDO data centre in the US or Europe. This 30 TB will be divided between a 'true cache' for popular HMI and AIA datasets and a rolling 60 day systematic cache of AIA products. No tape archive will be required.

The light footprint is currently favoured by the eSDO science advisory and JSOC teams. While the UK community is anticipated to have a regular interest in new AIA data, users requiring large volumes of HMI data will primarily be working with helioseismology groups at Birmingham and Sheffield that will have co-investigator accounts on JSOC processing machines at Stanford University. In addition, the HELAS project is considering plans for a European SDO data centre at the Max Planck Institute in Lindau. A disc cache of ~30 TB will provide the UK solar community with a sizeable cache for export products, but it is not large enough to warrant a heavyweight installation of a DRMS and SUMS.

  • Diagram: UK Data Centre, light footprint

Architecture Model 2: Heavy Footprint

The "heavy footprint" approach would provide a 300 TB disc cache that is a significant percentage of the US SDO disc cache size. This 300 TB cache would be interfaced to a UK instance of the DRMS and SUMS; entire storage units would be cached. Along with the DRMS and SUMS, the UK data centre would require software to extract export formats from storage units. This system would play active global role in storage unit management with other SDO data centres. This 300 TB will be divided between a 'true cache' for popular HMI and AIA datasets and a rolling 60 day systematic cache of AIA products. No tape archive will be required.

  • Diagram: UK Data Centre, heavy footprint

Architecture Model 3: Heavy Footprint with Tape Archive

This final "heavy footprint with tape archive" model is considered to be a fallback position if there is no major European SDO data centre and one is considered to be required. Similar to the "heavy" footprint described above, this approach would provide a 300 TB disc cache that is a significant percentage of the US SDO disc cache size. This 300 TB cache would be interfaced to a UK instance of the DRMS and SUMS; entire storage units would be cached. Along with the DRMS and SUMS, the UK data centre would require software to extract export formats from storage units. This system would play active global role in storage unit management with other SDO data centres. This 300 TB will be divided between a 'true cache' for popular HMI and AIA datasets and a rolling 60 day systematic cache of AIA products. In addition, HMI export format data products would be written to the ATLAS tape store to provide a permanent European helioseismology archive.

  • Diagram: UK Data Centre, heavy footprint with tape archive

AIA and HMI Datasets

A number of level 1 and level 2 HMI science products will be available to users, including magnetograms, dopplergrams and continuum maps. Assuming that the "light" footprint data centre architecture is followed, HMI products will be held in export format on a disc cache following user requests. By contrast, if the "heavy" model is followed, then following user requests HMI products will be imported to the UK as JSOC storage units of ~20 GB. These storage units will be held in the large disc cache, and instances of DRMS and SUMS will be updated accordingly. Export formats of data products will be extracted from the storage units and returned to the user. Finally, if the "heavy with tape storage" architectural model is used, then HMI data will be systematically pulled from the JSOC archive and written to the ATLAS tape store inside JSOC storage units. Uncompressed storage for these data products is currently estimated at ~25 TB per year, culminating in 150 TB total storage by 2014.
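
The ~25 TB per year figure can be roughly reproduced from the product sizes and cadences listed in the table below; the short calculation that follows is a consistency check only, assuming uncompressed export products generated at the full quoted cadences.

# Rough check of the ~25 TB/year HMI estimate from the sizes and cadences in the table below.
SECONDS_PER_DAY = 86400
mb_per_day = (
    15 * SECONDS_PER_DAY / 45          # line-of-sight magnetograms: 15 MB every 45 s
    + 20 * SECONDS_PER_DAY / 45        # dopplergrams: 20 MB every 45 s
    + 15 * 24                          # averaged continuum maps: 15 MB every hour
    + 3 * 5 * SECONDS_PER_DAY / 600    # vector magnetograms: 5 x 3 MB per 10 minutes
)
tb_per_year = mb_per_day * 365 / 1024 / 1024
print(round(tb_per_year, 1))
# Gives ~24 TB per year, i.e. roughly 150 TB over the six years of operations to 2014,
# consistent with the estimates quoted above.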

AIA products will be held in a rolling 60 day cache; this will provide solar physicists with data from the two most recent solar rotations. Cached low level data will include low resolution full-disk images (8 per 10 seconds) along with high resolution images of tracked active regions (8 per 10 seconds). Several high level products generated at a much lower cadence will also be cached: thermal maps, DEM measures, irradiance estimates, and magnetic field extrapolations (1 to 10 per day). The storage requirement for a rolling 60 day cache is estimated at 11 TB.

| Instrument | Data Product | Estimated Size | Estimated Cadence | Storage |
| HMI | Line-of-sight magnetogram (full disk, full res) | 15 MB | 1 / 45 s | cached on user request |
| HMI | Vector magnetograms (tracked active region, full res) | 3 MB | 5 / 10 minutes | cached on user request |
| HMI | Averaged continuum maps (full disk, full res) | 15 MB | 1 / hour | cached on user request |
| HMI | Dopplergrams (full disk, full res) | 20 MB | 1 / 45 s | cached on user request |
| AIA | Images from 10 channels (full disk, full res) | 15 MB(?) | 8 / 10 s | rolling 7 day cache |
| AIA | Images from 10 channels (full disk, low res) | ~1 MB(?) | 8 / 10 s | rolling 60 day cache |
| AIA | Images from 10 channels (regions of interest, full res) | ~1 MB(?) | 40 / 10 s | rolling 60 day cache |
| AIA | Movies of regions of interest | ~10 MB(?) | 1 / day? | rolling 60 day cache |
| AIA | Other level 2 and 3 products (DEM, irradiance maps, etc) | ~10 MB? | 10 - 20 / day? | rolling 60 day cache |

Integration Work

AstroGrid Deployment

The major task for integrating the UK eSDO centre with AstroGrid will be the deployment of the DataSet Access (DSA) and Common Execution Architecture (CEA) AstroGrid modules on a remote machine that can access data in the ATLAS storage facility. This development will be undertaken by MSSL early in Phase B in conjunction with work done to access Solar-B data also held at ATLAS. A relational database (MySQL) containing AIA and HMI data product metadata will reside on a remote machine. The DSA module will interface with this database, allowing a user to identify which data products are required. A request for the identified data products is then sent to a CEA application on the same machine. The CEA application will issue the ATLAS commands necessary for data to be transferred from the ATLAS facility to the user's remote AstroGrid storage area, or "MySpace".

A number of test datasets will be placed in a disc cache at the ATLAS facility. Next, a MySQL database will be configured on the eSDO server at MSSL with sample metadata relating to the test datasets. Instances of DSA and CEA will also be deployed on the eSDO server; the DSA will interface with the MySQL database and the CEA application will interface with ATLAS. Requested test datasets will be returned to an instance of the AstroGrid filestore and filemanager on the eSDO server.

Interface with JSOC Data Centre

Assuming that the "light footprint" architecture model is followed, export formatted SDO data products will need to be transferred from the JSOC data centre to the UK data centre. In this model, when a UK user makes an SDO data request through the AstroGrid system, the request will first be sent to the UK data centre. If the required data is not present, the request will be redirected to the JSOC data centre. The required datasets will be exported back to the UK. The dataset will be cached in the UK system before a copy is passed to the user's MySpace area. In addition to user requests, the data centre will poll the JSOC system for new AIA data approximately twice an hour, and this data will be held in the UK cache for 60 days.

Development work will require a mechanism to poll the JSOC data centre for new AIA data as well as a CEA application to pass user requests that cannot be fulfilled by the UK data centre to the JSOC system. This CEA application should cache the returned datasets in ATLAS, update the metadata accessible to the DSA deployed on the eSDO server, and pass the data on to the user's MySpace area.
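
The polling mechanism might look something like the minimal sketch below. The names used for the JSOC query, ATLAS cache, and DSA catalogue interfaces are placeholders for functionality to be written in Phase B, not existing APIs.

# Sketch of one twice-hourly polling pass (placeholder interfaces only).
import time

POLL_INTERVAL_S = 30 * 60        # poll the JSOC system roughly twice an hour
CACHE_LIFETIME_DAYS = 60         # new AIA products held in the UK cache for 60 days

def poll_once(jsoc, atlas_cache, dsa_catalogue, since):
    """One polling pass: pull AIA products newer than 'since' into the UK cache."""
    for product in jsoc.new_aia_products(since=since):    # hypothetical JSOC query
        atlas_cache.store(product)                        # write into the ATLAS disc cache
        dsa_catalogue.add(product, lifetime_days=CACHE_LIFETIME_DAYS)
    return time.time()

# The pass would then be repeated indefinitely, for example:
#     while True:
#         since = poll_once(jsoc, atlas_cache, dsa_catalogue, since)
#         time.sleep(POLL_INTERVAL_S)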

US Data Centre

System

Detailed plans of the JSOC data centre and pipeline can be viewed at http://hmi.stanford.edu/doc/SOC_GDS_Plan/JSOC_GDS_Plan_Overview_CDR.pdf.

Archived Data

In addition to the HMI and AIA products listed above, a full description of archived and cached SDO products can be viewed at http://hmi.stanford.edu/doc/SOC_GDS_Plan/HMI_pipeline_JSOC_dataproducts.pdf.

Integration Work

AstroGrid Deployment

The Virtual Solar Observatory (VSO) will be the user front end to the JSOC data centre in the US. However, AstroGrid users may wish to incorporate a VSO search into an AstroGrid workflow that submits data to eSDO tools. Therefore, an AstroGrid to VSO interface will be developed using the CEA module. In addition, the JSOC data centre team is reviewing three AstroGrid components for use with their backend system. First, Karen Tian at Stanford University is investigating the DSA and CEA modules to enable data searching and data retrieval through the grid. Second, Rick Bogart has expressed interest in the AstroGrid workflow engine for use in driving JSOC pipeline execution.

The eSDO project will advise the JSOC team and aid development with these three AstroGrid components. As part of the Phase A research effort, Mike Smith has installed and configured the major AstroGrid components at MSSL: DSA, CEA, the workflow engine (JES), the filemanager / filestore, the registry and the portal. Documentation of this deployment is available to the solar community at http://www.mssl.ucl.ac.uk/twiki/bin/view/AstrogridInstallationV11, and it is also included as an appendix in the eSDO Phase A Report.

Network Latency Tests

Please see NetworkLatencyTestsProposal for more information.

V. Appendices

Astrogrid Installation version 1.1

This document can be viewed as a PDF.

Note: The current version of this document for AstroGrid 1.1 is accessible at AstrogridInstallation.

Introduction

The following Astrogrid components have been installed on msslxx:

| Component | URL |
| Registry (using GUI installer) | http://msslxx.mssl.ucl.ac.uk/esdo-registry |
| Filestore (using GUI installer) | http://msslxx.mssl.ucl.ac.uk/esdo-filestore |
| Filemanager (using GUI installer) | http://msslxx.mssl.ucl.ac.uk/esdo-filemanager |
| JES (using GUI installer) | http://msslxx.mssl.ucl.ac.uk/esdo-jes |
| Portal (using GUI installer) | http://msslxx.mssl.ucl.ac.uk/astrogrid-portal |
| PAL | http://msslxx.mssl.ucl.ac.uk/esdo-pal |
| CEA (using GUI installer) | http://msslxx.mssl.ucl.ac.uk/esdo-cea |
| Community | http://msslxx.mssl.ucl.ac.uk/esdo-community |

Pre-requisites

In order to run webservices for all of the AstroGrid components listed in this document, Java, Tomcat and Axis must be installed and configured correctly. Typical installation/configuration instructions follow.

  1. Java
    • Ensure Java SDK 1.4.1_01 is already installed
    • Set Unix environment variable JAVA_HOME="/usr/java/j2sdk1.4.1_01".

  2. Tomcat
    • Download and install Tomcat (the eSDO server msslxx is currently running jakarta-tomcat-5.0.28)
    • Set Unix environment variable CATALINA_HOME="$HOME/jakarta-tomcat-5.0.28"
    • Set Unix environment variable TOMCAT_HOME="$HOME/jakarta-tomcat-5.0.28"
    • Point a browser at http://msslxx.mssl.ucl.ac.uk:8080 and check that the Tomcat home page is displayed. The Tomcat revision is displayed in the top left-hand corner.
  3. Axis
    • Download axis-1_2RC2-bin.tar
    • Untar and copy axis-1_2RC2/webapps/axis directory to $TOMCAT_HOME/webapps
    • Install Xerces XML Parser
      • Download and untar Xerces-J-bin.2.5.0.tar to axis-1_2RC2/webapps/axis/WEB-INF/lib/
      • Copy xml-apis.jar, xercesImpl.jar, xmlParserAPIs.jar to axis-1_2RC2/webapps/axis/WEB-INF/lib/
      • Run chmod +x on xml-apis.jar, xercesImpl.jar and xmlParserAPIs.jar (may be unnecessary)
    • Install activation.jar
      • Download jaf-1_0_2.upd.zip to $TOMCAT_HOME/common/lib
      • Unzip and copy jaf-1.0.2/activation.jar to $TOMCAT_HOME/common/lib
    • Stop and restart Tomcat
    • Check installation at http://msslxx.mssl.ucl.ac.uk:8080/axis and http://msslxx.mssl.ucl.ac.uk:8080/axis/happyaxis.jsp

Details of component specific plug-ins will appear within the installation instructions for that component.

Registry

Registry Downloads

  1. Go to the AstroGrid home page at http://www.astrogrid.org
  2. Click "AstroGrid V1.1 Release"
  3. On displayed page, click "Download and Install AstroGrid Services"
  4. On displayed page, click "Download Installers"
  5. On displayed page, click "Registry"
  6. On displayed page, click "astrogrid-registry-installer-1.1-000r.jar"

Registry Installation

  1. Move exist.war into $TOMCAT_HOME/webapps
  2. For GUI installation: open the directory where astrogrid-registry-installer-1.1-000r.jar is located, and type java -jar astrogrid-registry-installer-1.1-000r.jar.
  3. The Astrogrid Registry Installer GUI should appear on your screen. Click "Next".

First time installation:

  1. Select following Registry Installer options:
    • "Install an AstroGrid Registry"
    • "Self-register this registry"
    • "Add a managed authority ID" (optional)
    • "Save this configuration for future use"

  2. Click "Next" and then enter appropriate values on successive screens:
    (URLs and values highlighted in blue are current eSDO settings)
    • Ensure Tomcat is running on your target system with the Manager app. enabled: select "continue" then click "Next"
    • Please enter the URL of Tomcat on your system: http://msslxx.mssl.ucl.ac.uk:8080/, then click "Next"
    • Please enter the Tomcat manager username: <TOMCAT manager username>, then click "Next"
    • Please enter the Tomcat manager password: <TOMCAT manager password>, then click "Next"
    • Please enter the context path on the webserver for registry: esdo-registry, then click "Next"
    • Is this the correct public URL for the registry ? Please amend if you are using a proxy:
      http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/, then click "Next"
    • Please enter the Authority ID for this registry: esdo.mssl.ucl.ac.uk, then click "Next"
    • Enter a title for the registry: eSDO registry, then click "Next"
    • Enter a contact name for the registry: Elizabeth Auden, then click "Next"
    • Enter a contact email for the registry: eca@mssl.ucl.ac.uk, then click "Next"
    • Enter a file name for saving your current settings: $HOME/Astrogrid/registry-installer.properties, then click "Next"

  3. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  4. Click "Next" and then "Done"
  5. Restart Tomcat

Installation is now complete. Proceed to Registry Configuration steps.

Updating existing installation:

  1. Select the following Registry Installer options:
    • "Load a previously saved configuration"
    • "Remove an existing AstroGrid registry"
    • "Install an AstroGrid registry"
    • "Self-register this registry"
    • "Save this configuration for future use"

  2. Click "Next":
  3. A pop-up is displayed with the words: "Enter the filename of your previously saved settings". Enter $HOME/Astrogrid/registry-installer.properties, then click "Next"
  4. Check that installation progress is displayed in textbox area of installer GUI.
  5. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  6. Click "Next" and then "Done"

Update installation is now complete using previous configuration settings. To check settings, and modify if necessary, follow steps in Registry Configuration.

Registry Configuration

  1. Enter Tomcat admin page (http://msslxx.mssl.ucl.ac.uk:8080/admin/)
  2. Edit eSDO environment variables: click through Tomcat Server -> Service -> Host (localhost) -> Context (/esdo-registry) -> Resources -> Environment Entries. Some of the values shown are taken from those supplied by the user during the GUI installation, others are defaults automatically provided by the component installer. The values of some of the eSDO Registry environment variables are shown below. Please note that each registry should manage a unique authorityID, so be sure to edit the value for reg.amend.authorityid:

  3. To amend an environment entry, click on the corresponding variable and modify as necessary.
  4. Click "Commit Changes" button when all variables have been updated
  5. Test configuration: go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry. You should see a welcome page with the authority ID you entered on the Tomcat admin page.
  6. Registry registration: go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/admin/index.jsp and login (you should be able to use your Tomcat admin username / password)
    • Self-register the registry:
      • Click "Self Registration" and fill in appropriate values:
        • First box: enter 0.10
        • Authority ID: esdo.mssl.ucl.ac.uk
        • Title: eSDO Registry
        • Publisher: MSSL
        • Contact Name: Elizabeth Auden
        • Contact email: eca@mssl.ucl.ac.uk
      • Press "Submit" - the page will redirect to a generated VOResources XML entry describing the registry. Ensure that all settings are correct (note: you may need to change interface xsi:type="WebService" to interface xsi:type="vs:WebService"). An example entry is attached at the bottom of this page.
      • Check the "Validate" box and click "Register".
    • Optional: Register with other registries
      • Try this first: click "Register with other Registries", and on the next page click the "Set up harvesting by hydra" button.
      • If you get an error of response=500, go on to the following steps.
      • Go to the Astrogrid galahad registry page at http://capc49.ast.cam.ac.uk/galahad-registry/editEntry.jsp
      • Paste your generated registry entry from the self-registration step into the large textbox under "Upload from text", check "Validate", and click "Submit".
      • The galahad registry will pick up your registry entry the next time it performs an automatic harvest (every 2 - 24 hours)
    • Optional: Harvesting other Registries
  7. Optional: reconfigure the eXist database out of the registry webapps area. This has the advantage that if you undeploy the registry webapp (during an upgrade, for example), the entries in your eXist DB will not be lost.
    • Follow "Using eXist internally" instructions on http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/configure.jsp
    • Create a new directory (outside the Tomcat directory structure if you wish): for example, $HOME/Astrogrid/exist-configuration
    • Copy $TOMCAT/webapps/esdo-registry/WEB-INF/exist.xml to the new directory
    • Copy $TOMCAT/webapps/esdo-registry/WEB-INF/data to the new directory
    • Go to the Tomcat admin page and click through Tomcat Server -> Service -> Host (localhost) -> Context (/esdo-registry) -> Resources -> Environment Entries. Edit variable reg.custom.exist.configuration to the full path to the exist.xml file you just copied. Click "Commit Changes".

Registry Test

  1. Go to the registry home page at http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry
  2. Click "browse" - you should see a table containing resources with entries in your registry
  3. Click on "XML" next to any resource entry; you should see a valid XML GetResourcesByIdentifier entry for the resource.

FileStore

FileStore Downloads

  1. Go to the AstroGrid home page at http://www.astrogrid.org
  2. Click "AstroGrid V1.1 Release"
  3. On displayed page, click "Download and Install AstroGrid Services"
  4. On displayed page, click "Download Installers"
  5. On displayed page, click "FileStore"
  6. On displayed page, click "astrogrid-filestore-installer-1.1-000fs.jar"

FileStore Installation

  1. For GUI installation: open the directory where astrogrid-filestore-installer-1.1-000fs.jar is located, and type java -jar astrogrid-filestore-installer-1.1-000fs.jar.
  2. The Astrogrid Filestore Installer GUI should appear on your screen. Click "Next".

First time installation:

  1. Select following FileStore Installer options:
    • "Install FileStore"
    • "Register this FileStore"
    • "Save this configuration for future use"

  2. Click "Next" and then enter appropriate values on successive screens:
    (URLs and values highlighted in blue are current eSDO settings)
    • Tomcat running: click "continue" then click "Next"
    • FileStore repository: $HOME/Astrogrid/filestore/repository, then click "Next"
    • Tomcat URL: http://msslxx.mssl.ucl.ac.uk:8080, then click "Next"
    • Tomcat manager user: <TOMCAT manager username>, then click "Next"
    • Tomcat manager password: <TOMCAT manager password>, then click "Next"
    • Context path: esdo-filestore, then click "Next"
    • Authority: esdo.mssl.ucl.ac.uk, then click "Next"
    • Registry key: esdo-filestore, then click "Next"
    • Registry URL: http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/, then click "Next"
    • Filename for current settings: $HOME/Astrogrid/filestore-installer.properties, then click "Next"

  3. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  4. Click "Next" and then "Done"

Installation is now complete. Proceed to FileStore Configuration steps.

Updating existing installation:

  1. Select following FileStore Installer options:
    • "Load a previously saved configuration"
    • "Remove FileStore (...)"
    • "Install FileStore"
    • "Register this FileStore"
    • "Save this configuration for future use"

  2. Click "Next":
  3. A pop-up is displayed with the words: "Enter the filename of your previously saved settings". Enter $HOME/Astrogrid/filestore-installer.properties, then click "Next"
  4. Check that installation progress is displayed in textbox area of installer GUI.
  5. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  6. Click "Next" and then "Done"

Update installation is now complete using previous configuration settings. To check settings, and modify if necessary, follow steps in FileStore Configuration.

FileStore Configuration

  1. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-filestore
  2. Click "Configure" or "Configure / Install"
  3. Open Tomcat administration GUI at http://msslxx.mssl.ucl.ac.uk:8080/admin
  4. Edit eSDO filestore environment variables: click through Tomcat Server -> Service -> Host (localhost) -> Context (/esdo-filestore) -> Resources -> Environment Entries. Click on the name of each variable to fill in values appropriate to your filestore (most values should have been correctly populated by the installer):
  5. Click "Commit Changes"
  6. Go back to http://msslxx.mssl.ucl.ac.uk:8080/esdo-filestore and click "FileStore Administration"
  7. Click "Register Metadata" under "Setup"
  8. Choose "0.10" and enter a contact name and email address. Click "Submit"
  9. You should see the auto-generated VOResources entry for your filestore. Check through the XML to make sure everything is correct and click "Register."
  10. You'll be returned to the same page showing a text box with the registry entry - if there are no errors printed at the top of the page, the filestore has been successfully configured.
  11. Note: To prevent an error message pop-up being displayed when the "Properties button" is clicked, the following lines are required: (Fix added by MS on 03/10/05)
    • Go to the Registry home page at http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry
    • Click "Browse" under "Investigate" from the lefthand menu
    • Locate the entry "esdo-filestore" under the ResourceKey column and click the corresponding "Edit" from the "Actions" column
    • In the text area find the line that reads <type>Archive</type> in the <content> body and immediately below it add the lines:
      <relationship>
      <relationshipType>derived-from</relationshipType>
      <relatedResource
      ivo-id="ivo://org.astrogrid/FileStoreKind">FileStore</relatedResource>
      </relationship>
    • Tick the "Upload from test: Validate" box then click "Submit" at the bottom of the page.
    • You should now be returned to the same page displaying the textbox XML entry with no errors printed at the top

FileStore Test

  1. From the filestore admin page at http://msslxx.mssl.ucl.ac.uk:8080/esdo-filestore/admin, click "Self Test" under "Configure"
  2. Open a terminal window and either run tail -f $TOMCAT_HOME/logs/catalina.out or look at the directory holding your filestore repository, such as $HOME/Astrogrid/filestore/repository.
  3. Click "Test"
  4. If the self test runs successfully, the repository directory will now contain 11 test files. Catalina.out will display a number of debug messages regarding open and closed streams.

FileManager

FileManager Downloads

  1. Go to the AstroGrid home page at http://www.astrogrid.org
  2. Click "AstroGrid V1.1 Release"
  3. On displayed page, click "Download and Install AstroGrid Services"
  4. On displayed page, click "Download Installers"
  5. On displayed page, click "FileManager"
  6. On displayed page, click "astrogrid-filemanager-installer-1.1-000fm.jar"

FileManager Installation

  1. For GUI installation: open the directory where astrogrid-filemanager-installer-1.1-000fm.jar is located, and type java -jar astrogrid-filemanager-installer-1.1-000fm.jar.
  2. The Astrogrid Filemanager Installer GUI should appear on your screen. Click "Next".

First time installation:

  1. Select following FileManager Installer options:
    • "Install FileManager"
    • "Register this FileManager"
    • "Save this configuration for future use"

  2. Click "Next" and then enter appropriate values on successive screens:
    (URLs and values highlighted in blue are current eSDO settings)
    • Tomcat running: click "continue" then "Next"
    • URL of Tomcat: http://msslxx.mssl.ucl.ac.uk:8080, then click "Next"
    • Tomcat manager user: <TOMCAT manager username>, then click "Next"
    • Tomcat manager password: <TOMCAT manager password>, then click "Next"
    • Ivorn of local filestore: ivo://esdo.mssl.ucl.ac.uk/esdo-filestore, then click "Next"
    • Context path: esdo-filemanager, then click "Next"
    • Authority: esdo.mssl.ucl.ac.uk, then click "Next"
    • Registry key: esdo-filemanager, then click "Next"
    • Filemanager repository: $HOME/Astrogrid/filemanager/repository, then click "Next"
    • URL of registry: http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/, then click "Next"
    • Contact name: Elizabeth Auden, then click "Next"
    • Contact email: eca@mssl.ucl.ac.uk, then click "Next"
    • Filename for current settings: $HOME/Astrogrid/filemanager-installer.properties, then click "Next"

  3. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  4. Click "Next" and then "Done"

Installation is now complete. Proceed to FileManager Configuration steps.

Updating existing installation:

  1. Select following FileManager Installer options:
    • "Load a previously saved configuration"
    • "Remove FileManager"
    • "Install FileManager"
    • "Register this FileManager"
    • "Save this configuration for future use"

  2. Click "Next":
  3. A pop-up is displayed with the words: "Enter the filename of your previously saved settings". Enter $HOME/Astrogrid/filemanager-installer.properties, then click "Next"
  4. Check that installation progress is displayed in textbox area of installer GUI.
  5. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  6. Click "Next" and then "Done"

Update installation is now complete using previous configuration settings. To check settings, and modify if necessary, follow steps in FileManager Configuration.

FileManager Configuration

  1. Go to Filemanager home page at http://msslxx.mssl.ucl.ac.uk:8080/esdo-filemanager
  2. Click "File Manager Administration"
  3. Look at the lefthand menu, under "Setup" click "Register Metadata"
  4. On the registration page, select "0.10", enter a contact name and email address, and optionally click "Add Authority Resource" (don't click this box if you added a new authorityID when you set up your registry). Click "Submit".
  5. You should see a textbox containing an auto-generated VOResources entry for the Filemanager. Check the XML to make sure there are no errors and click "Register". Note: this page will state the Astrogrid registry with which your Filemanager will be registered (hydra as of 26/04/05). Make sure that the authority ID you use is registered with this registry.
  6. Note: if the previous registration step did not work, try this instead: (Fix added by ECA on 27/04/05)
    • Open a terminal and open $TOMCAT_HOME/conf/Catalina/localhost/esdo-filestore.xml in a text editor.
    • Copy the variable entries for org.astrogrid.registry.admin.endpoint and org.astrogrid.registry.query.endpoint
    • Paste these two variables into the esdo-filemanager.xml file.
    • Refresh the Tomcat admin GUI to make sure that the new environment variables have been picked up for filemanager.
    • Return to the filemanager admin page at http://msslxx.mssl.ucl.ac.uk:8080/esdo-filemanager/admin
    • Click on "Register Metadata" under "Setup"
    • Select 0.10 and fill in a contact name and email address. Click "Submit"
    • You'll see a textbox containing the auto-generated VOResources entry for your filemanager. The top of the page should now say "You will be publishing this at: http://localhost:8080/esdo-registry/services/RegistryUpdate". Click "Register".
    • You should now be returned to the same page displaying the textbox XML entry with no errors printed at the top.

FileManager Test

  1. Currently there is no self-test for filemanager, but go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-filemanager/admin and click "Fingerprint" under "Configure" to see a generated report of the filemanager system.

JES

JES Downloads

  1. Go to the AstroGrid home page at http://www.astrogrid.org
  2. Click "AstroGrid V1.1 Release"
  3. On displayed page, click "Download and Install AstroGrid Services"
  4. On displayed page, click "Download Installers"
  5. On displayed page, click "JES"
  6. On displayed page, click "astrogrid-jes-installer-1.1-000j.jar"

JES Installation

  1. For GUI installation: open the directory where astrogrid-jes-installer-1.1-000j.jar is located, and type java -jar astrogrid-jes-installer-1.1-000j.jar.
  2. The Astrogrid JES Installer GUI should appear on your screen. Click "Next".

First time installation:

  1. Select following JES Installer options:
    • "Install JES"
    • "Save this configuration for future use"

  2. Click "Next" and then enter appropriate values on successive screens:
    (URLs and values highlighted in blue are current eSDO settings)
    • Tomcat running: click "continue" then "Next"
    • URL of Tomcat: http://msslxx.mssl.ucl.ac.uk:8080, then click "Next"
    • Tomcat manager user: <TOMCAT manager username>, then click "Next"
    • Tomcat manager password: <TOMCAT manager password>, then click "Next"
    • Context path: esdo-jes, then click "Next"
    • JES working directory: $HOME/Astrogrid/jes, then click "Next"
    • URL of registry: http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/, then click "Next"
    • Filename for current settings: $HOME/Astrogrid/jes-installer.properties, then click "Next"

  3. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  4. Click "Next" and then "Done"

Installation is now complete. Proceed to JES Configuration steps.

Updating existing installation:

  1. Select following JES Installer options:
    • "Load a previously saved configuration"
    • "Remove JES (does not remove the JES working dir)"
    • "Install JES"
    • "Save this configuration for future use"

  2. Click "Next":
  3. A pop-up is displayed with the words: "Enter the filename of your previously saved settings". Enter $HOME/Astrogrid/jes-installer.properties, then click "Next"
  4. Check that installation progress is displayed in textbox area of installer GUI.
  5. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  6. Click "Next" and then "Done"

Update installation is now complete using previous configuration settings. To check settings, and modify if necessary, follow steps in JES Configuration.

JES Configuration

  1. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-jes.
  2. Under "Installation Checks" in the lefthand menu, click "Installation Tests". You should see 7 green checks and no red x's.
  3. Further configuration variables can be set through the JNDI interface or in $TOMCAT/conf/Catalina/localhost. All variables have defaults, but one was changed in this configuration of JES:

JES Test

  1. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-jes/query-form.html
  2. Submit a sample workflow in the textbox and click "Submit".

JES Troubleshooting

  1. Make sure that astrogrid-utils-rev.jar is included in the /esdo-jes/WEB-INF/lib directory; its absence could cause problems generating workflows. This problem was detected on 12 May 2005 and should be resolved in future astrogrid-jes*.war releases.

Portal

Portal Downloads

  1. Go to the AstroGrid home page at http://www.astrogrid.org
  2. Click "AstroGrid V1.1 Release"
  3. On displayed page, click "Download and Install AstroGrid Services"
  4. On displayed page, click "Download Installers"
  5. On displayed page, click "Portal"
  6. On displayed page, click "astrogrid-portal-installer-1.1-000p.jar"

Portal Installation

  1. For GUI installation: open the directory where astrogrid-portal-installer-1.1-000p.jar is located, and type java -jar astrogrid-portal-installer-1.1-000p.jar. Note: if you get an "out of Memory" error during installation, try running java -Xmx128m -jar astrogrid-portal-installer-1.1-000p.jar instead.
  2. The Astrogrid Portal Installer GUI should appear on your screen. Click "Next".

First time installation:

  1. Select following Portal Installer options:
    • "Install the AstroGrid Portal"
    • "Save your settings for next time"

  2. Click "Next" and then enter appropriate values on successive screens:
    (URLs and values highlighted in blue are current eSDO settings)
    • Tomcat running: click "continue" then "Next"
    • URL of Tomcat: http://msslxx.mssl.ucl.ac.uk:8080, then click "Next"
    • Tomcat manager user: <TOMCAT manager username>, then click "Next"
    • Tomcat manager password: <TOMCAT manager password> , then click "Next"
    • URL of registry endpoint: http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/, then click "Next"
    • Location of JES: http://msslxx.mssl.ucl.ac.uk:8080/esdo-jes, then click "Next"
    • Location of SMTP server: mail.mssl.ucl.ac.uk, then click "Next"
    • SMTP username: <SMTP username>, then click "Next"
    • SMTP password: <SMTP password>, then click "Next"
    • Filename for current settings: $HOME/Astrogrid/portal-installer.properties, then click "Next"

  3. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  4. Click "Next" and then "Done"

Installation is now complete. Proceed to Portal Configuration steps.

Updating existing installation:

  1. Select following Portal Installer options:
    • "Load a previously saved configuration"
    • "Uninstall the AstroGrid Portal"
    • "Customise the login page background"
    • "Install the AstroGrid Portal"
    • "Save your settings for next time"

  2. Click "Next":
  3. A pop-up is displayed with the words: "Enter the filename of your previously saved settings". Enter $HOME/Astrogrid/portal-installer.properties, then click "Next"
  4. Check that installation progress is displayed in textbox area of installer GUI.
  5. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  6. Click "Next" and then "Done"

Update installation is now complete using previous configuration settings. To check settings, and modify if necessary, follow steps in Portal Configuration.

Portal Configuration

  1. To customize the portal, run java -jar astrogrid-portal-installer-1.1-000p.jar again.
  2. After clicking "Next", check the boxes for "Customize portal" and "save settings."
  3. Location of jpeg: click "Browse" and select an image like portal_background.gif from your directory, then click "Next"
  4. Click "Next" and "Done"
  5. You can sneakily change the portal background picture by renaming any JPEG image to "loginBackground.jpg" and placing a copy in $TOMCAT/webapps/astrogrid-portal/web/images.

Portal Test

  1. Go to http://msslxx.mssl.ucl.ac.uk:8080/astrogrid-portal: you should see the Astrogrid login screen with either your background image or the Astrogrid background image
  2. First, test that new users can request registration - click "register"
    • In the AstroGrid Registration box, fill in your name and email address, and click "Register"
    • Note: if you receive an error message:
      • Check $TOMCAT/logs/portal-astrogrid.log - look for a message in this or any of the logs that says "javax.activation.UnsupportedDataTypeException: no object DCH for MIME type text/plain"
      • If the above error occurs, then see how many copies of activation.jar and mail.jar are in your Tomcat system. $TOMCAT/webapps/astrogrid-portal/WEB-INF/lib includes activation-1.0.2.jar and mail-1.3.1.jar.
      • If other copies of any version of these jar files exist ($TOMCAT/common/lib is a good culprit), try deleting, moving, or renaming the copies not in astrogrid-portal/WEB-INF/lib. Restart Tomcat, try requesting registration again, and see if the problem disappears.

Portal troubleshooting

  1. Make sure that astrogrid-utils-rev.jar is included in the /astrogrid-portal/WEB-INF/lib directory; its absence could cause problems generating workflows. This problem was detected on 12 May 2005 and should be resolved in future astrogrid-portal*.war releases.

Community

Community Downloads

Note
No installer available for this component

Community Installation

See instructions at http://www.astrogrid.org/maven/docs/1.0.1/community/multiproject/astrogrid-community/index.html

  1. Rename astrogrid-community-1.0.1-b000ct.war to esdo-community.war
  2. Copy esdo-community.war into $TOMCAT/webapps
  3. Download the hsqldb database driver from http://www.astrogrid.org/maven/hsqldb/jars/hsqldb-1.7.1.jar
  4. Copy hsqldb-1.7.1.jar into $TOMCAT/common/lib

Community Configuration

  1. Open the Tomcat administration GUI
  2. Edit eSDO community environment variables: click through Tomcat Server -> Service -> Host (localhost) -> Context (/esdo-community) -> Resources -> Environment Entries. Edit the variables appropriate to your community (note: if your Tomcat admin GUI will only allow values < 70 characters, edit $TOMCAT/conf/Catalina/localhost/esdo-community.xml instead):
  3. Click "Commit Changes"
  4. Edit the eSDO data source variables: click through Tomcat Server -> Service -> Host (localhost) -> Context (/esdo-community) -> Resources -> Environment Entries. Click jdbc/org.astrogrid.community.database and edit the following variables:
    • Data Source URL: jdbc:hsqldb:/home/griduser/Astrogrid/community/org.astrogrid.community
    • JDBC Driver Class: org.hsqldb.jdbcDriver
    • User Name: sa
    • Password:
  5. Click "save" and "commit changes"
  6. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-community, click "Admin", and enter the Tomcat admin username and password when prompted
  7. For first time configuration, database report should NOT be healthy. Click "Reset DB".
  8. Click "Reset" - message should return saying DB is healthy with value "true".
  9. Click "Register Metadata"
  10. Click "Submit" and you should see a textbox containing the VOResources entry for your community - there will be Resource entries for a number of community services.
  11. Once you have ensured that the XML is correct, hit "Register". The community will be registered with the registry endpoint you specified in the environment entries, and you will be returned to the self-registration page.

Community Test

  1. Test the community by adding an account for yourself.
  2. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-community/admin, and click "Account" under "Administration"
  3. In the Add Account page, fill in Username, Display Name, Password, Description, and e-mail. Leave Home Space blank when you are adding a user - this will allow their home space area to be created automatically.
  4. Press "Add" - you should get a blue message stating that the account was added. The account information will now appear under "list of accounts".
  5. Now go to the portal at http://msslxx.mssl.ucl.ac.uk:8080/astrogrid-portal and log in.

CEA (Common Execution Architecture)

CEA Downloads

  1. Go to the AstroGrid home page at http://www.astrogrid.org
  2. Click "AstroGrid V1.1 Release"
  3. On displayed page, click "Download and Install AstroGrid Services"
  4. On displayed page, click "Download Installers"
  5. On displayed page, click "CEA"
  6. On displayed page, click "astrogrid-cea-installer-1.1-000a.jar"

CEA Installation

  1. For GUI installation: open the directory where astrogrid-cea-installer-1.1-000a.jar is located, and type java -jar astrogrid-cea-installer-1.1-000a.jar
  2. The Astrogrid CEA Installer GUI should appear on your screen. Click "Next".

First time installation:

  1. Select following CEA Installer options:
    • "Install cea"
    • "Register this cea"
    • "Save this configuration for future use"

  2. Click "Next" and then enter appropriate values on successive screens:
    (URLs and values highlighted in blue are current eSDO settings)
    • Ensure Tomcat is running on your target system with the Manager app. enabled: select "continue" then click "Next"
    • Please enter the URL of Tomcat on your system: http://msslxx.mssl.ucl.ac.uk:8080/, then click "Next"
    • Please enter the Tomcat manager username: <TOMCAT manager username>, then click "Next"
    • Please enter the Tomcat manager password: <TOMCAT manager password>, then click "Next"
    • Which flavour of CEA do you wish to (un)install ?: select commandline then click "Next"
    • Please enter the context path on the webserver for cea: esdo-cea, then click "Next"
    • Please enter the location of your working directory: /home/griduser/Astrogrid/applications/cea/commandline/work, then click "Next"
    • Please enter the URL of your registry: http://msslxx.mssl.ucl.ac.uk:8080/esdo-registry/, then click "Next"
    • Please enter the Authority under which this CEA will be registered: esdo.mssl.ucl.ac.uk, then click "Next"
    • Please enter the registry key for this CEA server: file:///home/griduser/Astrogrid/applications/esdoCeaRegistry.xml, then click "Next"
    • Enter a contact name for the registry: Elizabeth Auden, then click "Next"
    • Please locate the commandline config file for this cea: /home/griduser/Astrogrid/applications/esdo.mssl.ucl.ac.uk.xml, then click "Next"
    • Please enter the contact name you wish to put in the registry: Elizabeth Auden, then click "Next"
    • Please enter the contact email you wish to put in the registry: eca@mssl.ucl.ac.uk, then click "Next"
    • Enter a file name for saving your current settings: $HOME/Astrogrid/cea-installer.properties, then click "Next"

  3. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  4. Click "Next" and then "Done"

Installation is now complete. Proceed to CEA Configuration steps.

Updating existing installation:

  1. Select following CEA Installer options:
    • "Load a previously saved configuration"
    • "Remove cea"
    • "Install cea"
    • "Save this configuration for future use"

  2. Click "Next":
  3. A pop-up is displayed with the words: "Enter the filename of your previously saved settings". Enter $HOME/Astrogrid/cea-installer.properties, then click "Next"
  4. Check that installation progress is displayed in textbox area of installer GUI.
  5. Check that a second Installer GUI pops up upon completion of the installation and contains the line: "The reinstallation process is now complete"
  6. Click "Next" and then "Done"

Update installation is now complete using previous configuration settings. To check settings, and modify if necessary, follow steps in CEA Configuration.

CEA Configuration

  1. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-cea
  2. Open the Tomcat administration GUI
  3. Edit eSDO cea environment variables: click through Tomcat Server -> Service -> Host (localhost) -> Context (/esdo-cea) -> Resources -> Environment Entries. Select "Create new env entry" from "Available actions" and add the following new variables (as java.lang.Strings) appropriate to your commandline application tools (note: if your Tomcat admin GUI cannot accept values greater than 70 characters, edit these properties in $TOMCAT/conf/Catalina/localhost/esdo-cea.xml):
  4. Click "Commit Changes"
  5. Create the esdoCeaRegistry.xml file specified above in /home/griduser/Astrogrid/applications. An example is attached to this page.
  6. Create the esdo.mssl.ucl.ac.uk.xml file specified above in /home/griduser/Astrogrid/applications. An example is attached to this page.
  7. Install the commandline application you wish to use as a CEA tool. Specify the application name, input variable types, and output variable types in esdoCeaRegistry.xml and esdo.mssl.ucl.ac.uk.xml. Notes:
    • Currently repeatable input parameters can be specified, but a specific number of output parameters must be declared.
    • Input and output types can be complex, double, text, boolean, anyURI, anyXML, VOTable, RA, Dec, ADQL, binary, or FITS.
  8. For more information on configuring the esdo.mssl.ucl.ac.uk.xml file, please see http://msslxx.mssl.ucl.ac.uk:8080/esdo-cea/provider/ApplicationConfigruation.html.
  9. All new CEA applications can be declared in the same esdoCeaRegistry.xml and esdo.mssl.ucl.ac.uk.xml files.

CEA Test

  1. Open a terminal and make sure that the directory you specified in cea.commandline.workingdir.file exists; create this directory if it doesn't.
  2. Go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-cea
  3. First click "Fingerprint" under "Installation" - you should see the system report including a description of your applications and a VODescription of the CEA commandline component.
  4. Next, click "Installation Tests" under "Installation". You should see 4 integration tests with green checkmarks indicating "Passed".

PAL (Publisher's AstroGrid Library)

PAL Downloads

Note: No installer is available for this component.

Note: PAL will eventually be renamed DSA.

PAL Installation

To run a PAL webservice successfully, a database must be installed and configured correctly. A MySQL database is used by the eSDO server, but other database types are supported. Installation instructions are included for a typical MySQL setup.

  1. Install MySQL (note: this is the database used by the MSSL eSDO server, but other database types are supported)
    • Download the *.rpm from the MySQL downloads site
    • Unpack the rpm file and copy it to the relevant directory
    • Configure the database:
      • MySQL should be configured by the computer administrator to run as a daemon on machine startup
      • MySQL users "mysql" and "root" should be created by the computer administrator
      • MySQL user "root" should be used to create databases and tables, import data, and perform other table manipulation
      • The MySQL grant tables should list user "root" at "localhost"

  2. Install MySQL jdbc driver:
    • Download mysql-connector-java-3.0.16-ga-bin.tar from MySQL downloads site and untar
    • Copy mysql-connector-java-3.0.16-ga-bin.jar to $TOMCAT_HOME/common/lib
  3. Create the database and table, import data, and test the database (an example SQL sketch follows this list)
    • Create the database as root
    • Create the table as root
    • Create columns in the table(s) as root
    • Import table data from a tab-separated text file
    • Test the database from the commandline and through a jdbc connection

      Note: a duplicate "root" entry at "localhost.localdomain" and a FLUSH PRIVILEGES statement may be required to allow connections with the MySQL jdbc driver, which recognizes "localhost.localdomain" in place of "localhost".

  4. Rename astrogrid-pal-skycatserver-1.0.1-b007pl.war to esdo-pal.war
  5. Copy esdo-pal.war into $TOMCAT/webapps and restart Tomcat
  6. Check that the esdo-pal directory appears under $TOMCAT/webapps
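
The following is a minimal sketch of the database work in step 3, written for the MySQL commandline client. Only the 'mdi' table layout (location and date fields) is described later in this document; the database name, the filename column, the import file, and the password are assumptions for illustration.

    -- connect as the MySQL "root" user, e.g.:  mysql -u root -p
    CREATE DATABASE esdo;                         -- assumed database name
    USE esdo;
    CREATE TABLE mdi (
        filename VARCHAR(255),                    -- assumed column name
        location VARCHAR(255),                    -- file pathname returned by queries
        date     DATETIME                         -- observation date/time
    );
    -- import table data from a tab-separated text file (assumed filename)
    LOAD DATA LOCAL INFILE 'mdi_jan2002.txt' INTO TABLE mdi;
    -- test the database from the commandline
    SELECT location FROM mdi LIMIT 10;
    -- allow jdbc connections (see the "localhost.localdomain" note above)
    GRANT ALL ON esdo.* TO 'root'@'localhost.localdomain' IDENTIFIED BY 'rootpassword';
    FLUSH PRIVILEGES;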

PAL Configuration

  1. Documentation is bundled with war file: go to http://msslxx.mssl.ucl.ac.uk:8080/esdo-pal/. The left panel contains links to installation, release and configuration notes.
  2. Copy $Tomcat/webapps/warname/WEB-INF/classes/default.properties to $Tomcat/common/classes/astrogrid.properties
  3. Use the Tomcat Administration page to edit these properties in JNDI. Go to Tomcat Admin and click Tomcat Server -> Service -> Host -> Context (esdo-pal) -> Resources -> Environment Entries, fill in or change any properties in the text boxes to the right of the screen, and click "Commit Changes". This creates an XML file in $Tomcat/conf/Catalina/localhost with your updated properties. AstroGrid and Tomcat will always look for these properties in JNDI, so any further changes made in a text editor to $Tomcat/webapps/warname/WEB-INF/classes/default.properties or $Tomcat/common/classes/astrogrid.properties will be ignored. (See PAL Notes below.)

The list below shows some typical PAL JNDI settings. Pay particular attention to the datacenter.plugin and datacenter.resource.plugin parameters.

  • If your database is Postgres, look for "datacenter.querier.plugin.sql.translator" and uncomment that line
  • If your database offers circle or crossmatching functions (mainly astrophysical databases), uncomment the corresponding datacenter.implements properties

  • datacenter.querier.plugin=org.astrogrid.datacenter.queriers.sql.JdbcPlugin
  • datacenter.plugin.jdbc.drivers=com.mysql.jdbc.Driver
  • datacenter.plugin.jdbc.url=mysql://:3306/
  • datacenter.plugin.jdbc.user=
  • datacenter.plugin.jdbc.password=
  • conesearch.table=SampleStars
  • conesearch.ra.column=RA
  • conesearch.dec.column=DEC
  • conesearch.columns.units=deg
  • db.trigfuncs.in.radians=true
  • datacenter.implements.circle=true
  • datacenter.implements.xmatch=false
  • datacenter.url=http://localhost:8080/pal
  • datacenter.resource.plugin.1=org.astrogrid.datacenter.metadata.TabularDBResources
  • datacenter.resource.plugin.2=org.astrogrid.datacenter.metadata.TabularSkyServiceResources
  • datacenter.resource.plugin.3=org.astrogrid.datacenter.metadata.FileResourcePlugin
  • datacenter.resource.plugin.4=org.astrogrid.datacenter.metadata.CeaResourcePlugin
  • datacenter.resource.plugin.5=org.astrogrid.datacenter.metadata.AuthorityConfigPlugin

PAL Test

  1. Set the browser to localhost:8080/pal and check that the PAL home page (entitled "SkyCatServer") is displayed
  2. Select Query/Ask Adql/Sql Query from the links on the left of the page and check that the ADQL/SQL query page is displayed
  3. Enter a simple query based on the contents of your active database and select "Submit Query"
  4. Check that query results are returned in the default VOTable format

PAL Notes

  1. If you undeploy the PAL component or install a new version of Tomcat, the PAL properties will no longer be in JNDI. Re-enter the properties from $Tomcat/common/classes/astrogrid.properties in the Tomcat Administration page and click "Commit Changes" to add them back to JNDI. For this reason, it is good practice to copy any property changes you make in JNDI back into the $Tomcat/common/classes/astrogrid.properties file.
  2. If your PAL is named anything other than "pal.war", be sure to reflect this change in the astrogrid.properties file (see the sketch below).
  3. The astrogrid.properties variable "datacenter.max.return" defines the maximum number of records that will be returned by PAL in response to a query, regardless of the query source. The default setting is 2000.
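
As an illustration of notes 2 and 3, the sketch below shows how the relevant lines in $Tomcat/common/classes/astrogrid.properties might look for the esdo-pal.war deployment described above; the hostname and the raised record limit are assumptions for illustration.

    # Note 2: reflect the renamed war file in the datacenter URL
    datacenter.url=http://localhost:8080/esdo-pal
    # Note 3: raise the maximum number of records returned per query (default is 2000)
    datacenter.max.return=5000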

AstroGrid Tutorials

Click here to view this page as a pdf document.

Introduction

This userguide provides step-by-step instructions on how to create and run a simple workflow through an AstroGrid portal. The worked examples in this guide use the MySQL database and portal hosted on the eSDO (msslxx) project machine at MSSL.

What is AstroGrid ?

AstroGrid is a virtual observatory which provides access to a vast collection of astrophysical, solar, and solar terrestrial databases and software tools via the Internet. Access is via one of the AstroGrid portals - internet websites that allow the user to select from the library of astrophysical databases and software components and build these into "workflows".

Workflows can range from a simple query for a list of URLs, to complex tasks such as the popular 'Solar Moviemaker' application.

Figure 1 provides a schematic overview of the AstroGrid architecture. The sequence of events that occurs when a workflow is executed is as follows.

  • The user logs on to one of the AstroGrid portals.
  • The user creates a new/loads an existing 'workflow'.
  • The portal verifies that any service referenced in the workflow (Datacenter CEA or Commandline CEA tool) is listed in the registry to which it is connected.
  • The user submits the workflow to the Job Execution Service (JES) as a 'job'.
  • The JES double-checks with the registry that all referenced services are available.
  • The JES passes input parameters specified in the workflow to the requested services.
  • Services pass back output parameters to the JES. These may form the final result or the next stage of the workflow.
  • The process continues until an error is encountered or all steps in the job have been completed.
  • Final output results are written to the specified area of MySpace.

The various AstroGrid components and services depicted may reside on a single server, or may be hosted on a variety of servers hundreds of miles apart; in either case the AstroGrid architecture is transparent to the user.


Logging into an AstroGrid portal

In addition to the eSDO portal hosted at MSSL, there are other U.K. based AstroGrid portals available, including Leicester University's 'Cadairidris' and 'Zhumulangma' machines. Once an account has been created with a specific community, the user should be able to log on to any AstroGrid portal with the same username, password, and community name.

Figure 2 below shows a typical AstroGrid portal login page.

Anyone wishing to access an AstroGrid portal must have a valid login account. An account can be requested by clicking-on the 'register' link on the login page of the chosen portal and filling in the necessary details. These will be forwarded to the portal administrator, who is responsible for the creation of new accounts.

Once an account has been created, the user can log in to the portal as follows:

  1. Open a web browser to an AstroGrid portal, such as http://msslxx.mssl.ucl.ac.uk:8080/astrogrid-portal.
  2. Fill in the 'Username', 'Password' and 'Community' fields of the login form.
  3. Click-on the 'login' button and wait for the portal home page to be displayed.


Queries

A fundamental part of any workflow is the database query, the results of which may form one of the inputs to the next step of a workflow. Queries are written in standard SQL syntax but are converted to and saved in a special XML format called ADQL for workflow transmission.

The database on the eSDO server (msslxx.mssl.ucl.ac.uk) contains two tables ('trace' and 'mdi') with fields storing the filename, location and time/dates of TRACE and SOHO-MDI files downloaded for January 2002. The following example shows how to create a simple query that will extract the pathnames of selected files listed in the 'mdi' table. It will be used at a later stage as part of a simple workflow.

Creating a query

  1. Click-on the 'Queries' button on the toolbar and verify the 'Query Editor' page is displayed.
  2. Type a query into the 'Data Query Builder' scratchpad area, e.g. SELECT m.location FROM mdi as m where m.date >= "2002-01-01 00:00:00" and m.date < "2002-01-01 02:00:00"

Note: the "Execute Query" button is not enabled. To submit your query, save it to MySpace as detailed in the next section, and then load the query into a workflow.

Saving a query

  1. Click-on the 'Save to MySpace' button below the scratchpad and verify the 'MySpace Microbrowser' pop-up is displayed.
  2. Browse for appropriate sub-directory. Note - these are MySpace directories, not directories on your local machine or network. Click "Up" to go up a directory, or click "New folder" to create a new folder under the current directory. Alternatively, click the yellow folder icon next to a directory name in the microbrowser to select that directory.
  3. Highlight existing file or enter the name of a new file in 'File Name:' field.
  4. Click-on 'Save' and verify 'MySpace Microbrowser' pop-up removed.
  5. Click on the "MySpace" button at the top of the page. Use the MySpace Explorer to confirm that the query has been saved in MySpace.

Loading a query

Existing queries can be viewed and modified by loading them into the query scratchpad.

  1. Click-on the 'Queries' button on the toolbar.
  2. Click-on the 'Load from MySpace' button below the scratchpad and verify the 'MySpace Microbrowser' pop-up is displayed.
  3. Highlight the 'Name' of the newly created query.
  4. Click-on 'Open' on the pop-up. Verify the pop-up is removed and the selected query is displayed in the scratchpad area.
  5. Modify and save as required.


Workflows

The workflow allows the user to construct a complex sequence of astrophysical software processes: for example, querying a database, passing the results to external software applications for processing and then sending the output to the MySpace virtual filestore. These activities may be supplemented with inbuilt workflow commands, such as For and While loops, and a powerful scripting language called 'Groovy', a shorthand version of the Java programming language.

Two worked examples are provided: a simple workflow designed to query a database and write the results to the user's MySpace area, and a more complex workflow that combines query and filecopy routines. In either case, the user should proceed straight to the 'Saving a workflow' section of this tutorial after creating the workflow of their choice.

Creating a simple workflow

The following example shows how to construct a simple workflow using the query created previously to extract a list of filenames held in the eSDO database 'mdi' table and return these in VOTABLE (XML) format for storage in MySpace. The workflow will be created from scratch and should appear similar to the one shown in Figure 5.

  • Figure 4 - AstroGrid Workflow Editor page (new workflow shown):
    AGWorkflowEditor.jpg

  1. Click-on the 'Workflows' button on the toolbar.
  2. Verify the 'Workflow Editor' page is displayed as in Figure 4.
  3. Verify that 'new workflow' is displayed in the 'Name:' box, then go to step 6. If a workflow is already loaded, i.e. the 'Name:' field reads something other than 'new workflow', go to step 4.
  4. Move the mouse cursor to the 'File' dropdown menu below the toolbar and select 'New'.
  5. Click-on 'OK' on the "Any unsaved workflow information will be removed..." pop-up.
  6. Move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  7. Select 'Insert step' -> 'here' and click-on left mouse button. Verify 'Step' appended to 'Sequence'.
  8. Click-on 'Step' (highlighted with yellow box) and verify Step/Task panel is displayed at the bottom of the webpage.
  9. Click-on the '--Select task--' dropdown menu, move cursor to chosen task (highlighted in blue) and click-on again. For this example, select "esdo.mssl.ucl.ac.uk/esdo-pal/ceaApplication : adql". If you don't see this tool in the application list, type "esdo.mssl.ucl.ac.uk/esdo-pal/ceaApplication" in the box next to 'Task Name' and press the enter key - a red error message will be displayed if the tool name is not recognized.
  10. Add a 'Step name:' (optional but provides useful documentation).
  11. Add a 'Description:' (optional but provides useful documentation).
  12. Leave 'var name:' blank.
  13. Click-on the 'Update step details' button and wait for the webpage to refresh.
  14. Click-on workflow 'Step' and verify a step parameters box is displayed with 'VOTABLE' and 'Query' shown as input parameters, and 'Result' as output parameter.
  15. Click-on the 'Browse' button alongside the 'Query' input parameter and verify the 'MySpace Microbrowser' page is displayed.
  16. Find and highlight the query created earlier and click-on the 'Select' button. You can also type in the MySpace reference of a previously created query. Example: "ivo://esdo.mssl.ucl.ac.uk/UserName#queries/my_query"
  17. Verify that the 'MySpace Microbrowser' pop-up is removed and the query input parameter now displays the MySpace pathname of the selected query.
  18. Enter an output file in the output parameter field. You can use the 'MySpace Microbrowser' - browse to the appropriate sub-directory and create a new results file by entering a filename and clicking "New", or highlight an existing results file and click "Select" (old data will be overwritten). You can also manually enter the full MySpace pathname of a file. Example: "ivo://esdo.mssl.ucl.ac.uk/UserName#VOTable/my_query_results"
  19. Click-on 'Update parameter values' and allow the webpage to refresh.
  20. Add meaningful name and description to the Workflow Editor task 'Name:' and 'Description:' fields.
  21. Click-on 'update workflow details' button alongside the 'Name:' and 'Description:' fields. Allow webpage to refresh. Note: The type and number of parameters displayed are specific to this application. Other applications may use different parameters.

  • Figure 5 - AstroGrid Workflow Editor (with simple query workflow shown) page :
    AGWorkflowEditor2.jpg

Creating a more complex workflow

In this example we take our previous workflow a stage further by using pathnames returned from a query and supplying these as input to an application which extracts the listed files from a datastore on the eSDO msslxx server and copies them to MySpace. The workflow will be created from scratch and should appear similar to the one shown in Figure 6.

Step 1: Load query.

  1. Click-on the 'Workflows' button on the toolbar. Verify the 'Workflow Editor' page is displayed.
  2. Ensure that 'new workflow' is displayed in the 'Name:' box.
  3. Move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  4. Select 'Insert step' -> 'here' and click-on left mouse button. Verify 'Step' appended to 'Sequence'.
  5. Click-on 'Step' (highlighted with yellow box) and verify Step/Task panel is displayed at the bottom of the webpage.
  6. Click-on the '--Select task--' dropdown menu, move cursor to chosen task (highlighted in blue) and click-on again. For this example, select "esdo.mssl.ucl.ac.uk/esdo-pal/ceaApplication : adql". If you don't see this tool in the application list, type "esdo.mssl.ucl.ac.uk/esdo-pal/ceaApplication" in the box next to 'Task Name' and press the enter key - a red error message will be displayed if the tool name is not recognized.
  7. Add optional 'Step name:' (provides useful documentation).
  8. Add optional 'Description:' (provides useful documentation).
  9. Type 'source' into 'Var. name:' field.
  10. Click-on the 'Update step details' button and wait for the webpage to refresh.
  11. Click-on workflow 'Step'. Verify a step parameters box is displayed with 'VOTABLE' and 'Query' shown as input parameters, and 'Result' as output parameter.
  12. Click-on the 'Browse' button alongside the 'Query' input parameter and verify the 'MySpace Microbrowser' page is displayed.
  13. Find and highlight the query created earlier and click-on the 'Select' button. You can also type in the MySpace reference of a previously created query. Example: "ivo://esdo.mssl.ucl.ac.uk/UserName#queries/my_query".
  14. Click on the tick box alongside the input parameter field if it is not already ticked.
  15. Verify that the 'MySpace Microbrowser' pop-up is removed and the query input parameter now displays the MySpace pathname of the selected query.
  16. Ensure the output parameter field is left blank and the tick box is unticked.
  17. Click-on 'Update parameter values' and allow the webpage to refresh.

Step 2: Transform the query results into filenames to grab.

  1. With 'Step' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  2. Select 'Insert logic' -> 'Set' -> 'after' and click-on left mouse button. Verify 'Set:' is appended to the workflow immediately after 'Step'.
  3. Click-on 'Set' (highlighted with yellow box) and verify 'Set:' panel is displayed at the bottom of the webpage.
  4. Add 'file' to the 'Name:' field and set the 'Value' field to a nominal value, e.g. 'abc'.
  5. Click-on 'update set details' and allow the webpage to refresh.
  6. With 'Set' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  7. Select 'Insert logic' -> 'Set' -> 'after' and click-on left mouse button. Verify 'Set:' is appended to the workflow immediately after previous 'Set' .
  8. With 'Set' highlighted add 'urls' to the 'Name:' field and set the 'Value' field to a nominal value, e.g. 'abc'.
  9. Click-on 'update set details' button and allow the webpage to refresh. _The 'Set' command allows the user to declare and initialize variables for use in workflow scripts. At present it isn't possible to assign more than one variable to each 'Set' command, so several 'Set' commands are required if multiple variables are to be used._
  10. With 'Set' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  11. Select 'Insert logic' -> 'Script' -> 'after' and click-on left mouse button.
  12. Verify 'Script:' is appended to the workflow immediately after previous 'Set'.
  13. Click-on 'Script' (highlighted with yellow box) and verify 'Script:' panel is displayed at the bottom of the webpage.
  14. Add the following text to the 'Body:' field of the panel:

    if (source.size() > 0){
    print ("Run script");
    print "\n";
    votable = source.Result;
    parser = new XmlParser();
    nodes = parser.parseText(votable);
    urls = nodes.depthFirst().findAll{it.name() == 'TD'}.collect{it.value()}.flatten();
    }
    This script takes any filenames returned by the query and stores them in a variable called urls (defined earlier in the workflow); a simplified sketch of the VOTable document it parses is shown after this list. The 'Script' command gives the user access to 'Groovy', a shorthand Java-style scripting language that provides access to and control over many aspects of the JES. For more information, please refer to the AstroGrid Job Management User Guide at http://www.astrogrid.org/maven/docs/HEAD/jes/astrogrid-workflow-userguide.pdf

  15. Click-on 'update script details' button and allow the webpage to refresh.
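
For reference, the query result that the script parses is a VOTable document; the heavily simplified sketch below shows why the script collects the values of 'TD' nodes. The pathnames are invented examples, and a real VOTable carries additional metadata.

    <VOTABLE>
      <RESOURCE>
        <TABLE>
          <FIELD name="location" datatype="char" arraysize="*"/>
          <DATA>
            <TABLEDATA>
              <TR><TD>/data/mdi/fd_M_96m_01.0001.fits</TD></TR>
              <TR><TD>/data/mdi/fd_M_96m_01.0002.fits</TD></TR>
            </TABLEDATA>
          </DATA>
        </TABLE>
      </RESOURCE>
    </VOTABLE>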

Step 3: Send filenames as input to MDI data transfer tool.

  1. With 'Script' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  2. Select 'Insert loop' -> 'For loop' -> 'after' and click-on left mouse button. Verify 'For:' is appended to the workflow immediately after 'Script:'.
  3. Click-on 'For' (highlighted with yellow box) and verify 'For loop:' panel is displayed at the bottom of the webpage.
  4. Add 'x' to the 'Var:' field, and '${urls}' to the 'Items:' field.
  5. Click-on 'update for loop details' button and allow the webpage to refresh.
The 'For' command causes the workflow to execute the workflow commands that follow it in a loop. The size of the loop is defined by the variable assigned to the 'Items:' field; in this example, it is the number of pathnames stored in the 'urls' variable.
  1. With 'For' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  2. Select 'Insert sequence' -> 'after' and click-on left mouse button.
  3. Verify 'Sequence:' is appended to the workflow immediately after 'For:'.
  4. With 'Sequence:' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  5. Select 'Insert logic' -> 'Script' -> 'here' and click-on left mouse button.
  6. Verify 'Script:' is appended to the workflow immediately after 'Sequence:'.
  7. Click-on 'Script' (highlighted with yellow box) and verify 'Script:' panel is displayed at the bottom of the webpage.
  8. Add the following text to the 'Body:' field of the panel:
    count = urls.size();
    file = x;
    print "url is ... " + x + "\n";
    StringTokenizer st = new StringTokenizer(file, "/");
    while (st.hasMoreTokens()) {
    file = st.nextToken();
    }
    print "\n";
    jes.info(x);
    concatStep = jes.getSteps().find {it.getName() == 'GrabSDOFiles'};
    inputs = concatStep.getTool().getInput();
    inputs.clearParameter();
    p = jes.newParameter();
    p.setName('InputFiles');
    p.setIndirect(true);
    p.setValue(x);
    inputs.addParameter(p);

    This script takes the next pathname in the 'urls' array and uses it as the input to the final step of the workflow. It also parses the pathname according to the '/' delimiter and stores the result in the variable 'file'. This variable forms part of the output pathname of the final workflow step.

  9. Click-on 'update script details' button and allow the webpage to refresh.
  10. With 'Script' still highlighted, move cursor to 'Edit' dropdown menu on the 'Workflow Editor'.
  11. Select 'Insert Step' -> 'after' and click-on left mouse button. Verify 'Step' appended to 'Script'.
  12. Click-on 'Step' (highlighted with yellow box) and verify Step/Task panel is displayed at the bottom of the webpage.
  13. Click-on the '--Select task--' dropdown menu, move cursor to chosen task. For this example, select 'esdo.mssl.ucl.ac.uk/GrabSDOFiles'. If you don't see this tool in the application list, type 'esdo.mssl.ucl.ac.uk/GrabSDOFiles' in the box next to 'Task Name' and press the enter key - a red error message will be displayed if the tool name is not recognized.
  14. Add 'GrabSDOFiles' to 'Step name:' field.
  15. Add optional 'Description:'.
  16. Leave 'Var. name:' field blank.
  17. Click-on the 'Update step details' button and wait for the webpage to refresh.
  18. Click-on workflow 'Step'. Verify a step parameters box is displayed with one input field and one output field shown.
  19. Add 'InputFiles' to the input parameter field. This is the variable written into by the previous script so care must be taken to ensure that it matches exactly the spelling of the script variable (case sensitive).
  20. Add 'ivo://esdo.mssl.ucl.ac.uk/UserName#VOTable/${file}' to the output parameter field.
  21. Add name and description to the Workflow Editor task 'Name:' and 'Description:' fields.
  22. Click-on 'update workflow details' button alongside the 'Name:' and 'Description:' fields. Allow webpage to refresh.

  • Figure 6 - AstroGrid Workflow Editor (with file copy workflow shown) page :
    AGWorkflowEditor3.jpg

Saving a workflow

  1. Move cursor to 'File' dropdown menu on the 'Workflow Editor', then click-on 'Save'. Verify that the 'MySpace Microbrowser' pop-up is displayed.
  2. Select appropriate sub-directory for the workflow, type in a suitable name into the 'File Name:' field and click-on 'Save'. Verify that the 'MySpace Microbrowser' pop-up is removed.

Workflow is now saved in MySpace and is ready to be loaded and submitted as a job.

Loading/Submitting a Workflow

  1. Move the cursor to the 'File' dropdown menu on the 'Workflow Editor' and click-on 'Open'. Verify the 'MySpace Microbrowser' pop-up is displayed.
  2. Find and highlight the workflow created earlier and click-on the 'Open' button. You may also type in the file name. (Example: "ivo://esdo.mssl.ucl.ac.uk/UserName#Workflows/my_workflow".)
  3. Allow webpage to refresh and verify that the selected workflow is displayed.
  4. Move the cursor to the 'File' dropdown menu on the Workflow Editor and click-on 'Submit' and allow the webpage to refresh.
  5. Verify 'Workflow Editor' displays the message: 'Workflow successfully submitted to JES; ...'.
  6. Once job has completed, a VOTable file containing the query results should appear in your MySpace VOTable area.
NOTE: CURRENTLY RESULTS ARE NOT BEING SAVED TO MYSPACE! - ECA, 17 May

Checking Job Status

  1. Click-on either the 'job monitor' link on the 'Workflow Editor' or the 'Jobs' button on the toolbar. Verify the 'Job monitor' page is displayed.
  2. Verify job submitted is listed and 'Status' reads either 'ERROR', 'RUNNING' or 'COMPLETED'.
  3. When job has finished, i.e. status reads 'ERROR' or 'COMPLETED', click-on 'Job ID'. Verify workflow is displayed showing success/failure of each stage.
  4. Click-on 'Workflow transcript'. Verify 'Workflow transcript' page is displayed with details of the complete job transaction.

Deleting a Job

  1. Return to the 'Job Monitor' page by clicking-on the 'Jobs' button on the toolbar.
  2. Click-on 'delete-job' alongside the workflow previously submitted. Allow the webpage to refresh and verify the job removed from the list.


Other MySpace commands

MySpace features several file commands that can be applied to any file stored there, be it a query, workflow or an output file of any description.

Copy/Paste a file

  1. Browse for appropriate file or sub-directory in MySpace and highlight.
  2. Click on 'Copy'.
  3. Click on 'Paste' and verify that the 'Copy. Please supply a new name for the file' pop-up is displayed.
  4. Enter new filename and click-on 'OK'. Verify pop-up removed and new file displayed.

Deleting a file or sub-directory

  1. Browse for the appropriate file or sub-directory in MySpace and highlight it.
  2. Click-on 'Delete'. Verify 'Confirm Deletion' pop-up displayed.
  3. Click-on 'OK'. Verify pop-up removed and selected file deleted.

Uploading a file

Files of any format can be transferred to 'MySpace' from other sources, e.g. local machine or website, by using the upload facility.

From the desktop

  1. Click-on 'From the desktop...' button.
  2. Click-on 'Browse...' button. Verify filebrowser pop-up is displayed.
  3. Browse for appropriate file and double-click. Selected file is displayed in the 'Path:' field.
  4. Click-on 'Go' button. Verify 'Supply a new name for the uploaded file' pop-up is displayed.
  5. Enter new filename (optional) and click-on 'OK'. Verify pop-up removed and new file displayed.

From the web

  1. Click-on the 'From the web...' button. Verify the 'URL:' field is displayed.
  2. Enter valid URL and click-on 'Go' button. Verify 'Supply a new name for the uploaded file' pop-up is displayed.
  3. Enter new filename (optional) and click-on 'OK'. Verify pop-up removed and new file displayed.

    Invalid URLs will result in a zero-byte file being added to MySpace. The file will take the name of the suffix of the incorrect URL.

Viewing file contents

The contents of files stored in MySpace may be viewed.

  1. Highlight the 'Name' of the file.
  2. Click-on the 'Properties' button and verify the 'File Properties' pop-up is displayed.
  3. Click-on the 'Path:' link on the pop-up and verify the contents of the file are displayed in the pop-up.

    The pop-up viewer will display the contents of text files and recognised binary file formats (.fits, .mpg). If a non-recognised format is encountered, however, the user will be prompted to select a suitable application for viewing the file.

  4. Close the pop-up.

  • Figure 8 - AstroGrid MySpace file contents viewed:
    AGMySpaceExplorer2.jpg

The Registry

In order for a datacenter or software application to be used by the AstroGrid community, it must first be 'registered', i.e. details about the service it provides are uploaded in XML format to one of the servers dedicated to this task, called a 'Registry'.

There are several registries hosted in the U.K., shared by the various portals. These include the registry hosted by the eSDO project, and 'Hydra' and 'Galahad' at Leicester and Cambridge Universities respectively.

Although each portal communicates with just one registry, different registries share their information through a process called "harvesting". This is an automatic cross-checking process whereby registries "inform" each other of any services that may have been added or modified recently. In this way the various registries keep in sync with one another, thus ensuring all AstroGrid users have access to the latest datacenters/software tools.

At present services cannot be deleted from an AstroGrid registry. However, the 'status' property within the XML body of a service can be changed to 'deleted', which results in it being listed in lighter text than its active neighbours and signifying that it is no longer supported.


Abbreviations

CEA Common Execution Architecture
JES Job Execution Service
MSSL Mullard Space Science Laboratory
SDO Solar Dynamics Observatory

Phase B Workpackages

2100 Solar Algorithms

2110 Algorithm Coding

2111 Completed code and grid interfaces for algorithm applications 30/06/06

  • Identify and / or prepare test data for all algorithms
  • Implement algorithms detailed in milestone 1121
  • Write user interfaces for grid integration (using eSDO development server as example)
  • Write interfaces to processing machines for grid integration (using eSDO development server as example)

2112 Test script for scientific testing of algorithms 01/09/06

  • Write unit tests to confirm error handling, successful running, etc.
  • Run unit tests using automated software such as Maven or Eclipse
  • Write tests to confirm scientific accuracy of algorithms
  • Provide test scripts for scientific beta testers to run tests

2113 Completed tests with scientists' comments 22/12/06

  • Refine code until all automated unit tests run successfully.
  • Collect comments from scientific beta testers pinpointing areas of further refinement to solar algorithms.

2114 Completed, refined code satisfying scientific testing results 30/03/07

  • Obtain "Clean bill of health" report from automated unit test software
  • Collate scientific testing results indicating that algorithms satisfy scientific requirements

2120 Algorithm Grid Integration

2121 Completed integration of solar algorithms with AstroGrid 31/05/07

  • Create CEA interface for each algorithm
  • Register CEA applications with AstroGrid
  • Test CEA applications in AstroGrid workflow

2122 Completed integration of solar algorithms with JSOC 31/07/07

  • Deploy algorithm module in JSOC pipeline
  • Create CEA interface for pipeline
  • Register CEA applications with AstroGrid
  • Test CEA applications in AstroGrid workflow

2123 Completed integration of solar algorithms with SolarSoft 30/09/07

  • Wrap each algorithm module in IDL
  • Test algorithm from SolarSoft
  • Place algorithm in MSSL gateway for distribution

2200 Quicklook and Visualization

2210 Quicklook Products

2211 Completed application to extract labelled thumbnail images from FITS files 31/03/06

  • Create application to extract thumbnail images from FITS file
  • Add image labelling to application
  • Add metadata extraction to application

2212 Completed web browser application to generate image galleries on the fly 30/06/06

  • Create web browser GUI with user parameters for start time, end time, cadence, data product
  • Add image searching functionality
  • Load images into web browser GUI

2213 Completed web browser application to generate movies on the fly 31/07/06

  • Create web browser GUI with user parameters for start time, end time, cadence, data product
  • Add image searching functionality
  • Add Berkeley MPEG encoder to return movie to Sheffield

2220 Catalogues

2221 Completed database and DSA instance for thumbnail catalogue 31/08/06

  • Extract metadata from FITS files during thumbnail extraction
  • Port metadata from FITS file to MySQL database
  • Deploy DSA instance to interface with database
  • Register DSA with AstroGrid and test search inside AstroGrid workflow

2222 Completed database and DSA instance for small event / CME dimming region catalogue 30/09/07

  • Set up automatic execution of small event detection and CME dimming region recognition algorithms
  • Port metadata from algorithms to MySQL database
  • Deploy DSA instance to interface with database
  • Register DSA with AstroGrid and test search inside AstroGrid workflow

2223 Completed database and DSA instance for helioseismology mode parameters catalogue 30/09/07

  • Set up automatic execution of mode parameter analysis algorithm
  • Port metadata from algorithm to MySQL database
  • Deploy DSA instance to interface with database
  • Register DSA with AstroGrid and test search inside AstroGrid workflow

2230 SDO Streaming Tool

2231 Rapid prototype for streaming tool 30/06/06

  • Create prototype GUI with parameters for start time, end time, cadence, and data product
  • Create prototype multi-resolution streaming functionality
  • Allow pan and zoom in space and time

2232 Evaluate functionality of rapid prototype streaming tool 31/07/06

  • Measure progress in panning and zooming against development effort left
  • Evaluate time required to implement multiple cache streaming
  • Obtain user feedback on GUI and tool functionality

2233 Completed functionality to stream data from multiple caches 22/12/06

  • Stream from nearest cache when user pans in space
  • Stream from nearest cache when user zooms or pans in time

2234 Completed streaming tool 30/06/07

  • Deploy web browser GUI for streaming tool
  • Allow tool to stream data from US, UK, and Germany (if applicable)

2300 Data Centres

2310 UK Data Centre

2311 Completed implementation of data centre on eSDO development server 22/12/06

  • Set up MySQL database of SDO metadata
  • Deploy instance of DSA to search data
  • Deploy instance of CEA to transfer data
  • Register test DSA with AstroGrid
  • Register test CEA with AstroGrid

2312 Completed integration of SDO data centre with AstroGrid 30/09/07

  • Formally declare disc space for UK cache
  • Formally declare staging area for transfer of data from US to UK
  • Set up data transfer CEA application for rolling 60 day AIA cache
  • Set up data transfer for AIA and HMI request cache
  • Deploy instance of DSA to search data
  • Deploy instance of CEA to transfer data
  • Register DSA with AstroGrid
  • Register CEA with AstroGrid

2320 US Data Centre

2321 Completed network latency tests 31/03/06

  • Set up accounts at MSSL, UCL, Stanford and RAL
  • Perform specified tests with different transfer mechanisms and record latency

2322 Completed AstroGrid / VSO interface 30/09/06

  • Set up application with VSO search parameters
  • Deploy DSA with interface to VSO application
  • Register DSA with AstroGrid

2323 Completed support for AstroGrid modules installed at JSOC data centre 30/09/07

  • Provide support for DSA installation
  • Provide support for CEA installation
  • Provide support for workflow engine installation

-- ElizabethAuden - 29 Sep 2005
