
Helicity Deployment

Science Testing Document

Input

  • The algorithm takes as its input the output data from the Magnetic Field Extrapolation algorithm, that is, a FITS file containing the Bx, By and Bz components of the Non-Linear Force Free Field (NLFFF). These are arranged as 3 columns (one for each component) in a binary table extension.
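
For reference, the sketch below shows one way such a file could be inspected with CFITSIO from C. It assumes the three components are stored as the first three columns of the first binary table extension (HDU 2) and that single-precision values suffice; the filename, column order and HDU index are illustrative assumptions, not details taken from the delivered code.

    /* Minimal sketch only: reading Bx, By and Bz from the binary table
     * extension of an NLFFF FITS file with CFITSIO. Column order and HDU
     * number are assumptions for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "fitsio.h"

    int main(void)
    {
        fitsfile *fptr;
        int status = 0, hdutype = 0;
        long nrows = 0;

        fits_open_file(&fptr, "DeVoreCase2.fits", READONLY, &status);
        fits_movabs_hdu(fptr, 2, &hdutype, &status);     /* binary table HDU */
        fits_get_num_rows(fptr, &nrows, &status);

        float *bx = malloc(nrows * sizeof(float));
        float *by = malloc(nrows * sizeof(float));
        float *bz = malloc(nrows * sizeof(float));

        /* columns 1-3 are assumed to hold Bx, By and Bz respectively */
        fits_read_col(fptr, TFLOAT, 1, 1, 1, nrows, NULL, bx, NULL, &status);
        fits_read_col(fptr, TFLOAT, 2, 1, 1, nrows, NULL, by, NULL, &status);
        fits_read_col(fptr, TFLOAT, 3, 1, 1, nrows, NULL, bz, NULL, &status);

        fits_close_file(fptr, &status);
        if (status) fits_report_error(stderr, status);

        printf("Read %ld field values per component\n", nrows);
        free(bx); free(by); free(bz);
        return 0;
    }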

Compilation and Execution

Note: see http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/RunningNGSCode for instructions on compilation and execution on the UK's National Grid Service clusters.

  • Download source code: http://msslxx.mssl.ucl.ac.uk:8080/eSDO/Helicity_1.0_src.tar
  • gcc compilation: % mpicc -o Helicity top_level.c initialization.c helicity.c comms.mpi8.c -lcfitsio -lmpich -lm. A makefile is provided in the installation package to simplify building the algorithm: running 'make' will create an executable called Helicity (and a shared library called libHelicity.so for IDL support). Please refer to the README file in the package for details of how to install the software.

  • commandline execution: %
    mpiexec -n <no cpus> Helicity <input filename> <initialisation filename> <output filename>
    e.g. mpiexec -n 2 Helicity DeVoreCase2.fits DeVoreTestCases.dat output.fits

  • no cpus is a positive integer used to indicate the number of processors to be used in running the algorithm. This parameter (and the mpiexec -n prefix) can be omitted if the algorithm is to be run on a single cpu.

  • input filename is the name of the input FITS file containing the magnetic field from which the helicity is to be calculated. The full pathname is required if the file is not in the same directory as the algorithm executable.

  • initialisation filename is the name of the Ascii file containing the computational grid configuration data. The full pathname is required if the file is not in the same directory as the algorithm executable.

  • output filename is the name of the output FITS file containing the final helicity values. The full pathname is required if the file is not in the same directory as the algorithm executable.

Expected Output

The algorithm produces a FITS file with an ASCII table extension containing a magnetic helicity scaling factor for the input magnetic field. The actual helicity is the product of this scaling factor and L^4, where L is the region size in cm. For example, a scaling factor of -256 for a region of 100 Mm (= 10^10 cm) would give a total magnetic helicity of -256 * (10^10)^4 = -2.56 * 10^42 Mx^2.

The table extension also records the values of the parameters used to configure the staggered grid.
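
As a worked illustration of the scaling described above, the fragment below converts a scaling factor and region size into a total helicity in Mx^2. The variable names and values are purely illustrative.

    /* Illustrative only: total helicity H = scale * L^4, with L in cm.
     * The names here are hypothetical and not taken from the delivered code. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double scale = -256.0;                  /* scaling factor from the output table */
        double L_cm  = 1.0e10;                  /* region size: 100 Mm = 1e10 cm        */
        double H_Mx2 = scale * pow(L_cm, 4.0);  /* total magnetic helicity in Mx^2      */

        printf("Total magnetic helicity = %g Mx^2\n", H_Mx2);   /* about -2.56e42 */
        return 0;
    }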

Current level of completion

The algorithm uses the Magnetic Helicity computation from the FCTMHD3D package created by Dr Rick DeVore at the Naval Research Laboratory (NRL). It has been converted to 'C' from FORTRAN 90 and extraneous MHD simulation code stripped out according to Rick DeVore's guidelines. Multi-dimensional arrays have been replaced with single dimension arrays to make dynamic memory allocation simpler, and floating point arrays originally declared as doubles are now treated as floats in order to reduce the amount of allocated memory.

Existing Message Passing Interface (MPI) calls have been removed and new MPI calls added to speed up the potential field calculations, which can be compute-intensive for larger datasets. The code has been successfully built with the MPI library and run on the National Grid Service cluster at RAL.
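
The fragment below is a minimal sketch of the work-sharing pattern this describes: each rank evaluates a slice of the problem and the partial results are combined with an MPI reduction. It illustrates the general approach only; it is not the code in comms.mpi8.c, and the dummy workload and names are hypothetical.

    /* Sketch of MPI work-sharing: split a loop across ranks, then reduce. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long ntotal = 1000000;            /* illustrative number of work items */
        long lo = rank * ntotal / size;         /* this rank's slice                  */
        long hi = (rank + 1) * ntotal / size;

        double partial = 0.0, total = 0.0;
        for (long i = lo; i < hi; i++)
            partial += 1.0;                     /* stand-in for the real calculation  */

        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("combined result from %d ranks: %g\n", size, total);

        MPI_Finalize();
        return 0;
    }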

Dr DeVore has provided sample test data and expected results in order that we can verify our code.

The algorithm uses a staggered grid approach which defines how the input dataset is spread across a computational box. Set-up values for the staggered grid are defined in an initialization file which is supplied as one of the commandline arguments.

Recent experiments have shown that it takes approximately 60 cpu hours to run the algorithm on the RAL NGS cluster (around 12 hours of elapsed time across 5 cpus).

Future work

The interpolation routines currently used to assign magnetic field input data to the staggered grid are specific to the test case data provided. A generic interpolation scheme that can deal with all input data (including results from the Magnetic Extrapolation algorithm) has still to be developed; one possible approach is sketched below.
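
The sketch below shows one candidate generic scheme, trilinear interpolation of a field component stored on a regular grid. It is purely an illustration of the idea and is not the interpolation currently implemented.

    /* Illustrative only: trilinear interpolation of a regularly gridded field
     * component f(nx, ny, nz) at an arbitrary point (x, y, z) in grid units. */
    #include <stdio.h>

    static float trilinear(const float *f, int nx, int ny, int nz,
                           float x, float y, float z)
    {
        int i = (int)x, j = (int)y, k = (int)z;    /* lower-corner indices */
        float dx = x - i, dy = y - j, dz = z - k;  /* fractional offsets   */

        /* keep i+1, j+1, k+1 inside the cube */
        if (i > nx - 2) { i = nx - 2; dx = 1.0f; }
        if (j > ny - 2) { j = ny - 2; dy = 1.0f; }
        if (k > nz - 2) { k = nz - 2; dz = 1.0f; }

    #define F(a, b, c) f[((c) * ny + (b)) * nx + (a)]
        float c00 = F(i, j,     k    ) * (1 - dx) + F(i + 1, j,     k    ) * dx;
        float c10 = F(i, j + 1, k    ) * (1 - dx) + F(i + 1, j + 1, k    ) * dx;
        float c01 = F(i, j,     k + 1) * (1 - dx) + F(i + 1, j,     k + 1) * dx;
        float c11 = F(i, j + 1, k + 1) * (1 - dx) + F(i + 1, j + 1, k + 1) * dx;
    #undef F

        float c0 = c00 * (1 - dy) + c10 * dy;
        float c1 = c01 * (1 - dy) + c11 * dy;
        return c0 * (1 - dz) + c1 * dz;
    }

    int main(void)
    {
        /* 2 x 2 x 2 cube with value x+y+z at each corner */
        float cube[8] = { 0, 1, 1, 2, 1, 2, 2, 3 };
        printf("value at (0.5, 0.5, 0.5) = %g\n",
               trilinear(cube, 2, 2, 2, 0.5f, 0.5f, 0.5f));   /* expect 1.5 */
        return 0;
    }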

The code is intended to work with the output of the Magnetic Extrapolation algorithm. We will investigate how the potential field computed in that algorithm may be used to replace the corresponding calculation in the Helicity code.

Science Test Cases

See HelicityDeploymentResults

For all test case input files, download: http://msslxx.mssl.ucl.ac.uk:8080/eSDO/Helicity_1.0_testdata.tar

Case 1: DeVore Test Case #1

Description

This test case was originally used by Dr Rick DeVore in his research paper 'Dynamical Formation and Stability of Helical Prominence Magnetic Fields', ApJ 539:954-963 (see link under the 'Literature Search' section of the Helicity Computation Twiki page).

The dataset consists of a numerically generated bipolar magnetic field that provides the starting point for a shear motion simulation performed close to and parallel with the photospheric polarity inversion line. The MHD simulation uses Dr DeVore's package FCTMHD3D.

Input

  • Initialisation data: HelicityComputation/test/DeVoreTestCases.dat (see Test Case download file)

  • Input data: HelicityComputation/test/DeVoreCase1.fits (see Test Case download file)

The input dataset is a binary file (converted to FITS format) containing the Bx, By and Bz magnetic field components of the simulated bipolar field at time t=0 seconds, i.e. before the shear motion is introduced.

The datacube is a 250 * 95 * 95 pixel volume and the helicity is computed on a grid of 500 * 190 * 190 units (x-axis, y-axis and z-axis respectively).

Expected Output

The expected output is a magnetic helicity scaling factor close to zero.

Case 2: DeVore Test Case #2

Description

This test case uses the bipolar magnetic field defined in Test Case #1 after it has been subjected to a shear motion for 100 s. The shear motion is simulated using the MHD simulation package FCTMHD3D, devised by Dr DeVore.

Input

  • Initialisation data: HelicityComputation/test/DeVoreTestCases.dat (see Test Case download file)

  • Input data: HelicityComputation/test/DeVoreCase2.fits (see Test Case download file)

The input dataset is a binary file (converted to FITS format) containing the Bx, By and Bz magnetic field components of the simulated bipolar field at time t=100 seconds.

The datacube is a 250 * 95 * 95 pixel volume and the helicity is computed on a grid of 500 * 190 * 190 units (x-axis, y-axis and z-axis respectively).

Expected Output

The expected output is a magnetic helicity scaling factor close to -256. Our tests on the RAL NGS cluster show a value of approximately -124.

Case 3: DeVore Test Case #3

Description

This test case uses the bipolar magnetic field defined in Test Case #1 after it has been subjected to a shear motion for 200 s. The shear motion is simulated using the MHD simulation package FCTMHD3D, devised by Dr DeVore.

Input

  • Initialisation data: HelicityComputation/test/DeVoreTestCases.dat (see Test Case download file)

  • Input data: HelicityComputation/test/DeVoreCase3.fits (see Test Case download file)

The input dataset is a binary file (converted to FITS format) containing the Bx, By and Bz magnetic field components of the simulated bipolar field at time t=200 seconds.

The datacube is a 250 * 95 * 95 pixel volume and the helicity is computed on a grid of 500 * 190 * 190 units (x-axis, y-axis and z-axis respectively).

Expected Output

The expected output is a magnetic helicity scaling factor close to -256. Our tests on the RAL NGS cluster show a value of approximately -124.

Unit Testing

  • gcc compilation: A makefile (Makefile.test) is provided in the installation package to enable a unit test executable called HelicityTests to be built. When executed, it runs a series of internal checks and prints the results to standard output.

  • commandline execution: %
    HelicityTests

Source files with unit tests:

  • comms.mpi8.c (26 tests)
  • dataio.c (2 tests)
  • helicity.c (1 test)
  • initialization.c (1 test)
  • top_level.c (1 test)

Running the algorithm from AstroGrid

AstroGrid workflow instructions:

  1. Open AstroGrid workbench and click "Task Launcher"
  2. Tasks: find application, specify file as input, specify file as output, launch
  3. Task Launcher search: helicity computation or "Solar Helicity Computation"

  • Output:
    • OutputFile: helicity.fits (file reference; MySpace fits file)

Running the algorithm from IDL

The algorithm may be run from within IDL using the IDL wrapper Helicity.pro provided in the installation package. The input parameters are the same as for the commandline version. For example:

  • idl> .run Helicity.pro
  • idl> HELICITY_WRAP, 'DeVoreCase2.fits', 'DeVoreTestCases.dat', 'output.fits'

Please note: the libHelicity.so shared library is required by the IDL wrapper. Also, running the algorithm on multiple processors is not supported from IDL.

Staggered grids

The algorithm uses a staggered grid for the helicity computation. A staggered grid allows magnetic field data saved on uniform or non-uniform grids to be assigned to a user-defined grid along the x, y and z axes and allows regions of interest to be stretched along any axis independently of the other axes.

The configuration of the staggered grid is defined in an Ascii file and forms one of the input parameters to the algorithm.

Descriptions of the individual grid parameters used within the file, and their use, are shown in the table below.

| Parameter | Description | Comments |
| m1pes | Defines no. of processors used along x-axis | Should always be set = 1 |
| m2pes | Defines no. of processors used along y-axis | Ditto |
| m3pes | Defines no. of processors used along z-axis | Ditto |
| n1pes | Defines no. of processors used along x-axis | Ditto |
| n2pes | Defines no. of processors used along y-axis | Ditto |
| n3pes | Defines no. of processors used along z-axis | Ditto |
| nx1 | Defines no. of gridpoints in region #1 (x-axis) | Integer >= 0 |
| nx2 | Defines no. of gridpoints in region #2 (x-axis) | Integer >= 0 |
| nx3 | Defines no. of gridpoints in region #3 (x-axis) | Integer >= 0 |
| nx4 | Defines no. of gridpoints in region #4 (x-axis) | Integer >= 0 |
| ny1 | Defines no. of gridpoints in region #1 (y-axis) | Integer >= 0 |
| ny2 | Defines no. of gridpoints in region #2 (y-axis) | Integer >= 0 |
| ny3 | Defines no. of gridpoints in region #3 (y-axis) | Integer >= 0 |
| ny4 | Defines no. of gridpoints in region #4 (y-axis) | Integer >= 0 |
| nz1 | Defines no. of gridpoints in region #1 (z-axis) | Integer >= 0 |
| nz2 | Defines no. of gridpoints in region #2 (z-axis) | Integer >= 0 |
| nz3 | Defines no. of gridpoints in region #3 (z-axis) | Integer >= 0 |
| nz4 | Defines no. of gridpoints in region #4 (z-axis) | Integer >= 0 |

Note: the nx*, ny* and nz* parameters allow the computational box to be divided into a maximum of 4 regions along each axis. If there is no requirement to do so, then nx1, ny1 and nz1 should be set to the total no. of gridpoints along the x, y and z axes respectively and the remaining parameters set to 0. This is the case for the DeVore test cases and is recommended for the NLFFF output of the Magnetic Extrapolation algorithm.

| mmhd | No. of mhd3 array parameters | Should always be set = 11 |
| mtrn | No. of trn3 array parameters | Should always be set = 12 |
| l1 | Total no. of interior gridpoints along x-axis | nx1+nx2+nx3+nx4 - 6 |
| l2 | Total no. of interior gridpoints along y-axis | ny1+ny2+ny3+ny4 - 6 |
| l3 | Total no. of interior gridpoints along z-axis | nz1+nz2+nz3+nz4 - 6 |
| xlen1 | Defines computational domain of region #1 (x-axis) | Float >= 0.0 |
| xlen2 | Defines computational domain of region #2 (x-axis) | Float >= 0.0 |
| xlen3 | Defines computational domain of region #3 (x-axis) | Float >= 0.0 |
| xlen4 | Defines computational domain of region #4 (x-axis) | Float >= 0.0 |
| ylen1 | Defines computational domain of region #1 (y-axis) | Float >= 0.0 |
| ylen2 | Defines computational domain of region #2 (y-axis) | Float >= 0.0 |
| ylen3 | Defines computational domain of region #3 (y-axis) | Float >= 0.0 |
| ylen4 | Defines computational domain of region #4 (y-axis) | Float >= 0.0 |
| zlen1 | Defines computational domain of region #1 (z-axis) | Float >= 0.0 |
| zlen2 | Defines computational domain of region #2 (z-axis) | Float >= 0.0 |
| zlen3 | Defines computational domain of region #3 (z-axis) | Float >= 0.0 |
| zlen4 | Defines computational domain of region #4 (z-axis) | Float >= 0.0 |

Note: the xlen*, ylen* and zlen* parameters define the "size" of the regions along each axis and can be used to minimize the influence of boundary conditions. If the computational box is considered as a whole, i.e. not split into regions, then only xlen1, ylen1 and zlen1 should be set and the remaining parameters set to zero. This is the case for the DeVore test cases and is recommended for the NLFFF output of the Magnetic Extrapolation algorithm.

| dxstr1 | Stretching factor - region #1 (x-axis) | Float >= 0.0 |
| dxstr2 | Stretching factor - region #2 (x-axis) | Float >= 0.0 |
| dxstr3 | Stretching factor - region #3 (x-axis) | Float >= 0.0 |
| dxstr4 | Stretching factor - region #4 (x-axis) | Float >= 0.0 |
| dystr1 | Stretching factor - region #1 (y-axis) | Float >= 0.0 |
| dystr2 | Stretching factor - region #2 (y-axis) | Float >= 0.0 |
| dystr3 | Stretching factor - region #3 (y-axis) | Float >= 0.0 |
| dystr4 | Stretching factor - region #4 (y-axis) | Float >= 0.0 |
| dzstr1 | Stretching factor - region #1 (z-axis) | Float >= 0.0 |
| dzstr2 | Stretching factor - region #2 (z-axis) | Float >= 0.0 |
| dzstr3 | Stretching factor - region #3 (z-axis) | Float >= 0.0 |
| dzstr4 | Stretching factor - region #4 (z-axis) | Float >= 0.0 |

Note: dxstr*, dystr* and dzstr* parameters are used to exponentially stretch regions along each axis and can be used to emphasize areas of interest within the computational box. The settings should reflect the uniformity (or non-uniformity) of the original magnetic field grid. If the computational volume is considered as a whole, then only dxstr1, dystr1 and dzstr1 should be set and the remaining parameters set to zero. If the input data is on a uniform grid, as in the case of NLFFF output of the Magnetic Extrapolation algorithm, stretching factors should be set to unity, i.e. dxstr1 = 1.0, dystr1 = 1.0 and dzstr1 = 1.0.

| lpbc1 | Periodic boundary conditions (x-axis) | 1 = True, 0 = False |
| lpbc2 | Periodic boundary conditions (y-axis) | 1 = True, 0 = False |
| lpbc3 | Periodic boundary conditions (z-axis) | 1 = True, 0 = False |

Note: lpbc1, lpbc2 and lpbc3 define whether or not the data are periodic. In the case of the DeVore test cases and NLFFF output from the Magnetic Extrapolation algorithm these should always be set to FALSE (0).

| lsym1 | Computational domain is symmetric about x-axis | 1 = True, 0 = False |
| lsym2 | Computational domain is symmetric about y-axis | 1 = True, 0 = False |
| lsym3 | Computational domain is symmetric about z-axis | 1 = True, 0 = False |

Note: lsym1, lsym2 and lsym3 define whether or not the computational domain parameters (xlen*, ylen* and zlen*) are symmetrical about their respective axes. For example, if xlen1 = 24 and lsym1 = TRUE (1) then the domain along x spans -12 to +12 (symmetric about the x-axis). If lsym1 = FALSE (0), then the domain spans 0 to 24.

| lfixbz | Potential field calculation | 1 = Analytical, 0 = use Bfield input data |

Note: This parameter will always be FALSE (0), i.e. the potential field will always be calculated from the magnetic field input data.
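
To make the derived quantities concrete, the fragment below computes the interior gridpoint counts l1, l2 and l3 from the region gridpoint parameters under the single-region convention recommended above. The struct and values are illustrative (the 500 * 190 * 190 grid matches the DeVore test cases); the delivered code reads these settings from the initialization file supplied on the commandline.

    /* Illustrative only: l1 = nx1+nx2+nx3+nx4 - 6, and similarly for l2, l3. */
    #include <stdio.h>

    struct grid_config {
        int nx[4], ny[4], nz[4];   /* gridpoints per region along each axis */
        int l1, l2, l3;            /* interior gridpoints along each axis   */
    };

    static int interior(const int n[4])
    {
        return n[0] + n[1] + n[2] + n[3] - 6;
    }

    int main(void)
    {
        /* single-region set-up: nx1/ny1/nz1 hold the full axis lengths */
        struct grid_config g = { .nx = {500, 0, 0, 0},
                                 .ny = {190, 0, 0, 0},
                                 .nz = {190, 0, 0, 0} };

        g.l1 = interior(g.nx);
        g.l2 = interior(g.ny);
        g.l3 = interior(g.nz);

        printf("l1 = %d, l2 = %d, l3 = %d\n", g.l1, g.l2, g.l3);
        return 0;
    }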

Message Passing Interface (MPI)

For more details, please refer to: http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/MagneticExtrapolationDeployment#Message_Passing_Interface_MPI

-- MikeSmith - 26 Sep 2007
