Note: see http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/RunningNGSCode for instructions on compilation and execution on the UK's National Grid Service clusters.
```
mpiexec -n <no cpus> Helicity <input filename> <initialisation filename> <output filename>
```
The algorithm produces a FITS file with an ASCII table extension containing a magnetic helicity scaling factor for the input magnetic field. The actual helicity is the product of this scaling factor and L^{4}, where L is the region size in cm. For example, a scaling factor of -256 for a region of 100 Mm (= 10^{10} cm) would give a total magnetic helicity of -256 * (10^{10})^{4} = -2.56 * 10^{42} Mx^{2}.
The table extension also records the values of the parameters used to configure the staggered grid.
The algorithm uses the Magnetic Helicity computation from the FCTMHD3D package created by Dr Rick DeVore at the Naval Research Laboratory (NRL). It has been converted to 'C' from FORTRAN 90 and extraneous MHD simulation code stripped out according to Rick DeVore's guidelines. Multi-dimensional arrays have been replaced with single dimension arrays to make dynamic memory allocation simpler, and floating point arrays originally declared as doubles are now treated as floats in order to reduce the amount of allocated memory.
Existing Message Passing Interface (MPI) calls have been removed and new MPI calls added to speed up the potential field calculations, which can be compute-intensive with larger datasets. The code has been successfully built with the MPI library and run on the National Grid Service cluster at RAL.
Dr DeVore has provided sample test data and expected results in order that we can verify our code.
The algorithm uses a staggered grid approach which defines how the input dataset is spread across a computational box. Set-up values for the staggered grid are defined in an initialization file which is supplied as one of the command-line arguments.
Recent experiments have shown that it takes approximately 60 CPU-hours to run the algorithm on the RAL NGS cluster (12 hours across 5 CPUs).
The interpolation routines currently used to assign magnetic field input data to the staggered grid are specific to the testcase data provided. A generic interpolation scheme that will deal with all input data (including results from the Magnetic Extrapolation algorithm) is still required.
The code is intended to work with the output of the Magnetic Extrapolation algorithm. Investigate how the potential field computed in that algorithm may be used to replace the corresponding calculation in the Helicity code.
For all test case input files, download: http://msslxx.mssl.ucl.ac.uk:8080/eSDO/Helicity_1.0_testdata.tar
The dataset consists of a numerically generated bipolar magnetic field that provides the starting point for a shear motion simulation performed close to and parallel with the photospheric polarity inversion line. The MHD simulation uses Dr DeVore's package FCTMHD3D.
The input dataset is a binary file (converted to FITS format) containing the Bx, By and Bz magnetic field components of the simulated bipolar field at time t=0 seconds, i.e. before the shear motion is introduced.
The datacube is a 250 * 95 * 95 pixel volume and the helicity is computed on a grid of 500 * 190 * 190 units (x-axis, y-axis and z-axis respectively).
The input dataset is a binary file (converted to FITS format) containing the Bx, By and Bz magnetic field components of the simulated bipolar field at time t=100 seconds.
The datacube is a 250 * 95 * 95 pixel volume and the helicity is computed on a grid of 500 * 190 * 190 units (x-axis, y-axis and z-axis respectively).
The input dataset is a binary file (converted to FITS format) containing the Bx, By and Bz magnetic field components of the simulated bipolar field at time t=200 seconds.
The datacube is a 250 * 95 * 95 pixel volume and the helicity is computed on a grid of 500 * 190 * 190 units (x-axis, y-axis and z-axis respectively).
HelicityTests
Classes with unit tests:
The algorithm may be run from within IDL using the IDL wrapper Helicity.pro provided in the installation package. The input parameters are the same as for the command-line version.
Please note: the libHelicity.so shared library is required by the IDL wrapper. Also, IDL does not support running the algorithm on multiple processors.
The algorithm uses a staggered grid for the helicity computation. A staggered grid allows magnetic field data saved on uniform or non-uniform grids to be assigned to a user-defined grid along the x, y and z axes and allows regions of interest to be stretched along any axis independently of the other axes.
The configuration of the staggered grid is defined in an ASCII file and forms one of the input parameters to the algorithm.
The individual grid parameters used within the file are described in the table below.
| Parameter | Description | Comments |
| --- | --- | --- |
| m1pes | Defines no. of processors used along x-axis | Should always be set = 1 |
| m2pes | Defines no. of processors used along y-axis | Ditto |
| m3pes | Defines no. of processors used along z-axis | Ditto |
| n1pes | Defines no. of processors used along x-axis | Ditto |
| n2pes | Defines no. of processors used along y-axis | Ditto |
| n3pes | Defines no. of processors used along z-axis | Ditto |
| nx1 | Defines no. of gridpoints in region #1 (x-axis) | Integer >= 0 |
| nx2 | Defines no. of gridpoints in region #2 (x-axis) | Integer >= 0 |
| nx3 | Defines no. of gridpoints in region #3 (x-axis) | Integer >= 0 |
| nx4 | Defines no. of gridpoints in region #4 (x-axis) | Integer >= 0 |
| ny1 | Defines no. of gridpoints in region #1 (y-axis) | Integer >= 0 |
| ny2 | Defines no. of gridpoints in region #2 (y-axis) | Integer >= 0 |
| ny3 | Defines no. of gridpoints in region #3 (y-axis) | Integer >= 0 |
| ny4 | Defines no. of gridpoints in region #4 (y-axis) | Integer >= 0 |
| nz1 | Defines no. of gridpoints in region #1 (z-axis) | Integer >= 0 |
| nz2 | Defines no. of gridpoints in region #2 (z-axis) | Integer >= 0 |
| nz3 | Defines no. of gridpoints in region #3 (z-axis) | Integer >= 0 |
| nz4 | Defines no. of gridpoints in region #4 (z-axis) | Integer >= 0 |
Note: nx*, ny* and nz* parameters allow the computational box to be divided into a maximum of 4 regions along each axis. If there is no requirement to do so, then nx1, ny1 and nz1 should be set to the total no. of gridpoints along the x, y and z axes respectively and the remaining parameters set to 0. This is the case for the DeVore testcases and recommended for the NLFFF output of the Magnetic Extrapolation algorithm.
| Parameter | Description | Comments |
| --- | --- | --- |
| mmhd | No. of mhd3 array parameters | Should always be set = 11 |
| mtrn | No. of trn3 array parameters | Should always be set = 12 |
| l1 | Total no. of interior gridpoints along x-axis | nx1 + nx2 + nx3 + nx4 - 6 |
| l2 | Total no. of interior gridpoints along y-axis | ny1 + ny2 + ny3 + ny4 - 6 |
| l3 | Total no. of interior gridpoints along z-axis | nz1 + nz2 + nz3 + nz4 - 6 |
| xlen1 | Defines computational domain of region #1 (x-axis) | Float >= 0.0 |
| xlen2 | Defines computational domain of region #2 (x-axis) | Float >= 0.0 |
| xlen3 | Defines computational domain of region #3 (x-axis) | Float >= 0.0 |
| xlen4 | Defines computational domain of region #4 (x-axis) | Float >= 0.0 |
| ylen1 | Defines computational domain of region #1 (y-axis) | Float >= 0.0 |
| ylen2 | Defines computational domain of region #2 (y-axis) | Float >= 0.0 |
| ylen3 | Defines computational domain of region #3 (y-axis) | Float >= 0.0 |
| ylen4 | Defines computational domain of region #4 (y-axis) | Float >= 0.0 |
| zlen1 | Defines computational domain of region #1 (z-axis) | Float >= 0.0 |
| zlen2 | Defines computational domain of region #2 (z-axis) | Float >= 0.0 |
| zlen3 | Defines computational domain of region #3 (z-axis) | Float >= 0.0 |
| zlen4 | Defines computational domain of region #4 (z-axis) | Float >= 0.0 |
Note: xlen*, ylen* and zlen* parameters are used to define the "size" of the regions along each axis and can be used to minimize the influence of boundary conditions. If the computational box is considered as a whole, i.e. not split into regions, then only xlen1, ylen1 and zlen1 should be set and the remaining parameters set to zero. This is the case for the DeVore testcases and recommended for the NLFFF output of the Magnetic Extrapolation algorithm.
| Parameter | Description | Comments |
| --- | --- | --- |
| dxstr1 | Stretching factor - region #1 (x-axis) | Float >= 0.0 |
| dxstr2 | Stretching factor - region #2 (x-axis) | Float >= 0.0 |
| dxstr3 | Stretching factor - region #3 (x-axis) | Float >= 0.0 |
| dxstr4 | Stretching factor - region #4 (x-axis) | Float >= 0.0 |
| dystr1 | Stretching factor - region #1 (y-axis) | Float >= 0.0 |
| dystr2 | Stretching factor - region #2 (y-axis) | Float >= 0.0 |
| dystr3 | Stretching factor - region #3 (y-axis) | Float >= 0.0 |
| dystr4 | Stretching factor - region #4 (y-axis) | Float >= 0.0 |
| dzstr1 | Stretching factor - region #1 (z-axis) | Float >= 0.0 |
| dzstr2 | Stretching factor - region #2 (z-axis) | Float >= 0.0 |
| dzstr3 | Stretching factor - region #3 (z-axis) | Float >= 0.0 |
| dzstr4 | Stretching factor - region #4 (z-axis) | Float >= 0.0 |
Note: dxstr*, dystr* and dzstr* parameters are used to exponentially stretch regions along each axis and can be used to emphasize areas of interest within the computational box. The settings should reflect the uniformity (or non-uniformity) of the original magnetic field grid. If the computational volume is considered as a whole, then only dxstr1, dystr1 and dzstr1 should be set and the remaining parameters set to zero. If the input data is on a uniform grid, as in the case of NLFFF output of the Magnetic Extrapolation algorithm, stretching factors should be set to unity, i.e. dxstr1 = 1.0, dystr1 = 1.0 and dzstr1 = 1.0.
| Parameter | Description | Comments |
| --- | --- | --- |
| lpbc1 | Periodic boundary conditions (x-axis) | 1 = True, 0 = False |
| lpbc2 | Periodic boundary conditions (y-axis) | 1 = True, 0 = False |
| lpbc3 | Periodic boundary conditions (z-axis) | 1 = True, 0 = False |
Note: lpbc1, lpbc2 and lpbc3 define whether or not the data are periodic. In the case of the DeVore test cases and NLFFF output from the Magnetic Extrapolation algorithm these will always be set to FALSE (0).
| Parameter | Description | Comments |
| --- | --- | --- |
| lsym1 | Computational domain is symmetric about x-axis | 1 = True, 0 = False |
| lsym2 | Computational domain is symmetric about y-axis | 1 = True, 0 = False |
| lsym3 | Computational domain is symmetric about z-axis | 1 = True, 0 = False |
Note: lsym1, lsym2 and lsym3 define whether or not the computational domain parameters (xlen*, ylen* and zlen*) are symmetric about their respective axes. For example, if xlen1 = 24 and lsym1 = TRUE (1), then the domain along x spans -12 to +12 (symmetric about the x-axis). If lsym1 = FALSE (0), then the domain spans 0 to 24.
| Parameter | Description | Comments |
| --- | --- | --- |
| lfixbz | Potential field calculation | 1 = Analytical, 0 = use Bfield input data |
Note: This parameter will always be FALSE (0), i.e. the potential field will always be calculated from the magnetic field input data.
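To make the parameter tables above concrete, the fragment below sketches values for a single-region, uniform grid matching the DeVore testcase (500 * 190 * 190 units). The `name = value` layout is an assumption for illustration, not a verbatim copy of a working file; the initialisation files supplied with the test data are the definitive reference, and the xlen*/ylen*/zlen* and lsym* entries shown here are placeholders.

```
# Illustrative only -- the layout and placeholder values are assumptions.
m1pes = 1    m2pes = 1    m3pes = 1      # processor counts: always 1
n1pes = 1    n2pes = 1    n3pes = 1      # ditto
nx1 = 500    nx2 = 0   nx3 = 0   nx4 = 0   # single region along x
ny1 = 190    ny2 = 0   ny3 = 0   ny4 = 0   # single region along y
nz1 = 190    nz2 = 0   nz3 = 0   nz4 = 0   # single region along z
mmhd = 11    mtrn = 12                  # fixed array-parameter counts
l1 = 494     l2 = 184     l3 = 184     # nx1 - 6, ny1 - 6, nz1 - 6
xlen1 = ...  ylen1 = ...  zlen1 = ...  # region sizes (placeholders); remaining len parameters 0.0
dxstr1 = 1.0 dystr1 = 1.0 dzstr1 = 1.0 # uniform grid: stretching factors of unity
lpbc1 = 0    lpbc2 = 0    lpbc3 = 0    # non-periodic data
lsym1 = 0    lsym2 = 0    lsym3 = 0    # placeholders: symmetry depends on the domain
lfixbz = 0                             # potential field from the Bfield input data
```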
For more details, please refer to: http://www.mssl.ucl.ac.uk/twiki/bin/view/SDO/MagneticExtrapolationDeployment#Message_Passing_Interface_MPI
-- MikeSmith - 26 Sep 2007