
Running XPATCH/FISC on the IFP machines

This page describes the particularities of the XPATCH/FISC installation on the
IFP machines at UIUC.


XPATCH can be run on the following IFP machines: guinan, jake, cozumel,
jamaica, and bashir. Guinan is a big 12-processor SGI, and jake is an
SGI O2. Both are over in the IFP lab on the second floor of Beckman.
Cozumel, jamaica, and bashir are all Suns on the first floor of CSL.
Cozumel is in Aaron’s office (C&SRL 123). Jamaica and Bashir are in the
DSP lab (C&SRL 122). Of the bunch, Bashir and Guinan have the most memory,
so they’re the best for big FISC runs. (Note, however, that Guinan is usually
quite busy and is frequently used for demos, during which non-demo-related
jobs may get kicked off the machine.)


I’ve used “cozumel” in all the examples which follow. Replace “cozumel”
with whatever machine you’re running on.

Security Requirements

To run XPATCH or FISC, you must be a member of the
xpatch group. XPATCH/FISC is “export-controlled,” so only
U.S. citizens can be members of the XPATCH group.

Necessary “Dot” Files

Before you embark on your XPATCH/FISC adventures, make a file in your
home directory called .xpatch_gui.sun which contains:

setenv XPATCH_MAIN_PATH /workspace/xpatch2.4
setenv XPATCH_GUI      $XPATCH_MAIN_PATH/bin.sun

if ($HOST == "cozumel.csl.uiuc.edu") setenv XG_LICENSE $XPATCH_MAIN_PATH/license/demaco.cozumel.dat
if ($HOST == "bashir.ifp.uiuc.edu")  setenv XG_LICENSE $XPATCH_MAIN_PATH/license/demaco.bashir.dat
if ($HOST == "jamaica.csl.uiuc.edu") setenv XG_LICENSE $XPATCH_MAIN_PATH/license/demaco.jamaica.dat

setenv XG_ADVANCED     $XPATCH_MAIN_PATH/dynamic 
setenv XG_CIFER        $XPATCH_MAIN_PATH/dynamic 
setenv XG_DATA_FILES   $XPATCH_MAIN_PATH/data 
setenv XG_HELP_PAGES   $XPATCH_MAIN_PATH/help 
setenv XG_RAMLIB       $XPATCH_MAIN_PATH/data/demaco 
setenv XPATCHES_DATA   $XPATCH_MAIN_PATH/data/xpatches

And a file called .xpatch_gui.sgi which contains:

setenv XPATCH_MAIN_PATH /workspace
setenv XPATCH_GUI      $XPATCH_MAIN_PATH/xpatch2.4/bin

if ($HOST == "guinan.ifp.uiuc.edu") setenv XG_LICENSE $XPATCH_MAIN_PATH/xpatch2.4/license/demaco.guinan.dat
if ($HOST == "jake.ifp.uiuc.edu")   setenv XG_LICENSE $XPATCH_MAIN_PATH/xpatch2.4/license/demaco.jake.dat

setenv XG_ADVANCED     $XPATCH_MAIN_PATH/xpatch2.4/dynamic
setenv XG_CIFER        $XPATCH_MAIN_PATH/xpatch2.4/dynamic
setenv XG_DATA_FILES   $XPATCH_MAIN_PATH/xpatch2.4/data
setenv XG_HELP_PAGES   $XPATCH_MAIN_PATH/xpatch2.4/help
setenv XG_RAMLIB       $XPATCH_MAIN_PATH/xpatch2.4/data/demaco
setenv XPATCHES_DATA   $XPATCH_MAIN_PATH/xpatch2.4/data/xpatches

Also add the following lines to your .cshrc file, somewhere
after it sets the main path variable:


if ($VENDOR == "sun") then
source ~/.xpatch_gui.sun
set path=($path /workspace/xpatch2.4/bin.sun )
else
# if ($VENDOR == "sgi")
source ~/.xpatch_gui.sgi
set path=($path /workspace/xpatch2.4/bin )
endif
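
The if/else above assumes your .cshrc already defines the VENDOR variable
somewhere earlier. If yours doesn’t, a fragment along these lines, placed
before the block above, should do the trick (the mapping from uname output
to “sun”/“sgi” is my assumption, so double-check it on the machine you’re
using):

# Set VENDOR from the OS name if it isn't already defined (assumed mapping)
if (! $?VENDOR) then
    switch (`uname -s`)
        case SunOS:
            setenv VENDOR sun
            breaksw
        case IRIX*:
            setenv VENDOR sgi
            breaksw
    endsw
endif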

Making It Go

If your dot files are set up correctly, typing xpatch.x should
bring up the main GUI. You can also run the individual programs from the
command line: fisc.x, xpatchf.x, xpatcht.x, and so on. Be sure to browse
through the files in the /workspace/xpatch2.4/doc directory.
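
If the GUI doesn’t come up, it’s usually because the .xpatch_gui files aren’t
being sourced. A quick sanity check (nothing XPATCH-specific here, just
ordinary shell commands) is:

echo $XPATCH_GUI
echo $XG_LICENSE
which xpatch.x

The first two should print paths under /workspace matching the dot files
above, and which should find xpatch.x inside the $XPATCH_GUI directory. If
you instead get “Undefined variable” errors, your .cshrc isn’t sourcing the
right .xpatch_gui file.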

The License Manager

The license manager is designed so that it can run on just one machine,
and then all the other machines look to that one machine. However, I never
managed to get that to work. Hence, I’ve set it up so each machine runs
its own individual copy of the license manager. There’s a separate
demaco.[whatever].dat file for each machine, where
[whatever] is a machine name.


Hopefully, the license manager will already be running when you need it.
If it’s not, you’ll have to start it yourself.


To do something with the license manager, first go to the license manager
directory:


cd /workspace/xpatch2.4/license


To see if it’s running: Type something like:


sun/lmstat -c /homes/lanterma/xpatch2.4/license/demaco.cozumel.dat


(If you’re running on an SGI, use “sgi” instead of “sun” in your command)
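

Assuming you’ve sourced the dot files described above, you can skip typing
out the full path and let the environment variable do the work; for example,
on one of the Suns:

sun/lmstat -c $XG_LICENSE

This is just a convenience, and it assumes the license files under
/workspace/xpatch2.4/license are the ones the running server was started
with; the explicit path shown above works fine too.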


If everything is up and running, you should get something like:

lmstat - Copyright (C) 1989-1994 Globetrotter Software, Inc.
Flexible License Manager status on Wed 11/4/98 15:53

License server status (License file: /homes/lanterma/xpatch2.4/license/demaco.cozumel.dat):

cozumel.csl.uiuc.edu: license server UP (MASTER)

Vendor daemon status (on cozumel.csl.uiuc.edu):

DEMACO: UP


To start the license manager: Go to the license directory and type

sun/lmgrd -c demaco.cozumel.dat


It should spit out something along the lines of:

15:31:37 (lmgrd) Starting vendor daemons ...  
15:31:37 (lmgrd) Started DEMACO (internet tcp_port 41691 pid 11353)
15:31:39 (DEMACO) Server started on cozumel.csl.uiuc.edu for: xpatch
15:31:39 (DEMACO) fisc
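
The example above is for one of the Suns. On Guinan or Jake the corresponding
command should be the same thing with the SGI binaries and that machine’s
license file, i.e. something like (I haven’t verified this exact line):

sgi/lmgrd -c demaco.guinan.dat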

To shut down the license manager: Go to the license directory and
type

lmdown -c /homes/lanterma/xpatch2.4/license/demaco.cozumel.dat


In general, bringing down the license manager is not something you’ll need
to do.

The Job Control Server

The job control server allows you to set up a bunch of runs to be run either
simultaneously, or in “batch” mode, i.e. one right after the other. On any
given machine (except for a multiprocessor machine like Guinan), you really
want to run just one job at a time, especially with memory-intensive codes
like FISC.


Ideally, the job control server should be run as root, so multiple users
can use it, and the job control server can change ownership of files as
needed. Under this scheme, you can also run different jobs on different
machines, controlling them all from a single machine.
However, we opted not to do this,
since every time something went wrong, we would have
to go to our system administrator. With XPATCH/FISC,
things tend to go wrong a lot,
and I wanted to make sure that we could fix everything ourselves without
having to go through the administrator. We lose some convenience and
functionality, but it’s worth it to be able to immediately fix problems.


Hence, each person must start and shut down the job server on their
own. During an XPATCH/FISC session, you start the job server, and when
you’re done, you kill the job server.
If you don’t kill the job server, then
no one else can run jobs on that machine
since XPATCH really doesn’t like having two job
servers going at the same time. So don’t forget to kill the job server
once you’re done.


To shut down the job control server: Type

ps -ef | grep jc_server.x


If it’s up, you’ll get something like

lanterma  4305  4270  0 19:09:59 pts/6    0:00 /workspace/xpatch2.4/bin.sun/jc_server.x /workspace/xpatch2.4/bin.sun


Then type kill 4305 and kill 4270, or whatever your
numbers are as shown by the ps command. (I forget which number
is the one you really want to kill, so go ahead and kill them both.)
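
If you’d rather not pick the numbers out by hand, a one-liner along these
lines (plain ps/grep/awk plumbing, nothing XPATCH-specific) finds the PID of
any running jc_server.x and kills it; note that it only kills the jc_server.x
process itself, not its parent:

kill `ps -ef | grep jc_server.x | grep -v grep | awk '{print $2}'`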


To start the job control server: Type

$XPATCH_GUI/jc_server.x $XPATCH_GUI &


You’ll get something like

NOTE: Can't open Job Control configuration file: /etc/jc/server.cfg
Therefore, default load averages will be assumed.
jc_server.x Running on bashir.ifp.uiuc.edu...


The “note” is perfectly normal and can be ignored.

Advice on FISC

For a given incident angle and frequency, FISC generates the current
distribution on the target surface. This is what takes most of the computation
time. Once it has the current distribution, it can generate the complex
scattered field at a variety of observation angles relatively quickly.
Thus, in planning FISC runs, it is good to keep in mind that additional
incident angles and frequencies are expensive, whereas additional observation
angles are cheap. With this in mind, one can use rules of thumb based on the
Nyquist criterion (some suggested by FISC itself) to decide how finely to
sample in frequency; a rough worked example follows below.
In addition, FISC can employ a “bistatic-to-monostatic” approximation
to synthesize additional incident angles from observation angles. This
is discussed in the FISC documentation.
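
As a rough illustration of the kind of rule of thumb involved (this is the
generic Nyquist-style argument, not necessarily the exact formula FISC
suggests): if the target spans at most D meters along the radar line of
sight, the frequency step should satisfy

delta_f <= c / (2 * D)

so that the synthesized range profile doesn’t alias. For example, a 15-meter
target gives delta_f <= (3e8 m/s) / (2 * 15 m) = 10 MHz, i.e. frequency
samples no more than about 10 MHz apart.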

Dr. Jiming Song of CCEM offered the following advice concerning FISC:

  • FISC has an option to solve the Magnetic Field Integral Equation
    (MFIE), the Electric Field Integral Equation (EFIE), or the Combined Field
    Integral Equation (CFIE). The CFIE uses the same amount of memory as the
    EFIE, but it greatly reduces the number of iterations, so it should be
    used whenever possible. Unfortunately, the CFIE can only be used for a
    closed target.

  • One can choose between the conjugate gradient (CG) or biconjugate
    gradient (BICG) methods. The CG always converges, but is slower. Although
    the BICG may not converge in some cases, it is faster than the CG.
    If the parameter “alpha” (see the FISC documentation) is set to 1,
    CG is recommended; if it is set to 0.5, either method may be used. (In
    practice, I haven’t encountered a case where BICG failed to converge.)

  • The number of levels in the multilevel fast multipole algorithm
    can be specified, or a value of -1 can be entered, in which case the
    code decides on the number of levels. In general, the latter option is
    recommended. The size of the finest cube, displayed by the code as, for
    instance, “finest cube in lambda = 0.1667”, should be about 1.2 to 2
    times the longest edge, displayed as “longest edge length in lambda =
    0.1250.” There may be times when the user will want to set the number
    of levels in order to fit FISC into the available memory, since memory
    requirements decrease as the number of levels increases. The number of
    levels cannot be increased arbitrarily, however; accuracy degrades if
    the number of levels is too large.

Some example memory requirements for the VFY218 plane,
for different frequencies and for different
numbers of levels, are given below. Memory is in megabytes. Note that the
memory requirements increase as frequency increases. A star indicates the
number of levels suggested by the code.

                        Number of levels
Freq (MHz)       4         5         6         7         8
  55.25        82.62*      -         -         -         -
  79.25        86.00*      -         -         -         -
 175.25       125.96     39.69*      -         -         -
 211.25       201.71     62.04*    28.44       -         -
 471.25          -          -      283.8*    128.6     118.62
 681.25          -          -     1377.2     436.7*    294.7
 885.25          -          -        -       797.4*    436.0


Last updated 6/2/99. Send comments or questions to
lanterma@ifp.uiuc.edu.