MuoCal/OnlCal for Run-3

For general info on OnlCal and Data Production for Run-3, see the Data Production site.

This page contains some basic info on the MuoCal Run-3 online calibration code, including instructions for how to run it. Please consider this a work in progress; the document will hopefully evolve together with the code.

Reconstruction time, file sizes etc.

For a first test, the data placed on
/common/buffer1/filterdata/MUID*/f*PRDFF on Jan 28 were processed. This was a set of 84816 events in 10 runs (67565, 67567, 67633, 67634, 67782, 67783, 67784, 67790, 67798, 67799). The processing, on va032 took about 15 h (North files) + 23 h (South files) of wall-clock time on this, not heavily used, double processor-machine. I.e. about 1.6 seconds/event. This is about 4 times higher than the 0.4 s/event or 2.5 events/sec on a RCAS machine previously measured. Note that this RCAS test was with a non-filtered PRDF, so one would expect the average to go up quite a lot if one only looks at filtered events.

File sizes: the DSTs for these runs take up about 2.1 Gb in total. When merging these into nDSTs (counting events present in both the North and South filtered files only once; the overlap is at the percent level), this comes down to about 3.4 Mb. I only save the EventHeader and PHMuoTracksOut (PHdiMuoTracksOutv2) nodes. When only writing out events with at least one track, this goes down by close to a factor of 5. For the first real production attempt, I also included the LVL1 and BLT nodes and the MUID road and track-road relational tables for trigger etc. studies. The ratio between the OnlCal nDSTs and normal DSTs (no central-arm info included) is then about a factor of 100.
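A back-of-the-envelope check of the size reduction for the minimal nDSTs, using the sizes above (the factor of ~100 quoted for the first real production is smaller than this because the extra LVL1/BLT and MUID nodes enlarge the nDSTs):

```shell
# DST -> minimal nDST reduction for the test runs
awk 'BEGIN {
  dst_mb  = 2.1 * 1024   # ~2.1 Gb of DSTs, in Mb
  ndst_mb = 3.4          # merged nDSTs (EventHeader + PHMuoTracksOut only)
  printf "reduction: about a factor %d\n", dst_mb / ndst_mb
}'
# prints: reduction: about a factor 632
```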

Performance

I ran through some J/Psi candidate events from Run-2 pp PRDFs, and the dimuon mass peak appeared to have survived through OnlCal and onto the nDSTs. See the plot in the draft area.

Statistics (updated Feb 5)

The sample tested above had ~85 k filtered events. This corresponds to a BBC live sample of about 8.5 M for these runs (if I did the math correctly). With two perfect arms and triggers, we should have seen about 4 J/Psi's. Since one arm (North) shows basically nothing, we can divide this by 2 and end up with 2, which is fairly close to 1 (or 0).

The total sample until now is 352 M BBC live triggers, translating into approximately 80 J/Psi's (assuming North reconstruction doesn't work), and 8.3 M scaled MUID 1D triggers, translating into ~3700 CPU hours.
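Both numbers follow from scaling the per-8.5 M estimate in the previous paragraph and the ~1.6 s/event timing measured above (illustrative arithmetic only):

```shell
awk 'BEGIN {
  jpsi = 352e6 / 8.5e6 * 4 / 2    # scale ~4 J/Psi per 8.5 M BBC; /2 for South only
  cpu  = 8.3e6 * 1.6 / 3600       # 8.3 M MUID triggers at ~1.6 s/event
  printf "expected J/Psi (South only): ~%d\n", jpsi
  printf "CPU time: ~%d hours\n", cpu
}'
# prints: expected J/Psi (South only): ~82
#         CPU time: ~3688 hours
```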

The amount of filtered data on the data disks on va033-va044 corresponds to about 100 M BBC live triggers and 2.5 M MUID triggers, i.e. we should maybe be able to see about 25 J/Psi's in the South arm. On the first day of processing (~14 h before a reboot of va0xx early on 9 Feb), 520 files were successfully processed, corresponding to about 0.9 M MUID triggers. For the plots linked below, only the part that had made it to nDSTs so far (0.8 M MUID triggers) was included. The plots are in the draft area:

single muons: pt, pz, p, eta (pseudorapidity) (not much found in North)
di-muons: ++, --, +-, signal

Code location

It's all in CVS under
online/calibration/Run03/subsystems/mutr

Examples of usage

Here follows an example of how to run.
You need the OnlMon libraries installed and in your LD_LIBRARY_PATH; the build follows the normal PHENIX procedure.

For a quick summary of what to do to get going, here's what I did as phnxmutr on va032:

mkdir silvermy/onlcal
cd silvermy/onlcal/
cvs -l co -d . online/calibration/Run03
mkdir build install
cd build/
../autogen.sh --prefix=/home/phnxmutr/silvermy/onlcal/install
make install
setenv LD_LIBRARY_PATH \
"/home/phnxmutr/silvermy/onlcal/install/lib:$LD_LIBRARY_PATH"
# cd to your rundir
# copy setup and run_muo.C from
# online/calibration/Run03/subsystems/mutr
# to here
cd /data/phnxmutr/silvermy/process/north/run
source setup
# run through 100 events on a file
nice root -b run_muo.C\(\"data.prdf\",\"dst.root\",100\) -q >& log.txt
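To process many files in one go, the single-file command can be wrapped in a loop. This is a hypothetical sketch (not part of the original instructions), written in sh syntax rather than the csh used above, and assuming run_muo.C takes the same three arguments; the input path is the filtered-data location mentioned earlier:

```shell
# Loop over all filtered PRDFs, producing one DST and one log per input file.
for f in /common/buffer1/filterdata/MUID*/f*PRDFF; do
  [ -e "$f" ] || continue              # skip if the glob matched nothing
  b=$(basename "$f")
  nice root -b -q "run_muo.C(\"$f\", \"dst_$b.root\", 100)" > "log_$b.txt" 2>&1
done
```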



silvermy@lanl.gov 2003
