the problem we’re facing
PHENIX program heavy in “ensemble” physics
- typical day (or week) at the office: get lots of events, make foreground and background distributions, compare, improve code, repeat until published
needs to move lots of data very efficiently
needs to be comprehensible to PHENIX physicists
- people are accustomed to “staging files”
needs to work with the CAS analysis architecture
- lots of Linux boxes with 30 GB disk on each
- main NFS server with 3 TB disk
solution: optimized batch file mover
- similar to Fermilab data “freight train”
- works with existing tools: HPSS, ssh, pftp, perl
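rough sketch of the "freight train" idea (not the actual PHENIX scripts): gather all requests, stage them from HPSS in a single pftp session, then fan the files out to the CAS nodes' local disks over ssh/scp; host names, paths, and the pftp login details below are placeholders

    #!/usr/bin/env perl
    # Hypothetical sketch of a batch "freight train" mover:
    # collect all requested HPSS files, fetch them in one bulk pftp
    # session, then scatter them to CAS node local disks with scp.
    # Names, hosts, and pftp invocation details are site-specific placeholders.
    use strict;
    use warnings;

    my $request_list = shift @ARGV or die "usage: $0 request_list\n";
    my $stage_dir    = "/stage";            # assumed scratch area on the NFS server
    my $hpss_host    = "hpss.example.gov";  # placeholder HPSS endpoint

    # each request line: <hpss path> <CAS node> <destination dir>
    my @requests;
    open my $req, '<', $request_list or die "cannot open $request_list: $!";
    while (<$req>) {
        next if /^\s*(#|$)/;
        my ($hpss_path, $node, $dest) = split ' ';
        push @requests, { hpss => $hpss_path, node => $node, dest => $dest };
    }
    close $req;

    # step 1: one bulk pftp session instead of many single-file transfers;
    # write an ftp-style command script (login options omitted, site-specific)
    open my $cmd, '>', "$stage_dir/pftp.cmds" or die $!;
    print {$cmd} "bin\nlcd $stage_dir\n";
    print {$cmd} "get $_->{hpss}\n" for @requests;
    print {$cmd} "quit\n";
    close $cmd;
    system("pftp $hpss_host < $stage_dir/pftp.cmds") == 0
        or die "pftp batch failed\n";

    # step 2: scatter staged files to the local 30 GB disks of the CAS nodes
    for my $r (@requests) {
        my ($file) = $r->{hpss} =~ m{([^/]+)$};   # basename of the HPSS path
        system('scp', "$stage_dir/$file", "$r->{node}:$r->{dest}/") == 0
            or warn "copy of $file to $r->{node} failed\n";
    }

the intended gain: HPSS sees one large, ordered request per batch instead of many small per-file requests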