How and Where to run computing jobs

Running Jobs on the Cluster

The best way to run background computing jobs on the Unix cluster is to submit them to the Condor batch system. At the simplest level you can just use the “condor_run” command, e.g.:

condor_run yourprogram

Using the more advanced features of Condor can be valuable for long-running jobs; for example, your job can be checkpointed so that if a workstation is rebooted, the job can resume without starting from the beginning.
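
As a concrete illustration of this more advanced route, a job can be described in a submit file and handed to the pool with condor_submit. The sketch below assumes the standard universe (which provides the checkpointing mentioned above and typically requires relinking your program with condor_compile); the file name job.submit and the output/error/log names are just examples:

# job.submit - example submit description file (the names here are illustrative)
universe   = standard        # standard universe provides checkpointing
executable = yourprogram     # relink with condor_compile for the standard universe
output     = yourprogram.out # stdout from the job
error      = yourprogram.err # stderr from the job
log        = yourprogram.log # Condor's record of the job's progress
queue                        # submit one copy of the job

Submit it with “condor_submit job.submit” and check its status with “condor_q”.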

Running Jobs Locally in the Background

If you're not using Condor, then any non-interactive compute jobs should be niced - this reduces their priority relative to interactive use, so that workstation users still have a responsive system. In the absence of interactive load, the system is still fully available to your jobs, so in most cases the runtime will not be much affected.

If “yourprogram” is your executable, then you can run it nicely using the command:

nice +n yourprogram

where n is a number between 1 and 19 (the higher the number, the lower the job's priority). The “+n” form is the csh/tcsh built-in syntax; from a Bourne-style shell (sh/bash), use “nice -n n yourprogram” instead.

You should choose the priority in the most socially-responsible way you can manage, according to how long you expect the job to run - e.g., jobs which may run for days should be run at a lower priority than those which might only take a few hours. We also consider a nice level of 4 to be the minimum socially-acceptable level for background jobs - so, for example:

nice +4 yourprogram
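
For long background jobs started from a terminal, it can also help to detach them from the terminal and capture their output, or to lower the priority of a job that is already running. A minimal sketch, assuming a Bourne-style shell (sh/bash); the log file name and the process ID are placeholders:

nohup nice -n 4 yourprogram > yourprogram.log 2>&1 &   # keeps running after you log out; output goes to the log file
renice -n 19 -p 12345                                  # reduce the priority of an already-running process (PID 12345 is an example)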

I/O-intensive jobs

If your job requires manipulation of large files, it will be faster to use scratch storage on the local machine rather than your home directory (which is accessed over the network). Check our pages on file storage to see where you can store such files.
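
For example, a job that reads and writes large files can copy its input to scratch space, run there, and copy only the results back. This is a sketch only, assuming a Bourne-style shell: the /scratch/$USER path, the nice level, and the file names are placeholders - see the file storage pages for the actual scratch location on your machine.

mkdir -p /scratch/$USER/myjob                    # assumed scratch location - check the file storage pages
cp ~/data/input.dat /scratch/$USER/myjob/        # copy the input from your home directory once
cd /scratch/$USER/myjob
nice -n 4 yourprogram input.dat > output.dat     # the heavy I/O now happens on the local disk
cp output.dat ~/results/                         # copy only the results back over the network
cd && rm -r /scratch/$USER/myjob                 # clean up the scratch area when finished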

Where to run jobs

  • Spartha (also known as physics.umn.edu) is intended for general-purpose interactive computing for the entire department, and for short jobs. Please do not run long CPU-intensive jobs on this system; it may be used for shorter ones (for long jobs, we suggest the Condor batch system).
  • Two general-purpose compute systems, sunfire1 and sunfire2, are also available. Compute jobs may be run there, though we still suggest that Condor is usually the better choice.
  • Various research groups (HEP, Nuclear Theory, Cosmology, etc.) have their own systems in the cluster, which their members should use. We don't restrict other users' access to these machines; please don't abuse this. If you have a need which might be met by running jobs on another group's systems, arrange permission with that group before doing so (a quick way to check what is already running on a machine is shown below).
  • Students paying the IT computing fee (that is, physics majors and pre-qualifier graduate students) may also use the systems in the Physics Computer Lab.
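
Before starting a long job on a shared or group-owned machine, it is courteous to check what is already running there. A few standard commands (nothing machine-specific assumed):

uptime    # load averages - a quick indication of how busy the machine is
w         # who is logged in and what they are running
top       # interactive view of the processes using the most CPU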