====== How and Where to run computing jobs ======
  
===== What systems are available =====

See [[:computing:department:unix:systems|this page]] for a list of the available unix systems.

===== Running Jobs on Unix =====

The best way to run background computing jobs on the unix systems is to submit them to the [[:computing:department:unix:jobs:condor|condor batch system]]. At the simplest level, you can just use the "condor_run" command, eg:

  condor_run yourprogram
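Beyond "condor_run", condor jobs are normally described in a small submit file and queued with "condor_submit". A minimal sketch (all file and program names below are placeholders, not site defaults):

```text
# myjob.sub - a minimal condor submit description (names are placeholders)
universe   = vanilla
executable = yourprogram
output     = yourprogram.out
error      = yourprogram.err
log        = yourprogram.log
queue
```

You would then submit it with "condor_submit myjob.sub" and check its progress with "condor_q"; see the [[:computing:department:unix:jobs:condor|condor page]] for details.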
  
  
===== Running Jobs Locally in the Background =====
  
If you're not using condor, then any non-interactive compute jobs should be //niced// - this reduces their priority level relative to interactive use, so that workstation users still have a responsive system. In the absence of interactive load, the system is still dedicated to your jobs, so in most cases runtime will not be much affected.
 If "yourprogram" is your executable, then you can run it nicely using the command: If "yourprogram" is your executable, then you can run it nicely using the command:
  
  # tcsh shell users:
  nice +n yourprogram

  # bash shell users:
  nice -n yourprogram
  
where n is a number between 1 and 19 (the higher the number, the lower the job priority).
You should choose the priority in the most socially-responsible way you can manage, according to how long you expect the job to run - eg, jobs which may run for days should be run at a lower priority than those which might only take a few hours. We also consider a priority of 4 to be the minimum socially-acceptable nice level to use for background jobs - so, for example:
  
  # tcsh shell users:
  nice +4 yourprogram

  # bash shell users:
  nice -4 yourprogram
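If you forget to nice a job that is already running, you don't have to restart it - the standard "renice" command lowers the priority of a running process. A sketch, with "sleep 60" standing in for a real compute program:

```shell
# Start a stand-in job in the background ("sleep 60" takes the place
# of a real compute program):
sleep 60 &
pid=$!

# Lower its priority after the fact; unprivileged users can only
# raise the nice level (ie lower the priority), never reduce it:
renice -n 19 -p "$pid"

# The NI column confirms the new nice level:
ps -o pid,ni,comm -p "$pid"

kill "$pid"   # stop the stand-in job
```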
  
===== I/O intensive jobs =====
  
<note>Please make sure you don't send intensive writes to your home directory, as this causes slowdowns for all other users on our system. We may be forced to kill any processes which are causing such problems.</note>
  
If your job requires manipulation of large files (especially writing to them), it will be faster to use scratch storage on the local machine rather than your home directory, which is accessed over the network. Check our pages on [[:computing:department:unix:file_storage|file storage]] to see where you can store such files.
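The usual pattern is: stage your input onto local disk, run the I/O-heavy step there, and copy only the final results back to your home directory. A minimal sketch - "mktemp" and "tr" stand in for the real scratch area and a real compute job; check the file storage page for the actual scratch path on each machine:

```shell
# Create a working directory on local disk (mktemp here stands in for
# the machine's real scratch area - see the file storage page):
scratch=$(mktemp -d)

# Stage input data onto local disk instead of reading it repeatedly
# over the network:
echo "sample data" > "$scratch/input.dat"

# Run the I/O-heavy step entirely on local disk ("tr" stands in for
# a real compute job):
tr 'a-z' 'A-Z' < "$scratch/input.dat" > "$scratch/results.out"

# Finally, copy only the results back to your home directory:
# cp "$scratch/results.out" "$HOME/"
```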
  
computing/department/unix/jobs/home · Last modified: 2013/07/17 16:41 by allan