Grid User Guide

NOTICE: Since CMSSW_4_1_0 is built only for 64-bit machines, you may need to set SCRAM_ARCH before you source any environment scripts:

bash$ export SCRAM_ARCH=slc5_amd64_gcc434
csh%  setenv SCRAM_ARCH slc5_amd64_gcc434
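
For example, in a bash session the variable must be exported before the CMS environment script is sourced; a minimal sketch (the script path below is site-specific and illustrative only):

# Set the architecture first, then source the CMS environment script
export SCRAM_ARCH=slc5_amd64_gcc434
source /path/to/cmsset_default.sh    # use your site's actual location
echo $SCRAM_ARCH                     # verify the setting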

This documentation explains how grid users can use the resources here at Purdue.

 

Submitting jobs to the Purdue grid means submitting jobs to the Purdue-RCAC or Purdue-Steele cluster through the gatekeeper osg.rcac.purdue.edu or lepton.rcac.purdue.edu. Grid authentication is needed in order to do this.
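
Once authentication is set up (see the next section), you can verify access to a gatekeeper from the command line; a minimal sketch, assuming the Globus Toolkit client tools are installed and a valid proxy exists:

# Run a trivial command on the gatekeeper to test grid access
globus-job-run osg.rcac.purdue.edu /bin/hostname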

Authentication

The application and setup process is explained in the following pages:

Applying for a DOE grid certificate and CMS VO

Renewing a DOE grid certificate and CMS VO

Importing and exporting a certificate to browsers

Installing your certificate on the local machine
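
As a concrete illustration of the last two steps, the usual sequence is to convert the certificate exported from your browser into the PEM files grid tools expect, and then create a VOMS proxy; a sketch, assuming the exported bundle is named mycert.p12 (a hypothetical file name):

# Split the PKCS12 bundle into certificate and key files
openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out $HOME/.globus/usercert.pem
openssl pkcs12 -in mycert.p12 -nocerts -out $HOME/.globus/userkey.pem
chmod 444 $HOME/.globus/usercert.pem    # certificate may be world-readable
chmod 400 $HOME/.globus/userkey.pem     # private key must be readable only by you

# Create a proxy with CMS VO membership and inspect it
voms-proxy-init -voms cms
voms-proxy-info -all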


A useful mailing list for downtime notices and for asking for help:

Mailing list information

 

To use the data stored in Hadoop, you can find the names of all datasets registered in the global DBS and in our local DBS at the links below; a command-line lookup example follows them. Our local DBS stores the locally produced data.

Data resources

Local Data at Global DBS
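
Dataset names can also be looked up from the command line; a sketch, assuming the DBS2-era command-line client is set up in your environment (the dataset pattern is hypothetical):

# Ask the global DBS for datasets matching a pattern (DBS query language)
dbs search --query="find dataset where dataset like *Zmumu*"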

Before you submit jobs to our cluster, you will want to know how many job slots are free and how much Hadoop storage is available.

Job Slots and Hadoop storage

Job slot availability and Hadoop storage usage are shown in the two pie graphs on the upper right side of this page.
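
The same information can be checked from the command line on a cluster node; a sketch, assuming the Condor and Hadoop client tools are available where you are logged in:

# Summary of total, claimed, and unclaimed Condor job slots
condor_status -total

# Capacity and usage of the Hadoop filesystem
hadoop fs -df /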

 

Manage Jobs with CRAB

Descriptions of the cmsRun Python configuration syntax

Submitting jobs through CRAB and storing data in Hadoop
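
For orientation, the basic command-line workflow (CRAB2-era syntax) looks like this; a sketch, assuming a crab.cfg describing your task has already been prepared:

# Create, submit, and track a CRAB task described by crab.cfg
crab -create
crab -submit
crab -status
crab -getoutput    # retrieve output once jobs have finished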

CMS FileMover Service: get your favorite data via a plain download from a web interface

Publishing data using CRAB
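
Once the jobs have finished and their output has been retrieved, publication is typically a single command; a sketch using the CRAB2-era syntax:

# Register the produced data in the local DBS so others can find it
crab -publish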

Submitting jobs through Condor-G
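
As a flavor of what a Condor-G submission looks like, here is a minimal submit description file; a sketch only, and the jobmanager name is an assumption:

# condorg.sub -- minimal Condor-G submit description (illustrative)
universe      = grid
grid_resource = gt2 osg.rcac.purdue.edu/jobmanager-condor
executable    = myjob.sh         # hypothetical job script
output        = job.out
error         = job.err
log           = job.log
queue

Submit it with condor_submit condorg.sub and monitor it with condor_q.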