Annual Report 2002
In 2002, the JINR Local Area Network (LAN), based on Fast Ethernet technology, continued to operate reliably (Fig. 3). At present the JINR LAN incorporates 4053 network elements (3451 in 2001), including 113 general-purpose and specialized servers. 821 home PCs are connected to the JINR modem pool.
JINR Central Computing Complex
The development of the JINR Central Computing Complex (CCC) continued on the basis of general-purpose and specialized clusters and computer farms. A distributed PC/Linux cluster has been installed at the JINR CCC. The cluster comprises four separate interconnected components that differ in hardware and functional purpose. It includes an interactive farm of four dual-processor Pentium III 1 GHz PCs with 512 MB RAM, on which the basic mathematical and special-purpose software required for computations within several experiments has been installed. The cluster also comprises specialized computing farms: a general-purpose farm, an LHC (Large Hadron Collider) farm, and a parallel computation farm. The general-purpose computing farm has eight dual-processor Pentium III 500 MHz PCs with 512 MB RAM. The LHC farm comprises 16 dual-processor Pentium III 1 GHz PCs with 512 MB RAM. The parallel computation farm includes eight dual-processor Pentium III 1 GHz PCs with 512 MB RAM connected by a Myrinet 2000 communication network. Besides the general cluster, there are a number of specialized servers. The total performance of the general-purpose cluster is almost 2500 SpecInt95.
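The farm inventory above can be tallied in a few lines; this is a minimal sketch that only restates the node counts given in the report (two CPUs per dual-processor PC is the sole arithmetic step, and the 2500 SpecInt95 figure is the report's own, not derived here):

```python
# Tally of the JINR CCC PC/Linux cluster components as listed in the report.
# Each entry: (farm name, number of dual-processor PCs, RAM per PC in MB).
farms = [
    ("interactive farm",         4, 512),
    ("general-purpose farm",     8, 512),
    ("LHC farm",                16, 512),
    ("parallel farm (Myrinet)",  8, 512),
]

total_pcs = sum(n for _, n, _ in farms)
total_cpus = 2 * total_pcs                          # dual-processor nodes
total_ram_mb = sum(n * ram for _, n, ram in farms)  # aggregate RAM in MB

print(total_pcs, total_cpus, total_ram_mb)  # → 36 72 18432
```

So the cluster proper held 36 dual-processor PCs (72 CPUs) with about 18 GB of aggregate RAM.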
The AFS distributed file system provides transparent, protected access to the common disk space for JINR LAN users and for all participants in JINR's international collaborations and projects. The total capacity of the JINR CCC disk space is 6 TB. A 15-TB automated tape library is used for long-term storage of very large data volumes and for the backup system (Fig. 4).
In 2002, work continued on creating the JINR GRID segment and incorporating it into the global GRID structure. First steps were taken towards a system for global monitoring of the resources of the large-scale GRID-LHC virtual organization, which includes the LAN segments of several institutes (MSU SINP, JINR, SSC RRC "Kurchatov Institute", RAS KIAM), in accordance with the GRID architecture. The monitoring system operates in test mode; its experimental use for simulation and for the analysis of simulated data for the CMS, ALICE, and ATLAS experiments was carried out. First results were obtained on the practical application of the hierarchical mass-storage control system in the GRID-LHC virtual organization, with optimal use of backup resources and data fragmentation and replication.
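The global monitoring described above aggregates resource status from LAN segments at several sites. As an illustration only (the report does not describe the actual monitoring tools or protocol; the site list is from the text, but the `poll_site` interface below is an invented placeholder), such aggregation could be sketched as:

```python
# Hypothetical sketch of multi-site resource monitoring for a GRID
# virtual organization. poll_site() stands in for whatever network
# query the real monitoring system performed (not specified in the report).
def poll_site(site):
    # Placeholder: a real system would query the site's monitoring agent.
    return {"site": site, "up": True}

SITES = ["MSU SINP", "JINR", "SSC RRC Kurchatov Institute", "RAS KIAM"]

def collect_status(sites):
    """Gather per-site reports and summarize overall availability."""
    reports = [poll_site(s) for s in sites]
    return {
        "sites_up": sum(1 for r in reports if r["up"]),
        "sites_total": len(reports),
        "reports": reports,
    }

summary = collect_status(SITES)
print(summary["sites_up"], "/", summary["sites_total"], "sites up")
```

The design point such a system must address is that each site polls and publishes independently, so the aggregator has to tolerate stale or missing reports rather than assume a synchronous global view.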
© Laboratory of Information Technologies, JINR, Dubna, 2003