Architecture and equipment of CICC

The CICC computing farm currently comprises the following machines:

- 60 64-bit machines:
2 Xeon 5150 processors (2 cores per processor);
Clock frequency 2.66GHz;
4096KB cache per core;
8GB RAM;
250GB disk;
1Gb Ethernet.

- 30 64-bit machines:
2 Xeon E5430 processors (4 cores per processor);
Clock frequency 2.66GHz;
6144KB cache per core;
16GB RAM;
250GB disk;
1Gb Ethernet.

- 10 64-bit machines:
2 Xeon X5450 processors (4 cores per processor);
Clock frequency 3.00GHz;
6144KB cache per core;
16GB RAM;
250GB disk;
1Gb Ethernet;
2 x InfiniBand.

- 10 64-bit machines:
2 Xeon E5410 processors (4 cores per processor);
Clock frequency 2.33GHz;
6144KB cache per core;
16GB RAM;
2 x 160GB disks;
1Gb Ethernet;
2 x InfiniBand.

- 54 64-bit machines, 2 machines per 2U chassis:
2 Xeon E5420 processors (4 cores per processor);
Clock frequency 2.50GHz;
6144KB cache per core;
16GB RAM;
250GB disk;
1Gb Ethernet.

- 60 64-bit machines, 2 machines per 2U chassis:
2 Xeon E5430 processors (4 cores per processor);
Clock frequency 2.66GHz;
6144KB cache per core;
16GB RAM;
500GB disk;
1Gb Ethernet.

- 80 64-bit machines, 4 machines per 4U chassis:
2 Xeon X5650 processors (6 cores per processor);
Clock frequency 2.67GHz;
12288KB cache per core;
24GB RAM;
2 x 500GB disks;
1Gb Ethernet.

- 4 64-bit machines in one 4U chassis:
2 Xeon E5540 processors (4 cores per processor);
Clock frequency 2.53GHz;
8192KB cache per core;
24GB RAM;
250GB disk;
1Gb Ethernet.

Since 2-, 4- and 6-core processors are in effect several independent processors on one die, the farm provides 2560 64-bit CPU cores. All of these cores are accessible to both JINR users and Grid users through the unified batch job processing system.
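As an illustration of how such a farm is typically used, the sketch below prepares a minimal batch job script. It assumes a PBS/Torque-style batch system; the queue name "common" and the resource values are hypothetical placeholders, not the actual CICC configuration.

```shell
# Create a minimal PBS-style job script (queue name "common" is hypothetical).
cat > hello_job.sh <<'EOF'
#!/bin/sh
#PBS -q common
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:10:00
echo "job running on $(hostname)"
EOF
# On an interactive node the job would then be submitted with:
#   qsub hello_job.sh
```

The scheduler reads the `#PBS` directives to pick a queue and reserve one core on one node before running the script body on a farm machine.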

For the development of users' own software and other needs of JINR users, 5 machines with interactive access have been installed:
- 4 x 64-bit machines:
Xeon X5650, 12 cores, 36GB RAM, 2 x 500GB HDD, 1Gb Ethernet;
- 1 x 64-bit machine:
Xeon E5420, 8 cores, 16GB RAM, 250GB HDD, 1Gb Ethernet.

The CICC structure also comprises several servers supporting user work and JINR services: batch, WWW, MySQL and Oracle databases, e-mail, DNS, Nagios monitoring and others. These servers run mostly on 64-bit Xeon and Opteron hardware.

The main system for storing large volumes of data at the JINR CICC is the dCache hardware-software complex. Two dCache instances are supported:
- one for the LHC virtual organizations CMS and ATLAS;
- one for local users and user groups, as well as for the international projects MPD, HONE, FUSION, BIOMED and COMPASS.

At the moment, the two dCache instances comprise 9 servers running the main dCache interfaces and 46 data storage (pool) systems.

Several user communities of our centre use the XROOTD remote data access system. Three hardware-software complexes have been created for this system, comprising 1 request-processing server and 14 data storage systems.

The servers comprising the storage systems are built on Xeon and Opteron hardware platforms. All storage systems use the hardware RAID6 mechanism. The total accessible capacity of the dCache and XROOTD storage systems is ~1.4PB.
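For context, the standard client tools for these two systems are `dccp` (dCache) and `xrdcp` (XROOTD). The sketch below only assembles example copy commands; the host names and file paths are hypothetical placeholders, not real CICC endpoints.

```shell
# Hypothetical dCache dcap door and XROOTD redirector (placeholders only).
DCAP_DOOR="dcap://dcache-door.example.jinr.ru:22125"
XROOTD_HOST="root://xrootd.example.jinr.ru"

# Print the copy commands a user would run with the real endpoints:
echo "dccp ${DCAP_DOOR}/pnfs/example/user/data.root ./data.root"
echo "xrdcp ${XROOTD_HOST}//store/user/data.root ./data.root"
```

In both cases the client contacts a front-end (a dcap door or an XROOTD redirector), which locates the file on one of the pool/storage servers and streams it to the user.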

The JINR CICC includes 6 AFS servers. AFS is a highly protected distributed file system that hosts users' home directories and provides access to the common software for the whole Institute. The total AFS space at JINR is ~6TB.

To serve the WLCG site at JINR (a site is a separate cluster within the distributed WLCG environment) and other international collaborations, 22 servers running gLite (the WLCG middleware) have been installed. In addition to supporting the JINR-LCG2 site itself, some of these servers provide important services for the Russian segment of the WLCG project.

To improve the operation of the CICC local area network and achieve the required data and file access performance, several 1Gb Ethernet links are aggregated (trunked) into a single virtual channel of increased throughput.
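On Linux, this kind of link aggregation is commonly set up with the bonding driver. The fragment below is only a sketch of the idea using iproute2; the interface names and the LACP (802.3ad) mode are illustrative assumptions, not the actual CICC configuration.

```shell
# Bond two 1Gb Ethernet ports into one logical interface (illustrative only;
# 802.3ad requires matching LACP configuration on the switch side).
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```

Traffic is then hashed across the member links, so aggregate throughput between a server and the switch can approach the sum of the individual links.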

Figure 1 shows the logical structure of the network connection of a SuperBlade enclosure with 10 computing blades to the CICC local area network.
Figure 2 shows the connection of 20 computing machines installed in a rack to the CICC local area network.
Figure 3 shows the logic of connecting a rack with 8 disk servers to the main router of the CICC local network.



Fig.1


Fig.2


Fig.3
Using this scheme of network link aggregation, we were able to meet the data access speed requirements of the computing tasks of the main users of our resources, both local JINR users and wLCG/EGEE users. The scheme made it unnecessary to migrate the whole local network to faster technologies (10GbE or InfiniBand), which would have required significant monetary expense.
   Copyright © LIT, JINR , 2006