Annual Report 2003

    JINR LOCAL AREA NETWORK

At present, the IP address database contains 4506 registered JINR LAN elements (4053 in 2002). As outlined in the JINR Topical Plan for 2003, one of the main tasks in developing the information, computing and telecommunication structure of JINR was the implementation of the first stage of the selected variant of the JINR Backbone on the basis of Gigabit Ethernet.

The core of the JINR LAN Gigabit Ethernet Backbone is a Cisco Catalyst 6509 switch with an 8-port Gigabit Interface Card. Cisco Catalyst 3550 switches are installed at seven JINR Laboratories and in the building housing the JINR Administration. All this gigabit equipment is interconnected by 16-fibre single-mode optical cables with a total length of 10 300 m (Fig. 3). To protect the perimeter of the JINR LAN, two Cisco PIX-525 firewall devices were installed (one active, the other in failover mode).

Fig. 3. JINR Gigabit Backbone

Work was in progress on studying the main features of the network traffic. Methods of nonlinear analysis and a feed-forward neural network were applied to the reconstruction of a dynamical system describing the information traffic in a medium-size local area network. The neural network, trained on the traffic measurements, reproduced the statistical distribution of the information flow, which is well described by a log-normal law. A principal component analysis of the traffic measurements showed that a few leading components already form a log-normal distribution, while the residual components play the role of random noise. This result is confirmed by a joint statistical, wavelet and Fourier analysis of the traffic measurements. The log-normal distribution of the information flow and the multiplicative character of the time series confirm that the scheme developed by A. Kolmogorov for homogeneous fragmentation of grains is applicable to network traffic as well [1].
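
As an illustration of this kind of analysis, the following Python sketch fits a log-normal law to a series of traffic measurements and examines the leading principal components; the input file name and data layout are assumptions, not those of the actual study.

    # Sketch: log-normal fit and principal component analysis of traffic data.
    # The input file and its layout (one byte count per sampling interval)
    # are hypothetical.
    import numpy as np
    from scipy import stats

    traffic = np.loadtxt("traffic_samples.txt")

    # Fit a log-normal distribution: log(traffic) should be close to normal.
    shape, loc, scale = stats.lognorm.fit(traffic, floc=0)
    print(f"log-normal fit: sigma = {shape:.3f}, median = {scale:.1f}")

    # Kolmogorov-Smirnov test of the fitted law against the measurements.
    ks_stat, p_value = stats.kstest(traffic, "lognorm", args=(shape, loc, scale))
    print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

    # Principal components of windowed measurements: a few leading components
    # should already carry most of the variance, the rest behaving like noise.
    window = 64
    n_win = len(traffic) // window
    X = traffic[: n_win * window].reshape(n_win, window)
    X -= X.mean(axis=0)
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    print("variance explained by the first 5 components:", np.round(explained[:5], 3))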

    Distributed Information Systems,
    JINR Central Computing and Information Complex

The JINR Central Computing and Information Complex (JINR CCIC) is part of the Russian Information Computing Complex for processing data from the Large Hadron Collider. It comprises: an interactive cluster of common access; a computing farm for simulation and data processing for large experiments; a computing farm for the tasks of the LHC project; a computing farm for parallel calculations on the basis of modern network technologies (MYRINET, SCI, etc.); and mass storage resources on disk RAID arrays and tape robots.

The performance of the CCIC PC farms is: 4.3 kSPI95 of CPU power, 7.7 TB of disk space and 16.8 TB on ATL tapes. The average total CPU load was 25%; in October 2003 it reached 60.98%. The JINR CCIC resources were used by the experiments E391A (KEK), KLOD, COMPASS, D0, DIRAC, HARP, CMS and ALICE for mass event modelling, data simulation and analysis. For the experiments ALICE, ATLAS and CMS, sessions of mass modelling of physical events were conducted in the framework of JINR's participation in DC04 (Data Challenge 2004). More than 300 staff members of JINR and other research centres use the JINR CCIC. Table 2 shows the statistics of CPU time used by JINR's Laboratories on the CCIC PC farms.

Table 2. CPU time used on the CCIC PC farms, by subdivision

Subdivision      CPU time, %
LHC prod. run       33.32
DLNP                20.77
BLTP                18.71
LPP                  6.70
FLNP                 5.78
LIT                  5.23
Others               4.47
VBLHE                2.57
FLNR                 2.44

    Computing Service and Creation of a Grid Segment of JINR

In 2003, LIT worked actively on applying Grid technologies to experimental data processing. At present, the scientific community is beginning intensive use of the Grid concept, which presupposes the creation of an infrastructure providing global integration of information and computing resources. JINR has the possibility of full-scale involvement in this process. The LHC project, unique both in the scale of the data obtained and from the viewpoint of computer technologies, foresees the processing and analysis of experimental data using the Grid. An analytical review in the journal "Open Systems", prepared in cooperation with SINP MSU and SSC RRC "Kurchatov Institute", is devoted to the work performed in this area at JINR and the Russian centres [2].

Work was under way to create a system of global monitoring of the resources of a large-scale Russian virtual organization, including the LAN segments of several institutes (SINP MSU, JINR, ITEP, IHEP, IAM RAS), in accordance with the Grid architecture. New versions of the ANAPHE (former LHC++) library were adapted and supported for the Linux, Windows and Sun Solaris platforms. The existing software for the LHC experiments (ATLAS, ALICE and CMS) and for non-LHC experiments was supported. Measurements of Globus Toolkit 3 (GT3) performance under heavy load and concurrency were carried out. An AliEn server was installed for distributed data processing of ALICE in Russia. The Castor system was installed and tested at the JINR CCIC.

During 2003, JINR participated in the CMS Pre-Challenge Production (PCP03). 250K events were simulated with the CMSIM v.133 package; the volume of data produced was 320 GB. A new Grid tool, the Storage Resource Broker (SRB), was used for the CMS production. The SRB client program installed at JINR provides direct access to the common CMS databases on the SRB server in the UK (Bristol) and gives new opportunities for storing and exchanging data within the CMS collaboration.
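
For orientation, the quoted figures correspond to roughly 1.3 MB of produced data per simulated event; a trivial check of this arithmetic (values taken from the text above):

    # Back-of-the-envelope check of the PCP03 figures quoted above.
    events = 250_000        # simulated events
    volume_gb = 320         # produced data, GB

    mb_per_event = volume_gb * 1000 / events
    print(f"average data volume per event: {mb_per_event:.2f} MB")   # ~1.28 MB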

LIT staff members take part in the development of monitoring facilities for computing clusters with a large number of nodes (10 000 and more) used in the EU DataGrid infrastructure. Within the Monitoring and Fault Tolerance task they participate in the creation of a Correlation Engine system, which serves for prompt detection of abnormal states at cluster nodes and for taking measures to prevent them. A Correlation Engine prototype was installed at CERN and JINR for recording abnormal node states [3].
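
The internals of the Correlation Engine are not described here; purely as an illustration of the idea, the Python sketch below applies simple rules to per-node monitoring samples and reports nodes in an abnormal state. The metric names, thresholds and node names are hypothetical.

    # Illustrative rule-based detection of abnormal cluster-node states.
    # Metrics, thresholds and node names are hypothetical examples, not those
    # of the real Correlation Engine.

    samples = {
        "node001": {"load": 1.2, "mem_free_mb": 900, "daemon_alive": True},
        "node002": {"load": 1.4, "mem_free_mb": 850, "daemon_alive": True},
        "node003": {"load": 7.9, "mem_free_mb": 40,  "daemon_alive": False},
        "node004": {"load": 1.1, "mem_free_mb": 910, "daemon_alive": True},
    }

    # Each rule maps a node sample to an alarm message or None.
    RULES = [
        lambda s: "load too high" if s["load"] > 5.0 else None,
        lambda s: "low free memory" if s["mem_free_mb"] < 100 else None,
        lambda s: "monitoring daemon not responding" if not s["daemon_alive"] else None,
    ]

    def check_node(sample):
        """Apply all rules to one node's sample and collect the alarms raised."""
        return [msg for rule in RULES if (msg := rule(sample)) is not None]

    for node, sample in samples.items():
        alarms = check_node(sample)
        if alarms:
            print(f"{node}: " + "; ".join(alarms))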

Tests were performed on data transfer from Protvino (sirius-b.ihep.su; OS Digital UNIX Alpha Systems 4.0) to the ATL-2640 mass storage system in Dubna (dtmain.jinr.ru; OS HP-UX 11.0) to estimate the transmission capacity and stability of a system comprising communication channels and mass storage (an OmniBack disk agent in Protvino and an OmniBack tape agent in Dubna). No abnormal terminations were recorded. The average transmission speed was 480 Kbps; the maximum was 623 Kbps and the minimum 301 Kbps. (The distance between Dubna and Protvino is about 250 km; the capacity of the Protvino-Moscow communication channel is 8 Mbps.)
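
At such speeds, even modest data volumes take a long time to move; the sketch below estimates the transfer time at the measured speeds (the speeds are those quoted above, the 100 GB volume is an illustrative assumption).

    # Estimate the transfer time at the measured channel speeds.
    # Speeds are those quoted in the text; the 100 GB volume is an example.

    def transfer_time_hours(volume_gb, speed_kbps):
        # Hours needed to move volume_gb gigabytes at speed_kbps kilobits per second.
        bits = volume_gb * 1e9 * 8            # gigabytes -> bits
        seconds = bits / (speed_kbps * 1e3)   # kilobits/s -> bits/s
        return seconds / 3600.0

    volume_gb = 100.0
    for label, speed_kbps in (("minimum", 301), ("average", 480), ("maximum", 623)):
        hours = transfer_time_hours(volume_gb, speed_kbps)
        print(f"{label} speed {speed_kbps} Kbps: {hours:.0f} h (~{hours / 24:.1f} days)")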

The storage of data obtained during CMS Monte Carlo mass production runs was provided using OmniStorage: data volumes from SINP MSU (~1 TB) were transferred to the ATL-2640 in Dubna.

Maintenance of the JINR Program Library was in progress. New documents have been prepared and made available on the WWW. This work included the implementation at JINR of electronic access to the CPCLIB and CERNLIB (http://www.jinr.ru/programs/), the adaptation of programs to the JINR computer platforms, and the extension of JINRLIB (about 80 programs have been included and tested).

© Laboratory of Information Technologies, JINR, Dubna, 2004
T.Strizh