|| First computer at JINR: the "Ural-1" computing machine was put into operation, with a performance of 100 operations/s and a magnetic drum memory.
|| The M-20 (20 000 operations/s) and "Kiev" (5 000 operations/s) computers were put into operation.
|| The first step toward a multi-machine complex for processing experimental data in elementary particle physics: punched film from semiautomatic measuring devices was processed on the "Kiev" computer. Spectrometer data from the LNP measuring centre was transferred for processing to the central computers over a cable 1 km long.
|| A two-machine system for gathering and processing information, based on the "Minsk-2" and the M-20, was created.
|| The BESM-4 computer was put into operation.
|| The BESM-6 computer at JINR. Creation of a Fortran compiler and of the "Dubna" monitor system, distributed to BESM-6 machines throughout the USSR and abroad (GDR, India); creation of the "Dubna" operating system for the BESM-6.
The Computer Centre was equipped with M-6000, Minsk-2, BESM-4, TPA and CDC-1604A machines.
|| The HPD scanning machine, an installation for fast automatic processing of track-chamber images based on a mechanical scanning device of the "travelling beam" type, passed its trial operation. It was used to measure pictures of dp-events from a 1-metre liquid-hydrogen bubble chamber.
|| The CCC was equipped with a CDC-6200 (later upgraded to a dual-processor CDC-6500 and fitted with remote terminals in 1976).
The performance of the complex grew to 3 million operations/s.
The CDC-6500 was removed from operation in 1995.
|| ALT-2/160: mass processing of film information began.
|| Creation of a system of terminal access to the BESM-6 and CDC machines based on the Intercom language. The communication link was implemented on an ES-1010 small computer; creation of the JINR multi-machine complex based on the BESM-6 with fast communication channels to the data-processing centres of the Laboratories.
|| Beginning of mass picture measurement from the RISK spectrometer with a scanning system: a Spiral Reader on-line with a PDP-8.
|| Introduction of the ES-1060 and ES-1061 Unified Series computers.
Terminal devices were connected to all basic JINR mainframes (Intercom and the TERM subsystem).
|| The terminal network JINET (Joint Institute NETwork) was put into operation. Its software was developed entirely at LCTA.
|| Mass purchase of "Pravets-2" personal computers, software-compatible with the IBM PC/XT; inclusion of personal computers in the JINET network.
|| The JINET network became a component of the international computer network.
|| The VAX-8350 machine cluster and the first Ethernet network were put into operation; the JINET and Ethernet networks were interconnected.
The ES-1037 and ES-1066 computers were put into operation; a multi-machine ES complex was organized on the basis of shared disk memory.
|| First servers and workstations of the SUN family at JINR; development of workstation clusters.
|| Superminicomputers of the CONVEX family (C-120, C-220).
|| Installation of a terrestrial channel and two satellite communication channels connecting the JINR LAN to global networks via the TCP/IP protocol; introduction of the first WWW servers. 1200 computers were connected to the network.
|| Upgrade of the terrestrial digital JINR-Moscow channel to a throughput of 128 Kbps.
Creation of the JINR modem pool. Replacement of the central ES-1066 machines by a two-processor IBM 4381.
A DEC ALPHA 2100 base server was installed for the JINR inter-institute information centre under the BAFIZ project, with open network access via WWW.
|| Creation of a communication node within the Russian backbone network RBNet; realization of a high-speed optical JINR-Moscow channel with a throughput of 2 Mbps.
Start-up of the multiprocessor vector system C3840.
Creation of a specialized distributed SUN cluster for the CMS experiment at JINR.
|| Introduction of ATM technology within the JINR LAN.
Creation of the JINR centre of high-performance computing on the basis of the massively parallel HP Exemplar SPP-2000 system, the vector-parallel S3840 computing system, and the ATL2640 mass-storage system on DLT tapes with a capacity of 10.56 TB, plus 10 TB of mass memory.
Installation of an experimental computing PC-farm for the CMS and ALICE experiments.
|| Creation of the JINR Backbone based on ATM technology was completed.
|| Creation of a common-access PC-farm within the JINR centre of high-performance computing.
A 32-processor APE-100 system was put into operation for lattice calculations.
The total number of users of the JINR computer network reached 3105.
|| Upgrade of the JINR computer communication link to 30 Mbps. Transition to Fast Ethernet technology (100 Mbps) in the JINR Backbone.
Creation of a test Grid segment at JINR.
Expansion of the PC-farm to a performance of ~2000 SPECint95 (1 SPECint95 corresponds approximately to 40 million operations per second).
|| Within the JINR Central Information and Computing Complex (CICC JINR), a new distributed complex was created comprising four interconnected components: an interactive common-access cluster; a general-purpose computing farm; a computing farm for the LHC experiments; and a computing cluster for parallel computations. The CICC includes 80 processors with a total performance of 80 Gflops, a web server, a database server, and a file server with disk RAID arrays. The total capacity of the disk arrays is 6 TB.
|| Upgrade of the JINR computer communication link to 45 Mbps.
Transition to Gigabit Ethernet technology (1000 Mbps) in the JINR Backbone.
The total number of users of the JINR computer network reached 4506.
|| Upgrade of the JINR computer communication channel to 1000 Mbps.
The total capacity of the disk arrays is 50 TB.
|| The CICC comprises 160 processors with a total performance of 100 kSI2K.
|| Growth of the CICC performance to 670 kSI2K.
The total capacity of the disk arrays is 100 TB.
|| The CICC performance grew to 1400 kSI2K.
|| Launch of the high-speed JINR-Moscow computer communication link based on DWDM technology, with a throughput of 20 Gbps.
Increase of the CICC computing resources to 960 cores with a total performance of 2400 kSI2K.
The total capacity of the disk arrays is 500 TB.
|| The first stage of the transition of the JINR Backbone to a data transfer rate of 10 Gbps was completed.
|| The transition of the JINR Backbone to a data transfer rate of 10 Gbps was completed.
The new climate-control system for the CICC was put into operation.
|| Work began on the creation of a distributed Tier-1 centre to support the CMS experiment.
|| A prototype of the Tier-1 CMS data-processing centre was created.
Creation of a cloud-based autonomous grid infrastructure.
|| The JINR cloud infrastructure was put into operation as an IaaS service.
The HybriLIT heterogeneous computing cluster for high-performance computing was put into operation.
A new version of the monitoring system for the computing clusters was implemented.
|| A full-scale Tier-1 centre for the LHC CMS experiment was put into operation as a JINR basic facility.
|| The Tier-2 centre at JINR supports a number of virtual organizations, in particular ALICE, ATLAS, BES, BIOMED, COMPASS, CMS, HONE, FUSION, LHCb, MPD, NOνA and STAR.
A new component was introduced into the HybriLIT heterogeneous computing cluster: a system of virtual desktops to support users' work with application packages.
|| The JINR local computer network was transferred to DHCP (Dynamic Host Configuration Protocol).
Implementation of the development project for the Multifunctional Information and Computing Complex (MICC) began.
Creation of an engineering infrastructure specialized for high-performance computing (HPC), based on contact liquid-cooling technology and designed to expand the HybriLIT heterogeneous cluster in order to multiply its computing power.
|| On March 27, a new supercomputer named after N.N. Govorun, a development of the HybriLIT heterogeneous platform, was presented. The theoretical peak performance of the new JINR computing complex is estimated at 1 Pflops in single precision, or about 500 Tflops in double precision.