Development of computer technology in LCTA (1958-1999)

1958-1969

1958: The first computer at JINR, the Ural-1 computing machine with a speed of 100 operations/s and magnetic-drum memory, was put into operation.

1961: Commissioning of the M-20 (20,000 operations/s) and Kiev (5,000 operations/s) computers.

1962: The first step towards a multi-machine complex for processing experimental data in elementary particle physics: punched tape from semi-automatic measuring machines was delivered to the Kiev computer and processed. Spectrometric information from the LNF measuring center was transmitted over a cable about 1 km long to the central computers for processing.

1965: A two-machine data acquisition and processing system based on the Minsk-2 and M-20 was created.

1967: The BESM-4 computer was put into operation.

1968: Arrival of the BESM-6 at JINR, together with the Dubna operating system. The Dubna monitor system, built around a Fortran translator, was created by the combined efforts of an international team of programmers from Russia, Germany, Hungary and Korea, led by I.N. Silin and V.P. Shirikov. An extensive library of general-purpose programs containing more than 1,500 modules became an integral part of it; the work on this program library was led by R.N. Fedorova. The computing center was equipped with M-6000, Minsk-2, BESM-4, TPA and CDC-1604A machines.

1970-1979

1971: Under the leadership of I.N. Silin, the efficient "Dubna Dispatcher DD-71" was implemented at LCTA. This served as the basis for the creation and development of hardware and software for linking the BESM-6 with the various types of machines in the measuring centers of the JINR laboratories. The HPD scanning machine was put into trial operation: a high-speed automatic installation for processing track-chamber images, based on a flying-spot mechanical scanning device, for measuring DP event images from the 1-meter liquid-hydrogen bubble chamber. Automated processing of track-chamber images played a major role in the 1960s in studying the interactions of accelerated particles with matter using bubble chambers and other optical track detectors; this work at LCTA was carried out under the guidance of V.I. Moroz.

1972: The CCC was augmented with a CDC-6200 computer (later upgraded to the dual-processor CDC-6500 and equipped with remote terminals in 1976). The performance of the complex grew to 3 million operations/s.

1978: The ALT-2/160 began mass processing of data.

1979: Creation of a terminal access system for the BESM-6 and CDC machines based on the Intercom language, with the communication hardware implemented on the EC-1010 minicomputer; creation of a multi-machine JINR complex based on the BESM-6 and fast communication channels to the data-processing centers of the laboratories.

1980-1989

1980: Mass measurement of spectrometer images began on the "Spiral Reader" scanning system operating on-line with a PDP-8.

1981: Introduction of the unified EC series of computers (EC-1060, EC-1061). Connection of terminal devices to all JINR base computers (the Intercom and TERM subsystems).

1985: The first institute-wide terminal network, JINET (Joint Institute Network), was commissioned. The network equipment software for this JINR local area network was developed entirely at LCTA under the supervision of Professor V.P. Shirikov.
1986: Mass acquisition of Pravetz-2 personal computers, software-compatible with the IBM PC/XT; inclusion of PCs in the JINET network.

1987: Work began on a high-speed Ethernet network (up to 10 Mbit/s) operating in parallel with, and connected to, JINET. The JINET network became a subscriber of the international computer network.

1989: Commissioning of a cluster of VAX-8350 machines and of the first stage of the Ethernet network; interconnection of the JINET and Ethernet networks. Commissioning of the EC-1037 and EC-1066 computers and organization of a multi-machine complex of EC computers based on shared disk storage.

1990-1999

1991: The first servers and workstations of the SUN family at JINR; development of workstation clusters.

1992: Superminicomputers of the CONVEX family (C-120, C-220).

1993: Work began on connecting the JINR local network to global networks via the TCP/IP protocol over a terrestrial channel and two satellite channels; the first WWW servers were introduced. 1,200 computers were connected to the network.

1996: Upgrade of the JINR-Moscow terrestrial digital communication channel to a bandwidth of 128 Kbit/s. Creation of the JINR modem pool. Replacement of the EC-1066 central devices with a dual-processor IBM 4381. Commissioning of the DEC ALPHA 2100 base server for the JINR Inter-institutional Information Center under the BAFIZ project, with open network access via WWW.

1997: Creation of a communication node within the Russian RBNET backbone network; implementation of a high-speed JINR-Moscow optical communication channel with a bandwidth of 2 Mbit/s. Launch of the C3840 multiprocessor vector system. Creation of a specialized distributed SUN cluster for the CMS experiment at JINR.

1998: Implementation of ATM technology within the JINR local computer network. Creation of the JINR high-performance computing center based on the HP Exemplar SPP-2000 massively parallel system, the C3840 vector parallel computing system, the ATL2640 mass storage system on DLT tapes with a capacity of 10.56 TB, and a 10 TB mass storage system. Creation of an experimental computing PC farm for the CMS and ALICE experiments.

1999: The creation of the JINR backbone network based on ATM technology was completed.

Development of computer technology in LIT

In 2000, the Laboratory of Computing Techniques and Automation was reorganized into the Laboratory of Information Technologies.

2000-2004

2000: Creation of a shared-access PC farm as part of the JINR high-performance computing center. Commissioning of the 32-processor APE-100 system for lattice calculations. The total number of users of the JINR computer network at the end of the year was 3,105.

2001: Expansion of the JINR computer communication channel to 30 Mbit/s. Transition to Fast Ethernet technology (100 Mbit/s) in the JINR backbone network. Creation of a test Grid segment at JINR. Expansion of the PC farm to a performance of ~2000 SPECint95 (1 SPECint95 corresponds approximately to 40 million operations per second, so the farm delivered roughly 80 billion operations per second).

2002: Creation of a new distributed complex of four interconnected components as part of the JINR Central Information and Computing Complex (JINR CICC): an interactive shared-access cluster, a general-purpose computing farm, a computing farm for the LHC experiments, and a computing cluster for parallel computing. The CICC included 80 processors with a total performance of 80 Gflops, as well as web servers, database servers and file servers with disk RAID arrays. The total capacity of the disk arrays was 6 TB.
2003: Expansion of the JINR computer communication channel to 45 Mbit/s. Transition to Gigabit Ethernet technology (1000 Mbit/s) in the JINR backbone network. Creation of an LCG infrastructure. The total number of users of the JINR computer network at the end of the year was 4,506.

2005-2009

2005: Expansion of the JINR computer communication channel to 1000 Mbit/s. The total capacity of disk arrays was brought to 50 TB. The performance of the CICC was ~100 kSI2K. The total number of users of the JINR computer network at the end of the year was 5,335.

2006: The CICC comprised 160 processors with a total performance of 1400 kSI2K.

2007: The performance of the CICC was increased to 670 kSI2K. The total capacity of disk arrays reached 100 TB.

2008: CICC performance was increased to 1400 kSI2K.

2009: Creation of a high-speed JINR-Moscow communication channel based on DWDM technology with a bandwidth of 20 Gbit/s. The computing resources of the CICC were increased to 960 cores with a total performance of 2400 kSI2K. The total capacity of disk arrays reached 500 TB.

2010-2014

2010: The first stage of the transition of the JINR backbone network to 10 Gbit/s was implemented.

2011: The transition of the JINR backbone network to 10 Gbit/s was completed. A new CICC climate control system was put into operation.

2012: Work began on the creation of a distributed Tier-1 center to support the CMS experiment.

2013: A prototype of the Tier-1 data processing center for the CMS experiment was created. Autonomous "cloud" grid infrastructures were implemented.

2014: The JINR cloud infrastructure service (IaaS) was put into operation, as was a computing cluster with the heterogeneous HybriLIT architecture. A new version of the monitoring system of the computing complex was implemented.

2015-2019

2015: The full-scale Tier-1 center for the CMS experiment at the LHC became a JINR basic facility.

2016: The Tier-2 center at JINR supported a number of virtual organizations, in particular ALICE, ATLAS, BES, BIOMED, COMPASS, CMS, HONE, FUSION, LHCb, MPD, NOvA and STAR. A new component was introduced into the heterogeneous HybriLIT computing cluster: a virtual desktop system to support users working with application software packages.

2017: The JINR local area network was switched to DHCP (Dynamic Host Configuration Protocol). Implementation of the project for the development of the Multifunctional Information and Computing Complex (MICC) began. An engineering infrastructure specialized for high-performance computing (HPC), based on direct liquid cooling technology, was created for the expansion of the heterogeneous HybriLIT cluster in order to multiply its computing power.

2018: On March 27, the new N.N. Govorun supercomputer, a development of the heterogeneous HybriLIT platform, was presented. The theoretical peak performance of the new JINR computing complex was estimated at 1 Pflops in single precision, or about 500 Tflops in double precision.
2019: The Govorun supercomputer was upgraded: its total peak performance reached 860 Tflops for double-precision and 1.7 Pflops for single-precision operations, which allowed the CPU component of Govorun to take 10th place in the TOP50 list of the most powerful supercomputers in Russia and the CIS. The bandwidth of the Moscow-JINR telecommunication channel was increased to 3 x 100 Gbit/s, and the bandwidth of the backbone of the Institute's local area network was increased to 2 x 100 Gbit/s. A distributed computing cluster network between DLNP and VBLHEP with a capacity of 400 Gbit/s and double redundancy was created. The Tier-1 data processing system for CMS was increased to 10,688 cores (151.97 kHS06); the computing resources of the Tier-2 center amounted to 4,128 cores (55.489 kHS06), and Tier-2 became the best in the Russian consortium RDIG. The total usable capacity of disk servers amounted to 2,789 TB for ATLAS, CMS and ALICE, and 140 TB for other virtual organizations.

2020-2023

2020: The Tier-1 data processing system was increased to 13,376 cores (203.569 kHS06); in terms of performance, the JINR Tier-1 ranked second among the world's Tier-1 centers for the CMS experiment. The computing resources of the Tier-2 center were expanded to 7,060 cores, providing a performance of 100 kHS06. In April, the commissioning of a new IBM TS4500 tape library with a total capacity of 40 PB was completed. The resources of the cloud infrastructure were enlarged by contributions from the NOvA experiment (480 CPU cores, 2.88 TB of RAM, 1.728 PB of disk space for Ceph-based storage) and by the commissioning of 2,880 CPU cores with 46.08 TB of RAM purchased by the JUNO experiment.

2021: In terms of performance, the JINR Tier-1 ranked first among the world's Tier-1 centers for the CMS experiment. The Tier-1 resource center was also used to perform simulations for the MPD experiment of the NICA project. The computing resources of the Tier-2 center were expanded to 9,272 cores (149,938.7 HEP-SPEC06 hours). Our supercomputer cluster entered the ranks of the world's leading systems: the DAOS (Distributed Asynchronous Object Storage) testbed of the Govorun supercomputer took first place among Russian supercomputers in the current IO500 list. A series of computational experiments was carried out on the Govorun supercomputer using several quantum simulators, such as QuEST, Qiskit, cuQuantum and the Cirq quantum circuit generator, which can run on different computing architectures.

2022: The performance of the Govorun supercomputer was enhanced by 23.5%, reaching 1.1 Pflops, and its memory capacity was increased. As part of the development of the resource monitoring system for the Tier-1 and Tier-2 grid sites, a new accounting system was created at JINR; it significantly expanded the functionality of the original system and reduced the time needed to obtain statistical data, thanks to automatic data processing in the visualization system.

2023: The use of grid technologies based on the DIRAC Interware made it possible to integrate not only the dedicated computing resources of all MICC components but also the clusters of organizations in the JINR Member States. To meet the growing needs of neutrino experiments for storing experimental data, the cloud storage capacity of the neutrino platform was increased from 1.5 to 3.1 PB.