Research of young scientists 2019

The 126th Session of the JINR Scientific Council, 19–20 September 2019

Information Technologies @ JINR development strategy

Nikolay Voytishin
Laboratory of Information Technologies, JINR, Dubna, Russia
The 126th Session of the JINR Scientific Council
19 September 2019

Programme Advisory Committee for Condensed Matter Physics 50th meeting, 17–18 June 2019

Data management system of the UNECE ICP Vegetation Program

A. Uzhinskiy1, G. Ososkov1, M. Frontasyeva2
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Frank Laboratory of Neutron Physics, JINR, Dubna, Russia

The aim of the UNECE International Cooperative Program (ICP) Vegetation, carried out in the framework of the United Nations Convention on Long-Range Transboundary Air Pollution (CLRTAP), is to identify the main polluted areas of Europe, produce regional maps and further develop the understanding of long-range transboundary pollution. The Data Management System (DMS) of the UNECE ICP Vegetation consists of a set of interconnected services and tools deployed and hosted in the cloud infrastructure of the Joint Institute for Nuclear Research (JINR). The DMS is intended to provide the program participants with a modern unified system for collecting, analyzing and processing biological monitoring data. General information about the DMS and its capabilities is presented.

Multifunctional platform for plant disease detection

A. Uzhinskiy1, P. Goncharov2, G. Ososkov1, A. Nechaevskiy1
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Sukhoi State Technical University of Gomel, Gomel, Belarus

The increasing number of smartphones and advances in the field of deep learning open new opportunities in crop disease detection. The aim of our research is to develop a multifunctional platform that uses modern organizational and deep learning technologies to provide a new level of service to the agricultural community. As a product, we are going to develop a mobile application that allows users to send photos and a text description of diseased plants and receive the cause of the illness and its treatment. We have collected a special database of grape and wheat leaves consisting of ten image sets. We have reached 93% accuracy in disease detection with a deep Siamese convolutional network. We have developed a web portal with basic detection functionality and provided an opportunity to download our self-collected image database. General information about the platform and its capabilities is presented.
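A Siamese network classifies a new image by comparing its embedding with reference embeddings of known classes. The following sketch illustrates this matching step in plain Python on toy 2-D "embeddings"; the class names and vectors are invented for illustration and are not the platform's actual data or model.

```python
import math

def euclidean(u, v):
    # Distance between two embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(query_emb, reference_sets):
    """Assign the query to the class whose reference embeddings
    are closest on average (one-shot-style matching)."""
    best_label, best_score = None, float("inf")
    for label, embs in reference_sets.items():
        score = sum(euclidean(query_emb, e) for e in embs) / len(embs)
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Toy 2-D "embeddings" standing in for the CNN branch output.
refs = {
    "healthy":   [(0.1, 0.1), (0.2, 0.0)],
    "black_rot": [(0.9, 0.8), (1.0, 1.0)],
}
print(classify((0.95, 0.9), refs))  # -> black_rot
```

Adding a new disease class in this scheme only requires adding a new set of reference embeddings, with no retraining of the embedding network itself.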

Parallel simulation of the magnetization reversal phenomenon in the φ0-Josephson junction

M.V. Bashashin1,2, E.V. Zemlyanaya1,2, Yu.M. Shukrinov1,2, I.R. Rahmonov1,3, P.Ph. Atanasova4, S.A. Panayotova4
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Dubna State University, Dubna, Russia
3Umarov Physical Technical Institute, Tajikistan
4University of Plovdiv Paisii Hilendarski, Bulgaria

A model of the φ0-Josephson junction in a “superconductor–ferromagnet–superconductor” system with direct coupling between the magnetic moment and the Josephson current is studied using the implicit two-stage Gauss–Legendre algorithm for the numerical solution of the respective system of equations. In this framework, the effect of full magnetization reversal is investigated in a wide range of the model parameters. With this aim, a parallel MPI/C++ computer code has been developed. Its efficiency is confirmed by calculations carried out on the Heterogeneous Platform “HybriLIT” of the Multifunctional Information and Computer Complex of the Laboratory of Information Technologies, JINR (Dubna).
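For readers unfamiliar with the integrator named above, here is a minimal Python sketch of the two-stage Gauss–Legendre scheme (a fourth-order implicit Runge–Kutta method), with the stage equations solved by fixed-point iteration. The test problem y' = -y is purely illustrative; the actual MPI/C++ code solves the coupled magnetization and Josephson equations and is far more elaborate.

```python
import math

# Butcher tableau of the 2-stage Gauss-Legendre method (order 4).
S3 = math.sqrt(3.0)
A = [[0.25, 0.25 - S3 / 6.0],
     [0.25 + S3 / 6.0, 0.25]]
B = [0.5, 0.5]

def gl2_step(f, t, y, h, iters=50):
    """One implicit step; the stage slopes k1, k2 are found by
    fixed-point iteration (adequate for this non-stiff demo)."""
    k = [f(t, y), f(t, y)]
    for _ in range(iters):
        k = [f(t + (0.5 - S3/6.0)*h, y + h*(A[0][0]*k[0] + A[0][1]*k[1])),
             f(t + (0.5 + S3/6.0)*h, y + h*(A[1][0]*k[0] + A[1][1]*k[1]))]
    return y + h * (B[0]*k[0] + B[1]*k[1])

# Demo on y' = -y, y(0) = 1; exact solution exp(-t).
y, t, h = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    y = gl2_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)))  # error on the order of 1e-7
```

The method is A-stable, which is what makes it attractive for the long-time integration of oscillatory Josephson dynamics.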

Distributed information and computing environment of JINR Member State organizations

N.A. Balashov1,2, A.V. Baranov1, A.N. Makhalkin1, Ye.M. Mazhitova1,2, N.A. Kutovskiy1, R.N. Semenov1,3
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Institute of Nuclear Physics, Almaty, Kazakhstan
3Plekhanov Russian University of Economics, Moscow, Russia

The integration of JINR Member State organizations’ resources into a unified distributed information and computing environment is an important and topical task, the solution of which would significantly accelerate scientific research. This paper describes the distributed cloud infrastructure deployed on the basis of the resources of the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (JINR) and some JINR Member State organizations, explains the motivation for this work and the approach it is based on, and outlines plans for using the created infrastructure.

Solving the Optimization Problem for Designing the Pulse Cryogenic Cell

A. Ayriyan1, J. Busa Jr.1,2, H. Grigorian1,3, E.E. Donets4
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Institute of Experimental Physics, Slovak Academy of Sciences
3Department of Theoretical Physics, Yerevan State University
4Veksler and Baldin Laboratory of High Energy Physics, JINR, Dubna, Russia

The optimization problem for the characteristics of the thermal source of a cryogenic cell, i.e. a multilayer cylindrical sandwich-type configuration designed for the pulsed dosed injection of the working substance into the ionization chamber of a source of multiply charged ions, is considered. To solve the optimization problem, we have developed a hybrid MPI+OpenMP parallel algorithm based on a brute-force search for the maximum of an integral proportional to the volume of gas evaporated from the cell surface. The algorithm requires repeated solution of the initial boundary value problem for the heat equation, which is solved numerically by the alternating direction implicit (ADI) method. A method of simple iterations with an adaptive time step is implemented to solve the nonlinear difference equations. The solution of the optimization problem for a specific cell configuration on the GOVORUN supercomputer has demonstrated a ten- to hundredfold acceleration of the calculations.
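As an illustration of the ADI approach mentioned above, the following Python sketch applies one splitting variant (Peaceman–Rachford) to the linear 2-D heat equation on a small square grid with zero boundaries. The actual code treats a nonlinear multilayer cylindrical problem; grid size, time step and initial condition here are invented for the demo.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford step for u_t = u_xx + u_yy,
    zero boundary values; r = dt / (2*h^2)."""
    n = len(u)
    # Sweep 1: implicit in x, explicit in y.
    half = [row[:] for row in u]
    for j in range(1, n - 1):
        a = [-r] * (n - 2); b = [1 + 2 * r] * (n - 2); c = [-r] * (n - 2)
        d = [u[i][j] + r * (u[i][j-1] - 2*u[i][j] + u[i][j+1])
             for i in range(1, n - 1)]
        col = thomas(a, b, c, d)
        for i in range(1, n - 1):
            half[i][j] = col[i - 1]
    # Sweep 2: implicit in y, explicit in x.
    new = [row[:] for row in half]
    for i in range(1, n - 1):
        a = [-r] * (n - 2); b = [1 + 2 * r] * (n - 2); c = [-r] * (n - 2)
        d = [half[i][j] + r * (half[i-1][j] - 2*half[i][j] + half[i+1][j])
             for j in range(1, n - 1)]
        row = thomas(a, b, c, d)
        for j in range(1, n - 1):
            new[i][j] = row[j - 1]
    return new

# Hot spot in the middle of a 9x9 grid; heat spreads and the peak decays.
n = 9
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0
for _ in range(5):
    u = adi_step(u, r=0.5)
peak = max(max(row) for row in u)
print(peak)
```

Each sweep reduces the 2-D problem to a set of independent tridiagonal systems, which is exactly what makes the method cheap per step and easy to parallelize across rows and columns.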

Programme Advisory Committee for Particle Physics 51st meeting, 19–20 June 2019

Tier-1 Service Monitoring System

I. Kadochnikov, V. Korenkov, V. Mitsyn, I. Pelevanyuk, T. Strizh
Laboratory of Information Technologies, JINR, Dubna, Russia

The Tier-1 center for CMS was created at JINR in 2015. It is important to monitor the Tier-1 center continuously in order to maintain its performance. The hardware monitoring system is based on Nagios. It monitors the center at several levels: the engineering infrastructure, the network and the hardware. Apart from infrastructure monitoring, there is a need for consolidated service monitoring. Top-level services, which accept jobs and data from the grid, depend on lower-level storage and processing facilities, which themselves rely on the underlying infrastructure. There are various sources of information about the state and activity of the Tier-1 services. The decision to develop a new monitoring system was made. Its goals are to retrieve monitoring information about services from different sources, process the data into events and statuses, and react according to a set of rules, e.g. notify service administrators or restart a service.
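The rule-driven processing of statuses described above can be illustrated with a small sketch: a top-level service inherits the worst status among its dependencies, and a rule set maps the aggregated status to an action. The service names, dependency graph and rules below are hypothetical, not the actual Tier-1 configuration.

```python
# Severity ordering used to propagate the "worst" status upward.
SEVERITY = {"OK": 0, "WARNING": 1, "CRITICAL": 2}

def aggregate(service, dependencies, statuses):
    """A top-level service is only as healthy as its worst dependency."""
    worst = statuses.get(service, "OK")
    for dep in dependencies.get(service, []):
        s = aggregate(dep, dependencies, statuses)
        if SEVERITY[s] > SEVERITY[worst]:
            worst = s
    return worst

def react(service, status):
    # Example rule set: notify on WARNING, restart on CRITICAL.
    if status == "CRITICAL":
        return f"restart {service}"
    if status == "WARNING":
        return f"notify admins of {service}"
    return "no action"

deps = {"grid-gateway": ["storage", "batch"], "batch": ["infrastructure"]}
state = {"storage": "OK", "infrastructure": "WARNING", "batch": "OK"}
status = aggregate("grid-gateway", deps, state)
print(status, "->", react("grid-gateway", status))
```

Here a WARNING on the infrastructure level surfaces at the grid gateway even though the gateway itself reports OK, which is the point of consolidated service monitoring.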

Event reconstruction based on data from micro-strip detectors of the tracking system in the BM@N experiment

Dmitriy Baranov
Laboratory of Information Technologies, JINR, Dubna, Russia

The main detectors of the tracking system in the BM@N experiment have a micro-strip readout. The key advantage of this detector type is that its readout electronics are much easier to assemble, owing to a smaller number of digital channels in comparison with, for example, pad-based or pixel-based detectors. However, this advantage is diluted by a significant shortcoming: false strip crossings (fakes) resulting from the coordinate reconstruction procedure, which considerably complicate further track finding. As the event multiplicity increases, the number of fakes increases as well, reducing the overall efficiency of the event reconstruction procedure. In this report, we describe the features of the hit reconstruction procedure as a step in the complete event reconstruction based on data from the three types of micro-strip detectors used in the BM@N experiment in 2017–2018: GEM, SILICON and CSC. The software implementation of the simulation and data processing algorithms for these detectors is also described.
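The origin of fakes is combinatorial: with two independent strip coordinates, every fired X-strip can be paired with every fired Y-strip, so the number of candidate crossings grows roughly quadratically with multiplicity while only a linear number of them are real hits. A toy sketch (the strip numbers are hypothetical):

```python
from itertools import product

def crossings(x_strips, y_strips):
    """All geometrically possible strip intersections for one event."""
    return list(product(x_strips, y_strips))

# Three particles hitting a plane with two orthogonal strip coordinates:
true_hits = [(2, 7), (5, 1), (9, 4)]
x_fired = sorted({x for x, _ in true_hits})
y_fired = sorted({y for _, y in true_hits})

cands = crossings(x_fired, y_fired)
fakes = [c for c in cands if c not in true_hits]
print(len(cands), len(fakes))  # 9 candidates, 6 of them fakes
```

Already at multiplicity 3, two thirds of the reconstructed crossings are fakes; at the multiplicities of heavy-ion events the track finder has to reject far more fake combinations than real hits.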

Cluster monitoring system of the Multifunctional information and computing complex (MICC) LIT

I. Kashunin, A. Dolbilov, A. Golunov, V. Korenkov, V. Mitsyn, T. Strizh, E. Lysenko
Laboratory of Information Technologies, JINR, Dubna, Russia

The monitoring system of the LIT MICC Tier-1 and Tier-2 sites was put into operation in early 2015. As the computing complex developed, the number of devices as well as the number of measured metrics grew. Thus, over time, the amount of monitored data increased and the performance of a single monitoring server became insufficient. The solution to this problem was the construction of a cluster monitoring system, which made it possible to distribute the load from one server across several servers and significantly increase scalability.

Recent developments in JINR cloud services

N.A. Balashov1, A.V. Baranov1, A.N. Makhalkin1, Ye.M. Mazhitova1,2, N.A. Kutovskiy1, R.N. Semenov1,3
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Institute of Nuclear Physics, Almaty, Kazakhstan
3Plekhanov Russian University of Economics, Moscow, Russia

The JINR cloud, based on the Infrastructure-as-a-Service (IaaS) model, provides JINR users with virtual machines (VMs), i.e. a universal computing resource designed for personal usage and as a basis for multi-user services. In this paper we give an overview of the recent changes in the JINR cloud infrastructure and common cloud services, and present some new service developments.

Operation center of the JINR Multifunctional information and computing complex

A.O. Golunov, A.G. Dolbilov, I.S. Kadochnikov, I.A. Kashunin, V.V. Korenkov, V.V. Mitsyn, I.S. Pelevanyuk, T.A. Strizh
Laboratory of Information Technologies, JINR, Dubna, Russia

The Multifunctional information and computing complex (MICC) at the Laboratory of Information Technologies of the Joint Institute for Nuclear Research (LIT JINR) is a sophisticated multi-component hardware-software complex aimed at a wide range of tasks related to data processing, analysis and storage in order to support the scientific and productive activities of the Institute and its Member States. The main components of the MICC computing infrastructure are the Tier-1 and Tier-2 grid sites of the global grid infrastructure WLCG (Worldwide LHC Computing Grid), created for processing data from experiments at the Large Hadron Collider, the JINR cloud infrastructure and the heterogeneous computing cluster HybriLIT. An important tool for ensuring smooth operation of computing systems of such a level in 24/7 mode is comprehensive monitoring of all components and subsystems of the Centre. To ensure effective control of the MICC components, the operation center (OC) of the Multifunctional information and computing complex has been developed in the Laboratory of Information Technologies. The main functions of the OC are round-the-clock surveillance of the state of the hardware components, services, and the engineering and network infrastructure.

Short-Range Correlation measurement at BM@N

V. Lenivenko1, S. Merts1, V. Palchik2, V. Panin3, M. Patsyuk1, N. Voytishin2
1Veksler and Baldin Laboratory of High Energy Physics, JINR, Dubna, Russia
2Laboratory of Information Technologies, JINR, Dubna, Russia
3Commissariat à l'énergie atomique et aux énergies alternatives, France

BM@N (Baryonic Matter at Nuclotron) is a fixed target experiment. It is the first operating experiment of the NICA (Nuclotron-based Ion Collider fAcility)–Nuclotron-M accelerator complex. Its main goal is to study the properties of hadrons and the production of (multi-)strange hyperons near the hypernuclei production threshold. In the last physics run, which ended in April 2018, besides the main BM@N physics program, the first measurement of Short-Range Correlations (SRC) in the carbon nucleus was carried out. At any moment of time, about 20% of the nucleons of a nucleus are bound in strongly interacting SRC pairs, which are characterized by a large relative momentum and a small center-of-mass momentum in comparison with the Fermi momentum. Traditionally, the properties of SRC pairs are studied in hard scattering reactions, when a particle from the beam (an electron or a proton) interacts with one nucleon of a nucleus. In the BM@N experiment, inverse kinematics was used: a carbon ion beam collided with a liquid hydrogen target, while the residual nucleus continued to move forward after the interaction and was recorded by the tracking and time-of-flight systems of the BM@N spectrometer. The properties of the residual nucleus at a beam momentum of 4 GeV/c per nucleon have never been studied before. We will present a brief overview of the preliminary results of the analysis of the data of the first SRC measurement at BM@N.

The poster of N. Voytishin ranked second at the PAC for Particle Physics

GOVORUN supercomputer engineering infrastructure

A.S. Vorontsov, A.G. Dolbilov, M.L. Shishmakov, E.A. Grafov, A.S. Kamensky, S.V. Marchenko
Laboratory of Information Technologies, JINR, Dubna, Russia

A complex engineering infrastructure has been developed to support the GOVORUN supercomputer, which is an expansion of the HybriLIT heterogeneous cluster. The infrastructure integrates two cooling solutions: an air cooling system for the GPU component and a water cooling system for the CPU component, the latter based on the solution of the RSC Group.

IT ecosystem of the HybriLIT platform

D. Belyakov1, Yu. Butenko1, M. Kirakosyan1, M. Matveev1, D. Podgainy1, O. Streltsova1, Sh. Torosyan1, M. Vala2, M. Zuev1
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Pavol Josef Safarik University, Kosice, Slovakia

In order to use HPC resources efficiently for solving scientific and applied problems, it is necessary to provide both computing resources and a software-information environment that simplifies the users' work with the existing computing resources. Another aspect that influences the development of the software-information environment is the integration of HPC resources with applied program packages, which are increasingly being used to solve complex technical problems of importance for JINR. All this leads to the formation of an IT ecosystem, which is not only a convenient means for carrying out resource-intensive computations, but also a fruitful educational environment that allows students to learn the latest computing architectures, technologies and tools for parallel programming.

GOVORUN supercomputer: hardware and software environment

D. Belyakov1, Yu. Butenko1, I.A. Kashunin1, M. Matveev1, M. Vala2
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Pavol Josef Safarik University, Kosice, Slovakia

The «HybriLIT» heterogeneous platform is a part of the Multifunctional information and computing complex (MICC) of the Laboratory of Information Technologies of JINR. The heterogeneous platform consists of the «GOVORUN» supercomputer and the «HybriLIT» education and testing polygon. The network infrastructure of the heterogeneous platform is based on a 10GBASE-T Ethernet network with two high-speed segments built on 100 Gbit/s Mellanox InfiniBand and 100 Gbit/s Intel Omni-Path technologies. The data network of the «GOVORUN» supercomputer is built on the high-performance «Lustre» network file system, which works over 100 Gbit/s Intel Omni-Path. MPI processes use Intel Omni-Path within the CPU component and Mellanox InfiniBand within the GPU component. The total performance is 500 TFlops for double precision and 1 PFlops for single precision.

Using the GOVORUN supercomputer for the NICA megaproject

D. Belyakov1, A.G. Dolbilov1, A.N. Moshkin2, D.V. Podgainy1, O.V. Rogachevsky2, O.I. Streltsova1, M.I. Zuev1
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Veksler and Baldin Laboratory of High Energy Physics, JINR, Dubna, Russia

At present, the GOVORUN supercomputer is used both for theoretical studies and for event simulation for the MPD experiment of the NICA megaproject. To generate simulated data of the MPD experiment, the computing components of the GOVORUN supercomputer, i.e. Skylake (2880 computing cores) and KNL (6048 computing cores), are used; the data are stored on the ultrafast data storage system (UDSS) under the management of the Lustre file system, with a subsequent transfer to cold storage controlled by the EOS and ZFS file systems. The UDSS currently has five storage servers with 12 SSD disks using the NVMe connection technology and a total capacity of 120 TB, which ensures low data access time and a data acquisition/output rate of 30 TB per second. Owing to the high performance of the UDSS, by May 2019 over 40 million events had already been generated for the MPD experiment using the UrQMD generator for Au+Au nuclear collisions at √sNN = 4, 7, 9 and 11 GeV. In the future, other MC generators are expected to be used as well. The implementation of different computing models for the NICA megaproject requires confirmation of each model's efficiency, i.e. meeting the requirements for the time characteristics of acquiring data from the detectors with their subsequent transfer to processing, analysis and storage, as well as the requirements for the efficiency of event simulation and processing in the experiment. For these purposes it is necessary to carry out tests in a real software and computing environment, which should include all the required components. At present, the GOVORUN supercomputer is such an environment; it contains the latest computing resources and a hyperconverged UDSS with a software-defined architecture, which provides maximum flexibility of data storage system configurations. It is planned to use the DIRAC software for managing jobs and the process of reading, recording and processing data from various types of storage and file systems.
All of the above will make it possible to verify a basic set of data storage and transmission technologies, simulate data flows, choose optimal distributed file systems and increase the efficiency of event simulation and processing. The studies in this direction were supported by the RFBR grants (“Megascience – NICA”) No. 18-02-40101 and No. 18-02-40102.

Programme Advisory Committee for Nuclear Physics 50th meeting, 24–25 June 2019

JOIN² Software Platform for the JINR Open Access Institutional Repository

I.A. Filozova, R.N. Semenov, G.V. Shestakova, T.N. Zaikina
Laboratory of Information Technologies, JINR, Dubna, Russia

In recent years, open scientific infrastructures have become an important trend in providing researchers, the state and society with scientific information. Research institutes and universities worldwide actively plan and implement archives of their scientific output. The JINR Document Server (JDS) was based on the Invenio software platform developed at CERN. The goals of JDS are to store JINR information resources and provide effective access to them. JDS contains many materials that reflect and facilitate research activities. In the framework of the JOIN² (Just anOther INvenio INstance) project, the partners have improved and adapted the Invenio software platform to the information needs of JOIN² users. The needs of JDS users are very similar to those of JOIN² users, so JINR decided to join the JOIN² project. The JINR participation in the project will improve the functionality of the JINR Open Access institutional repository through code reuse and further joint development. The key points are the migration of JDS to the JOIN² platform and its adaptation to the JOIN² workflows. Joining the JOIN² project poses new challenges; for example, the enhancements required to handle the Cyrillic script for the correct display of authority records have been implemented. This enhancement applies to other national languages as well; it effectively enables the system to handle any language in any script. Some of the JDS features are the following: records with media files (video lectures, seminars, tutorials); data import based on DOI, ISBN, and IDs from arXiv, WoS, Medline, Inspec, Biosis and PubMed; private collections with working group identification; a collection of authority records, namely Grants, Experiments, Institutions, Institutes, People and Periodicals, and their links with bibliographic records. JOIN² enables users, authors, librarians, managers, etc. to view the results of scientific work in a useful, friendly form and provides rich functionality in the simplest way.
The JOIN² workflow covers several verification layers of user data, which exclude various types of errors, and provides reliable information to end users.

Identification of heavy fragments (3He and 4He) using the energy loss method in the STS detector of the CBM experiment

O.Yu. Derenovskaya1, V.V. Ivanov1,2, I.O. Vassiliev3,4
1Laboratory of Information Technologies, JINR, Dubna, Russia
2National Research Nuclear University “MEPhI”, Moscow, Russia
3Gesellschaft für Schwerionenforschung mbH, GSI, Darmstadt, Germany
4Goethe-Universität, Frankfurt am Main, Germany

Currently, the CBM experiment is being developed at the FAIR accelerator complex at GSI (Darmstadt, Germany) by an international collaboration with JINR participation. One of the aims of the experiment is to study the production of hypernuclei. Theoretical models predict that singly and even doubly strange hypernuclei are produced in heavy-ion collisions with a maximum yield in the region of SIS100 energies. The discovery and investigation of new (doubly strange) hypernuclei will shed light on hyperon-nucleon and hyperon-hyperon interactions. In order to accurately measure the yields of hypernuclei and their lifetimes, one should identify their decay products, including 3He and 4He. In this paper, the possibility of heavy fragment identification using the energy loss method in the STS detector is studied. The ωkn criterion was successfully adapted for the separation of doubly charged particles from singly charged ones. The combination of the energy loss method and the ωkn criterion showed a high level of background suppression and of 3He and 4He identification.
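The energy loss method exploits the Bethe–Bloch z² scaling of dE/dx: doubly charged fragments deposit roughly four times more energy per unit path than singly charged particles, so a cut between the two populations is possible. A toy Python sketch of such a separation follows; the resolution, cut value and units are illustrative and are not the actual STS figures.

```python
import random

random.seed(1)

def mean_dedx(z):
    # Bethe-Bloch scaling: mean energy loss grows as the charge squared.
    return 1.0 * z * z  # arbitrary units, normalized to z = 1

def simulate_dedx(z, resolution=0.15):
    # Gaussian smearing mimics a finite energy-loss resolution.
    return random.gauss(mean_dedx(z), resolution * mean_dedx(z))

def is_doubly_charged(dedx, cut=2.5):
    """Toy cut separating z = 1 particles from z = 2 fragments
    (3He, 4He); the cut value here is purely illustrative."""
    return dedx > cut

singly = [simulate_dedx(1) for _ in range(1000)]
doubly = [simulate_dedx(2) for _ in range(1000)]
eff = sum(is_doubly_charged(d) for d in doubly) / 1000
rej = sum(not is_doubly_charged(d) for d in singly) / 1000
print(f"z=2 efficiency ~ {eff:.2f}, z=1 rejection ~ {rej:.2f}")
```

In practice the actual analysis combines this separation with the ωkn goodness-of-fit criterion over several STS stations rather than a single fixed cut.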

Chinese IHEP Data Center Simulation

D.I. Priakhina1, A.V. Nechaevskiy1, V.V. Trofimov1, G.A. Ososkov1, D.M. Marov2
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Dubna State University, Dubna, Russia

The Laboratory of Information Technologies (LIT) at the Joint Institute for Nuclear Research (JINR) has developed a system for simulating the storage and processing of data from large scientific experiments, i.e. the SyMSim (Synthesis of Monitoring and Simulation) software complex. The software complex has been applied to design systems for storing and processing large amounts of data. A web site was developed for interaction with the simulation program. The site allows one to create a simulated infrastructure in a design view by placing the necessary components in a work area, to configure equipment parameters and to browse the simulation results in charts. In 2018, the software complex was adapted for the computing center of the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences. A simulation of the IHEP data center was conducted together with Chinese colleagues. The parameters from the monitoring tools database of the IHEP computing infrastructure and some experimental data of BESIII were used as input data for the simulation. The simulation results made it possible to choose a network topology that improves the data center's performance.

Algorithms and Methods for High Precision Computer Simulations of Cyclotrons for Proton Therapy: The Case of SC202

T. Karamysheva1, O. Karamyshev2, V. Malinin2, D. Popov2
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Dzhelepov Laboratory of Nuclear Problems, JINR, Dubna, Russia

Effective and accurate computer simulations are highly important in accelerator design and production. The most difficult and important task in cyclotron development is the magnetic field simulation. It is necessary to achieve a model accuracy higher than the possible deviation of the magnetic field in the real magnet. An accurate model of the magnet and of the other systems of the cyclotron allows one to perform beam tracking through the whole accelerator, from the ion source to the extraction. While high accuracy is necessary during the late stages of research and development, the high performance of simulations and the ability to swiftly analyze and apply changes to the project play a key role during its early stages. Techniques and algorithms for high accuracy and performance of the magnet simulations have been created and used for the development of the SC202 cyclotron for proton therapy, which is under production by the collaboration of JINR (Dubna, Russia) and ASIPP (Hefei, China).

Monte Carlo study of the systematic errors in the measurement of the scattering of 15N ions by 10,11B

I. Satyshev1, S.G. Belogurov1,2, B. Mauyey2, V. Schetinin1, E. Ovcharenko1, M. Kozlov3
1Laboratory of Information Technologies, JINR, Dubna, Russia
2Flerov Laboratory of Nuclear Reactions, JINR, Dubna, Russia
3University Centre, JINR, Dubna, Russia

A series of experiments on the elastic scattering of 15N ions from 10,11B has been carried out at the U-200P cyclotron of the Heavy Ion Laboratory of the University of Warsaw using the charged particle detection system ICARE. The measured differential distributions are used for obtaining a theoretical interpretation in terms of the Optical Model for pure elastic scattering and the Distorted Wave Born Approximation for the cluster transfer mechanism. The reliability of the interpretation depends on the systematic errors of the experiment. In this work, we report on a Monte Carlo study of such errors using the ExpertRoot simulation and analysis framework based on the FairRoot package. The influence of factors such as the angular, spatial and energy spread of the beam, the energy loss and multiple scattering in the target, as well as the influence of the dimension of the detector slit on the angular resolution of the detector and on the reconstructed differential cross section was investigated. The Monte Carlo model also allowed us to study the influence of the ion identification, the detection efficiency and the energy resolution; however, in the given experiment these factors were not important. As a result, it was demonstrated that the reconstructed differential cross section differed slightly from the input one. The main reason for this difference is the beam spot size at the target. The influence of the slit length is negligible; hence, it can be increased for better detection efficiency. The developed software will be used for planning and analyzing similar experiments in the future.
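The dominant effect found above, the beam spot size, can be illustrated with a toy Monte Carlo: a transverse beam offset at the target shifts the hit position in the detector and hence smears the angle reconstructed under a point-beam assumption. The flight distance, spot size and scattering angle below are invented for illustration and are not the ICARE geometry.

```python
import math
import random

random.seed(2)

def reconstructed_angle(true_theta_deg, beam_spot_mm, flight_mm=500.0):
    """The angle is reconstructed from the hit position assuming a
    point-like beam; a transverse offset at the target biases it."""
    offset = random.gauss(0.0, beam_spot_mm)  # beam spot smearing
    hit = flight_mm * math.tan(math.radians(true_theta_deg)) + offset
    return math.degrees(math.atan2(hit, flight_mm))

true_theta = 20.0
angles = [reconstructed_angle(true_theta, beam_spot_mm=3.0)
          for _ in range(5000)]
mean = sum(angles) / len(angles)
spread = math.sqrt(sum((a - mean) ** 2 for a in angles) / len(angles))
print(f"mean {mean:.2f} deg, angular smearing {spread:.2f} deg")
```

With these numbers a 3 mm beam spot at 500 mm flight distance produces an angular smearing of about 0.1 degree per millimeter of offset, which folds directly into the reconstructed differential cross section.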

The poster of I. Satyshev ranked third at the PAC for Nuclear Physics