The raw data generated by the OMs and section masters are first sent to the cluster center and then to the shore station via the optical cable. The cluster control software transforms the data into three kinds of records in a dedicated Baikal data format: master, monitor, and service records. Master records contain the digitized OM data accompanied by metadata such as timestamps, OM addresses, and trigger information. Service records describe the static and dynamic configuration of a cluster: the former includes the number of clusters, the IP addresses of the telescope elements, and the correspondence between sections and strings; the latter includes the current run number and the trigger configuration. Finally, monitor records contain the OM parameters: photomultiplier high voltage, registration threshold, temperature inside the OM, and the amplitude distribution of signals.
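The three record types described above can be sketched as simple data structures. This is an illustrative model only: the field names are hypothetical, and the actual binary layout of the Baikal data format is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative sketch of the three Baikal record types; field names are
# hypothetical, not the actual on-disk format.

@dataclass
class MasterRecord:
    om_address: int              # address of the optical module
    timestamp_ns: int            # event timestamp
    trigger: int                 # trigger information
    samples: List[int] = field(default_factory=list)  # digitized OM data

@dataclass
class ServiceRecord:
    # static configuration
    n_clusters: int
    ip_addresses: List[str]
    section_to_string: Dict[int, int]
    # dynamic configuration
    run_number: int
    trigger_config: str

@dataclass
class MonitorRecord:
    om_address: int
    pmt_high_voltage: float      # photomultiplier high voltage, V
    threshold: float             # registration threshold
    temperature: float           # temperature inside the OM, deg C
    amplitude_histogram: List[int] = field(default_factory=list)
```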

One cluster generates about 15 GB of data per day. The data are compressed and then sent over the Internet to the servers in Dubna.
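For scale, 15 GB per day corresponds to an average rate of roughly 1.4 Mbit/s per cluster. A minimal sketch of the compress-before-transfer step, assuming a generic gzip-style compressor (the actual tool used at the shore station is not specified here):

```python
import gzip

DAILY_BYTES_PER_CLUSTER = 15 * 10**9                   # ~15 GB of raw data per day
AVG_RATE_BIT_S = DAILY_BYTES_PER_CLUSTER * 8 / 86400   # ~1.4 Mbit/s on average

def compress_chunk(raw: bytes) -> bytes:
    """Compress one chunk of raw cluster data before sending it to Dubna."""
    return gzip.compress(raw, compresslevel=6)

def decompress_chunk(blob: bytes) -> bytes:
    """Restore the chunk on the receiving side."""
    return gzip.decompress(blob)
```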

In Dubna the data are processed with BARS (Baikal Analysis and Reconstruction Software). Its kernel was adopted, with permission, from the MARS framework of the MAGIC collaboration, which is in turn based on the ROOT framework. BARS is intended to run on a Linux operating system. It consists mainly of C++ classes and ROOT macros that perform specific tasks: uncompressing and reading the raw data, sorting the different types of records, sorting the data by cluster section or by timestamp, extracting impulses, and creating joint events.
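As an illustration of one of these tasks, grouping extracted impulses into joint events can be sketched as a simple time-window clustering. This is a hypothetical simplification, not the actual BARS algorithm, and the coincidence window value is invented.

```python
def build_joint_events(impulses, window_ns=100):
    """Group impulses whose timestamps lie within `window_ns` of their
    predecessor into one joint event (a simplified, illustrative scheme)."""
    ordered = sorted(impulses, key=lambda imp: imp["t_ns"])
    events, current = [], []
    for imp in ordered:
        # A gap larger than the window closes the current event.
        if current and imp["t_ns"] - current[-1]["t_ns"] > window_ns:
            events.append(current)
            current = []
        current.append(imp)
    if current:
        events.append(current)
    return events
```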

The output of a macro is usually a ROOT file that serves as the input of another macro. Macros can thus form a chain of tasks, which can be considered a data processing cycle. Such a cycle should be run daily as new raw data arrive from the Baikal shore station. Managing the daily cycle is the purpose of the automation system written in Python.
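The chaining described above can be sketched as follows. The macro names and the `root -b -q` invocation are illustrative assumptions, not the actual BARS pipeline.

```python
import subprocess
from pathlib import Path

# Hypothetical macro names; each step reads the previous ROOT file and
# writes a new one, forming the chain described above.
PIPELINE = ["unpack_raw.C", "sort_records.C", "extract_impulses.C", "build_events.C"]

def plan_cycle(raw_file, workdir, pipeline=tuple(PIPELINE)):
    """Return the (macro, input, output) steps of one daily processing cycle."""
    steps, current = [], Path(raw_file)
    for macro in pipeline:
        out = Path(workdir) / f"{current.stem}.{Path(macro).stem}.root"
        steps.append((macro, current, out))
        current = out
    return steps

def run_cycle(raw_file, workdir):
    """Execute every step of the cycle in ROOT batch mode."""
    for macro, inp, out in plan_cycle(raw_file, workdir):
        subprocess.run(["root", "-b", "-q", f'{macro}("{inp}","{out}")'],
                       check=True)
```

Separating the planning from the execution keeps the chaining logic testable without a ROOT installation.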
