
4.4.2. Computing Research Center

4.4.2.1. Overview

The computing research center provides computing resources and computer networks to support research activities at KEK.

The Central Computer System was upgraded in early 2012. It was integrated with the B-factory computer system, and it provides the staff of KEK and research collaborators with large amounts of storage for experimental data and CPU power for data analysis. The Supercomputer System is operated for large-scale simulation programs. The computing research center also operates the campus networks, including KEK-LAN, the J-PARC infrastructure network called JLAN, and HEPnet-J, which serves the high-energy physics collaborations of domestic universities and laboratories. Accordingly, concerns over computer security have been steadily gaining importance.

To support communication in research activities, the computing research center provides an e-mail system and web systems. These were also upgraded in early 2012 along with the Central Computer System. The information systems provide a variety of services, such as mailing lists, a Wiki, and KDS-Indico, in addition to conventional services.

4.4.2.2. Computing Services

Central Computer System

The Central Computer System, KEKCC, provides a data analysis environment, which is utilized for various studies on particle and nuclear physics, material and life sciences, accelerator development, theory computation, etc. This system has been in operation since April 2012. It is integrated with the B-factory computer system, which was dedicated to the Belle experiment, and with the Central Information System, which was mainly used for J-PARC experiments. This system also includes information infrastructure environments such as the e-mail and web systems. The data storage system in KEKCC consists of a hard disk system (7 PB) and a tape library (16 PB, with HPSS); 3 PB of the disk and the tape library are combined into an HSM (Hierarchical Storage Management) system using GHI (GPFS-HPSS Interface). The storage system is shared in KEKCC with GPFS and GHI as a distributed file system. This file system can be accessed from Windows, Linux, and OS X computers through CIFS (Common Internet File System) and iRODS, a data Grid middleware.
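As an illustration of the iRODS access path, a client can read a file registered in iRODS with the python-irodsclient package. The following is a minimal sketch only; the host name, zone, credentials, and logical path are hypothetical placeholders and are not taken from the KEKCC configuration.

    # Minimal sketch of reading a file through iRODS with python-irodsclient.
    # Host, zone, credentials, and the logical path are placeholders, not the
    # actual KEKCC settings.
    from irods.session import iRODSSession

    with iRODSSession(host="irods.example.kek.jp", port=1247,
                      user="alice", password="secret", zone="exampleZone") as session:
        # Look up the data object by its logical (zone) path.
        obj = session.data_objects.get("/exampleZone/home/alice/run001.dat")
        print(obj.name, obj.size)  # basic metadata kept in the catalog

        # Stream the contents; the bytes are fetched from whichever resource
        # (disk or HSM-backed tape) actually stores the file.
        with obj.open("r") as f:
            header = f.read(64)
            print(header)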

Figure 4-4-2-1 shows the CPU utilization for each research group in FY2012. The average CPU utilization was about 50%, and experimental and simulated data accounted for 25% of the data storage. System utilization is expected to increase in the next year because the beam power of J-PARC is increasing.

Large-scale Simulation Program

KEK launched its Large-scale Simulation Program in April 1996 to support large-scale simulations in the field of high-energy physics and related areas. Under this program, KEK solicits proposals for projects that make use of the KEK Supercomputer System (KEKSC).

Two research periods overlapped fiscal year 2012: the research period of 2012, from March 2012 to September 2012, and the research period of 2012–2013, from October 2012 to September 2013. During the 2012 research period, 21 proposals covering the following research areas were filed and approved by the Program Advisory Committee: lattice QCD (11), elementary particle physics (2), nuclear physics (1), material science (1), astrophysics (3), accelerator science (2), and numerical algorithms and computational methods (1). In addition, 8 trial applications were also accepted. In the 2012–2013 research period, 25 proposals have been approved, most of which are continuations of proposals filed during the test period. Four trial applications have also been approved thus far. (See http://ohgata-s.kek.jp/ .)

The KEKSC currently consists of System A, a Hitachi SR16000 model M1, and System B, an IBM Blue Gene/Q. System A started service in September 2011 at an off-site data center and has been in operation at KEK since March 2012. System B started service in April 2012. KEKSC is connected to the Japan Lattice Data Grid, which provides fast transfer of lattice QCD data among supercomputer sites in Japan via HEPnet-J/sc, a virtual private network based on SINET4 provided by the National Institute of Informatics.

4.4.2.3. Network and Security

Network

Migration of the wireless network

The KEK network group operates a wireless network, the “MA cluster,” on the KEK Tsukuba campus. This network requires MAC address registration. As the MAC address filtering switch became outdated, service of the oldest network, “tsubaki,” was stopped in FY2012, and “tsubaki-II” was adopted as its successor, with the same encryption settings. The migration was carried out in two steps. The first step was to disable announcement of the SSID, so that new wireless devices could not detect the old network, while devices already registered with it could still connect. The second step was to stop the old network completely, so that no device could connect to it. There was a 3-month interval between these steps so that all users could change the settings of their devices during this period. After the migration was complete, the encryption key of the new network was changed, and “tsubaki-III,” which uses a more secure encryption algorithm, was adopted. However, many old wireless devices exist at KEK and not all of them can connect to “tsubaki-III”. Therefore, both “tsubaki-II” and “tsubaki-III” are currently in use.

In recent years, there has been a tremendous increase in the number of wireless devices, so the DHCP lease time for IP addresses has been shortened. Initially, the lease time of the wireless network service was one week. More recently, however, the IP address pool was exhausted even with a 24-hour lease time, so the current lease time is only 12 hours. This means that the IP address assigned to a user’s device on one day may differ from that assigned to it the previous day.

PerfSONAR

To monitor the network connectivity between the new KEKCC and the sites of collaborators, perfSONAR servers are installed in the KEKCC network. Since perfSONAR servers may periodically apply a non-negligible traffic load on the network, most perfSONAR servers are not fully open to the Internet. Although perfSONAR is widely used among LHC computing sites to monitor their mutual connectivity, KEK does not belong to the LHC tier structure, so we had to ask the managers of the perfSONAR servers at the collaborators' sites for access permission. Figure 4-4-2-2 shows a brief history of the throughput between KEK and Pacific Northwest National Laboratory (PNNL) in the USA.
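The regular point-to-point throughput tests that perfSONAR schedules can be approximated by hand with iperf3. The following is a minimal sketch of such a single test; the target host is hypothetical, and this stand-alone script is not part of the perfSONAR deployment described here.

    # Minimal sketch: run one iperf3 throughput test and report Mbps.
    # The target host below is a placeholder, not an actual KEK or PNNL server.
    import json
    import subprocess

    def measure_throughput_mbps(host: str, seconds: int = 10) -> float:
        # -J asks iperf3 for JSON output; -t sets the test duration in seconds.
        result = subprocess.run(
            ["iperf3", "-c", host, "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True)
        report = json.loads(result.stdout)
        # For a TCP test, the receiver-side average is in end.sum_received.
        bps = report["end"]["sum_received"]["bits_per_second"]
        return bps / 1e6

    if __name__ == "__main__":
        print(f"{measure_throughput_mbps('perfsonar.example.org'):.1f} Mbps")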

The network route from KEK to PNNL uses SINET and ESnet. SINET and ESnet peer with each other at Los Angeles (LA) and New York (NY). The designated path is the LA path, but occasionally the path flips to the NY path, making the round trip time longer. The history graph generated by perfSONAR makes it easy to diagnose a decrease in throughput and to check whether it is temporary.

Fig. 4-4-2-2. Throughput between KEK and PNNL. The green line indicates transmission from KEK to PNNL, and the blue line indicates transmission from PNNL to KEK. When the network path between KEK and PNNL changes for some reason, the change affects not only the round trip time but also the throughput.

Server room relocation

The building K12, the “PS Main Ring Power Station,” usually called “M5,” was used as the server room from FY2007 to FY2011. It was dismantled in FY2012, and most of the servers were moved to K14, the “Proton Beam Application Building,” near K12. The infrastructure of K14 was inadequate for a server room; for example, its network fibers were around 20 years old and supported a maximum data transmission speed of only 100 Mbps, even though K12 had fibers for 10 Gbps links. The chillers, power supply systems, and card-key systems have therefore been reinforced, and some of the cables have been spliced and re-routed from K12 to K14.

Security

In recent times, people’s awareness of protection against security threats has increased, and it has become common to install and update anti-virus software as well as to keep the OS up to date. As a result, the number of virus/worm infections found in the KEK intra-network has been decreasing, as shown in Fig. 4-4-2-3. On the other hand, the number of compromised hosts in the DMZ network is gradually increasing. These hosts are open to the Internet and provide important services, such as web, mail, and SSH servers. Fortunately, the security breaches that occurred in FY2012 were not serious; however, future attacks may be severe and cause damage to KEK. To minimize these security risks, we monitor network packets around the clock and provide a self-inspection environment to the administrators of DMZ hosts in the framework of the KEK Secure Network System.

Fig. 4-4-2-3. Number of security breaches in FY2012 (virus/worm infections and compromised DMZ hosts).

4.4.2.4. J-PARC Information System

Since FY2002, the J-PARC infrastructure network, called JLAN, has been operated independently of the KEK LAN and the JAEA LAN in terms of logical structure and operational policy. In July 2012, JLAN was upgraded to cope with the increasing demand for network bandwidth. The old system provided a bandwidth of 100 Mbps for terminal ports, 1 Gbps for the Internet, and 2 Gbps for the Tokai-Tsukuba connection. In contrast, the new system provides 1 Gbps for terminal ports, 10 Gbps for the Internet, and 8 Gbps for the Tokai-Tsukuba connection. Besides increasing network capacity, the upgrade has made JLAN more robust by introducing redundant switch connections. The total number of hosts on JLAN was over 3,500, and it has been increasing at a rate of 117% per year. The growth curves of edge switches, wireless LAN access points, and hosts connected to JLAN are shown in Fig. 4-4-2-4. Figures 4-4-2-5 and 4-4-2-6 show the network usage of the Tsukuba-Tokai and Internet connections in 2012, respectively.

Fig. 4-4-2-5. Data transfer from Tokai campus to Tsukuba campus (2-hour average and 5-minute peak; the 2011 maximum was 0.83 Gbps; the JLAN upgrade point is indicated).

Fig. 4-4-2-6. History of Internet connections (5-minute average; the 2011 maxima were 0.21 Gbps to the Internet and 0.29 Gbps from the Internet).

4.4.2.5. Research and Development

Grid

The Central Computer System (CCS) has been installed, and it started service in April 2012. The EMI Grid middleware has been deployed in the CCS for analyzing and sharing experimental data over distributed systems. This system is operated under the Worldwide LHC Computing Grid (WLCG) project, which is a global collaboration of more than 170 computing centers in 36 countries, linking up national and international Grid infrastructures.

In the near future, accelerator physics experiments such as Belle II, T2K, and the ILC will generate hundreds of petabytes of data per year. A stable storage system and quick error detection are crucial to the efficient management of such large amounts of data with high throughput. The CRC is also responsible for developing Grid software. The Statistical Charts And Log Analyzer (SCALA), a framework for graphically displaying operational data for a distributed data management system, has been developed and is currently in use at the CCS. This framework allows operational information, such as disk usage and the number of users, to be displayed. SCALA also supports remote debugging through the discovery and display of error messages from the middleware, which enables quick error detection and debugging even while the system is running.
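As a minimal sketch of the kind of log scan that such a framework automates (SCALA's actual implementation is not described in this report, and the log format, component names, and file path below are hypothetical), one might count error messages per middleware component and flag the noisiest ones:

    # Minimal sketch: count ERROR lines per component in a middleware log and
    # report the most frequent offenders.  Log format and path are hypothetical.
    import re
    from collections import Counter
    from pathlib import Path

    LINE = re.compile(r"^(?P<time>\S+ \S+) (?P<component>\S+) ERROR (?P<msg>.*)$")

    def summarize_errors(logfile: str, top: int = 5) -> list[tuple[str, int]]:
        counts = Counter()
        for line in Path(logfile).read_text().splitlines():
            m = LINE.match(line)
            if m:
                counts[m.group("component")] += 1
        return counts.most_common(top)

    if __name__ == "__main__":
        for component, n in summarize_errors("/var/log/dms/transfer.log"):
            print(f"{component}: {n} errors")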



Object Oriented Data Analysis Environment for Neutron Scattering, “Manyo-Library”

The Materials and Life Science Facility (MLF) of J-PARC is a user facility providing neutron and muon sources for experiments. An analysis environment has been developed to provide a software framework for neutron-scattering experiments in the MLF. The framework, Manyo-Library, has common and generic analysis functionalities for neutron-scattering experiments. It is a C++ framework with class libraries and is based on an object-oriented methodology. It provides many methods, including data input/output functions, data-analysis functions, and functions for distributed data processing. As the data container in Manyo-Library can be written in the NeXus format (see http://www.nexusformat.org/ ), the data files can be read at any other laboratory. Many data-analysis software programs have been developed for various instruments and experiments by adopting this framework.
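Since NeXus files are stored in HDF5, they can be inspected outside Manyo-Library as well. The following is a minimal sketch using h5py from Python; the file name and the entry/data group layout are illustrative assumptions, and Manyo-Library's own C++ API is not shown here.

    # Minimal sketch of inspecting a NeXus (HDF5-based) file with h5py.
    # The file name and group layout are illustrative; actual MLF files may differ.
    import h5py

    with h5py.File("run0001.nxs", "r") as nxfile:
        # NeXus organizes data in named groups tagged with an NX_class attribute.
        for name, group in nxfile.items():
            print(name, dict(group.attrs).get("NX_class"))

        # Read a dataset, assuming a hypothetical /entry/data/counts layout.
        counts = nxfile["entry/data/counts"][...]
        print(counts.shape, counts.dtype)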

Manyo-Library has been installed on the neutron-scattering instruments in the MLF and is being utilized as an infrastructure for their software environments. In FY2012, we improved the efficiency and user interface of the library, and we also drafted user manuals. The first official version of Manyo-Library, 0.3, was released in October (see http://wiki.kek.jp/display/manyo/Manyo+Library+Home ).

Geant4

Geant4 is a toolkit for detector simulation, modeling the passage of particles through matter. It provides a comprehensive set of functionalities for geometry, materials, particles, particle tracking, particle interactions, detector response, events, runs, visualization, and the user interface. Geant4 has rich flexibility and expansibility as a generic simulation framework, and it is widely used in application domains ranging from HEP (High Energy Physics) experiments to medical and space applications. Its versatility has attracted attention from fields beyond particle physics.

As a collaboration achievement in FY2012, we released the new version 9.6 by the end of November, followed by several patch releases. Our Japanese colleagues have worked on several categories, such as particles, tracking, detector response, visualization, and the user interface, and have contributed significantly to improvements in performance and functionality.

We continuously support the user community. We held a user-training course on September 5 at the 12th International Conference on Radiation Shielding in Nara. Another notable achievement in user support is that we have created a Japanese version of the Geant4 training material that is compliant with the latest version of Geant4. This material is expected to be useful in tutorials.

As for new developments, we have continued our work in the Japan/US Cooperation Program project, including the development of a framework, in collaboration with the SLAC Geant4 team and other experiment groups. We have devoted our efforts to speeding up Geant4 for future experiments. This project facilitates the improvement of the Geant4 kernel using the latest computer technologies, such as multi-core CPUs, many-core CPUs, and GPU computing. We have started working on a new implementation of a multi-threaded version of Geant4 (G4MT), and the next major release is expected in December 2013. The prototype of G4MT scales well as the number of CPU cores increases, and we succeeded in running and testing it on the new Intel Xeon Phi many-core architecture. Another direction for Geant4 parallelism, using GPUs, is also being investigated. The project group is focusing on electromagnetic physics in the lower energy region under voxel geometry, which is mainly used in radiation dosimetry. Electromagnetic processes in Geant4 were implemented in CUDA, and we achieved a speed-up of about 30–40 times over the simulations produced by a CPU.



GRACE

GRACE is an automatic computation system that provides quantitative theoretical predictions of scattering cross sections of elementary particles, as well as event generators, for high-energy physics experiments. An important extension of the GRACE system is the inclusion of higher-order corrections, which are necessary for more precise theoretical predictions in the Standard Model and beyond. For higher-order corrections, we have to consider both tree and loop diagrams. At the one-loop level, a general analytical prescription is established, whereas at the two-loop level, analytic results are available only for limited cases and an efficient numerical method is required. We have introduced the Direct Computation Method (DCM) for loop integrals. It is based on numerical multi-dimensional integration and numerical extrapolation. DCM is a fully numerical method and is applicable to a wide class of loop integrals with various physics parameters. To minimize the computation time, we are developing a parallel version of DCM using GPGPU or a multi-threading technique, OpenMP, for multi-core CPUs, and we are testing it on several computationally intensive loop integrals.
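As a toy illustration of the numerical approach (not the DCM implementation itself, which is not detailed in this report), the finite part of the one-loop scalar two-point function can, in a common convention, be reduced by Feynman parametrization to a one-dimensional integral and evaluated numerically; the kinematics and mass values below are arbitrary examples chosen to stay below threshold.

    # Toy illustration of numerical loop integration (not the DCM code):
    # after Feynman parametrization, the finite part of the one-loop scalar
    # two-point function B0(p^2, m1^2, m2^2) reduces, in a common convention, to
    #   -integral_0^1 dx ln[(x*m1^2 + (1-x)*m2^2 - x*(1-x)*p^2) / mu^2].
    # Below threshold (p^2 < (m1+m2)^2) the argument of the logarithm is positive.
    import numpy as np
    from scipy.integrate import quad

    def b0_finite(p2: float, m1: float, m2: float, mu: float = 91.1876) -> float:
        def integrand(x: float) -> float:
            arg = x * m1**2 + (1.0 - x) * m2**2 - x * (1.0 - x) * p2
            return -np.log(arg / mu**2)
        value, _err = quad(integrand, 0.0, 1.0)
        return value

    if __name__ == "__main__":
        # Example kinematics: p^2 well below the (m1 + m2)^2 threshold.
        print(b0_finite(p2=100.0**2, m1=80.4, m2=91.2))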

Besides the numerical approach, obtaining analytical expressions for loop integrals is also important for the calculation of physical processes that include loop diagrams. It is known that any loop and any point function can be written as a linear combination of hypergeometric series. We have shown how one-loop integrals are expressed in hypergeometric series using recursion formulae, and we have obtained an n-point function expressed exactly in terms of a hypergeometric series for arbitrary mass parameters and momenta in any space-time dimension.

For QCD processes, we have investigated the one-gluon-exchange corrections to the real-photon structure functions in the massive parton model, employing a technique based on the Cutkosky rules and the reduction of Feynman integrals to master integrals. We present results for four structure functions up to next-to-leading order (NLO), together with a positivity constraint that relates the polarized and unpolarized structure functions and is satisfied at NLO.

