The CC1 system
Solution for private
cloud computing
Outline
● What is CC1?
● Features
● Technical details
● System requirements and installation
● How to get it?
What is CC1?
● The CC1 system is a complete solution for managing computing resources as IaaS in any institution or company.
● The entire system was designed and created at IFJ PAN in Cracow.
● It helps to organize and virtualize heterogeneous compute and storage resources.
● End users can create and save virtual machines in an easy way through the Web interface.
Features
For the end users:
● CC1 provides a Web interface and an EC2 programming interface that is easily accessible by external tools.
● Disk space for users is provided as space for virtual machine images and virtual storage disks; there is no user space such as cloud object storage.
● With the CC1 system, users get management options for virtual machines (VMs) and virtual farms, which are groups of VMs forming a computing cluster with one head node.
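Because CC1 exposes an EC2-compatible interface, standard EC2 tooling can talk to it. As a rough, standard-library-only illustration (the host name, path, and credentials below are placeholders, not CC1's actual values), a signed EC2 Query-API request could be built like this:

```python
# Sketch: hand-building a signed request for an EC2-compatible Query API
# (AWS signature version 2), using only the Python standard library.
# In practice an EC2 client library (e.g. boto) would simply be pointed
# at the CC1 gateway URL instead of AWS.
import base64
import hashlib
import hmac
from urllib.parse import quote

def ec2_query_url(host, path, access_key, secret_key, action, extra=None):
    """Return a signed EC2 Query-API URL (signature version 2 scheme)."""
    params = {
        "Action": action,
        "AWSAccessKeyId": access_key,
        "SignatureMethod": "HmacSHA256",
        "SignatureVersion": "2",
        "Version": "2010-08-31",
    }
    params.update(extra or {})
    # Canonical query string: keys sorted, values percent-encoded.
    canonical = "&".join(f"{k}={quote(str(v), safe='-_.~')}"
                         for k, v in sorted(params.items()))
    string_to_sign = f"GET\n{host}\n{path}\n{canonical}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    signature = quote(base64.b64encode(digest).decode(), safe='-_.~')
    return f"https://{host}{path}?{canonical}&Signature={signature}"

# Placeholder endpoint and credentials -- use the values issued
# by your CC1 administrator.
url = ec2_query_url("cloud.example.org", "/services/Cloud",
                    "ACCESS", "SECRET", "DescribeInstances")
```

A real request would also carry a `Timestamp` parameter; it is omitted here to keep the sketch short.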
Features
For the CC1 administrator:
● Dedicated administrator panel to manage users and resources.
● The administrator can organize distributed clusters unified under one CC1 cloud controller.
● The number of physical nodes in the system can be elastically extended or reduced.
Features: administrator module
Screenshots: list of physical nodes for one of the clusters; list of clusters added to the CC1 cloud; list of storages added to the CC1 cloud; adding a new storage; adding a new cluster.
Features
The CC1 administrator can easily set and modify resource limits for particular users, and can also set monthly credits for resource usage. Users can check their credit usage on a chart.
Features
Users can monitor each VM.
Technical details
● Software:
– Tested under the latest stable Debian release.
– Uses the KVM virtualization package (libvirt) and the Django web framework.
– A new version is released every few months.
● Uniform Python environment:
– Web interface (Django with Apache).
– CLM (Cloud Manager) and CM (Cluster Manager).
– Databases (PostgreSQL or MySQL).
– Communication (more RESTful communication since version 2.0).
Technical details
Virtual Machines
● Libvirt is used as the low-level virtualization tool (not a bare-metal solution).
● KVM is the default hypervisor. Libvirt supports other hypervisors, but they have not been tested.
● The VM image is copied from physical storage to a local disk on the node, and the VM is started from this clone of the image.
● It is recommended to keep the size of VM images small (10 GB for Linux). Additional space can be provided via virtual disks.
● Preconfigured virtual computing clusters (Farms) with a batch system can be created.
Technical details
Storage
● File-based storage system (Network File System) with the required performance.
● A simple load balancer allows simultaneous usage of several disk arrays.
● At IFJ PAN: currently IBM, formerly Sun Storage.
Technical details
Virtual storage disks
● User storage is provided via virtual disks that can be attached to a VM at start or to a running VM (USB and virtio drivers).
● Virtual disks are created by users on the physical storage through a network protocol.
● Disks are accessed directly from the physical storage.
● The disks are "permanent", i.e. they survive the destruction of a VM.
Technical details
User's private networks
● Each user has their own network or networks. A range of private IP addresses may be reserved by users from the total pool.
● Inter-VM communication is handled by the OSPF routing protocol (Quagga).
Technical details
Network topology and administration
● The CC1 installation packages configure all services required to start the cluster's network.
● The network solution for VMs is based on small routed networks (not bridged).
● Each virtual network for a VM:
– Is created using libvirt.
– Is connected to a physical node.
– Has a local libvirt dnsmasq (basic DNS and DHCP).
● All these networks are joined by routing protocols (OSPF).
● A private IP address is provided to the VM by the dnsmasq service (DHCP) managed by libvirt.
● Routing from the internal cluster network to the Internet is done on the routing server with a proper Quagga configuration.
Technical details
Accessing VMs from outside the cloud
● VM public IP: can be dynamically attached to a VM, then released and attached to another VM. Private-to-public IP mapping is done on the node where the VM is running.
● VNC port mapping: the CC1 Cluster Manager forwards VNC ports to nodes, so it needs its own public IP.
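The private-to-public mapping on the node is classic NAT. One plausible way to express it, shown purely as an illustration (the addresses are examples and these are not CC1's actual commands), is a pair of iptables rules:

```python
# Illustration of elastic public-IP mapping: on the node running the VM,
# a public address is mapped onto the VM's private address with NAT rules;
# deleting the rules releases the public IP for reuse on another VM.
def public_ip_mapping_rules(public_ip: str, private_ip: str) -> list:
    """iptables commands that attach a public IP to a VM's private IP."""
    return [
        # Incoming traffic to the public IP is rewritten to the VM.
        f"iptables -t nat -A PREROUTING -d {public_ip} "
        f"-j DNAT --to-destination {private_ip}",
        # The VM's outgoing traffic appears as the public IP.
        f"iptables -t nat -A POSTROUTING -s {private_ip} "
        f"-j SNAT --to-source {public_ip}",
    ]
```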
Fault Tolerance
● Redundancy-ready deployment is possible.
● At IFJ PAN: two mirrored control servers and synchronized databases (regularly backed up).
● VMs are not affected by possible control node failures.
● The Web interface and VNC connections might become unavailable for a very short period of time during the switch to backup hardware.
● Because the control servers are stored as preconfigured virtual machines, a maintenance break rarely takes more than 15 minutes.
System requirements
● CC1 is free Linux software built using modern virtualization and Internet technologies.
● CC1 system requirements:
– A single node or several nodes for the routing service, interfaces, cloud manager, cluster manager and databases.
– A router server to route packets between the Internet and the cluster network (Quagga configuration).
● Hardware requirements:
– Physical nodes
● A few multicore nodes with hardware virtualization and the Debian system installed.
● The underlying host OS does not matter for end users.
● Debian has good hardware support (including enterprise blade systems).
– Disk storage
● Mounted to nodes as a remote location (NFS by default; other distributed file systems are also possible).
● For large installations, a disk array is recommended for proper performance.
● A 10 Gb/s network is recommended.
● Software
– Python + libvirt + hypervisor (KVM) (probably any Linux)
Optional:
System installation
To install all the CC1 components on the same machine:
● Add the CC1 repository to the operating system.
● Install the deb packages:
cc1-cm cc1-clm cc1-wi cc1-ec2 cc1-common
● Modify the configuration files according to the installation instructions at http://cc1.ifj.edu.pl and restart the services.
● Visit your machine's address in a web browser to see the CC1 web portal.
IFJ PAN Deployment
CC1 installation at the Institute of Nuclear Physics PAN in Krakow
1000 cores (IBM HS22 blades, Xeon L5640 @ 2.27 GHz)
Disk storage: 100 TB
Network:
How to get CC1?
● You can test the system ONLINE! Just send us a request for access to the CC1 Web and EC2 interfaces at cc1@cloud.ifj.edu.pl and we will provide temporary access to computing resources in Krakow.
● The CC1 project page is www.cc1.ifj.edu.pl
● There, in the Demo section, you can find disk images with a preinstalled CC1 system, so you can test the CC1 Web portal on your local machine.