
STREAMEZZO RICH MEDIA SERVER

Clustering

This document is the property of Streamezzo. It cannot be distributed without the authorization of Streamezzo.


Table of contents

1. INTRODUCTION
1.1 Rich Media Server instances
2. CLUSTERING
2.1 Related documents
2.2 Note
3. PROCESSES
3.1 Vertical scaling
3.2 Horizontal scaling
4. MIRRORING RICH MEDIA SERVER
4.1 On the same machine
4.1.1. Configuration changes
4.2 On another machine
4.3 Starting new Rich Media Server instance
5. APACHE LOAD BALANCING
5.1 Apache configuration
5.2 JK configuration
5.2.1. AJP 1.3 connectors
5.2.2. JK workers
5.2.3. Workers mapping
5.3 Session affinity
5.4 Including new Rich Media Server into cluster
5.4.1. Rich Media Server configuration
5.4.2. JK module configuration
5.5 Restarting Apache
6. TOMCAT SESSIONS REPLICATION
7. TOMCAT PERSISTENT SESSIONS MANAGEMENT
7.1 Sessions data table
7.1.1. Oracle
7.1.2. MySQL
7.2 Tomcat configuration
7.3 Database connection libraries


1. INTRODUCTION

The goal of this document is to help administrators deploy two or more Rich Media Server instances in order to build a clustered platform.

This document also applies to mirroring an existing Rich Media Server in order to add one or more instances to an existing cluster.

1.1 Rich Media Server instances

A Rich Media Server instance stands for a Rich Media Server running in Jakarta Tomcat 5.5.

A Rich Media Server cluster is made of two or more Rich Media Server instances associated with the same database. You can refer to the document entitled Rich Media Server Execution Environment for more information about clustered platform architecture. Each instance is expected to run in an application server into which every Rich Media service registered in the database is deployed.

Rich Media Server instances may be deployed on different machines (horizontal scaling) or on a single machine (vertical scaling). You can refer to the document entitled Rich Media Server Clustering for more information about Rich Media Server clustering.

2. CLUSTERING

A cluster is made of several instances running on one or more machines in order to increase platform performance. A load balancer is in charge of dispatching client requests among the instances. In order to share the same configuration, all Rich Media Server instances are supposed to be connected to the same database, which may itself be clustered. You can refer to the document entitled Rich Media Server Execution Environment for more information about load balancer and database integration in a Rich Media Server clustered platform.

This document provides a sample cluster configuration made of free software. Hardware load balancers and database clusters are not in the scope of this document since no free solution is available for these tiers (Streamezzo is currently evaluating MySQL Cluster and C-JDBC, as they are likely to provide a free database cluster, but they are not yet mature enough to be used in a production environment).

In order to balance client request handling between several application server instances, a web server must be added in front of them. It receives every client request and selects the application server instance to forward it to.

This document provides a sample configuration for a cluster made of one or more Apache 2.0.53 web servers in front of one or more Tomcat instances. As many parameters can be set up for each element, this configuration can be customized in order to improve performance.

2.1 Related documents

Streamezzo Rich Media Server Setup Guide describes the deployment of Rich Media Server into most JEE application servers and JDBC-compliant databases.

Streamezzo Rich Media Server Recommended Platform describes the recommended architecture of Rich Media Server platforms.


2.2 Note

This document is specific to Rich Media Server deployments made with the setup programs provided within the release; it does not apply to deployments into another JEE application server.

Clustering Rich Media Server in a JEE application server is the same as clustering any JEE application: it consists in making a cluster of application server instances, located behind a load balancer (hardware or software), and linked to the same database, which may itself be clustered for high availability. The front end Rich Media Server core web applications (stz, SystSTZ4Writer and SystHTMLWriter) are supposed to be deployed onto every instance of such a cluster, as well as every Rich Media service web application.

Most application server clusters are handled through a specific instance, the so-called administration node, which is in charge of deploying applications onto every other cluster node. Rich Media Server Administration (stzadmin) is only supposed to be deployed onto the administration node. For security purposes, it must not be deployed onto front end cluster nodes.


3. PROCESSES

When several Rich Media Server instances are deployed on a single machine, it is called vertical scaling. Such a process is useful when Rich Media Server is running on a machine powerful enough that a single instance cannot fully exploit its resources. This document describes the simplest way to duplicate a Rich Media Server environment and to configure it so that both instances run concurrently on the same machine without any conflict.

When each Rich Media Server instance is running on a separate machine, it is called horizontal scaling. In a high availability platform, for example, it is necessary to deploy two Rich Media Server instances on two different machines in order to make a fully redundant platform.

3.1 Vertical scaling

In order to deploy two Rich Media Server instances on a single machine, the following process must be respected:

- deploying the first Rich Media Server instance;

- making a copy of the first Rich Media Server base directory;

- changing some configuration parameters for the second Rich Media Server instance;

- declaring the second Rich Media Server instance through the Rich Media Server Administration web interface;

- if the Apache web server is deployed in front of Rich Media Server, declaring both Rich Media Server instances in the JK module configuration.

The first Rich Media Server instance is deployed as if it were the only Rich Media Server running on the machine. This step is described in the Rich Media Server Setup Guide.

3.2 Horizontal scaling

Duplicating an existing Rich Media Server so that the new instance runs on another machine is made of the following steps:

- deploying the new Rich Media Server instance and associating it with the same database as the existing Rich Media Server, without resetting its data;

- making a copy of the first Rich Media Server web applications directory;

- pasting this copy into the new Rich Media Server web applications directory;

- declaring the second Rich Media Server instance through the Rich Media Server Administration web interface;

- if the Apache web server is deployed in front of Rich Media Server, declaring both Rich Media Server instances in the JK module configuration.


4. MIRRORING RICH MEDIA SERVER

Once it has been deployed with any of the setup programs, a Rich Media Server instance is made of the contents located under the INSTALL_DIR/serviceNode directory.

4.1 On the same machine

Making a mirror of an existing Rich Media Server consists in copying the contents of the following subdirectories:

- serviceNode/bin: startup.bat and shutdown.bat (startup.sh and shutdown.sh under Linux);

- serviceNode/conf: catalina.policy, server.xml, tomcat-users.xml, web.xml;

- serviceNode/shared: every classes and lib file;

- serviceNode/webapps: all contents must be copied.

The serviceNode/logs and serviceNode/temp directories must be created and left empty so that the new Rich Media Server instance can start successfully.

Then some configuration parameters must be changed in the startup and shutdown scripts as well as in the server.xml file.

The result of this operation is a new directory (called serviceNodeBis in this document) which is a copy of the existing one, excluding the temporary and log files generated during the existing Rich Media Server's life, as shown in the sketch below.
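A minimal sketch of this copy under Linux, assuming the default INSTALL_DIR layout (adapt the paths to your own installation):

cd INSTALL_DIR
cp -r serviceNode serviceNodeBis
# discard temporary and log files generated by the existing instance
rm -rf serviceNodeBis/logs serviceNodeBis/temp
# recreate both directories empty so that the new instance can start
mkdir serviceNodeBis/logs serviceNodeBis/temp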

4.1.1. Configuration changes

Startup and shutdown scripts must be modified in order to take into account the new Rich Media Server base directory.

Under Windows, it consists in editing startup.bat and shutdown.bat files in order to replace the line:

set CATALINA_BASE=../../serviceNode

by the line:

set CATALINA_BASE=../../serviceNodeBis

Under Linux, it consists in editing the startup.sh and shutdown.sh files in order to replace the line:

export CATALINA_BASE=../../serviceNode

by the line:

export CATALINA_BASE=../../serviceNodeBis
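Under Linux, the replacement can also be scripted; this sketch assumes GNU sed and that both scripts contain the default CATALINA_BASE line shown above:

cd INSTALL_DIR/serviceNodeBis/bin
# replace the trailing serviceNode with serviceNodeBis in both scripts
sed -i 's#/serviceNode$#/serviceNodeBis#' startup.sh shutdown.sh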

Then some configuration parameters must be changed in the Jakarta Tomcat configuration file server.xml. Indeed, both Rich Media Server instances have to listen to different TCP ports in order to avoid any conflict. Furthermore, they have to be identified differently when they are running within a cluster using session affinity.

Whatever the operating system, this step consists in editing the server.xml file and changing the port values and the jvmRoute identifier shown in the following excerpt ([…] stands for contents that do not need any modification but must not be removed):


<!-- Rich Media Server Tomcat 5.5 configuration file -->
<!-- Server port attribute value must be unique for one machine -->
<Server port="8686" shutdown="SHUTDOWN" debug="0">
[…]
  <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 80 -->
  <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
             port="80" minProcessors="25" maxProcessors="100"
             enableLookups="true" redirectPort="8443"
             acceptCount="100" debug="0" connectionTimeout="60000"
             useURIValidationHack="false" disableUploadTimeout="true"
             URIEncoding="UTF-8" useBodyEncodingForURI="false"/>
  <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 8080 -->
  <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
             port="8080" minProcessors="5" maxProcessors="75"
             enableLookups="true" redirectPort="8443"
             acceptCount="10" debug="0" connectionTimeout="60000"
             useURIValidationHack="false" disableUploadTimeout="true"
             URIEncoding="UTF-8" useBodyEncodingForURI="false"/>
[…]
  <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
  <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
             port="8009" minProcessors="25" maxProcessors="100"
             enableLookups="true" redirectPort="8443"
             acceptCount="100" debug="0" connectionTimeout="60000"
             useURIValidationHack="false"
             protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
             URIEncoding="UTF-8" useBodyEncodingForURI="false"/>
[…]
  <Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="serviceNode">
[…]

The contents shown above match the configuration file as it is created after a deployment with default configuration parameter values. These values may differ from one Rich Media Server instance to another, and the file format may have changed, since the file is rewritten after each service publication. In any case, the required modifications consist in changing every TCP port value (the shutdown port and every connector port) so that the new Rich Media Server listens to ports that are not already used by other processes running on the machine, including the existing Rich Media Server. They also consist in changing the identifier (jvmRoute) used by the Apache JK module to provide session affinity in a clustered environment.
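Before starting the new instance, the ports already in use on the machine can be listed in order to pick free values; for example, under Linux (the exact netstat output format depends on the operating system):

netstat -an | grep LISTEN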

4.2 On another machine

In order to make a mirror of an existing Rich Media Server on a new machine, you first have to deploy the Rich Media Server with one of the provided setup programs, as described in the Rich Media Server Setup Guide. The Rich Media Server database is supposed to be the same as the existing Rich Media Server one, and its data must not be reset.

The only step left is to copy the serviceNode/webapps directory contents of the existing Rich Media Server and paste them into the serviceNode/webapps directory of the new Rich Media Server, for instance as sketched below.
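A minimal sketch of this copy under Linux, assuming rsync over SSH is available and EXISTING_HOST stands for the machine hosting the existing Rich Media Server (any other remote copy tool can be used instead):

# run on the new machine, from the INSTALL_DIR directory
rsync -a EXISTING_HOST:INSTALL_DIR/serviceNode/webapps/ serviceNode/webapps/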

Both Rich Media Server instances are then able to serve the same Rich Media services, and the new Rich Media Server can be included in the cluster.


4.3 Starting new Rich Media Server instance

Once one of the steps described above has been completed, the new Rich Media Server instance must be started by executing the startup script.

To check that the new Rich Media Server instance starts successfully, look at the standard output messages. Under Windows, the console is used as standard output, whereas under Linux the file logs/catalina.out is used.
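Under Linux, for example, the startup can be followed with the commands below; Tomcat prints a "Server startup in ... ms" message once the instance is ready (the paths assume the serviceNodeBis mirror created above):

cd INSTALL_DIR/serviceNodeBis/bin
./startup.sh
tail -f ../logs/catalina.out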


5. APACHE LOAD BALANCING

Load balancing may be achieved with the Apache web server.

Apache 2.0.53 and Tomcat 5.5 are coupled through the JK 1.2.10 connector. Note that Apache and Tomcat cannot listen to the same port on the same machine. If Apache and Tomcat are intended to run on the same machine, make sure the Tomcat configuration does not declare a Connector on the same port as Apache. In such a case, remove the conflicting Connector from the Tomcat configuration file (server.xml).

5.1 Apache configuration

Once Apache 2.0.53 and JK 1.2.10 are installed, the Apache configuration file httpd.conf must be updated. The following lines must be added (the commented lines are only provided to locate where the JK specific directives must be added):

#
# Dynamic Shared Object (DSO) Support
#
# To be able to use the functionality of a module which was built as a DSO you
# have to place corresponding `LoadModule' lines at this location so the
# directives contained in it are actually available _before_ they are used.
# Statically compiled modules (those listed by `httpd -l') do not need
# to be loaded here.
#
# Example:
# LoadModule foo_module modules/mod_foo.so
#
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/jk.log
JkLogLevel error
JkMountFile conf/uriworkermap.properties
JkShmFile logs/jk.shm

5.2 JK configuration

JK 1.2.10 module configuration requires the Jakarta Tomcat 5.5 instances to define AJP 1.3 connectors. The JK configuration itself is defined in the files workers.properties and uriworkermap.properties, which must be manually created and edited in the conf subdirectory of the Apache 2.0.53 installation directory.

5.2.1. AJP 1.3 connectors

The Tomcat configuration file (server.xml) must include an AJP 1.3 connector declaration:

<Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
           port="8009" minProcessors="5" maxProcessors="75"
           enableLookups="true" redirectPort="8443"
           acceptCount="10" debug="0" connectionTimeout="20000"
           useURIValidationHack="false"
           protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>

Rich Media Server setup programs declare such a connector. More parameters could be configured for this connector, but they are not in the scope of this document.

5.2.2. JK workers

The file workers.properties contains the declaration of a worker for each Jakarta Tomcat 5.5 instance. The worker host is the IP address or host name of the machine, and the worker port matches the AJP 1.3 connector port declared in the Tomcat configuration file.

In addition, a worker is declared with lb as its type. It is used as a load balancer between the Tomcat instances. For each Tomcat-associated worker, the lbfactor defines the relative workload the worker is able to handle. When every Tomcat instance runs on the same hardware, the same value must be assigned to each of them.

worker.list=serviceNode
worker.sn1.type=ajp13
worker.sn1.host=SN1_IP_ADDRESS
worker.sn1.port=SN1_AJP13_PORT
worker.sn1.lbfactor=1
worker.sn1.socket_keepalive=1
worker.sn2.type=ajp13
worker.sn2.host=SN2_IP_ADDRESS
worker.sn2.port=SN2_AJP13_PORT
worker.sn2.lbfactor=1
worker.sn2.socket_keepalive=1
worker.serviceNode.type=lb
worker.serviceNode.balance_workers=sn1,sn2

5.2.3. Workers mapping

The file uriworkermap.properties contains the declaration of the client request URI patterns that the JK workers will deal with. For every incoming HTTP request, Apache looks for a matching pattern to select the worker to forward the request to.

/stz/*=serviceNode

This single line means that every incoming HTTP request whose URL matches the format http://APACHE_HOST:APACHE_PORT/stz/* will be forwarded to one of the Tomcat instances declared in the serviceNode worker balance_workers property value.

5.3 Session affinity

Session affinity consists in forwarding every client request belonging to the same session to the same Tomcat instance. It improves performance and is required when the Tomcat instances do not replicate sessions within a cluster.


The JK worker names (sn1 and sn2 in the sample configuration) must also be declared in the Tomcat configuration files to enable session affinity.

For every Tomcat instance that is part of the platform, edit the configuration file (server.xml) and locate the Engine XML tag. Then add the jvmRoute attribute with the corresponding worker name as its value.
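For example, for the instance handled by the sn1 worker of the sample configuration above, the Engine tag would look like the following (the other attributes are left as generated by the setup program):

<Engine name="Standalone" defaultHost="localhost" debug="0" jvmRoute="sn1">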

5.4 Including new Rich Media Server into cluster

Whatever scaling mode is used, the new Rich Media Server must be included in the cluster so that, on one hand, it is known by the Apache web server and, on the other hand, new Rich Media services are deployed into this new instance at the same time as into the other Rich Media Server instances of the cluster.

5.4.1. Rich Media Server configuration

The new Rich Media Server instance must be declared through the Rich Media Server Administration web interface so that Rich Media services are deployed into this instance as well as into the existing instances after each publication.

This operation consists in adding the new instance to the servers list, whose management is described in the Rich Media Server Administration Guide.

The new server must be declared with the publication port set in the server.xml file as the port parameter of one of the HTTP connectors.

5.4.2. JK module configuration

Finally, when the Rich Media Server instances are part of a cluster whose workload is balanced by a web server using the JK module, the new instance must be added to the JK configuration.

The JK module configuration is described in the Rich Media Server Setup Guide. Adding a Rich Media Server instance to a cluster implies declaring it in the JK module configuration file.

The following lines must be added to the file workers.properties ([…] stands for contents that do not need any modification but must not be removed):

[…]
worker.<SN_JVM_ROUTE>.type=ajp13
worker.<SN_JVM_ROUTE>.host=<SN_IP_ADDRESS>
worker.<SN_JVM_ROUTE>.port=<SN_AJP13_PORT>
worker.<SN_JVM_ROUTE>.lbfactor=1
worker.<SN_JVM_ROUTE>.socket_keepalive=1
[…]
worker.serviceNode.balance_workers=[…],<SN_JVM_ROUTE>

where <SN_JVM_ROUTE> stands for the value set in the server.xml file for the jvmRoute parameter, <SN_IP_ADDRESS> is the machine host name or IP address, and <SN_AJP13_PORT> is the value set in server.xml for the port parameter of the AJP connector.
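As a purely hypothetical illustration, for a third instance whose jvmRoute would be sn3, running on a machine with IP address 192.168.0.3 and an AJP 1.3 connector listening on port 8009, the added lines would be:

worker.sn3.type=ajp13
worker.sn3.host=192.168.0.3
worker.sn3.port=8009
worker.sn3.lbfactor=1
worker.sn3.socket_keepalive=1
worker.serviceNode.balance_workers=sn1,sn2,sn3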


5.5 Restarting Apache

Finally, if the Apache web server is used as the cluster load balancer, it must be restarted in order to take the JK configuration changes into account.

Under Linux, the Apache web server can be restarted gracefully by executing the following command:

apachectl graceful
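Before restarting, the syntax of the updated Apache configuration can be checked with the configtest action of apachectl:

apachectl configtest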


6. TOMCAT SESSIONS REPLICATION

Session replication between several Tomcat instances has been introduced in Tomcat 5.5. However, the developer Filip Hanik has released a library that makes it possible for Tomcat 4.1. As it is not a Tomcat 4.1 built-in feature, this library is not available when deploying Rich Media Server within a previously installed Tomcat 4.1.

Nevertheless, the Rich Media Server setup programs include this library in the embedded Tomcat 4.1, so that it is possible to configure it as described below in order to replicate sessions between several Rich Media Server instances running in a cluster. The library implementing this feature is named tomcat-replication.jar and is located under $INSTALL_DIR/tomcat/server/lib.

This feature can only be activated by editing the Tomcat configuration file manually. In Tomcat 4.1, session replication is configured on a per-context basis. As Rich Media service sessions are registered in the front (stz) context, only this context must be set up to replicate sessions. Here is the configuration file ($INSTALL_DIR/serviceNode/conf/server.xml) excerpt to include in order for Rich Media Server sessions to be replicated:

<Context docBase="stz" path="/stz" cookies="false"> <Manager className="org.apache.catalina.session.InMemoryReplicationManager" checkInterval="10" expireSessionsOnShutdown="false" maxActiveSessions="-1" debug="0" printToScreen="true" saveOnRestart="false" minIdleSwap="-1" maxIdleSwap="-1" maxIdleBackup="-1" pathname="null" printSessionInfo="true" serviceclass="org.apache.catalina.cluster.mcast.McastService" mcastAddr="228.1.2.3" mcastPort="45566" mcastFrequency="500" mcastDropTime="5000" tcpListenAddress="auto" tcpListenPort="4001" tcpSelectorTimeout="100" tcpThreadCount="2" tcpKeepAliveTime="-1" synchronousReplication="true" useDirtyFlag="true"> </Manager> <Valve className="org.apache.catalina.session.ReplicationValve" debug="0"/>

<Resource auth="Container" description="Streamezzo Database Connection" name="jdbc/STZDatabase" scope="Shareable"

type="javax.sql.DataSource"/> </Context>

The only parameter likely to change from one Rich Media Server instance to another is tcpListenPort, which must be different for each instance running on a single machine.


When you first start the instance, you will get the following error:

java.lang.Exception: ManagedBean is not found with InMemoryReplicationManager


7. TOMCAT PERSISTENT SESSIONS MANAGEMENT

With the previous configuration, the cluster is able to overcome an application server instance failure. In such a case, further requests are forwarded to the second application server instance, which keeps on running. This way, the platform provides high availability.

A higher level of cluster capability consists in maintaining every user session across an application server failure. The solution consists in writing session data into persistent storage that is accessible to every application server instance.

Be aware that such a feature impacts the whole platform performance, so it should be used only if it is really needed.

This can be done by using either a shared file system or the database. This document focuses on the second solution, as it is quite easy to add this feature to the sample cluster.

7.1 Sessions data table

Session data is written into a database table that is not created by the SQL scripts provided by Streamezzo. You have to execute one of the following SQL queries in order to create this table.

7.1.1. Oracle

create table tomcat_sessions (
  session_id    varchar(100) not null primary key,
  valid_session char(1) not null,
  max_inactive  number(10) not null,
  last_access   number(20) not null,
  app_name      varchar(255),
  session_data  blob);

7.1.2. MySQL

create table tomcat_sessions (
  session_id    varchar(100) not null primary key,
  valid_session char(1) not null,
  max_inactive  int not null,
  last_access   bigint not null,
  app_name      varchar(255),
  session_data  mediumblob,
  KEY kapp_name(app_name));
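These statements can be run with the standard database clients; for instance, assuming the sample stzdb schema and credentials used elsewhere in this document and a script file named tomcat_sessions.sql (a hypothetical name):

# Oracle
sqlplus stzdb/stzdb@stzdb @tomcat_sessions.sql
# MySQL
mysql -u stzdb -p stzdb < tomcat_sessions.sql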

7.2 Tomcat configuration

The configuration file server.xml must be updated in order to configure the persistent session storage. A context must be defined for the front request dispatcher; this context wraps the persistent manager configuration.


<Context path="/stz" docBase="stz" reloadable="true" cookies="false" debug="0"> <Manager distributable="true" className="org.apache.catalina.session.PersistentManager" debug="0" checkInterval="60" saveOnRestart="true" maxActiveSessions="-1" minIdleSwap="-1" maxIdleSwap="-1" maxIdleBackup="-1"> <Store className="org.apache.catalina.session.JDBCStore" debug="0" connectionURL="<DB_JDBC_URL>" driverName="<DB_JDBC_DRIVER_CLASS>" sessionAppCol="app_name" sessionDataCol="session_data" sessionIdCol="session_id" sessionLastAccessedCol="last_access" sessionMaxInactiveCol="max_inactive" sessionTable="tomcat_sessions" sessionValidCol="valid_session"> </Store> </Manager> </Context> Where:

<DB_JDBC_DRIVER_CLASS> is the name of the JDBC driver used to contact the database:

- oracle.jdbc.driver.OracleDriver for Oracle; - org.gjt.mm.mysql.Driver for MySQL.

<DB_JDBC_URL> is the URL used by the driver to connect to the database:

- jdbc:oracle:thin:stzdb/stzdb@host:1521:stzdb for Oracle; - jdbc:mysql://localhost:3306/stzdb?autoReconnect=true&amp;

user=stzdb&amp;password=stzdb for MySQL.

7.3 Database connection libraries

The JDBC driver library must be located in the TOMCAT_HOME/common/lib directory. If Rich Media Server has been installed through one of the provided setup programs, you must move commons-dbcp-1.2.jar and commons-pool-1.2.jar from the $INSTALL_DIR/serviceNode/shared/lib directory to the $INSTALL_DIR/tomcat/common/lib directory.
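Under Linux, for example, the move can be done as follows (check the exact jar file names present in the shared/lib directory of your installation):

cd $INSTALL_DIR
mv serviceNode/shared/lib/commons-dbcp-1.2.jar tomcat/common/lib/
mv serviceNode/shared/lib/commons-pool-1.2.jar tomcat/common/lib/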
