Blade Cluster


(1) R14.1 Operation & Configuration Customized Course. Mohammad Bahgat, December 2011.

(2) PREFACE.

(3)-(4) AGENDA: 1. What is Blade Cluster; 2. Blade Cluster Architecture; 3. Blade Cluster Concepts; 4. Traffic Cases; 5. Signaling from Blade Cluster Point of View; 6. Blade Cluster Configuration Quick View.

(5) What is Blade Cluster – Preface.

(6) What is Blade Cluster – Blade History
- A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Whereas a standard rack-mount server can function with (at least) a power cord and a network cable, blade servers have many components removed to save space.
- Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s.
- The first commercialized blade server architecture was invented by Christopher Hipp and David Kirkeby; the patent was assigned to Houston-based RLX Technologies, which shipped its first commercial blade server in 2001 and was acquired by Hewlett Packard (HP) in 2005.
- The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage. This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently.

(7) What is Blade Cluster – Blade Cluster Definition
- The MSC-S Blade Cluster, the future-proof server part of Ericsson's Mobile Softswitch solution, provides very high capacity, effortless scalability and outstanding system availability. It also means lower OPEX per subscriber and sets the stage for business-efficient network solutions.
- Three blades in MSC R14.1 provide a 43% capacity gain compared to MSC R13.2.

(8) What is Blade Cluster – Blade Cluster Benefits
- Ultra-high capacity: up to 11 million subscribers.
- Outstanding node availability: zero downtime on node level, enabling SW upgrade of a single blade without traffic disturbance.
- Easy scalability: blades can be added and removed without updating the configuration of either the radio or the core network.
- A future-proof solution: the MSC-S Blade Cluster enables SIP interworking (IMS).
- Dump cloning on MSC blades: data across blades are the same, which allows manually cloning a dump from one blade to another.

(9) What is Blade Cluster – Blade Cluster Benefits.

(10) AGENDA: 1. What is Blade Cluster; 2. Blade Cluster Architecture; 3. Blade Cluster Concepts; 4. Traffic Cases; 5. Signaling from Blade Cluster Point of View; 6. Blade Cluster Configuration Quick View.

(11) Blade Cluster Architecture – Blade Cluster Components.

(12) Blade Cluster Architecture – Blade Cluster Components
- MSC Blades: provide MSC/VLR/SSF and GMSC application functions. Subscribers are distributed among the available MSC blades. The blades maintain subscriber registrations and control the mobile radio access. MSC blades are logically connected to other network nodes (MGW, MSC, HLR, FNR, SCP, RNC, BSC). Inter-blade communication is done via Ethernet.
- SPXes: hide the internal structure of the Blade Cluster and provide signaling support functions (routing, load sharing and interworking of different protocols). An SPX also acts as the entry point for incoming traffic and the exit point for outgoing traffic.
- APG: two APG43s are provided (one for charging and one for APG functionality, i.e. I/O functions and statistics).

(13) Blade Cluster Architecture – Blade Cluster Components
- Integrated Site (IS): a solution for building compact IP-based sites. Its components are used for implementing intra MSC-BC connectivity and provide IP connectivity to external nodes; the MSC-BC does not use AXE-dedicated hardware for IP. It is composed of:
  - MXB: main IS switch, control shelf (SCP-RP).
  - EXB: used to connect attached systems such as the SPX (DLEB).
  - IPLB: used to route IP signaling, acting as the physical interface to the IP network.
  - SIS: Site Infrastructure Support, the general I/O system for the IS; it also acts as O&M for the IS and provides boot functions (a new board cannot be added unless SIS is up).

(14) Blade Cluster Architecture – Blade Cluster Connectivity.

(15) Blade Cluster Architecture – Blade Cluster System Cabinets – Whole System.

(16) Blade Cluster Architecture – Blade Cluster System Boards
- SPX
  - APZ
    - APZ 212 55: used in MSC-S R13.1 BC only.
    - APZ 212 60 and APG43.
    - SCB-RP: Support and Connection Board RP (Ethernet switching, power, RP bus, shelf manager).
    - GARP: Generic Application Resource Processor (for NTP).
  - APT
    - ALI: ATM Link Interface.
    - ET155: Exchange Terminal 155 Mbit/s for TDM interfaces.
    - STEB: Signaling Terminal Enhanced Board.
    - XDB: Switch Distributed Board, acting as Group Switch.
    - IRB: Incoming Reference Board for the incoming synchronization reference.
    - CGB: Clock Generation Board for stable clocks.

(17) Blade Cluster Architecture – Blade Cluster System Boards
- I/O
  - APG43.
- Blades
  - APZ 214 03, GEP2 (Generic Ericsson Processor).
- IS
  - MXB-BS: Main Switch Blade System (Ethernet switching, power, shelf manager, IPMB); it contains two boards:
    - SCXB: main switch board; two SCXB boards are required per IS subrack, providing 1 GE connectivity to all slot positions. In IS-1 two CMCB boards are present, while they are not needed in IS-2; SCXB together with CMCB provides 10 GE.
    - CMCB: optional extension board providing 10 GE connectivity to the backplane.

(18) Blade Cluster Architecture – Blade Cluster System Boards
- IS
  - IPLB-BS: IP Line Board. The IPLB blade system provides signaling traffic connectivity and routing and acts as the host interface for the load-balancer functionality. An IPLB behaves as an IP host or IP forwarding engine. The BC system requires one pair of IPLB blades; one port of the IPLB is used for 1 GE connectivity to the LAN.
  - EXB5: Extension Switch Board (Ethernet switch with external interfaces for connecting non-IS equipment to the IS infrastructure), used for AXE attached-system connectivity. In other words, the external LAN attachment blade system extends the IS LAN connectivity to various externally attached LAN devices, providing them with well-defined, standards-compliant data link layer connectivity. Two boards are included in IS-1.

(19) Blade Cluster Architecture – Blade Cluster System Boards
- IS
  - SIS: Site Infrastructure Support board (OAM for the IS). One SIS pair is mandatory per IS domain; there is one active SIS blade and one standby. The SIS blade system provides services such as Integrated Site Management (ISM), fault management and the interface for a locally connected terminal.

(20) Blade Cluster Architecture – Blade Cluster System Cabinets – Cabinet 1.

(21) Blade Cluster Architecture – Blade Cluster System Cabinets – Cabinet 2.

(22) Blade Cluster Architecture – Blade Cluster System Terminologies.

(23) Blade Cluster Architecture – Blade Cluster System Terminologies.

(24) Blade Cluster Architecture – Blade Cluster System Terminologies.

(25) Blade Cluster Architecture – Blade Cluster Resiliency

  Board Type           Resiliency
  SPX                  2 x (1+1) protection
  MSC-Blades           N+1 protection
  APG43 (OAM & STS)    1+1 protection
  APG43 (Charging)     1+1 protection
  IS                   1+1 protection
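
The table above can also be captured as a small data model. The following Python lines are an illustrative sketch only: the scheme names mirror the table, while the helper function and the blade count are hypothetical additions.

# Illustrative sketch: redundancy schemes per board type, mirroring the table above.
RESILIENCY = {
    "SPX": "2 x (1+1) protection",
    "MSC-Blades": "N+1 protection",
    "APG43 (OAM & STS)": "1+1 protection",
    "APG43 (Charging)": "1+1 protection",
    "IS": "1+1 protection",
}

def blades_to_install(active_blades: int) -> int:
    """N+1 protection: one spare blade on top of the active set (hypothetical helper)."""
    return active_blades + 1

if __name__ == "__main__":
    for board, scheme in RESILIENCY.items():
        print(f"{board:20s} {scheme}")
    print("Blades to install for 6 active blades:", blades_to_install(6))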

(26) AGENDA: 1. What is Blade Cluster; 2. Blade Cluster Architecture; 3. Blade Cluster Concepts; 4. Traffic Cases; 5. Signaling from Blade Cluster Point of View; 6. Blade Cluster Configuration Quick View.

(27) Blade Cluster Architecture – Blade Cluster Concepts
- Signaling ProXy (SPX): a 1+1 protected AXE CP with RP equipment. It hides the cluster from the external network and converts and routes signaling traffic (ATM, TDM and IP-based signaling) received from external network nodes to the appropriate blades, acting as an SGW while the blades use IP only.
- Primary Blade: one of the two MSC blades that can handle a certain mobile subscriber. One of the MSC blades is automatically selected as Primary MSC blade for each mobile subscriber by the mobile subscriber distribution function. The Primary MSC blade executes traffic for a mobile subscriber unless it experiences a transient failure or traffic isolation.

(28) Blade Cluster Architecture – Blade Cluster Concepts
- Buddy Blade: one of the two MSC blades that can handle a certain mobile subscriber. One of the MSC blades is automatically selected as Buddy MSC blade for each mobile subscriber by the mobile subscriber distribution function. The Buddy MSC blade handles new traffic for a subscriber during traffic isolation or a transient failure of the Primary MSC blade of the same subscriber. Once the Buddy MSC blade has taken control of a subscriber, it handles this subscriber until the next location update is received or until it becomes unable to execute traffic due to traffic isolation or a transient failure. At the occurrence of one of these events, the Primary MSC blade starts to handle the subscriber again, unless the Primary MSC blade is not able to execute traffic.

(29) Blade Cluster Architecture – Blade Cluster Concepts
- Active blade: the blade that is handling the traffic of a certain mobile subscriber at a certain time. It is a logical role that is automatically assigned to a blade for each mobile subscriber by the distribution function. The active blade of a subscriber is either the Primary or the Buddy blade of that subscriber. Whether the Primary or the Buddy blade is the active blade can change over time and depends on the ability of the two blades to execute traffic. There can be only one active blade per subscriber at the same time.
- Passive blade: of the two blades that can execute traffic for a mobile subscriber (Primary and Buddy blade), the blade that is not the active blade is the passive blade. By default a blade becomes a passive blade for all mobile subscribers already registered on it during traffic isolation or a transient failure. Both the Primary and the Buddy blade can be passive blades at the same time.
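
To make the Primary/Buddy/active/passive roles concrete, here is a minimal Python sketch of the decision described above. It is not Ericsson code; the function name and the boolean health flags are assumptions used only for illustration.

def active_blade(primary_ok: bool, buddy_ok: bool, buddy_has_control: bool):
    """Return which role is currently active for one subscriber.

    primary_ok / buddy_ok: the blade can execute traffic (no traffic isolation
    or transient failure).  buddy_has_control: the Buddy blade already took
    over the subscriber and keeps it until the next location update or until
    it fails itself.
    """
    if buddy_has_control and buddy_ok:
        return "buddy"          # Buddy keeps the subscriber it took over
    if primary_ok:
        return "primary"        # normal case: Primary executes the traffic
    if buddy_ok:
        return "buddy"          # Primary isolated/failed: Buddy takes over
    return None                 # neither blade can execute traffic

# The other of the two blades is the passive blade for that subscriber.
assert active_blade(True, True, False) == "primary"
assert active_blade(False, True, False) == "buddy"
assert active_blade(True, True, True) == "buddy"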

(30) Blade Cluster Architecture – Blade Cluster Concepts
- VLR data replication: a function that makes the subscriber data of a mobile subscriber available in the VLRs of the Primary and of the Buddy blade. It achieves consistency of the VLR data between the two MSC blades; the master database in this case is the HLR.
- Permanent failure: a permanent failure of a blade is a failure that cannot be corrected by automatic recovery actions. A permanent failure of a blade always leads to a cluster reconfiguration.
- Transient failure: a transient failure of a blade is a temporary situation during which the blade is not able to execute traffic due to automatic recovery actions.

(31) Blade Cluster Architecture – Blade Cluster Concepts
- Cluster Handler: a new APZ platform function which provides information about the blades in the cluster in a consistent manner, i.e. before the application is notified (on each blade) about the available blades, the platform performs checks concerning blade availability and the communication paths. If the cluster handler detects changes in the cluster (transient or permanent), it notifies the application. It also offers functionality to traffic-isolate a blade and to initiate capacity changes; the latter can be triggered either by the operator or by permanent failures.

(32) Blade Cluster Architecture – Blade Cluster Concepts
- Load-Vector: a logical representation of the cluster configuration and the load distribution within the MSC Server Blade Cluster. It contains the information about which blades are configured to handle traffic and how the load is distributed over the MSC blades. The information of the load vectors is stored on all blades and is used by the distribution function to determine the Primary and the Buddy blade of a mobile subscriber. Different load vectors are used for this task; the load vectors are calculated based on the information in the consistent cluster view.
- Cluster Reconfiguration: the automatic process of redistributing the registered mobile subscribers between the blades according to a new cluster configuration. It is always preceded by the calculation of new load vectors. A cluster reconfiguration is initiated due to a cluster extension, a cluster reduction or a permanent failure of a blade.
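
As a rough illustration of how a distribution function can map a subscriber to a Primary and a Buddy blade, consider the Python sketch below. The real MSC-S load vectors are computed by the cluster handler from the consistent cluster view and the CP load; the IMSI hash, the function name and the blade list used here are invented for the example.

import hashlib

def pick_primary_and_buddy(imsi: str, blades: list[int]) -> tuple[int, int]:
    """Deterministically derive a Primary and a distinct Buddy blade from the IMSI."""
    digest = int(hashlib.md5(imsi.encode()).hexdigest(), 16)
    primary = blades[digest % len(blades)]
    remaining = [b for b in blades if b != primary]
    buddy = remaining[(digest // len(blades)) % len(remaining)]
    return primary, buddy

# Example with a 6-blade cluster and a made-up IMSI.
print(pick_primary_and_buddy("602021234567890", blades=[1, 2, 3, 4, 5, 6]))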

(33) Blade Cluster Architecture – Blade Cluster Key Mechanisms.

(34)-(39) Blade Cluster Architecture – Blade Cluster Key Mechanisms: Distribution & Replication.

(40)-(42) Blade Cluster Architecture – Blade Cluster Key Mechanisms: Cluster Circuit Sharing (CCS).

(43) Blade Cluster Architecture – Blade Cluster Key Mechanisms: Cluster Circuit Sharing (CCS) – COASE.

(44)-(45) Blade Cluster Architecture – Blade Cluster Key Mechanisms: Cluster Circuit Sharing (CCS)
- Age Rank: assigned by the Cluster Handler, indicating the age of a CP joining the quorum. The CP that has been alive in the quorum for the longest time has the smallest Age Rank and is the CP Traffic Leader, which in turn assigns the Master role.
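
A minimal Python sketch of the Age Rank rule stated above (the join times and CP names are made up; the real ranking is maintained by the Cluster Handler):

def traffic_leader(join_times: dict[str, float]) -> str:
    """join_times maps CP name -> time it joined the quorum (smaller = earlier)."""
    ranked = sorted(join_times, key=join_times.get)          # oldest member first
    age_rank = {cp: rank for rank, cp in enumerate(ranked, start=1)}
    return min(age_rank, key=age_rank.get)                   # smallest Age Rank leads

# The CP that joined the quorum earliest becomes Traffic Leader and assigns the Master role.
print(traffic_leader({"BC0": 100.0, "BC1": 40.0, "BC2": 75.0}))   # -> BC1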

(46)-(49) Blade Cluster Architecture – Blade Cluster Key Mechanisms: Cluster Circuit Sharing (CCS) – Traffic Leader & Route Master role.

(50) Blade Cluster Architecture – IP on CP Migration
- Virtual Interfaces (VIFs)
  - A Virtual Interface is a logical representation of an interface that is used to send or receive data for the connected network.
  - A VIF is connected to a VLAN.
  - IP addresses and routing tables can only be configured on VIFs.

(51) Blade Cluster Architecture – IP on CP Migration
- Virtual Interfaces (VIFs) – Types of VIFs
  - Dual-sided CP (SPXes), "multi-homing": supports two physical Ethernet interfaces (EthA, EthB); each VIF is tied to one interface plus a number identifying the VLAN to be used for that VIF (e.g. EthA-10 is connected to EthA and uses VLAN 10).
  - The two VIFs defined on the two Ethernet interfaces are called a VIF pair; specific functions such as router supervision apply to a VIF pair.

(52)-(53) Blade Cluster Architecture – IP on CP Migration
- Virtual Interfaces (VIFs) – Types of VIFs
  - Router supervision: a function available only for dual-sided CPs.
  - Each VLAN can have one router supervision instance, and every instance has two router supervision IP addresses used to monitor the IP connectivity between the interface and the supervised gateway.
  - Router supervision IP addresses (PingA and PingB) are always defined in pairs, one on each interface of a VIF pair.
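
As an illustration of the PingA/PingB pairing, the Python sketch below checks one supervised gateway per interface side of a VIF pair. The addresses, VLAN numbers and the use of the system ping command are assumptions for the example, not the actual AXE implementation.

import subprocess

SUPERVISION = {    # VLAN -> (PingA on the EthA-side VIF, PingB on the EthB-side VIF)
    10: ("10.0.10.1", "10.0.10.2"),
    20: ("10.0.20.1", "10.0.20.2"),
}

def side_reachable(address: str) -> bool:
    """Send one ICMP echo; True if the supervised gateway answered (Linux ping syntax)."""
    return subprocess.call(["ping", "-c", "1", "-W", "1", address],
                           stdout=subprocess.DEVNULL) == 0

def vlan_status(vlan: int) -> dict[str, bool]:
    ping_a, ping_b = SUPERVISION[vlan]
    return {"EthA": side_reachable(ping_a), "EthB": side_reachable(ping_b)}

if __name__ == "__main__":
    print(vlan_status(10))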

(54)-(55) Blade Cluster Architecture – IP on CP Migration
- Virtual Interfaces (VIFs) – Types of VIFs
  - Single-sided CP (blades), "semi multi-homing in a single-homed view": supports two physical Ethernet interfaces that appear as a single interface (LAG) because of Ethernet link aggregation.
  - A single-sided VIF is called a named VIF ("nVIF").
  - A VIF is always connected to one Ethernet interface.
  - On a VIF the operator can define application IP addresses that belong to one or more defined IP subnets.

(56) Blade Cluster Architecture – IP on CP Migration
- Virtual Interfaces (VIFs) – Types of VIFs
  - For the IP stack on the CP, a specific subnet belongs to exactly one VLAN.
  - Link aggregation describes the use of multiple Ethernet cables/ports in parallel to increase the link speed beyond the limit of any single cable or port.

(57) Blade Cluster Architecture – IP on CP Migration: different cases using IP on CP – Ethernet Link Aggregation Group (LAG).

(58)-(63) Blade Cluster Architecture – IP on CP Migration: different cases using IP on CP.

(64) AGENDA: 1. What is Blade Cluster; 2. Blade Cluster Architecture; 3. Blade Cluster Concepts; 4. Traffic Cases; 5. Signaling from Blade Cluster Point of View; 6. Blade Cluster Configuration Quick View.

(65)-(69) Traffic Cases – Location Update
1. A# sends a location update request to the SPX.
2. The SPX performs round-robin to pick a blade; blade 2 is chosen.
3. The load vector is calculated for A# based on [IMSI, CP load, cluster area identifier]; blade 4 becomes the primary blade, and blade 4 calculates the load vector to choose the buddy blade; blade 5 is chosen.
4. The TID is sent back to the SPX, and the SPX forwards it to the HLR.
5. The HLR sends the location update acknowledgement back to the SPX and then back to the primary blade.
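
The five steps above can be strung together in a small simulation. The Python sketch below only mimics the message order (MS -> SPX -> blades -> HLR); the blade numbering, the IMSI and the hash-based blade choice are invented for illustration.

import itertools
import random

class Spx:
    """Entry point of the cluster: picks the first handling blade round-robin."""
    def __init__(self, blades):
        self._next_blade = itertools.cycle(blades)
    def entry_blade(self) -> int:
        return next(self._next_blade)

def location_update(imsi: str, spx: Spx, blades: list[int]) -> dict:
    entry = spx.entry_blade()                                  # steps 1-2: LU reaches a blade via the SPX
    primary = blades[hash(imsi) % len(blades)]                 # step 3: load vector -> Primary blade
    buddy = random.choice([b for b in blades if b != primary]) # step 3: Primary picks a Buddy blade
    tid = f"TID-{abs(hash(imsi)) % 10000}"                     # step 4: TID forwarded via the SPX to the HLR
    hlr_ack = True                                             # step 5: HLR acknowledges the LU
    return {"entry": entry, "primary": primary, "buddy": buddy, "tid": tid, "registered": hlr_ack}

blades = [1, 2, 3, 4, 5, 6]
print(location_update("602021234567890", Spx(blades), blades))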

(70)-(73) Traffic Cases – Mobile Originating Call
1. A# sends its IMSI, the B-number and the bearer capability.
2. The SPX performs round-robin to pick a blade; blade 2 is chosen.
3. Blade 2 reads the load vector from the cluster handler and resolves the primary and buddy blades assigned at the last location update of A#.
4. An SRI is sent with the TID to the HLR.

(74)-(82) Traffic Cases – Mobile Terminating Call
1. B# sends an IAM request to the SPX.
2. The SPX performs round-robin to pick a blade; blade 1 is chosen.
3. Blade 3 is chosen as the lowest-load blade.
4. An SRI with the TID is sent to the HLR.
5. The HLR sends a PRN, aligned with the IMSI of A#, back to the SPX.
6. The SPX performs round-robin to pick a blade; blade 6 is chosen.
7. The load vector is calculated for A# based on [IMSI, CP load, cluster area identifier]; blade 5 becomes the primary blade, and blade 5 calculates the load vector to choose the buddy blade; blade 2 is chosen.
8. Blade 5 sends the PRN acknowledgement with the assigned MSRN, and the HLR sends the SRI acknowledgement back to the SPX, which finally goes back to blade 3.
9. Blade 3 routes to blade 5 for mobile termination (MTE).
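
A minimal sketch of the terminating leg described in steps 5-9: the Primary blade of the called subscriber answers the PRN with an MSRN, and the GMSC-side blade then routes the call to it. The MSRN range, blade numbers and function names are made up for the example.

MSRN_POOL = iter(range(201000000, 201000100))    # invented roaming-number range

def provide_roaming_number(imsi: str, primary_blade: int) -> dict:
    """What the Primary blade returns in the PRN acknowledgement."""
    return {"imsi": imsi, "blade": primary_blade, "msrn": next(MSRN_POOL)}

def route_terminating_call(prn_ack: dict, gmsc_blade: int) -> str:
    """The GMSC-side blade uses the MSRN to reach the blade serving the subscriber."""
    return (f"blade {gmsc_blade} routes the call to blade {prn_ack['blade']} "
            f"using MSRN {prn_ack['msrn']}")

ack = provide_roaming_number("602021234567890", primary_blade=5)
print(route_terminating_call(ack, gmsc_blade=3))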

(83)-(87) Traffic Cases – Handover (Serving BC to Drift BC)
1. A# sends a handover request to the SPX.
2. The SPX of the Serving BC sends the handover request to the SPX of the Drift BC.
3. The SPX performs round-robin to pick a blade; blade 2 is chosen.
4. The SPX chooses the lowest-load blade; blade 5 is chosen.
5. The load vector is calculated for A# based on [IMSI, CP load, cluster area identifier]; blade 6 becomes the primary blade, and blade 6 calculates the load vector to choose the buddy blade; blade 4 is chosen.

(88) AGENDA: 1. What is Blade Cluster; 2. Blade Cluster Architecture; 3. Blade Cluster Concepts; 4. Traffic Cases; 5. Signaling from Blade Cluster Point of View; 6. Blade Cluster Configuration Quick View.

(89) Signaling from Blade Cluster Point of View – Introduction
- The main functions of the SPXes are (see the sketch after this slide):
  - Choose the leader blade by the round-robin method.
  - Hide the cluster from external NEs.
  - Provide the external signaling interfaces.
  - Since the blades use IP only, the SPX acts as an SGW for TDM/ATM.
  - The SPXes act as STPs in quasi-associated signaling mode.
  - SCCP-based signaling needs a protocol conversion before it is directed to the blades, which understand IP only; this is done with the SUA protocol.
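
The sketch below condenses those bullet points into one decision function. It is illustrative only: the classification strings and the function itself are assumptions, not part of the product.

def spx_handle(transport: str, protocol: str) -> str:
    """transport: 'TDM', 'ATM' or 'IP'; protocol: e.g. 'SCCP', 'ISUP', 'BICC'."""
    if transport in ("TDM", "ATM"):
        action = "terminate as SGW (the blades are IP-only)"
    else:
        action = "relay as STP (quasi-associated mode)"
    if protocol == "SCCP":
        action += " and convert SCCP to SUA toward a blade chosen by round-robin"
    return action

for case in [("TDM", "SCCP"), ("IP", "SCCP"), ("TDM", "ISUP")]:
    print(case, "->", spx_handle(*case))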

(90) Signaling from Blade Cluster Point of View – Protocols
- SUA is supported for MSC-BC internal connections (between SPXes and blades, and between the blades themselves).
- The blades' external interfaces based on SCCP and GCP over SCTP are set up directly to the MGW, bypassing the SPXes.
- The same applies to the SIP and SIP-I protocols toward target nodes in the IMS network, via the IPLB blades.
- M3UA-based signaling could also be set up over direct paths to the blades, but this is not recommended because of the increased configuration needed.

(91)-(92) Signaling from Blade Cluster Point of View – Protocols.

(93)-(94) Signaling from Blade Cluster Point of View – Addressing & Routing
- For connections to NEs that are not aware of the MSC-BC being a distributed multi-hosted system (MGWs), the MSC-BC appears as one single MSC Server node (common cluster SPC/GT).
- NEs that are non-cluster-aware (RANAP "RNC", BSSAP "BSC", MAP/TCAP "HLR") are signaling peers that use SCCP as the base signaling protocol; their connections are therefore proxied, and the SPX pair performs cluster-internal load balancing without making the blades visible to the outside.

(95)-(96) Signaling from Blade Cluster Point of View – Addressing & Routing
- Four types of addresses can be distinguished:
  - A Global Title for the cluster (one common GT for all blades).
  - The cluster SPC (one DPC for all blades, HPC on the SPXes) for SCCP-based signaling – RANAP, BSSAP and MAP/TCAP.
  - Individual SPCs per SPX, plus one SPC common for all blades, for GCP and ISUP/BICC.
  - An extra SPC for associated signaling mode, on the SPXes and on the blades as well (non-Ericsson BSC).

(97) Signaling from Blade Cluster Point of View – Addressing & Routing
- Hosted Point Code (HPC): the SPXes are seen as MTP STPs from the external NE's point of view.
- The Hosted Point Code is an SCCP-level address in the SPX; the SPX recognizes the HPC and, after protocol conversion, distributes the messages to one of the blades by round-robin.
- The blades respond directly to the SPX and allocate destination local reference numbers (DRN) for SCCP signaling (RANAP/BSSAP) or transaction IDs (TID) for MAP dialogues.
- After the connection/dialogue is initially established, the SPX uses the DRN/TID to route to the right blade.
- Where is GTT performed?
  - By the SPX for outbound traffic sent by local SCCP applications on the SPX, but not for outbound traffic of the blades.
  - By the SPX for inbound traffic, as an intermediate or far-end SPC; it may also be performed on the blades if necessary.
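
The DRN/TID stickiness can be pictured with the small Python sketch below: the first message of a connection or dialogue is spread round-robin, and once a blade reports its DRN/TID, follow-up messages go straight to that blade. Class and method names are invented for the illustration.

import itertools

class SpxRouter:
    def __init__(self, blades):
        self._round_robin = itertools.cycle(blades)
        self._owner = {}                       # DRN or TID -> owning blade

    def route(self, reference=None):
        """reference is the DRN/TID once a blade allocated it, None for the first message."""
        if reference is None:
            return next(self._round_robin)     # initial message: round-robin distribution
        return self._owner[reference]          # follow-up message: sticky routing

    def learn(self, reference, blade):
        self._owner[reference] = blade         # the blade reported back its DRN/TID

spx = SpxRouter([1, 2, 3])
first_blade = spx.route(None)
spx.learn("DRN-42", first_blade)
assert spx.route("DRN-42") == first_blade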

(98)-(101) Signaling from Blade Cluster Point of View – Addressing & Routing.

(102) Signaling from Blade Cluster Point of View – Signaling Scenarios: SIGTRAN to SIGTRAN.

(103) Signaling from Blade Cluster Point of View – Signaling Scenarios: SS7 to SIGTRAN (quasi-associated mode).

(104) Signaling from Blade Cluster Point of View – Signaling Scenarios: SS7 to SIGTRAN (associated mode).

(105) Signaling from Blade Cluster Point of View – Signaling Scenarios: Internal Connections.

(106)-(107) Signaling from Blade Cluster Point of View – SUA Concepts
- Application Server (AS): a logical entity serving a specific routing key (RK); it contains one or more ASPs that actively process traffic. The AS uses SUA over the SCTP/IP infrastructure as its transport layer. One AS can be considered equivalent to one BC.
- Application Server Process (ASP): an element of a distributed IP-based signaling node, provisioned to receive certain ranges of signaling traffic.
- Signaling Gateway (SG): the element that terminates SS7 and transports SCCP or MTP3 messages over IP to an IP SEP (blade). An SG can be modeled as one SGP located at the border between the SS7 and IP networks.
- Signaling Gateway Process (SGP): a process instance of an SG; its function comprises the SS7 and SUA stacks. One SGP exists per SG, to which all remote processes are connected; SGP and SG map 1:1. One SGP can be considered equivalent to one SPX.

(108) Signaling from Blade Cluster Point of View – SUA Concepts
- Signaling Process (SP): a process instance that uses SUA to communicate with other SPs in the SUA network. Each SP owns an SCTP endpoint used for sending and receiving SUA messages. In SUA, an SP can be an ASP, an SGP or an IP Server Process (IPSP).
- Routing Context & Routing Key (RC & RK): the Routing Context uniquely identifies a Routing Key; the Routing Key describes a set of SS7 parameters and parameter ranges that define the range of signaling traffic configured to be handled by a particular AS.
- SCTP modes: client-server mode, peer-to-peer mode and peer-to-server mode.
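
A minimal Python sketch of the RK/RC relation described above: a Routing Key is a set of SS7 parameter ranges, the Routing Context identifies it, and a matching message is delivered to the ASPs of the corresponding Application Server. The point codes, SSN range and blade names are invented.

from dataclasses import dataclass

@dataclass
class RoutingKey:
    routing_context: int          # RC: identifier of this routing key
    dpc: int                      # destination point code served by the AS
    ssn_range: range              # subsystem numbers covered by this key

APPLICATION_SERVERS = {1: ["blade-1", "blade-2", "blade-3"]}   # RC -> ASPs
ROUTING_KEYS = [RoutingKey(routing_context=1, dpc=2301, ssn_range=range(6, 10))]

def deliver(dpc: int, ssn: int) -> list[str]:
    """Return the ASPs of the AS whose routing key matches the incoming traffic."""
    for rk in ROUTING_KEYS:
        if dpc == rk.dpc and ssn in rk.ssn_range:
            return APPLICATION_SERVERS[rk.routing_context]
    return []                     # no AS is provisioned for this traffic

print(deliver(2301, 8))           # matched by RC 1 -> its ASPs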

(109) Signaling from Blade Cluster Point of View – SUA Concepts
- Visualization of the SUA concepts: SUA allows ASPs to be part of several ASs.

(110)-(112) Signaling from Blade Cluster Point of View – SIP Interworking & IMS.

(113) AGENDA: 1. What is Blade Cluster; 2. Blade Cluster Architecture; 3. Blade Cluster Concepts; 4. Traffic Cases; 5. Signaling from Blade Cluster Point of View; 6. Blade Cluster Configuration Quick View.

(114)-(115) Blade Cluster Configuration Quick View – Introduction (SCTP Connections)
- SCTP associations are set up both internally between the BC components and toward external NEs in the IP domain.
- SCTP associations and SUA connections between blades are set up automatically; connections to the SPXes need to be configured manually.
- Separate SCTP associations are needed for each user (M3UA, SUA and GCP).
- The blades need associations to both SPXes, and the SPXes need the corresponding associations to all blades.
- MSC blades need:
  - SUA toward both SPXes.
  - M3UA toward both SPXes.
  - Direct GCP toward all MGWs using GCP over SCTP.
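
The association plan above can be enumerated mechanically. The Python sketch below lists one association per user (SUA, M3UA) toward each SPX plus a direct GCP association toward each MGW; the node names and counts are invented for the example.

from itertools import product

def required_associations(blades, spxes, mgws):
    """Enumerate the SCTP associations each blade needs, per the rules above."""
    assocs = [(blade, spx, user)
              for blade, spx, user in product(blades, spxes, ("SUA", "M3UA"))]
    assocs += [(blade, mgw, "GCP") for blade, mgw in product(blades, mgws)]
    return assocs

for assoc in required_associations(["BC0", "BC1"], ["SPX-A", "SPX-B"], ["MGW1"]):
    print(assoc)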

(116)-(117) Blade Cluster Configuration Quick View – Configuration Example: network example diagram and configuration.

(118) Blade Cluster Configuration Quick View – Configuration Example: network example diagram.

(119) Blade Cluster Configuration Quick View – Useful Commands

  NELS                     Network Element Data, List
  CPLS                     CP Identification, List
  CPGLS                    CP Group, List
  HWCLS                    Hardware Configuration, List
  CQRHLLS                  Quorum Log, List
  CQMSP:CP=ALL,DETAILS;    Cluster Handler, Quorum Membership Data, Print
  ODBIP                    Cluster Object Data, Blade Information, Print
  MGSBP                    Number of Registered Subscribers on MSC-S Blade, Print
  PLCLP                    Processor Load, Cluster Processor Load Survey, Print
  CPLS -l                  CP Identification, Hardware-related Info, List

(120) In Summary.

