7 Conclusion
The number of devices connected to the Internet of Things is increasing, and the criticality of the transported data is growing with it. However, the data transport network of the Internet of Things lags behind the data acquisition and processing layers. In this paper, we proposed a network function virtualization infrastructure that allows the agile and effective deployment of virtual functions for outsourcing network tasks from the IoT devices to the cloud. The proposal developed the idea of a virtualized access gateway node, capable of creating independent and isolated domains of connected devices. In each domain, independent, isolated and tailored network functions are applied to meet the performance and security requirements of each Internet of Things application. A prototype of the proposed infrastructure was developed and evaluated. The results demonstrated that the virtualized access gateway node does not introduce performance losses for the access of the IoT devices. It was further found that the latency between the access node and the infrastructure can be substantially high without performance losses. The results of the experiments show that even when the virtual queue policy function runs in a virtualized environment, the effects of policy enforcement are visible on the physical interface, avoiding the overuse of resources. Since the proposal does not apply protocols that perform flow control, the variation in the delay on the communication between the gateway and the network function virtualization infrastructure (NFVI) does not interfere with the received packet rate and has little influence on the processing of the packets in the virtualized environment. Another important point is that the proposal is based on traditional, consolidated and widely deployed tools, such as GRE, which allows higher performance than experimental tools that are still in development, such as NSH.
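To make the queue-policy idea concrete, the following is a minimal token-bucket policer written in Python. It only illustrates the rate-limiting behaviour discussed above; it is not the code used in the prototype, and the rate and burst values are hypothetical.

```python
import time


class TokenBucketPolicer:
    """Minimal token-bucket policer: packets exceeding the configured rate
    are dropped, so traffic leaving the virtual function (and hence the
    physical interface) never exceeds the policy."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.capacity = burst_bytes      # maximum burst in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last packet.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True                  # forward the packet
        return False                     # drop: policy exceeded


if __name__ == "__main__":
    # Hypothetical 1 Mbit/s policy with a 15 kB burst allowance.
    policer = TokenBucketPolicer(rate_bps=1_000_000, burst_bytes=15_000)
    forwarded = sum(policer.allow(1_500) for _ in range(100))
    print(f"forwarded {forwarded} of 100 back-to-back 1500-byte packets")
```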
The journey starts by recognizing that the key principle of NFV is Network-IT convergence (network applications running on a common IT infrastructure) and that the organizational model has to support this convergence for NFV to succeed. The NFVI Tenant Operations organization has to speak the same language as VNF providers, the language of vIMS, vEPC and vCPE experts, to understand their requirements and translate them into NFV Infrastructure services. At the other end, the NFV Infrastructure Operations organization designs, delivers, operates and manages a modern IT infrastructure, which requires virtualization, cloud and DevOps skills, expertise and best practices.
with the TSecO. If an attacker gains admin privileges in the hypervisor, it is possible to launch a malicious instance by modifying the filter scheduler, even if the TSecO returns a failure. However, such an action can still be detected using the log entries in the TSecO. Placing the TSecO and its management function separately from the NFVI helps in detecting any malicious actions. Also, the service provider can communicate with the TSecO and check the audits during every launch of a VNF to verify the entire launch process. It might still be possible for an attacker with admin privileges to inject malware after the signature verification. Hence, our method does not completely protect the system against such an attack.
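As a rough sketch of the kind of check described above, the snippet below verifies a VNF image signature against a trusted public key before launch and appends every decision to an audit log. It assumes the third-party `cryptography` package; the file name, function names and demo key are illustrative and not taken from the TSecO implementation.

```python
import hashlib
import json
import time
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

AUDIT_LOG = Path("tseco_audit.log")   # hypothetical audit-log location


def audit(event: str, **fields) -> None:
    """Append a timestamped entry to the audit log for later verification."""
    entry = {"ts": time.time(), "event": event, **fields}
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")


def verify_vnf_image(image: bytes, signature: bytes, public_key) -> bool:
    """Return True only if the image signature checks out; log either way."""
    digest = hashlib.sha256(image).hexdigest()
    try:
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        audit("launch_approved", image_sha256=digest)
        return True
    except InvalidSignature:
        audit("launch_rejected", image_sha256=digest)
        return False


if __name__ == "__main__":
    # Self-contained demo: generate a key pair and sign a fake image.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    image = b"fake VNF disk image"
    sig = key.sign(image, padding.PKCS1v15(), hashes.SHA256())
    print("valid image:", verify_vnf_image(image, sig, key.public_key()))
    print("tampered image:", verify_vnf_image(image + b"!", sig, key.public_key()))
```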
Summary: In Tables I and II, we summarize current activities towards NFV MANO. In Table I, we map the functionalities of each project or product to the functional blocks of the ETSI NFV reference architecture. We can observe that most projects or products choose to rely on existing infrastructures and cloud systems such as OpenStack for realizing the NFVI, and on some form of data modeling and storage to model and store VNFs. On the MANO part, it can be noted that almost all propose a solution for each of the three functional blocks VIM, VNFM, and NFVO. The difference, however, is in the functionality. This can be observed in Table II, which summarizes the management and orchestration functionality of each project/product based on their description. For this purpose, we define four functionality categories. The management approach classifies them based on whether they are centralized, distributed, policy-based, and automated (self-managed). The management function classifies them according to their support for five of the basic management functions -
Summary: In Table IV we present all the projects, giving their main objective, their focus with respect to NFV and related areas, and the entities leading or funding them.
All these projects are guided by the proposals coming out of the standardization described earlier, in particular ETSI, 3GPP and DMTF. It is interesting to observe that all three industrial projects surveyed (ZOOM, OPNFV and OpenMANO) are focused on MANO. This underlines the importance of MANO in NFV. MANO is a critical aspect towards ensuring the correct operation of the NFVI as well as the VNFs. Just like the decoupled functions, NFV demands a shift from network management models that are device-driven to those that are aware of the orchestration needs of networks which contain not only legacy equipment but also VNFs. The enhanced models should have improved operations, administration, maintenance and provisioning focused on the creation and lifecycle management of both physical and virtualized functions. For NFV to be successful, all probable MANO challenges should be addressed at the current initial specification, definition and design phase, rather than later when real large-scale deployments commence.
The ETSI-NFV reference architecture defines a layered approach to VNF deployments (see Figure 1).
To help ensure portable and deterministic performance across an NFV-based service deployment and operation, the infrastructure must expose the relevant NFVI attributes up through the delivery stack. Likewise, the VNF's information models, describing its resource requirements and those of the services being launched, are key to enabling the provisioning layers to make intelligent and optimal deployment decisions. This Enhanced Platform Awareness (EPA) capability in the NFVI allows the orchestration platform to intelligently deploy well-designed VNFs onto the appropriate underlying infrastructure, and it helps ensure correct allocation of the resources for an end-to-end VNF service scenario.
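As a simplified illustration of how exposed platform attributes can drive placement decisions, the sketch below filters candidate hosts against a VNF's declared requirements. The attribute names (sriov, dpdk, hugepages) and data structures are assumptions for illustration, not an ETSI information model.

```python
from dataclasses import dataclass, field


@dataclass
class Host:
    """Platform attributes the NFVI exposes upward (hypothetical set)."""
    name: str
    vcpus: int
    ram_gb: int
    features: set = field(default_factory=set)   # e.g. {"sriov", "dpdk", "hugepages"}


@dataclass
class VnfRequirements:
    """Resource needs a VNF descriptor might declare."""
    vcpus: int
    ram_gb: int
    features: set = field(default_factory=set)


def candidate_hosts(req: VnfRequirements, hosts: list) -> list:
    """Keep only hosts that satisfy the VNF's declared requirements."""
    return [
        h for h in hosts
        if h.vcpus >= req.vcpus
        and h.ram_gb >= req.ram_gb
        and req.features <= h.features
    ]


if __name__ == "__main__":
    hosts = [
        Host("node-1", vcpus=16, ram_gb=64, features={"sriov", "hugepages"}),
        Host("node-2", vcpus=8, ram_gb=32, features={"hugepages"}),
    ]
    req = VnfRequirements(vcpus=8, ram_gb=32, features={"sriov"})
    print([h.name for h in candidate_hosts(req, hosts)])   # ['node-1']
```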
(1) proposing a cross-domain, cross-technology virtualization solution allowing the creation and operation of infrastructure slices that include subsets of the network and computational physical resources, and (2) supporting dynamic end-to-end service provisioning across the network segments, offering variable QoS guarantees throughout the integrated network.
4.7 BRAS Prototype Application with QoS
In this section we focus on the vBRAS prototype application in combination with QoS. First, we show performance numbers for QoS handling in the vBRAS prototype.
We present here a pattern for the NFV architecture. Our audience includes system architects and system designers as well as telco service providers. The NFV pattern provides network functions related to a cloud reference architecture and is an important part of cloud ecosystems. An ecosystem is the expansion of a software product line architecture to include systems outside the product which interact with the product [Bos09]. Figure 1 shows a partial cloud ecosystem. The Cloud Security Reference Architecture (SRA) is the main pattern (hub) that defines the ecosystem [Fer15a]. The SRA can be derived from a Cloud RA by adding security patterns to control its identified threats. Cloud Web Application Firewalls and Security Group Firewalls provide filtering functions that can be provided as services through VNFs or on their own. The Cloud Compliant Reference Architecture applies patterns to the Cloud RA to comply with regulations.
The concept of virtualizing network functions, or Network Function Virtualization (NFV), has been gaining much attention in the telecommunications and networking industry [1]. NFV involves exploiting standard computing virtualization technology to consolidate many network equipment types onto industry-standard high-volume servers, switches and storage. These could be located in data centers, at network nodes and on end-user premises. It enables a transformation in the way operators architect and design networks to implement their networking infrastructure. In other words, NFV refers to the development of software-based implementations of hardware network functions and running them on a virtualized set of resources on carrier-grade servers to mimic the behavior of the corresponding hardware middleboxes.
Víctor López received the M.Sc. (Hons.) degree in telecommunications engineering from Universidad de Alcalá de Henares, Spain, in 2005 and the Ph.D. (Hons.) degree in computer science and telecommunications engineering from Universidad Autónoma de Madrid (UAM), Madrid, Spain, in 2009. In 2004, he joined Telefónica I+D as a Researcher, where he was involved in next-generation networks for metro, core, and access. He was involved with several European Union projects (NOBEL, MUSE, MUPBED). In 2006, he joined the High-Performance Computing and Networking Research Group (UAM) as a Researcher in the ePhoton/One+ Network of Excellence. He worked as an Assistant Professor at UAM, where he was involved in optical metro-core projects (BONE, MAINS). In 2011, he joined Telefónica I+D as a Technology Specialist. He has co-authored more than 100 publications and contributed to IETF drafts. His research interests include the integration of Internet services over IP/MPLS and optical networks and control plane technologies (PCE, SDN, GMPLS).
Handling the network through software helps CSPs cut down their CapEx on the physical network and grow their networks when required, rather than budgeting for them in advance. CSPs with large networks carry a large inventory of network devices which remain largely unused. Advances in technology make most of these devices obsolete over time, and when fewer units of one component but more of another are needed, the existing inventory becomes redundant. With changes in the software over the NFV platform, a server acting as one component can be made to operate as another, which helps in managing inventory and eliminates the need to buy new devices.
Ixia’s Life-cycle Advantage:
Testing Virtualization + Virtualized Testing
Making the right decision means taking the right approach to validation throughout the migration process. Ixia offers the industry's only life-cycle solution for ensuring success, along with the front-line experience needed to know what to do, and when.
Additionally, edge clusters also need to be part of the transport zone, because the edge components (NSX Perimeter Edge and DLR Control VM) that connect to the logical switches forward traffic to, and interact with, the physical infrastructure. The Distributed Logical Router (DLR) Control VM sits on the edge server and provides the control plane and configuration functions for the kernel-embedded DLR on the vSphere ESXi hosts. A minimum of two edge servers allows for high availability in an active/active configuration for edge devices. Scalability is achieved by giving each tenant DLR a pair of active/active DLR Control VMs and Perimeter Edge devices. Starting with the NSX 6.1 release, ECMP is supported on both the DLR and the Perimeter Edge, with up to eight active/active edges possible per tenant DLR.
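To illustrate how ECMP spreads traffic across the active/active edges mentioned above, here is a small, generic sketch of hash-based next-hop selection. It is a conceptual illustration only, not NSX code; the edge names and the 5-tuple hashing scheme are assumptions.

```python
import hashlib

# Hypothetical set of active/active edge devices (up to eight per tenant DLR).
EDGES = ["edge-1", "edge-2", "edge-3", "edge-4"]


def ecmp_next_hop(src_ip: str, dst_ip: str,
                  src_port: int, dst_port: int, proto: str) -> str:
    """Pick an edge for a flow by hashing its 5-tuple, so every packet of
    the same flow takes the same path while different flows spread across
    all available edges."""
    five_tuple = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    index = int.from_bytes(hashlib.sha256(five_tuple).digest()[:4], "big") % len(EDGES)
    return EDGES[index]


if __name__ == "__main__":
    print(ecmp_next_hop("10.0.0.5", "192.0.2.10", 34512, 443, "tcp"))
    print(ecmp_next_hop("10.0.0.6", "192.0.2.10", 51123, 443, "tcp"))
```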
From the architecture aspect, we propose a Hybrid Packet and Circuit switched (HyPaC) data center network to provide high bandwidth for data-intensive applications with low complexity.
In this project, the Mininet tool is used to enable multicasting on different topologies using the OpenDaylight controller. Before reaching their respective destinations, packets must pass through the VNFs, where they are processed by different functions such as firewalls and load balancers. These VNFs are implemented on different hosts in the network, but Mininet does not allow hosts to forward packets; therefore, for the purpose of simulation, they were implemented on different switches in the network. Delay is added to the links to model the processing times of the network functions (VNFs). The systems used in the project consist of two network functions placed on one switch or on different switches, which are then duplicated on various other switches of the network. Real-world network topologies such as NSFNET, Cost239, Arpanet and Random12 are used for simulation. The delay between the source and destination is measured for each case. The number of VNFs is proportional to the cost of the network. It is observed that as the cost increases, the delay between the source and destinations decreases up to a certain number of VNFs, after which the delay does not decrease any further.
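As a rough sketch of such a setup (not the project's actual code), the following Mininet script builds a small topology, attaches it to a remote OpenDaylight controller, and adds artificial delay on the links towards the switches that stand in for VNFs. The topology, controller address, port and delay values are illustrative assumptions.

```python
#!/usr/bin/env python
"""Illustrative Mininet topology: switches s2 and s3 stand in for VNFs,
and the extra link delay models their processing time."""

from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.cli import CLI
from mininet.log import setLogLevel


def build():
    net = Mininet(controller=None, switch=OVSSwitch, link=TCLink)
    # Remote OpenDaylight controller (address and port are assumptions).
    net.addController(RemoteController('c0', ip='127.0.0.1', port=6633))

    h1 = net.addHost('h1')
    h2 = net.addHost('h2')
    s1 = net.addSwitch('s1')
    s2 = net.addSwitch('s2')   # stands in for VNF 1 (e.g. firewall)
    s3 = net.addSwitch('s3')   # stands in for VNF 2 (e.g. load balancer)

    net.addLink(h1, s1)
    net.addLink(s1, s2, delay='5ms')   # delay models VNF 1 processing time
    net.addLink(s2, s3, delay='5ms')   # delay models VNF 2 processing time
    net.addLink(s3, h2)
    return net


if __name__ == '__main__':
    setLogLevel('info')
    net = build()
    net.start()
    net.pingAll()          # measure end-to-end delay between h1 and h2
    CLI(net)
    net.stop()
```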
Service providers' traditional sources of revenue, voice and video, are losing ground to services provided over the top (OTT) on their data channels. The infrastructure needed to handle all that data traffic must grow to meet expanding capacity requirements, with the result that infrastructure costs are growing faster than subscriber revenue. Operators who try to respond with new ways to monetize their services are realizing that their networks are not agile enough to introduce new services quickly.
With respect to resource elasticity, being able to deploy network services in virtual machines running on commodity processors instead of deploying purpose-built appliances means reducing the time-to-market for introducing new services, as well as making it easy to re-provision the resources allocated to existing services as demand dictates.
With respect to the software ecosystem, being able to leverage general-purpose building block services instead of integrating stove-piped applications from a limited set of vendors makes it easier to introduce incremental value-added functionality into the network, and as a consequence, opens the space for innovation by third-party service providers.