
OPTIMIZING VIDEO STORAGE AT THE EDGE OF THE NETWORK

Leveraging Intelligent Content Distribution Software, Off-the-Shelf Hardware, and MLC Flash to Deploy Scalable and Economical Pay-As-You-Grow VOD Networks

INTRODUCTION

Flash storage is increasingly being deployed by cable operators and telcos in their video on demand (VOD) networks to store and stream the ever-increasing volumes of video content demanded by residential subscribers. Intelligent content distribution software is essential to successfully addressing the challenges of flash deployments, and commercial off-the-shelf (COTS) server hardware allows service providers to capitalize on the frequent advances in standards-based server technologies.

Multi-level cell (MLC) flash memory enables higher-density storage at the edge of the network, and the combination of intelligent content distribution software, COTS server hardware, and MLC flash memory allows cable operators and telcos to drive down CAPEX and OPEX while economically scaling the network to support VOD requirements.

This whitepaper discusses major trends in on-demand services and reviews the trade-offs between network and storage resources. It explains the advantages of using flash for edge caching and streaming, and concludes that MLC flash is more efficient, economical, and practical than single-level cell (SLC) flash, provided that intelligent software is used in addition to existing error-correction and wear-leveling solutions. It presents the advantages of deploying the Motorola B-3 Video Server at the edge of the network along with Motorola's Adaptive Media Management techniques to optimize bandwidth, storage, and streaming for on-demand networks.

WHITE PAPER • Optimizing Video Storage

THE MOVE TOWARD FLEXIBLE CONTENT DISTRIBUTION NETWORKS FOR VOD

According to Pike & Fischer, on demand will grow to 38% of viewing by 2012 as time-shifted TV and other real-time video services convert traditional linear TV viewing into on demand streams. Streaming and ingest volumes are rising sharply as a result, and library content is expanding dramatically: for most service providers, libraries have at least tripled over the past few years.

At the same time, concurrency curves are rapidly flattening. Just a few years ago, the 80/20 rule applied—80 percent of on demand streams could be served with the top 20 percent of the active library. Based on a detailed analysis of more than 180 VOD deployments worldwide, Motorola estimates that these same 80 percent of streams now access more than 50 percent of the library. What this means is that the VOD infrastructure must be able to efficiently access more of the library than ever before in order to provide a robust user experience.
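The flattening concurrency curve can be illustrated with a simple popularity model. The sketch below is an illustration only, not Motorola's data: it ranks a hypothetical 10,000-title library by a Zipf popularity distribution with skew parameter `s` and computes what fraction of the library is needed to serve 80 percent of requests. A steep curve reproduces the old 80/20 behavior, while a flatter one requires well over half the library.

```python
def library_fraction_for_hit_rate(n_titles, s, target=0.80):
    """Fraction of an n-title library (ranked by popularity) needed to
    serve `target` share of requests under a Zipf(s) popularity model."""
    weights = [1.0 / (rank ** s) for rank in range(1, n_titles + 1)]
    total = sum(weights)
    covered, titles = 0.0, 0
    for w in weights:
        covered += w / total
        titles += 1
        if covered >= target:
            break
    return titles / n_titles

# A steep popularity curve (s = 1.0) versus a flatter one (s = 0.6);
# both parameter values are assumptions chosen for illustration.
frac_steep = library_fraction_for_hit_rate(10_000, 1.0)
frac_flat = library_fraction_for_hit_rate(10_000, 0.6)
print(f"steep curve: top {frac_steep:.0%} of titles serve 80% of streams")
print(f"flat curve:  top {frac_flat:.0%} of titles serve 80% of streams")
```

Under these assumptions the steep curve needs only a small slice of the library, while the flat curve needs more than half, which matches the shift the deployment data describes.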

THE EVOLUTION OF THE ON DEMAND NETWORK

Early VOD deployments provided viewers with access to a small content library with low concurrency rates, so service providers utilized a highly distributed architecture, with VOD servers generally deployed at hubs to provide decentralized streaming and storage. These initial architectures relied on highly distributed disk-based servers, each holding the entire library, an approach that was feasible only because libraries were relatively small.


To better address growing libraries and improve reliability, many service providers evolved their networks to utilize high capacity solid-state servers for a more centralized architecture. Libraries were aggregated in more central locations, such as headends. They could be managed and scaled more easily with less duplication of content, and reliable, solid-state video servers with decoupled streaming and storage resources enabled the more flexible delivery of real-time services.

But the massive growth in content libraries makes it financially untenable to continue to maintain the entire library at each streaming server, so many service providers are turning to hybrid models with centralized libraries and distributed streaming. Storage is centralized in a regional library, streaming resources can be distributed as required, and a subset of the library is cached with each video server.


In this model, streaming servers can either be centralized or distributed based on the constraints of the network, subscriber density, and other factors. Video service providers can reduce CAPEX for storage by minimizing the amount of content replicated at each streaming server while leveraging centralized, solid-state servers for reliability, scalability, and performance. By implementing this architecture, operators can benefit from:

• Centralized content libraries
• Mixed-density streaming
• Solid-state edge servers
• Efficient and scalable edge storage platforms
• Reduced CAPEX and OPEX

MANAGING THE TRADEOFFS BETWEEN NETWORK AND STORAGE RESOURCES

The primary challenge in this model is determining which content to place at the edge of the network and which to leave in the central library. The two resources that must be traded off against each other are the size of the edge cache and the amount of network capacity.

The larger the edge cache capacity, the more likely the requested content will be available in the streaming server, but the greater the cost of storage. The smaller the edge cache, the more network bandwidth is required to access library content to serve requests for less-popular content. Service providers are faced with the challenge of cost-effectively managing the tradeoffs between network and storage resources. As the concurrency curve flattens—and subscribers access more of the library—this challenge becomes more difficult to manage.

The objective is therefore to intelligently position content at the edge of the network to efficiently serve as many content requests as possible without burdening the backhaul network. By storing the correct content at the edge of the network at the right time, service providers can more efficiently utilize the backhaul network resources and minimize the amount of data that needs to be sent to the edge server and written to storage.
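The tradeoff can be made concrete with a back-of-the-envelope calculation. In the sketch below, the stream counts and bitrates are illustrative assumptions, not figures from this paper; every request that misses the edge cache consumes backhaul bandwidth to reach the central library.

```python
def backhaul_gbps(concurrent_streams, stream_mbps, edge_hit_rate):
    """Backhaul capacity consumed by cache misses: every request not
    served from the edge cache must be pulled from the central library."""
    miss_streams = concurrent_streams * (1.0 - edge_hit_rate)
    return miss_streams * stream_mbps / 1000.0

# 5,000 concurrent 3.75 Mbps streams at different edge hit rates
# (all numbers here are assumptions for illustration):
for hit in (0.95, 0.80, 0.50):
    print(f"hit rate {hit:.0%}: {backhaul_gbps(5000, 3.75, hit):.1f} Gbps backhaul")
```

As the hit rate falls from 95 to 50 percent, the backhaul requirement grows tenfold, which is why content placement quality matters more as the concurrency curve flattens.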

Figure 4: In this example, the use of simple content placement algorithms results in the equivalent of the entire library being sent over the network every 24 hours.


Relying on simple algorithms for determining which content to send from the central library to the edge cache results in resource demands shifting to the network. More intelligent software is required so that service providers can more effectively leverage edge storage resources and minimize demands to access centrally stored content.

LEVERAGING FLASH MEMORY AT THE EDGE

Although mechanical “spinning” drives continue to improve in storage density, their seek times and spindle speeds have not improved significantly. Solid-state drives (SSDs) with flash memory offer faster access speeds and increased reliability, and they are greener platforms that can significantly reduce energy OPEX. Like spinning disk drives, SSDs are non-volatile, so they do not lose content if they lose power; this further reduces OPEX by allowing service providers to more effectively manage remote server platforms at the edge of the network.

Selecting MLC Over SLC Flash

NAND flash is increasingly used as a storage medium for video due to its reliability and performance advantages over spinning disks. With flash memory, storing and erasing is performed by injecting and removing electrons from a floating gate. But as a cell is erased and programmed repeatedly, electrons become trapped due to atomic bonds being broken down. This leads to longer erase times and shorter program times, and eventually, the erase time will become so large that the cell will be unusable.

There are two alternatives for flash storage at the edge of the network. SLC and MLC technologies use the same fundamental cell, but they use different sensing techniques. SLC is today more commonly used for enterprise-class server applications, but it lacks the density to support increasing scalability requirements. With SLC, a single bit of data is represented in each storage element, while MLC represents two or more bits per element, improving storage capacity.

An MLC flash device typically provides about 10,000 erase/write cycles, while SLC typically supports about 100,000. In addition to this inevitable wear-out, flash memory is also subject to other “soft” failure modes that must be efficiently managed. Flash controllers therefore need sophisticated error correction mechanisms and intelligent wear-leveling software to best utilize flash memory.
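The endurance gap can be put in rough numbers. The sketch below estimates drive lifetime from the total program/erase budget; the cache size, daily write volume, and write-amplification factor are all illustrative assumptions rather than measured values.

```python
def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                         write_amplification=2.0):
    """Rough endurance estimate: total program/erase budget divided by
    effective daily writes. All inputs here are illustrative assumptions."""
    total_write_budget_gb = capacity_gb * pe_cycles
    effective_daily_gb = daily_writes_gb * write_amplification
    return total_write_budget_gb / effective_daily_gb / 365.0

# A hypothetical 512 GB edge cache fully rewritten once per day:
mlc_years = drive_lifetime_years(512, 10_000, 512)    # MLC: ~10,000 cycles
slc_years = drive_lifetime_years(512, 100_000, 512)   # SLC: ~100,000 cycles
```

Under these assumptions SLC lasts ten times longer, but halving the daily write volume, which is precisely what intelligent content placement aims to do, doubles the estimated MLC lifetime.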

For example, some portions of the drive are written once (the OS, applications) and read many times, while others are written and read continually (video caches, databases, etc.). This creates “hot spots” that wear much faster than the rest of the drive. Wear leveling addresses this by periodically swapping the hot spots with static data to smooth writes across the entirety of the SSD.
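As a sketch of the idea (a toy model, not an actual SSD controller algorithm), static wear leveling reduces to a swap rule: when the wear gap between the most-worn and least-worn blocks exceeds a threshold, exchange their contents so that future writes to the hot data land on the lightly worn block.

```python
def wear_level_step(blocks, wear, threshold=1000):
    """One static wear-leveling pass over a toy block map.
    blocks: block id -> data label; wear: block id -> erase count."""
    hot = max(wear, key=wear.get)    # most-worn block (e.g. video cache)
    cold = min(wear, key=wear.get)   # least-worn block (e.g. static OS data)
    if wear[hot] - wear[cold] >= threshold:
        # Swap contents so the hot data's future writes hit the fresh block.
        blocks[hot], blocks[cold] = blocks[cold], blocks[hot]
    return blocks

blocks = {0: "video-cache", 1: "os-image", 2: "free"}
wear = {0: 5000, 1: 10, 2: 2000}
wear_level_step(blocks, wear)   # wear gap 4990 >= 1000, so blocks 0 and 1 swap
```

Real controllers track wear per erase block and move data during garbage collection, but the principle is the same: keep static data from pinning down lightly worn blocks.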

Enterprise-class servers have typically relied on SLC because of the greater endurance. However, improvements in wear-leveling and error correction in SSDs and the use of software to more intelligently propagate content to the edge of the network extends the life of MLC flash in an on demand server, allowing service providers to dramatically increase storage densities.


While read speeds for MLC and SLC are roughly equivalent, SLC flash supports faster write speeds and has a longer lifecycle than MLC. However, since MLC can store more content in the same form factor, it offers major advantages in price and density.

MLC-based SSDs are now being used more aggressively in enterprise-class servers as wear-leveling becomes more sophisticated and write amplification is reduced via better write caching techniques. By leveraging intelligent software to improve content propagation and wear-leveling and extend the lifecycle of MLC-based SSDs, service providers can deploy denser, more cost-effective video servers at the edge of the network to reduce the burden on the backhaul network and more efficiently fulfill subscriber content requests.

Off-The-Shelf Servers Versus Proprietary Platforms

Proprietary flash devices lack the ability to leverage the economies of scale organizations expect from commercial off-the-shelf (COTS) servers. With standards-based server platforms, network operators can leverage Moore’s Law to more quickly take advantage of technology advances, without having to wait for proprietary server vendors to redesign their platforms. While innovation on a proprietary server can be measured in years, rapid technology advances are swiftly reflected in COTS servers. Proprietary servers may also lock service providers into a single platform, while COTS platforms allow network operators to leverage technology innovations and declining price curves while economically building out a distributed content infrastructure.

CREATING THE IDEAL EDGE SOLUTION

Video service providers can optimize storage and network resources by relying on proven solutions from Motorola. A flexible, reliable, and high-performance on demand network is required to succeed in today’s competitive marketplace, and Motorola has shipped over a million VOD streams worldwide and implemented more than 180 VOD deployments.

Motorola offers the most widely deployed solid-state, on demand server, and a standards-based edge video server that allows network operators to take advantage of MLC-based flash storage. Motorola also utilizes its Adaptive Media Management software algorithms to deliver increased intelligence to the network. Adaptive Media Management is used to distribute content and assign streaming resources across the network, and to share content libraries between video servers.

Motorola has unmatched experience and expertise in the design and analysis of solid-state memory subsystems for VOD. Motorola is an active participant in the JEDEC Solid State Technology Association, including JC64.8, the committee that is working to define SSD reliability standards, as well as JC42, which is where flash devices are standardized.

EVALUATING FLASH TECHNOLOGY OPTIONS

                     SLC                    MLC
PRICE/GB             Higher                 Lower
LIFESPAN             ~100,000 cycles        ~10,000 cycles
STORAGE DENSITY      1 bit per cell         2+ bits per cell
WRITE SPEED          Faster                 Slower
READ SPEED           Comparable             Comparable

Figure 5: Advantages in price per GB and density make MLC flash the preferred solution for implementing flash at the edge of the network.


A Standards-Based COTS Solution

The B-3 is a COTS platform, allowing video content providers to leverage the increased performance of standards-based server platforms. Network operators can leverage Moore’s Law to drive down CAPEX and more swiftly leverage emerging technologies. They can avoid the limitations of proprietary platforms by relying on standards-based server platforms that leverage ongoing technology innovations and production efficiencies.

Increased Intelligence for Video Distribution

Adaptive Media Management allows service providers to more effectively leverage edge storage resources to minimize demands on the network. This increased intelligence also allows network operators to better utilize edge storage capacity to extend the lifecycle of MLC storage. By intelligently managing the placement of content at the edge of the network, Adaptive Media Management reduces content “churn” as viewing patterns change over time. This minimizes the number of writes to flash memory, optimizing the MLC SSDs. Adaptive Media Management adds an additional layer of software intelligence—above the wear-leveling and error-correction done in the SSDs—to allow content providers to more effectively and efficiently propagate video content at the edge of the network. By minimizing the number of writes required, Adaptive Media Management extends the lifecycle of MLC SSDs and allows service providers to increase the ROI for MLC memory by lengthening its productive lifecycle.
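One way to see how fewer flash writes follow from smarter placement is a cache-admission rule that delays caching a title until demand is proven, so one-off requests never touch the flash. The sketch below is a toy policy for illustration only, not Motorola's actual Adaptive Media Management algorithm.

```python
from collections import Counter

class AdmitAfterKRequests:
    """Toy cache-admission policy: only write a title to flash after it
    has been requested k times, so one-off requests are streamed from
    the central library instead of churning the edge cache.
    (Illustrative only -- not Motorola's Adaptive Media Management.)"""
    def __init__(self, k=2):
        self.k = k
        self.requests = Counter()
        self.cached = set()
        self.flash_writes = 0

    def on_request(self, title):
        if title in self.cached:
            return "edge-hit"
        self.requests[title] += 1
        if self.requests[title] >= self.k:
            self.cached.add(title)   # write to flash once demand is proven
            self.flash_writes += 1
        return "library-fetch"
```

Compared with caching on first request, a policy like this trades one extra library fetch per popular title for the elimination of flash writes for every title requested only once, directly reducing the churn that wears out MLC cells.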

Figure 6: The Motorola B-3 Video Server enables cost-effective scalability at the edge of the network.

B-3 Video Servers at the Edge

The Motorola B-3 Video Server leverages industry-standard hardware and solid-state MLC flash memory to create a high-performance, highly scalable, and fault-tolerant on demand server. The B-3 complements the Motorola B-1 Video Server, the world’s most widely deployed solid-state on demand video server.

The B-3 expands Motorola’s On Demand Portfolio, creating the broadest, most flexible lineup of solid-state, on demand servers in the industry. With the Motorola B-3, Motorola has melded open, standards-based software with industry-standard hardware components to create a powerful on demand platform. The result is a high-performance, highly scalable, fault tolerant server that delivers premium support for VOD, time-shifted TV, on demand ad insertion, network-based digital video recording (nDVR), and other advanced services.

The B-3 optimizes the performance, reliability, and scalability of industry-standard hardware and efficiently scales from small streaming sites to large deployments. The B-3 comprises a cluster of modular On Demand Media Blades that can be configured as a solid-state edge server supported by a central library server, or as a standalone video server with an integrated content library. It is designed to scale simply and without interruption by adding On Demand Media Blades, providing cost-effective scalability at the edge of the network. By deploying the B-3 as an edge server, providers can realize the benefits of distributed streaming resources while maintaining a centralized content library to minimize storage costs.


Motorola developed Adaptive Media Management based on more than five years of experience with real-world VOD network requirements. It is integrated into the Motorola Content Propagation System (CPS1000), a software-based distribution platform that intelligently manages content and streams, allowing service providers to efficiently share content between multiple servers. Adaptive Media Management capabilities on the CPS1000-CM optimize:

• Streaming resources, by selecting the optimal streaming/ingest/storage resources to lower operational costs and providing load balancing between video servers.
• Storage resources, by enabling multi-tiered storage libraries and minimizing content replication and storage costs.
• Network resources, by minimizing content propagation.

Integrating with the CPS1000, the B-3 can also be deployed in conjunction with the B-1 Video Server. In this scenario, the CPS1000 acts as a central point of ingest for new content and distributes it to both B-1 and B-3 servers. The CPS1000 also selects the optimal server to stream to a particular service group or subscriber based on the positioning of content, server reachability, and server load. All servers can share a central content library, or they can be configured to maintain separate libraries as appropriate.

CONCLUSION

Motorola has a long history of understanding and maintaining solid-state storage subsystems, and Adaptive Media Management software provides the intelligence necessary for leveraging MLC flash and building reliable, long-lasting, and standards-based edge video servers. Adaptive Media Management also complements advances in error handling and correction, and sophisticated wear-leveling algorithms to extend the lifecycle of MLC-based flash memory video servers.

Pay-as-you-grow edge server solutions from Motorola allow telcos and cable operators to optimize streaming, storage, and network resources as they deliver scalable VOD services.

Figure 7: Motorola offers a proven, future-proof solution that allows network operators to optimize video storage at the edge of the network.


Service providers can leverage the B-3 Video Server and Adaptive Media Management to build flexible on demand networks that adapt to subscriber demand. The CPS1000-CM implements Adaptive Media Management across a cluster of B-3s, reducing both CAPEX and OPEX. By combining intelligent content distribution software, COTS hardware, and MLC flash memory, service providers can optimize video storage at the edge of the network and deploy scalable, economical, pay-as-you-grow VOD services.
