
How Media Flow Controller Works
A single Media Flow Controller can sustain up to 40,000 simultaneous connections for different media streams. Media Flow Controller uses a rate-based task dispatch model that makes per-connection delivery-rate management a fundamental aspect of the platform.
For video, Media Flow Controller delivers SmoothFlow™ multi-bit-rate streaming and AssuredFlow™ features that guarantee a TV-like viewing experience. Media Flow Controller consolidates all streaming protocols (HTTP, RTSP, RTMP) into a single server, reducing the number of servers required to deliver video over multiple protocols.
Note! In Release 2.0.2, Media Flow Controller supports HTTP Progressive Download (PDL), RTSP/RTP streaming, and Flash Media Server (FMS).
Media Flow Controller fetches content from origin servers or origin storage once, and serves it to many users simultaneously.
Users typically browse to a portal and select content to watch (for example, by clicking a thumbnail or a link to a video); the content is then delivered from Media Flow Controller.
Requests for media are redirected to Media Flow Controller through one of several standard methods: DNS, HTTP URL re-direct, Transparent HTTP redirect (with or without Direct Server Return), Policy Based Routing (PBR), and so forth.
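The HTTP URL re-direct method, for example, amounts to the portal answering a media request with a 302 response whose Location header points at Media Flow Controller. A minimal sketch of that portal-side step; the hostname `mfc.example.com` is a made-up placeholder, not a product default:

```python
def redirect_location(mfc_host: str, request_path: str) -> str:
    """Build the Location header value for an HTTP 302 redirect
    that sends a media request to the Media Flow Controller host."""
    return f"http://{mfc_host}{request_path}"

# The portal would respond with status 302 and this header:
location = redirect_location("mfc.example.com", "/videos/clip.mp4")
```

The client then re-issues the request directly to Media Flow Controller, which serves the object itself.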
Media Flow Controller can operate in three different proxy modes: reverse proxy (most common), transparent proxy, and mid-tier proxy.
When a request for content is received, Media Flow Controller performs basic checks, such as URL validation, and identifies the content to be served. It parses the URL, query string, and header fields to identify policies associated with the content. Media Flow Controller then calculates the assured flow rate (AFR) needed to deliver the content, and performs a resource check to verify that the content can be delivered acceptably for that session.
Once the delivery session is admitted, AssuredFlow guarantees those resources throughout the life of the session. If Media Flow Controller does not have enough resources, it rejects the request or redirects it to a different Media Flow Controller.
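The admission decision described above can be sketched as follows. The function name, its parameters, and the simple bandwidth-and-session-count model are illustrative assumptions for this example, not the product's actual resource accounting:

```python
def admit_session(afr_kbps: int, available_kbps: int,
                  active_sessions: int, max_sessions: int = 40_000):
    """Toy admission check: admit a session only if a free session slot
    and enough bandwidth exist to guarantee its assured flow rate (AFR).
    Returns (admitted, remaining_kbps)."""
    if active_sessions >= max_sessions or afr_kbps > available_kbps:
        # Reject; a deployment might instead redirect to another node.
        return False, available_kbps
    # Reserve the AFR for the lifetime of the session.
    return True, available_kbps - afr_kbps
```

In this model a rejected request leaves the available bandwidth untouched, mirroring the reject-or-redirect behavior described above.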
Media Flow Controller then checks its hierarchical caches to minimize the cost of serving the media object. If no copy exists in any cache (a “cache miss”), Media Flow Controller posts a request to the target origin server, fetches the content, and serves it to the user. Media Flow Controller then decides whether the content is cache-worthy, based on its intelligent Analytical Engine and customer-configured policies. When objects become “hot” (downloaded at a high rate), Media Flow Controller promotes them to a cache tier that supports faster delivery. Promotion proceeds upward from the lowest tier: SATA to SAS, then SSD, then RAM. This allows Media Flow Controller to scale throughput and meet increased demand.
The Analytical Engine determines the “hotness” of content based on frequency of download requests. As requests for a particular video increase, the hotness of that video increases and the Analytical Engine moves that video up in the cache hierarchy. Likewise, as requests for a video fall off, the Analytical Engine moves that video down in the cache hierarchy.
A caching structure that starts with RAM and incorporates a flexible hierarchy of cache devices ensures that objects are placed and migrated across the hierarchies based on dynamic load characteristics.
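As a rough illustration of hotness-driven tiering, the sketch below maps a request count to a cache tier. The hit-count model and the threshold values are invented for the example; the actual Analytical Engine heuristics are not documented here:

```python
# Tiers ordered from slowest (cheapest) to fastest, as in the text.
TIERS = ["SATA", "SAS", "SSD", "RAM"]

def cache_tier(hits: int, thresholds=(10, 100, 1000)) -> str:
    """Map request frequency ("hotness") to a cache tier: hotter objects
    are promoted toward RAM, cooler ones sit in lower tiers toward SATA."""
    tier = 0
    for t in thresholds:
        if hits >= t:      # each threshold crossed promotes one tier
            tier += 1
    return TIERS[tier]
```

Because the same function is re-evaluated as request counts rise and fall, an object migrates up or down the hierarchy as its popularity changes.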
Cache tiers are implemented as an extendable framework, making it easy to add new types of caching devices and origin storage. The caching system is agnostic to the delivery protocol, allowing multiple delivery protocols to share the cached content. See Figure 1.
Figure 1 Juniper Networks Media Flow Controller Operations (reverse proxy deployment)
Figure 1, above, illustrates the relationships between Media Flow Controller and the other network components involved in optimizing media delivery.
1. Requests come in from the Internet via HTTP, to (typically) an Ethernet switch or load balancer that redirects the request to Media Flow Controller.
2. Upon an initial request, Media Flow Controller obtains the content from origin, serves it, and caches a copy. Subsequent requests are served directly from the Media Flow Controller cache.
3.
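The cache-miss path in steps 1 and 2 can be sketched as a toy in-memory model; `fetch_origin` is a stand-in for the real request to the origin server:

```python
cache = {}  # in-memory stand-in for the hierarchical cache

def serve(url, fetch_origin):
    """Serve a request: on a cache miss, fetch from origin once and
    cache a copy; subsequent requests are served from the cache."""
    if url not in cache:
        cache[url] = fetch_origin(url)   # miss: go to origin
    return cache[url]                    # hit: serve directly from cache
```

Repeated requests for the same URL trigger only one origin fetch, which is the fetch-once, serve-many behavior described earlier.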
Media Flow Controller Administrator's Guide and CLI Command Reference
Copyright © 2010 Juniper Networks, Inc.