Free Book Download: Hands-On Guide To Webcasting Production

Webcasting guru Steve Mack and I wrote a webcasting production book entitled “Hands-On Guide To Webcasting” (Amazon), which we’re now giving away as a free PDF download. You might notice that the book was published in 2005, and since then webcasting has evolved into the mainstream application it is today. But many of the best practices around encoding, connectivity, and audio and video production have never changed. We felt the book could still be a valuable resource to many, so we wanted to make it available to everyone, with webcastingbook.com now redirecting to this post.

This book was one of eight books in my series, which combined have sold more than 25,000 copies, with the webcasting book being the most popular. So we’re happy to have gotten the rights back to the publication and to be able to share it with everyone. The help email included in the book still works, so those with questions can reach out to us, and we’ll try to answer any follow-up questions. You may re-purpose content from the book as you like, as long as you don’t charge for it and you credit the source and link back to webcastingbook.com. Here’s a quick breakdown of the chapters:

  • Chapter 1 is a Quick Start, which shows you just how simple webcasting can be. If you want to start webcasting immediately, start here.
  • Chapters 2 and 3 provide some background about streaming media and digital audio and video.
  • Chapters 4 and 5 are focused on the business of webcasting. These chapters discuss the legal intricacies of a webcast, along with expected costs and revenues.
  • Chapters 6 through 8 deal with webcast production practice: planning, equipment, crew requirements, connectivity, and audio and video production techniques.
  • Chapters 9 and 10 cover encoding and authoring best practices. This section also covers how to author simple metafiles and HTML pages with embedded players and how to ensure that the method you use scales properly during large events.
  • Chapter 11 is concerned with distribution. This section discusses how to plan and implement a redundant server infrastructure, and how to estimate what your infrastructure needs are.
  • Chapter 12 highlights a number of case studies, both successful and not so successful. These case studies provide you with some real-life examples of how webcasts are planned and executed, how they were justified, what went right, and possibly more important, what went wrong.

I’ll also be giving away my business book in the coming days.

The Impact Of HTTPS On Caching Deployments In Operator Networks

When Google decided in 2013 to move all of its properties and data, including YouTube, to HTTPS delivery, many began asking what impact this would have on open caching deployments inside operator networks. Some have suggested that HTTPS delivery is becoming a trend, but based on what we have heard from other content owners, and from talking to last-mile providers, I don’t expect this to become a broader industry trend in the long run.

In many cases, we can use the publicly stated plans of large streaming services like Netflix as a proxy for the outlook of the industry as a whole. In short, the decision to stream all content via HTTPS is an expensive one, and the business goals of long-form video streaming services like Netflix, Amazon, ESPN, and Hulu can be met through more efficient and far less costly streaming infrastructure and best practices. To this point, Netflix publicly stated it would not implement SSL, given its assessment that “costs over time would be in the $10’s to $100’s of millions per year” to fully encrypt all of its streaming traffic. [Source: one, two]

Indeed, we know that content providers worldwide have adopted best practices to manage content security and consumer privacy for streaming media. Through the use of DRM to protect content rights, and URL obfuscation combined with control-plane encryption to secure consumer privacy, content providers can meet their obligations to both content rights owners and consumers. These streaming media best practices also support the deployment of open caching solutions in operator networks to optimize online video for both network utilization and Quality of Experience (QoE). Going forward, content providers will continue to rely on these best practices to scale their streaming offerings worldwide, and the majority won’t move to HTTPS delivery.
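To make the “URL obfuscation” point a bit more concrete, here is a minimal sketch of one common pattern, an HMAC-signed, expiring media URL, which keeps content URLs from being shared or replayed without encrypting the media itself. This is a generic illustration under my own assumptions (the path, parameter names, and secret are hypothetical), not any specific provider’s scheme.

```typescript
import { createHmac } from "crypto";

// Minimal sketch of a signed, expiring media URL (one common form of
// "URL obfuscation"). Path, parameter names, and secret are hypothetical.
function signMediaUrl(path: string, secret: string, ttlSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  // The token binds the path and expiry time, so the URL cannot be reused
  // for other content or after it expires.
  const token = createHmac("sha256", secret)
    .update(`${path}:${expires}`)
    .digest("hex");
  return `${path}?expires=${expires}&token=${token}`;
}

// Example: a 10-minute playback URL for a hypothetical video segment.
console.log(signMediaUrl("/video/episode1/seg42.ts", "my-shared-secret", 600));
```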

There will be significant and long-term value in the deployment of open caching as a critical part of the overall open architecture for streaming video. Operators can invest in open caching platforms with confidence, knowing that their investment will continue to deliver value in the form of network cost savings and improved QoE over the long run.

In just a few instances, as seems to be the case with YouTube, some content providers may take the extreme and costly step of encrypting both control and data plane traffic for the sake of consumer privacy. Full SSL encryption is generally considered to be cost prohibitive and few, if any, other content providers can afford to implement such a model. However, even in the case of fully encrypted traffic, it’s a safe bet to expect that content providers will continue to work collaboratively with caching technology providers to support traffic optimization and open caching in last mile networks.

Limelight Launches New DDoS Solution & Research Findings About The Security Market

DDoS and other cyber attacks are clearly on the rise. According to Akamai’s recent State of the Internet report, DDoS attacks rose 90% between 2013 and 2014. And not only is the number of attacks rising, but the volume of those attacks is growing as well. Radware’s 2014-2015 Global Application and Network Security Report states that 29% of attacks are over 1Gbps in size. It’s probably safe to say that attack volumes and frequency will only continue to increase, especially as companies continue to rely on the Internet to conduct their business.

Many organizations already recognize the need for security. According to recent research by Limelight Networks, only 8% of surveyed executives indicated that they weren’t using some sort of security for the delivery of their digital content. What’s more, 76% indicated that the delivery of digital content is “extremely important” to their business.

So what are organizations doing today to mitigate potential attacks that might interfere with their ability to deliver digital content? For many, it’s on-premises equipment (CPE). Of those surveyed in Limelight’s research, 31% are handling security themselves. Others are employing a hybrid approach, combining CPE with cloud-based services. But there are a variety of problems with both approaches (pure CPE and CPE plus cloud). First, any kind of CPE has both CAPEX and OPEX requirements: you not only need to purchase the hardware (redundantly, of course), but you also need people to manage, update, upgrade, and operate it. Second, you need excess bandwidth (transit) to absorb an attack while also handling “good” traffic. Finally, combining CPE with cloud services adds significant complexity to your content delivery architecture.

What’s the alternative? CDN-based security. More than half (53%) of respondents in Limelight’s research plan to rely on their CDN provider to handle content delivery security concerns in the future. And for many customers, it makes total sense for several reasons:

  • Upstream—if an organization is already using a CDN provider to deliver its digital content, an attack can be detected and mitigated at the network edge, potentially thousands of miles from the origin, sparing the organization’s network from any fallout or impact. When combined with scrubbing, only good traffic is returned to the origin, preventing the organization’s bandwidth from being flooded with bad traffic.
  • Absorption—as a distributed network, most CDNs have thousands of servers against which they can spread out an attack, even preventing Layer 3 and Layer 4 attacks (two common DDoS vectors) from ever reaching the origin.
  • Resiliency—with those thousands of servers and terabits of egress capacity, the CDN quickly returns to normal operations in the wake of volumetric DDoS attacks. Even while under duress, the CDN can still continue to provide accelerated content delivery services.

Last week, Limelight announced its CDN-based security offering—DDoS Attack Interceptor. The solution, integrated directly with Limelight’s content delivery services, provides proactive detection and cloud-based mitigation, protecting customers against the downtime, lost business, and brand reputation damage associated with DDoS attacks. The solution is virtually transparent to customers and, at a high level, works the following way:

  • Prior to an attack, Limelight’s detection technology constantly fingerprints a customer’s traffic to learn what “good” traffic looks like. This fingerprint is sent continuously to “off-net” scrubbing centers. According to Limelight, the scrubbing centers sit in different data centers and do not share bandwidth with Limelight’s delivery POPs, so that attack traffic never shares resources with the good, or clean, traffic.
  • An attack presents itself against a target protected by Limelight.
  • The Limelight CDN begins to absorb most of the attack while, at the same time, proactive monitoring detects the DDoS attack and notification alarms are raised in the network operations center.
  • The customer is notified that they are under attack. If the attack is small enough and the customer has enough bandwidth to handle both good and bad traffic, they can opt to simply let the CDN do what it does best. But if they don’t want to risk the attack volume increasing, or if they don’t have the resources to handle it, they can opt to have the traffic scrubbed.
  • When scrubbing is enacted, traffic is rerouted to the off-net scrubbing centers.
  • Because the scrubbing centers already have a very detailed fingerprint of good traffic, they can immediately begin aggressively mitigating the attack without manual configuration and without a lengthy “learning” period. The scrubbing centers return the clean traffic directly to Limelight’s CDN for delivery as usual, using dedicated network interconnects for increased performance.

Limelight’s detection system constantly monitors for malicious traffic. However, since this monitoring does not happen in-line, Limelight claims it has no performance impact on a customer’s traffic. The detection covers a broad range of DDoS attacks, both infrastructure and application layer. According to Limelight, the solution can prevent certain zero-day attacks using “behavior-based” techniques that compare measured baselines of both volume and traffic patterns to more intelligently differentiate good traffic from bad.
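Limelight hasn’t published the internals of these behavior-based techniques, but the general idea of comparing live traffic against a learned baseline of volume and patterns can be sketched in a few lines. The metrics, fingerprint structure, and threshold below are my own illustrative assumptions, not Limelight’s design.

```typescript
// Rough sketch of baseline ("fingerprint") comparison for DDoS detection.
// All fields and thresholds are illustrative assumptions, not Limelight's design.
interface TrafficSample {
  requestsPerSec: number;
  uniqueClientIPs: number;
  avgRequestBytes: number;
}

interface Fingerprint {
  mean: TrafficSample;
  stdDev: TrafficSample;
}

// Flag a sample as anomalous if any metric deviates more than `k` standard
// deviations from the learned baseline.
function isAnomalous(sample: TrafficSample, baseline: Fingerprint, k = 4): boolean {
  const metrics: (keyof TrafficSample)[] = ["requestsPerSec", "uniqueClientIPs", "avgRequestBytes"];
  return metrics.some(
    (m) => Math.abs(sample[m] - baseline.mean[m]) > k * baseline.stdDev[m]
  );
}

// Example: a sudden spike in request rate from relatively few client IPs.
const baseline: Fingerprint = {
  mean: { requestsPerSec: 2000, uniqueClientIPs: 1500, avgRequestBytes: 800 },
  stdDev: { requestsPerSec: 300, uniqueClientIPs: 200, avgRequestBytes: 150 },
};
console.log(isAnomalous({ requestsPerSec: 25000, uniqueClientIPs: 900, avgRequestBytes: 400 }, baseline)); // true
```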

It’s clear from the research that DDoS attacks will not only continue to rise (both in scope and scale), but that executives are worried about how to mitigate them. When the result can be lost revenue, everyone starts to pay attention. And because CDN-based, cloud security provides a number of benefits over CPE or hybrid architectures, it’s no wonder that the major CDNs (Level 3 and EdgeCast by Verizon were the most recent before Limelight) have all added the service to their portfolios. It’s good to see Limelight moving up the stack with its product portfolio and offering more value-added services, like security, to help diversify its revenue away from pure storage and bit delivery. As DDoS and other attacks continue to grow in size and sophistication, it will be interesting to see how these services evolve in an otherwise crowded security market with many different approaches and solutions to the DDoS problem.

How To Quantify The Value Of Your CDN Services

As mobile applications become more sophisticated, many congestion points have been identified, giving rise to a number of specialized solutions to resolve them. The primary solution for working around Internet congestion and slow-downs has long been the edge delivery and caching provided by content delivery networks. But those tactics have become commoditized, with asset delivery performance now table stakes delivered as a service. As a result, vendors have been working hard to offer true performance solutions beyond storage, large software downloads, and streaming video delivery services.

Over the past few years, the CDN market has spawned a number of specialty solutions to overcome specific challenges in the form of video streaming, web security, and dynamic applications. It has been well documented that web and mobile application performance is critical for e-commerce companies to maximize transaction conversions. In today’s e-commerce landscape, where even milliseconds of latency can impact business performance, high CDN performance isn’t a nice-to-have; it’s a must-have. But the tradeoffs have led to a polarizing effect between business units and the IT teams that support them. Modern marketers and e-commerce practitioners focus on engaging users with third-party content in the form of social media integration, localized reviews, trust icons and more, all of which need to perform flawlessly across a range of devices and form factors to keep users focused, engaged and loyal. The legacy one-size-fits-all attitude toward CDNs has become outdated as businesses seek out best-of-breed solutions to keep them competitive and to drive top-line growth. This is one of the main reasons why many customers have a multi-CDN strategy, where they might use one CDN specifically for video streaming, but another for mobile content acceleration.

One of the primary challenges in all of this is arriving at measurable proof of the business impact. Historically, it has been extremely hard for e-retailers to quickly analyze the effectiveness of the solutions they’ve put in place to help drive web performance. Enterprise IT departments often find it difficult, if not impossible, to prove the benefit their efforts have on customer satisfaction and top-line growth because analytics tools have historically been siloed by business specialty – IT has Application Performance Management (APM) tools and the business units have business analytics solutions. Whereas marketing and e-commerce teams have a variety of A/B testing solutions at their disposal, the IT team often struggles to show measurable business improvements.

Last week, adaptive CDN vendor Yottaa unveiled a new A/B testing methodology called ValidateIT that enables enterprises to easily and instantly demonstrate business value from their CDNs and other web performance optimization investments. Yottaa developed the methodology in 2013 and has been using it successfully with many of its customers since then. Through ValidateIT, enterprises can predictably and accurately split traffic in real time, allowing them to verify the immediate and long-term business benefits of optimizing their web applications. As the first vendor in this market that I know of to offer this type of methodology, Yottaa is enabling enterprises to make an informed and confident buying decision by demonstrating the business value of web application optimization.

But Yottaa and its customers are not the only ones to have “validated” ValidateIT; the methodology has also earned a certification from Iterate Studio, a company that specializes in bringing business-changing technologies to large enterprises. Iterate Studio curates, validates and combines differentiated technologies that have repeatedly delivered positive and verifiable business impact across a broad set of metrics. Working together with customers, Yottaa applies the ValidateIT methodology to split, instrument and measure web traffic using trusted third-party business analytics tools. The important aspects of the methodology include:

  • Control over the flow of visitor traffic. Yottaa typically splits traffic 50/50 in proof-of-concept scenarios to highlight the benefit its technology has over an existing solution. In head-to-head “bake-off” scenarios, Yottaa can split the traffic into thirds or more, depending on the competition. (A generic traffic-split sketch follows this list.)
  • Conducting a live, simultaneous A/B test. Online businesses frequently say that it’s impossible to accurately compare two different time periods to one another because of the variables that would impact the results. Campaigns, seasonality, breaking news and events, and any number of competing factors can influence visitor behavior. So Yottaa ensures that ValidateIT highlights the business-impacting results of its solution in real time by randomly assigning visitors to the incumbent solution or the Yottaa-optimized solution (and possibly other competing vendors’ solutions) and then measuring the results. This eliminates objections with regard to performance versus content or campaigns, as arguably all things are equal except the web performance optimization techniques applied to the visitor sessions.
  • Leverage in-place third-party analytics solutions. IT vendors have attempted to bring proprietary business analytics to market, but Yottaa felt it was important to lean on the business analytics solutions companies already use to ensure a credible test and validation. Plus, by using existing business analytics, marketing, e-commerce and IT leaders can leverage any existing custom metrics, analysis methodologies, and reports to drill-down into the details.
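Yottaa hasn’t published how ValidateIT assigns visitors, but a common way to implement a stable, configurable split is to hash a visitor identifier into a bucket and map buckets to variants by weight. The sketch below shows that generic pattern, with made-up variant names, and is not Yottaa’s implementation.

```typescript
import { createHash } from "crypto";

// Generic, deterministic traffic-split sketch (not Yottaa's implementation).
// Each visitor ID is hashed into [0, 100) and assigned a variant by weight,
// so the same visitor always lands in the same variant.
type Variant = { name: string; weight: number }; // weights should sum to 100

function assignVariant(visitorId: string, variants: Variant[]): string {
  const digest = createHash("sha256").update(visitorId).digest();
  const bucket = digest.readUInt32BE(0) % 100; // stable bucket per visitor
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.weight;
    if (bucket < cumulative) return v.name;
  }
  return variants[variants.length - 1].name;
}

// A 50/50 proof-of-concept split, as described above; splitting into thirds
// for a "bake-off" just means three variants with weights that sum to 100.
const split: Variant[] = [
  { name: "incumbent-cdn", weight: 50 },
  { name: "optimized", weight: 50 },
];
console.log(assignVariant("visitor-123", split));
```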

The most compelling use case for Yottaa’s solution is that you don’t know whether your one-size-fits-all CDN is right for you, or whether you need a specialty CDN, until you actually measure, evaluate and analyze the results. That’s the reason for, and the beauty of, ValidateIT, and why the company offers it at no cost. It’s free as part of their solution validation process because they want 100% of companies to better understand which solutions in the market truly work versus ones that don’t. It’s a nice tool to arm enterprise buyers with, showing them the real business benefit rather than having to rely on a blue-chip logo.

The Code Problem for Web Applications & How Instart Logic Is Using Machine Learning To Fix It

The adoption of mobile devices with powerful CPUs and full HTML5 browsers is enabling a new wave of increasingly sophisticated applications. Responsive/adaptive sites and single-page applications are two obvious examples. But the increased sophistication is also creating new performance bottlenecks along the application delivery path. While the industry can continue to eke out incremental performance gains from network-level optimizations, the innovation focus has shifted to systems that are application-aware (like FEO) and now execution-aware. It’s the new frontier for accelerating application performance.

To deliver web experiences that meet these new world demands, developers are increasingly using JavaScript in web design. In fact, according to httparchive, the amount of JavaScript used by the top 100 websites has almost tripled in the last three years and the size of web pages has grown 15 percent since 2014. The popularity of easily available frameworks and libraries like angular.js, backbone.js and jQuery make development easier and time-to-market faster.

Unfortunately, there is a tradeoff for these rich web experiences. As web pages become bloated with JavaScript, there are substantial delays in application delivery performance — particularly on mobile devices, which have less powerful CPUs and smaller memory and cache sizes. It’s not uncommon for end users to wait several seconds, staring at a blank screen, while the browser downloads and parses all this code.

A big part of the bottleneck causing these performance delays lies within the delivery of JavaScript code. When a site loads and a browser request is made, traditional web delivery approaches respond by sending all of the JavaScript code, without understanding how the end user’s browser will use it. In fact, more than half of the code is often never even used. Developers have tried to mitigate this challenge by turning to minification, an approach that removes unnecessary data such as whitespace and comments, but it provides only minimal benefits to web performance.

Now imagine instead if the browser could intelligently determine what JavaScript code is actually used and download that code on demand. While the performance benefit could be substantial, demand-loading code without breaking the application is a very challenging problem. This is exactly what a new technology called SmartSequence with JavaScript Streaming from Instart Logic does. It’s the first innovation I have seen that applies machine learning to understand how browsers use JavaScript code in order to optimize its delivery and improve performance.

By using real-time learning coupled with a cloud-client architecture, their technology can detect what JavaScript code is commonly used and deliver only the necessary portions. The company says this approach reduces the download size of a typical web application by 30-40%, resulting in dramatic performance improvements for end users. With this new method, developers can now accelerate the delivery of web applications even as the use of JavaScript continues to rise.
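Instart Logic does this transparently at the delivery layer, without application changes, but the underlying idea of demand-loading JavaScript can be approximated by hand in application code with dynamic import(). The sketch below is only an illustration of that concept; the module path, class, and button ID are hypothetical, and this is not how SmartSequence works internally.

```typescript
// Hand-rolled approximation of demand loading: defer a heavy, rarely used
// module until the user actually needs it. (Illustrative only; the module
// path, ImageEditor class, and button ID are hypothetical.)
async function openImageEditor(imageId: string): Promise<void> {
  // The editor bundle is fetched and parsed only on first use,
  // keeping it out of the initial page load.
  const { ImageEditor } = await import("./heavy/image-editor");
  new ImageEditor(imageId).show();
}

document.getElementById("edit-button")?.addEventListener("click", () => {
  void openImageEditor("img-42");
});
```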

For web and application developers, this means the freedom to push the boundaries of web development without sacrificing performance, opening up opportunities for revolutionizing web and mobile applications. The way Instart Logic is looking to solve this problem is interesting, as I haven’t seen this approach in the market before, so it’s definitely one to watch as it evolves. For more details on the technology, check out the company’s blog post entitled “Don’t Buy the JavaScript You Don’t Use.”

Stream Optimization Vendor Beamr Details ROI, Breaks Down Cost

In a recent blog post, I detailed stream optimization solutions and concluded that the lack of market education could kill this segment before it even has a chance to grow. In that post I raised some key questions about the economics of optimizing streams, including: How much cost does optimization add to the encoding workflow? How much is the delivery cost reduced? How much traffic or content do you need in order to get an ROI? At the time, none of the stream optimization vendors had this information available on their websites.

Beamr, one of the stream optimization vendors I mentioned, recently stepped up to the challenge and sent me a detailed ROI calculation for their solution, which they also posted on their website. Finally we have some public numbers that show the economics of their stream optimization solution, and the type of companies that could benefit from it. Beamr’s product works by iteratively re-encoding each video frame at different compression levels and selecting the “optimal” level: the one that yields the minimum frame size in bytes without introducing any additional artifacts.
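The original post included a diagram of this per-frame processing. In code terms, the loop Beamr describes looks roughly like the sketch below. The encoder and perceptual quality check are passed in as stand-ins, since they represent Beamr’s proprietary components; these interfaces are my own assumptions, not Beamr’s API.

```typescript
// Rough sketch of the per-frame loop Beamr describes: try several compression
// levels and keep the smallest result that introduces no visible artifacts.
// EncodeFn and QualityCheckFn are hypothetical stand-ins for the vendor's
// encoder hooks and proprietary perceptual quality measure.
type EncodeFn = (frame: Uint8Array, compressionLevel: number) => Uint8Array;
type QualityCheckFn = (original: Uint8Array, candidate: Uint8Array) => boolean;

function optimizeFrame(
  frame: Uint8Array,
  levels: number[], // ordered from least to most aggressive
  encode: EncodeFn,
  perceptuallyIdentical: QualityCheckFn
): Uint8Array {
  let best = encode(frame, levels[0]); // least aggressive level as the fallback
  for (const level of levels.slice(1)) {
    const candidate = encode(frame, level);
    // Keep the smaller encoding only if it introduces no visible artifacts.
    if (candidate.length < best.length && perceptuallyIdentical(frame, candidate)) {
      best = candidate;
    }
  }
  return best;
}
```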

As you can imagine, this process is resource intensive, since each frame is encoded several times before moving on to the next. Indeed, the processing time for one hour of video on a single core ranges from 3 hours for 360p content to 14 hours for 1080p content. However, since Beamr can distribute its processing across multiple cores in parallel, on a powerful enough machine optimizing an hour of 1080p content can actually be completed in one hour. Beamr’s ROI calculation estimates that the cost of processing one hour of video (which includes 6 ABR layers at resolutions ranging from 360p to 1080p) is around $15. This figure includes about $4 of CPU cost and $11 for the Beamr software license.

As for delivery cost, Beamr’s estimate, based on Amazon’s CDN pricing for large customers that deliver petabytes of data each month, is around 3 cents for each hour of video delivered. Since Beamr chops off around 35% of the stream bitrate on average, the savings on each hour of video delivered are approximately one cent. Comparing this number with the processing cost, it is easy to see that for videos viewed roughly 1,500 times, the cost of processing is offset by the savings in CDN cost, and above that the ROI is positive.
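Using the figures Beamr cites, the break-even point falls out of a one-line calculation; the sketch below simply restates that arithmetic (the slight difference from the ~1,500 figure comes from rounding the per-view savings to one cent).

```typescript
// Break-even views per hour of content, using the figures cited above.
const processingCostPerHour = 15.0;   // ~$4 CPU + ~$11 Beamr license
const deliveryCostPerViewHour = 0.03; // ~3 cents per hour of video delivered
const bitrateSavings = 0.35;          // ~35% average bitrate reduction

const savingsPerView = deliveryCostPerViewHour * bitrateSavings; // ~$0.0105
const breakEvenViews = processingCostPerHour / savingsPerView;   // ~1,429 views

console.log(Math.round(breakEvenViews)); // roughly in line with the ~1,500 cited
```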

After reviewing these numbers, two things became clear to me. First, under the right circumstances, there can be a positive ROI for deploying Beamr’s stream optimization solution. Second, the benefit applies only to OTT content owners with a relatively large number of video views each month. If you have 1M views for an hour-long episode, you can save $10,000 in delivery costs by optimizing that episode with Beamr. If your clips are viewed only 100 times on average, you won’t recover the optimization costs through delivery savings. However, since stream optimization can also benefit the user experience, by reducing rebuffering events and enabling a higher-quality ABR layer to be received by more users, it might make sense to apply it even in smaller-scale services. But again, these claims have not yet been proven by Beamr or any of the other stream optimization vendors.

Thanks to Beamr for breaking down some of the costs and helping to educate the market.

The Internet Has Always Been Open, It’s The Platforms & Devices That Are Closed

As expected, today’s vote on the FCC’s proposed net neutrality rules passed by a 3-2 margin. While this is a big step in a process that has been going on for thirteen years now, we’re still a long way from this debate being over. Since a draft of the proposal wasn’t shared with the public, we still don’t know exactly what the rules state or how to interpret them. We’ve also learned that FCC Commissioner Clyburn did get FCC Chairman Wheeler to make “significant changes” to the newly passed rules, but we won’t know what those changes are until we see the actual language.

The problem is that even when we do get to read the new rules, many of the words used are going to be vague. Terms like “fair” and “unreasonable” have no defined meaning. What is the baseline that will be used to determine what is fair and what isn’t? Apparently that is up to the FCC, and from what I am told, the new rules provide no definitions or methodology for how those words will be put into practice. Vague, high-level language isn’t what we need more of, yet that’s what we get when rules are written by politicians. It also doesn’t help that many in the media still can’t get the basic facts right, which only adds more confusion to the topic. My RSS feed is already full of more than a hundred net neutrality posts, and some, like this one from Engadget, get the very basics wrong.

The post says the new rules will “ban things like paid prioritization, a tactic some ISPs used to get additional fees from bandwidth-heavy companies like Netflix,” except that Netflix is getting NO prioritization of any kind. Netflix has a paid interconnect deal with Comcast and other ISPs, but a paid interconnect deal is not the same thing as paid prioritization. All you have to do is read the joint press release by Comcast and Netflix to know this, as it clearly states that “Netflix receives no preferential network treatment.” Engadget is not the only media site to get this wrong. These are the basics; if people can’t get those right, what chance do we have of an educated discussion about net neutrality rules when people don’t even know what they apply to?

For all the talk of how this helps consumers with regard to the blocking or throttling of content by wireline services, it has no impact. We don’t have a single example of that being done by any wireline ISP, so there isn’t a problem that needs fixing. To me, the biggest piece of language in the new rules is that the FCC is using Title II classification not just for ISPs, but also for edge providers. This gives the FCC the right to examine ISP practices downstream to broadband consumers as well as upstream to edge providers. But will the oversight and regulations for upstream and downstream be the same? Probably not; one would expect they could very well be different.

I find it funny that the term “open Internet” keeps being used. Has the Internet ever been “closed” to anyone? I’ve never heard of a consumer complaining that they went to a website or content service and were denied access on their device due to their wireline ISP. Access is usually denied on the device because the platform or device has a closed ecosystem, which the net neutrality rules don’t address. So for those who have been saying that today’s vote “opens up the Internet to be a level playing field,” think again. The Internet itself has always been open; the apps and platforms we use, for the most part, are closed.