Apple’s HTTP Live Streaming: A Nightmare?

This blog post is hopefully the first of two on the same topic: getting HTTP Live Streaming configured on a Linux server running CentOS 5.4. If you are curious about what I have been up to for over a week, or if you are looking for a way to stream your audio or video to the iPhone, then this is for you.

A week ago, I had high hopes that I would be able to write up how I got Apple’s HTTP Live Streaming protocol to work on our CentOS server at work (TechMission). Recently, I was asked to implement this new technology so that we could “stream” MP3s from our website to iPhones over the 3G network. I figured it would be a one- to two-day job. Little did I know….

It has now been over a week since I was asked to make this project my highest priority. Besides working on it, the only things I have done at work in the last week are database backups, our weekly rsync of the home directories, and an upgrade of our Moodle site.

If you are operating an Apple server, then from what I can tell, your job is going to be a whole lot easier. For the rest of us… well, it’s trial, error and research. Let’s get started.

  1. Research
    The first thing that I did was research what exactly HTTP Live Streaming is. Despite the name, it is not a new low-level network protocol; it is a draft specification that runs over ordinary HTTP. The Internet Draft submitted to the Internet Engineering Task Force (IETF) states that it “describes a protocol for transmitting unbounded streams of multimedia data. It specifies the data format of the files and the actions to be taken by the server (sender) and the clients (receivers) of the streams.”

    More specifically, according to Apple, “HTTP Live Streaming allows you to send live or prerecorded audio and video to iPhone or other devices, such as iPod touch or desktop computers, using an ordinary Web server.”

    The concept is simple: media files (either from a live video camera or equivalent feed, or pre-recorded content such as MP3 files) are encoded by the server into an acceptable format, and the resulting file is broken up into small chunks that are then served to the client over plain HTTP. So if you look closely, you will realize that this is not a true stream, nor does it have to be live. Why it is called HTTP Live Streaming, therefore, is beyond me. The name is a little misleading, in my opinion.
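    To make the chunking concrete, here is what a minimal index (.m3u8) file for a short pre-recorded clip might look like. The client fetches this playlist over plain HTTP and then downloads each listed segment in turn. The file names and the 10-second duration are made up for illustration; the tags themselves come from the draft spec:

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
fileSequence0.ts
#EXTINF:10,
fileSequence1.ts
#EXTINF:10,
fileSequence2.ts
#EXT-X-ENDLIST
```

    For a live feed, the server keeps appending new segments to the playlist and leaves off the #EXT-X-ENDLIST tag.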

  2. Server Implementation
    So how does one actually implement this? Good question. I’m still trying to solve the issue on our server! However, here is what I have figured out and tried. Hopefully this will be useful to someone.

    There are two parts to a server’s configuration for HTTP Live Streaming:

    1. Media Encoder
    2. File Segmenter

    Media Encoder
    The media encoder takes the signal from a live broadcast feed, or a file in some other incompatible format, and turns it into a format that the iPhone (or iPod touch, or even QuickTime for that matter) can understand.

    Before reaching the segmenter, video must be in an MPEG-2 transport stream, and audio-only content can be either in an MPEG-2 transport stream or in AAC (with appropriate headers) or MP3 format. Obviously, if one is trying to stream pre-recorded audio content, the media encoder is NOT always a necessary step, assuming the audio was already saved in one of those formats.
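    Since the segmenter only accepts those formats, it can be handy to check up front whether a file even needs the encoding step. Here is a rough Python sketch of such a check. The function name and the skip-or-encode decision are my own; the magic numbers (the 0x47 TS sync byte, the ID3 tag, and the MP3/ADTS frame-sync bits) come from the respective format specifications:

```python
# Heuristic sniff: does this file need to go through the media encoder,
# or is it already in a format the segmenter accepts?
# This is a sketch, not a validator -- it only looks at the first few bytes.

def needs_encoding(first_bytes):
    """Return False if the data already looks like MPEG-2 TS, MP3, or ADTS AAC."""
    if first_bytes[:1] == b"\x47":        # MPEG-2 transport stream sync byte
        return False
    if first_bytes[:3] == b"ID3":         # MP3 (or AAC) led by an ID3v2 tag
        return False
    if len(first_bytes) >= 2 and first_bytes[0] == 0xFF \
            and (first_bytes[1] & 0xE0) == 0xE0:
        return False                      # MP3 or ADTS AAC frame sync bits
    return True                           # anything else goes to the encoder
```

    In practice you would read the first few bytes of the file, e.g. open(path, "rb").read(4), and hand them to this function.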

    File Segmenter
    The second server-side component is the file segmenter, which breaks the encoded file into the chunks that are sent to the client. Unless you have a lot of server space, are not streaming anything live, and do not have many files to stream, I highly recommend performing this step on the fly, when the client requests it. The other option is to segment the files ahead of time and save the segments in a permanent location on the server.
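    To illustrate what a segmenter actually does, here is a toy Python sketch that splits a transport stream into chunks on 188-byte TS packet boundaries and writes the index playlist that the client polls. Treat this purely as an illustration: a real segmenter (like Chase Douglas’) cuts on timing boundaries and records the true duration of each segment, whereas the chunk size and the 10-second duration below are placeholder assumptions of mine:

```python
# Toy file segmenter: split an MPEG-2 transport stream into fixed-size
# chunks and write an M3U8 index pointing at them. A real segmenter cuts
# on timing/keyframe boundaries; here we only respect the 188-byte TS
# packet boundary, and the durations in the playlist are assumed values.
import os

TS_PACKET = 188  # an MPEG-2 transport stream is a series of 188-byte packets

def segment(src_path, out_dir, packets_per_chunk=5000, target_duration=10):
    os.makedirs(out_dir, exist_ok=True)
    chunk_size = TS_PACKET * packets_per_chunk  # always cut between packets
    names = []
    with open(src_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            name = "segment-%03d.ts" % index
            with open(os.path.join(out_dir, name), "wb") as seg:
                seg.write(chunk)
            names.append(name)
            index += 1
    # Write the playlist the client downloads to discover the segments.
    lines = ["#EXTM3U", "#EXT-X-TARGETDURATION:%d" % target_duration]
    for name in names:
        lines.append("#EXTINF:%d," % target_duration)
        lines.append(name)
    lines.append("#EXT-X-ENDLIST")  # a live stream would omit this tag
    with open(os.path.join(out_dir, "index.m3u8"), "w") as playlist:
        playlist.write("\n".join(lines) + "\n")
    return names
```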

    This is the most critical part of HTTP Live Streaming; without it, nothing works. Before implementing it in a production environment, I recommend trying it in a test environment first (if you don’t have a dedicated testing server, you could install your server’s operating system into a virtual machine).

    My first attempt was to use FFMpeg with a segmenter written in C by Chase Douglas. As one who is not very familiar with C and also not familiar with Mac OS X, it took me a while to realize that the segmenter was written to run on Mac servers, and not on Linux. I thought about trying to port the code, but decided to try some other things first.

    In my further research, I found that somebody (Carson McDonald) HAD ported Chase Douglas’ segmenter to Linux. The only catch is that it comes with a Ruby wrapper script (and I have never worked with Ruby, gems, Ruby on Rails, or any Ruby server before). Nevertheless, I decided to give it a try.

    After two weeks on the project, I have still been unable to get Ruby and RubyGems working properly. I first installed Ruby and RubyGems through yum. When that didn’t really work, I uninstalled it all through yum and then installed the latest versions of both manually (putting the source, of course, in /usr/local/src).

    In his instructions, Carson says that the gems net-scp and right_aws are required to work with the Ruby script he wrote. When I run gem install net-scp or gem install right_aws, I keep getting the following error message:

    ERROR: could not find net-scp locally or in a repository

    I am working on solving this issue, and may decide to try one of the following two options:
    a) Porting the original C program written by Chase Douglas into a stand-alone program that works on Linux
    b) Porting Carson McDonald’s Ruby scripts to shell scripts, which could then be called from PHP.

    Right now, my boss has put me onto a new project, and I’m not quite sure when I’ll come back to this project. But once I find a solution, I will post about it here.

Hopefully it has been useful for some of you to see what I have done so far. And if you are reading this and think you have a solution for me, feel free to respond.

More Resources
Still feeling a little confused? Check out these great resources, which I have found very useful (in addition to the ones I linked to earlier):

By far the most popular blog and set of instructions for getting this to work is Carson McDonald’s articles on iPhone HTTP Streaming with FFMpeg and an Open Source Segmenter and HTTP Live Video Stream Segmenter and Distributor.

The best documentation I could find from Apple itself is their HTTP Live Streaming Overview.


1 thought on “Apple’s HTTP Live Streaming: A Nightmare?”

  1. Atul Davda

    Hi, have you had any progress?

    I’ve managed to install both net-scp and right_aws but am unable to compile due to unresolved references. I’m on Debian Lenny & should have the latest libs and ffmpeg. Good luck.
