The Charm of the Cable Info Channel
Lo-fi cable nostalgia, streamed live with Pygame and FFmpeg
Introduction
As someone who's spent a good chunk of time helping streamers spiff up their online presence with everything from snazzy intros and stingers to custom channel emotes and animations, I've found that a solid understanding of the underlying technology is key. It's one thing to design a cool graphic, but quite another to ensure it integrates seamlessly with the myriad platforms and broadcasting tools out there. In fact, my quest for robust live chromakeying capabilities even led me to purchase a vMix license a few years back – a testament to the diverse technical rabbit holes I've explored. For the longest time, I'd generally assumed that if you wanted to go live on the big social media platforms like Twitch or YouTube, your options were largely confined to well-known solutions like OBS Studio, StreamYard, the aforementioned vMix, or perhaps just direct camera feeds.
However, if you've followed some of my recent ramblings, you'll know I've been dabbling quite a bit with Python scripts, experimenting with real-time video manipulation, be it chromakeying with OpenCV or dynamically drawing visuals with Pygame. This tinkering naturally led me to a rather intriguing question: could one, in theory, "roll their own" fully customized live streaming studio, complete with all the personalized bells and whistles imaginable?
As it turns out, much of this is surprisingly achievable, though I certainly didn't test every single aspect. This is a project that may well span several posts. For today, though, we're taking a delightful trip down memory lane, paying homage to the charmingly lo-fi aesthetic of 1980s cable "character generator" channels. Our mission? To see if we can quickly whip up something similar using Python/Pygame and stream it live as an "info" channel to YouTube. Specifically, I'll be using the XML "feed" for the sonnik chronicles as the content source for the channel.
What are “Character Generated” Channels?
In the 1980s, character generator channels became a staple of cable television, providing viewers with text-based information in a simple yet effective format. While these channels first emerged in the 1970s, it wasn't until the following decade that advancements in technology made character generation equipment more affordable, allowing smaller cable providers to adopt it widely. At the time, cable television was highly localized, with providers often serving specific communities or even individual apartment complexes. This decentralization created an environment where cost-efficient solutions were essential, and the arrival of budget-friendly character generator systems enabled even the smallest operators to participate in this growing trend.
Character generator (CG) channels often served as a platform for community engagement, sponsored either by local municipal governments or the cable systems themselves. These channels displayed a variety of information, such as announcements for community events, birthday messages, classified ads, and even light-hearted jokes. The audio accompaniment typically came from local radio stations, featuring either popular music or easy listening tracks, based on the provider's preferences. Some systems went a step further by offering a character-generated channel lineup, paving the way for innovations like the PREVUE/TV Guide channel. The Weather Channel's "Local on the 8s" segment used a proprietary CG setup, WeatherStar.
During the overnight hours, when local channels would traditionally end their broadcast day and shut off the transmitter until morning, automated information channels would take over. News services like Reuters offered feeds, alongside the teletype wires running into local stations' news offices, ensuring that viewers had access to the latest information even during off-hours.
Today, we still have many automated data channels, though the graphics and video have become significantly better. The use of recently taped studio footage has also enhanced the quality of these channels. Examples of modern automated data channels include LocalNow and WeatherNation, which continue the tradition of providing timely and relevant information to viewers.
If you’d like to see an example of how these actually played out in the 1980s, check out this capture from YouTube user robatsea2009; it’s a grab from Seattle’s TCI Cable.
Our Process Using Pygame/FFmpeg
In this discussion, we'll delve into the process of taking a Pygame surface, where text is rendered, and sending it via FFmpeg to an RTMP (Real-Time Messaging Protocol) server. FFmpeg serves as the workhorse, handling all the intricate video processing tasks. Python and Pygame are utilized to set up the logistics and style of the display, ensuring that the text is presented effectively. Additionally, a WAV file is piped as background music using FFmpeg, adding an auditory element to the stream.
From my research, it appears feasible to switch between automated content and pre-recorded video, although this was not tested in this instance. This capability seems more reliable on Linux or Mac systems due to the lack of mkfifo support in Windows, even within the Windows Subsystem for Linux. In some cases, the recommended approach is to use features of OBS Studio to supplement what Windows cannot achieve natively. I may explore potential workarounds in a future post to address these limitations.
It appears that when using Python, FFmpeg is often the go-to tool for handling the heavy lifting of audio and video processing. However, if you plan to incorporate a live camera feed along with chromakeying and graphic overlays, the process flow becomes significantly more complex. These varying process flows are beyond the scope of this article, but they may be explored in a future post. Additionally, the operating system used to generate the content can further complicate things, as different systems have their own requirements and limitations.
For my tests, I’m using my YouTube channel, which I’ve already set up for live streaming. (Note: I’m unclear on YouTube’s current approval process for live streaming; there may be a 24-48 hour waiting period.) Providers such as YouTube or Twitch will give you an RTMP endpoint address and a stream key, which our Python script stores in a YAML file.
The YAML File for Configuration
feeds:
  - name: The sonnik chronicles
    url: https://sonnik.substack.com/feed

screen:
  output_resolution: [1920, 1080]
  preview_resolution: [854, 480]

streaming:
  platform: youtube
  stream_key: YOUR_STREAM_KEY_HERE  # placeholder
  bitrate: 4500                     # in kbps
  framerate: 30                     # in fps
Here we configure my Substack’s XML feed. If you’re an author on Substack, you’ll likely find your feed at a similar address. This is also where you place your YouTube stream key. (We’re using YouTube, but this can be changed in stream_output.py if you want to test on another platform.) In our case, we’re only testing one live platform at a time. It is possible, however, to stream to two endpoints: you can either use a tee muxer on your local machine or use a service like restream.io.
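As a rough sketch of the tee-muxer route (the function name and the secondary URL here are my own placeholders, not part of the repository), the tail of the FFmpeg command could be swapped for a tee output that pushes one encode to two RTMP endpoints. The tee muxer requires explicit -map options, and onfail=ignore keeps one branch alive if the other drops:

```python
# Hypothetical helper: build the output portion of an FFmpeg command using
# the tee muxer so a single encode feeds two RTMP endpoints at once.
def tee_output_args(primary_key: str, secondary_url: str) -> list:
    primary = f"rtmp://a.rtmp.youtube.com/live2/{primary_key}"
    # Each branch names its container format; onfail=ignore means one
    # endpoint failing does not kill the other branch.
    tee_spec = (
        f"[f=flv:onfail=ignore]{primary}"
        f"|[f=flv:onfail=ignore]{secondary_url}"
    )
    # The tee muxer needs explicit stream mapping (video from input 0,
    # audio from input 1, matching our pipe + WAV inputs).
    return ["-map", "0:v", "-map", "1:a", "-f", "tee", tee_spec]
```

These arguments would replace the final `"-f", "flv", rtmp_url` pair in the command list; I haven’t tested dual endpoints myself, so treat this as a starting point.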
Python – Readying for FFmpeg
The Python code prepares the Pygame surface (via a NumPy conversion) in main.py, and stream_output.py handles the FFmpeg command-line arguments.
if ffmpeg_proc:
    # Convert the Pygame surface to raw RGB bytes for FFmpeg; copy()
    # releases the surface lock that pixels3d() holds.
    frame = pygame.surfarray.pixels3d(render_surface).swapaxes(0, 1).copy()
    try:
        ffmpeg_proc.stdin.write(frame.tobytes())
    except (BrokenPipeError, ValueError):
        ffmpeg_proc = None
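Why the swapaxes? Pygame’s surfarray is indexed (width, height, 3), while FFmpeg’s rawvideo reader consumes row-major data, i.e. (height, width, 3). A quick NumPy-only sanity check (no Pygame required; the array here just simulates what pixels3d would return) confirms the byte count FFmpeg expects per rgb24 frame:

```python
import numpy as np

# Simulate one 1920x1080 frame as Pygame's surfarray hands it to us:
# indexed (width, height, 3).
width, height = 1920, 1080
surf_array = np.zeros((width, height, 3), dtype=np.uint8)

# FFmpeg's rawvideo/rgb24 input expects row-major (height, width, 3),
# exactly width * height * 3 bytes per frame.
frame = surf_array.swapaxes(0, 1)
assert frame.shape == (height, width, 3)
assert len(frame.tobytes()) == width * height * 3  # 6,220,800 bytes
```

If that byte count ever drifts from what the `-s WxH` flag promises FFmpeg, the output image shears or the pipe stalls, so it’s worth the assertion.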
import os
import subprocess


def start_streaming_process(config, resolution):
    """
    Start the FFmpeg process for streaming to YouTube Live.

    :param config: parsed YAML configuration dictionary
    :param resolution: (width, height) tuple for the raw video frames
    :return: the FFmpeg subprocess.Popen handle (write frames to its stdin)
    """
    width, height = resolution
    fps = config['streaming'].get('framerate', 30)
    bitrate = config['streaming'].get('bitrate', 4500)
    stream_key = config['streaming']['stream_key']

    audio_path = os.path.join("assets", "sounds", "loop_music.wav")
    if not os.path.exists(audio_path):
        raise FileNotFoundError(f"Missing audio file: {audio_path}")

    rtmp_url = f"rtmp://a.rtmp.youtube.com/live2/{stream_key}"

    cmd = [
        "ffmpeg",
        # Input 0: raw RGB frames piped from Pygame via stdin
        "-f", "rawvideo",
        "-pix_fmt", "rgb24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",
        # Input 1: background music, looped indefinitely
        "-stream_loop", "-1",
        "-i", audio_path,
        # Encode and push to the RTMP endpoint
        "-c:v", "libx264",
        "-preset", "veryfast",
        "-b:v", f"{bitrate}k",
        "-c:a", "aac",
        "-b:a", "128k",
        "-f", "flv",
        rtmp_url
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)
Result Output
Aside from a minor buffer underrun issue that I seemingly resolved, I was shocked that the “Go Live” effort worked with this script on the first try, with no substantial debugging needed. In retrospect, this shouldn’t have been so surprising: I set up a “preview only” mode in the script before going live, so any issues with Pygame and looping were ironed out before I made the connection to stream live. The only “magic” code needed was the NumPy transformation and the piping of frames to FFmpeg.
On the YouTube side, simply go to YouTube Studio and then “Create/Go Live”. You’ll be able to edit metadata for the stream, such as setting a thumbnail/placeholder and visibility (for instance, leaving it unlisted so it’s not searchable). This is also where you’ll see the correct RTMP endpoint and can view your key. While a backup endpoint is offered, you’ll only need the primary endpoint address in most cases.
I noticed about a 20-second delay between my preview window and what was available to the world on YouTube, using the normal latency setting in YouTube Studio. I did receive a momentary buffer underrun message, but it eventually disappeared, and I observed no ill effect on the resulting video.
To end the stream, I simply killed the Python process (I deliberately didn’t add key responsiveness, to prevent accidental stream interference). After about a minute, YouTube considers the live broadcast ended, and the recording becomes available in your channel’s video list under “Live”.
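Killing the process works, but a slightly gentler shutdown is to close FFmpeg’s stdin and wait: with no more frames arriving, FFmpeg flushes its buffers and exits on its own. A small sketch of the idea, using a stand-in child process rather than a real FFmpeg invocation:

```python
import subprocess
import sys

# Stand-in for the FFmpeg process: a child that reads stdin until EOF.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdin.buffer.read()"],
    stdin=subprocess.PIPE,
)

# Closing stdin signals EOF; the child (like FFmpeg) finishes cleanly
# instead of being killed mid-write.
proc.stdin.close()
proc.wait(timeout=30)
assert proc.returncode == 0
```

In the real script, this would mean calling `ffmpeg_proc.stdin.close()` followed by `ffmpeg_proc.wait()` instead of terminating Python outright; I haven’t wired this into the repository code, so consider it an untested refinement.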
You can see my result here…
The Code
The code is available at our ongoing “utilities” repository. See directory rtmp-stream. I did not include the font and music file, for obvious licensing reasons. Here’s the directory structure that you’ll have after cloning the repository.
stream_overlay/
├── main.py # Entry point with minimal logic
├── config/
│ └── config.yaml # Example or schema for YAML configuration
├── feeds/
│ └── substack.py # Functions to fetch and parse Substack content
├── graphics/
│ └── layout_engine.py # Handles text layout, wrapping, screen-safe rendering
├── streaming/
│ └── stream_output.py # Placeholder for RTMP or mock stream output
├── utils/
│ └── config_loader.py # Reads and validates the YAML config
├── assets/
│   ├── fonts/ # Fonts used for display (https://dejavu-fonts.github.io/Download.html)
│   └── sounds/ # Insert WAV file here
└── requirements.txt # Python dependencies
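For reference, feeds/substack.py only needs to pull item titles (and perhaps links or descriptions) out of the RSS XML. Here’s a minimal, hedged sketch using only the standard library; the function name is my own, and the real module may be structured differently:

```python
import xml.etree.ElementTree as ET

def parse_feed_xml(xml_text):
    """Extract title/link pairs from an RSS 2.0 feed like Substack's."""
    root = ET.fromstring(xml_text)
    items = []
    # RSS 2.0 nests <item> elements under <channel>; iter() finds them all.
    for item in root.iter("item"):
        items.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        })
    return items
```

In practice you’d fetch the feed URL (e.g. with urllib.request) and pass the response body to this function, then hand the titles to the layout engine.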
Prerequisite
FFmpeg must be downloaded and installed on your system. If I recall correctly, the installer handles setting up the PATH environment variable for you, but this may be something to check if you have any problems.
Running the Python
To run the Python script in preview mode:
python main.py -c config\config.yaml
To run with the “go live” option:
python main.py -c config\config.yaml -live
Note, as mentioned above, to go live on a platform you’ll need to check the configuration of stream_output.py and insert your key into the configuration YAML file.
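For completeness, here’s a hypothetical sketch of the kind of validation utils/config_loader.py might perform once PyYAML has parsed the file into a dict. The key names follow the YAML shown earlier; the defaults and the function name are my assumptions, not necessarily what the repository does:

```python
def validate_config(cfg):
    """Check required keys and fill assumed defaults on a parsed config."""
    if not cfg.get("feeds"):
        raise ValueError("config needs at least one feed")
    streaming = cfg.setdefault("streaming", {})
    if not streaming.get("stream_key"):
        raise ValueError("streaming.stream_key is required to go live")
    # Assumed defaults, matching the values used in start_streaming_process.
    streaming.setdefault("framerate", 30)
    streaming.setdefault("bitrate", 4500)
    cfg.setdefault("screen", {}).setdefault("output_resolution", [1920, 1080])
    return cfg
```

Validating up front like this fails fast on a missing stream key, rather than discovering it after FFmpeg has already been launched.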
Conclusion
You might be wondering how you can apply the information discussed here. The beauty of automated data sources is that they can be presented in various ways, making them incredibly versatile. For instance, if you search for "Live Seismographs" on YouTube, you'll find fully automated data sources displayed through different methods, including video. Similarly, some live weather channels on YouTube utilize this approach to provide real-time updates.
This method can also be used to render video that can later be chromakeyed downstream. Imagine placing the time, date, and location at the top of your Pygame surface, creating a ticker or crawl at the bottom of the screen, and leaving the rest of the field green. Another piece of software can then key on this green area and insert live video, creating a seamless blend of automated and live content.
Stay tuned, as I plan to explore this topic further in future posts. If you have any questions or comments, please feel free to share them below. Your feedback and curiosity are always welcome!