Introduction
I’ve been diving deep into classic television lately, with a particular focus on the Dragnet franchise. To give some context, this iconic series started as a radio drama in 1949 before transitioning into the 1951 black-and-white TV show. The franchise saw a revival in 1967 with a modernized, color format. This 1967 series also led to the creation of the spinoff Adam-12, which in turn gave rise to another spinoff, the TV series Emergency!.
Many of the original black-and-white episodes from the 1951 Dragnet series are considered “lost,” with the reasons behind their disappearance remaining unclear. For those episodes that are still available, finding high-quality versions can be quite challenging. The DVDs are often out of print, and it’s difficult to gauge the video quality before purchasing them. Note that many of these episodes have entered the public domain and can occasionally be found on platforms like YouTube. Regardless of how you find these episodes, the copies available are usually sourced from old film masters, which often show visible imperfections such as specks, scratches, and dirt. While these flaws might add a vintage charm for some enthusiasts, they can make the episodes less enjoyable for modern viewers accustomed to higher visual standards.
Inspired by my recent exploration of upscaling techniques, I began to consider the feasibility of using AI or open-source tools to address these film imperfections, specifically the specks and scratches that detract from the viewing experience. While grain removal isn't my primary focus, cleaning up these visual defects could significantly enhance the quality of the episodes.
As a disclaimer, I might ultimately turn to a commercial solution like Neat Video, which is highly regarded for its effectiveness in Premiere and After Effects. However, given my success with the open-source tool Real-ESRGAN, detailed in a previous post, I'm inclined to first explore whether a similar open-source option could meet my needs.
Navigating VapourSynth Installation: Challenges and Resources
I started my research by asking AI tools what options are out there. After narrowing down my requirements, I landed on a path to explore VapourSynth.
ChatGPT suggested that setup and installation might be difficult, but it wasn't. If you can install and use Python, you can handle VapourSynth. I did encounter some issues with the plugin environment, which I'll discuss shortly.
The initial instructions provided by ChatGPT were unclear, which complicated the setup process. VapourSynth offers several installation options, and there may have been a conflict when I installed it using both the EXE file and Python’s pip. Resolving this issue might necessitate a complete uninstallation and reinstallation of my Python environment, which I am currently reluctant to undertake. Additionally, other installation options include a batch file and a PowerShell script. Please note that the installation EXE is unsigned, potentially causing Windows to issue warnings during installation. ChatGPT directed me to a Git installation page, which might not have been the ideal starting point for me.
As a result, I thought I’d break down several resources to help, as I won’t be able to detail everything in this post.
The VapourSynth web page. This is a WordPress site that serves as a good springboard of information.
The Doom9.org Forum Page – VapourSynth. Doom9 is a longtime resource for video encoding/transcoding specialists. The community for VapourSynth is very active.
The VSDB (VapourSynth Database). Information about plugins.
Git – Releases Page. Downloads with version archive.
To be transparent, I haven’t yet had the opportunity to delve deeply into the resources linked above. It’s likely that a seasoned VapourSynth enthusiast could spot the mistakes I made during installation and understand the dependency issues I encountered. I’m confident that once I dedicate more time to thoroughly exploring these materials, many of my problems will be resolved. However, given that my audience primarily consists of casual film restoration and upscaling enthusiasts, I’ve chosen to simplify and adjust the level of detail in this Substack post to focus on broader accessibility.
VapourSynth appears to be an advancement of AviSynth. Although I have historically been familiar with AviSynth, my usage in the past was not extensive. A Video Stack Exchange post from 2020 outlines the differences between these solutions. As of 2025, both AviSynth+ and VapourSynth are actively maintained.
Furthermore, I would not assert that VapourSynth is an ultimate solution in this domain. Nevertheless, I am encountering difficulties in identifying any open-source alternative that stands out as a definitive option. There are numerous compelling alternatives available; indeed, my research into this process has highlighted several options that warrant further investigation.
Film Restoration Workflow: Stabilization, Cleaning, and Upscaling
To create a proper “restored” video, I could imagine my workflow going something like this.
Step One: Utilize Adobe Premiere’s Warp Stabilizer to eliminate "film jitter". Although I have not personally used it specifically for film jitter, I have successfully applied it to reduce camera shake. Based on available information, Premiere's effect is also proficient at addressing film jitter. Additionally, VapourSynth offers several plugins that appear capable of performing this task, though I have not tested the VapourSynth options for film stabilization. I believe it is important that this step take place first, as some of the plugins for removing film defects like scratches and dust rely on temporal motion detection for their “repair” effects.
Step Two: Use a tool to remove the specks, dirt, and other film-borne defects. This is where I’m using VapourSynth. Due to my self-imposed deadlines for publishing content on Substack, I had to make a number of compromises here in terms of troubleshooting and debugging.
Step Three: Use the Real-ESRGAN solution that I personally have confidence in to upscale the “cleaned” film.
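The hand-off between steps two and three can be sketched as a small command-building helper. This is only a sketch: the vspipe/ffmpeg pipe matches the command shown later in this post, while the upscaler invocation (`realesrgan-upscale` and its flags) is a hypothetical placeholder you would swap for whatever Real-ESRGAN front-end you actually use.

```python
import shlex

def build_restore_commands(script: str, cleaned: str, upscaled: str) -> list[str]:
    """Build the shell commands for steps two and three of the workflow.

    Step one (stabilization) happens interactively in Premiere, so it is
    not scripted here. The upscaler command is a placeholder, not a real CLI.
    """
    # Step two: run the VapourSynth script and encode the cleaned result
    clean_cmd = (
        f'vspipe -c y4m {shlex.quote(script)} - | '
        f'ffmpeg -i - -c:v libx264 -crf 18 -preset slow {shlex.quote(cleaned)}'
    )
    # Step three: hypothetical Real-ESRGAN front-end (adjust name/flags to yours)
    upscale_cmd = f'realesrgan-upscale -i {shlex.quote(cleaned)} -o {shlex.quote(upscaled)}'
    return [clean_cmd, upscale_cmd]

for cmd in build_restore_commands("restorechain.vpy", "cleaned.mp4", "upscaled.mp4"):
    print(cmd)
```

Keeping the steps as separate commands (rather than one long pipe) makes it easier to inspect the intermediate cleaned file before committing GPU time to upscaling.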
Other Requirements for VapourSynth
vsrepo
It’s important to find “vsrepo.py”, which is included with your VapourSynth installation. It is used to install plugins that extend VapourSynth’s functionality. In my case, I installed VapourSynth at the user level (rather than at the system level), so to obtain plugins, I’d type in a command like the following.
Depending on your Python configuration and PATH environment variable, you may be able to skip the bloat of providing a full path to the Python script.
For example, to list available plugins at VSDB:
python "C:\Users\[user]\AppData\Local\Programs\VapourSynth\vsrepo\vsrepo.py" available
…can possibly be shortened to…
vsrepo available
To install a plugin, such as lostfunc, which is a collection of popular scripts from Doom9.org:
vsrepo install lostfunc
vsedit
vsedit can be downloaded from this location. Note that VSE-Previewer may be listed at the top of the page, so scroll down to the vsedit section (or another appropriately recent version).

The installer is user-friendly and does not require you to manually locate your Python installation. It automatically detects the VapourSynth installation on your system. However, I encountered a few registry errors when installing at the user level. To resolve this issue, run the installer as an administrator.
ffmpeg
On Windows, you’ll need an ffmpeg binary if you want to process your video. You can download one from gyan.dev (a trusted source). (See the original ffmpeg.org page.)
Testing with vsedit
VapourSynth and vsedit work with .vpy scripts (a Python script for VapourSynth). Here’s sample code (restorechain.vpy) for my tests.
# restorechain.vpy
import vapoursynth as vs
import havsfunc as haf  # community script collection; provides QTGMC

core = vs.core

# Load the source (requires the ffms2 plugin)
src = core.ffms2.Source("myvideofile.mp4")
# Normalize to 8-bit YUV 4:2:0 so every filter in the chain accepts the format
src = core.resize.Bicubic(src, format=vs.YUV420P8)

filtered = core.ctmf.CTMF(src, radius=2)  # median filter to knock out small specks
filtered = core.knlm.KNLMeansCL(filtered, d=3, a=3, h=1.2)  # GPU non-local-means denoise
filtered = core.descratch.DeScratch(filtered)  # vertical scratch removal
filtered = core.dfttest.DFTTest(filtered, sigma=4.0, tbsize=3)  # frequency-domain temporal denoise
filtered = core.rgvs.RemoveGrain(filtered, mode=17)  # conservative spatial cleanup
# QTGMC used here for its motion-compensated denoising; TFF=True assumes top-field-first
filtered = haf.QTGMC(filtered, Preset="Slower", TFF=True, FPSDivisor=2, EZDenoise=0.75, NoisePreset="Slow")
filtered = core.text.Text(filtered, "Restoration Chain Active")

filtered.set_output(0)  # index 0: processed result
src.set_output(1)       # index 1: untouched source, for A/B comparison
Unfortunately, I did walk through some dependency hell while troubleshooting this. vsedit will tell you if a plugin is missing: it displays an error message in pink/red at the bottom of the screen with guidance.
Failed to evaluate the script:
Python exception: No module named 'missingmod'
Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 3378, in vapoursynth._vpy_evaluate
  File "src/cython/vapoursynth.pyx", line 3379, in vapoursynth._vpy_evaluate
  File "happy2.vpy", line 2, in <module>
    import missingmod
ModuleNotFoundError: No module named 'missingmod'
In this case, I would execute…
vsrepo install missingmod
…to fix the issue. If the vsrepo command doesn’t recognize the name, check whether it is a plain Python library instead and try a pip install. Additionally, note that you might encounter complex dependency issues, similar to what I experienced. I’ll spend some more time investigating my issues.
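To quickly tell whether a failure is a missing Python module (a pip problem) rather than a missing native plugin (a vsrepo problem), you can probe importability from a plain Python prompt before even opening vsedit. This is a small sketch using only the standard library; the module names passed in are just examples.

```python
import importlib.util

def diagnose(module_names):
    """Report which Python modules can be found on the current sys.path.

    A name reported as 'missing' here is a pip problem; if Python can find
    the module but VapourSynth still errors, suspect a native plugin that
    vsrepo needs to install instead.
    """
    report = {}
    for name in module_names:
        spec = importlib.util.find_spec(name)  # None if the module can't be located
        report[name] = "found" if spec is not None else "missing"
    return report

print(diagnose(["json", "definitely_not_installed_mod"]))
```

Run this with the names from your import lines (e.g. `havsfunc`) to split the problem in half before reaching for vsrepo or pip.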
Once you have the required prerequisites, you can test by pressing F5 (also available in the Script menu).
Note, my lines of code…
filtered.set_output(0)
src.set_output(1)
This will allow you to toggle between the final output (0) and the original (1), so you can gauge the level of change before you process the result into a new video file using vspipe and ffmpeg. By toggling between 0 and 1, you can see whether your filters are applying the right level of adjustment, or whether you need to change some argument values.
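If eyeballing the toggle isn’t precise enough, the idea of metering the change can be quantified: compute the mean absolute difference between original and filtered pixel values. Here is the concept illustrated in plain Python on toy frame data; inside a real VapourSynth script you would read actual pixel data from the frames instead.

```python
def mean_abs_diff(original, filtered):
    """Average per-pixel absolute difference between two equally sized frames.

    0.0 means the filter changed nothing; larger values mean a heavier touch.
    Frames are flat lists of 8-bit luma values here, purely for illustration.
    """
    assert len(original) == len(filtered)
    total = sum(abs(a - b) for a, b in zip(original, filtered))
    return total / len(original)

# Toy example: a "filter" that smoothed two noisy pixels
orig = [10, 200, 10, 10, 180, 10]
filt = [10, 120, 10, 10, 110, 10]
print(mean_abs_diff(orig, filt))  # (80 + 70) / 6 = 25.0
```

Tracking a number like this across filter-parameter tweaks makes it easier to tell whether a change to, say, `sigma` or `h` is actually doing more or less work.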
Once you are happy with the results in vsedit, you can export to a new file by piping vspipe into ffmpeg:
vspipe -c y4m "restorechain.vpy" - | ffmpeg -i - -c:v libx264 -crf 18 -preset slow output.mp4
One of the challenges with this approach was the extensive time spent debugging plugins that were installed but did not initialize. There is likely a dependency failure at some level that vsedit cannot detect. If this occurs, you may consider using a script similar to the one below, which attempts to touch each plugin and then "generates video" with a pass/fail message. Note that Python's print statements are not usable in vsedit for debugging purposes.
# testplugins.vpy
import vapoursynth as vs

core = vs.core

# Synthetic clip so the checks run without any source file
clip = core.std.BlankClip(width=640, height=360, length=60, fpsnum=24, format=vs.YUV420P8)

results = []

try:
    _ = core.ffms2.Source
    results.append("ffms2: OK")
except Exception:
    results.append("ffms2: FAIL")

try:
    _ = core.resize.Bicubic(clip, format=vs.YUV420P8)
    results.append("resize: OK")
except Exception:
    results.append("resize: FAIL")

try:
    _ = core.ctmf.CTMF(clip, radius=2)
    results.append("ctmf: OK")
except Exception:
    results.append("ctmf: FAIL")

try:
    _ = core.knlm.KNLMeansCL(clip, d=1, a=1, h=1.0)
    results.append("knlm: OK")
except Exception:
    results.append("knlm: FAIL")

try:
    _ = core.descratch.DeScratch(clip)
    results.append("descratch: OK")
except Exception:
    results.append("descratch: FAIL")

try:
    _ = core.dfttest.DFTTest(clip, sigma=1.0)
    results.append("dfttest: OK")
except Exception:
    results.append("dfttest: FAIL")

try:
    _ = core.rgvs.RemoveGrain(clip, mode=2)
    results.append("rgvs: OK")
except Exception:
    results.append("rgvs: FAIL")

try:
    import havsfunc
    _ = havsfunc.QTGMC(clip)
    results.append("havsfunc/QTGMC: OK")
except Exception:
    results.append("havsfunc/QTGMC: FAIL")

# Burn the pass/fail list into the frame, since print() isn't usable in vsedit
text_overlay = core.text.Text(clip, "\n".join(results))
text_overlay.set_output(0)
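The repeated try/except blocks above get tedious as the filter chain grows. The same pass/fail sweep can be written as a data-driven loop; here is the pattern illustrated with plain attribute lookups so it runs without VapourSynth (in a real .vpy you would pass `vs.core` and the dotted plugin names instead of the stand-in object).

```python
import types

def sweep(obj, dotted_names):
    """Check that each dotted attribute path resolves on obj.

    Returns 'name: OK' or 'name: FAIL' strings, mirroring the overlay
    text built by the testplugins.vpy script.
    """
    results = []
    for name in dotted_names:
        target = obj
        try:
            for part in name.split("."):
                target = getattr(target, part)  # walk e.g. ffms2 -> Source
            results.append(f"{name}: OK")
        except AttributeError:
            results.append(f"{name}: FAIL")
    return results

# Demonstration with a stand-in object instead of vs.core
fake_core = types.SimpleNamespace(
    ffms2=types.SimpleNamespace(Source=lambda path: path),
    resize=types.SimpleNamespace(Bicubic=lambda c, **kw: c),
)
print(sweep(fake_core, ["ffms2.Source", "resize.Bicubic", "descratch.DeScratch"]))
# → ['ffms2.Source: OK', 'resize.Bicubic: OK', 'descratch.DeScratch: FAIL']
```

Note this only checks that the attribute exists; actually calling the plugin (as the .vpy script does) can still fail for deeper dependency reasons, which is exactly the class of problem I was hitting.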
This produced the following output. I was unable to determine why havsfunc/QTGMC was not loading, and it was essential for the film cleanup operation I was attempting to complete.
What were the Results?
What resulted could be dismissed as a modest blur of sorts. We can see a speck being minimized and, in this case, some MPEG artifacts being reduced from my source. (Note: this will be hard to view on a small screen such as a phone.)
However, I do see promise. I could probably conjure up a better restoration chain with some time to troubleshoot.
After taking the video from the restoration chain, I then ran it through our AI upscaler. While it’s not perfect, you can see a bit of improvement on the right side of the screen, at least enough to warrant further tweaking of the setup.
I believe there are several valuable lessons to be learned here. If my VapourSynth tools were functioning properly without dependency issues or other complexities that I currently do not fully understand, I would be able to more effectively refine and eliminate the film defects I aim to address. I plan to spend time this upcoming weekend completely uninstalling all related software, including Python, and then reinstalling everything at the system level rather than the user level, as the setup programs recommend. Despite my preference for installing as much as possible at the user level due to security considerations, I will follow these recommendations.
Another important consideration is identifying the actual source of the improvement. Is it VapourSynth or Real-ESRGAN that is responsible? I maintain the belief that if both tools are properly optimized, the results can be significantly enhanced.
Conclusion
This project confirmed what I suspected going in: restoring old television isn't just about slapping an upscaler on it and calling it done. Tools like Real-ESRGAN can help improve resolution and clarity, but the real gains come from addressing the physical flaws in the source, such as scratches, specks, jitter, and other visible damage that comes with aging film.
VapourSynth isn't the most beginner-friendly tool, but it shows a lot of promise. The early results weren’t dramatic, but they were good enough to suggest that I'm on the right track. With more time to experiment and a properly configured plugin stack, I expect to produce cleaner and more consistent outputs.
This isn't a groundbreaking restoration effort, and I don't expect it to make a huge impact. But if it makes some episodes a little easier to watch, that's worth the effort. I'll keep refining the process and documenting what works, in case anyone else wants to take a similar path.
Please use the comments below if you hit a snag when setting up VapourSynth; I fully realize I likely omitted some essential details.