Seeing Through the Negative: Alpha, Light, and Python-Based Visual Effects
Recoloring images, simulating light, and compositing videos without leaving the command line
Introduction
My motivation for this post was initially a modest goal of playing around with light simulations, the kind you might fake in a photo editor or try to visualize in After Effects. My plan was to see if I could emulate certain optical effects using Python and image manipulation libraries. It didn’t work. Or rather, it didn’t work in the way I hoped. The results were flat, uninspiring, and missing that elusive interplay between light and form I was chasing.
So I pivoted. Instead of trying to simulate how light behaves physically, I leaned into a different approach: using negative space and alpha channels to simulate backlighting. The concept was simple: invert an image and then project a color "through" it based on brightness. This technique let me generate visuals that felt more like stylized prints than literal photos. Some results looked straight out of Andy Warhol’s pop art palette, with harsh contrasts and bold color overlays. Others evoked the strange visual language of Kubrick’s 2001: A Space Odyssey star gate — abstract and dreamlike, like light leaking through a photograph from another dimension.
During my experimentation, I ran into an unexpected phenomenon. Using pure white (#FFFFFF) as the simulated light source produced images with odd dark contours in areas that should have appeared bright. I initially assumed this was a bug in my code, until I observed the same behavior in Photoshop when applying blend modes to images with embedded transparency. The shading artifact is not an error but a consequence of how RGB and alpha are composited. I will walk through the math behind it later, but this was the point where I realized the project was no longer just about experimenting with effects; it was about uncovering fundamentals of digital image composition.
The Darkness after Negative Problem
I’ll use Photoshop to illustrate the behavior I observed with my original Python code, since Photoshop produces the same result when following my original technique. Seeing how this is done in Photoshop may also help clarify the concepts behind the approach.
To test this in Photoshop, I tried a combination of layers with masks and backing colors.
This isn’t an all-inclusive screenshot; several steps of fine-tuning between alpha masks and merge modes come into play. Every method I tried, however, ultimately produced an image slightly darker (or, depending on the experiment, lighter) than the original. Something in the algorithms prevents a return to the original.
I acknowledge that a Photoshop expert might know how to handle this. When I searched, I only found a Stack Exchange/Superuser post about black-and-white images; I couldn’t find a solution for creating a true negative with alpha for RGB images.
In terms of my Python script, let’s look at the “erroneous” method and the version I wound up using.
Erroneous Version (Resulting in Dimming):
Process: Composite the RGBA negative onto a white background, then invert the result.
Derivation:
Intermediate Color = White * (1 - Alpha/255) + NegativeColor * (Alpha/255)
Substitute White = 255 and NegativeColor = 255 - OriginalColor:
Intermediate Color = 255 * (1 - Alpha/255) + (255 - OriginalColor) * (Alpha/255)
Intermediate Color = (255 - Alpha) + (255 * Alpha/255) - (OriginalColor * Alpha/255)
Intermediate Color = 255 - Alpha + Alpha - (OriginalColor * Alpha/255)
Intermediate Color = 255 - (OriginalColor * Alpha/255)
Invert the intermediate result:
Final Color = 255 - Intermediate Color
Final Color = 255 - [ 255 - (OriginalColor * Alpha/255) ]
Final Color = 255 - 255 + (OriginalColor * Alpha/255)
Final Formula:
Final Color = OriginalColor * (Alpha / 255)
Effect: Since Alpha/255 is a value between 0 and 1 (inclusive), multiplying the OriginalColor by this factor will either keep it the same (if the original pixel was pure white, Alpha=255) or make it darker (if Alpha < 255). Effectively, the image is being multiplied by its own brightness map, causing areas that weren't fully bright in the original to become dimmer in the restored version.
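The dimming is easy to verify numerically. A minimal check of the derivation in plain Python, using one illustrative pixel value:

```python
# One pixel: mid-gray (OriginalColor = 128) whose brightness-driven
# alpha is also 128 (50% opacity).
C, A = 128.0, 128.0

# Step 1: the negative inverts the color.
negative = 255.0 - C

# Step 2: composite the negative over a white backing...
intermediate = 255.0 * (1 - A / 255.0) + negative * (A / 255.0)

# Step 3: ...then invert the composite.
restored = 255.0 - intermediate

# The "restored" pixel matches the derived formula C * (Alpha/255),
# which is well below the original 128.
print(restored, C * (A / 255.0))
```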
Corrected Version (With Brightness Correction):
Process: Composite the RGBA negative onto a white background, invert the result, then apply brightness correction by multiplying by 255 / Alpha.
Derivation:
From the erroneous version above, the result after compositing and inverting is:
Inverted Composite = OriginalColor * (Alpha / 255)
Define the brightness correction scaling factor (handling Alpha=0 separately in code): Scale Factor = 255 / Alpha
Apply the correction:
Final Color = Inverted Composite * Scale Factor
Final Color = [ OriginalColor * (Alpha / 255) ] * (255 / Alpha)
Final Color = OriginalColor * (Alpha * 255) / (255 * Alpha)
Simplify by canceling terms (where Alpha is not 0):
Final Color = OriginalColor * 1
Final Formula:
Final Color = OriginalColor
Effect: The output color is mathematically restored to the original color value (before clamping is applied to keep values in the 0-255 range).
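The correction can be checked the same way, continuing from the dimmed value above (again a single illustrative pixel; real code must special-case Alpha = 0):

```python
C, A = 128.0, 128.0

# The erroneous pipeline leaves us with C * (Alpha/255).
dimmed = C * (A / 255.0)

# Multiply by the 255/Alpha scale factor (guarding against Alpha == 0),
# then clamp into the valid 0-255 range.
scale = 255.0 / A if A != 0 else 0.0
corrected = min(dimmed * scale, 255.0)

print(corrected)  # recovers the original value of 128 (up to float rounding)
```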
Putting the Code Together
The initial code I developed was focused exclusively on processing still images. Its objectives were twofold: to invert the image and to substitute the brighter areas with transparency. The second objective was crucial. My intention was not simply to create a photographic negative, but to allow the brightness of the original image to influence the alpha channel. In essence, the brighter the original image, the more transparent the resulting negative. The effect resembled viewing a film negative on a light table: ethereal yet visually compelling.
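That "brightness drives alpha" step can be sketched in a few lines of NumPy (the full PIL-based implementation appears in negative.py below; the two-pixel array here is made up for illustration):

```python
import numpy as np

def negative_with_alpha(rgb):
    """Invert the RGB channels and derive alpha from the original luma,
    mirroring what create_negative() in negative.py does with PIL."""
    rgb = np.asarray(rgb, dtype=np.uint8)
    inverted = 255 - rgb                        # photographic negative
    luma = rgb.mean(axis=2).astype(np.uint8)    # brightness of the original
    return np.dstack([inverted, luma])          # stack into RGBA

# Two pixels: one near-white, one near-black.
strip = np.array([[[240, 240, 240], [10, 10, 10]]], dtype=np.uint8)
out = negative_with_alpha(strip)
print(out[0, 0], out[0, 1])
```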
At that stage, the script was designed solely for still images. It accepted a PNG file as input and produced a negative with adjusted alpha transparency. While engaged in this project, I recalled that I had recently developed a chroma key script intended to remove green screen backgrounds from foreground images for another post. This script, also designed for still images, operated under its own separate set of parameters.
I had mentioned in this chroma keying post that chroma keying could easily be adapted to work with image sequences or even full videos. So, the idea started brewing: why not bring both techniques (the negative/alpha method and the chroma key method) under the same roof? The concepts weren’t that different. Both involved manipulating image transparency. Both could work on a frame-by-frame basis. And most importantly, both could benefit from a single command-line tool to control them.
That’s when I decided to create a more universal Python tool, something modular and expandable that could process either stills or video. I started breaking everything into pieces: a module for file I/O, one for video frame extraction and assembly, one for chroma keying, and another for the negative plus backlight simulation. At the center of it all is processor.py, the script that ties everything together and handles the branching logic based on the command-line arguments. With that in place, I could now feed it a video or a still image, choose an operation, specify a color, and get exactly what I wanted.
Of course, there’s still one big caveat: most open video formats don’t support transparency. That means while I can generate a series of alpha-enabled PNGs from a video, I can’t put that alpha channel back into a video file (not easily, anyway). There are some QuickTime codecs that support it, like ProRes 4444, but I haven’t found a clean way to tap into those using Python. So for now, if you want to preserve the alpha, the output remains a PNG sequence. You can import that into any modern video editor and use it as an overlay layer (which, honestly, is a workflow that gives you a lot of flexibility anyway).
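For completeness, this is the kind of ffmpeg invocation that can wrap an alpha-enabled PNG sequence into ProRes 4444 outside of Python. It's a sketch, not something the script does: it assumes ffmpeg is installed and that the frames follow a flyover_0000.png-style numbering (the script's actual prefixes include a timestamp):

```shell
# Wrap a PNG sequence (with alpha) into ProRes 4444 in a .mov container.
# yuva444p10le is the pixel format that carries the alpha channel through.
ffmpeg -framerate 30 -i workdir/flyover_%04d.png \
  -c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le \
  flyover-alpha.mov
```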
Is low-level image or video editing of interest to you?
Consider a one-time tip to sonnik to support work like this.
Samples, Tests and Results
Unfortunately, I don’t have a good way to get testing videos to you. (Any videos I include in the article seemingly can’t be downloaded, due to Substack’s global configuration.) I do have some sample stills included here, along with results for comparison. On a desktop browser you should be able to right-click an image to view or download it (assuming Substack doesn’t apply some JavaScript obfuscation to images).
Source Images and Footage
As mentioned, while this post focuses on the new “negative with alpha” filter, I’m including the details from the previous article on chroma keying. The weatherman and weather map images are carryovers from that article.
For the videos, I’m working with footage of an AI podcaster against a green screen, plus a motion background in which a monitor alternates between red and green over time. I’m also including a beach flyover for the alpha negatives.
Chroma key: Video on Video
python processor.py --inputfile source/podder-solo-on-green.mp4 --outputfile podder-merged.mp4 --operation chromakey --color 67FF00 --workingdirectory workdir --sequencename podder --backgroundsequence source/podder-background.mp4 --cleanup
Chroma key: Video on Image
python processor.py --inputfile source/podder-solo-on-green.mp4 --outputfile podder-static-map.mp4 --operation chromakey --color 67FF00 --workingdirectory workdir --sequencename podderweather --backgroundsequence source/weather_map.png --cleanup
Chroma key: Image on Image
python processor.py --inputfile source/weatherman_on_green.png --outputfile single-image-chromakey.png --operation chromakey --color 00FF00 --workingdirectory workdir --sequencename weather --backgroundsequence source/weather_map.png
Alpha Negative: Video – Retain Negative (Image Sequence Only)
python processor.py --inputfile source/flyover-for-substack.mp4 --outputfile flyover-negative --operation negative --workingdirectory workdir --sequencename flyover
(Note: Image sequence was recombined using After Effects and combined with a generic alpha background due to the file format constraint caveat mentioned above.)
Alpha Negative: Video – Apply Color (Kubrick Stargate/Warhol Effect)
python processor.py --inputfile source/flyover-for-substack.mp4 --outputfile flyover-yellow.mp4 --operation negative-reimage --color FFFF00 --workingdirectory workdir --sequencename flyover --cleanup
Alpha Negative: Image – Retain Negative
python processor.py --inputfile source/beach.png --outputfile beach-negative.png --operation negative --workingdirectory workdir --sequencename beach
Alpha Negative: Image – Apply Color (Kubrick Stargate/Warhol Effect)
python processor.py --inputfile source/beach.png --outputfile beach-blue.png --operation negative-reimage --color 0000FF --workingdirectory workdir --sequencename beach
The Code
The code is available in my GitHub Utilities project. Look in the Processor-Video directory for the code specific to this project. It is also provided below, after the conclusion of the article, for convenience.
Make sure that all code files are located in a single directory. See usage above.
You should have six Python modules:
chromakey.py
fileio.py
logger.py
negative.py
processor.py
videotoimage.py
Conclusion
What began as a failed attempt at simulating light evolved into a useful tool for negatives and alpha manipulation. This versatile visual technique can simulate effects from film-style backlighting to pop-art abstractions.
The visual output was not the only challenging aspect; numerous foundational issues with image composition had to be addressed, especially the nuances of alpha blending and brightness correction. These issues often go unnoticed until a pipeline is built from scratch, where mathematical calculations and color data can interact in unpredictable ways.
The final product is now encapsulated within a modular Python script named processor.py, which is supported by custom modules for chromakeying, negatives, light simulation, and file input/output operations. This script can be directed at an individual image or an entire video, enabling the application of various filters or modes to achieve consistent results. While there are opportunities for further development, such as adding contrast/blur filters or real-time preview capabilities, the current foundation is robust.
If you're working with visuals, even casually, I hope this walkthrough sparks some ideas. The scriptability of this Python code also opens the door to automation; for instance, it could batch-process videos of a large group of people shot against a consistent background. There’s a lot of power in understanding not just the what of an effect, but the why. And when you have code that can expose and manipulate those inner workings, you’re not just editing media, you’re learning how light, perception, and pixels behave.
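As a sketch of that batch idea: a small helper that generates one processor.py invocation per clip. The clip names and chroma-key settings below are hypothetical; they just mirror the usage examples earlier.

```python
import shlex

def build_batch_commands(clips, background="source/weather_map.png", color="00FF00"):
    """Build one chroma-key processor.py command line per input clip."""
    commands = []
    for clip in clips:
        # Derive a per-clip sequence name from the file name.
        stem = clip.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        commands.append(
            f"python processor.py --inputfile {shlex.quote(clip)} "
            f"--outputfile {stem}-keyed.mp4 --operation chromakey "
            f"--color {color} --workingdirectory workdir "
            f"--sequencename {stem} "
            f"--backgroundsequence {shlex.quote(background)} --cleanup"
        )
    return commands

# Each generated string could then be handed to subprocess.run(shlex.split(cmd)).
for cmd in build_batch_commands(["source/alice.mp4", "source/bob.mp4"]):
    print(cmd)
```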
Let me know what other video filters you’d like to see added to this in the comments section below.
# chromakey.py
from PIL import Image
import numpy as np
import os
import glob
from logger import log
def hex_to_rgb(hexcolor):
"""
    Convert a hex color string to an RGB tuple. (Duplicated from negative.py; a candidate for consolidation.)
:param hexcolor:
:return:
"""
hexcolor = hexcolor.lstrip('#')
return tuple(int(hexcolor[i:i+2], 16) for i in (0, 2, 4))
def resize_to_match(img1, img2):
"""
    Resize the smaller image to match the larger image's dimensions (aspect ratio is not preserved).
:param img1:
:param img2:
:return:
"""
if img1.size == img2.size:
return img1, img2
if img1.size[0] * img1.size[1] > img2.size[0] * img2.size[1]:
img2 = img2.resize(img1.size, Image.BILINEAR)
else:
img1 = img1.resize(img2.size, Image.BILINEAR)
return img1, img2
def chroma_key(fg, bg, keycolor, tolerance, white_protect=180):
"""
Apply chroma key effect to the foreground image using the specified key color and background image.
:param fg:
:param bg:
:param keycolor:
:param tolerance:
:param white_protect:
:return:
"""
fg_data = np.array(fg.convert("RGBA"))
bg_data = np.array(bg.convert("RGBA"))
r, g, b = keycolor
    # Cast to float before subtracting; raw uint8 arithmetic would wrap around
    diff = np.sqrt(
        (fg_data[:, :, 0].astype(np.float32) - r) ** 2 +
        (fg_data[:, :, 1].astype(np.float32) - g) ** 2 +
        (fg_data[:, :, 2].astype(np.float32) - b) ** 2
    )
key_rgb = np.array([r, g, b])
dominant_channel = np.argmax(key_rgb)
other_channels = [i for i in range(3) if i != dominant_channel]
    # Compare as wider ints so the +10 margin cannot overflow uint8 values
    channels = fg_data[:, :, :3].astype(np.int16)
    pixel_dominant = (
        (channels[:, :, dominant_channel] > channels[:, :, other_channels[0]] + 10) &
        (channels[:, :, dominant_channel] > channels[:, :, other_channels[1]] + 10)
    )
luma = fg_data[:, :, :3].mean(axis=2)
is_bright = luma > white_protect
mask = (diff < tolerance) & pixel_dominant & (~is_bright)
output = np.where(mask[:, :, None], bg_data, fg_data)
return Image.fromarray(output, 'RGBA')
def process(inputfile, outputfile, keycolor, workdir, sequence_prefix, tolerance=30, background_sequence=None):
"""
Process a single image or a sequence of images for chroma keying.
:param inputfile:
:param outputfile:
:param keycolor:
:param workdir:
:param sequence_prefix:
:param tolerance:
:param background_sequence:
:return:
"""
key_rgb = hex_to_rgb(keycolor)
if os.path.isfile(inputfile):
if not background_sequence or not os.path.isfile(background_sequence):
print("Error: Background file required for single image input.")
return
try:
fg = Image.open(inputfile)
bg = Image.open(background_sequence)
fg, bg = resize_to_match(fg, bg)
result = chroma_key(fg, bg, key_rgb, tolerance)
result.save(outputfile)
except Exception as e:
print(f"Error processing single image: {e}")
else:
fg_frames = sorted(glob.glob(os.path.join(inputfile, f"{sequence_prefix}_*.png")))
if not fg_frames:
print(f"Error: No foreground frames found in {inputfile}")
return
use_static = False
static_bg = None
bg_frames = []
if background_sequence:
if os.path.isfile(background_sequence):
try:
static_bg = Image.open(background_sequence)
use_static = True
log(f"Using static background image: {background_sequence}")
except Exception as e:
print(f"Error loading static background: {e}")
return
elif os.path.isdir(background_sequence):
bg_frames = sorted(glob.glob(os.path.join(background_sequence, f"{sequence_prefix}_*.png")))
if not bg_frames:
print(f"Error: No background sequence found in {background_sequence}")
return
log(f"Using background frame sequence from: {background_sequence}")
else:
print(f"Error: Invalid background path: {background_sequence}")
return
else:
print("Error: No background sequence or static background provided.")
return
frame_count = len(fg_frames) if use_static else min(len(fg_frames), len(bg_frames))
for i in range(frame_count):
try:
if i % 30 == 0 or i == frame_count - 1:
log(f"Processing frame {i + 1} of {frame_count}")
fg = Image.open(fg_frames[i])
bg = static_bg.copy() if use_static else Image.open(bg_frames[i])
fg, bg = resize_to_match(fg, bg)
result = chroma_key(fg, bg, key_rgb, tolerance)
out_path = os.path.join(workdir, f"{sequence_prefix}_{i:04}.png")
result.save(out_path)
except Exception as e:
print(f"Frame {i} failed: {e}")
# fileio.py
import os
import glob
from logger import log
def ensure_working_directory(path):
"""
Ensure that the working directory exists. If it does not, create it.
:param path:
:return:
"""
if not os.path.exists(path):
os.makedirs(path)
def get_sequence_files(directory, prefix):
"""
Get a sorted list of files in the specified directory that match the given prefix.
:param directory:
:param prefix:
:return:
"""
pattern = os.path.join(directory, f"{prefix}_*.png")
return sorted(glob.glob(pattern))
def cleanup_sequence(root_dir, prefix):
"""
Clean up temporary files and directories created during processing.
:param root_dir:
:param prefix:
:return:
"""
paths_to_check = [root_dir,
os.path.join(root_dir, "inputframes"),
os.path.join(root_dir, "backgroundframes")]
for path in paths_to_check:
if not os.path.exists(path):
continue
files = get_sequence_files(path, prefix)
if not files:
log(f"No files found for prefix '{prefix}' in {path}")
for f in files:
try:
os.remove(f)
# log(f"Deleted file: {f}")
except Exception as e:
log(f"Failed to delete file: {f} — {e}")
if os.path.isdir(path) and not os.listdir(path):
try:
os.rmdir(path)
log(f"Deleted empty directory: {path}")
except Exception as e:
log(f"Failed to delete directory: {path} — {e}")
if os.path.isdir(root_dir) and not os.listdir(root_dir):
try:
os.rmdir(root_dir)
log(f"Deleted empty working directory: {root_dir}")
except Exception as e:
log(f"Failed to delete working directory: {root_dir} — {e}")
# logger.py
import datetime
def log(message):
"""
Log a message with a timestamp.
:param message:
:return:
"""
timestamp = datetime.datetime.now().strftime('%Y-%m-%d %I:%M:%S %p')
print(f'{timestamp} {message}')
# negative.py
import re
import numpy as np
import os
import glob
from PIL import Image, ImageOps, UnidentifiedImageError
from logger import log
def hex_to_rgb(hex_color):
"""
Convert a hex color string to an RGB tuple.
:param hex_color:
:return:
"""
hex_color = hex_color.lstrip('#')
if not re.match(r'^[0-9a-fA-F]{6}$', hex_color):
raise ValueError("Invalid hex color format. Use RRGGBB or #RRGGBB.")
return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
def create_negative(image_path):
"""
Create a negative image from the given image path.
:param image_path:
:return:
"""
try:
log(f"Creating negative for image: '{image_path}'")
img = Image.open(image_path)
img.load()
img_rgb = img.convert('RGB')
inverted_rgb = ImageOps.invert(img_rgb)
alpha_channel = img.convert('L')
negative_rgba = inverted_rgb.convert('RGBA')
negative_rgba.putalpha(alpha_channel)
return negative_rgba
except FileNotFoundError:
print(f"Error: File not found: '{image_path}'")
except UnidentifiedImageError:
print(f"Error: Unrecognized image format: '{image_path}'")
except Exception as e:
print(f"Error during negative creation: {e}")
return None
def apply_light(negative_image, light_color_rgb):
"""
Apply a light color to the negative image and return the corrected RGB image.
:param negative_image:
:param light_color_rgb:
:return:
"""
try:
if not isinstance(negative_image, Image.Image) or negative_image.mode != 'RGBA':
print("Error: Input must be an RGBA image.")
return None
color_layer = Image.new('RGB', negative_image.size, light_color_rgb).convert('RGBA')
intermediate_image = Image.alpha_composite(color_layer, negative_image)
inverted_composite_rgb = ImageOps.invert(intermediate_image.convert('RGB'))
alpha_channel = negative_image.getchannel('A')
rgb_array = np.array(inverted_composite_rgb, dtype=np.float32)
alpha_array = np.array(alpha_channel, dtype=np.float32)
scale_factor = np.zeros_like(alpha_array)
np.divide(255.0, alpha_array, out=scale_factor, where=alpha_array != 0)
scale_factor_rgb = np.expand_dims(scale_factor, axis=-1)
corrected_rgb_array = rgb_array * scale_factor_rgb
corrected_rgb_array = np.clip(corrected_rgb_array, 0, 255).astype(np.uint8)
return Image.fromarray(corrected_rgb_array, 'RGB')
except Exception as e:
print(f"Error during light application: {e}")
return None
def save_image(image, filename):
"""
Save the image to the specified filename.
:param image:
:param filename:
:return:
"""
if image is None or not filename:
print("Error: Invalid image or filename.")
return False
try:
if filename.lower().endswith(('.jpg', '.jpeg')) and image.mode == 'RGBA':
image = image.convert('RGB')
image.save(filename)
return True
except Exception as e:
        print(f"Error saving image '{filename}': {e}")
return False
def process(inputfile, outputfile, operation, color, workdir, sequence_prefix):
"""
Process images or sequences to create negative images or apply light color.
:param inputfile:
:param outputfile:
:param operation:
:param color:
:param workdir:
:param sequence_prefix:
:return:
"""
if os.path.isdir(inputfile):
frame_paths = sorted(glob.glob(os.path.join(inputfile, f"{sequence_prefix}_*.png")))
if not frame_paths:
log(f"No frames found in {inputfile} for prefix {sequence_prefix}")
return
if operation == 'negative':
for i, frame_path in enumerate(frame_paths):
img = create_negative(frame_path)
if img:
out_path = os.path.join(workdir, f"{sequence_prefix}_{i:04}.png")
save_image(img, out_path)
elif operation == 'negative-reimage':
try:
light_rgb = hex_to_rgb(color)
except ValueError as e:
log(f"Color Error: {e}")
return
for i, frame_path in enumerate(frame_paths):
img = create_negative(frame_path)
if img:
result = apply_light(img, light_rgb)
if result:
out_path = os.path.join(workdir, f"{sequence_prefix}_{i:04}.png")
save_image(result, out_path)
else:
if operation == 'negative':
img = create_negative(inputfile)
if img:
save_image(img, outputfile)
elif operation == 'negative-reimage':
img = create_negative(inputfile)
if img:
try:
light_rgb = hex_to_rgb(color)
except ValueError as e:
log(f"Color Error: {e}")
return
final_img = apply_light(img, light_rgb)
if final_img:
save_image(final_img, outputfile)
# processor.py
import argparse
import datetime
import os
import sys
from fileio import ensure_working_directory, cleanup_sequence
from logger import log
from videotoimage import extract_frames, frames_to_video
def define_args():
"""
Define command-line arguments using argparse for the image/video processing script.
:return:
"""
parser = argparse.ArgumentParser(description="Image/Video Processor")
parser.add_argument('--inputfile', required=True,
help='Input file (image, video, or image sequence dir)')
parser.add_argument('--outputfile', required=True, help='Output file (image or video)')
parser.add_argument('--operation', required=True,
choices=['negative', 'negative-reimage', 'chromakey'], help='Processing operation')
parser.add_argument('--color',
help='Hex color string (e.g. #00ff00) for chromakey or negative-reimage')
parser.add_argument('--workingdirectory',
required=True, help='Directory for temporary image sequences')
parser.add_argument('--sequencename',
help='Optional label to append to generated sequence')
parser.add_argument('--backgroundsequence', help='Background image or video or directory')
parser.add_argument('--cleanup', action='store_true',
help='Delete image sequence files after processing')
return parser.parse_args()
def is_video_file(path):
"""
Check if the given path is a video file based on its extension.
:param path:
:return:
"""
return path.lower().endswith(('.mp4', '.mov', '.avi', '.mkv', '.webm'))
def main():
"""
Main function to process images or videos based on command-line arguments.
:return:
"""
args = define_args()
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M')
sequence_prefix = f"{timestamp}_{args.sequencename}" if args.sequencename else timestamp
ensure_working_directory(args.workingdirectory)
log("Processing begins")
input_sequence_dir = args.inputfile
output_sequence_dir = args.workingdirectory
output_sequence_prefix = sequence_prefix
if is_video_file(args.inputfile):
input_sequence_dir = os.path.join(args.workingdirectory, "inputframes")
ensure_working_directory(input_sequence_dir)
log("Extracting frames from input video...")
if not extract_frames(args.inputfile, input_sequence_dir, output_sequence_prefix):
log("Failed to extract frames from input video.")
return
background_sequence_dir = args.backgroundsequence
if args.backgroundsequence and is_video_file(args.backgroundsequence):
background_sequence_dir = os.path.join(args.workingdirectory, "backgroundframes")
ensure_working_directory(background_sequence_dir)
log("Extracting frames from background video...")
if not extract_frames(args.backgroundsequence, background_sequence_dir, output_sequence_prefix):
log("Failed to extract frames from background video.")
return
if args.operation in ['negative', 'negative-reimage']:
from negative import process as negative_process
negative_process(
inputfile=input_sequence_dir,
outputfile=args.outputfile,
operation=args.operation,
color=args.color,
workdir=args.workingdirectory,
sequence_prefix=output_sequence_prefix
)
elif args.operation == 'chromakey':
from chromakey import process as chroma_process
chroma_process(
inputfile=input_sequence_dir,
outputfile=args.outputfile,
keycolor=args.color,
workdir=args.workingdirectory,
sequence_prefix=output_sequence_prefix,
background_sequence=background_sequence_dir
)
if is_video_file(args.outputfile):
log("Reassembling frames into output video...")
if not frames_to_video(args.workingdirectory, output_sequence_prefix, args.outputfile):
log("Failed to assemble output video.")
return
if args.cleanup:
cleanup_sequence(args.workingdirectory, output_sequence_prefix)
log("Temporary sequence files cleaned up")
if __name__ == '__main__':
main()
# videotoimage.py
import cv2
import os
import glob
def extract_frames(video_path, output_dir, prefix):
"""
Extract frames from a video file and save them as images in the specified directory.
:param video_path:
:param output_dir:
:param prefix:
:return:
"""
if not os.path.exists(video_path):
print(f"Error: Video not found: {video_path}")
return False
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
print(f"Error: Cannot open video: {video_path}")
return False
os.makedirs(output_dir, exist_ok=True)
frame_index = 0
success = True
while True:
ret, frame = cap.read()
if not ret:
break
filename = os.path.join(output_dir, f"{prefix}_{frame_index:04}.png")
if not cv2.imwrite(filename, frame):
print(f"Error: Failed to write frame {frame_index}")
success = False
frame_index += 1
cap.release()
return success
def frames_to_video(input_dir, prefix, output_path, fps=30):
"""
Convert a sequence of images into a video file.
:param input_dir:
:param prefix:
:param output_path:
:param fps:
:return:
"""
pattern = os.path.join(input_dir, f"{prefix}_*.png")
images = sorted(glob.glob(pattern))
if not images:
print(f"Error: No images found with prefix {prefix}")
return False
    first_frame = cv2.imread(images[0])
    if first_frame is None:
        print(f"Error: Cannot read first frame {images[0]}")
        return False
    height, width, layers = first_frame.shape
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
for img_path in images:
frame = cv2.imread(img_path)
if frame is None:
print(f"Warning: Skipping unreadable frame {img_path}")
continue
out.write(frame)
out.release()
return True