🎥 FFmpeg Rust Scripts for Linux, Windows, Mac, NixOS and FreeBSD

FFmpeg Rust Scripts is a collection of high-performance utilities designed to automate common video and audio editing tasks.
The tools are organized into functional categories.
- Scene Detection & Cutting: Automatically identify scene changes and split videos into logical segments.
- Trimming & Clipping: Precise tools for extracting clips using timestamps, durations, or remote URLs.
- Chapters & Metadata: Extract, create, and manage metadata and chapter markers for media files.
- Overlays & Transitions: Add fades, crossfades, Picture-in-Picture (PiP) effects, and Ken Burns animations.
- Visualization & Analysis: Generate waveforms, scopes, and measure EBU R128 loudness levels.
- Conversion & Extras: Create high-quality GIFs, WebP images, and handle time format conversions.
This project is designed to be cross-platform and work on:
- NixOS (via Home Manager)
- Linux (standard distributions like Ubuntu, Arch, etc.)
- Windows (Windows 10/11)
- macOS
- FreeBSD
Statically Compiled Binaries: Provided for NixOS, Linux, and Windows.
These are standalone executables that do not require a Rust runtime to be installed on your system.
Manual Compilation: For macOS and FreeBSD, you will need to install the Rust toolchain and compile the scripts from source to ensure compatibility with your specific system architecture.
Choose your operating system below for specific installation steps, including dependency management and environment path configuration.
🎥 NixOS FFmpeg Rust Scripts install
Install dependencies with home-manager by adding the following packages.

https://github.com/nix-community/home-manager

```
ffmpeg-full yt-dlp deno fd
```
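As a sketch, the packages above can be declared in your home.nix; the exact module layout varies between Home Manager setups, so treat this as illustrative rather than a drop-in config:

```nix
{ pkgs, ... }:
{
  # ffmpeg-full includes ffplay, which the playback scripts rely on
  home.packages = with pkgs; [
    ffmpeg-full
    yt-dlp
    deno
    fd
  ];
}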
Create a bin directory in your home folder to store the scripts.

```sh
mkdir -p ~/bin
```

~/.zshenv
If you are using zsh add the following code to your ~/.zshenv file
This sets up the PATH for the rust scripts and configures the ffplay video driver.
```sh
typeset -U PATH path
path=("$HOME/bin" "$path[@]")
export PATH
```

To ensure ffplay renders correctly on NixOS, you must export the correct SDL_VIDEODRIVER for your display server (Wayland or X11).
Set ffplay driver: use either 'wayland' or 'x11'

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Source your ~/.zshenv if you are using the zsh shell

```sh
source ~/.zshenv
```

~/.bashrc
If you are using bash add the following code to your ~/.bashrc
This sets up the PATH for the rust scripts and configures the ffplay video driver.
```sh
if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi
```

To ensure ffplay renders correctly on NixOS, you must export the correct SDL_VIDEODRIVER for your display server (Wayland or X11).

Set ffplay driver: use either 'wayland' or 'x11'

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Source your ~/.bashrc if you are using the bash shell

```sh
source ~/.bashrc
```

Download the latest NixOS release archive and its sha256 checksum, then follow these steps to install the binaries.
https://github.com/NapoleonWils0n/ffmpeg-rust-scripts/releases
Open your terminal and run the following command:

```sh
sha256sum -c nixos-ffmpeg-rust-scripts-v1.tar.gz.sha256
```

Expected output: nixos-ffmpeg-rust-scripts-v1.tar.gz: OK

Note: Replace v1 with the actual version number you downloaded.

```sh
tar -xf nixos-ffmpeg-rust-scripts-v1.tar.gz
```

Move the extracted binaries into your local ~/bin directory.

```sh
mv nixos-ffmpeg-rust-scripts-v1/* ~/bin/
```

🎥 Linux FFmpeg Rust Scripts install
Install ffmpeg on Debian or Ubuntu; for other Linux distros see the documentation for your package manager.

```sh
sudo apt install ffmpeg
```

Install fd for batch processing

```sh
sudo apt install fd-find
```

Note: On Debian and Ubuntu, in the batch processing examples provided in this README, you will need to replace fd with fdfind.
yt-dlp is needed for trim-remote-clip

```sh
curl -L 'https://github.com/yt-dlp/yt-dlp/releases/download/2025.12.08/yt-dlp' -o ~/bin/yt-dlp
chmod +x ~/bin/yt-dlp
```

Upgrade yt-dlp:

```sh
yt-dlp -U
```

Install deno:

```sh
curl -fsSL https://deno.land/install.sh | sh
```

Upgrade deno:

```sh
deno upgrade
```

Create a bin directory in your home folder to store the scripts.
```sh
mkdir -p ~/bin
```

~/.zshenv
If you are using zsh add the following code to your ~/.zshenv file
This sets up the PATH for the rust scripts, yt-dlp, and deno, and configures the ffplay video driver.
```sh
typeset -U PATH path
path=("$HOME/bin" "${HOME}/.deno/bin" "$path[@]")
export PATH
```

To ensure ffplay renders correctly on Linux, you must export the correct SDL_VIDEODRIVER for your display server (Wayland or X11).
Set ffplay driver: use either 'wayland' or 'x11'

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Source your ~/.zshenv if you are using the zsh shell

```sh
source ~/.zshenv
```

~/.bashrc
If you are using bash add the following code to your ~/.bashrc
This sets up the PATH for the rust scripts, yt-dlp, and deno, and configures the ffplay video driver.
```sh
if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$HOME/.deno/bin:$PATH"
fi
```

To ensure ffplay renders correctly on Linux, you must export the correct SDL_VIDEODRIVER for your display server (Wayland or X11).

Set ffplay driver: use either 'wayland' or 'x11'

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Source your ~/.bashrc if you are using the bash shell

```sh
source ~/.bashrc
```

Download the latest Linux release archive and its sha256 checksum, then follow these steps to install the binaries.
https://github.com/NapoleonWils0n/ffmpeg-rust-scripts/releases
Open your terminal and run the following command:

```sh
sha256sum -c linux-ffmpeg-rust-scripts-v1.tar.gz.sha256
```

Expected output: linux-ffmpeg-rust-scripts-v1.tar.gz: OK

Note: Replace v1 with the actual version number you downloaded.

```sh
tar -xf linux-ffmpeg-rust-scripts-v1.tar.gz
```

Move the extracted binaries into your local ~/bin directory.

```sh
mv linux-ffmpeg-rust-scripts-v1/* ~/bin/
```

🎥 Windows FFmpeg Rust Scripts install
This section covers how to install the necessary dependencies and the Rust scripts on Windows.
We recommend using the Chocolatey package manager to install ffmpeg, yt-dlp, and deno quickly. However, you can also install these tools manually from their respective websites if you prefer.
First, open PowerShell with administrator privileges. This is important, as you will be unable to install Chocolatey without them.

Now type the following command to check the PowerShell execution policy:

```powershell
Get-ExecutionPolicy
```
If it says 'Restricted', run the following command in the same elevated PowerShell session:

```powershell
Set-ExecutionPolicy AllSigned
```
Type 'Y' and press Enter when asked for confirmation.

Now run the following command:

```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
```
Install dependencies with Chocolatey.

```powershell
choco install ffmpeg yt-dlp deno fd
```

You need to put the .exe files in a place where Windows can find them.
We recommend creating a bin folder in your User directory.
This is the easiest method and should be performed as a regular user (you do not need Administrator privileges).
Open PowerShell and paste the following code.
It will create the C:\Users\YourName\bin folder and automatically add it to your Path.
```powershell
$binDir = "$HOME\bin"; if (!(Test-Path $binDir)) { New-Item -ItemType Directory -Path $binDir }; $oldPath = [Environment]::GetEnvironmentVariable("Path", "User"); if ($oldPath -notlike "*$binDir*") { $newPath = "$oldPath;$binDir".Replace(";;", ";"); [Environment]::SetEnvironmentVariable("Path", $newPath, "User"); $env:Path = [Environment]::GetEnvironmentVariable("Path", "User") + ";" + [Environment]::GetEnvironmentVariable("Path", "Machine"); Write-Host "Success! Folder created and Path updated." } else { Write-Host "Path already exists." }
```

After running the command, restart PowerShell for the changes to take effect.
If you prefer to manage your folders manually, follow these steps:
- Create a folder named bin in your User directory (e.g., C:\Users\YourName\bin).
- Open the Start Menu, search for Edit the system environment variables, and open it.
- Click Environment Variables.
- Under User variables, select Path and click Edit.
- Click New and paste the full path to your bin folder.
- Click OK on all windows to save.
Because these scripts are unsigned and perform system-level tasks (like calling FFmpeg), Windows Defender may flag them as a False Positive threat.
To prevent the scripts from being quarantined or deleted, add your bin folder as an exclusion.
- Open the Start Menu and type Windows Security, then press Enter.
- Go to Virus & threat protection.
- Under Virus & threat protection settings, click Manage settings.
- Scroll down to Exclusions and click Add or remove exclusions.
- Click Add an exclusion and select Folder.
- Browse to and select the bin folder you created for these scripts.
This tells Windows Defender to trust all binaries within that specific directory.
Download the latest Windows release archive and its sha256 checksum, then follow these steps to install the binaries.
https://github.com/NapoleonWils0n/ffmpeg-rust-scripts/releases
Open PowerShell and run the following commands to compare the hash:
- View the expected hash

```powershell
cat .\windows-ffmpeg-rust-scripts-v1.zip.sha256
```

- Calculate the actual hash

```powershell
Get-FileHash .\windows-ffmpeg-rust-scripts-v1.zip -Algorithm SHA256
```

Note: Ensure the hash strings match.
Alternatively, if you have Git Bash installed on Windows, you can use the sha256sum command:

```sh
sha256sum -c windows-ffmpeg-rust-scripts-v1.zip.sha256
```

Before extracting the scripts, you must unblock the downloaded file.
Windows often restricts files downloaded from the internet, which can cause the scripts to fail or be deleted immediately upon extraction.
Note: In the steps below, replace “v1” with the specific version number of the release you downloaded (e.g., v1.1.0).
- Right-click the downloaded windows-ffmpeg-rust-scripts-v1.zip.
- Select Properties.
- At the bottom of the General tab, look for the Security section.
- Check the box labeled Unblock.
- Click Apply and then OK.
Use the Windows File Manager (Explorer) to extract the zip file to ensure all permissions are handled correctly.
- Right-click the windows-ffmpeg-rust-scripts-v1.zip file and select Extract All.
- Follow the prompts to finish the extraction.
- Move all the .exe files from the extracted folder into the bin directory you created in the previous step.
Open a new PowerShell window and run the version command for one of the scripts to verify it is working:
```powershell
extract-frame -v
```

🎥 Mac FFmpeg Rust Scripts install
On macOS, you will need to install the development tools and the Rust compiler to build the scripts from source.
Install the Xcode Command Line Tools to provide the necessary compilers, by running the following command in the terminal.
```sh
xcode-select --install
```

Install the Rust toolchain using rustup.

```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

Install the Homebrew package manager by running the following command in the terminal.

```sh
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

Note: When the installer finishes, look for the "Next steps" section in your terminal. You must run the commands provided there to add Homebrew to your system PATH.

Install dependencies with Homebrew by running the following command in the terminal.

```sh
brew install ffmpeg yt-dlp deno fd
```

Create a bin directory in your home folder to store the scripts.
```sh
mkdir -p ~/bin
```

~/.zshenv
Add the following code to your ~/.zshenv file. This is the default shell for macOS.
This sets up the PATH for the rust scripts.
```sh
typeset -U PATH path
path=("$HOME/bin" "$path[@]")
export PATH
```

Source the configuration to apply changes.

```sh
source ~/.zshenv
```

~/.bashrc
If you are using bash add the following code to your ~/.bashrc
This sets up the PATH for the rust scripts.
```sh
if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi
```

Source your ~/.bashrc if you are using the bash shell

```sh
source ~/.bashrc
```

Since macOS requires manual compilation, follow these steps to build the binaries and move them to your bin folder.
Create a directory to store the git repository in your home directory.
```sh
mkdir -p ~/git
```

Change into the git directory in your home.

```sh
cd ~/git
```

Run the following command in the terminal to clone the ffmpeg-rust-scripts git repository.

```sh
git clone https://github.com/NapoleonWils0n/ffmpeg-rust-scripts
```

Change directory into the ffmpeg-rust-scripts git repository.

```sh
cd ffmpeg-rust-scripts
```

Build the project with Cargo.

```sh
cargo build --release
```

When you run cargo build --release, Rust places the binaries in target/release/

Copy the scripts to the bin directory in your home.

```sh
cp target/release/* ~/bin/
```

Note: You may see some non-executable files in the bin folder after this command (like .d files). This is normal, but you only need the files that match the script names.
🎥 FreeBSD FFmpeg Rust Scripts install

Install the required tools and the Rust toolchain using the pkg manager. Run the following command in the terminal to install Rust.

```sh
sudo pkg install rust
```

Install dependencies with pkg.

```sh
sudo pkg install ffmpeg yt-dlp deno fd-find
```

You can also install ffmpeg from ports, or use poudriere to build the ffmpeg package.

Note: the ebu-meter script uses ffplay, which isn't installed with the ffmpeg package, so you need to build ffmpeg with the sdl option enabled from ports or with poudriere.

If you want to use the libfdk_aac audio encoder you should also enable that option when building the ffmpeg port, and build the lame package for mp3 support.
Create a bin directory in your home folder to store the scripts.
```sh
mkdir -p ~/bin
```

~/.profile
For the default shell (sh or tcsh) on FreeBSD, edit your ~/.profile and add this code to the file.

```sh
if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi
```

Set ffplay driver: use either 'wayland' or 'x11'

Note: for the sh shell you need to use export

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Note: for the tcsh shell you need to use setenv

- wayland

```sh
setenv SDL_VIDEODRIVER wayland
```

- X11

```sh
setenv SDL_VIDEODRIVER x11
```

Reload your ~/.profile

```sh
. ~/.profile
```

~/.zshenv
If you are using zsh add the following code to your ~/.zshenv file
This sets up the PATH for the rust scripts.
```sh
typeset -U PATH path
path=("$HOME/bin" "$path[@]")
export PATH
```

To ensure ffplay renders correctly on FreeBSD, you must export the correct SDL_VIDEODRIVER for your display server (Wayland or X11).

Set ffplay driver: use 'wayland' or 'x11'

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Source your ~/.zshenv if you are using the zsh shell

```sh
source ~/.zshenv
```

~/.bashrc
If you are using bash add the following code to your ~/.bashrc
This sets up the PATH for the rust scripts.
```sh
if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi
```

To ensure ffplay renders correctly on FreeBSD, you must export the correct SDL_VIDEODRIVER for your display server (Wayland or X11).

Set ffplay driver: use 'wayland' or 'x11'

- wayland

```sh
export SDL_VIDEODRIVER=wayland
```

- X11

```sh
export SDL_VIDEODRIVER=x11
```

Source your ~/.bashrc if you are using the bash shell

```sh
source ~/.bashrc
```

Since FreeBSD requires manual compilation, follow these steps to build the binaries and move them to your bin folder.
Create a directory to store the git repository in your home directory.
```sh
mkdir -p ~/git
```

Change into the git directory in your home.

```sh
cd ~/git
```

Run the following command in the terminal to clone the ffmpeg-rust-scripts git repository.

```sh
git clone https://github.com/NapoleonWils0n/ffmpeg-rust-scripts
```

Change directory into the ffmpeg-rust-scripts git repository.

```sh
cd ffmpeg-rust-scripts
```

Build the project with Cargo.

```sh
cargo build --release
```

When you run cargo build --release, Rust places the binaries in target/release/

Copy the scripts to the bin directory in your home.

```sh
cp target/release/* ~/bin/
```

Note: You may see some non-executable files in the bin folder after this command (like .d files). This is normal, but you only need the files that match the script names.
🎥 trim-clip

The trim-clip script allows for precise trimming of video or audio files with millisecond accuracy. It uses FFmpeg Input Seeking to ensure the extraction is both fast and frame-accurate.
You can use two different time unit formats:
- Sexagesimal: (HOURS:MM:SS.MILLISECONDS, e.g., 01:23:45.678)
- Seconds: (e.g., 150.5)
Note: 02:30.5 is interpreted as 2 minutes, 30 seconds, and a half-second
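Both formats reduce to the same number of seconds. The sketch below illustrates the conversion; the function is hypothetical and is not taken from the scripts' source:

```rust
// Convert either sexagesimal ("HH:MM:SS.mmm" or "MM:SS.mmm") or plain seconds
// into an f64 second count. Illustrative sketch only, not the scripts' code.
fn to_seconds(ts: &str) -> Option<f64> {
    if ts.contains(':') {
        // Each colon-separated field multiplies the running total by 60
        ts.split(':')
            .try_fold(0.0, |acc, part| Some(acc * 60.0 + part.parse::<f64>().ok()?))
    } else {
        ts.parse::<f64>().ok()
    }
}

fn main() {
    // "02:30.5" is 2 minutes and 30.5 seconds
    assert_eq!(to_seconds("02:30.5"), Some(150.5));
    assert_eq!(to_seconds("150.5"), Some(150.5));
    // "01:23:45.678" is 1 hour, 23 minutes, 45.678 seconds
    assert!((to_seconds("01:23:45.678").unwrap() - 5025.678).abs() < 1e-6);
    println!("ok");
}
```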
Provide the start time -s, the input file -i, and the end time -t.
Note: The end time -t is the number of seconds (duration) after the start time.
```sh
trim-clip -s 00:00:30 -i input.mp4 -t 00:00:30 -o clip.mp4
```

This example creates a 30-second clip starting at the 30-second mark and ending at 60 seconds. If you omit the output -o option, the script will automatically name the file: input-name-[start-end].mp4
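The default naming can be sketched as follows; this helper is hypothetical and only mirrors the pattern the README describes (file stem, bracketed start-end range, original extension):

```rust
use std::path::Path;

// Build the default output name "stem-[start-end].ext" from the input path.
// Hypothetical helper illustrating the naming pattern, not the script's code.
fn default_outfile(input: &str, start: &str, end: &str) -> String {
    let p = Path::new(input);
    let stem = p.file_stem().and_then(|s| s.to_str()).unwrap_or("output");
    let ext = p.extension().and_then(|s| s.to_str()).unwrap_or("mp4");
    format!("{stem}-[{start}-{end}].{ext}")
}

fn main() {
    assert_eq!(
        default_outfile("input.mp4", "00:00:30", "00:01:00"),
        "input-[00:00:30-00:01:00].mp4"
    );
    println!("ok");
}
```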
Batch process files in the current working directory using fd.
Note: we omit the -o option to use the default outfile name: infile-name-[start-end].ext
The script supports many file types (mp4, mkv, mov, webm, wav, mp3, m4a, ogg). To batch process, specify the extension you want to search for using the -e flag.
Batch trim the first 30 seconds of all mp4 files in the current directory:
```sh
fd -e mp4 -x trim-clip -s 00:00:00 -i {} -t 00:00:30
```

Batch trim the first 30 seconds of all mkv files in the current directory:

```sh
fd -e mkv -x trim-clip -s 00:00:00 -i {} -t 00:00:30
```

Batch trim the first 30 seconds of all mp3 files in the current directory:

```sh
fd -e mp3 -x trim-clip -s 00:00:00 -i {} -t 00:00:30
```

Usage:

```sh
trim-clip -s 00:00:00.000 -i input -t 00:00:00.000 -o output
```

Run the script with the -h option to show the help.
```sh
trim-clip -h
```

Help output.

```
trim video or audio clips with millisecond accuracy
https://trac.ffmpeg.org/wiki/Seeking

Usage: trim-clip [OPTIONS] -s <START> -i <INFILE> -t <DURATION>

Options:
  -s <START>     start time
  -i <INFILE>    input file
  -t <DURATION>  number of seconds after start time
  -o <OUTFILE>   optional output file
  -h, --help     Print help
  -v, --version  Print version

Example: trim-clip -s 00:00:30 -i input -t 00:00:30 -o output

This will create a 30 second clip starting at 30 seconds and ending at 60 seconds.

Dependencies:
ffmpeg: https://www.ffmpeg.org/

Notes:
If -o is not provided, defaults to: input-name-[start-end].ext
```
🎥 trim-clip-to

The trim-clip-to script allows for precise trimming of video or audio files with millisecond accuracy. It uses FFmpeg Input Seeking to ensure the extraction is both fast and frame-accurate.
You can use two different time unit formats:
- Sexagesimal: (HOURS:MM:SS.MILLISECONDS, e.g., 01:23:45.678)
- Seconds: (e.g., 150.5)
Note: 02:30.5 is interpreted as 2 minutes, 30 seconds, and a half-second
Provide the start time -s, the input file -i, and the End Timestamp -t.
Note: Unlike trim-clip, the -t option here is the exact point on the timeline where you want the clip to stop.
```sh
trim-clip-to -s 00:00:30 -i input.mp4 -t 00:01:30 -o clip.mp4
```

This example creates a 1-minute clip starting at the 30-second mark and ending exactly at 1 minute and 30 seconds. If you omit the output -o option, the script will automatically name the file: input-name-[start-end].mp4
Batch process files in the current working directory using fd.
Note: we omit the -o option to use the default outfile name: infile-name-[start-end].ext
The script supports many file types (mp4, mkv, mov, webm, wav, mp3, m4a, ogg). To batch process, specify the extension you want to search for using the -e flag.
Batch trim all mp4 files in the current directory from 30 seconds to 1 minute 30:

```sh
fd -e mp4 -x trim-clip-to -s 00:00:30 -i {} -t 00:01:30
```

Batch trim all mkv files in the current directory from 30 seconds to 1 minute 30:

```sh
fd -e mkv -x trim-clip-to -s 00:00:30 -i {} -t 00:01:30
```

Batch trim all mp3 files in the current directory from 30 seconds to 1 minute 30:

```sh
fd -e mp3 -x trim-clip-to -s 00:00:30 -i {} -t 00:01:30
```

Usage:

```sh
trim-clip-to -s 00:00:00.000 -i input -t 00:00:00.000 -o output
```

Run the script with the -h option to show the help.
```sh
trim-clip-to -h
```

Help output.

```
trim video or audio clips using start and end timestamps

Usage: trim-clip-to [OPTIONS] -s <START> -i <INFILE> -t <END>

Options:
  -s <START>     start time
  -i <INFILE>    input file
  -t <END>       end time
  -o <OUTFILE>   optional output file
  -h, --help     Print help
  -v, --version  Print version

Example: trim-clip-to -s 00:00:45 -i input.mkv -t 00:01:30

This creates a 45s clip starting at 45s and ending at 1m 30s.

Dependencies:
ffmpeg: https://www.ffmpeg.org/
```
🎥 trim-remote-clip

The trim-remote-clip script allows you to download and trim a specific segment of an online video without downloading the entire file. It leverages yt-dlp to fetch the stream and FFmpeg to extract the clip with millisecond accuracy.
You can use two different time unit formats:
- Sexagesimal: (HOURS:MM:SS.MILLISECONDS, e.g., 01:23:45.678)
- Seconds: (e.g., 150.5)
Note: 02:30.5 is interpreted as 2 minutes, 30 seconds, and a half-second
To change the quality of the downloaded clip, you should specify your preferred format options in your yt-dlp configuration file.
Provide the start time -s, the url -i, and the end time -t.
```sh
trim-remote-clip -s 00:00:30 -i url -t 00:01:30 -o clip.mp4
```

This example creates a 1-minute clip starting at the 30-second mark and ending at 1 minute 30 seconds. If you omit the output -o option, the script will automatically name the file: Title-[start-end].mp4

Usage:

```sh
trim-remote-clip -s 00:00:00.000 -i url -t 00:00:00.000 -o output
```

Run the script with the -h option to show the help.

```sh
trim-remote-clip -h
```

Help output.

```
Trim remote video clips with millisecond accuracy

Usage: trim-remote-clip [OPTIONS] -s <START> -t <END> -i <INPUT>

Options:
  -s <START>     Start time (HH:MM:SS.mmm)
  -t <END>       End time (HH:MM:SS.mmm)
  -i <INPUT>     Input URL (YouTube, Vimeo, etc.)
  -o <OUTFILE>   Output filename (optional, defaults to Title-[start-end].mp4)
  -h, --help     Print help
  -v, --version  Print version

Example: trim-remote-clip -s 00:01:00 -t 00:01:30 -i 'URL' -o clip.mp4

Dependencies:
ffmpeg: https://www.ffmpeg.org/
yt-dlp: https://github.com/yt-dlp/yt-dlp
deno: https://deno.com/
```
🎥 trim-short

The trim-short script creates vertical 9:16 clips from horizontal video sources. It automatically crops and scales the footage to 1080x1920 resolution, making it ready for YouTube Shorts or TikTok.
You can use two different time unit formats:
- Sexagesimal: (HOURS:MM:SS.MILLISECONDS, e.g., 01:23:45.678)
- Seconds: (e.g., 150.5)
Note: 02:30.5 is interpreted as 2 minutes, 30 seconds, and a half-second
Note: If no end time -t is provided, the script defaults to a 60-second duration.
Since you are cropping a horizontal video into a vertical one, you can specify which part of the frame to keep using the -x option (percentage).
- 0: Left side of the frame.
- 25: Between the left and the center.
- 50: Center (Default).
- 75: Between the center and the right.
- 100: Right side of the frame.
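The percentage maps to a horizontal offset of the 9:16 crop window. The arithmetic below is an assumed sketch of that mapping, not the script's actual source:

```rust
// Compute the crop window x-offset for a 9:16 vertical crop of a wider frame.
// Assumed arithmetic for illustration; the script's internals may differ.
fn crop_x_offset(in_width: u32, in_height: u32, x_percent: u32) -> u32 {
    let crop_width = in_height * 9 / 16; // width of a full-height 9:16 window
    let max_offset = in_width.saturating_sub(crop_width);
    // 0 = left edge, 50 = centered, 100 = right edge
    max_offset * x_percent / 100
}

fn main() {
    // 1920x1080 source: the crop window is 607 px wide (1080 * 9 / 16)
    assert_eq!(crop_x_offset(1920, 1080, 0), 0);      // left edge
    assert_eq!(crop_x_offset(1920, 1080, 50), 656);   // centered
    assert_eq!(crop_x_offset(1920, 1080, 100), 1313); // right edge: 1920 - 607
    println!("ok");
}
```

The cropped window would then be scaled up to 1080x1920 for the final short.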
Provide the input file -i, start time -s, and optional end time -t
Default length of 60 seconds and centered in the middle of the video.
```sh
trim-short -i input.mp4 -s 00:00:10
```

30-second clip using the -t option and centered in the middle of the video.

```sh
trim-short -i input.mp4 -s 00:00:10 -t 00:00:40
```

30-second clip using the -t option and the -x option set to 75 for the 3/4 position in the video.

```sh
trim-short -i input.mp4 -s 00:00:10 -t 00:00:40 -x 75
```

The script supports many file types (mp4, mkv, mov, webm). To batch process, specify the extension you want to search for using the -e flag with fd.
Batch create 60-second shorts from the start of all .mp4 files (Centered):
```sh
fd -e mp4 -x trim-short -i {} -s 00:00:00
```

Batch create shorts from all .mkv files starting at 30s, cropped to the left (25%):

```sh
fd -e mkv -x trim-short -i {} -s 00:00:30 -x 25
```

Batch trim all mp4 files in the current directory from 30 seconds to 1 minute 30, using the script's -x option to specify the X position:

```sh
fd -e mp4 -x trim-short -s 00:00:30 -i {} -t 00:01:30 -x 25
```

Note: we omit the -o option to use the default outfile name: infile-name-short-[start-end].ext

Usage:

```sh
trim-short -s 00:00:00.000 -i input -t 00:00:00.000 -o output
```

Run the script with the -h option to show the help.

```sh
trim-short -h
```

Help output.

```
Create a 9:16 vertical clip for YouTube Shorts or TikTok

Usage: trim-short -i <INPUT> -s <START> [OPTIONS]

Options:
  -i <INFILE>    Input file
  -s <START>     Start time (HH:MM:SS.mmm)
  -t <END>       End time (optional, defaults to +60s)
  -x <X_POS>     X-position percentage (0, 25, 50, 75, 100) [default: 50]
  -o <OUTFILE>   Optional output file
  -h, --help     Print help
  -v, --version  Print version

Example: trim-short -i input.mp4 -s 00:00:10 -x 75

Dependencies:
ffmpeg: https://www.ffmpeg.org/
```
🎥 clip-time

The clip-time script converts a simple list of "Start" and "End" timestamps into a cutlist format (start,duration). This cutlist is then used by the scene-cut-to script to automate the extraction of multiple clips or images from a single video.
The input text file should contain one timestamp per line. The script processes these in pairs: the first line is the start of the clip, and the second line is the end.
```
00:00:10.000
00:00:20.000
00:01:30.000
00:01:45.500
```
Provide the input file containing your timestamps -i and the optional output filename -o.
```sh
clip-time -i timestamps.txt -o cutlist.txt
```

If you omit the output -o option, the script will automatically name the file: input-name-cutlist.txt.

The resulting file will be formatted as start,duration, which is the required input for the scene-cut-to script.

Example output.

```
00:00:10.000,00:00:10.000
00:01:30.000,00:00:15.500
```
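The conversion clip-time performs can be sketched in a few lines; the helper names and formatting details here are assumed for illustration, not taken from the script:

```rust
// Turn a (start, end) timestamp pair into "start,duration" cutlist form.
// Sketch of the conversion clip-time performs; details are assumed.
fn to_cutlist_line(start: &str, end: &str) -> String {
    let s = hms_to_secs(start);
    let e = hms_to_secs(end);
    format!("{},{}", start, secs_to_hms(e - s))
}

// "HH:MM:SS.mmm" -> seconds
fn hms_to_secs(ts: &str) -> f64 {
    ts.split(':').fold(0.0, |acc, p| acc * 60.0 + p.parse::<f64>().unwrap())
}

// seconds -> "HH:MM:SS.mmm"
fn secs_to_hms(total: f64) -> String {
    let ms = (total * 1000.0).round() as u64;
    let (h, m, s, milli) = (ms / 3_600_000, ms / 60_000 % 60, ms / 1000 % 60, ms % 1000);
    format!("{:02}:{:02}:{:02}.{:03}", h, m, s, milli)
}

fn main() {
    assert_eq!(
        to_cutlist_line("00:00:10.000", "00:00:20.000"),
        "00:00:10.000,00:00:10.000"
    );
    assert_eq!(
        to_cutlist_line("00:01:30.000", "00:01:45.500"),
        "00:01:30.000,00:00:15.500"
    );
    println!("ok");
}
```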
Usage:

```sh
clip-time -i input -o output
```

Run the script with the -h option to show the help.

```sh
clip-time -h
```

Help output.

```
convert a list of timestamps into an ffmpeg cutlist

Usage: clip-time [OPTIONS] -i <INPUT>

Options:
  -i <INPUT>     input file containing timestamps
  -o <OUTFILE>   output file for cutlist
  -h, --help     Print help
  -v, --version  Print version

Example: clip-time -i timestamps.txt -o cutlist.txt

Input format:
00:00:00
00:00:10
(Pairs represent start and end of a clip)

Dependencies:
ffmpeg: https://www.ffmpeg.org/
```
🎥 scene-cut-to

The scene-cut-to script automates the process of cutting a single video into multiple individual clips. It reads a cutlist (created by clip-time) and uses end-point seeking to ensure every clip is frame-accurate.
The script requires a text file where each line defines a clip using the format: start,duration.
The first entry is a 30-second clip starting at the beginning; the second is a 30-second clip starting at the 1-minute mark.

```
00:00:00,00:00:30
00:01:00,00:00:30
```
Provide the input video file -i and the cutlist file -c.
```sh
scene-cut-to -i input.mp4 -c cutlist.txt
```

The script will process every line in the cutlist and generate individual files named: input-name-scene-001-[start-end].mp4.

Usage:

```sh
scene-cut-to -i input -c cutfile
```

Run the script with the -h option to show the help.

```sh
scene-cut-to -h
```

Help output.

```
split video into clips using a start,duration cutlist by calculating end-point

Usage: scene-cut-to -i <INPUT> -c <CUTLIST> [OPTIONS]

Options:
  -i <INPUT>     input video file
  -c <CUTLIST>   cutlist file comma-separated start,duration
  -h, --help     Print help
  -v, --version  Print version

Example: scene-cut-to -i input.mp4 -c cutlist.txt

Dependencies:
ffmpeg: https://www.ffmpeg.org/
```
Example cutlist:

```
00:00:00,00:00:30
00:01:00,00:01:30
```
🎥 combine-clips

The combine-clips script merges a video file with an external audio file. Because it uses stream copying (-c copy), the process is nearly instantaneous and results in zero quality loss.
Provide the input video -i and the input audio -a.
```sh
combine-clips -i video.mp4 -a audio.wav -o combined-video.mp4
```

If you omit the output -o option, the script automatically generates a name based on the video title and its duration: video-name-combined-[HH:MM:SS].mp4.
You can batch-combine multiple pairs of files using fd. This is useful when you have several videos and matching audio files with the same base name.
```
file1.mp4
file1.wav
file2.mp4
file2.wav
```
To automatically pair and combine these, run:
```sh
fd -e mp4 -x combine-clips -i {} -a {.}.wav
```

Running this command will combine file1.mp4 with file1.wav, and file2.mp4 with file2.wav.
Note: The {.} syntax in fd strips the extension from the video file, allowing the script to find the matching .wav file automatically.
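What fd's {.} placeholder does here can be mirrored with Rust's standard library; this is only an illustration of the pairing, and the function name is hypothetical:

```rust
use std::path::Path;

// Derive the matching audio path the way fd's {.} placeholder does:
// strip the extension from the video path, then append ".wav".
// Illustrative only; combine-clips itself just receives both paths.
fn matching_audio(video: &str) -> String {
    Path::new(video).with_extension("wav").display().to_string()
}

fn main() {
    assert_eq!(matching_audio("file1.mp4"), "file1.wav");
    assert_eq!(matching_audio("clips/file2.mp4"), "clips/file2.wav");
    println!("ok");
}
```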
Usage:

```sh
combine-clips -i input -a audio -o output.mp4
```

Run the script with the -h option to show the help.

```sh
combine-clips -h
```

Help output.

```
Combine audio and video files

Usage: combine-clips -i <INPUT> -a <AUDIO> [OPTIONS]

Options:
  -i <INPUT>     Input video file
  -a <AUDIO>     Input audio file
  -o <OUTFILE>   Output file (optional)
  -v, --version  Print version
  -h, --help     Print help

Dependencies:
ffmpeg, ffprobe: https://www.ffmpeg.org/
```
🎥 waveform
The waveform script generates a static image representation of the audio levels from either a video or an audio file.
You can define the dimensions, colors, and image format.
Provide the input file and choose your desired color and format.
```sh
waveform -i input.mp4 -c orange -j png -o output.png
```

If the -o option is omitted, the script generates a default name based on the input file: input-waveform.jpg
If you have a directory of files and need to generate waveforms for all of them, you can use fd for high-speed batch processing.
```sh
fd -e mp4 -x waveform -i {}
```

This command finds every MP4 file and creates a corresponding JPG waveform using the default white color and 1280x420 dimensions.

Usage:

```sh
waveform -i input.mp4 -o output.jpg
```

Run the script with the -h option to show the help.

```sh
waveform -h
```

Help output.

```
create a waveform image from a video or audio file

Usage: waveform [OPTIONS] -i <INPUT>

Options:
  -i <INPUT>     input file
  -c <COLOR>     waveform color [default: white]
  -w <WIDTH>     output width [default: 1280]
  -e <HEIGHT>    output height [default: 420]
  -j <FORMAT>    image format jpg or png [default: jpg]
  -o <OUTFILE>   output file optional
  -h, --help     Print help
  -v, --version  Print version

Example: waveform -i input.mp4 -c orange -j jpg

Colors: https://ffmpeg.org/ffmpeg-utils.html#Color

Dependencies:
ffmpeg: https://www.ffmpeg.org/
```
🎥 scopes
The scopes script is a professional analysis tool that plays a video with technical scopes stacked vertically underneath.
It uses ffplay to provide a live playback window, and it automatically scales the scopes to match the width of your input video, ensuring a clean and consistent layout.
You can launch the playback with different analysis views using the following flags:
- -i: Displays a Histogram parade.
- -o: Displays an RGB Overlay waveform.
- -p: Displays an RGB Parade waveform.
- -s: Displays both the RGB Overlay and Parade stacked together.
- -w: Displays a standard Luma Waveform.
- -v: Displays a Vectorscope for color and saturation analysis.
To view your video with an RGB Parade, simply run:
```sh
scopes -p input.mp4
```

Note: Because this script uses ffplay for real-time visualization, it does not output a file; it opens an interactive playback window.
Run the script with the -h option to show the help.
```sh
scopes -h
```

Help output.

```
Display video with professional scopes stacked below

Usage: scopes [OPTIONS] <INPUT>

Arguments:
  <INPUT>  Input file

Options:
  -i  Display Histogram
  -o  Display RGB Overlay
  -p  Display RGB Parade
  -s  Display RGB Overlay and Parade
  -w  Display Waveform
  -v  Display Vectorscope
  -h, --help     Print help
  -V, --version  Print version

Example: scopes -w input.mp4

Dependencies:
ffplay: https://www.ffmpeg.org/
```
🎥 ebu-meter

The ebu-meter script provides a real-time visual representation of audio loudness levels according to the EBU R128 standard.
It is an essential tool for ensuring your audio meets broadcast or streaming loudness requirements by monitoring LUFS (Loudness Units relative to Full Scale).
You can set a specific target loudness level (in LUFS) using the -t flag.
The meter will visually calibrate itself to this reference point, helping you identify if your audio is too quiet or exceeding your limits.
To monitor the loudness of a file using the default target of -16 LUFS (standard for most streaming platforms):
```sh
ebu-meter -i input.mp4
```

Note: Because this script uses ffplay for real-time visualization, it opens an interactive playback window and does not generate an output file.
Run the script with the -h option to show the help.
ebu-meter -h
Help output.
display EBU R128 audio loudness meter

Usage: ebu-meter [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    input file
  -t <TARGET>    audio target level [default: -16]
  -h, --help     Print help
  -v, --version  Print version

Example: ebu-meter -i input.mp4 -t -16

Dependencies: ffplay: https://www.ffmpeg.org/
The contact-sheet script generates a tiled image of thumbnails representing the entire duration of a video.
It is ideal for getting a quick visual overview of a clip’s content, allowing you to customize the grid layout, thumbnail size, and background colors.
When using the -s (seek) option, you can specify the start time in two different formats:
Sexagesimal: HOURS:MM:SS.MILLISECONDS (e.g., 00:02:30.5).
Seconds: A simple numerical value (e.g., 150.5).
Note: In sexagesimal format, fractions are interpreted as fractions of a second (e.g., .5 is half a second), not as frame counts.
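Both formats reduce to the same value in seconds. A minimal sketch of the conversion (an illustrative helper, not part of the contact-sheet script itself):

```shell
# convert a seek value to seconds; accepts plain seconds or HH:MM:SS(.mmm)
to_seconds() {
  case "$1" in
    *:*) echo "$1" | awk -F: '{ printf "%g\n", $1 * 3600 + $2 * 60 + $3 }' ;;
    *)   echo "$1" | awk '{ printf "%g\n", $1 }' ;;
  esac
}
```

For example, to_seconds 00:02:30.5 prints 150.5, matching the note above.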
Generate a high-density 8x8 contact sheet with timestamps enabled:
contact-sheet -i input.mp4 -s 00:00:00 -w 320 -t 8x8 -x on -o output.png
The default start time is 00:00:05 to exclude black frames in videos that fade up from black; this can be overridden by using the -s option with 00:00:00.
If the -o option is omitted, the script generates a filename containing the time range: input-contact-[00:00:05–00:10:00].jpg
To quickly generate contact sheets for every MP4 in a directory, use fd. This example creates a 4x4 grid with white padding for each video.
fd -e mp4 -x contact-sheet -i {} -s 00:00:10 -w 200 -t 4x4 -p 7 -m 2 -c white
contact-sheet -i input -s 00:00:00 -w 320 -t 8x8 -x on -o output.png
Run the script with the -h option to show the help.
contact-sheet -h
Help output.
create an image with thumbnails from a video

Usage: contact-sheet [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>
  -s <SEEK>        [default: 00:00:05]
  -w <WIDTH>       [default: 160]
  -t <LAYOUT>      [default: 4x3]
  -p <PADDING>     [default: 7]
  -m <MARGIN>      [default: 2]
  -c <COLOR>       [default: black]
  -f <FONTCOLOR>   [default: white]
  -b <BOXCOLOR>    [default: black]
  -x <TIMESTAMPS>  [default: off]
  -j <FORMAT>      image format (jpg or png) [default: jpg]
  -o <OUTFILE>
  -h, --help
  -v, --version

Example: contact-sheet -i input.mp4 -s 00:00:00.000 -w 160 -t 4x3 -j png

Dependencies: ffmpeg, ffprobe: https://www.ffmpeg.org/

Notes: -x on enables timestamps. -j sets image format (jpg/png).
The chapter-csv script is Step 1 in the chapter creation workflow. It converts a simple list of timestamps and titles into the complex metadata format required by FFmpeg.
The input CSV file should contain a timestamp followed by a title on each line. The script automatically calculates the duration of each chapter by using the start time of the next line.
Important:
The very last line must be the total duration of the video (labeled as “End”); this tells the script when the final chapter finishes.
Example chapters.csv:
00:00:00,Intro
00:02:30,Scene 1
00:05:00,Scene 2
00:07:00,Scene 3
00:10:00,End
In this example, Scene 3 will start at 07:00 and end at 10:00. The “End” label is a marker for the duration and is not created as a chapter itself.
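For reference, FFmpeg's metadata format represents each chapter as a [CHAPTER] section with a timebase and start/end values. The file generated from the example above would look roughly like this (a sketch of the format, assuming a 1/1000 timebase; the script's exact output may differ):

```
;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=150000
title=Intro
[CHAPTER]
TIMEBASE=1/1000
START=150000
END=300000
title=Scene 1
[CHAPTER]
TIMEBASE=1/1000
START=300000
END=420000
title=Scene 2
[CHAPTER]
TIMEBASE=1/1000
START=420000
END=600000
title=Scene 3
```

Each END value is the START of the next chapter, which is why the final "End" line in the CSV is required.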
Provide the input CSV file -i and the optional output filename -o.
chapter-csv -i chapters.csv -o chapters-metadata.txt
If you omit the output -o option, the script defaults to naming the file: input-name-metadata.txt.
Managing chapters is a two-step process:
Step 1: Use this script (chapter-csv) to generate a metadata text file.
Step 2: Use the chapter-add script to mux that metadata file into your video.
chapter-csv -i input -o output
Run the script with the -h option to show the help.
chapter-csv -h
Help output.
Convert a chapter CSV (Time, Title) to FFmpeg metadata format

Usage: chapter-csv [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    Input CSV file
  -o <OUTFILE>   Output metadata file (optional, defaults to input_name-metadata.txt)
  -h, --help     Print help
  -v, --version  Print version

Example: chapter-csv -i chapters.csv -o chapters-metadata.txt

Dependencies: ffmpeg: https://www.ffmpeg.org/
CSV file example:
00:00:00,Intro
00:02:30,Scene 1
00:05:00,Scene 2
00:07:00,Scene 3
00:10:00,End
The chapter-add script is Step 2 in the chapter creation workflow. It takes the metadata file created by chapter-csv and “muxes” it into your video or audio file.
Because this script uses “stream copying” (-codec copy), it does not re-encode your media. Adding chapters is nearly instantaneous and preserves the original quality of your video and audio.
Provide the input video -i and the metadata text file -m.
chapter-add -i input.mp4 -m metadata.txt -o output.mp4
If you omit the output -o option, the script automatically names the file: input-name-chapters.ext (using the original file extension).
You can verify the chapters are present by playing the video in a player like mpv, or by using ffprobe:
ffprobe -i output.mp4 -show_chapters
chapter-add -i input -m metadata.txt -o output
Run the script with the -h option to show the help.
chapter-add -h
Help output.
Mux FFmpeg metadata chapters into a video or audio file without re-encoding

Usage: chapter-add [OPTIONS] -i <INFILE> -m <METAFILE>

Options:
  -i <INFILE>    Input video or audio file
  -m <METAFILE>  Metadata text file (FFMPEG METADATA format)
  -o <OUTFILE>   Output file (optional, defaults to input-chapters.ext)
  -h, --help     Print help
  -v, --version  Print version

Example: chapter-add -i input.mp4 -m metadata.txt -o output.mp4

Dependencies: ffmpeg: https://www.ffmpeg.org/
The chapter-extract script performs the reverse of the chapter workflow: it pulls existing chapter markers out of a media file and saves them into the same portable CSV format used by the other tools.
The script generates a CSV file with two columns: Time, Title. It automatically adds an “End” record at the bottom based on the final chapter’s duration.
Example output (chapters.csv):
00:00:00,Intro
00:00:10,Scene 1
00:00:20,Scene 2
00:00:30,Scene 3
00:01:00,End
Provide the input video -i and the optional output filename -o.
chapter-extract -i movie.mp4 -o chapters.csv
If you omit the output -o option, the script defaults to: input-name.csv.
Since the output is already in “Time, Title” format, you can easily use this for YouTube descriptions. To remove the commas for a cleaner look, use the following command.
sed 's/,/ /' chapters.csv > youtube-timestamps.txt
chapter-extract -i input -o output
Run the script with the -h option to show the help.
chapter-extract -h
Help output.
Extract chapters from a video or audio file and save as a CSV

Usage: chapter-extract [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    Input video or audio file
  -o <OUTFILE>   Output CSV file (optional, defaults to input_name.csv)
  -h, --help     Print help
  -v, --version  Print version

Example: chapter-extract -i input.mkv -o chapters.csv

This creates a CSV with: Time, Title

Dependencies: ffmpeg, ffprobe: https://www.ffmpeg.org/
The subtitle-add script allows you to “soft-mux” external subtitle files (like .srt or .vtt) into your video as a proper metadata track.
Unlike “hard-coding” (which burns the text into the image), this script adds a toggleable track that can be turned on or off in your video player.
It uses stream copying, so the process is instant and does not lose any video quality.
The script is smart about container formats:
If muxing into MP4, it automatically uses the mov_text codec required by Apple and web players.
If muxing into MKV, it preserves the original subtitle format.
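The container rule can be sketched as a small helper that picks the subtitle codec from the output extension (an illustration of the rule described above, not the script's actual source):

```shell
# choose the subtitle codec for a given output container:
# mp4/mov players generally require mov_text; mkv can carry srt/vtt as-is
sub_codec() {
  case "${1##*.}" in
    mp4|m4v|mov) echo "mov_text" ;;
    *)           echo "copy" ;;
  esac
}

# the mux itself is a stream copy, roughly:
# ffmpeg -i input.mp4 -i subs.srt -map 0 -map 1 -c copy \
#        -c:s "$(sub_codec output.mp4)" -metadata:s:s:0 language=eng output.mp4
```

Because only the subtitle stream is encoded (and text encoding is trivial), the operation is effectively instant.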
subtitle-add -i input -s subtitle -o output.mp4
You can use the -l option to specify the language.
subtitle-add -i input -s subtitle.srt -l eng -o output.mp4
If you have a large library of videos and matching subtitle files, you can process them all at once using fd.
Requirement: The video and subtitle files must have the same base name.
Example structure:
movie_01.mp4
movie_01.srt
movie_02.mp4
movie_02.srt
Run the following command to batch-process every MP4 in the folder:
fd -e mp4 -x subtitle-add -i {} -s {.}.srt
We omit the -o option so the script uses the default naming convention: input-subs.mp4
subtitle-add -i input -s subtitle -o output.mp4
Run the script with the -h option to show the help.
subtitle-add -h
Help output.
Add SRT/VTT subtitles to a video as a track you can toggle on and off

Usage: subtitle-add [OPTIONS] -i <INFILE> -s <SUBFILE>

Options:
  -i <INFILE>    Input video file
  -s <SUBFILE>   Subtitle file (SRT or VTT)
  -l <LANG>      Language code (e.g., eng, ita, fra) [default: eng]
  -o <OUTFILE>   Output file (optional, defaults to input-subs.ext)
  -h, --help     Print help
  -v, --version  Print version

Example: subtitle-add -i input.mp4 -s input.srt -l eng -o output.mp4

Dependencies: ffmpeg: https://www.ffmpeg.org/
The overlay-clip script is designed for B-roll insertion.
It overlays a video clip on top of your main footage at a specific time, while keeping the original background audio playing.
Video: The overlay clip -b completely covers the background video -a for the duration of the overlay clip.
Audio: The background clip’s audio continues to play. The overlay clip’s audio is ignored.
Duration: Once the overlay clip ends, the background video automatically becomes visible again.
Provide the background video, the B-roll overlay clip, and the start time.
overlay-clip -a bottom-video.mp4 -b overlay.mp4 -p 00:00:15 -o output.mp4
The overlay duration depends on the length of the overlay clip.
If you omit the -o option, the script generates a descriptive name: input-overlay-[00:00:15].mp4
overlay-clip -a bottom-video.mp4 -b overlay.mp4 -p 00:00:05
Run the script with the -h option to show the help.
overlay-clip -h
Help output.
Overlay one video clip on top of another video clip

Usage: overlay-clip -a <INPUT> -b <OVERLAY> -p <POSITION> [OPTIONS]

Options:
  -a <INPUT>     Bottom video (-a)
  -b <OVERLAY>   Overlay video (-b)
  -p <POSITION>  Time to start the overlay (e.g., 5 or 00:00:05)
  -o <OUTFILE>   Output file (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: overlay-clip -a bottom-video.mp4 -b overlay.mp4 -p 00:00:05

Dependencies: ffmpeg: https://www.ffmpeg.org/
The overlay-pip script creates a professional Picture-in-Picture effect.
Unlike the basic overlay, this tool automatically scales the overlay video, adds a customizable border, and includes smooth fade-in/fade-out transitions.
The script provides granular control over where and how the PiP appears:
Position -x: Choose the corner: tl (top-left), tr (top-right), bl (bottom-left), or br (bottom-right).
Margin -m: The distance in pixels from the edge of the screen.
Width -w: Scales the PiP video. Defaults to 1/4 of the background width.
Border -k and -c: Set the thickness (4 or 0) and color (e.g., white, red, or hex codes like #2f2f2f).
Fade -f: Automatically adds a smooth fade-in at the start and fade-out at the end of the PiP clip.
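Corner placement maps naturally onto FFmpeg overlay x:y expressions, where W/H are the background dimensions and w/h the scaled PiP dimensions. A hedged sketch of that mapping (illustrative only; the script's exact expressions may differ):

```shell
# map a corner code and a pixel margin to an FFmpeg overlay x:y expression
pip_xy() {
  corner=$1 margin=$2
  case "$corner" in
    tl) echo "${margin}:${margin}" ;;
    tr) echo "W-w-${margin}:${margin}" ;;
    bl) echo "${margin}:H-h-${margin}" ;;
    br) echo "W-w-${margin}:H-h-${margin}" ;;
  esac
}

# e.g. the bottom-right expression would be used as:
# ffmpeg ... -filter_complex "[0:v][1:v]overlay=$(pip_xy br 20)" ...
```

Subtracting w/h from W/H anchors the PiP's far edge, which is why only the bottom and right cases need the video dimensions.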
overlay-pip -a background.mp4 -b pip.mp4 -p 00:00:30 -x tr -m 20 -w 480 -k 4 -c white -o output.mp4
If you omit the -o option, the script generates a name based on the inputs and the start position.
Run the script with the -h option to show the help.
overlay-pip -h
Help output.
Create a Picture-in-Picture (PiP) overlay

Usage: overlay-pip -a <INPUT> -b <PIP_VIDEO> -p <POSITION> [OPTIONS]

Options:
  -a <INPUT>     Bottom video (-a)
  -b <OVERLAY>   Overlay video (-b)
  -p <POSITION>  Time to start the overlay
  -m <MARGIN>    Margin [default: 20]
  -x <PIP_POS>   PiP position (tl, tr, bl, br) [default: tr]
  -w <WIDTH>     Width (defaults to 1/4 of video size)
  -f <FADE>      Fade duration [default: 0.2]
  -k <BORDER>    Border size (4 or 0) [default: 4] [possible values: 0, 4]
  -c <COLOR>     Border color [default: #2f2f2f]
  -o <OUTFILE>   Output file (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: overlay-pip -a background.mp4 -b pip.mp4 -p 00:00:05 -x br -m 30 -k 4 -c white

Dependencies: ffmpeg: https://www.ffmpeg.org/
The fade-clip script applies a fade-in transition to both the video and audio of a clip simultaneously.
This is ideal for smoothing out the beginning of a video or audio track, preventing abrupt starts by gradually increasing the opacity and volume from zero over a specified duration.
The script performs two actions at once:
Video: Fades from black to full visibility.
Audio: Fades from silence to full volume.
If you have a directory of clips that all need the same fade-in applied, you can batch process them using fd.
fd -e mp4 -x fade-clip -i {} -d 1
This command will find every MP4 and apply a 1-second fade-in. We omit the -o option so the script uses the default naming convention: input-faded-in-[duration].mp4.
fade-clip -i input.mp4 -d 00:00:02 -o output.mp4
Run the script with the -h option to show the help.
fade-clip -h
Help output.
Fade in a video and audio clip

Usage: fade-clip [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    Input video file
  -d <DURATION>  Fade duration (e.g., 2 or 00:00:02) [default: 00:00:00.500]
  -o <OUTFILE>   Output file (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: fade-clip -i input.mp4 -d 00:00:02

Dependencies: ffmpeg: https://www.ffmpeg.org/
🎥 xfade
The xfade script creates smooth transitions between two video clips. It applies the xfade filter for video and the acrossfade filter for audio simultaneously, ensuring a professional audio-visual blend.
A transition requires an “offset”—the exact second in the timeline where the first clip starts fading into the second.
This script calculates the offset for you automatically by subtracting the transition duration from the length of the first clip. You only need to provide an offset manually if you want the transition to start earlier.
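The automatic offset is simple arithmetic: offset = length of the first clip minus the transition duration. A sketch of the calculation (illustrative; the script does this internally in Rust):

```shell
# offset at which the transition must begin so it ends exactly when clip 1 ends
xfade_offset() {
  clip1_duration=$1 transition=$2
  awk "BEGIN { printf \"%g\\n\", $clip1_duration - $transition }"
}

# the clip length would normally come from ffprobe, e.g.:
# dur=$(ffprobe -v error -show_entries format=duration \
#       -of default=noprint_wrappers=1:nokey=1 clip1.mp4)
```

For a 10-second first clip and a 2-second transition, the transition starts at the 8-second mark.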
Provide the two clips, the duration of the transition, and the type of effect you want.
xfade -a clip1.mp4 -b clip2.mp4 -d 00:00:02 -t dissolve -o output.mp4
If you omit the -o option, the script generates a descriptive name: clip1_name-xfade-transition_type-[duration].mp4.
xfade -a clip1.mp4 -b clip2.mp4 -d duration -t transition -f offset -o output.mp4
Run the script with the -h option to show the help.
xfade -h
Help output.
FFmpeg xfade transitions

Usage: xfade [OPTIONS] -a <INPUT1> -b <INPUT2> -d <DURATION>

Options:
  -a <INPUT1>      First clip (-a)
  -b <INPUT2>      Second clip (-b)
  -d <DURATION>    Transition duration (e.g., 2 or 00:00:02)
  -t <TRANSITION>  Transition type [default: fade]
  -f <OFFSET>      Offset (start time of transition). Calculated automatically if not provided
  -o <OUTFILE>     Output file (optional)
  -h, --help       Print help
  -v, --version    Print version

TRANSITIONS: circleclose, circlecrop, circleopen, diagbl, diagbr, diagtl, diagtr, dissolve, distancefade, fade, fadeblack, fadegrays, fadewhite, hblur, hlslice, horzclose, horzopen, hrslice, pixelize, radial, rectcrop, slidedown, slideleft, slideright, slideup, smoothdown, smoothleft, smoothright, smoothup, squeezeh, squeezev, vdslice, vertclose, vertopen, vuslice, wipebl, wipebr, wipedown, wipeleft, wiperight, wipetl, wipetr, wipeup

Dependencies: ffmpeg: https://www.ffmpeg.org/
🎥 pan-scan
The pan-scan script animates a static image by panning the “camera” across it in a specified direction.
It automatically handles the scaling and cropping math to ensure the movement is smooth and the output matches the original image’s aspect ratio.
The script supports four primary movement directions via the -p flag:
l: Pans the camera from left to right.
r: Pans the camera from right to left.
u: Pans the camera from top to bottom (panning "up" the image).
d: Pans the camera from bottom to top (panning "down" the image).
Provide your image, the desired video length, and the direction of the pan.
pan-scan -i input.jpg -d 00:00:10 -p l -o output.mp4
If you omit the -o option, the script generates a descriptive name based on the direction and duration: input-pan-left-[00:00:10].mp4
If you have a directory of images that all need the same pan animation, you can batch process them using fd.
fd -e jpg -x pan-scan -i {} -d 00:00:05 -p l
This command finds every JPG and applies a 5-second left-to-right pan. We omit the -o option so the script uses the default naming convention: input-pan-left-[00:00:05].mp4.
pan-scan -i input -d 00:00:10 -p (l|r|u|d) -o output.mp4
Run the script with the -h option to show the help.
pan-scan -h
Help output.
Pan scan over an image using scale/crop math

Usage: pan-scan [OPTIONS] -i <INFILE> -d <DURATION> -p <POSITION>

Options:
  -i <INFILE>    Input image file
  -d <DURATION>  Duration (e.g., 10 or 00:00:10)
  -p <POSITION>  Position: l (left), r (right), u (up), d (down)
  -o <OUTFILE>   Output file (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: pan-scan -i photo.jpg -d 00:00:10 -p l

Dependencies: ffmpeg: https://www.ffmpeg.org/
🎥 zoompan
The zoompan script converts a static image into a video clip by applying a Ken Burns-style zoom animation.
To prevent the “jitter” often associated with the standard FFmpeg zoompan filter, this script uses high-resolution initial scaling before applying the movement.
You can control both the direction of the zoom and the anchor point of the camera:
Zoom -z: Choose between in (gradual magnification) or out (gradual pull-back).
Position -p: Set the anchor point for the zoom. Options include: c (center), tl (top-left), tr (top-right), bl (bottom-left), br (bottom-right), tc (top-center), bc (bottom-center).
zoompan -i input.jpg -d 00:00:10 -z in -p c -o output.mp4
If the -o option is omitted, the script generates a descriptive name: image-zoom-in-c-[10].mp4.
Batch process all jpg files in the current working directory, applying a 5-second zoom-in to the center of each image using fd.
fd -e jpg -x zoompan -i {} -d 5 -z in -p c
We omit the -o option so the script uses the default naming convention for every file processed.
zoompan -i input.jpg -d 00:00:05 -z (in|out) -p (tl|tc|tr|c|bl|bc|br) -o output.mp4
Run the script with the -h option to show the help.
zoompan -h
Help output.
Ken Burns style zoom animation

Usage: zoompan [OPTIONS] -i <INPUT> -d <DURATION>

Options:
  -i <INFILE>    Input image (png, jpg, jpeg)
  -d <DURATION>  Duration (e.g., 10 or 00:00:10)
  -z <ZOOM>      Zoom direction: in, out [default: in]
  -p <POSITION>  Position: tl, tc, tr, c, bl, bc, br [default: c]
  -o <OUTFILE>   Output file (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: zoompan -i image.jpg -d 10 -z in -p c

Dependencies: ffmpeg: https://www.ffmpeg.org/
The audio-silence script replaces or adds a silent audio track to a video file.
It works by copying the original video stream without re-encoding and generating a silent AAC audio track using lavfi.
This is useful for clearing existing audio or adding a silent track to a “video-only” file to ensure compatibility with players that require an audio stream.
Replace the audio in a video with a high-quality stereo silent track:
audio-silence -i input.mp4 -c stereo -r 48000 -o output.mp4
If the -o option is omitted, the script generates a default name: input-silence.mp4
To process all MP4 files in a directory, use fd. By default, the script uses mono channels and a 44100 sample rate.
fd -e mp4 -x audio-silence -i {}
To override the defaults and use stereo at 48000Hz for all files:
fd -e mp4 -x audio-silence -i {} -c stereo -r 48000
audio-silence -i input.mp4 -c (mono|stereo) -r (44100|48000) -o output.mp4
Run the script with the -h option to show the help.
audio-silence -h
Help output.
Replaces or adds a silent audio track to a video file

Usage: audio-silence [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    Input video file
  -c <CHANNELS>  Audio channels (mono or stereo) [default: mono]
  -r <RATE>      Sample rate (e.g., 44100, 48000) [default: 44100]
  -o <OUTFILE>   Output file (optional, defaults to input-silence.ext)
  -h, --help     Print help
  -v, --version  Print version

Example: audio-silence -i input.mp4 -c stereo -r 48000 -o output.mp4

Dependencies: ffmpeg: https://www.ffmpeg.org/
The extract-frame script saves a single frame from a video as a png or jpg image.
Note that you can use two different time unit formats for the -s option:
sexagesimal (HOURS:MM:SS.MILLISECONDS, as in 01:23:45.678), or in seconds.
If a fraction is used, such as 02:30.05, this is interpreted as “5 100ths of a second”, not as frame 5.
For instance, 02:30.5 would be 2 minutes, 30 seconds, and a half a second, which would be the same as using 150.5 in seconds.
Extract a frame at 15 seconds, scaled to 1280px width, saved as a jpg which is the default.
extract-frame -s 00:00:15 -i input.mp4 -x 1280
If the -o option is omitted, the script generates a default name including the timestamp: input-frame-[00:00:15].jpg
If width (-x) or height (-y) is omitted, the original video dimensions are used.
If only one dimension is specified (either -x or -y), the other value is automatically calculated to preserve the aspect ratio.
To quickly extract frames from every MP4 file in a directory, use fd.
Extract a frame from the very beginning (00:00:00) of every video:
fd -e mp4 -x extract-frame -i {} -s 00:00:00
Extract a frame at the 30-second mark from every video:
fd -e mp4 -x extract-frame -i {} -s 00:00:30
extract-frame -i input.mp4 -s 00:00:00.000 -f (png|jpg) -x width -y height -o output.(png|jpg)
Run the script with the -h option to show the help.
extract-frame -h
Help output.
extract a single frame from a video

Usage: extract-frame [OPTIONS] -s <START> -i <INFILE>

Options:
  -s <START>     timestamp to extract
  -i <INFILE>    input file
  -f <FORMAT>    output format [default: jpg]
  -x <WIDTH>     output width
  -y <HEIGHT>    output height
  -o <OUTFILE>   optional output file
  -h, --help     Print help
  -v, --version  Print version

Example: extract-frame -s 00:00:15 -i input.mp4 -x 1280 -f jpg

Dependencies: ffmpeg: https://www.ffmpeg.org/

Notes: If width/height is omitted, original size is used. If -o is not provided, defaults to: input-frame-[timestamp].ext
The img2video script converts a static image into a video file with a specified duration.
The script creates a high-compatibility H.264 video at 30fps with a YUV420P pixel format, ensuring it plays correctly on almost all devices and web platforms.
When specifying the duration with the -d option, you can use two formats:
Seconds: A simple numerical value (e.g., 10).
Sexagesimal: HOURS:MM:SS (e.g., 00:00:10).
Convert an image into a 10-second video clip:
img2video -i input.png -d 00:00:10 -o output.mp4
If the -o option is omitted, the script generates a default filename including the duration: input-[00:00:10].mp4.
To quickly convert every image in a directory into a video clip, use fd.
Batch convert all PNG files in the current directory into 10-second video clips:
Using Seconds: A simple numerical value (e.g., 10).
fd -e png -x img2video -i {} -d 10
Batch convert all JPG files in the current directory into 10-second video clips:
Using Sexagesimal: HOURS:MM:SS (e.g., 00:00:10).
fd -e jpg -x img2video -i {} -d 00:00:10
img2video -i input.png -d duration -o output.mp4
Run the script with the -h option to show the help.
img2video -h
Help output.
Convert a static image to a video file with a specified duration

Usage: img2video [OPTIONS] -i <INFILE> -d <DURATION>

Options:
  -i <INFILE>    Input image file (png, jpg, jpeg)
  -d <DURATION>  Duration (e.g., 10 or 00:00:10)
  -o <OUTFILE>   Output file (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: img2video -i input.png -d 00:00:10 -o output.mp4

Dependencies: ffmpeg: https://www.ffmpeg.org/
The sexagesimal-time script calculates a precise duration by subtracting a start timecode from an end timecode.
This is designed to help determine the exact length needed for trimming video or audio files with FFmpeg.
The script handles standard sexagesimal formats (HOURS:MM:SS) and also works with milliseconds (HOURS:MM:SS.mmm) for high-precision calculations.
Calculate the duration between 1 minute and 1 minute 45.5 seconds:
sexagesimal-time -s 00:01:00 -e 00:01:45.500
Output:
00:00:45.500
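The calculation itself is plain arithmetic once both timecodes are converted to seconds. A sketch of the equivalent shell math (illustrative only; the script does this in pure Rust):

```shell
# duration = end - start, both given as HH:MM:SS(.mmm), printed the same way
timecode_diff() {
  awk -v s="$1" -v e="$2" '
    function secs(t,  a) { split(t, a, ":"); return a[1]*3600 + a[2]*60 + a[3] }
    BEGIN {
      d = secs(e) - secs(s)
      printf "%02d:%02d:%06.3f\n", d / 3600, (d % 3600) / 60, d % 60
    }'
}
```

For example, timecode_diff 00:01:00 00:01:45.500 prints 00:00:45.500, matching the output above.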
sexagesimal-time -s 00:00:30 -e 00:01:30
Run the script with the -h option to show the help.
sexagesimal-time -h
Help output.
calculate duration from start and end timecodes

Usage: sexagesimal-time -s <START> -e <END>

Options:
  -s <START>     start time
  -e <END>       end time
  -h, --help     Print help
  -v, --version  Print version

Example: sexagesimal-time -s 00:01:00 -e 00:01:45.500

Output: 00:00:45.500

Dependencies: None (Pure Rust math)
🎥 vid2gif
The vid2gif script converts a video file into a high-quality GIF animation.
To ensure the best visual quality, the script uses a two-stage FFmpeg process:
it first generates a custom colour palette from the video and then applies that palette to create the final GIF.
This prevents the “dithering” or colour-banding issues common in standard GIF conversions.
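The two-stage process corresponds to FFmpeg's palettegen and paletteuse filters. A hedged sketch of the underlying commands, with the shared filter chain built by a helper (the script's exact filter chain may differ):

```shell
# shared filter chain: sample rate, scale to width (preserve aspect), lanczos scaler
gif_filters() { echo "fps=$2,scale=$1:-1:flags=lanczos"; }

# pass 1: build a 256-colour palette optimised for this specific video
# ffmpeg -i input.mp4 -vf "$(gif_filters 320 10),palettegen" palette.png

# pass 2: render the gif through that palette
# ffmpeg -i input.mp4 -i palette.png \
#        -filter_complex "$(gif_filters 320 10)[x];[x][1:v]paletteuse" output.gif
```

Generating the palette from the actual source frames is what avoids the banding of a generic 256-colour palette.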
Convert a video to a GIF with a specific width and frame rate:
vid2gif -i input.mp4 -w 480 -f 15 -o output.gif
If the -o option is omitted, the script generates a default filename using the input file’s name: input.gif.
The default width is 320px and the default frame rate is 10 fps.
To quickly convert all MP4 files in a directory into GIF animations, use fd.
Batch convert all MP4 files using default settings:
fd -e mp4 -x vid2gif -i {}Batch convert all MP4 files with a custom width of 480px and 15 fps:
fd -e mp4 -x vid2gif -i {} -w 480 -f 15
vid2gif -s 00:00:00.000 -i input.mp4 -t 00:00:00.000 -f 10 -w 320 -o output.gif
Run the script with the -h option to show the help.
vid2gif -h
Help output.
convert video to high quality gif

Usage: vid2gif [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    input file
  -w <WIDTH>     width [default: 320]
  -f <FPS>       fps [default: 10]
  -o <OUTFILE>   output file
  -h, --help     Print help
  -v, --version  Print version

Example: vid2gif -i input.mp4 -w 480 -f 15 -o animation.gif

Dependencies: ffmpeg, ffprobe: https://www.ffmpeg.org/
🎥 webp
The webp script converts a video into an animated WebP image using FFmpeg.
Animated WebP files often provide better compression than GIFs while supporting a full range of colours and transparency, making them ideal for high-quality web animations.
Convert a video to an animated WebP with a custom width and frame rate:
webp -i input.mp4 -w 480 -f 15 -o output.webp
If the -o option is omitted, the script generates a default filename based on the input file’s name: input.webp
The default settings are 320px width and 10 fps. The resulting file is set to loop infinitely.
To quickly convert multiple MP4 files in a directory into animated WebP images, use fd.
Batch convert all MP4 files in the current directory using default settings:
fd -e mp4 -x webp -i {}Batch convert all MP4 files with a custom width of 600px and 15 fps:
fd -e mp4 -x webp -i {} -w 600 -f 15
webp -i input.mp4 -w 320 -f 10 -o output.webp
Run the script with the -h option to show the help.
webp -h
Help output.
convert video to an animated webp

Usage: webp [OPTIONS] -i <INFILE>

Options:
  -i <INFILE>    input file
  -w <WIDTH>     width [default: 320]
  -f <FPS>       fps [default: 10]
  -o <OUTFILE>   output file
  -h, --help     Print help
  -v, --version  Print version

Example: webp -i input.mp4 -w 480 -f 15 -o animation.webp

Dependencies: ffmpeg, ffprobe: https://www.ffmpeg.org/
Automated scene detection and video splitting.
The scene-detect-auto script performs the functions of the following scripts automatically:
- scene-detect (Identifies scene changes)
- scene-time (Converts timestamps to a cutlist)
- scene-cut (Splits the video into clips)
Because this script can generate a large number of video clips, it is best to create a dedicated directory for each video you want to process.
Create a directory for your project (e.g., “scene-detect”) using the command line or your file manager.
On NixOS, Linux, macOS or FreeBSD:
mkdir -p scene-detect
On Windows (PowerShell):
New-Item -ItemType Directory -Path "scene-detect"
Move the video file (e.g., input.mp4) into the scene-detect directory you just created.
On NixOS, Linux, macOS or FreeBSD:
mv input.mp4 scene-detect/
On Windows (PowerShell):
Move-Item -Path "input.mp4" -Destination "scene-detect"
On all operating systems:
cd scene-detect
Run the script with the -i option. By default, the detection threshold is 0.3.
scene-detect-auto -i input.mp4
Use the -t option to adjust the sensitivity. Lower values detect more scenes; higher values detect fewer.
scene-detect-auto -i input.mp4 -t 0.5
Script usage.
scene-detect-auto -i <INPUT> -t <THRESHOLD>
Run the script with the -h option to show the help.
scene-detect-auto -h
Help output.
Automated scene detection and video splitting

Usage: scene-detect-auto [OPTIONS] -i <INPUT>

Options:
  -i <INPUT>      input video file
  -t <THRESHOLD>  detection threshold (0.0 to 1.0) [default: 0.3]
  -h, --help      Print help
  -v, --version   Print version

Example: scene-detect-auto -i input.mp4

Dependencies: ffmpeg, ffprobe: https://www.ffmpeg.org/

Notes: Creates detection.txt and cutlist.txt automatically.
The scene-detect script takes a video file and a threshold for scene detection from 0.1 to 0.9. You can also use the -s and -e options to set a time range for the detection.
If you don't specify a range, scene detection will be performed on the whole video.
Note: manual scene detection uses three scripts that work together:
- scene-detect (Identifies scene changes)
- scene-time (Converts timestamps to a cutlist)
- scene-cut (Splits the video into clips)
Because this script can generate a large number of video clips, it is best to create a dedicated directory for each video you want to process.
Create a directory for your project (e.g., “scene-detect”) using the command line or your file manager.
On NixOS, Linux, macOS or FreeBSD:
mkdir -p scene-detect
On Windows (PowerShell):
New-Item -ItemType Directory -Path "scene-detect"
Move the video file (e.g., input.mp4) into the scene-detect directory you just created.
On NixOS, Linux, macOS or FreeBSD:
mv input.mp4 scene-detect/
On Windows (PowerShell):
Move-Item -Path "input.mp4" -Destination "scene-detect"
On all operating systems:
cd scene-detect
Run the script with the -i option. By default, the detection threshold is 0.3.
scene-detect -i input.mp4
Use the -t option to adjust the sensitivity. Lower values detect more scenes; higher values detect fewer.
scene-detect -i input.mp4 -t 0.5
Note: this will create a text file called input-detection.txt (where “input” is the name of your video).
You use the input-detection.txt file with the scene-time script in the next step.
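Scene detection of this kind is typically built on FFmpeg's select filter and its per-frame scene score. A hedged sketch of what such a detection pass looks like (illustrative; the script's exact invocation may differ):

```shell
# build the filter expression that keeps only frames whose
# scene-change score exceeds the given threshold
scene_filter() { echo "select='gt(scene,$1)',metadata=print"; }

# a detection pass would then decode the video, log the matching
# frame timestamps, and discard the output:
# ffmpeg -i input.mp4 -vf "$(scene_filter 0.3)" -an -f null - 2> detection.log
```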
scene-detect -s 00:00:00 -i input -e 00:00:00 -t (0.1 - 0.9) -f sec -o output
Run the script with the -h option to show the help.
scene-detect -h
Help output.
Detect scene changes in a video

Usage: scene-detect -i <INPUT> [OPTIONS]

Options:
  -i <INPUT>      Input video file
  -s <START>      Start time (HH:MM:SS.mmm)
  -e <END>        End time (HH:MM:SS.mmm)
  -t <THRESHOLD>  Detection threshold (0.1 to 0.9) [default: 0.3]
  -f <FORMAT>     Output format: "sec" for seconds, else HH:MM:SS.mmm
  -o <OUTFILE>    Output filename (optional)
  -h, --help      Print help
  -v, --version   Print version

Example: scene-detect -i input.mp4 -t 0.4 -f sec

Dependencies: ffmpeg: https://www.ffmpeg.org/
The scene-time script is the second step in the manual scene-cutting process. It takes the list of timestamps generated by scene-detect and calculates the duration between each point.
Note: manual scene detection uses three scripts that work together:
- scene-detect (Identifies scene changes)
- scene-time (Converts timestamps to a cutlist)
- scene-cut (Splits the video into clips)
This step is useful because it allows you to open the detection file in a text editor and manually add, remove, or adjust timestamps before generating the final cutlist.
The script reads a text file containing timestamps (one per line). It supports both seconds and sexagesimal (HH:MM:SS) formats.
0:00:00
0:00:11.875000
0:00:15.750000
The script calculates the duration of each segment (the next timestamp minus the current one, converting sexagesimal values to seconds) and creates a “cutlist”. Each line of the cutlist contains the start time and the duration, separated by a comma.
Provide the detection file created in the previous step using the -i option.
scene-time -i input-detection.txt
This will create a file named input-detection-cutlist.txt. You will use this cutlist file with the scene-cut script in the final step.
The output of the scene-time script is used with the scene-cut script to create the clips.
0,11.875
11.875,3.875
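The duration arithmetic described above can be sketched with awk. This is a hypothetical re-implementation for illustration, not the script's actual Rust code: convert each timestamp to seconds, then emit a start,duration pair for every consecutive pair of timestamps.

```shell
# Hypothetical sketch of scene-time's duration math (not its actual code):
# convert sexagesimal timestamps (one per line) to seconds, then print
# start,duration pairs for each consecutive pair of timestamps.
to_cutlist() {  # usage: to_cutlist <timestamps-file>
  awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }' "$1" |
    awk 'NR > 1 { printf "%g,%g\n", prev, $1 - prev } { prev = $1 }'
}
```

Fed the three timestamps shown earlier, this produces the 0,11.875 and 11.875,3.875 cutlist lines.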
scene-time -i input -o output
Run the script with the -h option to show the help.
scene-time -h
Help output.
Create ffmpeg cutlist from scene detection timestamps

Usage: scene-time -i <INPUT> [OPTIONS]

Options:
  -i <INPUT>     Input file containing timestamps
  -o <OUTFILE>   Output filename (optional)
  -h, --help     Print help
  -v, --version  Print version

Example: scene-time -i timestamps.txt -o cutlist.txt

Dependencies: ffmpeg: https://www.ffmpeg.org/
The scene-cut script is the third and final step in the manual scene-cutting process. It takes a video file and the cutlist generated by scene-time to split the video into individual clips.
Note: manual scene-detection uses 3 scripts that work together
- scene-detect (Identifies scene changes)
- scene-time (Converts timestamps to a cutlist)
- scene-cut (Splits the video into clips)
The script uses FFmpeg to perform the cuts. It is designed for speed and accuracy, automatically naming each output clip based on the original filename and its scene number.
FFmpeg requires a start point and a duration (not an end point) to cut accurately. The cutlist must be comma-separated values.
Example using sexagesimal format (HH:MM:SS):
00:00:00,00:00:30
00:01:00,00:00:30
Provide the original video with the -i option and the cutlist file with the -c option.
scene-cut -i input.mp4 -c input-detection-cutlist.txt
The script will process the video and generate files named like this: input-scene-001-[00:00:00–00:00:30].mp4
scene-cut -i input.mp4 -c cutfile.txt
Run the script with the -h option to show the help.
scene-cut -h
Help output.
Split a video into individual scenes based on a cutlist

Usage: scene-cut -i <INPUT> -c <CUTLIST> [OPTIONS]

Options:
  -i <INPUT>     Input video file
  -c <CUTLIST>   Cutlist file (comma-separated start,duration)
  -h, --help     Print help
  -v, --version  Print version

Example: scene-cut -i input.mp4 -c cutlist.txt

Dependencies: ffmpeg: https://www.ffmpeg.org/
ffmpeg requires a start point and a duration, not an end point.
Sexagesimal format (HH:MM:SS):
00:00:00,00:00:30
00:01:00,00:00:30
Seconds format:
0,30
60,30
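Per cutlist line, the work amounts to one ffmpeg invocation with -ss (start) and -t (duration). A dry-run sketch: it echoes the commands instead of running them, and the use of stream copy (-c copy) and the scene-NNN naming are assumptions of mine, since the real script may re-encode for frame accuracy.

```shell
# Dry-run sketch of the per-line work scene-cut performs: echo one
# ffmpeg command per cutlist entry instead of executing it.
# Assumptions: stream copy (-c copy) and scene-NNN output naming; the
# real script may re-encode and name files differently.
print_cut_commands() {  # usage: print_cut_commands <video> <cutlist-file>
  n=0
  while IFS=, read -r start dur; do
    n=$((n + 1))
    printf 'ffmpeg -ss %s -i %s -t %s -c copy scene-%03d.mp4\n' \
      "$start" "$1" "$dur" "$n"
  done < "$2"
}
```

Putting -ss before -i seeks on the input side, which is fast; the duration after -t then measures from that seek point.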
The scene-images script generates a thumbnail image for every cut point defined in your cutlist, so you can visually verify that your scene detection or manual timestamps are accurate.
The script supports both PNG and JPG formats, and allows you to specify custom widths or heights while maintaining the original aspect ratio of the video.
You can specify the width (-x) or height (-y). If you only provide one, the script will automatically calculate the other to maintain the correct proportions.
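The missing dimension is a simple proportional rescale: a 1920×1080 source at -x 1280 gets a height of 720. A sketch of the arithmetic (a hypothetical helper for illustration, not the script's code):

```shell
# Hypothetical helper showing the aspect-ratio arithmetic: given the
# source dimensions and a target width, compute the matching height
# (rounded to the nearest integer).
calc_height() {  # usage: calc_height <src_w> <src_h> <target_w>
  awk -v sw="$1" -v sh="$2" -v tw="$3" 'BEGIN { printf "%d\n", tw * sh / sw + 0.5 }'
}
```

In raw ffmpeg the same effect comes from a negative scale value, e.g. -vf scale=1280:-2, which derives the height automatically and rounds it to an even number.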
Provide the original video with the -i option and the cutlist file (generated by scene-time) with the -c option.
scene-images -i input.mp4 -c input-detection-cutlist.txt -x 1280 -t jpg
The script will generate images named after each scene: input-scene-001-[00:00:00].jpg
scene-images -i input -c cutfile -t (png|jpg) -x width -y height
Run the script with the -h option to show the help.
scene-images -h
Help output.
Create thumbnails from scene detection timestamps

Usage: scene-images -i <INPUT> -c <CUTLIST> [OPTIONS]

Options:
  -i <INPUT>     Input video file
  -c <CUTLIST>   Cutlist file (comma-separated start,duration)
  -t <FORMAT>    Image format (png or jpg) [default: jpg]
  -x <WIDTH>     Width of the output image
  -y <HEIGHT>    Height of the output image
  -h, --help     Print help
  -v, --version  Print version

Example: scene-images -i input.mp4 -c cutlist.txt -x 1280 -t jpg

Dependencies: ffmpeg: https://www.ffmpeg.org/
This section covers how to compile the scripts for different platforms using Nix. We use three distinct targets to ensure maximum compatibility.
First, create a structured directory on your Desktop to collect the finished binaries.
Create directories for each target platform
mkdir -p ~/Desktop/build/{nixos,linux,windows}
Optional: clean existing binaries if doing a fresh release
rm -f ~/Desktop/build/{nixos,linux,windows}/*
mkdir -p ~/git/projects/ffmpeg-scripts-rust
Change directory into the project directory
cd ~/git/projects/ffmpeg-scripts-rust
vi flake.nix
flake.nix
{
description = "rust flake";
inputs = {
nixpkgs.url = "github:nixos/nixpkgs?ref=nixos-unstable";
naersk.url = "github:nix-community/naersk";
rust-overlay = {
url = "github:oxalica/rust-overlay";
inputs.nixpkgs.follows = "nixpkgs";
};
flake-utils.url = "github:numtide/flake-utils";
};
outputs = { self, nixpkgs, naersk, rust-overlay, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
# system is already provided by eachDefaultSystem, so we don't define it here
overlays = [ (import rust-overlay) ];
pkgs = import nixpkgs { inherit system overlays; };
rustToolchain = pkgs.rust-bin.stable.latest.default.override {
extensions = [ "rust-src" "rust-analyzer" ];
targets = [ "x86_64-unknown-linux-musl" "x86_64-pc-windows-gnu" ];
};
naerskLib = (naersk.lib.${system}.override {
cargo = rustToolchain;
rustc = rustToolchain;
});
in {
devShells.default = pkgs.mkShell {
# ADD MINGW TO THE SHELL FOR LINKING
buildInputs = [
rustToolchain
pkgs.pkgsCross.mingwW64.stdenv.cc
];
# Tell Cargo which linker to use for Windows
# Add these lines to help the linker find pthreads
shellHook = ''
export RUST_SRC_PATH="${rustToolchain}/lib/rustlib/src/rust/library"
export NIX_CROSS_LDFLAGS="-L${pkgs.pkgsCross.mingwW64.windows.pthreads}/lib"
export NIX_CROSS_CFLAGS_COMPILE="-I${pkgs.pkgsCross.mingwW64.windows.pthreads}/include"
export CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER="x86_64-w64-mingw32-gcc"
export CARGO_TARGET_X86_64_PC_WINDOWS_GNU_RUSTFLAGS="-L ${pkgs.pkgsCross.mingwW64.windows.pthreads}/lib"
'';
};
packages.default = naerskLib.buildPackage {
src = ./.;
};
}
); # This closes eachDefaultSystem
} # This closes the flake
Run nix develop, which will set up the Rust environment
nix develop
A .gitignore file will be created
.gitignore
with the following content
/target
We need to edit the .gitignore
vi .gitignore
Add a new line to exclude the result symlink
/target
/result*
Check the git status
git status
Then commit the changes
git add .gitignore
Add a commit message
git commit -m "update .gitignore to exclude build results"
Note: you do not need to be inside the nix develop shell to run nix build
Run nix build to build the binaries for NixOS
nix build
This will place your binaries in ./result/bin/ instead of target/release/. Building this way ensures the build is 100% reproducible and isolated from your local system state. Instead of a simple mv, which would fail because the source is the read-only Nix store, you should copy the binaries.
When you copy a binary out of the Nix store, it keeps its “rpath.” This means it still knows exactly where to find its library dependencies in the nix store, so it will continue to work perfectly.
Create the bin directory in your home if you don't have one
mkdir -p ~/bin
Run this command to copy all 32 binaries at once to the bin directory in your home
cp ./result/bin/* ~/bin/
Copy the scripts to the build directory on the desktop for GitHub
cp ./result/bin/* ~/Desktop/build/nixos/
A Note on Updates
Keep in mind that if you change your Rust code and run nix build again, the binaries in ~/bin will not update automatically. You’ll just need to run that cp command again to “deploy” your latest versions.
To build portable binaries that run on any Linux distribution, we use the musl target. This statically links all libraries so the binary is self-contained.
You must be inside the nix develop shell to run this command.
nix develop
Build the binaries using the musl target
cargo build --release --target x86_64-unknown-linux-musl
The binaries will be located in target/x86_64-unknown-linux-musl/release/.
list the binaries
ls -l target/x86_64-unknown-linux-musl/release/
How to verify they are truly static
One of the main reasons to use musl is to ensure the binary has no external dependencies.
You can verify this by running the ldd command on one of the new binaries:
ldd target/x86_64-unknown-linux-musl/release/scene-detect-auto
Expected output: it should say statically linked or not a dynamic executable.
This confirms that a user on Ubuntu, Debian, or Arch can just download that file and run it immediately (provided they have ffmpeg installed).
Copy the binaries to the build directory on the desktop
fd -t f -d 1 -E "*.*" . target/x86_64-unknown-linux-musl/release/ -x cp {} ~/Desktop/build/linux/
Explanation of the fd command
-t f: Look for files only.
-d 1: Depth 1 (don't go into subfolders like deps or build).
-E "*.*": Exclude any file with a dot in the name (skips .d, .rlib, etc.).
.: The search pattern (matches everything not excluded).
target/.../release/: The directory to search in.
-x cp {} ...: Execute the copy command for every search result.
The ! -name "*.*" logic in standard find can sometimes be finicky depending on the shell, but fd's -E (exclude) flag is very robust.
This will cleanly grab your 32 binaries and ignore all the compiler junk.
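For systems without fd, the same selection can be expressed with standard find. This is a sketch assuming GNU or BSD find (the -maxdepth option is not strictly POSIX); the helper name is mine:

```shell
# find-based equivalent of the fd command above: copy only depth-1,
# extensionless files (the binaries), skipping .d files, .rlib files,
# and subfolders like deps/ and build/.
copy_release_binaries() {  # usage: copy_release_binaries <src_dir> <dest_dir>
  find "$1" -maxdepth 1 -type f ! -name "*.*" -exec cp {} "$2" \;
}
```

Example: copy_release_binaries target/x86_64-unknown-linux-musl/release ~/Desktop/build/linux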
To build binaries for Windows, we use the MinGW-w64 toolchain. You must be inside the nix develop shell to run this command.
This ensures the environment variables for the linker and library paths are correctly set.
nix develop
Build the binaries using the gnu target
cargo build --release --target x86_64-pc-windows-gnu
The binaries will be located in target/x86_64-pc-windows-gnu/release/.
list the binaries
ls -l target/x86_64-pc-windows-gnu/release/*.exe
Copy the binaries to the build directory on the desktop
fd -t f -e exe --max-depth 1 . target/x86_64-pc-windows-gnu/release/ -x cp {} ~/Desktop/build/windows/
If the build fails with an error stating it cannot find -l:libpthread.a, ensure your flake.nix includes the CARGO_TARGET_X86_64_PC_WINDOWS_GNU_RUSTFLAGS variable in the shellHook.
If you have recently modified the flake.nix, you may need to exit the shell and run nix develop again to refresh the environment.
Running a cargo clean before rebuilding can also help resolve linking conflicts.
cargo clean
cd ~/Desktop
Note: replace v1 with the release number in the examples below.
mv build ffmpeg-rust-scripts-build-v1
Change into the ffmpeg-rust-scripts-build-v1 directory
cd ffmpeg-rust-scripts-build-v1
Linux
mv linux linux-ffmpeg-rust-scripts-v1
NixOS
mv nixos nixos-ffmpeg-rust-scripts-v1
Windows
mv windows windows-ffmpeg-rust-scripts-v1
The ffmpeg_rust_scripts file is built from main.rs, which is a list of all the scripts; it is not needed, so we can remove it.
Remove the ffmpeg_rust_scripts file from the Linux, NixOS and Windows build directories
Linux
rm -i linux-ffmpeg-rust-scripts-v1/ffmpeg_rust_scripts
NixOS
rm -i nixos-ffmpeg-rust-scripts-v1/ffmpeg_rust_scripts
Windows
rm -i windows-ffmpeg-rust-scripts-v1/ffmpeg_rust_scripts.exe
tar -czvf linux-ffmpeg-rust-scripts-v1.tar.gz linux-ffmpeg-rust-scripts-v1
tar -czvf nixos-ffmpeg-rust-scripts-v1.tar.gz nixos-ffmpeg-rust-scripts-v1
zip -r windows-ffmpeg-rust-scripts-v1.zip windows-ffmpeg-rust-scripts-v1
mkdir -p release-files-v1
Move the tar and zip files into the release-files-v1 directory
mv *-ffmpeg-rust-scripts-v[0-9]*{.tar.gz,.zip} release-files-v1/
Change directory into the release-files-v1 directory
cd release-files-v1
Create the checksums
sha256sum linux-ffmpeg-rust-scripts-v1.tar.gz > linux-ffmpeg-rust-scripts-v1.tar.gz.sha256
sha256sum nixos-ffmpeg-rust-scripts-v1.tar.gz > nixos-ffmpeg-rust-scripts-v1.tar.gz.sha256
sha256sum windows-ffmpeg-rust-scripts-v1.zip > windows-ffmpeg-rust-scripts-v1.zip.sha256
To remove the old scripts before rebuilding
Remove the result symlink which points to the previous Nix store build.
Run nix develop to enter the development environment with all necessary dependencies
nix develop
Remove the old compiled binaries and build artifacts (for both Linux and Windows targets) by running cargo clean.
cargo clean