The examples are for Linux and access the web camera through the Video4Linux2 interface. To control web camera settings, use the tool `v4l2-ctl`. To list connected camera devices, you can use the command `v4l2-ctl --list-devices`.
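For example, to see which controls and formats a camera offers, and to change a setting (control names and value ranges vary between cameras, so treat the brightness control here as an illustration):

# List available controls with their current values and ranges
v4l2-ctl -d /dev/video0 --list-ctrls
# List supported pixel formats, frame sizes and frame rates
v4l2-ctl -d /dev/video0 --list-formats-ext
# Set a control, here brightness (names differ from camera to camera)
v4l2-ctl -d /dev/video0 --set-ctrl=brightness=128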
On a typical Debian-ish Linux distro, you will also want to add your user to the `video` and `audio` groups, so that you can easily access the webcam from a non-desktop session.
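For example (log out and back in afterwards for the new group memberships to take effect):

sudo usermod -aG video,audio $USER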
Capture to an image file, continually overwriting it with new contents
ffmpeg -y -f v4l2 -video_size 1280x720 -i /dev/video0 \
-r 0.2 -qscale:v 2 -update 1 /tmp/webcam.jpg
| Option | Description |
| --- | --- |
| `-f v4l2` | Specify the input format explicitly as capture from a Video4Linux2 device. |
| `-video_size 1280x720` | Specify the video frame size to request from the webcam. |
| `-i /dev/video0` | Select the input device (a UVC-compatible webcam in my case). |
| `-r 0.2` | Set the output frame rate to one frame per 5 seconds. |
| `-qscale:v 2` | Set video quality (JPEG quality in this case); 2 is the highest quality. |
| `-update 1` | Image2 muxer option: enable in-place update of the image file for each video output frame. |
Point the output file to a place served by your web server to make your camera image available on the web. The ffmpeg command will run until interrupted or killed.
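For example, assuming a web server with its document root at /var/www/html (the path is an assumption; adjust for your setup), simply write the image there instead:

ffmpeg -y -f v4l2 -video_size 1280x720 -i /dev/video0 \
-r 0.2 -qscale:v 2 -update 1 /var/www/html/webcam.jpg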
Add a timestamp to captured images
ffmpeg -y -f v4l2 -video_size 1280x720 -i /dev/video0 \
-r 0.2 \
-vf "drawtext=text=%{localtime}:fontcolor=white@1.0:fontsize=26:borderw=1:x=980:y=25" \
-qscale:v 2 -update 1 /tmp/webcam.jpg
Here we have inserted the drawtext video filter into the processing pipeline. We use its text expansion facilities to render the local time onto each video frame with the filter argument `text=%{localtime}`. The text is placed in the top right corner of the image using the `x` and `y` arguments.
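The hard-coded x=980 assumes a 1280x720 frame. drawtext also accepts expressions, so a variant along these lines (a sketch using drawtext's built-in `w` and `tw` variables for frame width and rendered text width) keeps the timestamp in the top right corner regardless of frame size:

-vf "drawtext=text=%{localtime}:fontcolor=white@1.0:fontsize=26:borderw=1:x=w-tw-20:y=25"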
Running as a background job
You can ssh to the host which has the web camera connected, and start the ffmpeg capture process as a background job:
ffmpeg -y -loglevel fatal \
-f v4l2 -video_size 1280x720 -i /dev/video0 \
-r 0.2 \
-vf "drawtext=text=%{localtime}:fontcolor=white@1.0:fontsize=26:borderw=1:x=980:y=25" \
-qscale:v 2 -update 1 /tmp/webcam.jpg \
</dev/null &>/tmp/webcam-ffmpeg.log & disown $!
This silences ffmpeg to log only fatal errors, runs it in the background and finally detaches the process from your [bash] shell’s job control, to avoid it being killed if you log out. A more polished solution would be to create a systemd service which controls the ffmpeg webcam capture process, running as a dedicated low privilege system user.
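As a sketch of that systemd approach, a unit file along these lines could be saved as /etc/systemd/system/webcam-capture.service. The dedicated webcam user is an assumption; create it, or adapt the unit to your system. Note that % must be doubled in unit files, since systemd reserves it for specifiers:

[Unit]
Description=Capture webcam still images with ffmpeg

[Service]
# Assumed dedicated low privilege user; it needs access to the video group
User=webcam
SupplementaryGroups=video
ExecStart=/usr/bin/ffmpeg -y -loglevel fatal \
  -f v4l2 -video_size 1280x720 -i /dev/video0 \
  -r 0.2 \
  -vf "drawtext=text=%%{localtime}:fontcolor=white@1.0:fontsize=26:borderw=1:x=980:y=25" \
  -qscale:v 2 -update 1 /tmp/webcam.jpg
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable --now webcam-capture.service.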
Creating a time lapse video from a bunch of image files
As a sort of bonus chapter to this post, here is how to create a time lapse video from a bunch of captured image files. Assume you have a directory of JPEG images named so that they sort chronologically by filename (padded sequence numbers or timestamps); the following commands transform them into a video.
VP9 video in WebM container:
ffmpeg -y -f image2 -pattern_type glob -framerate 30 \
-i webcam-images/\*.jpg \
-pix_fmt yuv420p -b:v 1500k timelapsevid.webm
H264 video in MP4 container:
ffmpeg -y -f image2 -pattern_type glob -framerate 30 \
-i webcam-images/\*.jpg \
-pix_fmt yuv420p -b:v 1500k timelapsevid.mp4
| Option | Description |
| --- | --- |
| `-f image2` | The input demuxer is Image2, which can read image files. |
| `-pattern_type glob` | Instructs the Image2 demuxer to treat the input pattern as a file name glob. |
| `-framerate 30` | Set the desired frame rate: how many images to display per second in the resulting video. |
| `-i webcam-images/\*.jpg` | Set the input to a glob pattern matching the image files you would like to include in the video. Note that we do not want the shell to expand the glob, but rather to pass the asterisk verbatim to ffmpeg. |
| `-pix_fmt yuv420p` | Set the video codec pixel format. YUV420p is selected to ensure compatibility with a broad range of decoders/players. |
| `-b:v 1500k` | Set the desired video bitrate. |
Note that all input images should have the same dimensions. Otherwise, you will likely have to add more options to ffmpeg to transform everything to a single suitable video size.
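As a sketch of what such options might look like, a scale-and-pad filter chain normalizes every frame to 1280x720 while preserving aspect ratio (the target size is an assumption; adjust it to your material):

-vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2"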
The resulting video files will be suitable for publishing on the web using the `<video>` tag.
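For instance, a page could offer both variants and let the browser pick the first source it can play:

<video controls width="1280">
  <source src="timelapsevid.webm" type="video/webm">
  <source src="timelapsevid.mp4" type="video/mp4">
</video>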