
EU puts pressure on smartphone manufacturers

In August of 2020, I wrote a post about how smartphone manufacturers fail to provide a long enough period of security updates for the devices they sell. Leaving the market to itself has obviously led to planned obsolescence becoming the norm for Android-based devices, where it is necessary to buy a new phone every 2–3 years to stay secure. But things might change for the better. The European Commission is planning to extend [1] its Ecodesign and Energy labelling directive to also apply to smartphones (and similar products), and with it come requirements for repairability and a minimum security update support period. The current proposal is a 5-year period for such products, which is great news. Going further, Germany is lobbying [2] for a 7-year support period for updates and spare parts. It will be interesting to see the outcome.

On a personal note, I ended up buying a new Samsung S21 phone after Sony stopped updates for my two-year-old Xperia Compact. The Samsung phone is too big, but I could not find a better alternative, and I will likely get at least 4 years of updates. I have no need to replace my smartphone every 2 years and contribute to such ridiculous resource waste.

References

  1. Heise online article (translated to English):
    https://www-heise-de.translate.goog/news/EU-plant-Energielabel-und-strenge-Umweltregeln-fuer-Smartphones-und-Tablets-6171979.html?_x_tr_sl=auto&_x_tr_tl=en
  2. Heise online article (translated to English):
    https://www-heise-de.translate.goog/news/Bundesregierung-Smartphones-sollen-sieben-Jahre-lang-Updates-erhalten-6179995.html?_x_tr_sl=auto&_x_tr_tl=en

Capture images from a webcam using ffmpeg

The examples are for Linux and access the web camera through the Video4Linux2 interface. To control web camera settings, use the tool v4l2-ctl. To list connected camera devices, you can use the command: v4l2-ctl --list-devices. On a typical Debian-ish Linux distro, you will also want to add your user to the video and audio groups, so that you can easily access the webcam from a non-desktop session.
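For example, a short sketch of the setup (control names vary between camera models, so list what your camera actually exposes before setting anything; /dev/video0 and the brightness control are just assumptions here):

# Allow your user to access video/audio devices (takes effect on next login)
sudo usermod -aG video,audio "$USER"

# List connected capture devices and their /dev/video* nodes
v4l2-ctl --list-devices

# Show the controls this particular camera exposes, with current values
v4l2-ctl -d /dev/video0 --list-ctrls

# Example: fix the brightness, assuming the camera exposes such a control
v4l2-ctl -d /dev/video0 --set-ctrl brightness=128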

Capture to an image file, continually overwriting it with new contents

ffmpeg -y -f v4l2 -video_size 1280x720 -i /dev/video0 \
       -r 0.2 -qscale:v 2 -update 1 /tmp/webcam.jpg
Options breakdown:

-f v4l2               specify input format explicitly as capture from a Video4Linux2 device
-video_size 1280x720  specify video frame size from webcam
-i /dev/video0        select input device (a UVC-compatible webcam in my case)
-r 0.2                set output frame rate to one frame per 5 seconds
-qscale:v 2           set video quality [JPEG quality in this case]; 2 is the highest quality
-update 1             Image2 muxer option; enable in-place update of the image file for each output video frame

Point the output file to a place served by your web server to make your camera image available on the web. The ffmpeg command will run until interrupted or killed.

Add a timestamp to captured images

ffmpeg -y -f v4l2 -video_size 1280x720 -i /dev/video0 \
       -r 0.2 \
       -vf "drawtext=text=%{localtime}:fontcolor=white@1.0:fontsize=26:borderw=1:x=980:y=25" \
       -qscale:v 2 -update 1 /tmp/webcam.jpg

Here we have inserted the drawtext video filter into the processing pipeline. We use its text expansion facilities to render the local time onto each video frame with the filter argument text=%{localtime}. The text is placed in the top right corner of the image using the x and y arguments.

Running as background job

You can ssh to the host which has the web camera connected, and start the ffmpeg capture process as a background job:

ffmpeg -y -loglevel fatal \
       -f v4l2 -video_size 1280x720 -i /dev/video0 \
       -r 0.2 \
       -vf "drawtext=text=%{localtime}:fontcolor=white@1.0:fontsize=26:borderw=1:x=980:y=25" \
       -qscale:v 2 -update 1 /tmp/webcam.jpg \
       </dev/null &>/tmp/webcam-ffmpeg.log & disown $!

This silences ffmpeg so that it logs only fatal errors, runs it in the background and finally detaches the process from your [bash] shell’s job control, to avoid it being killed if you log out. A more polished solution would be to create a systemd service which controls the ffmpeg webcam capture process, running as a dedicated low-privilege system user.
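A minimal sketch of such a unit, assuming a dedicated webcam system user in the video group and the hypothetical unit name webcam-capture.service:

# /etc/systemd/system/webcam-capture.service (hypothetical)
[Unit]
Description=Webcam JPEG capture with ffmpeg

[Service]
User=webcam
Group=video
ExecStart=/usr/bin/ffmpeg -y -loglevel fatal \
    -f v4l2 -video_size 1280x720 -i /dev/video0 \
    -r 0.2 -qscale:v 2 -update 1 /tmp/webcam.jpg
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable --now webcam-capture.service. The drawtext filter is omitted here for brevity; if you add it, note that % characters must be escaped as %% in systemd unit files.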

Creating a time lapse video from a bunch of image files

As a sort of bonus chapter to this post, here is how to create a time lapse video from a bunch of captured image files, assuming you have a directory of JPEG images named so that they sort chronologically by filename (padded sequence numbers or timestamps).

VP9 video in WebM container:

ffmpeg -y -f image2 -pattern_type glob -framerate 30 \
       -i webcam-images/\*.jpg \
       -pix_fmt yuv420p -b:v 1500k timelapsevid.webm

H264 video in MP4 container:

ffmpeg -y -f image2 -pattern_type glob -framerate 30 \
       -i webcam-images/\*.jpg \
       -pix_fmt yuv420p -b:v 1500k timelapsevid.mp4
Options breakdown:

-f image2                input demuxer is Image2, which can read image files
-pattern_type glob       instructs the Image2 demuxer to treat the input pattern as a file name glob
-framerate 30            set desired frame rate: how many images to display per second in the resulting video
-i webcam-images/\*.jpg  set input to a glob pattern matching the image files you would like to include in the video; note that we do not want the shell to expand the glob, but rather pass the asterisk verbatim to ffmpeg
-pix_fmt yuv420p         set video codec pixel format; yuv420p is selected to ensure compatibility with a broad range of decoders/players
-b:v 1500k               set desired video bitrate

Note that all input images should have the same dimensions. Otherwise, you will likely have to add more options to ffmpeg to transform everything to a single suitable video size.
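For instance, a scale/pad filter chain can normalize every frame; a sketch, assuming a 1280x720 target size:

ffmpeg -y -f image2 -pattern_type glob -framerate 30 \
       -i webcam-images/\*.jpg \
       -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:(ow-iw)/2:(oh-ih)/2" \
       -pix_fmt yuv420p -b:v 1500k timelapsevid.webm

The scale filter shrinks each image to fit within 1280x720 while keeping its aspect ratio, and pad centers it on a 1280x720 canvas.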

The resulting video files will be suitable for publishing on the web using the <video> tag.
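For example, offering both formats and letting the browser pick the first one it can decode:

<video controls width="1280">
  <source src="timelapsevid.webm" type="video/webm">
  <source src="timelapsevid.mp4" type="video/mp4">
</video>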


How to make a shell script log JSON-messages

If you have a shell script running in some environment where logs are expected to be formatted as JSON, it can be cumbersome to ensure that all commands in the script output valid single-line JSON-formatted messages instead of raw lines of text, which is what a shell script commonly produces. Here I present a technique which requires very few modifications to the script to make it output structured JSON instead of raw text lines.

We will set up a bash script so that its regular output is automatically redirected to a JSON encoder co-process. This is done at the beginning of the script, and subsequent commands’ output will automatically be wrapped in JSON messages. It requires the jq command to be present on the system where the script runs.

#!/usr/bin/env bash

# 1 Make copies of shell's current stdout and stderr file
# descriptors:
exec 100>&1 200>&2

# 2 Define function which logs arguments or stdin as JSON-message to stdout:
log() {
    if [ "$1" = - ]; then
        jq -Rsc '{"@timestamp": now|strftime("%Y-%m-%dT%H:%M:%S%z"),
                  "message":.}' 1>&100 2>&200
    else
        jq --arg m "$*" -nc '{"@timestamp": now|strftime("%Y-%m-%dT%H:%M:%S%z"),
                              "message":$m}' 1>&100 2>&200
    fi
}

# 3 Start a co-process which transforms input lines to JSON messages:
coproc JSON_LOGGER { jq --unbuffered -Rc \
      '{"@timestamp": now|strftime("%Y-%m-%dT%H:%M:%S%z"),
        "message":.}' 1>&100 2>&200; }

# 4 Finally redirect shell's stdout/stderr to JSON logger
# co-process:
exec 1>&${JSON_LOGGER[1]} 2>&${JSON_LOGGER[1]}

# What follows is whatever you need your script to do

echo Hello brave world
echo '  testing "escaping" and white  space  '
echo >&2 this goes to stderr

uname

# If we want multiple output lines from a single command
# wrapped in a single JSON-message, we need to pipe it to the log function:
curl -sS --head https://api.github.com/|head -n 3|log -

# .. otherwise, each curl output line would become its own
# JSON-encoded message, which may not be desirable.

Output to a terminal with color support should look something like this:

{"@timestamp":"2021-07-01T14:55:15+02:00","message":"Hello brave world"}
{"@timestamp":"2021-07-01T14:55:15+02:00","message":"  testing \"escaping\" and white  space  "}
{"@timestamp":"2021-07-01T14:55:15+02:00","message":"this goes do stderr"}
{"@timestamp":"2021-07-01T14:55:15+02:00","message":"Linux"}
{"@timestamp":"2021-07-01T14:55:16+02:00","message":"HTTP/2 200 \r\nserver: GitHub.com\r\ndate: Thu, 01 Jul 2021 12:55:10 GMT\r"}

Jq will take care of all the necessary escaping and always produce valid single line JSON-structured messages, regardless of message payload.

Notes

  • You could make the above setup reusable by putting the code in its own file and sourcing it at the beginning of scripts that need it.
  • High precision timestamps cannot be generated natively in jq. If you need millisecond or higher precision on the log event timestamps, there are ways to do it, depending on the shell and command line tools available. If you have bash >= 5, you can modify the log() function so that the timestamp string is generated using the expression
    $(date +%Y-%m-%dT%H:%M:%S).${EPOCHREALTIME#*[.,]}$(date +%z). You’ll need to pass this as an --arg to jq for each invocation and use it in the JSON template; see the first sketch after this list. It is harder to accomplish for the log transformer co-process, because ideally we’d like only a single persistent jq process running, for efficiency reasons and to avoid pipeline buffering.
  • You could easily expand the structured log messages by adding JSON fields to the jq templates. For example, you could add a level field, to indicate log level, or a hostname field using the $HOSTNAME shell variable; see the second sketch after this list.
  • Building on the previous point, you could create two separate JSON encoder functions, where one handles stderr messages and logs them at level ERROR, while another one logs regular stdout as level INFO. Then create two co-processes for stdout and stderr, with different jq JSON templates respectively. Finally, redirect main stdout to the first co-process and stderr to the second.
  • For the log transformer co-process, be aware that pipeline buffering can have unfortunate effects, for instance missing the last log events before script exits. This is why jq is invoked with --unbuffered.
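To illustrate the high-precision timestamp note, here is a minimal sketch of a modified log() function, assuming bash >= 5 (for EPOCHREALTIME) and the same file descriptor setup as in the script above:

log() {
    # EPOCHREALTIME holds seconds and a fraction, e.g. 1625144115.123456;
    # strip everything up to the (locale-dependent) decimal separator to
    # keep only the fractional digits:
    local ts
    ts="$(date +%Y-%m-%dT%H:%M:%S).${EPOCHREALTIME#*[.,]}$(date +%z)"
    if [ "$1" = - ]; then
        jq --arg ts "$ts" -Rsc '{"@timestamp":$ts,"message":.}' 1>&100 2>&200
    else
        jq --arg ts "$ts" --arg m "$*" -nc '{"@timestamp":$ts,"message":$m}' 1>&100 2>&200
    fi
}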
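And to illustrate the extra-fields note, the co-process template could be extended like this (the INFO level and the hostname field are just example choices):

coproc JSON_LOGGER { jq --unbuffered -Rc --arg h "$HOSTNAME" \
      '{"@timestamp": now|strftime("%Y-%m-%dT%H:%M:%S%z"),
        "hostname":$h, "level":"INFO",
        "message":.}' 1>&100 2>&200; }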