
Re-encoding the EmacsConf videos with FFmpeg and GNU Parallel

| geek, linux, emacsconf

It turns out that using -crf 56 compressed the EmacsConf videos a little too aggressively, losing too much information in the video. We wanted to re-encode everything, maybe going back to the default value of -crf 32. My laptop would have taken a long time to do all of those videos. Fortunately, one of the other volunteers shared a VM on a machine with 12 cores, and I had access to a few other systems. It was a good opportunity to learn how to use GNU Parallel to send jobs to different machines and retrieve the results.

First, I updated the compression script:

ffmpeg -y -i "$FILE" -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -aq-mode 2 -tile-columns 0 -tile-rows 0 -frame-parallel 0 -cpu-used 8 -auto-alt-ref 1 -lag-in-frames 25 -g 240 -pass 1 -f webm -an -threads 8 /dev/null &&
if [[ $FILE =~ "webm" ]]; then
    # Input is already WebM, so copy the audio stream unchanged
    ffmpeg -y -i "$FILE" $* -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a copy "${FILE%.*}--compressed$SUFFIX.webm"
else
    # Otherwise, transcode the audio to Vorbis
    ffmpeg -y -i "$FILE" $* -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a libvorbis "${FILE%.*}--compressed$SUFFIX.webm"
fi

I made an originals.txt file listing all the original filenames, one per line.


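A minimal way to build such a list, assuming the source videos all sit in one directory and share a naming pattern (the *--main.webm glob here is a guess, not necessarily what the EmacsConf files used):

```shell
# Build originals.txt, one filename per line. The *--main.webm glob is an
# assumed naming convention; adjust it to match the actual files.
printf '%s\n' *--main.webm > originals.txt
```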
I set up a ~/.parallel/emacsconf profile with something like this so that I could use three computers and my laptop, sending one job each and displaying progress:

--sshlogin computer1 --sshlogin computer2 --sshlogin computer3 --sshlogin : -j 1 --progress --verbose --joblog parallel.log

I already had SSH key-based authentication set up so that I could connect to the three remote computers.

Then I spread the jobs over the four computers with the following command, where compress.sh stands for the compression script above (its arguments are the CRF value and the filename):

cat originals.txt | parallel -J emacsconf \
                             --transferfile {} \
                             --return '{=$_ =~ s/\..*?$/--compressed32.webm/=}' \
                             --cleanup \
                             --basefile compress.sh \
                             bash compress.sh 32 {}

It copied each file over to the computer it was assigned to, processed the file, and then copied the file back.

It was also helpful to occasionally run echo 'killall -9 ffmpeg' | parallel -J emacsconf -j 1 --onall to stop stray FFmpeg processes on all the machines if I cancelled a run.

It still took a long time, but less than it would have if any one computer had to crunch through everything on its own.

This was much better than my previous way of doing things, which involved copying the files over, running ffmpeg commands, copying the files back, and getting somewhat confused about which directory I was in and which file I assigned where and what to do about incompletely-encoded files.

I sometimes ran into problems with incompletely-encoded files because I'd cancelled the FFmpeg process partway through. Even though ffprobe reported the full duration, the files were missing a large chunk of video at the end. I added a compile-media-verify-video-frames function to compile-media.el so that I could get the timestamps of the last few seconds of frames, compare them against the reported duration, and report an error if there was a big gap.
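A rough shell analogue of that check (the actual function is Emacs Lisp in compile-media.el; this sketch and its threshold are my own guesses): compare the container's reported duration against the timestamp of the last video packet and flag a large gap.

```shell
# Flag files whose last decodable video packet ends well before the
# container's reported duration -- a sign of a cancelled encode.
check_complete () {
  local file=$1 max_gap=${2:-2}   # max_gap: tolerated gap in seconds (assumed)
  local duration last
  duration=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$file")
  last=$(ffprobe -v error -select_streams v:0 -show_entries packet=pts_time \
           -of csv=p=0 "$file" | tail -n 1)
  # Warn and fail when the gap exceeds the threshold
  if awk -v d="$duration" -v l="$last" -v g="$max_gap" \
         'BEGIN { exit (d - l > g) ? 0 : 1 }'; then
    echo "WARNING: $file looks truncated (duration $duration, last frame $last)" >&2
    return 1
  fi
}
```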

Then I changed emacsconf-publish.el to use the new filenames, and I regenerated all the pages. For EmacsConf 2020, I used some Emacs Lisp to update the files. I'm not particularly fond of wrangling video files (lots of waiting, high chance of error), but I'm glad I got the computers to work together.


Adding an overlay to my webcam via OBS 26.1

| geek, linux

A- likes to change her name roughly every two months, depending on whatever things she’s focusing on. In 2020, she went through six names, giving us plenty of mental exercise and amusement. Pretend play is wonderful and she picks up all sorts of interesting attributes along the way, so we’re totally fine with letting her pretend all the time instead of limiting it to specific times.

A-’s been going to virtual kindergarten. So far, her teachers and classmates have been cool with the name changes. They met her in her Stephanie phase, and they shifted over to Elizabeth without batting an eye. A-’s been experimenting with a new name, though, so I thought I’d try to figure out a way to make the teachers’ lives a little easier. We use Google Meet to connect to class. A- likes to log in as me because then we’re alphabetically sorted close to one of her friends in class, the high-tech equivalent of wanting to sit with your friends. So the name that’s automatically displayed when she’s speaking is no help either.

It turns out that OBS (Open Broadcaster Software) added a virtual webcam feature in version 26.1, and it works on macOS and Linux. I followed the instructions for installing OBS 26.1 on Ubuntu. To enable the virtual webcam device on Linux, I installed v4l2loopback-dkms. I was initially mystified when I got the error “could not insert 'v4l2loopback': Operation not permitted”. That was because I have Secure Boot enabled on my laptop, so I needed to reboot, choose Enroll MOK from the boot menu, and put in the password that I had specified during the setup process. After I did that, clicking on the Start Virtual Camera button in OBS worked. I tested it in Google Meet and the image was properly displayed. I don’t know if we’ll need it, but it’s handy to have in my back pocket in case A- decides to change her name again.
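For reference, the loopback module can also be configured to load with predictable settings via a modprobe config file; this fragment is only an illustration (the device number and label are arbitrary, not something OBS requires):

```
# /etc/modprobe.d/v4l2loopback.conf -- example values
options v4l2loopback devices=1 video_nr=10 card_label="OBS Virtual Camera" exclusive_caps=1
```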

Yay Linux and free software!


Compiling autotrace against GraphicsMagick instead of ImageMagick

| geek, linux

In an extravagant instance of yak-shaving, I found myself trying to compile autotrace so that I could use it to trace my handwriting samples so that I could make a font with FontForge so that I could make worksheets for A- without worrying about finding a handwriting font with the right features and open font licensing.

AutoTrace had been written against ImageMagick 5, but this was no longer easily available. ImageMagick 7 greatly changed the API. Fortunately, GraphicsMagick kept the ImageMagick 5 API. I eventually figured out enough about autoconf and C (neither of which I had really worked with much before) to switch the library paths out while attempting to preserve backward compatibility. I think I got it, but I can’t really tell aside from the fact that compiling it with GraphicsMagick makes the included tests run. Yay!
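The general shape of such a check, as a hypothetical configure.ac fragment (the macro and symbol names here are illustrative; the actual patch in the pull request differs):

```
# Prefer GraphicsMagick's ImageMagick-5-compatible API, falling back
# to ImageMagick if it isn't available. Names are illustrative.
PKG_CHECK_MODULES([MAGICK], [GraphicsMagick],
  [AC_DEFINE([HAVE_MAGICK], [1], [Define if a Magick library is available.])],
  [PKG_CHECK_MODULES([MAGICK], [ImageMagick],
    [AC_DEFINE([HAVE_MAGICK], [1], [Define if a Magick library is available.])])])
```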

At first I tried AC_DEFINE-ing twice in the same package check, but it turns out only the last one sticks, so I moved the other AC_DEFINE to a different condition.

Anyway, here’s the pull request. Whee! I’m coding!


Learning more about Docker

| geek, linux

I’ve been mostly heads-down on parenting for a few years now. A- wasn’t keen on babysitters, so my computing time consisted of a couple of hours during the graveyard shift, after our little night-owl finally settled into bed. I felt like I was treading water: keep Emacs News going, check in and do some consulting once in a while so that the relationship doesn’t go cold, do my weekly reviews, try to automate things here and there.

I definitely felt the gaps between the quick-and-dirty coding I did and the best practices I saw elsewhere. I felt a little anxious about not having development environments and production deployment processes for my personal projects. Whenever I messed up my blog or my web-based tracker, I stayed up extra late to fix things, coding while tired and sleepy and occasionally interrupted by A- needing extra snuggling. I updated whenever I felt it was necessary for security, but the risk discouraged me from trying to make things better.

Lately, though, I feel like I’ve been able to actually have some more focused time to learn new things. A- is a little more used to a bedtime routine, and I no longer have to reserve as much energy and patience for dealing with tantrums. She still sleeps really late, but it’s manageable. And besides, I’d tracked the time I spent playing a game on my phone, so I knew I had a little discretionary time I could use more effectively.

Docker is one of the tools on my to-learn list. I think it will help a lot to have environments that I can experiment with and recreate whenever I want. I tried Vagrant before, but Docker feels a lot lighter-weight.

I started by moving my sketch viewer into a Docker container. It’s a basic Node server with read-only access to my sketches, so that was mostly a matter of changing it to be configured via environment variables and mounting the sketches as a volume. I added dockerfile-mode to my Emacs, made a Dockerfile and a .dockerignore file following the tutorial for Dockerizing a Node.js web app, tried it out on my laptop, and pushed the image to my private Docker hub so that I could pull the image on my server. It turned out that Linode’s kernel had overlay built in instead of compiled as a module, so I followed this tip to fix it.

cat << EOF > /etc/systemd/system/containerd.service.d/override.conf
[Service]
ExecStartPre=
EOF

I also needed to uninstall my old Docker packages and docker-compose, add the Docker PPA, and install docker-ce in order to get docker login to work properly on my server.

The next step was to move my web interface for tracking – not Quantified Awesome, but the button-filled webpage I’ve been using on my phone. I used lots of environment variables for passwords and tokens, so I switched to passing them in with Docker’s --env-file option instead.

In order to move Quantified Awesome or my blog into Docker, I needed a MySQL container that could load my backups, so I put together a docker-compose.yml for one. Loading the SQL was just a matter of mounting the backup files in /docker-entrypoint-initdb.d, and mounting a directory as /var/lib/mysql should help with data persistence. If I added a script that created a user and granted access from '%', I could reach the MySQL server inside the Docker container from my laptop. I didn’t want my MySQL container to be publicly exposed on my server, though. It turned out that Docker bypassed ufw by setting iptables rules directly, so I followed the other instructions in this Stack Overflow answer and added these to the end of my /etc/ufw/after.rules:

:ufw-user-forward - [0:0]
:DOCKER-USER - [0:0]

-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d


There’s more discussion on docker and ufw, but I don’t quite have the brainspace right now to fully understand it.
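For completeness, the backup-loading MySQL container described above could be sketched as a docker-compose.yml along these lines; the image tag, paths, and password handling are all assumptions, not the exact file used here:

```yaml
# Hypothetical sketch of the backup-loading MySQL service
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # use an env file or secret in practice
    volumes:
      - ./backups:/docker-entrypoint-initdb.d:ro   # SQL dumps loaded on first start
      - ./mysql-data:/var/lib/mysql                # data persistence
    ports:
      - "127.0.0.1:3306:3306"   # bind to localhost only
```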

Anyway. Progress. My sketch viewer is in a Docker container, and so is my button-based time tracker. I have a Docker container that I can use to load SQL backups, and I can connect to it for testing. The next step would probably be to try moving Quantified Awesome into a Docker container that talks to my MySQL container. If I can get that working, then I can try moving my blog into a container too.

Yesterday was for sleeping. Today I wanted to clean up my notes and post them, since I’ll forget too much if I keep going. More coding will have to wait for tomorrow–or maybe the day after, if I use some time for consulting instead. But slow progress is still progress, and it’s nice to feel like more of a geek again.


Experimenting with adding labels to photos

| geek, linux

A-’s gotten really interested in letters lately, so I’m looking for ways to add more personally relevant text and images to her life. She often flips through the 4″x6″ photos I got printed. Since they’re photos, they’re sturdier and can stand up to a little bending. I wanted to experiment with printing more pictures and labeling them with text, building on the script I used to label our toy storage bins. Here’s a sample:

2018-06-14-16-12-14 ★★★ A- gently lifted the flaps in the book. 🏷fine-motor

(Name still elided in the sample until I figure out what we want to do with names and stuff. I’ll use NAME_REPLACEMENTS to put her name into the printed ones, though, since kids tend to learn how to read their names first.)

I haven’t printed these out yet to see how legible they are, but a quick on-screen check looks promising.



#!/bin/bash
# Add label from ImageDescription EXIF tag to bottom of photo

# These variables were undefined in the posted snippet;
# the values here are assumed defaults -- adjust to taste.
description_field=ImageDescription
output_width_inches=6
output_height_inches=4
border=50
pointsize=48
font=Helvetica
destination=labeled

IFS=$(echo -en "\n\b")

# NAME_REPLACEMENTS is an environment variable that's a sed expression
# for any name replacements, like s/A-/Actual name goes here/g

mkdir -p "$destination"
for file in "$@"; do
  description=$(exiftool -s -s -s -$description_field "$file" \
     | sed "s/\"/\\\"/g;s/'/\\\'/g;$NAME_REPLACEMENTS")
  date=$(exiftool -s -s -s -DateTimeOriginal "$file" \
     | cut -c 1-10 | sed s/:/-/g)
  width=$(identify -format "%w" "$file")
  height=$(identify -format "%h" "$file")
  largest=$(( $width > $height ? $width : $height ))
  density=$(( $largest / $output_width_inches ))
  correct_height=$(( $output_height_inches * $density ))
  captionwidth=$(( $width - $border * 2 ))
  convert "$file" -density $density -units PixelsPerInch \
    -gravity North -extent ${width}x${correct_height} \
    -strip \( -undercolor white -background white \
    -fill black -font "$font" -bordercolor White \
    -gravity SouthWest -border $border -pointsize $pointsize \
    -size ${captionwidth}x caption:"$date $description" \) \
    -composite "$destination/$file"
done
gwenview "$destination"

Here’s my current rename-based-on-exif script, too. I modified it to use the ImageDescription or the UserComment field, and I switched to using Unicode stars and labels instead of # to minimize problems with exporting to HTML.


date="\${DateTimeOriginal;s/[ :]/-/g}"
rating="\${Rating;s/([1-5])/'★' x \$1/e}"
tags="\${Subject;s/^/🏷/;s/, / 🏷/g}"
field=FileName  # TestName for testing
exiftool -m -"$field<$date $rating \${ImageDescription} $tags.%e" \
         -"$field<$date $rating \${UserComment} $tags.%e" "$@"

In order to upload my fancy-shmancy Unicode-filenamed files, I also had to convert my WordPress database from utf8 to utf8mb4. This upgrade plugin was very helpful.
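Under the hood, that conversion boils down to statements like the following, which the plugin runs across the whole schema (the database and table names here are illustrative):

```sql
-- Convert the database default and one table to utf8mb4
ALTER DATABASE wordpress CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
ALTER TABLE wp_posts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
```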


Oops report: Moving from i386 to amd64 on my server

| geek, linux

I was trying to install Docker on my Linode virtual private server so that I could experiment with containers. I had a problem with the error “no supported platform found in manifest list.” Eventually, I realized that dpkg --print-architecture showed that my Ubuntu package architecture was i386 even though my server was 64-bit. That was probably due to upgrading in-place through the years, starting with a 32-bit version of Ubuntu 10.

I tried dpkg --add-architecture amd64, which let me install the docker-ce package from the Docker repository. Unfortunately, I didn’t review it carefully enough (the perils of SSHing from my cellphone), and installing that removed a bunch of other i386 packages like sudo, ssh, and screen. Ooops!

Even though we’ve been working on weaning lately, I decided that letting A- nurse a long time in her sleep might give me a little time to try to fix things. I used Linode’s web-based console to try to log in. I forgot the root password, so I used their tool for resetting the root password. After I got that sorted out, though, I found that I couldn’t resolve network resources. I’d broken the system badly enough that I needed to use another rescue tool to mount my drives, chroot to them, and install stuff from there. I was still getting stuck. I needed more focused time.

Fortunately, I’d broken my server during the weekend, so W- was around to take care of A- while I tried to figure things out. I had enough free space to create another root partition and install Ubuntu 16, which was a straightforward process with Linode’s Deploy Image tool.

I spent a few hours trying to figure out if I could set everything up in Docker containers from the start. I got the databases working, but I kept getting stymied by annoying WordPress redirection issues even after setting home and siteurl in the database and defining them in my config file. I tried adding Nginx reverse proxying to the mix, and it got even more tangled.

Eventually, I gave up and went back to running the services directly on my server. Because I did the new install in a separate volume, it was easy to mount the old volume and copy or symlink my configuration files.

Just in case I need to do this again, here are the packages that apt says I installed:

  • General:
    • screen
    • apt-transport-https
    • ca-certificates
    • curl
    • dirmngr
    • gnupg
    • software-properties-common
    • borgbackup
  • For the blog:
    • mysql-server
    • php-fpm
    • php-mysql
    • php-xml
  • For Quantified Awesome:
    • ruby-bundler
    • ruby-dev
  • For experimenting:
    • docker-compose
  • For compiling Emacs:
    • make
    • gcc
    • g++
    • zlib1g-dev
    • libmysqlclient-dev
    • autoconf
    • texinfo
    • gnutls-dev
    • ncurses-dev
  • From external repositories:

I got the list by running:

zgrep 'Commandline: apt' /var/log/apt/history.log /var/log/apt/history.log.*.gz

I saved my selections with dpkg --get-selections so that I can load them with dpkg --set-selections < ...; apt-get dselect-upgrade if I need to do this again.

Symbolic links to old volume:

  • /var/www
  • /usr/local
  • /home/sacha
  • /var/lib/mysql (after installing)

Copied after installing – I’ll probably want to tidy this up:

  • /etc/nginx/sites-available
  • /etc/nginx/sites-enabled

Lessons learned:

  • Actually check the list of packages to remove.
  • Consider fresh installs for major upgrades.

When things settle down, I should probably look into organizing one of the volumes as a proper data volume so that I can cleanly reinstall the root partition whenever I want to.

I also want to explore Docker again – maybe once I’ve wrapped my mind around how Docker, Nginx, WordPress, Passenger, virtual hosts, and subdirectories all fit together. Still, I’m glad I got my site up and running again!


Extracting the xinput device number instead of hardcoding it

| geek, linux

I’ve been using my wireless mouse more often these days. X detected it fine and it works without a hitch, hooray! The downside is that as an additional input device, it threw my xinput device numbering off, so the script I was using to rotate the stylus input along with the screen on my tablet PC stopped working. It was easy enough to fix by extracting the device number from the output of xinput with the cut command.

The relevant changes were:

xsetwacom set $(xinput | grep eraser | cut -c 55-56) rotate $direction
xsetwacom set $(xinput | grep touch | cut -c 55-56) rotate $direction
xsetwacom set $(xinput | grep stylus | cut -c 55-56) rotate $direction
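Those fixed column positions (cut -c 55-56) depend on where xinput happens to line up its output; a sturdier variant (my own sketch, not the script's current code) pulls the id= field out with sed instead:

```shell
# Extract the numeric device id for the first device whose name matches
# a pattern, regardless of which column id= lands in.
xinput_id () {
  xinput list | sed -n "s/.*$1.*id=\([0-9]*\).*/\1/p" | head -n 1
}
# e.g. xsetwacom set "$(xinput_id stylus)" rotate $direction
```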

My rotate-screen script on GitHub
