
Figuring out how to use ffmpeg to mask a chroma-keyed video based on the differences between images

| linux, geek, ffmpeg, video

A- is really into Santa and Christmas because of the books she's read. Last year, she wanted to set up the GoPro to capture footage during Christmas Eve. I helped her set it up for a timelapse video. After she went to bed, we gradually positioned the presents. I extracted the frames from the video, removed the ones that caught us moving around, and then used Krita's new animation features to animate sparkles so that the presents magically appeared. She mentioned the sparkles a number of times during her deliberations about whether Santa exists or not.

This year, I want to see if I can use green-screen videos like this reversed-spin sparkle or this other sparkle video. I'm going to take a series of images, with each image adding one more gift. Then I'm going to make a mask in Krita with white covering the gift and a transparent background for the rest of the image. Then I'll use chroma-key to drop out the green screen of the sparkle video and mask it in so that the sparkles only happen within the boundaries of the gift that was added. I also want to fade one image into the other, and I want the sparkles to fade out as the gift appears.

Figuring things out

I didn't know how to do any of that yet with ffmpeg, so here's how I started figuring things out. First, I wanted to see how to fade test.jpg into test2.jpg over 4 seconds.

ffmpeg -y -loop 1 -i test.jpg -loop 1 -i test2.jpg -filter_complex "[1:v]fade=t=in:d=4:alpha=1[fadein];[0:v][fadein]overlay[out]" -map "[out]" -r 1 -t 4 -shortest test.webm

Here's another way using the blend filter:

ffmpeg -y -loop 1 -i test.jpg -loop 1 -i test2.jpg -filter_complex "[1:v][0:v]blend=all_expr='A*(if(gte(T,4),1,T/4))+B*(1-(if(gte(T,4),1,T/4)))'" -t 4 -r 1 test.webm
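The blend expression is easier to read as a function of time. Here A is the incoming image ([1:v], the first input to blend) and B is the original; A's weight ramps linearly from 0 to 1 over four seconds and then clamps at 1. A small Python sketch of the same arithmetic, just for illustration:

```python
def fade_weight(t, duration=4):
    """Weight of the incoming image A at time t.
    Mirrors ffmpeg's if(gte(T,4),1,T/4): ramp to 1, then hold."""
    return 1.0 if t >= duration else t / duration

def blend(a, b, t, duration=4):
    """Per-pixel crossfade A*w + B*(1-w), as in the all_expr above."""
    w = fade_weight(t, duration)
    return a * w + b * (1 - w)
```

At t=0 you see only B (the first image), at t=2 an even mix, and from t=4 onward only A.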

Then I looked into chromakeying in the other video. I used balloons instead of sparkles just in case she happened to look at my screen.

ffmpeg -y -i test.webm -i balloons.mp4 -filter_complex "[1:v]chromakey=0x00ff00:0.1:0.2[ckout];[0:v][ckout]overlay[out]" -map "[out]" -shortest -r 1 overlaid.webm

I experimented with the alphamerge filter.

ffmpeg -y -i test.jpg -i test2.jpg -i mask.png -filter_complex "[1:v][2:v]alphamerge[a];[0:v][a]overlay[out]" -map "[out]" masked.jpg

Okay! That overlaid test.jpg with a masked part of test2.jpg. How about alphamerging in a video? First, I need a mask video…

ffmpeg -y -loop 1 -i mask.png -r 1 -t 4 mask.webm

Then I can combine that:

ffmpeg -loglevel 32 -y -i test.webm -i balloons.mp4 -i mask.webm -filter_complex "[1:v][2:v]alphamerge[masked];[0:v][masked]overlay[out]" -map "[out]" -r 1 -t 4 alphamerged.webm

Great, let's figure out how to combine chroma-key and alphamerge video. The naive approach doesn't work, probably because they're both messing with the alpha layer.

ffmpeg -loglevel 32 -y -i test.webm -i balloons.mp4 -i mask.webm -filter_complex "[1:v]chromakey=0x00ff00:0.1:0.2[ckout];[ckout][2:v]alphamerge[masked];[0:v][masked]overlay[out]" -map "[out]" -r 1 -t 4 masked.webm

So I probably need to blend the chromakey and the mask. Let's see if I can extract the chromakey alpha.

ffmpeg -loglevel 32 -y -i test.webm -i balloons.mp4 -i mask.webm -filter_complex "[1:v]chromakey=0x00ff00:0.1:0.2,format=rgba,alphaextract[out]" -map "[out]" -r 1 -t 4 chroma-alpha.webm

Now let's blend it with the mask.webm.

ffmpeg -loglevel 32 -y -i test.webm -i balloons.mp4 -i mask.webm -filter_complex "[1:v]chromakey=0x00ff00:0.1:0.2,format=rgba,alphaextract[ckalpha];[ckalpha][2:v]blend=all_mode=and[out]" -map "[out]" -r 1 -t 4 masked-alpha.webm
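blend=all_mode=and does a bitwise AND of the two frames. For the pure black-and-white masks here, that's an intersection: a pixel stays white only where both the chroma-key alpha and the hand-drawn mask are white. Roughly, per 8-bit pixel (a Python sketch):

```python
def intersect_masks(chroma_alpha, mask):
    """Bitwise AND of two 8-bit masks, as in blend=all_mode=and.
    For 0/255 masks this keeps white only where both are white."""
    return [a & b for a, b in zip(chroma_alpha, mask)]
```

For intermediate grays a bitwise AND is less intuitive (e.g. 128 & 127 is 0), but the 0/255 case is what matters for this pipeline.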

Then let's use it as the alpha:

ffmpeg -loglevel 32 -y -i test.webm -i balloons.mp4 -i masked-alpha.webm -filter_complex "[2:v]format=rgba[mask];[1:v][mask]alphamerge[masked];[0:v][masked]overlay[out]" -map "[out]" -r 1 -t 4 alphamerged.webm

Okay, that worked! Now how do I combine everything into one command? Hmm…

ffmpeg -loglevel 32 -y -loop 1 -i test.jpg -t 4 -loop 1 -i test2.jpg -t 4 -i balloons.mp4 -loop 1 -i mask.png -t 4 -filter_complex "[1:v][0:v]blend=all_expr='A*(if(gte(T,4),1,T/4))+B*(1-(if(gte(T,4),1,T/4)))'[fade];[2:v]chromakey=0x00ff00:0.1:0.2,format=rgba,alphaextract[ckalpha];[ckalpha][3:v]blend=all_mode=and,format=rgba[maskedalpha];[2:v][maskedalpha]alphamerge[masked];[fade][masked]overlay[out]" -map "[out]" -r 5 -t 4 alphamerged.webm

Then I wanted to fade the masked video out by the end.

ffmpeg -loglevel 32 -y -loop 1 -i test.jpg -t 4 -loop 1 -i test2.jpg -t 4 -i balloons.mp4 -loop 1 -i mask.png -t 4 -filter_complex "[1:v][0:v]blend=all_expr='A*(if(gte(T,4),1,T/4))+B*(1-(if(gte(T,4),1,T/4)))'[fade];[2:v]chromakey=0x00ff00:0.1:0.2,format=rgba,alphaextract[ckalpha];[ckalpha][3:v]blend=all_mode=and,format=rgba[maskedalpha];[2:v][maskedalpha]alphamerge[masked];[masked]fade=type=out:st=2:d=1:alpha=1[maskedfade];[fade][maskedfade]overlay[out]" -map "[out]" -r 10 -t 4 alphamerged.webm

Making the video

When A- finally went to bed, we arranged the presents, using the GoPro to take a picture at each step of the way. I cropped and resized the images, using Krita to figure out the cropping rectangle and offset.

for FILE in *.JPG; do convert "$FILE" -crop 1558x876+473+842 -resize 1280x720 "cropped/$FILE"; done
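The 1558x876 crop rectangle has essentially the same 16:9 aspect ratio as the 1280x720 target, so the resize doesn't distort the image. A quick check:

```python
def aspect(width, height):
    """Width-to-height ratio."""
    return width / height

crop_ratio = aspect(1558, 876)     # the Krita-derived crop rectangle
target_ratio = aspect(1280, 720)   # the resize target (16:9)
```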

I used ImageMagick to calculate the masks automatically.

files=(*.JPG); len=${#files[@]}; i=0; j=1   # compare consecutive frames
while [ "$j" -lt "$len" ]; do
  compare -fuzz 15% cropped/${files[$i]} cropped/${files[$j]} -compose Src -highlight-color White -lowlight-color Black masks/${files[$j]}
  convert -morphology Open Disk -morphology Close Disk -blur 20x5 masks/${files[$j]} processed-masks/${files[$j]}
  i=$j; j=$((j + 1))
done
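compare -fuzz 15% marks a pixel white when the colour difference between the two images exceeds 15% of the maximum and black otherwise, which is what turns "one more gift appeared" into a rough stencil. A simplified grayscale version of that per-pixel decision (the real comparison works across colour channels):

```python
def diff_mask(before, after, fuzz=0.15):
    """White (255) where pixels differ by more than fuzz * 255, else black.
    A grayscale approximation of ImageMagick's `compare -fuzz 15%`."""
    threshold = fuzz * 255
    return [255 if abs(a - b) > threshold else 0
            for a, b in zip(before, after)]
```

The morphology open/close and blur steps then clean up speckles and soften the stencil's edges.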

Then I faded the images together to make a video.

import ffmpeg
import glob
files = sorted(glob.glob("images/cropped/*.JPG"))  # sort so frames are in shooting order
fps = 15
crf = 32
out = ffmpeg.input(files[0], loop=1, r=fps)
duration = 3
for i in range(1, len(files)):
    out = ffmpeg.filter([out, ffmpeg.input(files[i], loop=1, r=fps).filter('fade', t='in', d=duration, st=i*duration, alpha=1)], 'overlay')
args = out.output('images.webm', t=len(files) * duration, r=fps, y=None, crf=crf).compile()
print(' '.join(f'"{item}"' for item in args))

"ffmpeg" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2317.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2318.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2319.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2320.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2321.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2322.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2323.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2324.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2325.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2326.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2327.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2328.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2329.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2330.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2331.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2332.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2333.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2334.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2335.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2336.JPG" "-loop" "1" "-r" "15" "-i" "images/cropped/GOPR2337.JPG" "-filter_complex" 
"[1]fade=alpha=1:d=3:st=3:t=in[s0];[0][s0]overlay[s1];[2]fade=alpha=1:d=3:st=6:t=in[s2];[s1][s2]overlay[s3];[3]fade=alpha=1:d=3:st=9:t=in[s4];[s3][s4]overlay[s5];[4]fade=alpha=1:d=3:st=12:t=in[s6];[s5][s6]overlay[s7];[5]fade=alpha=1:d=3:st=15:t=in[s8];[s7][s8]overlay[s9];[6]fade=alpha=1:d=3:st=18:t=in[s10];[s9][s10]overlay[s11];[7]fade=alpha=1:d=3:st=21:t=in[s12];[s11][s12]overlay[s13];[8]fade=alpha=1:d=3:st=24:t=in[s14];[s13][s14]overlay[s15];[9]fade=alpha=1:d=3:st=27:t=in[s16];[s15][s16]overlay[s17];[10]fade=alpha=1:d=3:st=30:t=in[s18];[s17][s18]overlay[s19];[11]fade=alpha=1:d=3:st=33:t=in[s20];[s19][s20]overlay[s21];[12]fade=alpha=1:d=3:st=36:t=in[s22];[s21][s22]overlay[s23];[13]fade=alpha=1:d=3:st=39:t=in[s24];[s23][s24]overlay[s25];[14]fade=alpha=1:d=3:st=42:t=in[s26];[s25][s26]overlay[s27];[15]fade=alpha=1:d=3:st=45:t=in[s28];[s27][s28]overlay[s29];[16]fade=alpha=1:d=3:st=48:t=in[s30];[s29][s30]overlay[s31];[17]fade=alpha=1:d=3:st=51:t=in[s32];[s31][s32]overlay[s33];[18]fade=alpha=1:d=3:st=54:t=in[s34];[s33][s34]overlay[s35];[19]fade=alpha=1:d=3:st=57:t=in[s36];[s35][s36]overlay[s37];[20]fade=alpha=1:d=3:st=60:t=in[s38];[s37][s38]overlay[s39]" "-map" "[s39]" "-crf" "32" "-r" "15" "-t" "63" "-y" "images.webm"

Next, I faded the masks together. These ones faded in and out so that only one mask was active at a time.

import ffmpeg
import glob
files = sorted(glob.glob("images/processed-masks/*.JPG"))  # sort so masks line up with the images
files = files[:-2]  # Omit the last two, where I'm just turning off the lights
fps = 15
crf = 32
out = ffmpeg.input('color=black:s=1280x720', f='lavfi', r=fps)
duration = 3
for i in range(0, len(files)):
    out = ffmpeg.filter([out, ffmpeg.input(files[i], loop=1, r=fps).filter('fade', t='in', d=1, st=(i + 1)*duration, alpha=1).filter('fade', t='out', st=(i + 2)*duration - 1)], 'overlay')
args = out.output('processed-masks.webm', t=len(files) * duration, r=fps, y=None, crf=crf).compile()
print(' '.join(f'"{item}"' for item in args))

"ffmpeg" "-f" "lavfi" "-r" "15" "-i" "color=s=1280x720" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2318.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2319.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2320.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2321.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2322.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2323.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2324.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2325.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2326.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2327.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2328.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2329.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2330.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2331.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2332.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2333.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2334.JPG" "-loop" "1" "-r" "15" "-i" "images/processed-masks/GOPR2335.JPG" "-filter_complex" 
"[1]fade=alpha=1:d=1:st=3:t=in[s0];[s0]fade=st=5:t=out[s1];[0][s1]overlay[s2];[2]fade=alpha=1:d=1:st=6:t=in[s3];[s3]fade=st=8:t=out[s4];[s2][s4]overlay[s5];[3]fade=alpha=1:d=1:st=9:t=in[s6];[s6]fade=st=11:t=out[s7];[s5][s7]overlay[s8];[4]fade=alpha=1:d=1:st=12:t=in[s9];[s9]fade=st=14:t=out[s10];[s8][s10]overlay[s11];[5]fade=alpha=1:d=1:st=15:t=in[s12];[s12]fade=st=17:t=out[s13];[s11][s13]overlay[s14];[6]fade=alpha=1:d=1:st=18:t=in[s15];[s15]fade=st=20:t=out[s16];[s14][s16]overlay[s17];[7]fade=alpha=1:d=1:st=21:t=in[s18];[s18]fade=st=23:t=out[s19];[s17][s19]overlay[s20];[8]fade=alpha=1:d=1:st=24:t=in[s21];[s21]fade=st=26:t=out[s22];[s20][s22]overlay[s23];[9]fade=alpha=1:d=1:st=27:t=in[s24];[s24]fade=st=29:t=out[s25];[s23][s25]overlay[s26];[10]fade=alpha=1:d=1:st=30:t=in[s27];[s27]fade=st=32:t=out[s28];[s26][s28]overlay[s29];[11]fade=alpha=1:d=1:st=33:t=in[s30];[s30]fade=st=35:t=out[s31];[s29][s31]overlay[s32];[12]fade=alpha=1:d=1:st=36:t=in[s33];[s33]fade=st=38:t=out[s34];[s32][s34]overlay[s35];[13]fade=alpha=1:d=1:st=39:t=in[s36];[s36]fade=st=41:t=out[s37];[s35][s37]overlay[s38];[14]fade=alpha=1:d=1:st=42:t=in[s39];[s39]fade=st=44:t=out[s40];[s38][s40]overlay[s41];[15]fade=alpha=1:d=1:st=45:t=in[s42];[s42]fade=st=47:t=out[s43];[s41][s43]overlay[s44];[16]fade=alpha=1:d=1:st=48:t=in[s45];[s45]fade=st=50:t=out[s46];[s44][s46]overlay[s47];[17]fade=alpha=1:d=1:st=51:t=in[s48];[s48]fade=st=53:t=out[s49];[s47][s49]overlay[s50];[18]fade=alpha=1:d=1:st=54:t=in[s51];[s51]fade=st=56:t=out[s52];[s50][s52]overlay[s53]" "-map" "[s53]" "-crf" "32" "-r" "15" "-t" "54" "-y" "processed-masks.webm"
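Each mask gets one three-second slot: mask i fades in over one second starting at (i+1)*duration, and its one-second fade-out starts at (i+2)*duration - 1, so it finishes disappearing exactly as the next mask starts fading in. The timing arithmetic, as a sketch:

```python
def mask_window(i, duration=3):
    """Fade-in start and fade-out start (seconds) for mask i (0-based),
    matching fade=t=in:st=(i+1)*d:d=1 and fade=t=out:st=(i+2)*d-1."""
    fade_in_start = (i + 1) * duration
    fade_out_start = (i + 2) * duration - 1
    return fade_in_start, fade_out_start
```

This matches the compiled filtergraph above: the first mask fades in at st=3 and out at st=5, the second at st=6 and st=8, and so on.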

I ended up using this particle glitter video because the gifts were small, so I wanted a video that was dense with sparkly things. I also wanted the sparkles to be more concentrated on the area where the gifts were, so I resized it and positioned it.

ffmpeg -loglevel 32 -y -f lavfi -i color=black:s=1280x720 -i sparkles4.webm -ss 13 -filter_complex "[1:v]scale=700:392[sparkles];[0:v][sparkles]overlay=x=582:y=194,setpts=(PTS-STARTPTS)*1.05[out]" -map "[out]" -r 15 -t 53 -shortest sparkles-trimmed.webm
ffmpeg -y -stream_loop 2 -i sparkles-trimmed.webm -t 57 sparkles-looped.webm              

Lastly, I combined the videos with the sparkles.

ffmpeg -loglevel 32 -y -i images.webm -i sparkles-looped.webm -i processed-masks.webm -filter_complex "[1:v]chromakey=0x0a9d06:0.1:0.2,format=rgba,alphaextract[ckalpha];[ckalpha][2:v]blend=all_mode=and,format=rgba[maskedalpha];[1:v][maskedalpha]alphamerge[masked];[masked]fade=t=out:st=57:d=1:alpha=1[maskedfaded];[0:v][maskedfaded]overlay[combined];[combined]tpad=start_mode=clone:start_duration=4:stop_mode=clone:stop_duration=4[out]" -map "[out]" -r 15 -crf 32 output.webm

After many iterations and a very late night, I got (roughly) the video I wanted, which I'm not posting here because of reasons. But it worked, yay! Now I don't have to manually place stars frame-by-frame in Krita, and I can just have all that magic happen semi-automatically.

Re-encoding the EmacsConf videos with FFmpeg and GNU Parallel

| geek, linux, emacsconf, ffmpeg, video

It turns out that using -crf 56 compressed the EmacsConf videos a little too aggressively, losing too much detail. We wanted to re-encode everything, maybe going back to the default value of -crf 32. My laptop would have taken a long time to process all of those videos. Fortunately, one of the other volunteers shared a VM on a machine with 12 cores, and I had access to a few other systems. It was a good opportunity to learn how to use GNU Parallel to send jobs to different machines and retrieve the results.

First, I updated the compression script:

ffmpeg -y -i "$FILE" -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -aq-mode 2 -tile-columns 0 -tile-rows 0 -frame-parallel 0 -cpu-used 8 -auto-alt-ref 1 -lag-in-frames 25 -g 240 -pass 1 -f webm -an -threads 8 /dev/null &&
if [[ $FILE =~ "webm" ]]; then
    ffmpeg -y -i "$FILE" $* -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a copy "${FILE%.*}--compressed$SUFFIX.webm"
else
    ffmpeg -y -i "$FILE" $* -pixel_format yuv420p -vf $VIDEO_FILTER -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a libvorbis "${FILE%.*}--compressed$SUFFIX.webm"
fi

I made an originals.txt file with all the original filenames.

I set up a ~/.parallel/emacsconf profile with something like this so that I could use three computers and my laptop, sending one job each and displaying progress:

--sshlogin computer1 --sshlogin computer2 --sshlogin computer3 --sshlogin : -j 1 --progress --verbose --joblog parallel.log

I already had SSH key-based authentication set up so that I could connect to the three remote computers.

Then I spread the jobs over four computers with the following command, where compress.sh stands for the compression script above:

cat originals.txt | parallel -J emacsconf \
                             --transferfile {} \
                             --return '{=$_ =~ s/\..*?$/--compressed32.webm/=}' \
                             --cleanup \
                             --basefile compress.sh \
                             bash compress.sh 32 {}
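The --return pattern tells parallel which remote file to fetch back. It's a Perl substitution that maps each source filename to the name the compression script produces: everything from the first dot onward is replaced with --compressed32.webm. The same mapping in Python, for illustration:

```python
import re

def compressed_name(filename):
    """Replicates the Perl s/\..*?$/--compressed32.webm/ used in --return."""
    return re.sub(r'\..*?$', '--compressed32.webm', filename)
```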

It copied each file over to the computer it was assigned to, processed the file, and then copied the file back.

It was also helpful to occasionally do echo 'killall -9 ffmpeg' | parallel -J emacsconf -j 1 --onall if I cancelled a run.

It still took a long time, but less than it would have if any one computer had to crunch through everything on its own.

This was much better than my previous way of doing things, which involved copying the files over, running ffmpeg commands, copying the files back, and getting somewhat confused about which directory I was in and which file I assigned where and what to do about incompletely-encoded files.

I sometimes ran into problems with incompletely-encoded files because I'd cancelled the FFmpeg process. Even though ffprobe said the files were long, they were missing a large chunk of video at the end. I added a compile-media-verify-video-frames function to compile-media.el so that I could get the last few seconds of frames, compare them against the duration, and report an error if there was a big gap.
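compile-media-verify-video-frames is Emacs Lisp, but the core of the check is simple: compare the timestamp of the last decodable frame against the container's reported duration and flag a large gap. A rough Python analogue (a pure function; in practice the timestamps would come from ffprobe):

```python
def incompletely_encoded(frame_times, reported_duration, max_gap=2.0):
    """True when the last decoded frame ends well before the reported
    duration -- the signature of an encode that was killed partway."""
    if not frame_times:
        return True
    return reported_duration - frame_times[-1] > max_gap
```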

Then I changed emacsconf-publish.el to use the new filenames, and I regenerated all the pages. For EmacsConf 2020, I used some Emacs Lisp to update the files. I'm not particularly fond of wrangling video files (lots of waiting, high chance of error), but I'm glad I got the computers to work together.

Adding an overlay to my webcam via OBS 26.1

| geek, linux

A- likes to change her name roughly every two months, depending on whatever things she's focusing on. In 2020, she went through six names, giving us plenty of mental exercise and amusement. Pretend play is wonderful and she picks up all sorts of interesting attributes along the way, so we're totally fine with letting her pretend all the time instead of limiting it to specific times.

A-'s been going to virtual kindergarten. So far, her teachers and classmates have been cool with the name changes. They met her in her Stephanie phase, and they shifted over to Elizabeth without batting an eye. A-'s been experimenting with a new name, though, so I thought I'd try to figure out a way to make the teachers' lives a little easier. We use Google Meet to connect to class. A- likes to log in as me because then we're alphabetically sorted close to one of her friends in class, the high-tech equivalent of wanting to sit with your friends. So the name that's automatically displayed when she's speaking is no help either.

It turns out that OBS (Open Broadcast Studio) has a virtual webcam feature in version 26.1, and it works on macOS and Linux. I followed the instructions for installing OBS 26.1 on Ubuntu. To enable the virtual webcam device on Linux, I installed v4l2loopback-dkms. I was initially mystified when I got the error could not insert 'v4l2loopback': Operation not permitted. That was because I have Secure Boot on my laptop, so I just needed to reboot, choose Enroll MOK from the boot menu, and put in the password that I specified during the setup process. After I did that, clicking on the Start Virtual Camera button in OBS worked. I tested it in Google Meet and the image was properly displayed. I don't know if we'll need it, but it's handy to have in my back pocket in case A- decides to change her name again.

Yay Linux and free software!

Compiling autotrace against GraphicsMagick instead of ImageMagick

| geek, linux

In an extravagant instance of yak-shaving, I found myself trying to compile autotrace so that I could use it to trace my handwriting samples so that I could make a font with FontForge so that I could make worksheets for A- without worrying about finding a handwriting font with the right features and open font licensing.

AutoTrace had been written against ImageMagick 5, but this was no longer easily available. ImageMagick 7 greatly changed the API. Fortunately, GraphicsMagick kept the ImageMagick 5 API. I eventually figured out enough about autoconf and C (neither of which I had really worked with much before) to switch the library paths out while attempting to preserve backward compatibility. I think I got it, but I can't really tell aside from the fact that compiling it with GraphicsMagick makes the included tests run. Yay!

At first I tried AC_DEFINE-ing twice in the same package check, but it turns out only the last one sticks, so I moved the other AC_DEFINE to a different condition.

Anyway, here's the pull request. Whee! I'm coding!

Learning more about Docker

| geek, linux

I’ve been mostly heads-down on parenting for a few years now. A- wasn’t keen on babysitters, so my computing time consisted of a couple of hours during the graveyard shift, after our little night-owl finally settled into bed. I felt like I was treading water: keep Emacs News going, check in and do some consulting once in a while so that the relationship doesn’t go cold, do my weekly reviews, try to automate things here and there.

I definitely felt the gaps between the quick-and-dirty coding I did and the best practices I saw elsewhere. I felt a little anxious about not having development environments and production deployment processes for my personal projects. Whenever I messed up my blog or my web-based tracker, I stayed up extra late to fix things, coding while tired and sleepy and occasionally interrupted by A- needing extra snuggling. I updated whenever I felt it was necessary for security, but the risk discouraged me from trying to make things better.

Lately, though, I feel like I’ve been able to actually have some more focused time to learn new things. A- is a little more used to a bedtime routine, and I no longer have to reserve as much energy and patience for dealing with tantrums. She still sleeps really late, but it’s manageable. And besides, I’d tracked the time I spent playing a game on my phone, so I knew I had a little discretionary time I could use more effectively.

Docker is one of the tools on my to-learn list. I think it will help a lot to have environments that I can experiment with and recreate whenever I want. I tried Vagrant before, but Docker feels a lot lighter-weight.

I started by moving my sketch viewer into a Docker container. It’s a basic Node server with read-only access to my sketches, so that was mostly a matter of changing it to be configured via environment variables and mounting the sketches as a volume. I added dockerfile-mode to my Emacs, made a Dockerfile and a .dockerignore file following the tutorial for Dockerizing a Node.js web app, tried it out on my laptop, and pushed the image to my private Docker hub so that I could pull the image on my server. It turned out that Linode’s kernel had overlay built in instead of compiled as a module, so I followed this tip to fix it.

cat << EOF > /etc/systemd/system/containerd.service.d/override.conf

I also needed to uninstall my old Docker packages and docker-compose, add the Docker PPA, and install docker-ce in order to get docker login to work properly on my server.

The next step was to move my web interface for tracking – not Quantified Awesome, but the button-filled webpage I’ve been using on my phone. I used lots of environment variables for passwords and tokens, so I switched to using --env-file file instead.

In order to move Quantified Awesome or my blog into Docker, I needed a MySQL container that could load my backups. Loading the SQL was just a matter of mounting the backup files in /docker-entrypoint-initdb.d, and mounting a directory as /var/lib/mysql should help with data persistence. If I added a script that created a user and granted access from '%', I could access the MySQL inside the Docker container from my laptop. I didn’t want my MySQL container to be publicly exposed on my server, though. It turned out that Docker bypassed ufw by setting iptables rules directly, so I followed the other instructions in this Stackoverflow answer and added these to the end of my /etc/ufw/after.rules:

:ufw-user-forward - [0:0]
:DOCKER-USER - [0:0]

-A DOCKER-USER -j ufw-user-forward

-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d
-A DOCKER-USER -j DROP -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -d
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d
-A DOCKER-USER -j DROP -p udp -m udp --dport 0:32767 -d


There’s more discussion on docker and ufw, but I don’t quite have the brainspace right now to fully understand it.

Anyway. Progress. My sketch viewer is in a Docker container, and so is my button-based time tracker. I have a Docker container that I can use to load SQL backups, and I can connect to it for testing. The next step would probably be to try moving Quantified Awesome into a Docker container that talks to my MySQL container. If I can get that working, then I can try moving my blog into a container too.

Yesterday was for sleeping. Today I wanted to clean up my notes and post them, since I’ll forget too much if I keep going. More coding will have to wait for tomorrow–or maybe the day after, if I use some time for consulting instead. But slow progress is still progress, and it’s nice to feel like more of a geek again.

Experimenting with adding labels to photos

| geek, linux

A-'s gotten really interested in letters lately, so I'm looking for ways to add more personally relevant text and images to her life. She often flips through the 4″x6″ photos I got printed. Since they're photos, they're sturdier and can stand up to a little bending. I wanted to experiment with printing more pictures and labeling them with text, building on the script I used to label our toy storage bins. Here's a sample:

2018-06-14-16-12-14 ★★★ A- gently lifted the flaps in the book. 🏷fine-motor

(Name still elided in the sample until I figure out what we want to do with names and stuff. I'll use NAME_REPLACEMENTS to put her name into the printed ones, though, since kids tend to learn how to read their names first.)

I haven't printed these out yet to see how legible they are, but a quick on-screen check looks promising.



#!/bin/bash
# Add label from ImageDescription EXIF tag to bottom of photo

IFS=$(echo -en "\n\b")

# NAME_REPLACEMENTS is an environment variable that's a sed expression
# for any name replacements, like s/A-/Actual name goes here/g

# Settings: these defaults are assumptions; adjust for your prints
description_field=ImageDescription
output_width_inches=6
output_height_inches=4
border=40
pointsize=48
font=DejaVu-Sans
destination=labeled

mkdir -p "$destination"
for file in "$@"; do
  description=$(exiftool -s -s -s -$description_field "$file" \
     | sed "s/\"/\\\"/g;s/'/\\\'/g;$NAME_REPLACEMENTS")
  date=$(exiftool -s -s -s -DateTimeOriginal "$file" \
     | cut -c 1-10 | sed s/:/-/g)
  width=$(identify -format "%w" "$file")
  height=$(identify -format "%h" "$file")
  largest=$(( $width > $height ? $width : $height ))
  density=$(( $largest / $output_width_inches ))
  correct_height=$(( $output_height_inches * $density ))
  captionwidth=$(( $width - $border * 2 ))
  convert "$file" -density $density -units PixelsPerInch \
    -gravity North -extent ${width}x${correct_height} \
    -strip \( -undercolor white -background white \
    -fill black -font "$font" -bordercolor White \
    -gravity SouthWest -border $border -pointsize $pointsize \
    -size ${captionwidth}x caption:"$date $description" \) \
    -composite "$destination/$file"
done
gwenview $destination
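The density arithmetic is what makes the caption come out at a sensible size: the longer side of the photo divided by the target print width gives pixels per inch, and the canvas is then padded to the matching print height. For example, with an assumed 4608x3456 GoPro frame printed at 6x4 inches:

```python
def print_geometry(width, height, out_w_in=6, out_h_in=4):
    """Density (pixels per inch) and padded canvas height for a print,
    mirroring the integer shell arithmetic in the labeling script."""
    largest = max(width, height)
    density = largest // out_w_in          # pixels per inch
    correct_height = out_h_in * density    # canvas height incl. caption room
    return density, correct_height
```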

Here's my current rename-based-on-exif script, too. I modified it to use the ImageDescription or the UserComment field, and I switched to using Unicode stars and labels instead of # to minimize problems with exporting to HTML.


date="\${DateTimeOriginal;s/[ :]/-/g}"
rating="\${Rating;s/([1-5])/'★' x \$1/e}"
tags="\${Subject;s/^/🏷/;s/, / 🏷/g}"
field=FileName  # TestName for testing
exiftool -m -"$field<$date $rating \${ImageDescription} $tags.%e" \
         -"$field<$date $rating \${UserComment} $tags.%e" "$@"
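The two exiftool substitutions expand a numeric Rating into that many stars and prefix each tag with a label emoji, so a rating of 3 and a Subject of "fine-motor" end up as ★★★ and 🏷fine-motor, as in the sample caption above. The same transformations in Python, for illustration:

```python
def rating_stars(rating):
    """Rating 1-5 -> that many stars, like s/([1-5])/'★' x $1/e."""
    return '★' * rating

def label_tags(subject):
    """'a, b' -> '🏷a 🏷b', like s/^/🏷/;s/, / 🏷/g."""
    return '🏷' + subject.replace(', ', ' 🏷')
```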

In order to upload my fancy-shmancy Unicode-filenamed files, I also had to convert my WordPress database from utf8 to utf8mb4. This upgrade plugin was very helpful.

Oops report: Moving from i386 to amd64 on my server

| geek, linux

I was trying to install Docker on my Linode virtual private server so that I could experiment with containers. I had a problem with the error “no supported platform found in manifest list.” Eventually, I realized that dpkg --print-architecture showed that my Ubuntu package architecture was i386 even though my server was 64-bit. That was probably due to upgrading in-place through the years, starting with a 32-bit version of Ubuntu 10.

I tried dpkg --add-architecture amd64, which let me install the docker-ce package from the Docker repository. Unfortunately, I didn't review it carefully enough (the perils of SSHing from my cellphone), and installing that removed a bunch of other i386 packages like sudo, ssh, and screen. Oops!

Even though we've been working on weaning lately, I decided that letting A- nurse a long time in her sleep might give me a little time to try to fix things. I used Linode's web-based console to try to log in. I forgot the root password, so I used their tool for resetting the root password. After I got that sorted out, though, I found that I couldn't resolve network resources. I'd broken the system badly enough that I needed to use another rescue tool to mount my drives, chroot to them, and install stuff from there. I was still getting stuck. I needed more focused time.

Fortunately, I'd broken my server during the weekend, so W- was around to take care of A- while I tried to figure things out. I had enough free space to create another root partition and install Ubuntu 16, which was a straightforward process with Linode's Deploy Image tool.

I spent a few hours trying to figure out if I could set everything up in Docker containers from the start. I got the databases working, but I kept getting stymied by annoying WordPress redirection issues even after setting home and siteurl in the database and defining them in my config file. I tried adding Nginx reverse proxying to the mix, and it got even more tangled.

Eventually, I gave up and went back to running the services directly on my server. Because I did the new install in a separate volume, it was easy to mount the old volume and copy or symlink my configuration files.

Just in case I need to do this again, here are the packages that apt says I installed:

  • General:
    • screen
    • apt-transport-https
    • ca-certificates
    • curl
    • dirmngr
    • gnupg
    • software-properties-common
    • borgbackup
  • For the blog:
    • mysql-server
    • php-fpm
    • php-mysql
    • php-xml
  • For Quantified Awesome:
    • ruby-bundler
    • ruby-dev
  • For experimenting:
    • docker-compose
  • For compiling Emacs:
    • make
    • gcc
    • g++
    • zlib1g-dev
    • libmysqlclient-dev
    • autoconf
    • texinfo
    • gnutls-dev
    • ncurses-dev
  • From external repositories:

I got the list by running:

zgrep 'Commandline: apt' /var/log/apt/history.log /var/log/apt/history.log.*.gz

I saved my selections with dpkg --get-selections so that I can load them with dpkg --set-selections << ...; apt-get dselect-upgrade if I need to do this again.

Symbolic links to old volume:

  • /var/www
  • /usr/local
  • /home/sacha
  • /var/lib/mysql (after installing)

Copied after installing – I'll probably want to tidy this up:

  • /etc/nginx/sites-available
  • /etc/nginx/sites-enabled

Lessons learned:

  • Actually check the list of packages to remove.
  • Consider fresh installs for major upgrades.

When things settle down, I should probably look into organizing one of the volumes as a proper data volume so that I can cleanly reinstall the root partition whenever I want to.

I also want to explore Docker again – maybe once I've wrapped my mind around how Docker, Nginx, WordPress, Passenger, virtual hosts, and subdirectories all fit together. Still, I'm glad I got my site up and running again!