Converting our VTT files to TTML

| emacsconf, geek, ffmpeg

I wanted to convert our VTT files to TTML files so that we might be able to use them for training lachesis for transcript segmentation. I downloaded the VTT files from EmacsConf 2021 to a directory and copied the edited captions from the EmacsConf 2022 backstage area (using head -1 "$FILE" | grep -q "captioned" to distinguish them from the automatically generated ones). I installed the ttconv Python package. Then I used the following commands to convert the files to TTML:

for FILE in *.vtt; do
    BASE=$(basename -s .vtt "$FILE")
    ffmpeg -y -i "$FILE" "$BASE.srt" && tt convert -i "$BASE.srt" -o "$BASE.ttml"
done
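For reference, the copy step for the backstage captions looked roughly like this (the backstage path here is a placeholder):

for FILE in /path/to/backstage/*.vtt; do
    # Edited captions have "captioned" in their first line
    if head -1 "$FILE" | grep -q "captioned"; then
        cp "$FILE" .
    fi
done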

I haven't gotten around to installing whatever I need in order to get lachesis to work under Python 2.7, since it hasn't been updated for Python 3. It'll probably be a low-priority project anyway, as EmacsConf is fast approaching. Anyway, I thought I'd stash this in my blog in case I need to make TTML files again!

Re-encoding the EmacsConf videos with FFmpeg and GNU Parallel

| geek, linux, emacsconf, ffmpeg, video

It turns out that using -crf 56 compressed the EmacsConf videos a little too aggressively, losing too much detail. We wanted to re-encode everything, maybe going back to the default value of -crf 32. My laptop would have taken a long time to re-encode all of those videos. Fortunately, one of the other volunteers shared a VM on a machine with 12 cores, and I had access to a few other systems. It was a good opportunity to learn how to use GNU Parallel to send jobs to different machines and retrieve the results.

First, I updated the compression script, compress-video-low.sh:

#!/bin/bash
# Usage: compress-video-low.sh CRF INPUT-FILE [EXTRA-FFMPEG-OPTIONS...]
# Two-pass VP9 encode: scale/pad to 1280x720, 25fps, convert to BT.709
Q=$1
WIDTH=1280
HEIGHT=720
AUDIO_RATE=48000
VIDEO_FILTER="scale=w=${WIDTH}:h=${HEIGHT}:force_original_aspect_ratio=1,pad=${WIDTH}:${HEIGHT}:(ow-iw)/2:(oh-ih)/2,fps=25,colorspace=all=bt709:iall=bt601-6-625:fast=1"
FILE=$2
SUFFIX=$Q
shift 2
# First pass: analysis only, so skip the audio and discard the output
ffmpeg -y -i "$FILE" -pixel_format yuv420p -vf "$VIDEO_FILTER" -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -aq-mode 2 -tile-columns 0 -tile-rows 0 -frame-parallel 0 -cpu-used 8 -auto-alt-ref 1 -lag-in-frames 25 -g 240 -pass 1 -f webm -an -threads 8 /dev/null &&
# Second pass: copy the audio stream if the source is already WebM,
# otherwise re-encode the audio with libvorbis
if [[ $FILE =~ "webm" ]]; then
    ffmpeg -y -i "$FILE" "$@" -pixel_format yuv420p -vf "$VIDEO_FILTER" -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a copy "${FILE%.*}--compressed$SUFFIX.webm"
else
    ffmpeg -y -i "$FILE" "$@" -pixel_format yuv420p -vf "$VIDEO_FILTER" -colorspace 1 -color_primaries 1 -color_trc 1 -c:v libvpx-vp9 -b:v 0 -crf $Q -tile-columns 2 -tile-rows 2 -frame-parallel 0 -cpu-used -5 -auto-alt-ref 1 -lag-in-frames 25 -pass 2 -g 240 -ac 2 -threads 8 -c:a libvorbis "${FILE%.*}--compressed$SUFFIX.webm"
fi
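Run on its own, the script takes the CRF value and the input file, plus any extra FFmpeg options to pass through to the second pass:

bash compress-video-low.sh 32 emacsconf-2021-montessori--emacs-and-montessori-philosophy--grant-shangreaux.webm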

I made an originals.txt file with all the original filenames. It looked like this:

emacsconf-2020-frownies--the-true-frownies-are-the-friends-we-made-along-the-way-an-anecdote-of-emacs-s-malleability--case-duckworth.mkv
emacsconf-2021-montessori--emacs-and-montessori-philosophy--grant-shangreaux.webm
emacsconf-2021-pattern--emacs-as-design-pattern-learning--greta-goetz.mp4
...
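Building that list can be as simple as globbing the extensions (assuming all the originals are in one directory):

ls *.mkv *.webm *.mp4 > originals.txt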

I set up a ~/.parallel/emacsconf profile with something like this so that I could use three computers and my laptop (--sshlogin : is GNU Parallel's notation for the local machine), sending one job to each and displaying progress:

--sshlogin computer1 --sshlogin computer2 --sshlogin computer3 --sshlogin : -j 1 --progress --verbose --joblog parallel.log

I already had SSH key-based authentication set up so that I could connect to the three remote computers.
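If you haven't set that up before, it's a one-time step per host (the hostnames here are placeholders matching the profile above):

ssh-keygen -t ed25519    # skip if you already have a key
ssh-copy-id computer1    # repeat for computer2 and computer3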

Then I spread the jobs over four computers with the following command:

cat originals.txt | parallel -J emacsconf \
                             --transferfile {} \
                             --return '{=$_ =~ s/\..*?$/--compressed32.webm/=}' \
                             --cleanup \
                             --basefile compress-video-low.sh \
                             bash compress-video-low.sh 32 {}

This copied each file over to the computer it was assigned to, ran the compression script on it, copied the compressed result back, and cleaned up the transferred files.

It was also helpful to occasionally run echo 'killall -9 ffmpeg' | parallel -J emacsconf -j 1 --onall if I cancelled a run, so that no stray FFmpeg processes kept churning on the remote machines.

It still took a long time, but less than it would have if any one computer had to crunch through everything on its own.

This was much better than my previous way of doing things, which involved copying the files over, running ffmpeg commands, copying the files back, and getting somewhat confused about which directory I was in, which file I had assigned to which computer, and what to do about incompletely encoded files.

I sometimes ran into problems with incompletely encoded files because I'd cancelled the FFmpeg process. Even though ffprobe reported the expected duration, the files were missing a large chunk of video at the end. I added a compile-media-verify-video-frames function to compile-media.el so that I could check the timestamps of the last few seconds of frames, compare them against the reported duration, and report an error if there was a big gap.
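Here's a rough shell sketch of the same idea using just ffprobe (the actual function lives in compile-media.el; the ten-second window and two-second threshold here are arbitrary):

FILE=$1
# Duration according to the container metadata
DURATION=$(ffprobe -v error -show_entries format=duration -of csv=p=0 "$FILE")
# Decode only the tail of the video and note the last frame's timestamp
START=$(awk -v d="$DURATION" 'BEGIN { s = d - 10; if (s < 0) s = 0; print s }')
LAST=$(ffprobe -v error -select_streams v:0 -read_intervals "${START}%" \
       -show_entries frame=pts_time -of csv=p=0 "$FILE" | tail -1)
# Report an error if the last frame stops well short of the claimed duration
if awk -v d="$DURATION" -v l="$LAST" 'BEGIN { exit !(d - l > 2) }'; then
    echo "$FILE: video ends at ${LAST}s but container claims ${DURATION}s" >&2
fi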

Then I changed emacsconf-publish.el to use the new filenames, and I regenerated all the pages. For EmacsConf 2020, I used some Emacs Lisp to update the files. I'm not particularly fond of wrangling video files (lots of waiting, high chance of error), but I'm glad I got the computers to work together.

Adding little nudges to help on the EmacsConf wiki

| emacs, emacsconf

A number of people helped capture the talks for EmacsConf 2021, which was fantastic because we were able to stream all of the first day's talks with open captions and most of the second day's talks too. Right now, in fact, there are only two talks left that haven't been captioned. After the conference, a couple of other people volunteered to help out as well. Whee!

I want to figure out a good way to help people work on the things that they're interested in without burdening them with too much work, too little work, too much coordination, or not enough coordination. Before the conference, one of the perks we had offered was that captioners got early access to the videos. I had a password-protected directory on a web server and an index that I made using Emacs Lisp to display the talks that still needed to be captioned. People e-mailed me to call dibs on the talk they wanted to caption, and that was how we avoided duplicating work. Now that all the videos are public, of course, people can just go to the regular wiki.

The other thing to think about is that in addition to captioning the two remaining talks (not essential, but it would be nice), there are also different levels of things that we can do. It would be nice to have chapter markers for some of the longer Q&A sessions. It would be fantastic to cross-reference those with the questions and answers so that people can jump to the section they're interested in. It'd be incredible if somebody actually wrote down the answers. And it'd be even more awesome if people captioned the Q&A sessions as well, which were in many cases much longer than the talks themselves. So this is a fair bit of work, but people can probably pick a level that matches their interest and available time.

I'm not entirely sure how to coordinate this, especially since I've got limited computer time. My goal is to have something where volunteers can basically just wander around looking for talks that they're interested in and see ways to help out, or see a list of things that could use some work. For example, while they're browsing the maintainers talk, they might say, "Oh, this one needs some chapter markers. I want to help with that. How do I do that? How do I get started?" And then they go down that path. On the other hand, somebody might sit down and say, "I've got an hour and I want to help out. What can I do?"

I don't want to keep data in many different places. I wonder if I can use the wiki for a lot of this coordination. Now that the videos are public, I've started tagging the pages that need extra help, like long Q&A sessions that need chapter markers.

With a little bit more work, I think people will be able to follow the instructions from there, especially if they've done this kind of captioning before, or email us to ask for help and then we can get them started.

I also thought about using Etherpad for that kind of coordination, where people would put their name next to a thing to reserve it, but then that's one more step. I don't know. At the moment, editing the wiki is a bit of an involved process. Worst-case scenario (best-case, actually, if we have lots of people wanting to help? =) ), people can call dibs by emailing us, and one of the organizers will add a little note in the volunteer attribute. It's probably a good start, so we'll see where we can take it.

EmacsConf backstage: picking timestamps from a waveform

| emacs, emacsconf

We wanted to trim the Q&A session recordings so that people don't have to listen to the transition from the main presentation or the long silence until we got around to stopping the recording.

The MPV video player didn't have a waveform view, so I couldn't just jump to the parts with sound. Audacity could show waveforms, but it didn't have an easy way to copy the timestamp. I didn't want to bother with heavyweight video-editing applications on my Lenovo X220. So the obvious answer is, of course, to make a text editor do the job. Yay Emacs!

Figure 1: Select timestamps using a waveform

It's very experimental and I don't know if it'll work for anyone else. If you want to use it, you will also need mpv.el, the MPV media player, and the ffmpeg command-line tool. Here's my workflow:

  • M-x waveform-show to select the file
  • left-click on the waveform to copy the timestamp and start playing from there
  • right-click to sample from that spot
  • left and right to adjust the position; shift-left and shift-right to take smaller steps
  • SPC to copy the current MPV playback position
  • j to jump to a timestamp (hh:mm:ss or seconds)
  • > to speed up, < to slow down

I finally figured out how to use SVG to embed the waveform generated by FFmpeg and animate the current MPV playback position. Whee! There's lots of room for improvement, but it's a pretty fun start.
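If you're curious about the FFmpeg side, rendering audio into a waveform image can be done with the showwavespic filter. A minimal sketch (the size and colors here are placeholders, not necessarily what waveform-el uses):

ffmpeg -y -i input.webm -filter_complex "showwavespic=s=1280x200:colors=white" -frames:v 1 waveform.png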

If you're curious, you can find the code at https://github.com/sachac/waveform-el . Let me know if it actually works for you!

Editing subtitles in Emacs with subed, with synchronized video playback through mpv

| emacs, subed, emacsconf

I've been adding subtitles to the talks from EmacsConf 2020, taking advantage of the text that was helpfully autogenerated when I uploaded the videos to the EmacsConf channel on YouTube. Today I spent some time figuring out how to add WebVTT support to subed, an Emacs major mode for editing subtitles. It's pretty cool to be able to bring up the relevant segment of the video whenever the text isn't clear. Here's a quick video of it in action. It shows how I can mostly focus on adding punctuation and changing capitalization, checking every so often with mpv via mpv.el. All in all, it took me 24 minutes to edit the subtitles for a 17-minute talk. Whee!

Demonstration of subed-mode

I submitted a pull request to get the .vtt support into subed-mode in case anyone else finds it helpful. I've only tested the mpv synchronization so far, and I'm looking forward to exploring its other features.

You can see these particular subtitles on the talk page for Beyond Vim and Emacs: A Scalable UI Paradigm. Enjoy!

Update 2020-12-13: subed-vtt.el has been merged into master, so you'll get it when you check out subed. Yay!

Update 2021-07-19: Check out Lindsey Kuper's step-by-step instructions for getting YouTube to autogenerate captions that you can edit.