
#EmacsConf backstage: autopilot with crontab

| emacs, emacsconf, subed

[2023-10-26 Thu]: updated handle-session and added the talk script

I figured out multi-track streaming so close to EmacsConf 2022 that there wasn't enough time to get other volunteers used to working with the setup, especially since I was still scrambling to figure out more infrastructure as the conference approached. We decided I'd run both streams myself, which meant I needed to make things as automatic as possible so that I wouldn't go crazy. I wanted a lot of things to happen automatically: playing recorded intros and videos, browsing to the right URLs depending on the type of Q&A, publishing updates to the wiki, and so on.

I used timers and TODO state changes to execute commands via TRAMP, which was pretty cool for the most part. But it turned out TRAMP doesn't like being called when it's already running, like when it's being called from two timers going off at the same time. It gives a "Forbidden reentrant call of TRAMP". We found a couple of quick workarounds: I could reschedule the talks to be a minute apart, or I could cancel the conflicting timer and just start them with the shell scripts.
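
As a rough illustration (not the actual conference code; the host name is a placeholder), two timers like these going off at the same moment would both try to reach the remote host through TRAMP, and one of them would fail with that reentrant-call error. Staggering them by a minute keeps them out of each other's way.

(run-at-time "10:00am" nil
             (lambda () (write-region "NEXT gen" nil "/ssh:live.example.org:/tmp/gen-command")))
;; This timer fires at the same moment and also goes through TRAMP, which is
;; when "Forbidden reentrant call of TRAMP" can show up.
(run-at-time "10:00am" nil
             (lambda () (write-region "NEXT dev" nil "/ssh:live.example.org:/tmp/dev-command")))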

Last year, we had a shell script that played the intro and the main talk, and other scripts to handle the Q&A by opening BigBlueButton, Etherpad, or the IRC channel. Much of the logic was in Emacs Lisp because it was easy to write it that way. For this year, I wanted to write a script that handled the intro, video, and Q&A portions. This is now in roles/obs/templates/handle-session.

handle-session
#!/bin/bash
# 
#
# Handle the intro/talk/Q&A for a session
# Usage: handle-session $SLUG

YEAR=""
BASE_DIR=""
FIREFOX_NAME=firefox-esr
SLUG=$1

# Kill background music if playing
if screen -list | grep -q background; then
    screen -S background -X quit
fi

# Update the status
sudo -u  talk $SLUG PLAYING &

# Update the overlay
overlay $SLUG

# Play the intro if it exists. If it doesn't exist, switch to the intro slide and stop processing.

if [[ -f $BASE_DIR/assets/intros/$SLUG.webm ]]; then
  killall -s TERM $FIREFOX_NAME
  mpv $BASE_DIR/assets/intros/$SLUG.webm
else
  firefox --kiosk $BASE_DIR/assets/in-between/$SLUG.png
  exit 0
fi

# Play the video if it exists. If it doesn't exist, switch to the BBB room and stop processing.
if [ "x$TEST_MODE" = "x" ]; then
  LIST=($BASE_DIR/assets/stream/--$SLUG*--main.webm)
else
  LIST=($BASE_DIR/assets/test/--$SLUG*--main.webm)
fi
FILE="${LIST[0]}"
if [ ! -f "$FILE" ]; then
    # Is there an original file?
    LIST=($BASE_DIR/assets/stream/--$SLUG*--original.{webm,mp4,mov})
    FILE="${LIST[0]}"
fi

if [[ -f $FILE ]]; then
  killall -s TERM $FIREFOX_NAME
  mpv $FILE
else
  /usr/local/bin/bbb $SLUG
  exit 0
fi

sudo -u  talk $SLUG CLOSED_Q &

# Open the appropriate Q&A URL
QA=$(jq -r '.talks[] | select(.slug=="'$SLUG'")["qa-backstage-url"]' < $BASE_DIR/talks.json)
QA_TYPE=$(jq -r '.talks[] | select(.slug=="'$SLUG'")["qa-type"]' < $BASE_DIR/talks.json)
echo "QA_TYPE $QA_TYPE QA $QA"
if [ "$QA_TYPE" = "live" ]; then
  /usr/local/bin/bbb $SLUG
elif [ "$QA" != "null" ]; then
  /usr/local/bin/music &
  /usr/bin/firefox $QA
  # i3-msg 'layout splith'
fi
wait

It builds on roles/obs/templates/bbb, roles/obs/templates/overlay, and roles/obs/templates/music. I also have a roles/prerec/templates/talk script that uses emacsclient to update the status of the talk.

I wrote some Tampermonkey scripts to automate joining the web conference and the IRC channel.

Now that we have a script that handles all the different things related to a session, it's easier to schedule the execution of that script. Instead of using Emacs timers and running into that problem with tramp, I want to try using cron. Cron is a standard UNIX and Linux tool for scheduling things to run at certain times. You make a plain text file in a particular format: minute, hour, day of month, month, day of week, and then the command, and then you tell cron to use that file with something like crontab your-file. Since it's plain text, we can generate it with Emacs Lisp and format-time-string, save with TRAMP, and install with ssh. Each track has its own user account for streaming, so each track can have its own file.
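
As a quick sanity check of that field order, here is what the time formatting used below produces for an example date (the date itself is arbitrary):

;; "minute hour day-of-month month day-of-week": 09:30 on December 2
;; becomes the cron time spec "30 9 2 12 *".
(format-time-string "%-M %-H %-d %m *"
                    (encode-time 0 30 9 2 12 2023))
;; => "30 9 2 12 *"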

emacsconf-stream-format-crontab: Return crontab entries for TALKS.
(defun emacsconf-stream-format-crontab (track talks &optional test-mode)
  "Return crontab entries for TALKS.
Use the display specified in TRACK.
If TEST-MODE is non-nil, load the videos from the test directory."
  (concat
   (format
    "PATH=/usr/local/bin:/usr/bin
MAILTO=\"\"
XDG_RUNTIME_DIR=\"/run/user/%d\"
" (plist-get track :uid))
   (mapconcat
    (lambda (talk)
      (format "%s /usr/bin/screen -dmS play-%s bash -c \"DISPLAY=%s TEST_MODE=%s /usr/local/bin/handle-session %s | tee -a ~/track.log\"\n"
              ;; cron times are UTC
              (format-time-string "%-M %-H %-d %m *" (plist-get talk :start-time))
              (plist-get talk :slug)
              (plist-get track :vnc-display)
              (if test-mode "1" "")
              (plist-get talk :slug)))
    (emacsconf-filter-talks talks))))

emacsconf-stream-crontabs: Write the streaming users’ crontab files.
(defun emacsconf-stream-crontabs (&optional test-mode info)
  "Write the streaming users' crontab files.
If TEST-MODE is non-nil, use the videos in the test directory.
If INFO is non-nil, use that as the schedule instead."
  (interactive)
  (let ((emacsconf-publishing-phase 'conference))
    (setq info (or info (emacsconf-publish-prepare-for-display (emacsconf-get-talk-info))))
    (dolist (track emacsconf-tracks)
      (let ((talks (seq-filter (lambda (talk)
                                 (string= (plist-get talk :track)
                                          (plist-get track :name)))
                               info))
            (crontab (expand-file-name (concat (plist-get track :id) ".crontab")
                                       (concat (plist-get track :tramp) "~"))))
        (with-temp-file crontab
          (when (plist-get track :autopilot)
            (insert (emacsconf-stream-format-crontab track talks test-mode))))
        (emacsconf-stream-track-ssh track (concat "crontab ~/" (plist-get track :id) ".crontab"))))))

I want to test the whole setup before the conference, of course. First, I needed test videos. This generates test videos and subtitles following our naming convention.

emacsconf-stream-generate-test-videos
(defun emacsconf-stream-generate-test-videos (&optional info)
  "Generate 1-minute test videos for INFO."
  (interactive)
  (setq info (or info (emacsconf-publish-prepare-for-display (emacsconf-get-talk-info))))
  (let* ((dir (expand-file-name "test" emacsconf-stream-asset-dir))
         (default-directory dir)
         (subed-default-subtitle-length 1000)
         (test-length 60))
    (unless (file-directory-p dir)
      (make-directory dir t))
    (shell-command
     (format "ffmpeg -y -f lavfi -i testsrc=duration=%d:size=1280x720:rate=10 -i background-music.opus -shortest %s "
             test-length (expand-file-name "template.webm" dir)))
    (dolist (talk info)
      (with-temp-file (expand-file-name (concat (plist-get talk :file-prefix) "--main.vtt") dir)
        (subed-vtt-mode)
        (subed-auto-insert)
        (dotimes (i test-length)
          (subed-append-subtitle
           nil
           (* i 1000)
           (1- (* (1+ i) 1000))
           (format "%s %02d %s"
                   (plist-get talk :slug)
                   i
                   (substring "123456789 123456789 123456789 123456789 123456789 123456789 "
                              (1+ (length (format "%s %02d" (plist-get talk :slug) i))))))))
      (copy-file
       (expand-file-name "template.webm" dir)
       (expand-file-name (concat (plist-get talk :file-prefix) "--main.webm") dir)
       t))))

Then I needed to write a crontab based on a different schedule. This code sets up a series of test videos to start about a minute after I run the code, with the dev stream set up to start a minute after the gen stream.

(let* ((offset-seconds 60)
       (start-time (time-add (current-time) offset-seconds))
       (emacsconf-schedule-validation-functions nil)
       (emacsconf-schedule-default-buffer-minutes 1)
       (emacsconf-schedule-default-buffer-minutes-for-live-q-and-a 1)
       (emacsconf-schedule-strategies '(emacsconf-schedule-allocate-buffer-time
                                        emacsconf-schedule-copy-previous-track))
       (schedule (emacsconf-schedule-prepare
                  (emacsconf-schedule-inflate-sexp
                   `(("GEN"
                      :start ,(format-time-string "%Y-%m-%d %H:%M" start-time)
                      :set-track "General")
                     (sat-open :time 1)
                     (uni :time 1) ; live Q&A
                     (adventure :time 1) ; pad Q&A
                     ("DEV"
                      :start
                      ,(format-time-string "%Y-%m-%d %H:%M" (time-add start-time 60))
                      :set-track "Development")
                     (repl :time 1) ; IRC
                     (matplotllm :time 1) ; pad
                     (voice :time 1) ; live
                     )))))
  (emacsconf-stream-crontabs t schedule))

That generates gen.crontab and dev.crontab. This is what gen.crontab looks like for testing:

PATH=/usr/local/bin:/usr/bin
MAILTO=""
XDG_RUNTIME_DIR="/run/user/2002"
35 11 26 10 * /usr/bin/screen -dmS play-sat-open bash -c "DISPLAY=:5 TEST_MODE=1 /usr/local/bin/handle-session sat-open | tee -a ~/track.log"
36 11 26 10 * /usr/bin/screen -dmS play-uni bash -c "DISPLAY=:5 TEST_MODE=1 /usr/local/bin/handle-session uni | tee -a ~/track.log"
38 11 26 10 * /usr/bin/screen -dmS play-adventure bash -c "DISPLAY=:5 TEST_MODE=1 /usr/local/bin/handle-session adventure | tee -a ~/track.log"

The result: for both tracks, the intro videos play, the test videos play, and web browsers go to the right places for the Q&A.

In case I need to resume manual control:

emacsconf-stream-cancel-crontab: Remove crontab for TRACK.
(defun emacsconf-stream-cancel-crontab (track)
  "Remove crontab for TRACK."
  (interactive (list (emacsconf-complete-track)))
  (plist-put track :autopilot nil)
  (emacsconf-stream-track-ssh track "crontab -r"))

emacsconf-stream-cancel-all-crontabs: Remove crontabs.
(defun emacsconf-stream-cancel-all-crontabs ()
  "Remove crontabs."
  (interactive)
  (dolist (track emacsconf-tracks)
    (plist-put track :autopilot nil)
    (emacsconf-stream-track-ssh track "crontab -r")))

Here are some things I learned along the way:

  • I needed to use timedatectl set-timezone America/Toronto to change the server's timezone so that the crontab entries would run at the right time.

    In Ansible terms, that's:

    - name: Set system timezone
      tags: tz
      community.general.timezone:
        name: ""
    - name: Restart cron
      tags: tz
      ansible.builtin.service:
        name: cron
        state: restarted
    
  • I also needed to specify the PATH so that I didn't need to add the absolute paths in all the other shell scripts, XDG_RUNTIME_DIR to get audio working, and DISPLAY so that windows showed up in the right place.

I think this will let me run both tracks for EmacsConf with more ease and less frantic juggling. We'll see!

Using Emacs and Python to record an animation and synchronize it with audio

| emacs, emacsconf, python, subed, video

[2023-01-14 Sat]: Removed my fork since upstream now has the :eval function.

The Q&A session for What I'd like to see in Emacs (Richard Stallman) from EmacsConf 2022 was done over Mumble. Amin pasted the questions into the Mumble chat buffer and I copied them into a larger buffer as the speaker answered them, but I didn't do it consistently. I figured it might be worth making another video with easier-to-read visuals. At first, I thought about using LaTeX to create Beamer slides with the question text, which I could then turn into a video using ffmpeg. Then I decided to figure out how to animate the text in Emacs, because why not? I figured a straightforward typing animation would probably be less distracting than animate-string, and emacs-director seems to handle that nicely. I forked it to add a few things I wanted, like variables to make the typing speed slower (so that it could more reliably type things on my old laptop, since sometimes the timers seemed to have hiccups) and an :eval step for running things without needing to log them. (2023-01-14: Upstream has the :eval feature now.)

To make it easy to synchronize the resulting animation with the chapter markers I derived from the transcript of the audio file, I decided to beep between scenes. First step: make a beep file.

ffmpeg -y -f lavfi -i 'sine=frequency=1000:duration=0.1' beep.wav

Next, I animated the text, with a beep between scenes. I used subed-parse-file to read the question text directly from the chapter markers, and I used simplescreenrecorder to set up the recording settings (including audio).

(defun my-beep ()
  (interactive)
  (save-window-excursion
    (shell-command "aplay ~/recordings/beep.wav &" nil nil)))

(require 'director)
(defvar emacsconf-recording-process nil)
(shell-command "xdotool getwindowfocus windowsize 1282 720")
(progn
  (switch-to-buffer (get-buffer-create "*Questions*"))
  (erase-buffer)
  (org-mode)
  (face-remap-add-relative 'default :height 300)
  (setq-local mode-line-format "   Q&A for EmacsConf 2022: What I'd like to see in Emacs (Richard M. Stallman) - emacsconf.org/2022/talks/rms")
  (sit-for 3)
  (delete-other-windows)
  (hl-line-mode -1)
  (when (process-live-p emacsconf-recording-process) (kill-process emacsconf-recording-process))
  (setq emacsconf-recording-process (start-process "ssr" (get-buffer-create "*ssr*")
                                                   "simplescreenrecorder"
                                                   "--start-recording"
                                                   "--start-hidden"))
  (sit-for 3)
  (director-run
   :version 1
   :log-target '(file . "/tmp/director.log")
   :before-start
   (lambda ()
     (switch-to-buffer (get-buffer-create "*Questions*"))
     (delete-other-windows))
   :steps
   (let ((subtitles (subed-parse-file "~/proj/emacsconf/rms/emacsconf-2022-rms--what-id-like-to-see-in-emacs--answers--chapters.vtt")))
     (apply #'append
            (list
             (list :eval '(my-beep))
             (list :type "* Q&A for Richard Stallman's EmacsConf 2022 talk: What I'd like to see in Emacs\nhttps://emacsconf.org/2022/talks/rms\n\n"))
            (mapcar
             (lambda (sub)
               (list
                (list :log (elt sub 3))
                (list :eval '(progn (org-end-of-subtree)
                                    (unless (bolp) (insert "\n"))))
                (list :type (concat "** " (elt sub 3) "\n\n"))
                (list :eval '(org-back-to-heading))
                (list :wait 5)
                (list :eval '(my-beep))))
             subtitles)))
   :typing-style 'human
   :delay-between-steps 0
   :after-end (lambda ()
                (process-send-string emacsconf-recording-process "record-save\nwindow-show\nquit\n"))
   :on-failure (lambda ()
                 (process-send-string emacsconf-recording-process "record-save\nwindow-show\nquit\n"))
   :on-error (lambda ()
               (process-send-string emacsconf-recording-process "record-save\nwindow-show\nquit\n"))))

I used the following code to copy the latest recording to animation.webm and extract the audio to animation.wav. my-latest-file and my-recordings-dir are in my Emacs config.

(let ((name "animation.webm"))
  (copy-file (my-latest-file my-recordings-dir) name t)
  (shell-command
   (format "ffmpeg -y -i %s -ar 8000 -ac 1 %s.wav"
           (shell-quote-argument name)
           (shell-quote-argument (file-name-sans-extension name)))))

Then I needed to get the timestamps of the beeps in the recording. I subtracted a little bit (0.82 seconds) based on comparing the waveform with the results.

filename = "animation.wav"
from scipy.io import wavfile
from scipy import signal
import numpy as np
import re
rate, source = wavfile.read(filename)
peaks = signal.find_peaks(source, height=1000, distance=1000)
base_times = (peaks[0] / rate) - 0.82
print(base_times)

I noticed that the first question didn't seem to get beeped properly, so I tweaked the times. Then I wrote some code to generate a very long ffmpeg command that used trim and tpad to select the segments and extend them to the right durations. There was some drift when I did it without the audio track, but the timestamps seemed to work right when I included the Q&A audio track as well.

import webvtt
import subprocess
chapters_filename =  "emacsconf-2022-rms--what-id-like-to-see-in-emacs--answers--chapters.vtt"
answers_filename = "answers.wav"
animation_filename = "animation.webm"
def get_length(filename):
    result = subprocess.run(["ffprobe", "-v", "error", "-show_entries",
                             "format=duration", "-of",
                             "default=noprint_wrappers=1:nokey=1", filename],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT)
    return float(result.stdout)

def get_frames(filename):
    result = subprocess.run(["ffprobe", "-v", "error", "-select_streams", "v:0", "-count_packets",
                             "-show_entries", "stream=nb_read_packets", "-of",
                             "csv=p=0", filename],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT)
    return float(result.stdout)

answers_length = get_length(answers_filename)
# override base_times
times = np.asarray([  1.515875,  13.50, 52.32125 ,  81.368625, 116.66625 , 146.023125,
       161.904875, 182.820875, 209.92125 , 226.51525 , 247.93875 ,
       260.971   , 270.87375 , 278.23325 , 303.166875, 327.44925 ,
       351.616375, 372.39525 , 394.246625, 409.36325 , 420.527875,
       431.854   , 440.608625, 473.86825 , 488.539   , 518.751875,
       544.1515  , 555.006   , 576.89225 , 598.157375, 627.795125,
       647.187125, 661.10875 , 695.87175 , 709.750125, 717.359875])
fps = 30.0
times = np.append(times, get_length(animation_filename))
anim_spans = list(zip(times[:-1], times[1:]))
chapters = webvtt.read(chapters_filename)
if chapters[0].start_in_seconds == 0:
    vtt_times = [[c.start_in_seconds, c.text] for c in chapters]
else:
    vtt_times = [[0, "Introduction"]] + [[c.start_in_seconds, c.text] for c in chapters] 
vtt_times = vtt_times + [[answers_length, "End"]]
# Add ending timestamps
vtt_times = [[x[0][0], x[1][0], x[0][1]] for x in zip(vtt_times[:-1], vtt_times[1:])]
test_rate = 1.0

i = 0
concat_list = ""
groups = list(zip(anim_spans, vtt_times))
import ffmpeg
animation = ffmpeg.input('animation.webm').video
audio = ffmpeg.input('rms.opus')

for_overlay = ffmpeg.input('color=color=black:size=1280x720:d=%f' % answers_length, f='lavfi')
params = {"b:v": "1k", "vcodec": "libvpx", "r": "30", "crf": "63"}
test_limit = 1
params = {"vcodec": "libvpx", "r": "30", "copyts": None, "b:v": "1M", "crf": 24}
test_limit = 0
anim_rate = 1
import math
cursor = 0
if test_limit > 0:
    groups = groups[0:test_limit]
clips = []

# cursor is the current time
for anim, vtt in groups:
    padding = vtt[1] - cursor - (anim[1] - anim[0]) / anim_rate
    if (padding < 0):
        print("Squeezing", math.floor((anim[1] - anim[0]) / (anim_rate * 1.0)), 'into', vtt[1] - cursor, padding)
        clips.append(animation.trim(start=anim[0], end=anim[1]).setpts('PTS-STARTPTS')) 
    elif padding == 0:
        clips.append(animation.trim(start=anim[0], end=anim[1]).setpts('PTS-STARTPTS'))
    else:
        print("%f to %f: Padding %f into %f - pad: %f" % (cursor, vtt[1], (anim[1] - anim[0]) / (anim_rate * 1.0), vtt[1] - cursor, padding))
        cursor = cursor + padding + (anim[1] - anim[0]) / anim_rate
        clips.append(animation.trim(start=anim[0], end=anim[1]).setpts('PTS-STARTPTS').filter('tpad', stop_mode="clone", stop_duration=padding))
    for_overlay = for_overlay.overlay(animation.trim(start=anim[0], end=anim[1]).setpts('PTS-STARTPTS+%f' % vtt[0]))
    clips.append(audio.filter('atrim', start=vtt[0], end=vtt[1]).filter('asetpts', 'PTS-STARTPTS'))
args = ffmpeg.concat(*clips, v=1, a=1).output('output.webm', **params).overwrite_output().compile()
print(' '.join(f'"{item}"' for item in args))

Anyway, it's here for future reference. =)


subed.el: Word-level timing improvements, TSV support

| emacs, subed

I figured out how to align the subtitles to get word-level timestamps and generate SRV2 files, so now I'm working on improving the support in subed.el so that it can work with those timestamps.

The subed-word-data-load-from-file function in subed-word-data.el should load the word data from the SRV2 file and attempt to match it up with the text, colouring words if they were successfully matched.
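
A minimal way to try it out might look like this (the file name is just a placeholder): with the subtitle buffer current, load the word-level data from the SRV2 file.

(require 'subed-word-data)
;; Point this at wherever the .srv2 file was downloaded or generated.
(subed-word-data-load-from-file "my-talk.en.srv2")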

Screenshot_2022-10-26_13-46-31.png

Figure 1: After subed-word-data-load-from-file

I also updated and committed code for working with TSV files like the label export from the Audacity audio editor. The concise format might make editing and reviewing easier. The files look like this:

Screenshot_2022-10-26_13-49-00.png

Figure 2: Tab-separated values

To convert an existing file, use subed-convert (from subed-common.el). You can also manually turn on subed-tsv-mode from subed-tsv.el when you're visiting a TSV subtitle/label file. Tab-separated values can be in any sort of text file and tsv is a common file extension, so I don't automatically add it to auto-mode-alist.
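
If you do want that association in your own configuration, it's a one-liner to opt in (a suggestion, not something subed sets up for you):

;; Open .tsv files in subed-tsv-mode automatically.
(require 'subed-tsv)
(add-to-list 'auto-mode-alist '("\\.tsv\\'" . subed-tsv-mode))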

The changes should be in 1.0.16 or the latest version from the Git repository at https://github.com/sachac/subed .

Coverage reporting in Emacs with Buttercup, Undercover, Coverage, and a Makefile

| emacs, elisp, subed

One of the things that I always wanted to get back to was the practice of having good test coverage. That way, I can have all these tests catch me in case I break something in my sleep-deprived late-night hacking sessions, and I can see where I may have missed a spot.

Fortunately, subed-mode included lots of tests using the Buttercup testing framework. They look like this:

(describe "SRT"
  (describe "Getting"
    (describe "the subtitle ID"
      (it "returns the subtitle ID if it can be found."
        (with-temp-srt-buffer
         (insert mock-srt-data)
         (subed-jump-to-subtitle-text 2)
         (expect (subed-subtitle-id) :to-equal 2)))
      (it "returns nil if no subtitle ID can be found."
        (with-temp-srt-buffer
         (expect (subed-subtitle-id) :to-equal nil))))
    ...))

and I can run them with make test, which the Makefile defines as emacs -batch -f package-initialize -L . -f buttercup-run-discover.

I don't have Cask set up for subed. I should probably learn how to use Cask. In the meantime, I needed to figure out how to get the buttercup tests run from my Makefile to capture coverage data and report it in a nice way.

It turns out that the undercover coverage recording library works well with buttercup. It took me a little fiddling (and some reference to undercover.el-buttercup-integration-example) to figure out exactly how to invoke it so that undercover instrumented libraries that I was loading, since the subed files were in one subdirectory and the tests were in another. This is what I eventually came up with for tests/undercover-init.el:

(add-to-list 'load-path "./subed")
(when (require 'undercover nil t)
  (undercover "./subed/*.el" (:report-format 'simplecov) (:send-report nil)))

Then the tests files could start with:

(load-file "./tests/undercover-init.el")
(require 'subed-srt)

and my Makefile target for running tests with coverage reporting could be:

test-coverage:
	mkdir -p coverage
	UNDERCOVER_FORCE=true emacs -batch -L . -f package-initialize -f buttercup-run-discover

Displaying the coverage information in code buffers was easy with the coverage package. It looks in the git root directory for the coverage results, so I didn't need to tell it where the results were. This is what it looks like:

2022-01-02-19-00-28.svg

There are a few other options for displaying coverage info. cov uses the fringe and coverlay focuses on highlighting missed lines.

So now I can actually see how things are going, and I can start writing tests for some of those gaps. At some point I may even do the badge thing mentioned in my blog post from 2015 on continuous integration and code coverage for Emacs packages. There are a lot of things I'm slowly remembering how to do… =)

Defining generic and mode-specific Emacs Lisp functions with cl-defmethod

| elisp, emacs, subed

2022-01-27: Added example function description.
2022-01-02: Changed quote to function in the defalias.

I recently took over the maintenance of subed, an Emacs mode for editing subtitles. One of the things on my TODO list was to figure out how to handle generic and format-specific functions instead of relying on defalias. For example, there are SubRip files (.srt), WebVTT files (.vtt), and Advanced SubStation Alpha (.ass). I also want to add support for Audacity labels and other formats.

There are some functions that will work across all of them once you have the appropriate format-specific functions in place, and there are some functions that have to be very different depending on the format that you're working with. Now, how do you do those things in Emacs Lisp? There are several ways of making general functions and specific functions.

For example, the forward-paragraph and backward-paragraph commands use variables to figure out the paragraph separators, so buffer-local variables can change the behaviour.
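
Here's a sketch of that variable-based approach (not code from forward-paragraph itself): treat lines of dashes as paragraph boundaries by changing the buffer-local variables those commands consult.

(defun my-dashed-paragraph-setup ()
  "Treat lines of three or more dashes as paragraph separators."
  (setq-local paragraph-separate "[ \t\f]*$\\|-\\{3,\\}")
  (setq-local paragraph-start "\f\\|[ \t]*$\\|-\\{3,\\}"))
(add-hook 'text-mode-hook #'my-dashed-paragraph-setup)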

However, I needed a bit more than regular expressions. An approach taken in some packages like smartparens is to have buffer-local variables have the actual functions to be called, like sp-forward-bound-fn and sp-backward-bound-fn.

(defvar-local sp-forward-bound-fn nil
  "Function to restrict the forward search")

(defun sp--get-forward-bound ()
  "Get the bound to limit the forward search for looking for pairs.
If it returns nil, the original bound passed to the search
function will be considered."
  (and sp-forward-bound-fn (funcall sp-forward-bound-fn)))

Since there were so many functions, I figured that might be a little bit unwieldy. In Org mode, custom export backends are structs that have an alist that maps the different types of things to the functions that will be called, overriding the functions that are defined in the parent export backend.

(cl-defstruct (org-export-backend (:constructor org-export-create-backend)
          (:copier nil))
  name parent transcoders options filters blocks menu)

(defun org-export-get-all-transcoders (backend)
  "Return full translation table for BACKEND.

BACKEND is an export back-end, as return by, e.g,,
`org-export-create-backend'.  Return value is an alist where
keys are element or object types, as symbols, and values are
transcoders.

Unlike to `org-export-backend-transcoders', this function
also returns transcoders inherited from parent back-ends,
if any."
  (when (symbolp backend) (setq backend (org-export-get-backend backend)))
  (when backend
    (let ((transcoders (org-export-backend-transcoders backend))
          parent)
      (while (setq parent (org-export-backend-parent backend))
        (setq backend (org-export-get-backend parent))
        (setq transcoders
              (append transcoders (org-export-backend-transcoders backend))))
      transcoders)))

The export code looked a little bit complicated, though. I wanted to see if there was a different way of doing things, and I came across cl-defmethod. Actually, the first time I tried to implement this, I was focused on the fact that cl-defmethod could call different things depending on the class that you give it. So initially I created a couple of classes: a subed-backend class, and then subclasses such as subed-vtt-backend. This allowed me to store the backend as a buffer-local variable and differentiate based on that.

(require 'eieio)

(defclass subed-backend ()
  ((regexp-timestamp :initarg :regexp-timestamp
                     :initform ""
                     :type string
                     :custom string
                     :documentation "Regexp matching a timestamp.")
   (regexp-separator :initarg :regexp-separator
                     :initform ""
                     :type string
                     :custom string
                     :documentation "Regexp matching the separator between subtitles."))
  "A class for data and functions specific to a subtitle format.")

(defclass subed-vtt-backend (subed-backend) nil
  "A class for WebVTT subtitle files.")

(cl-defmethod subed--timestamp-to-msecs ((backend subed-vtt-backend) time-string)
  "Find HH:MM:SS,MS pattern in TIME-STRING and convert it to milliseconds.
Return nil if TIME-STRING doesn't match the pattern.
Use the format-specific function for BACKEND."
  (save-match-data
    (when (string-match (oref backend regexp-timestamp) time-string)
      (let ((hours (string-to-number (match-string 1 time-string)))
            (mins  (string-to-number (match-string 2 time-string)))
            (secs  (string-to-number (match-string 3 time-string)))
            (msecs (string-to-number (subed--right-pad (match-string 4 time-string) 3 ?0))))
        (+ (* (truncate hours) 3600000)
           (* (truncate mins) 60000)
           (* (truncate secs) 1000)
           (truncate msecs))))))

Then I found out that you can use major-mode as a context specifier for cl-defmethod, so you can call different specific functions depending on the major mode that your buffer is in. It doesn't seem to be mentioned in the elisp manual, so at some point I should figure out how to suggest mentioning it. Anyway, now I have some functions that get called if the buffer is in subed-vtt-mode and some functions that get called if the buffer is in subed-srt-mode.
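
Here's a minimal sketch of that context specifier outside of subed (my-greet is a made-up function just for illustration):

(require 'cl-generic)

(cl-defgeneric my-greet ()
  "Describe the current buffer."
  (message "Some other kind of buffer"))

;; This implementation is only chosen when the buffer's major mode is
;; emacs-lisp-mode or a mode derived from it.
(cl-defmethod my-greet (&context (major-mode emacs-lisp-mode))
  (message "An Emacs Lisp buffer"))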

The catch is that cl-defmethod can't define interactive functions. So if I'm defining a command, an interactive function that can be called with M-x, then I will need to have a regular function that calls the function defined with cl-defmethod. This resulted in a bit of duplicated code, so I have a macro that defines the method and then defines the possibly interactive command that calls that method. I didn't want to think about whether something was interactive or not, so my macro just always creates those two functions. One is a cl-defmethod that I can override for a specific major mode, and one is the function that actually calls it, which may or may not be interactive. It doesn't handle &rest args, but I don't have any in subed.el at this time.

(defmacro subed-define-generic-function (name args &rest body)
  "Declare an object method and provide the old way of calling it."
  (declare (indent 2))
  (let (is-interactive
        doc)
    (when (stringp (car body))
      (setq doc (pop body)))
    (setq is-interactive (eq (caar body) 'interactive))
    `(progn
       (cl-defgeneric ,(intern (concat "subed--" (symbol-name name)))
           ,args
         ,doc
         ,@(if is-interactive
               (cdr body)
             body))
       ,(if is-interactive
            `(defun ,(intern (concat "subed-" (symbol-name name))) ,args
               ,(concat doc "\n\nThis function calls the generic function `"
                        (concat "subed--" (symbol-name name)) "' for the actual implementation.")
               ,(car body)
               (,(intern (concat "subed--" (symbol-name name)))
                ,@(delq nil (mapcar (lambda (a)
                                      (unless (string-match "^&" (symbol-name a))
                                        a))
                                    args))))
          `(defalias (quote ,(intern (concat "subed-" (symbol-name name))))
             (function ,(intern (concat "subed--" (symbol-name name))))
             ,doc)))))

For example, the function:

(subed-define-generic-function timestamp-to-msecs (time-string)
  "Find timestamp pattern in TIME-STRING and convert it to milliseconds.
Return nil if TIME-STRING doesn't match the pattern.")

expands to:

(progn
  (cl-defgeneric subed--timestamp-to-msecs
      (time-string)
    "Find timestamp pattern in TIME-STRING and convert it to milliseconds.
Return nil if TIME-STRING doesn't match the pattern.")
  (defalias 'subed-timestamp-to-msecs 'subed--timestamp-to-msecs "Find timestamp pattern in TIME-STRING and convert it to milliseconds.
Return nil if TIME-STRING doesn't match the pattern."))

and the interactive command defined with:

(subed-define-generic-function forward-subtitle-end ()
  "Move point to end of next subtitle.
Return point or nil if there is no next subtitle."
  (interactive)
  (when (subed-forward-subtitle-id)
    (subed-jump-to-subtitle-end)))

expands to:

(progn
  (cl-defgeneric subed--forward-subtitle-end nil "Move point to end of next subtitle.
Return point or nil if there is no next subtitle."
                 (when
                     (subed-forward-subtitle-id)
                   (subed-jump-to-subtitle-end)))
  (defun subed-forward-subtitle-end nil "Move point to end of next subtitle.
Return point or nil if there is no next subtitle.

This function calls the generic function `subed--forward-subtitle-end' for the actual implementation."
         (interactive)
         (subed--forward-subtitle-end)))

Then I can define a specific one with:

(cl-defmethod subed--timestamp-to-msecs (time-string &context (major-mode subed-srt-mode))
  "Find HH:MM:SS,MS pattern in TIME-STRING and convert it to milliseconds.
Return nil if TIME-STRING doesn't match the pattern.
Use the format-specific function for MAJOR-MODE."
  (save-match-data
    (when (string-match subed--regexp-timestamp time-string)
      (let ((hours (string-to-number (match-string 1 time-string)))
            (mins  (string-to-number (match-string 2 time-string)))
            (secs  (string-to-number (match-string 3 time-string)))
            (msecs (string-to-number (subed--right-pad (match-string 4 time-string) 3 ?0))))
        (+ (* (truncate hours) 3600000)
           (* (truncate mins) 60000)
           (* (truncate secs) 1000)
           (truncate msecs))))))

The upside is that it's easy to either override or extend a function's behavior. For example, after I sort subtitles, I want to renumber them if I'm in an SRT buffer because SRT subtitles have numeric IDs. This doesn't happen in any of the other modes. So I can just define that this bit of code runs after the regular code that runs.

(cl-defmethod subed--sort :after (&context (major-mode subed-srt-mode))
  "Renumber after sorting. Format-specific for MAJOR-MODE."
  (subed-srt--regenerate-ids))

The downside is that going to the function's definition and stepping through it is a little more complicated because it's hidden behind this macro and the cl-defmethod infrastructure. I think that if you use describe-function on the right function (the internal version with the --), it will list the different implementations. I added a note to the regular function's docstring to make it a little easier.

Here's what M-x describe-function subed-forward-subtitle-end looks like:

describe-function.svg

Figure 1: Describing a generic function

I'm going to give this derived-mode branch a try for a little while by subtitling some more EmacsConf talks before I merge it into the main branch. This is my first time working with cl-defmethod, and it looks pretty interesting.

Using word-level timing information when editing subtitles or captions in Emacs

| emacs, subed, video

2022-10-26: Merged word-level timing support into subed.el, so I don't need my old caption functions.

2022-04-18: Switched to using yt-dlp.

I like to split captions at logical points, such as at the end of a phrase or sentence. At first, I used subed.el to play the video for the caption, pausing it at the appropriate point and then calling subed-split-subtitle to split at the playback position. Then I modified subed-split-subtitle to split at the video position that's proportional to the text position, so that it's roughly in the right spot even if I'm not currently listening. That got me most of the way to being able to quickly edit subtitles.
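
The proportional idea on its own is simple; here's a sketch of the calculation (not subed's actual implementation):

(defun my-proportional-split-msecs (start-ms stop-ms text-pos text-len)
  "Return a position between START-MS and STOP-MS proportional to TEXT-POS/TEXT-LEN."
  (+ start-ms
     (round (* (- stop-ms start-ms)
               (/ (float text-pos) (max 1 text-len))))))

;; If point is 40% of the way through the text of a subtitle that spans
;; 10s to 20s, split at 14s:
;; (my-proportional-split-msecs 10000 20000 20 50) => 14000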

It turns out that word-level timing is actually available from YouTube if I download the autogenerated SRV2 file using yt-dlp, which I can do with the following function:

(defun my-caption-download-srv2 (id)
  (interactive "MID: ")
  (require 'subed-word-data)
  (when (string-match "v=\\([^&]+\\)" id) (setq id (match-string 1 id)))
  (let ((default-directory "/tmp"))
    (call-process "yt-dlp" nil nil nil "--write-auto-sub" "--write-sub" "--no-warnings" "--sub-lang" "en" "--skip-download" "--sub-format" "srv2"
                  (concat "https://youtu.be/" id))
    (subed-word-data-load-from-file (my-latest-file "/tmp" "\\.srv2\\'"))))

2022-10-26: I can also generate a SRV2-ish file using torchaudio, which I can then load with subed-word-data-load-from-file.

(defun my-caption-fix-common-errors (data)
  (mapc (lambda (o)
          (mapc (lambda (e)
                  (when (string-match (concat "\\<" (regexp-opt (if (listp e) (seq-remove (lambda (s) (string= "" s)) e)
                                                                  (list e)))
                                              "\\>")
                                      (alist-get 'text o))
                    (map-put! o 'text (replace-match (car (if (listp e) e (list e))) t t (alist-get 'text o)))))
                my-subed-common-edits))
        data))

Assuming I start editing from the beginning of the file, then the part of the captions file after point is mostly unedited. That means I can match the remainder of the current caption with the word-level timing to try to figure out the time to use when splitting the subtitle, falling back to the proportional method if the data is not available.

(defun subed-avy-set-up-actions ()
  (interactive)
  (make-local-variable 'avy-dispatch-alist)
  (add-to-list
   'avy-dispatch-alist
   (cons ?, 'subed-split-subtitle)))

(use-package subed
  :if my-laptop-p
  :load-path "~/vendor/subed/subed"
  :hook
  (subed-mode . display-fill-column-indicator-mode)
  (subed-mode . subed-avy-set-up-actions)
  :bind
  (:map subed-mode-map
        ("M-," . subed-split-subtitle)
        ("M-." . subed-merge-with-next)
        ("M-p" . avy-goto-char-timer)
        ("M-e" . avy-goto-char-timer)))

That way, I can use the word-level timing information for most of the reformatting, but I can easily replay segments of the video if I'm unsure about a word that needs to be changed.

If I want to generate a VTT based on the caption data, breaking it at certain words, these functions help:

(defvar my-caption-breaks
  '("the" "this" "we" "we're" "I" "finally" "but" "and" "when")
  "List of words to try to break at.")
(defun my-caption-make-groups (list &optional threshold)
  (let (result
        current-item
        done
        (current-length 0)
        (limit (or threshold 70))
        (lower-limit 30)
        (break-regexp (concat "\\<" (regexp-opt my-caption-breaks) "\\>")))
    (while list
      (cond
       ((null (car list)))
       ((string-match "^\n*$" (alist-get 'text (car list)))
        (push (cons '(text . " ") (car list)) current-item)
        (setq current-length (1+ current-length)))
       ((< (+ current-length (length (alist-get 'text (car list)))) limit)
        (setq current-item (cons (car list) current-item)
              current-length (+ current-length (length (alist-get 'text (car list))) 1)))
       (t (setq done nil)
          (while (not done)
          (cond
           ((< current-length lower-limit)
            (setq done t))
           ((and (string-match break-regexp (alist-get 'text (car current-item)))
                 (not (string-match break-regexp (alist-get 'text (cadr current-item)))))
            (setq current-length (- current-length (length (alist-get 'text (car current-item)))))
            (push (pop current-item) list)
            (setq done t))
           (t
            (setq current-length (- current-length (length (alist-get 'text (car current-item)))))
            (push (pop current-item) list))))
          (push nil list)
          (setq result (cons (reverse current-item) result) current-item nil current-length 0)))
      (setq list (cdr list)))
    (reverse result)))

(defun my-caption-format-as-subtitle (list &optional word-timing)
  "Turn a LIST of the form (((start . ms) (end . ms) (text . s)) ...) into VTT.
If WORD-TIMING is non-nil, include word-level timestamps."
  (format "%s --> %s\n%s\n\n"
          (subed-vtt--msecs-to-timestamp (alist-get 'start (car list)))
          (subed-vtt--msecs-to-timestamp (alist-get 'end (car (last list))))
          (s-trim (mapconcat (lambda (entry)
                               (if word-timing
                                   (format " <%s>%s"
                                           (subed-vtt--msecs-to-timestamp (alist-get 'start entry))
                                           (string-trim (alist-get 'text entry)))
                                 (alist-get 'text entry)))
                             list ""))))

(defun my-caption-to-vtt (&optional data)
  (interactive)
  (with-temp-file "captions.vtt"
    (insert "WEBVTT\n\n"
            (mapconcat
             (lambda (entry) (my-caption-format-as-subtitle entry))
             (my-caption-make-groups
              (or data (my-caption-fix-common-errors subed-word-data--cache)))
             ""))))

This is part of my Emacs configuration.

Using Emacs to fix automatically generated subtitle timestamps

| emacs, subed

I like how people are making more and more Emacs-related videos. I think subtitles, transcripts, and show notes would go a long way to helping people quickly search, skim, and squeeze these videos into their day.

YouTube's automatically-generated subtitles overlap. I think some players scroll the subtitles, but the ones I use just display them in alternating positions. I like to have non-overlapping subtitles, so here's some code that works with subed.el to fix the timestamps.

(defun my/subed-fix-timestamps ()
  "Change all ending timestamps to the start of the next subtitle."
  (goto-char (point-max))
  (let ((timestamp (subed-subtitle-msecs-start)))
    (while (subed-backward-subtitle-time-start)
      (subed-set-subtitle-time-stop timestamp)
      (setq timestamp (subed-subtitle-msecs-start)))))

Then it's easy to edit the subtitles (punctuation, capitalization, special terms), especially with the shortcuts for splitting and merging subtitles.

For transcripts with starting and ending timestamps per paragraph, I like using the merge shortcut to merge all the subtitles for a paragraph together. Here's a sample: https://emacsconf.org/2020/talks/05/

Tonight I edited automatically-generated subtitles for a screencast that was about 40 minutes long. The resulting file had 1157 captions, so about 2 seconds each. I finished it in about 80 minutes, pretty much the 2x speed that I've been seeing. I can probably get a little faster if I figure out good workflows for:

  • jumping: avy muscle memory, maybe?
  • splitting things into sentences and phrases
  • fixing common speech recognition errors (ex: emax -> Emacs), which I handle with regex replaces; maybe a list of them, like the sketch below?

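For that last item, the list could be as simple as something like this sketch (hypothetical names, just to illustrate the idea):

(defvar my-subtitle-fixes
  '(("\\bemax\\b" . "Emacs")
    ("\\bsubhead\\b" . "subed"))
  "Alist of (REGEXP . REPLACEMENT) for common speech recognition errors.")

(defun my-fix-subtitle-errors ()
  "Apply `my-subtitle-fixes' to the current buffer."
  (interactive)
  (save-excursion
    (dolist (fix my-subtitle-fixes)
      (goto-char (point-min))
      (while (re-search-forward (car fix) nil t)
        (replace-match (cdr fix) t)))))
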
I experimented with making a hydra for this before, but thinking about the keys to use slowed me down a bit and it didn't flow very well. Might be worth tinkering with.

Transcribing from scratch takes me about 4-5x playtime. I haven't tweaked my workflow for that one yet because I've only transcribed one talk with subed.el, and there's a backlog of talks that already have automatically generated subtitles to edit. Low-hanging fruit! =)

So that's another thing I (or other people) can occasionally do to help out even if I don't have enough focused time to think about a programming challenge or do a podcast myself. And I get to learn more in the process, too. Fun!