I want to make it easier to livestream without worrying about leaking private information. Tradeoff: slower conversations with the chat, but more peace of mind.
I think I've sorted out a setup involving two instances of OBS, with
the source instance sending the stream with a delay to the restreaming
instance that will then send it on to YouTube. This allows me to cut
the feed from the source instance to the restreaming instance in case
something happens.
The first OBS instance is the one that has my screen capture, webcam, audio, and so on. Here's what I needed to do to set it up:
Create a new profile or rename the profile to "Source".
Name the scene collection "Source" as well.
In Settings - Hotkeys, define a keyboard shortcut for Stop streaming (discard delay). I use Super + F12.
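The source profile also needs to stream to the restreaming instance with the delay applied. Roughly (the exact delay length is a matter of taste):
In Settings - Stream, use the Custom service with srt://127.0.0.1:9000 as the server, so it connects to the listener configured below.
In Settings - Advanced, enable Stream Delay with however many seconds of buffer I want for panicking.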
I used the Profile menu to create a new profile called "Restream" and the Scene Collection menu to create a new collection called "Restream." I set up the scene as follows:
Create a text source with the backup message.
Create a media source.
Uncheck Local File.
Uncheck Restart playback when source becomes active.
Input: srt://127.0.0.1:9000?mode=listener
In the first OBS (the source), click on Start streaming. After some delay, the stream shows up in the restreaming instance's media source, and I can move or resize it.
I was a little thrown off by the fact that my audio bars didn't initially show up in the mixer in the restreamer, but both recording and streaming seem to include the audio.
To stop the stream, I can switch to OBS, click on Stop streaming, and (important!) choose Stop streaming (discard delay). The OBS window might be buried under other things on my second screen, though, and that's too many clicks and mouse movements. The keyboard shortcut Super + F12 we just set up should be handy, but I might not remember that, so let's add some scripts. The OBS websocket protocol doesn't support discarding the delay buffer yet, but I'm on Linux and X11, so I can use xdotool to simulate a keypress. Here I select the window matching the profile name I set up previously.
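The panic script itself isn't included above, but the idea is just an xdotool one-liner. Here's a rough Emacs Lisp equivalent (adjust the window title pattern and key name to match your setup):
;; Sketch only: activate the OBS window whose title mentions the "Source"
;; profile, then send the Super + F12 hotkey defined in OBS. This is roughly
;; what a ~/bin/panic shell script could do with
;;   xdotool search --name "Profile: Source" windowactivate --sync key super+F12
(defun my-obs-press-panic-hotkey ()
  "Send the stop-streaming-and-discard-delay hotkey to the source OBS."
  (interactive)
  (call-process "xdotool" nil nil nil
                "search" "--name" "Profile: Source"
                "windowactivate" "--sync"
                "key" "super+F12"))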
I can org-capture the timestamp of the panic so that I can double-check the recording.
;;;###autoload
(defun sacha-obs-panic ()
  "Stop streaming and discard the delay buffer.
This uses a hotkey I defined in OBS."
  (interactive)
  (shell-command "~/bin/panic")
  (org-capture-string "Panicked" "l")
  (org-capture-finalize))
I always have Emacs around, and if it's not my main app, I have an Autokey shortcut that maps Super + 1 to focus on Emacs. Then I can M-x panic and Emacs completion will take care of finding the right function.
Let's add a menu item for even more panic assistance.
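The exact code isn't included here, but a minimal sketch just adds an entry to the menu bar that calls the panic command (the placement and label are arbitrary):
;; Sketch: add a top-level "Panic!" menu bar item that calls `sacha-obs-panic'.
(define-key global-map [menu-bar panic]
  '(menu-item "Panic!" sacha-obs-panic
              :help "Stop streaming and discard the OBS delay buffer"))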
New in this video: subed-record-sum-time, #+PAD_LEFT and #+PAD_RIGHT
I like the constraints of a one-minute video, so I added a subed-record-sum-time command. That way, when I edit the video using Emacs, I can check how long the result will be. First, I split the subtitles, align them with the audio to fix the timestamps, and double-check the times. Then I can skip my oopses. Sometimes WhisperX doesn't catch them, so I also look at waveforms and characters per second. I already talk quickly, so I'm not going to speed that up, but I can trim the pauses in between phrases, which is easy to do with waveforms. Sometimes, after reviewing a draft, I realize I need a little more time. If the original audio has some silence, I can just copy and paste it. If not, I can pad left or pad right to add some silence. I can try the flow of some sections and compile the video when I'm ready. Emacs can do almost anything. Yay Emacs!
I like the constraints of a one-minute video, so I added a subed-record-sum-time command. That way, when I edit the video using Emacs, I can check how long the result will be.
subed-record uses subtitles and directives in
comments in a VTT subtitle file to edit audio
and video. subed-record-sum-time calculates
the resulting duration and displays it in the
minibuffer.
First, I split the subtitles, align them with the audio to fix the timestamps, and double-check the times.
I'm experimenting with an algorithmic way to
combine the breaks from my script with the
text from the transcript. subed-align calls
the aeneas forced alignment tool to match up
the text with the timestamps. I use
subed-waveform-show-all to show all the
waveforms.
Then I can skip my oopses.
Adding a NOTE #+SKIP comment before a
subtitle makes subed-record-compile-video
and subed-record-compile-flow skip that part
of the audio.
Sometimes WhisperX doesn't catch them,
WhisperX sometimes doesn't transcribe my false starts if I repeat things quickly.
so I also look at waveforms
subed-waveform-show-all adds waveforms for
all the subtitles. If I notice there's a pause
or a repeated shape in the waveform, or if I
listen and notice the repetition, I can
confirm by middle-clicking on the waveform to
sample part of it.
and characters per second.
Low characters per second is sometimes a sign
that the timestamps are incorrect or there's a
repetition that wasn't transcribed.
I already talk quickly, so I'm not going to speed that up
Also, I already sound like a chipmunk;
mechanically speeding up my recording to fit
in a certain time will make that worse =)
but I can trim the pauses in between phrases, which is easy to do with waveforms.
I left-click to set the start and right-click to
set the stop. If I want to adjust the
previous/next one at the same time, I would
use shift-left-click or shift-right-click, but
here I want to skip the gaps between phrases,
so I adjust the current subtitle without
making the previous/next one longer.
Sometimes, after reviewing a draft, I realize I need a little more time.
I can specify visuals like a video, animated
GIF, or an image by adding a [[file:...]]
link in the comment for a subtitle. That
visual will be used until the next visual is
specified in a comment on a different
subtitle. subed-record-compile-video can
automatically speed up video clips to fit in
the time for the current audio segment, which
is the set of subtitles before the next visual
is defined. After I compile and review the
video, sometimes I notice that something goes by too quickly.
If the original audio has some silence, I can just copy and paste it.
This can sometimes feel more natural than adding in complete silence.
If not, I can pad left or pad right to add some silence.
I added a new feature so that I could specify
something like #+PAD_RIGHT: 1.5 in a comment
to add 1.5 seconds of silence after the audio
specified by that subtitle.
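For example, a subtitle's comment might look like this (the cue and text are made up):
NOTE
#+PAD_RIGHT: 1.5
00:00:05.000 --> 00:00:08.000
This line gets an extra 1.5 seconds of silence after it.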
I can try the flow of some sections
I can select a region and then use M-x
subed-record-compile-try-flow to play the
audio or C-u M-x
subed-record-compile-try-flow to play the
audio+video for that region.
and compile the video when I'm ready.
subed-record-compile-video compiles the
video to the file specified in #+OUTPUT:
filename. ffmpeg is very arcane, so I'm glad
I can simplify my use of it with Emacs Lisp.
Emacs can do almost anything. Yay Emacs!
Non-linear audio and video editing is actually
pretty fun in a text editor, especially when I
can just use M-x vundo to navigate my undo
history.
I want to document more of my Minecraft adventures with A+. Video is a natural way to do this. It builds on her familiarity with the tutorials and streams she enjoys watching. I set up OBS on her laptop and plugged in my Blue Yeti microphone. We did our first interview yesterday. I edited and subtitled it (because why not!), uploaded it as an unlisted YouTube video, and shared it with her dad, sister, and cousins.
I did the video editing in Emacs with subed-record. First, I used WhisperX to transcribe the video, and I used subed-align to fix the timestamps with aeneas. I normalized the audio with Audacity and I exported the .opus file for use in subed-record.el. Then I added NOTE #+SKIP before times I wanted to remove, like when she asked for a retake. Here's what that subtitle markup looks like:
WEBVTT
NOTE #+SKIP
00:00:00.000 --> 00:00:16.679
And then I'll record in my side also
and we'll just put it in somehow.
Somehow. Okay. We can edit that, right?
Yeah, we'll learn how to edit things.
It'll be great.
NOTE
Introduction
#+AUDIO: cuberventures-001.opus
[[file:intro.webm]]
#+OUTPUT: cuberventures-001-fxnt-create-2-windmill-home-cafe-trains-hotel-half-underwater.webm
00:00:16.680 --> 00:00:19.399
Okay, so now we're here with <username>.
00:00:19.400 --> 00:00:23.039
I want to find out what you like about Minecraft and
00:00:23.040 --> 00:00:26.079
all the cool things that you have been building lately.
This was a little different from my usual video creation workflow, where I record the audio and the video separately. When I wrote subed-record.el, I assumed I'd edit the audio first, choose images/GIFs/videos that were already ready to go, and then combine those visuals with sections of audio, speeding things up or slowing things down as needed. Now I wanted to apply the same edits to the video as I did to the audio. A+ did a great job of looking at things in Minecraft while talking about them, so I wanted to keep her narration in sync. I added some code to allow me to specify a same-edits keyword for the visuals. That meant that I would use the same selection list that I used for cutting the audio. Here's what that subtitle markup looks like:
NOTE
[[file:2024-12-31 10-35-14.mkv]]
#+OPTIONS: same-edits
00:00:43.860 --> 00:00:45.941
Shall we take a tour of my world?
00:00:45.942 --> 00:00:50.079
Sure, let's tell people which mod pack this is.
00:00:50.080 --> 00:00:55.639
This is FXNT Create 2, also known as FoxyNoTail Create 2.
NOTE Windmill
00:00:55.640 --> 00:00:58.239
I've got this little bit of path leading to the interview
00:00:58.240 --> 00:01:01.839
room. This is my unfinished windmill. I've been meaning to
This workflow lets me cut out segments in the middle of the video, like this:
00:17:30.200 --> 00:17:33.119
great start for a tour. I'm looking forward to seeing what
00:17:33.120 --> 00:17:34.112
you will build next.
NOTE #+SKIP
00:17:34.113 --> 00:18:02.379
Do you have any last words before
we try to figure out this video editing thing?
Yeah. We'll cut that last part out.
Let's just do a retake on that last part.
Someday. Out here. Okay. There you go.
This is a beautiful view.
00:18:02.380 --> 00:18:08.119
The last things I want to say about this world is there'll be
I also wanted to start the video with a segment from my recording, so we could see her avatar on screen during the introduction. She kept her computer on first-person POV instead of changing the camera. I used mpv to figure out the timestamps for the start and end of the part that I wanted to use, then I used ffmpeg to cut that clip. I added a comment with a link to that video in order to use it before the main video. That's the [[file:intro.webm]] in the first section's comments.
After testing a small section of the transcript by selecting a region and using subed-record-compile-video, I deselected the region and used subed-record-compile-video to produce the whole video.
I also modified subed-record-compile-subtitles
to include the other non-directive comments, so I
can include the section headings in the raw VTT
file and have them turn up in the exported
version. Then I can use the new
subed-section-comments-as-chapters command to
copy those as chapters for the YouTube
description.
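YouTube builds chapters from plain timestamp-and-title lines in the description, so the copied chapters end up looking something like this (timestamps made up for illustration):
0:00 Introduction
0:41 Windmill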
We're not going to share that particular video
yet, but I'm looking forward to trying that
technique with videos about stuff I'm figuring out
in Minecraft or Emacs. It's also tempting me to
think about ways to specify transitions like
crossfades and other fancy effects like overlays.
I like using the transcript as the starting point
for video editing. It just makes sense to me to
work with it as text. I also like this experiment
with documenting more of our Minecraft
experimentation. It seems to get her talking and
encourages her to build more. I'm looking forward
to learning more about Minecraft and making videos
too.
We did another video today using the new shortcuts
I've just set up for toggling OBS recording. This
time we didn't even need to do any editing. I used
Org Export to make her a little HTML file that had
the two videos on it, so she can review it any
time. Onward!
tldr (2167 words): I can make animating presentation maps easier by
writing my own functions for the Emacs text editor. In this post, I
show how I can animate an SVG element by element. I can also add IDs
to the paths and use CSS to build up an SVG with temporary highlighting
in a Reveal.js presentation.
Convert PDF to SVG with Inkscape (Cairo option) or pdftocairo
PNG / Supernote PDF: Combined shapes. Process
Break apart, fracture overlaps
Recombine
Set IDs
Sort paths -> Animation style 1
Adobe Fresco: individual elements in order; landscape feels natural
Animation styles
Animation style 1: Display elements one after another
Animation style 2: Display elements one after another, and also show/hide highlights
Table: slide ID, IDs to add, temporary highlights -> Reveal.js: CSS with transitions
Ideas for next steps:
Explore graphviz & other diagramming tools
Frame-by-frame SVGs
on include
write to files
FFmpeg crossfade
Recording Reveal.js presentations
Use OCR results?
I often have a hard time organizing my thoughts into a linear
sequence. Sketches are nice because they let me jump around and still
show the connections between ideas. For presentations, I'd like to
walk people through these sketches by highlighting different areas.
For example, I might highlight the current topic or show the previous
topics that are connected to the current one. Of course, this is
something Emacs can help with. Before we dive into it, here are quick
previews of the kinds of animation I'm talking about:
Figure 1: Animation style 1: based on drawing order
Animation style 2: building up a map with temporary highlights
Getting the sketches: PDFs are not all the same
Let's start with getting the sketches. I usually export my sketches as
PNGs from my Supernote A5X. But if I know that I'm going to animate a
sketch, I can export it as a PDF. I've recently been experimenting
with Adobe Fresco on the iPad, which can also export to PDF. The PDF I
get from Fresco is easier to animate, but I prefer to draw on the
Supernote because it's an e-ink device (and because the kiddo usually
uses the iPad).
If I start with a PNG, I could use Inkscape to trace the PNG and turn
it into an SVG. I think Inkscape uses autotrace behind the scenes. I
don't usually put my highlights on a separate layer, so autotrace will
make odd shapes.
It's a lot easier if you start off with vector graphics in the first
place. I can export a vector PDF from the SuperNote A5X and either
import it into Inkscape using the Cairo option or use the command-line
pdftocairo tool.
I've been looking into using Adobe Fresco, which is a free app
available for the iPad. Fresco's PDF export can be converted to an SVG
using Inkscape or pdftocairo. What I like about the output of this
app is that it gives me individual elements as their own paths and
they're listed in order of drawing. This makes it really easy to
animate by just going through the paths in order.
Animation style 1: displaying paths in order
Here's a sample SVG file that pdftocairo creates from an Adobe Fresco
PDF export:
Adobe Fresco also includes built-in time-lapse, but since I often like
to move things around or tidy things up, it's easier to just work with
the final image, export it as a PDF, and convert it to an SVG.
I can make a very simple animation by setting the opacity of all the
paths to 0, then looping through the elements to set the opacity back
to 1 and write that version of the SVG to a separate file.
From "How can I generate PNG frames that step through the highlights":
my-animate-svg-paths: Add one path at a time. Save the resulting SVGs to OUTPUT-DIR.
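The full code isn't reproduced here, but a minimal sketch of the idea (using the same xml-parse-file and svg-print approach as the other functions in this post; the real my-animate-svg-paths may differ in the details) looks like this:
;; Sketch: hide every path, then reveal them one at a time, saving a
;; numbered SVG frame to OUTPUT-DIR after each step.
(defun my-animate-svg-paths-sketch (filename output-dir)
  "Add one path at a time from FILENAME, saving the resulting SVGs to OUTPUT-DIR."
  (let* ((dom (car (xml-parse-file filename)))
         (paths (dom-by-tag dom 'path))
         (frame 0))
    (dolist (path paths)
      (dom-set-attribute path 'opacity "0"))
    (dolist (path paths)
      (dom-set-attribute path 'opacity "1")
      (setq frame (1+ frame))
      (with-temp-file (expand-file-name (format "frame-%03d.svg" frame) output-dir)
        (svg-print dom)))))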
Figure 2: Animating SVG paths based on drawing order
Neither Supernote nor Adobe Fresco give me the original stroke
information. These are filled shapes, so I can't animate something
drawing it. But having different elements appear in sequence is fine
for my purposes. If you happen to know how to get stroke information
out of Supernote .note files, or know of an iPad app that exports nice
single-line SVGs that have stroke direction, I would love to hear
about it.
Identifying paths from Supernote sketches
When I export a PDF from Supernote and convert it to an SVG, each
color is a combined shape with all the elements. If I want to animate
parts of the image, I have to break it up and recombine selected
elements (Inkscape's Ctrl-k shortcut) so that the holes in shapes are
properly handled. This is a bit of a tedious process and it usually
ends up with elements in a pretty random order. Since I have to
reorder elements by hand, I don't really want to animate the sketch
letter-by-letter. Instead, I combine them into larger chunks like
topics or paragraphs.
The following code takes the PDF, converts it to an SVG, recolours
highlights, and then breaks up paths into elements:
my-sketch-convert-pdf-and-break-up-paths: Convert PDF to SVG and break up paths.
(defun my-sketch-convert-pdf-and-break-up-paths (pdf-file &optional rotate)
"Convert PDF to SVG and break up paths."
(interactive (list (read-file-name
(format "PDF (%s): "
(my-latest-file "~/Dropbox/Supernote/EXPORT/" "pdf"))
"~/Dropbox/Supernote/EXPORT/"
(my-latest-file "~/Dropbox/Supernote/EXPORT/" "pdf")
t
nil
(lambda (s) (string-match "pdf" s)))))
(unless (file-exists-p (concat (file-name-sans-extension pdf-file) ".svg"))
(call-process "pdftocairo" nil nil nil "-svg" (expand-file-name pdf-file)
(expand-file-name (concat (file-name-sans-extension pdf-file) ".svg"))))
(let ((dom (xml-parse-file (expand-file-name (concat (file-name-sans-extension pdf-file) ".svg"))))
highlights)
(setq highlights (dom-node 'g '((id . "highlights"))))
(dom-append-child dom highlights)
(dolist (path (dom-by-tag dom 'path))
;; recolor and move
(unless (string-match (regexp-quote "rgb(0%,0%,0%)") (or (dom-attr path 'style) ""))
(dom-remove-node dom path)
(dom-append-child highlights path)
(dom-set-attribute
path 'style
(replace-regexp-in-string
(regexp-quote "rgb(78.822327%,78.822327%,78.822327%)")
"#f6f396"
(or (dom-attr path 'style) ""))))
(let ((parent (dom-parent dom path)))
;; break apart
(when (dom-attr path 'd)
(dolist (part (split-string (dom-attr path 'd) "M " t " +"))
(dom-append-child
parent
(dom-node 'path `((style . ,(dom-attr path 'style))
(d . ,(concat "M " part))))))
(dom-remove-node dom path))))
;; remove the use
(dolist (use (dom-by-tag dom 'use))
(dom-remove-node dom use))
(dolist (use (dom-by-tag dom 'image))
(dom-remove-node dom use))
;; move the first g down
(let ((g (car (dom-by-id dom "surface1"))))
(setf (cddar dom)
(seq-remove (lambda (o)
(and (listp o) (string= (dom-attr o 'id) "surface1")))
(dom-children dom)))
(dom-append-child dom g)
(when rotate
(let* ((old-width (dom-attr dom 'width))
(old-height (dom-attr dom 'height))
(view-box (mapcar 'string-to-number (split-string (dom-attr dom 'viewBox))))
(rotate (format "rotate(90) translate(0 %s)" (- (elt view-box 3)))))
(dom-set-attribute dom 'width old-height)
(dom-set-attribute dom 'height old-width)
(dom-set-attribute dom 'viewBox (format "0 0 %d %d" (elt view-box 3) (elt view-box 2)))
(dom-set-attribute highlights 'transform rotate)
(dom-set-attribute g 'transform rotate))))
(with-temp-file (expand-file-name (concat (file-name-sans-extension pdf-file) "-split.svg"))
(svg-print (car dom)))))
You can see how the spaces inside letters like "o" end up being black.
Selecting and combining those paths fixes that.
Combining paths in Inkscape
If there are shapes that are touching, then I need to draw lines and
fracture the shapes in order to break them apart.
Fracturing shapes and checking the highlights
The end result should be an SVG with the different chunks that I might
want to animate, but I need to identify the paths first. You can
assign object IDs in Inkscape, but this is a bit of an annoying
process since I haven't figured out a keyboard-friendly way to set
object IDs. I usually find it easier to just set up an Autokey
shortcut (or AutoHotkey in Windows) to click on the ID text box so
that I can type something in.
Autokey script for clicking
import time
x, y = mouse.get_location()
# Use the coordinates of the ID text field on your screen; xev can help
mouse.click_absolute(3152, 639, 1)
time.sleep(1)
keyboard.send_keys("<ctrl>+a")
mouse.move_cursor(x, y)
Then I can select each element, press the shortcut key, and type an ID
into the textbox. I might use "t-…" to indicate the text for a map
section, "h-…" to indicate a highlight, and arrows by specifying
their start and end.
Setting IDs in Inkscape
To simplify things, I wrote a function in Emacs that will go through
the different groups that I've made, show each path in a different
color and with a reasonable guess at a bounding box, and prompt me for
an ID. This way, I can quickly assign IDs to all of the paths. The
completion is mostly there to make sure I don't accidentally reuse an
ID, although it can try to combine paths if I specify the ID. It saves
the paths after each change so that I can start and stop as needed.
Identifying paths in Emacs is usually much nicer than identifying them
in Inkscape.
Identifying paths inside Emacs
my-svg-identify-paths: Prompt for IDs for each path in FILENAME.
(defun my-svg-identify-paths (filename)
"Prompt for IDs for each path in FILENAME."
(interactive (list (read-file-name "SVG: " nil nil
(lambda (f) (string-match "\\.svg$" f)))))
(let* ((dom (car (xml-parse-file filename)))
(paths (dom-by-tag dom 'path))
(vertico-count 3)
(ids (seq-keep (lambda (path)
(unless (string-match "path[0-9]+" (or (dom-attr path 'id) "path0"))
(dom-attr path 'id)))
paths))
(edges (window-inside-pixel-edges (get-buffer-window)))
id)
(my-svg-display "*image*" dom nil t)
(dolist (path paths)
(when (string-match "path[0-9]+" (or (dom-attr path 'id) "path0"))
;; display the image with an outline
(unwind-protect
(progn
(my-svg-display "*image*" dom (dom-attr path 'id) t)
(setq id (completing-read
(format "ID (%s): " (dom-attr path 'id))
ids))
;; already exists, merge with existing element
(if-let ((old (dom-by-id dom id)))
(progn
(dom-set-attribute
old
'd
(concat (dom-attr (dom-by-id dom id) 'd)
" ";; change relative to absolute
(replace-regexp-in-string "^m" "M"
(dom-attr path 'd))))
(dom-remove-node dom path)
(setq id nil))
(dom-set-attribute path 'id id)
(add-to-list 'ids id))))
;; save the image just in case we get interrupted halfway through
(with-temp-file filename
(svg-print dom))))))
Then I can animate SVGs by specifying the IDs. I can reorder the paths
in the SVG itself so that I can animate it group by group, like the
way that the Adobe Fresco SVGs were animated element by element.
The way it works is that the my-svg-reorder-paths function removes
and re-adds elements following the list of IDs specified, so
everything's ready to go for step-by-step animation.
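I haven't reproduced the full function here, but the core of it is something like this sketch (using the dom.el helpers; the real my-svg-reorder-paths may differ):
;; Sketch: move each path to the end of its parent, in the order given by IDS,
;; so the document order matches the intended animation order.
(defun my-svg-reorder-paths-sketch (dom ids)
  "Reorder the paths in DOM to follow the order of IDS."
  (dolist (id ids)
    (let ((node (car (dom-by-id dom (concat "^" id "$")))))
      (when node
        (let ((parent (dom-parent dom node)))
          (dom-remove-node dom node)
          (dom-append-child parent node)))))
  dom)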
Animation style 2: Building up a map with temporary highlights
I can also use CSS rules to transition between opacity values for more
complex animations. For my EmacsConf 2023 presentation, I wanted to
make a self-paced, narrated presentation so that people could follow
hyperlinks, read the source code, and explore. I wanted to include a
map so that I could try to make sense of everything. For this map, I
wanted to highlight the previous sections that were connected to the
topic for the current section.
I used a custom Org link to include the full contents of the SVG
instead of just including it with an img tag.
#+ATTR_HTML: :class r-stretch
my-include:~/proj/emacsconf-2023-emacsconf/map.svg?wrap=export html
my-include-export: Export PATH to FORMAT using the specified wrap parameter.
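The export function isn't shown here, but the shape of it is a custom link type whose HTML export inserts the linked file's contents inline. A simplified sketch (ignoring the wrap parameter, which the real my-include-export handles):
;; Sketch: a "my-include" link type that, when exporting to an HTML-derived
;; backend, replaces the link with the contents of the linked file so the
;; SVG ends up inline instead of behind an img tag.
(org-link-set-parameters
 "my-include"
 :export (lambda (path _desc backend &optional _info)
           (when (org-export-derived-backend-p backend 'html)
             (with-temp-buffer
               ;; strip the ?wrap=... query string in this simplified version
               (insert-file-contents (car (split-string path "\\?")))
               (buffer-string)))))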
I wanted to be able to specify the entire sequence using a table in
the Org Mode source for my presentation. Each row had the slide ID, a
list of highlights in the form prev1,prev2;current, and a
comma-separated list of elements to add to the full-opacity view.
Reveal.js adds a "current" class to the slide, so I can use that as a
trigger for the transition. I have a bit of Emacs Lisp code that
generates some very messy CSS, in which I specify the ID of the slide,
followed by all of the elements that need their opacity set to 1, along
with the highlights that will be shown in an animated way.
my-reveal-svg-progression-css: Make the CSS.
(defun my-reveal-svg-progression-css (map-progression &optional highlight-duration)
"Make the CSS.
MAP-PROGRESSION should be a list of lists with the following format:
((\"slide-id\" \"prev1,prev2;cur1\" \"id-to-add1,id-to-add2\") ...)."
(setq highlight-duration (or highlight-duration 2))
(let (full)
(format
"<style>%s</style>"
(mapconcat
(lambda (slide)
(setq full (append (split-string (elt slide 2) ",") full))
(format "#slide-%s.present path { opacity: 0.2 }%s { opacity: 1 !important }%s"
(car slide)
(mapconcat (lambda (id) (format "#slide-%s.present #%s" (car slide) id))
full
", ")
(my-reveal-svg-highlight-different-colors slide)))
map-progression
"\n"))))
Since it's automatically generated, I don't have to worry about it
once I've gotten it to work. It's all hidden in a
results drawer. So this CSS highlights specific parts of the SVG with
a transition, and the highlight changes over the course of a second or
two. It highlights the previous names and then the current one. The
topics I'd already discussed would be in black, and the topics that I
had yet to discuss would be in very light gray. This could give people
a sense of the progress through the presentation.
As a result, as I go through my presentation, the image appears to
build up incrementally, which is the effect that I was going for.
I can test this by exporting only my map slides:
Graphviz, mermaid-js, and other diagramming tools can make SVGs. I
should be able to adapt my code to animate those diagrams by handling
other elements in addition to path. Then I'll be able to make
diagrams even more easily.
Since SVGs can contain CSS, I could make an SVG equivalent of the
CSS rules I used for the presentation, maybe calling a function with
a Lisp expression that specifies the operations (ex:
("frame-001.svg" "h-foo" opacity 1)). Then I could write frames to
SVGs.
FFmpeg has a crossfade filter. With a little bit of figuring out, I
should be able to make the same kind of animation in a webm form
that I can include in my regular videos instead of using Reveal.js
and CSS transitions.
I've also been thinking about automating the recording of my
Reveal.js presentations. For my EmacsConf talk, I opened my
presentation, started the recording with the system audio and the
screen, and then let it autoplay the presentation. I checked on it
periodically to keep the screensaver/energy-saving settings from
kicking in and so that I could stop the recording when it
finished. If I want to make this take less work, one option is to
use ffmpeg's "-t" argument to specify the expected duration of the
presentation so that I don't have to manually stop it. I'm also
thinking about using Puppeteer to open the presentation, check when
it's fully loaded, and start the process to record it - maybe even
polling to see whether it's finished. I haven't gotten around to it
yet. Anyhow, those are some ideas to explore next time.
As for animation, I'm still curious about the possibility of
finding a way to access the raw stroke information if it's even
available from my Supernote A5X (difficult because it's a
proprietary data format) or finding an app for the iPad that exports
single-line SVGs that use stroke information instead of fill. That
would only be if I wanted to do those even fancier animations that
look like the whole thing is being drawn for you. I was trying to
figure out if I could green screen the Adobe Fresco timelapse videos
so that even if I have a pre-sketch to figure out spacing and remind
me what to draw, I can just export the finished elements. But
there's too much anti-aliasing and I haven't figured out how to do
it cleanly yet. Maybe some other day.
I use Google Cloud Vision's text detection engine to convert my
handwriting to text. It can give me bounding polygons for words or
paragraphs. I might be able to figure out which curves are entirely
within a word's bounding polygon and combine those automatically.
It would be pretty cool if I could combine the words recognized by
Google Cloud Vision with the word-level timestamps from speech
recognition so that I could get word-synced sketchnote animations
with maybe a little manual intervention.
Anyway, those are some workflows for animating sketches with Inkscape
and Emacs. Yay Emacs!
Overall notes in Emacs with outline, org-timer timestamped notes; capture to this file
Elisp to start/stop the stream → find old code
Use the Yeti? Better sound
tee to a local recording
grab screenshot from SuperNote mirror?
Live streaming info density:
High: Emacs News review, package/workflow demo
Narrating a blog post to make it a video
Categorizing Emacs News, exploring packages
Low: Figuring things out
YouTube can do closed captions for livestreams, although accuracy is
low. Videos take a while to be ready to download.
Experimenting with working out loud
I wanted to write a report on EmacsConf 2023 so that we could share it
with speakers, volunteers, participants, donors, related organizations
like the Free Software Foundation, and other communities. I
experimented with livestreaming via YouTube while I worked on the
conference highlights.
It's a little over an hour long and probably very boring, but it was
nice of people to drop by and say hello.
The main parts are:
0:00: reading through other conference reports for inspiration
49:00: fiddling with the formatting and the export
It mostly worked out, aside from a brief moment of "uhhh, I'm
looking at our private conf.org file on stream". Fortunately, the
e-mail addresses that were shown were the public ones.
Technical details
Setup:
I set up environment variables and screen resolution:
I switch to a larger size and a light theme. I also turn consult previews off to minimize the risk of leaking data through buffer previews.
my-emacsconf-prepare-for-screenshots: Set the resolution, change to a light theme, and make the text bigger.
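The function isn't included here, but it does roughly this kind of thing (a sketch; the resolution, text size, and theme below are just placeholders):
;; Sketch of the idea behind `my-emacsconf-prepare-for-screenshots':
;; a predictable resolution, bigger text, a light theme, and no Consult
;; previews so stray buffers don't leak on stream.
(defun my-emacsconf-prepare-for-screenshots-sketch ()
  (interactive)
  (shell-command "xrandr -s 1280x720")          ; placeholder resolution
  (set-face-attribute 'default nil :height 150) ; bigger text
  (load-theme 'modus-operandi t)                ; light theme
  (setq consult-preview-key nil))               ; turn off Consult previews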
I can think of a few workflow tweaks that might be fun:
a stream notes buffer on the right side of the screen for context
information, timestamped notes to make editing/review easier (maybe
using org-timer), etc. I experimented with some streaming-related
code in my config, so I can dust that off and see what that's like.
I also want to have an org-capture template for it so that I can add
notes from anywhere.
I think I'll try going through an informal presentation or Emacs News as my next livestream experiment, since that's probably higher information density.
I wanted to get the Q&A sessions up quickly after the conference, so I
uploaded them to YouTube and added them to the EmacsConf 2023
playlist. I used YouTube's video editor to roughly guess where to
trim them based on the waveforms. I needed to actually trim the source
videos, though, so that our copies would be up to date and I could use
those for the Toobnix uploads.
My first task was to figure out which videos needed to be trimmed to
match the YouTube edits. First, I retrieved the video details using
the API and the code that I added to emacsconf-extract.el.
After quickly checking the results, I copied them over to the original videos, updated the video data in my conf.org, and republished the info pages in the wiki.
The time I spent on figuring out how to talk to the YouTube API feels like it's paying off.
I ran into quota limits when uploading videos to YouTube with a command-line tool, so I uploaded videos by selecting up to 15 videos at a time using the web-based interface. Each video was a draft, though, and I was having a hard time updating its visibility through the API. I think it eventually worked, but in the meantime, I used this very hacky hack to look for the "Edit Draft" button and click through the screens to publish them.
emacsconf-extract-youtube-publish-video-drafts-with-spookfox: Look for drafts and publish them.
Another example of a hacky Spookfox workaround was publishing the
unlisted videos. I couldn't figure out how to properly authenticate
with the Toobnix (Peertube) API to change the visibility of videos.
Peertube uses AngularJS components in the front end, so using
.click() on the input elements didn't seem to trigger anything. I
found out that I needed to use .dispatchEvent(new Event('input')) to
tell the visibility dropdown to display the options.
emacsconf-extract-toobnix-publish-video-from-edit-page: Messy hack to set a video to public and store the URL.
(defun emacsconf-extract-toobnix-publish-video-from-edit-page ()
"Messy hack to set a video to public and store the URL."
(interactive)
(spookfox-js-injection-eval-in-active-tab "document.querySelector('label[for=privacy]').scrollIntoView(); document.querySelector('label[for=privacy]').closest('.form-group').querySelector('input').dispatchEvent(new Event('input'));" t)
(sit-for 1)
(spookfox-js-injection-eval-in-active-tab "document.querySelector('span[title=\"Anyone can see this video\"]').click()" t)
(sit-for 1)
(spookfox-js-injection-eval-in-active-tab "document.querySelector('button.orange-button').click()" t)(sit-for 3)
(emacsconf-extract-store-url)
(shell-command "xdotool key Alt+Tab sleep 1 key Ctrl+w Alt+Tab"))
It's a little nicer using Spookfox to automate browser interactions
than using xdotool, since I can get data out of it too. I could also
have used Puppeteer from either Python or NodeJS, but it's nice
staying with Emacs Lisp. Spookfox has some Javascript limitations
(can't close windows, etc.), so I might still use bits of xdotool or
Puppeteer to work around that. Still, it's nice to now have an idea of
how to talk to AngularJS components.