Categories: emacsconf


Scaling a BigBlueButton server down to a 1 GB node between uses

| geek, tech, emacsconf, emacs

Now that we've survived EmacsConf, I've been looking into running a BigBlueButton server so that various Emacs meetups can use it if they like instead of relying on Jitsi or other free video-conferencing services. (I spent some time looking into Galene, but I'm not quite sure that's ready for our uses yet, like this issue that LibrePlanet ran into with recording.)

BigBlueButton requires a server with at least 4 CPU cores and 8 GB of RAM to even start up, and it doesn't like to share with other services. This costs about USD 48+tax/month on Linode or USD 576+tax/year, which is not an efficient use of funds yet. I could delete it after each instance, but I've been having a hard time properly restoring it from backup after deploying to a new IP address. bbb-conf --setip doesn't seem to catch everything, so I was still getting curl errors related to verifying the certificate.

A reasonable in-between is to run it on Linode's lowest plan (1 core, 1 GB RAM; USD 60+tax for the year) in between meetups, and then spin things up for maybe 6-12 hours around each meetup. If I go with the 4-core 8 GB setup, that would be an extra USD 0.43 to 0.86 per meetup, which is eminently doable. I could even go with the recommended configuration of 8 cores and 16 GB memory on a dedicated CPU plan (USD 0.216/hour, so USD 1.30 to 2.59 per meetup). This was the approach that we used while preparing for EmacsConf. Since I didn't have a lot of programming time, I scaled the node up to 4 cores / 8 GB RAM whenever I had time to work on it, and I scaled it down to 1 GB at the end of each of my working sessions. I scaled it up to a dedicated 8 cores / 16 GB RAM for EmacsConf, during which we used roughly half of the CPU capacity to host a maximum of 107 simultaneous users across 7 meetings.

I reviewed my BigBlueButton setup notes in the EmacsConf organizers notebook and the 2024 notebook and set up a Linode instance under my account, so that I can handle the billing and also so that Amin Bandali doesn't get spammed by all the notifications (up, down, up, down…). And then I'll be able to just scale it up when EmacsConf comes around again, which is nice.

Anyway, BBB refuses to install on a machine with fewer than 4 cores or 8 GB RAM, but once you set it up, it'll valiantly thrash around even on an underpowered server, which makes working with the server over ssh a lot slower. Besides, that's not friendly to other people using the same server. I wanted to configure the services so that they would only run on a server of the correct size. It turns out that systemd lets you specify either ConditionMemory or ConditionCPUs in the unit configuration file, and that you can use files ending in .conf in a directory named like yourservicename.service.d to override part of the configuration. Clear examples were hard to find, so I wanted to share these notes.

Since ConditionMemory is specified in bytes (ex: 8000000000), I found ConditionCPUs to be easier to read.

I used this command to check if I'd gotten the syntax right:

systemd-analyze condition 'ConditionCPUs=>=4'

and then I wrote this script to set up the overrides:

CPUS_REQUIRED=4
for ID in coturn.service redis-server.service bigbluebutton.target multi-user.target bbb-graphql-server.service bbb-rap-resque-worker.service bbb-webrtc-sfu.service bbb-fsesl-akka.service bbb-webrtc-recorder.service bbb-pads.service bbb-export-annotations.service bbb-web.service freeswitch.service etherpad.service bbb-rap-starter.service bbb-rap-caption-inbox.service bbb-apps-akka.service bbb-graphql-actions.service postgresql@14-main.service; do
    mkdir -p /etc/systemd/system/$ID.d
    printf "[Unit]\nConditionCPUs=>=$CPUS_REQUIRED\n" > /etc/systemd/system/$ID.d/require-cpu.conf
done
systemctl daemon-reload
systemd-analyze verify bigbluebutton.target
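
To double-check that systemd actually picked up the drop-ins, something like this should work (using bbb-web.service as an example unit):

systemctl cat bbb-web.service                        # shows the unit file plus any .d/*.conf drop-ins
systemctl show -p ConditionResult bbb-web.service    # ConditionResult=no means the last start attempt was skipped

When a condition fails, the unit is quietly skipped instead of reporting an error, which is exactly what I want on the downscaled node.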

It seems to work. When I use linode-cli to resize to the testing size, BigBlueButton works:

#!/bin/bash
source /home/sacha/.profile
PATH=/home/sacha/.local/bin/:$PATH
linode-cli linodes resize $BBB_ID --type g6-standard-4 --allow_auto_disk_resize false
sleep 4m
linode-cli linodes boot $BBB_ID
sleep 3m
ssh root@bbb.emacsverse.org "bbb-conf --restart; cd ~/greenlight-v3; docker compose restart"
notify-send "Should be ready"

And when I resize it down to a 1 GB nanode, BigBlueButton doesn't get started and the VPS is nice and responsive when I SSH in.

#!/bin/bash
source /home/sacha/.profile
PATH=/home/sacha/.local/bin/:$PATH
echo Powering off
linode-cli linodes shutdown $BBB_ID
sleep 60
echo "Resizing BBB node to nanode, dormant"
linode-cli linodes resize $BBB_ID --type g6-nanode-1 --allow_auto_disk_resize false

So now I'm going to coordinate with Ihor Radchenko about when he might want to try this out for OrgMeetup, and I can talk to other meetup organizers to figure out times. People will probably want to test things before announcing it to their meetup groups, so we just need to schedule that. It's BigBlueButton 3.0. I'm not 100% confident in the setup. We had some technical issues with some EmacsConf speakers even though we did a tech check with them before we went live with their session. Not sure what happened there.

I'm still a little nervous about accidentally forgetting to downscale the server and running up a bill, but I've scheduled downscaling with the at command before, so that's helpful. If it turns out to be something we want to do regularly, I might even be able to use a cronjob from my other server so that it happens even if my laptop is off, and maybe set up a backup nginx server with a friendly message (and maybe a list of upcoming meetups) in case people connect before it's been scaled up. Anyway, I think that's a totally good use of part of the Google Open Source Peer Bonus I received last year.
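
For reference, a minimal sketch of that kind of scheduling, assuming the downscaling script above is saved as /home/sacha/bin/bbb-scale-down.sh (a path I'm making up for this example):

# run the downscaling script 8 hours from now
echo "/home/sacha/bin/bbb-scale-down.sh" | at now + 8 hours
atq    # list pending jobs; atrm JOBID cancels one

# or, from another server's crontab, force a downscale every night at 03:00
# 0 3 * * * /home/sacha/bin/bbb-scale-down.sh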

As an aside, you can change a room's friendly_id to something actually friendly. In the Rails console (docker exec -it greenlight-v3 bundle exec rails console), you could do something like this:

Room.find_by(friendly_id: "CURRENT_ROOM_ID").update_attribute(:friendly_id, "NEW_CUSTOM_ID")

Anyway, let me know if you organize an Emacs meetup and want to give this BigBlueButton instance a try!


EmacsConf 2024 notes

Posted: - Modified: | emacs, emacsconf

The videos have been uploaded, thank-you notes have been sent, and the kiddo has decided to play a little Minecraft on her own, so now I get to write some quick notes on EmacsConf 2024.

Stats

Talks: 31 (10.7 hours)
Q&A web conferences: 21 (7.8 hours)

Peak stream viewers:
  • Saturday:
    • gen: 177 peak + 14 peak lowres
    • dev: 226 peak + 79 peak lowres
  • Sunday:
    • gen: 89 peak + 10 peak lowres

Server configuration:

meet   16 GB  8-core dedicated   peak 409% CPU (100% is 1 CPU), average 69.4%
front  32 GB  8-core shared      peak 70.66% CPU (100% is 1 CPU)
live   64 GB  16-core shared     peak 552% CPU (100% is 1 CPU), average 144%
res    46 GB  12-core            peak 81.54% total CPU (100% is 12 CPUs; each OBS ~250%), mem 7 GB used
media   3 GB  1-core

YouTube livestream stats:

Shift Peak Avg
Gen Sat AM 46 28
Gen Sat PM 24 16
Dev Sat AM 15 7
Dev Sat PM 20 12
Gen Sun AM 28 17
Gen Sun PM 26 18

Timeline

Call for proposals [2024-06-30 Sun]
CFP deadline [2024-09-20 Fri]
Speaker notifications [2024-09-27 Fri]
Publish schedule [2024-10-25 Fri]
Video target date [2024-11-08 Fri]
EmacsConf [2024-12-07 Sat]-[2024-12-07 Sat]

We did early acceptances again this year. That was nice. I wasn't sure about committing longer periods of time early in the scheduling process, so I usually tried to nudge people to plan a 20-minute video with the option of possibly doing more, and I okayed longer talks once we figured out what the schedule looked like.

There were 82 days between the call for proposals and the CFP deadline, another 49 days from that to the video target date, and 29 days between the video target date and EmacsConf. It felt like there was a good amount of time for proposals and videos. Six videos came in before or on the target date. The rest trickled in afterwards, which was fine because we wanted to keep things low-pressure for the speakers. We had enough capacity to process and caption the videos as they came in.

Data

We continued to use an Org file to store the talk information. It would be great to add some validation functions:

  • Check permissions and ownership for files
  • Check case sensitivity for Q&A type detection
  • Check BBB redirect pages to make sure they exist
  • Check transcripts for ` because that messes up formatting; consider escaping for the wiki
  • Check files are public and readable
  • Check captioned by comment vs caption status vs captioner

Speakers uploaded their files via PsiTransfer again. I didn't get around to setting up the FTP server. I should probably rename ftp-upload.emacsconf.org to upload.emacsconf.org so that people don't get confused.

Communication

As usual, we announced the EmacsConf call for proposals on emacs-tangents, Emacs News, emacsconf-discuss, emacsconf-org, and https://reddit.com/r/emacs. System Crafters, Irreal, and Emacs APAC mentioned it, and people also posted about EmacsConf on Mastodon, X, Bluesky, and Facebook. @len@toot.si suggested submitting EmacsConf to https://foss.events, so I did. There were some other EmacsConf-related discussions in r/emacs. 200ok and Ardeo organized an in-person meetup in Switzerland, and emacs.si got together in Ljubljana.

For communicating with speakers and volunteers, I used lots of mail merge (emacsconf-mail.el). Most of the templates only needed a little tweaking from last year's code. I added a function to help me double-check delivery, since the batches that I tried to send via async sometimes ran into errors.

Next time, I think it could be interesting to add more blog posts and Mastodon toots.

Also, maybe it would be good to get in touch with podcasts to give them a heads-up on EmacsConf before it happens and also let them know when videos are available.

We continued to use Mumble for backstage coordination. It worked out well.

Schedule

The schedule worked out to two days of talks, with two tracks on the first day, and about 15-20 minutes between each talk. We were able to adapt to late submissions, last-minute cancellations, and last-minute switches from Q&A to live.

We added an open mic session on Sunday to fill in the time from a last-minute cancellation. That worked out nicely and it might be a good idea to schedule in that time next year. It was also good to move some of the usual closing remarks earlier. We were able to wrap up in a timely manner, which was great for some hosts and participants because they didn't have to stay up so late.

Sunday was single-track, so it was nice and relaxed. I was a little worried that people might get bored if the current talk wasn't relevant to their interests, but everyone managed just fine. I probably should have remembered that Emacs people are good at turning extra time into more configuration tweaks.

Most of the scheduling was determined by people's time constraints, so I didn't worry too much about making the talks flow logically. I accidentally forgot to note down one speaker's time constraints, but he caught it when we e-mailed the draft schedule and I was able to move things around for a better time for him.

There was a tiny bit of technical confusion because the automated schedule publishing on res had case-sensitive matching (case-fold-search was set to nil), so if a talk's Q&A was set to "Live", it didn't get announced as a live talk because the code was looking for lowercase "live". Whoops. I've added that configuration setting to my emacsconf-stream-config.el, so the ansible scripts should get it next time.

I asked Leo and Corwin if they wanted to manually control the talks this year. They opted to leave it automatically managed by crontab so that they wouldn't have to worry as much about timekeeping. It worked reliably. Hooray for automation! The only scheduling hiccup was because I turned off the crontab so that we could do Saturday closing remarks when we wanted to and I forgot to reenable autopilot the next day. We noticed when the opening remarks didn't start right on the dot, and I got everything back on track.

Like last year, I scheduled the dev track to start a little later than the gen track. That made for a less frantic morning. Also, this year we scheduled Sunday morning to start with more IRC Q&A instead of live Q&A. We didn't notice any bandwidth issues on Sunday morning this time.

It would be nice to have Javascript countdowns in some kind of web interface to make it easier for hosts, especially if we can update it with the actual time that the video currently playing in MPV will end.

I can also update the emacsconf-stream.el code to make it easier to automatically count down to the next talk or to a specific talk.

We have Javascript showing local time on the individual talk pages, but it would be nice to localize the times on all the schedule/watch pages too.

Most of my stuff (scheduling, publishing, etc.) is handled by automation with just a little bit of manual nudging every so often, so it might be possible to organize an event that's more friendly to Europe/APAC timezones.

Recorded videos

As usual, we strongly encouraged speakers to record videos to lower everyone's stress levels and allow for captioning by volunteers, so that's what most speakers did. We were able to handle a few last-minute submissions as well as a live talk. Getting videos also meant we could publish them as each talk went live, including automatically putting the videos and transcripts on the wiki.

We didn't have obvious video encoding cut-offs this year; re-encoding in a screen session was a reliable way to avoid interruptions. Also, no one complained about tiny text or low resolution, so the talk preparation instructions seem to be working out.

Automatically normalizing the audio with ffmpeg-normalize didn't work out, so Leo Vivier did a last-minute scramble to normalize the audio the day before the conference. Maybe that's something that volunteers can help with during the lead-up to the conference, or maybe I can finally figure out how to fit that into my process. I don't have much time or patience to listen to things, but it would be nice to get that sorted out early.

Next year we can try remixing the audio to mono. One of the talks had some audio moving around, which was a little distracting. Also, some people listen to the talks in one ear, so it would be good to drop things down to mono for them.
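
Downmixing to mono is a one-liner with ffmpeg. Here's a sketch with placeholder filenames, copying the video stream and re-encoding only the audio:

ffmpeg -i talk--main.webm -c:v copy -c:a libopus -ac 1 talk--main-mono.webm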

We think 60fps videos stressed the res server a bit, resulting in dropped frames. Next year, we can downsample those to 30fps and add a note to the talk preparation instructions. The hosts also suggested looking into setting up streaming from each host's computer instead of using our shared VNC sessions.
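
A quick way to spot-check a submission's frame rate and downsample it if needed (filenames are placeholders; our reencoding script already applies an fps filter, so it may just be a matter of lowering that limit to 30):

ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate -of default=nw=1 talk--original.mov
ffmpeg -i talk--original.mov -vf fps=30 -c:v libvpx-vp9 -b:v 0 -crf 32 -c:a libopus talk--reencoded.webm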

There was some colour smearing and weirdness when we played some videos with mpv on res. Upgrading MPV to v0.38 fixed it.

Some people requested dark mode (light text on dark background), so maybe we can experiment with recommending that next year.

I did a last-minute change to the shell scripts to load resources from the cache directory instead of the assets/stream directory, but I didn't get all of the file references, so sometimes the test videos played or the introductions didn't have captions. On the plus side, I learned how to use j in MPV to reload a subtitle file.

Sometimes we needed to play the videos manually. If we get the hang of starting MPV in a screen or tmux session, it might be easier for hosts to check how much time is left, or to restart a video at a specific point if needed. Leo said he'll work on figuring out the configuration and the Lua scripts.
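
One possible sketch, with made-up socket and file names: start mpv with a JSON IPC socket inside the detached session, then query or seek it from outside.

screen -dmS playback mpv --input-ipc-server=/tmp/mpv-playback.sock talk--main.webm
echo '{ "command": ["get_property", "time-remaining"] }' | socat - /tmp/mpv-playback.sock
echo '{ "command": ["seek", 300, "absolute"] }' | socat - /tmp/mpv-playback.sock   # jump to the 5-minute mark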

I uploaded all the videos to YouTube and scheduled them. That was nice because then I didn't have to keep updating things during the conference. It turns out that Toobnix also has a way to schedule uploads: I just need to upload a video as unlisted first, and then choose Scheduled from the visibility settings. I wonder if peertube-cli can be extended to schedule things. Anyway, since I didn't know about that during the conference, I just used the emacsconf-publish-upload-talk function to upload videos.

It was fun playing Interview with an Emacs Enthusiast in 2023 [Colorized] - YouTube at lunch. I put together some captions for it after the conference, so maybe we can play it with captions next year.

Recorded introductions

We record introductions so that hosts don't have to worry about how to say things on air. I should probably send the intro check e-mail earlier, maybe on the original video target date, even if speakers haven't submitted their videos yet. This will reduce the last-minute scramble to correct intros.

When I switched the shell scripts to use the cache directory, I forgot to get it to do the intros from that directory as well, so some of the uncorrected intros were played.

I forgot to copy the intro VTTs to the cache directory. This should be handled by the subed-record process for creating intros, so it'll be all sorted out next year.

Captioning

We used WhisperX for speech-to-text this year. It did a great job at preparing the first drafts of captions that our wonderful army of volunteer captioners could then edit. WhisperX's built-in voice activity detection cut down a lot on the hallucinations that OpenAI Whisper had during periods of silence in last year's captions, and there was only one instance of WhisperX missing a chunk of text from a speaker that I needed to manually fill in. I upgraded to a Lenovo P52 with 64GB RAM, so I was able to handle last-minute caption processing on my computer. It might be handy to have a smaller model ready for those last-minute requests, or have something ready to go for the commercial APIs.

The timestamps were a little bit off. It was really helpful that speakers and volunteers used the backstage area to check video quality. I used Aeneas to re-align the text, but Aeneas was also confused by silences. I've added some code to subed so that I can realign regions of subtitles using Aeneas or WhisperX timestamps, and I also wrote some code to skim timestamps for easy verification.
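
For reference, the underlying aeneas call for forced alignment looks roughly like this (filenames are placeholders, and the text input here is a plain file with one caption per line):

python3 -m aeneas.tools.execute_task \
  talk--normalized.opus talk--script.txt \
  "task_language=eng|is_text_type=plain|os_task_file_format=vtt" \
  talk--aligned.vtt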

Anush V experimented with using machine learning for subtitle segmentation, so that might be something to explore going forward.

BigBlueButton web conference

This year we set up a new BigBlueButton web conferencing server. The server with our previous BigBlueButton instance had been donated by a defunct nonprofit, so it finally got removed on October 27. After investigating whether Jitsi or Galene might be a good fit for EmacsConf, we decided to continue with BigBlueButton. There were some concerns about the non-free MongoDB used by BBB versions >= 2.3 and < 3, so I installed BBB 3.0. It was hard to get working in Docker on the existing res server, so we decided it was worth spinning up an additional Linode virtual private server. It turned out that BBB refused to run on anything smaller than 8 GB / 4 cores, so I scaled up to that during testing, scaled back down to 1 GB / 1 core in between, and scaled up to 16 GB / 8 cores dedicated during the conference.

I'm still not 100% sure I set everything up correctly or that everything was stable. Maybe next year BBB 3.0 will be better-tested, someone more sysad-y can double-check the setup, or we can try Galene.

One of the benefits of upgrading to BBB 3.0 was that we could use the smart layout feature to drag the webcam thumbnails to the side of the shared screen. This made shared screens much easier to read. I haven't automated this yet, but it was easy enough for us to do via the shared VNC session.

On the plus side, it was pretty straightforward to use the Rails console to create all the rooms. We used moderator access codes to give all the speakers moderator access. Mysteriously, superadmins didn't automatically have moderator access to all the rooms even if they were logged in, so we needed to add host access by hand so that they could start the recordings.

Since we self-hosted and were budgeting more for the full-scale node, I didn't feel comfortable scaling it up to production size until a few days before the conference. I sent the access codes with the check-in e-mails to give speakers time to try things out.

Compared to last year's stats:

  2023 2024
Max number of simultaneous users 62 107
Max number of simultaneous meetings 6 7
Max number of people in one meeting 27 25
Total unique people 84 102
Total unique talking 36 40

(Max number of simultaneous users wasn't deduplicated, since we need that number for server load planning)

Tech checks and hosting

FlowyCoder did a great job getting everyone checked in, especially once I figured out the right checklist to use. We used people's emergency contact information a couple of times.

Corwin and Leo were able to jump in and out of the different streams for hosting. Sometimes they were both in the same Q&A session, which made it more conversational especially when they were covering for technical issues. We had a couple of crashes even though the tech checks went fine, so that was weird. Maybe something's up with BBB 3.0 or how I set it up.

Next time, we can consider asking speakers what kind of facilitation style they like. A chatty host? Someone who focuses on reading the questions and then gets out of the way? Speakers reading their own questions and the host focusing on timekeeping/troubleshooting?

Streaming

I experimented with setting up the live0 streaming node as a 64GB 32core dedicated CPU server, but that was overkill, so we went back down to 64GB 16core and it still didn't approach the CPU limits.

The 480p stream seemed stable, hooray! I had set it up last year to automatically kick in as soon as I started streaming to Icecast, and that worked out. I think I changed a loop to be while true instead of making it try 5 times, so that probably helped.
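
The loop itself is nothing fancy; it's roughly this shape, with placeholder stream URLs and credentials:

while true; do
    ffmpeg -i https://live0.emacsconf.org:8000/gen.webm \
        -vf scale=-2:480 -c:v libvpx -b:v 500k -c:a libvorbis \
        -f webm -content_type video/webm \
        icecast://source:PASSWORD@live0.emacsconf.org:8000/gen-480p.webm
    sleep 5   # if the source stream drops, wait a moment and reconnect
done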

I couldn't get Toobnix livestreaming to work this year. On the plus side, that meant that I could use OBS to directly stream to YouTube instead of trying to set up multicasting. I set up one YouTube livestreaming event for each shift and added the RTMP keys to our shift checklists so that I could update the settings before starting the stream. That was pretty straightforward.

This year, I wrote a little randomizer function to display things on the countdown screen. At first I just dumped in https://www.gnu.org/fun/jokes/gnuemacs.acro.exp.en.html, but some of those were not quite what I was looking for. (… Probably should've read them all first!) Then I added random packages from GNU ELPA and NonGNU ELPA, and that was more fun. I might add MELPA next time too. The code for dumping random packages is probably worth putting into a different blog post, since it's the sort of thing people might like to add to their dashboards or screensavers.

I ran into some C-s annoyances in screen even with flow control turned off, so it might be a good idea to switch to tmux instead of screen.
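
For what it's worth, the tmux equivalents of the detached screen sessions used elsewhere in these notes look like this (session and file names made up):

tmux new-session -d -s gen-track               # like screen -dmS gen-track
tmux send-keys -t gen-track 'mpv talk--main.webm' Enter
tmux attach -t gen-track                       # C-b d detaches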

Next year, I think it might be a good idea to make intro images for each talk. Then we can use that as the opening slide in BigBlueButton (unless they're already sharing something else) as well as a video thumbnail.

Publishing

The automated process for publishing talks and transcripts to the wiki occasionally needed nudging when someone else had committed a change to the wiki. I thought I had a git pull in there somewhere, but maybe I need to look at it some more.

I forgot to switch the conference publishing phase and enable the inclusion of Etherpads, but fortunately Ihor noticed. I did some last-minute hacking to add them in, and then I remembered the variables I needed to set. Just need to add it to our process documentation.

Etherpad

We used Etherpad 1.9.7 to collect Q&A again this year. I didn't upgrade to Etherpad v2.x because I couldn't figure out how to get it running within the time I set aside for it, but maybe that's something for next year.

I wrote some Elisp to copy the current ERC line (unwrapped) for easier pasting into Etherpad. That worked out really well, and it let me keep up with copying questions from IRC to the pad in between other bits of running around. (emacsconf-erc-copy in emacsconf-erc.el)

Next year, I'll add pronouns and pronunciations to the Etherpad template so that hosts can refer to them easily.

If I rejig the template to move the next/previous links so that notes can be added to the end, I might be able to use the Etherpad API to add text from IRC.
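
Etherpad's HTTP API has an appendText call that should make that part straightforward. A sketch, assuming the API key from the instance's APIKEY.txt and a made-up pad ID (the API version prefix may need adjusting):

curl -G "https://pad.emacsconf.org/api/1.2.13/appendText" \
  --data-urlencode "apikey=$APIKEY" \
  --data-urlencode "padID=emacsconf-2024-example" \
  --data-urlencode "text=Q: question copied from IRC"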

IRC

We remembered to give the libera.chat people a heads-up before the conference, so we didn't run into usage limits for https://chat.emacsconf.org. Yay!

Aside from writing emacsconf-erc-copy (emacsconf-erc.el) to make it easier to add text from IRC to the Etherpad, I didn't tinker much with the IRC setup for this year. It continued to be a solid platform for discussion.

I think a keyboard shortcut for inserting a talk's URL could be handy and should be pretty easy to add to my Embark keymap.

Extracting the Q&A

We sometimes forgot to start the recording for the Q&A until a few minutes into the talk. I considered extracting the Q&A recordings from the Icecast dump or YouTube stream recordings in order to get those first few minutes, but decided it wasn't worth it since people could generally figure out the answers.

Getting the recordings off BigBlueButton was easier this year because I configured it with video as an additional processing format, so we could grab one file per session instead of combining the different streams with ffmpeg.

I did a quick pass of the Q&A transcripts and chat logs to see if people mentioned anything that they might want to take out. I also copied IRC messages and the pads, and I copied over the answers from the transcripts using the new emacsconf-extract-subed-copy-section-text function.

Audio mixing was uneven. It might be nice to figure out separate audio recordings just in case (#12302, bigbluebutton-dev). We ended up not tinkering with the audio for the Q&A, so next time, I can probably upload them without waiting to see if anyone wants to fiddle with the audio.

Trimming the Q&A was pretty straightforward. I added a subed-crop-media-file function to subed so that I can trim files easily.

Thanks to my completion functions for adding section headings based on comments, it was easy to index the Q&A this year. I didn't even put it up backstage for people to work on.

Nudged by @ctietze, I'm experimenting with adding sticky videos if Javascript is enabled so that it's easier to navigate using the transcript. There's still a bit of tinkering to do, but it's a start.

I added some conference-related variables to a .dir-locals.el file so that I can more easily update things even for past conferences. This is mostly related to publishing the captions on the wiki pages, which I do with Emacs Lisp.

Budget and donations

Costs (USD, not including 13% tax):

52.54 Extra costs for hosting in December
3.11 Extra costs for BBB testing in November
120 Hosting costs year-round (two Linode nanodes)

Total of USD 175.65 + tax, or USD 198.48 for 2024.

The Free Software Foundation also provided media.emacsconf.org for serving media files. Ry P provided res.emacsconf.org for OBS streaming over VNC sessions.

Amin Bandali was away during the conference weekend and no one else knew how to get the list of donors and current donation stats from the FSF Working Together program on short notice. Next time, we can get that sorted out beforehand so that we can thank donors properly.

Documentation and time

I think my biggest challenge was having less time to prepare for EmacsConf this year because the kiddo wanted more of my attention. In many ways, the automation that I'd been gradually building up paid off. We were able to pull together EmacsConf even though I had limited focus time.

Here's my Emacs-related time data (including Emacs News and tweaking my config):

Year Jan Feb March April May June July Aug Sept Oct Nov Dec Total
2023 23.4 15.9 16.2 11.2 4.4 11.5 6.5 13.3 36.6 86.6 93.2 113.0 432
2024 71.2 12.0 5.6 6.6 3.3 9.6 11.0 4.7 36.0 40.3 52.3 67.7 320

(and here's a longer-term analysis going back to 2012.)

I spent 92.6 hours total in October and November 2024 doing Emacs-related things, compared to 179.8 hours the previous year – so, around half the time. Part of the 2023 total was related to preparing my presentation for EmacsConf, so I was much more familiar with my scripts then. Apparently, there was still a lot more that I needed to document. As I scrambled to get EmacsConf sorted out, I captured quick tasks/notes for the things I need to add to our organizers notebook. Now I get to go through all those notes in my inbox. Maybe next year will be even smoother.

On the plus side, all the process-related improvements meant that the other volunteers could jump in pretty much whenever they wanted, including during the conference itself. I didn't want to impose firm commitments on people or bug them too much by e-mail, so we kept things very chill in terms of scheduling and planning. If people were available, we had stuff people could help with. If people were busy, that was fine, we could manage. This was nice, especially when I applied the same sort of chill approach to myself.

I'd like to eventually get to the point of being able to mostly follow my checklists and notes from the start of the conference planning process to the end. I've been moving notes from year-specific organizer notebooks to the main organizers' notebook. I plan to keep that one as the main file for notes and processes, and then to have specific dates and notes in the yearly ones.

Thanks

  • Thank you to all the speakers, volunteers, and participants, and to all those other people in our lives who make it possible through time and support.
  • Thanks to Leo Vivier and Corwin Brust for hosting the sessions, and to FlowyCoder for checking people in.
  • Thanks to our proposal review volunteers James Howell, JC Helary, and others for helping with the early acceptance process.
  • Thanks to our captioning volunteers: Mark Lewin, Rodrigo Morales, Anush, annona, and James Howell, and some speakers who captioned their own talks.
  • Thanks to Leo Vivier for fiddling with the audio to get things nicely synced.
  • Thanks to volunteers who kept the mailing lists free from spam.
  • Thanks to Bhavin Gandhi, Christopher Howard, Joseph Turner, and screwlisp for quality-checking.
  • Thanks to shoshin for the music.
  • Thanks to Amin Bandali for help with infrastructure and communication.
  • Thanks to Ry P for the server that we're using for OBS streaming and for processing videos.
  • Thanks to the Free Software Foundation for Emacs itself, the mailing lists, the media.emacsconf.org server, and handling donations on our behalf through the FSF Working Together program. https://www.fsf.org/working-together/fund
  • Thanks to the many users and contributors and project teams that create all the awesome free software we use, especially: BigBlueButton, Etherpad, Icecast, OBS, TheLounge, libera.chat, ffmpeg, OpenAI Whisper, WhisperX, the aeneas forced alignment tool, PsiTransfer, subed, and many, many other tools and services we used to prepare and host this year's conference.
  • Thanks to everyone!

Overall

Good experience. Lots of fun. I'd love to do it again next year. EmacsConf feels like a nice, cozy get-together where people share the cool things they've been working on and thinking about. People had fun! They said:

  • "emacsconf is absolutely knocking it out of the park when it comes to conference logistics"
  • "I think this conference has defined the terms for a successful online conference."
  • "EmacsConf is one of the big highlights of my year every year. Thank you a ton for running this 😊"

It's one of the highlights of my year too. =) Looking forward to the next one!

In the meantime, y'all can stay connected via Emacs News, meetups (online and in person), Planet Emacslife, and now emacs.tv. Enjoy!

p.s. I'd love to learn from other people's conference blog posts, EmacsConf or otherwise. I'm particularly interested in virtual conferences and how we can tinker with them to make them even better. I'm having a hard time finding posts; please feel free to send me links to ones you've liked or written!


EmacsConf backstage: Makefile targets

Posted: - Modified: | emacsconf

[2024-11-16 Sat]: Removed highlight_words from whisperx call.

We like to use pre-recorded videos at EmacsConf to minimize technical risks. This also means we can caption them beforehand, stream them with open captions, and publish them as soon as the talk goes live.

Here's the process:

  1. Speakers upload their videos in whatever format they like. We use PsiTransfer to accept the uploaded files.
  2. We rename the files to have the talk title and speaker name in the filename, like emacsconf-2024-emacs30--emacs-30-highlights--philip-kaludercic--original.mov.
  3. We use FFmpeg to reencode them to WEBM so that everything is available in a free format, and we replace the --original.* part with --reencoded.webm. We copy this to --main.webm as a starting point.
  4. We extract the audio and save it to --reencoded.opus.
  5. We use ffmpeg-normalize to normalize the audio and save it to --normalized.opus.
  6. We use WhisperX to get a reasonable starting point for captions, which we save to --reencoded.vtt. I remove the underlines and the tsv and srt files.
  7. Someone edits the captions. We save edited captions as --main.vtt.
  8. --normalized.opus and --main.vtt get combined into --main.webm.

I've been slowly learning how to set up Makefile rules to automate more and more of this. Let's go through parts of the roles/prerec/templates/Makefile.

Make the reencoded webm from the original MP4, MOV, MKV, or WEBM

Here's the rule that makes a --reencoded.webm based on the original mp4, mov, mkv, or webm.

VIDEO_EXTS = mp4 mkv webm mov
source_patterns = $(foreach ext,$(VIDEO_EXTS),$(1)--original.$(ext))
emacsconf-%--reencoded.webm: SOURCES = $(call source_patterns, emacsconf-$*)
emacsconf-%--reencoded.webm:
  $(eval SOURCE := $(lastword $(sort $(wildcard $(SOURCES)))))
  @if [ -z "$(SOURCE)" ]; then \
    echo "No source file found for $@"; \
    echo "Tried: $(SOURCES)"; \
    exit 1; \
  fi
  @echo "Using source: $(SOURCE)"
  ./reencode-in-screen.sh "$(SOURCE)"

Reencoding can take a while and it's prone to me accidentally breaking it, so we stick it in a GNU screen so that I don't accidentally quit it. This is reencode-in-screen.sh:

#!/bin/bash
ORIGINAL=$1
BASE="${ORIGINAL%--original.*}"
REENCODED="${BASE}--reencoded.webm"
MAIN="${BASE}--main.webm"    # thumbnail.sh below expects this; --main.webm starts out as a copy of --reencoded.webm
SLUG=$(echo "$ORIGINAL" | perl -ne '/^emacsconf-[0-9]*-(.*?)--/ && print $1')
LOCK=".lock-$SLUG"

if [ ! -f "$REENCODED" ]; then
    if [  -f "$LOCK" ]; then
        echo "$LOCK already exists, waiting for it"
    else
        touch "$LOCK"
        screen -dmS reencode-$SLUG /bin/bash -c "reencode.sh \"$ORIGINAL\" \"$REENCODED\" && thumbnail.sh \"$MAIN\" && rm \"$LOCK\""
        echo "Processing $REENCODED in reencode-$SLUG"
    fi
fi

which calls roles/prerec/templates/reencode.sh. Here's the templatized version from Ansible:

#!/usr/bin/env bash

set -euo pipefail

# Defaults
q={{ reencode_quality }}
cpu={{ reencode_cpu }}
time_limit=""
print_only=false
limit_resolution={{ res_y }}
limit_fps={{ fps }}

while getopts :q:c:t:s OPT; do
    case $OPT in
        q|+q)
            q="$OPTARG"
            ;;
        c|+c)
            cpu="$OPTARG"
            ;;
        t|+t)
            time_limit="-to $OPTARG"
            ;;
        s)
            print_only=true
            ;;
        *)
            echo "usage: `basename $0` [+-q ARG] [+-c ARG} [--] ARGS..."
            exit 2
    esac
done
shift `expr $OPTIND - 1`
OPTIND=1

input="$1"
output="${2:-$(echo $input | sed 's/--original.*/--reencoded.webm/')}"

command="$(cat<<EOF
ffmpeg -y -i "$input" $time_limit \
       -vf "scale='-1':'min($limit_resolution,ih)',
            fps='$limit_fps'" \
       -c:v libvpx-vp9 -b:v 0 -crf $q -an \
       -row-mt 1 -tile-columns 2 -tile-rows 2 -cpu-used $cpu -g 240 \
       -pass 1 -f webm -threads $cpu /dev/null &&
    ffmpeg -y -i "$input" $time_limit \
           -vf "scale='-1':'min($limit_resolution,ih)',
                fps='$limit_fps'" \
               -c:v libvpx-vp9 -b:v 0 -crf $q -c:a libopus \
               -row-mt 1 -tile-columns 2 -tile-rows 2 -cpu-used $cpu \
               -pass 2 -threads $cpu -- "$output"
EOF
)"

if [ $print_only == true ]; then
    echo "$command"
else
    eval "$command"
fi

Process the audio and captions

Processing the audio is relatively straightforward.

emacsconf-%--reencoded.opus: emacsconf-%--reencoded.webm
  ffmpeg -i "$<" -c:a copy "$@"

emacsconf-%--normalized.opus: emacsconf-%--reencoded.opus
  ffmpeg-normalize "$<" -ofmt opus -c:a libopus -o "$@"

emacsconf-%--reencoded.vtt: emacsconf-%--reencoded.opus
  whisperx --model large-v2 --align_model WAV2VEC2_ASR_LARGE_LV60K_960H --compute_type int8 --print_progress True --max_line_width 50 --segment_resolution chunk --max_line_count 1 --language en "$<"

After this, we need to manually process the --reencoded.vtt and then eventually save the edited version as --main.vtt.

Combine the video, audio, and subtitles

The next part of the Makefile creates the --main.webm from the reencoded, normalized, and edited files, or from just the --reencoded.webm if that's all that's available.

emacsconf-%--main.webm: emacsconf-%--reencoded.webm emacsconf-%--normalized.opus emacsconf-%--main.vtt
  ffmpeg -i emacsconf-$*--reencoded.webm -i emacsconf-$*--normalized.opus -i emacsconf-$*--main.vtt \
    -map 0:v -map 1:a -c:v copy -c:a copy \
    -map 2 -c:s webvtt -y \
    $@

emacsconf-%--main.webm: emacsconf-%--reencoded.webm
  cp "$<" "$@"

This works because make uses the first pattern rule whose prerequisites exist or can be made, so when there's no edited --main.vtt yet, it falls back to simply copying the --reencoded.webm.

Making all the files based on the original ones that are available

Finally, we need some rules to make various things. We do this with a wildcard match for all the original files, and then we make a list without the --original.*. After that, we can just use addsuffix to add the different file endings.

PRERECS_ORIGINAL := $(wildcard emacsconf-*--original.*)
PREFIXES := $(shell for f in $(PRERECS_ORIGINAL); do echo "$${f%--original.*}"; done)
PRERECS_REENCODED := $(addsuffix --reencoded.webm, $(PREFIXES))
PRERECS_OPUS := $(addsuffix --reencoded.opus, $(PREFIXES))
PRERECS_NORMAL := $(addsuffix --normalized.opus, $(PREFIXES))
PRERECS_MAIN := $(addsuffix --main.webm, $(PREFIXES))
PRERECS_CAPTIONS := $(addsuffix --reencoded.vtt, $(PREFIXES))

all: reencoded opus normal main
reencoded: $(PRERECS_REENCODED)
opus: $(PRERECS_OPUS)
normal: $(PRERECS_NORMAL)
captions: $(PRERECS_CAPTIONS)
main: $(PRERECS_MAIN)

I sometimes do the captions on my computer, so I've left them out of the all target.

Seems to be doing all right so far. It's nice having the Makefile figure out what's changed and what needs to be updated.


EmacsConf backstage: making lots of intro videos with subed-record

| emacsconf, subed, emacs

Summary (735 words): Emacs is a handy audio/video editor. subed-record can combine multiple audio files and images to create multiple output videos.

Watch on YouTube

It's nice to feel like you're saying someone's name correctly. We ask EmacsConf speakers to introduce themselves in the first few seconds of their video, but people often forget to do that, and that's okay. We started recording introductions for EmacsConf 2022 so that stream hosts don't have to worry about figuring out pronunciation while they're live. Here's how I used subed-record to turn my recordings into lots of little videos.

First, I generated the title images by using Emacs Lisp to replace text in a template SVG and then using Inkscape to convert the SVG into a PNG. Each image showed information for the previous talk as well as the upcoming talk. (emacsconf-stream-generate-in-between-pages)

Figure 1: Sample title image (emacsconf.svg.png)

Then I generated the text for each talk based on the title, the speaker names, pronunciation notes, pronouns, and type of Q&A. Each introduction generally followed the pattern, "Next we have title by speakers. Details about Q&A." (emacsconf-pad-expand-intro and emacsconf-subed-intro-subtitles below)

00:00:00.000 --> 00:00:00.999
#+OUTPUT: sat-open.webm
[[file:/home/sacha/proj/emacsconf/2023/assets/in-between/sat-open.svg.png]]
Next, we have "Saturday opening remarks".

00:00:05.000 --> 00:00:04.999
#+OUTPUT: adventure.webm
[[file:/home/sacha/proj/emacsconf/2023/assets/in-between/adventure.svg.png]]
Next, we have "An Org-Mode based text adventure game for learning the basics of Emacs, inside Emacs, written in Emacs Lisp", by Chung-hong Chan. He will answer questions via Etherpad.

I copied the text into an Org note in my inbox, which Syncthing copied over to the Orgzly Revived app on my Android phone. I used Google Recorder to record the audio. I exported the m4a audio file and a rough transcript, copied them back via Syncthing, and used subed-record to edit the audio into a clean audio file without oopses.

Each intro had a set of captions that started with a NOTE comment. The NOTE comment specified the following:

  • #+AUDIO:: the audio source to use for the timestamped captions that follow
  • [[file:...]]: the title image I generated for each talk. When subed-record-compile-video sees a comment with a link to an image, video, or animated GIF, it takes that visual and uses it for the span of time until the next visual.
  • #+OUTPUT: the file to create.
NOTE #+OUTPUT: hyperdrive.webm
[[file:/home/sacha/proj/emacsconf/2023/assets/in-between/hyperdrive.svg.png]]
#+AUDIO: intros-2023-11-21-cleaned.opus

00:00:15.680 --> 00:00:17.599
Next, we have "hyperdrive.el:

00:00:17.600 --> 00:00:21.879
Peer-to-peer filesystem in Emacs", by Joseph Turner

00:00:21.880 --> 00:00:25.279
and Protesilaos Stavrou (also known as Prot).

00:00:25.280 --> 00:00:27.979
Joseph will answer questions via BigBlueButton,

00:00:27.980 --> 00:00:31.080
and Prot might be able to join depending on the weather.

00:00:31.081 --> 00:00:33.439
You can join using the URL from the talk page

00:00:33.440 --> 00:00:36.320
or ask questions through Etherpad or IRC.

NOTE
#+OUTPUT: steno.webm
[[file:/home/sacha/proj/emacsconf/2023/assets/in-between/steno.svg.png]]
#+AUDIO: intros-2023-11-19-cleaned.opus

00:03:23.260 --> 00:03:25.480
Next, we have "Programming with steno",

00:03:25.481 --> 00:03:27.700
by Daniel Alejandro Tapia.

NOTE
#+AUDIO: intro-2023-11-29-cleaned.opus

00:00:13.620 --> 00:00:16.580
You can ask your questions via Etherpad and IRC.

00:00:16.581 --> 00:00:18.079
We'll send them to the speaker

00:00:18.080 --> 00:00:19.919
and post the answers in the talk page

00:00:19.920 --> 00:00:21.320
after the conference.

I could then call subed-record-compile-video to create the videos for all the intros, or mark a region with C-SPC and then use subed-record-compile-video to compile only the intros inside that region.

Sample intro

Using Emacs to edit the audio and compile videos worked out really well because it made it easy to change things.

  • Changing pronunciation or titles: For EmacsConf 2023, I got the recordings sorted out in time for the speakers to correct my pronunciation if they wanted to. Some speakers also changed their talk titles midway. If I wanted to redo an intro, I just had to rerecord that part, run it through my subed-record audio cleaning process, add an #+AUDIO: comment specifying which file I want to take the audio from, paste it into my main intros.vtt, and recompile the video.
  • Cancelling talks: One of the talks got cancelled, so I needed to update the images for the talk before it and the talk after it. I regenerated the title images and recompiled the videos. I didn't even need to figure out which talk needed to be updated - it was easy enough to just recompile all of them.
  • Changing type of Q&A: For example, some speakers needed to switch from answering questions live to answering them after the conference. I could just delete the old instructions, paste in the instructions from elsewhere in my intros.vtt (making sure to set #+AUDIO to the file if it came from a different take), and recompile the video.

And of course, all the videos were captioned. Bonus!

So that's how using Emacs to edit and compile simple videos saved me a lot of time. I don't know how I'd handle this otherwise. 47 video projects that might all need to be updated if, say, I changed the template? Yikes. Much better to work with text. Here are the technical details.

Generating the title images

I used Inkscape to add IDs to our template SVG so that I could edit them with Emacs Lisp. From emacsconf-stream.el:

emacsconf-stream-generate-in-between-pages: Generate the title images.
(defun emacsconf-stream-generate-in-between-pages (&optional info)
  "Generate the title images."
  (interactive)
  (setq info (or emacsconf-schedule-draft (emacsconf-publish-prepare-for-display (emacsconf-filter-talks (or info (emacsconf-get-talk-info))))))
  (let* ((by-track (seq-group-by (lambda (o) (plist-get o :track)) info))
         (dir (expand-file-name "in-between" emacsconf-stream-asset-dir))
         (template (expand-file-name "template.svg" dir)))
    (unless (file-directory-p dir)
      (make-directory dir t))
    (mapc (lambda (track)
            (let (prev)
              (mapc (lambda (talk)
                      (let ((dom (xml-parse-file template)))
                        (mapc (lambda (entry)
                                (let ((prefix (car entry)))
                                  (emacsconf-stream-svg-set-text dom (concat prefix "title")
                                                 (plist-get (cdr entry) :title))
                                  (emacsconf-stream-svg-set-text dom (concat prefix "speakers")
                                                 (plist-get (cdr entry) :speakers))
                                  (emacsconf-stream-svg-set-text dom (concat prefix "url")
                                                 (and (cdr entry) (concat emacsconf-base-url (plist-get (cdr entry) :url))))
                                  (emacsconf-stream-svg-set-text
                                   dom
                                   (concat prefix "qa")
                                   (pcase (plist-get (cdr entry) :q-and-a)
                                     ((rx "live") "Live Q&A after talk")
                                     ((rx "pad") "Etherpad")
                                     ((rx "IRC") "IRC Q&A after talk")
                                     (_ "")))))
                              (list (cons "previous-" prev)
                                    (cons "current-" talk)))
                        (with-temp-file (expand-file-name (concat (plist-get talk :slug) ".svg") dir)
                          (dom-print dom))
                        (shell-command
                         (concat "inkscape --export-type=png -w 1280 -h 720 --export-background-opacity=0 "
                                 (shell-quote-argument (expand-file-name (concat (plist-get talk :slug) ".svg")
                                                                         dir)))))
                      (setq prev talk))
                    (emacsconf-filter-talks (cdr track)))))
          by-track)))

emacsconf-stream-svg-set-text: Update DOM to set the tspan in the element with ID to TEXT.
(defun emacsconf-stream-svg-set-text (dom id text)
  "Update DOM to set the tspan in the element with ID to TEXT.
If the element doesn't have a tspan child, use the element itself."
  (if (or (null text) (string= text ""))
      (let ((node (dom-by-id dom id)))
        (when node
          (dom-set-attribute node 'style "visibility: hidden")
          (dom-set-attribute (dom-child-by-tag node 'tspan) 'style "fill: none; stroke: none")))
    (setq text (svg--encode-text text))
    (let ((node (or (dom-child-by-tag
                     (car (dom-by-id dom id))
                     'tspan)
                    (dom-by-id dom id))))
      (cond
       ((null node)
        (error "Could not find node %s" id))                      ; skip
       ((= (length node) 2)
        (nconc node (list text)))
       (t (setf (elt node 2) text))))))

Generating the script

From emacsconf-pad.el:

emacsconf-pad-expand-intro: Make an intro for TALK.
(defun emacsconf-pad-expand-intro (talk)
  "Make an intro for TALK."
  (cond
   ((null (plist-get talk :speakers))
    (format "Next, we have \"%s\"." (plist-get talk :title)))
   ((plist-get talk :intro-note)
    (plist-get talk :intro-note))
   (t
    (let ((pronoun (pcase (plist-get talk :pronouns)
                     ((rx "she") "She")
                     ((rx "\"ou\"" "Ou"))
                     ((or 'nil "nil" (rx string-start "he") (rx "him")) "He")
                     ((rx "they") "They")
                     (_ (or (plist-get talk :pronouns) "")))))
      (format "Next, we have \"%s\", by %s%s.%s"
              (plist-get talk :title)
              (replace-regexp-in-string ", \\([^,]+\\)$"
                                        ", and \\1"
                                        (plist-get talk :speakers))
              (emacsconf-surround " (" (plist-get talk :pronunciation) ")" "")
              (pcase (plist-get talk :q-and-a)
                ((or 'nil "") "")
                ((rx "after") " You can ask questions via Etherpad and IRC. We'll send them to the speaker, and we'll post the answers on the talk page afterwards.")
                ((rx "live")
                 (format " %s will answer questions via BigBlueButton. You can join using the URL from the talk page or ask questions through Etherpad or IRC."
                         pronoun
                         ))
                ((rx "pad")
                 (format " %s will answer questions via Etherpad."
                         pronoun
                         ))
                ((rx "IRC")
                 (format " %s will answer questions via IRC in the #%s channel."
                         pronoun
                         (plist-get talk :channel)))))))))

And from emacsconf-subed.el:

emacsconf-subed-intro-subtitles: Create the introduction as subtitles.
(defun emacsconf-subed-intro-subtitles ()
  "Create the introduction as subtitles."
  (interactive)
  (subed-auto-insert)
  (let ((emacsconf-publishing-phase 'conference))
    (mapc
     (lambda (sub) (apply #'subed-append-subtitle nil (cdr sub)))
     (seq-map-indexed
      (lambda (talk i)
        (list
         nil
         (* i 5000)
         (1- (* i 5000))
         (format "#+OUTPUT: %s.webm\n[[file:%s]]\n%s"
                 (plist-get talk :slug)
                 (expand-file-name
                  (concat (plist-get talk :slug) ".svg.png")
                  (expand-file-name "in-between" emacsconf-stream-asset-dir))
                 (emacsconf-pad-expand-intro talk))))
      (emacsconf-publish-prepare-for-display (emacsconf-get-talk-info))))))


EmacsConf backstage: Trimming the BigBlueButton recordings based on YouTube duration

| emacsconf, emacs, youtube, video

I wanted to get the Q&A sessions up quickly after the conference, so I uploaded them to YouTube and added them to the EmacsConf 2023 playlist. I used YouTube's video editor to roughly guess where to trim them based on the waveforms. I needed to actually trim the source videos, though, so that our copies would be up to date and I could use those for the Toobnix uploads.

My first task was to figure out which videos needed to be trimmed to match the YouTube edits. First, I retrieved the video details using the API and the code that I added to emacsconf-extract.el.

(setq emacsconf-extract-youtube-api-video-details (emacsconf-extract-youtube-get-video-details emacsconf-extract-youtube-api-playlist-items))

Then I made a table comparing the file duration with the YouTube duration, showing rows only if the difference was more than 3 minutes.

(append
 '(("type" "slug" "file duration" "youtube duration" "diff"))
 (let ((threshold-secs (* 3 60))) ; don't sweat small differences
   (seq-mapcat
    (lambda (talk)
      (seq-keep
       (lambda (row)
         (when (plist-get talk (cadr row))
           (let* ((video (emacsconf-extract-youtube-find-url-video-in-list
                          (plist-get talk (cadr row))
                          emacsconf-extract-youtube-api-video-details))
                  (video-duration (if (and video (emacsconf-extract-youtube-duration-msecs video))
                                      (/ (emacsconf-extract-youtube-duration-msecs video) 1000.0)))
                  (file-duration (ceiling
                                  (/ (compile-media-get-file-duration-ms (emacsconf-talk-file talk (format "--%s.webm" (car row))))
                                     1000.0))))
             (when (and video-duration (> (abs (- file-duration video-duration)) threshold-secs))
               (list (car row)
                     (plist-get talk :slug)
                     (and file-duration (format-seconds "%h:%z%.2m:%.2s" file-duration))
                     (and video-duration (format-seconds "%h:%z%.2m:%.2s" video-duration))
                     (emacsconf-format-seconds
                      (abs (- file-duration video-duration))))))))
       '(("main" :youtube-url)
         ("answers" :qa-youtube-url))))
    (emacsconf-publish-prepare-for-display (emacsconf-get-talk-info)))))

Then I got the commands to trim the videos.

 (mapconcat (lambda (row)
              (let ((talk (emacsconf-resolve-talk (elt row 1))))
                (format "ffmpeg -y -i %s--%s.webm -t %s -c copy %s--%s--trimmed.webm"
                        (plist-get talk :file-prefix)
                        (car row)
                        (concat (elt row 3) ".000")
                        (plist-get talk :file-prefix)
                        (car row))))
            (cdr to-trim)
            "\n"))

After quickly checking the results, I copied them over to the original videos, updated the video data in my conf.org, and republished the info pages in the wiki.

The time I spent on figuring out how to talk to the YouTube API feels like it's paying off.

EmacsConf backstage: Figuring out our maximum number of simultaneous BigBlueButton users

| emacsconf

[2023-12-30 Sat] Update: fix total number of unique users; I flipped the assoc so that the car is the user ID and the cdr is the name

A few people have generously donated money to EmacsConf, so now we're thinking of how to use that money effectively to scale EmacsConf up or help people be happier.

One of the things I'd like to improve is our BigBlueButton web conferencing setup, since fiddling with the screen layout was a little annoying this year. We're using BigBlueButton 2.2, which was released in 2020. The current version is 2.7 and has a few improvements that I think would be very useful.

  • Better layouts mean that webcams can be on the left side, leaving more space for the presentation, which means a more pleasant viewing experience and less manual fiddling with the sizes of things.
  • Built-in timers could help speakers and hosts easily stay on track.
  • A unified WEBM export (instead of separate videos for webcams and screensharing) means less post-processing with ffmpeg, and probably a better layout too.
  • The option to share system audio when using a Chromium-based browser means easier multimedia presentations, since setting up audio loopbacks can be tricky.

We'd love to use those improvements at the next EmacsConf, and they might be handy for the handful of other Emacs meetups who use our BigBlueButton setup from time to time. I think reducing the mental load from managing screen layouts might be an important step towards making it possible to have a third track.

The current BigBlueButton is a 6-core 3.4GHz virtual machine with 8 GB RAM. During EmacsConf 2023, the CPU load stayed at around 35%, with 4 GB memory used. It idles at 3% CPU and about 3 GB RAM. We have ssh access to an account with sudo, but no higher-level access in case that breaks, or in case we mess up upgrading the underlying Ubuntu distribution (which we should upgrade, since it's reached its support end-of-life).

BigBlueButton's website recommends installing 2.7 on a clean, dedicated system instead of trying to do the upgrade in place. It requires a major version upgrade to at least Ubuntu 20.04, and it recommends 16 GB memory and 8 CPU cores.

System administration isn't my current cup of tea, and the other organizers might be busy.

Some choices we're thinking about are:

  • Continue with our current 2.2 setup, just hack better layouts into it with Tampermonkey or something: probably not a very good choice from the perspective of being a good citizen of the Internet, since the system's out of date
  • Try to upgrade in place and hope we don't break anything: one of the other organizers is willing to add this to his maybe-do list
  • Install 2.7 on a new node, try to migrate to it to figure out the process, and then maybe consider spinning up a new node during EmacsConf, adding it to our hosting costs budget
  • Pay for BigBlueButton hosting: might be worth it if no one wants to take on the responsibility for managing BBB ourselves
  • Switch to hosted Jitsi: recording might be trickier

Commercial BigBlueButton hosts tend to charge based on the number of simultaneous users and the number of rooms.

It's been nice having one room per group of speakers because then we can e-mail speakers their personal URL for testing and checking in, the scripts can join the correct room automatically, we never have to worry about time, and all the recordings are split up. In previous years, we rotated among a set of five rooms, but then we needed to keep track of who was using which rooms. I think going with multiple rooms makes sense.

So it mostly comes down to the number of simultaneous users. I rsynced /var/bbb/recording/raw and cross-referenced each talk with its BBB meeting using slugs I'd added to the meeting title, disambiguating them as needed. Then I could use the following function from emacsconf-extract.el:

Report on simultaneous users
(defun emacsconf-extract-bbb-report ()
  "Summarize raw BBB event logs: simultaneous users, meetings, and totals."
  (let* ((participant-count 0)
         (meeting-count 0)
         (max-meetings 0)
         (max-participants 0)
         meeting-participants
         (meeting-events
          (sort
           (seq-mapcat
            (lambda (talk)
              (when (plist-get talk :bbb-meeting-id)
                (let ((dom (xml-parse-file (emacsconf-extract-bbb-raw-events-file-name talk)))
                      participants talking meeting-events)
                  (mapc (lambda (o)
                          (pcase (dom-attr o 'eventname)
                            ("ParticipantJoinEvent"
                             (cl-pushnew (cons (dom-text (dom-by-tag o 'userId))
                                               (dom-text (dom-by-tag o 'name)))
                                         participants)
                             (push (cons (string-to-number (dom-text (dom-by-tag o 'timestampUTC)))
                                         (dom-attr o 'eventname))
                                   meeting-events))
                            ("ParticipantLeftEvent"
                             (when (string= (dom-attr o 'module) "PARTICIPANT")
                               (push (cons (string-to-number (dom-text (dom-by-tag o 'timestampUTC)))
                                           (dom-attr o 'eventname))
                                     meeting-events)))
                            ("ParticipantTalkingEvent"
                             (cl-pushnew (assoc-default (dom-text (dom-by-tag o 'participant)) participants) talking))
                            ((or
                              "CreatePresentationPodEvent"
                              "EndAndKickAllEvent")
                             (push (cons (string-to-number (dom-text (dom-by-tag o 'timestampUTC)))
                                         (dom-attr o 'eventname))
                                   meeting-events))))
                        (dom-search dom (lambda (o) (dom-attr o 'eventname))))
                  (cl-pushnew (list :slug (plist-get talk :slug)
                                    :participants participants
                                    :talking talking)
                              meeting-participants)
                  meeting-events)))
            (emacsconf-get-talk-info))
           (lambda (a b) (< (car a) (car b))))))
    (dolist (event meeting-events)
      (pcase (cdr event)
        ("CreatePresentationPodEvent" (cl-incf meeting-count) (when (> meeting-count max-meetings) (setq max-meetings meeting-count)))
        ("ParticipantJoinEvent" (cl-incf participant-count) (when (> participant-count max-participants) (setq max-participants participant-count)))
        ("ParticipantLeftEvent" (cl-decf participant-count))
        ("EndAndKickAllEvent" (cl-decf meeting-count))))
    `((,(length meeting-participants) "Number of meetings analyzed")
      (,max-participants "Max number of simultaneous users")
      (,max-meetings "Max number of simultaneous meetings")
      (,(apply 'max (mapcar (lambda (o) (length (plist-get o :participants))) meeting-participants)) "Max number of people in one meeting")
      (,(length (seq-uniq (seq-mapcat (lambda (o) (mapcar #'cdr (plist-get o :participants))) meeting-participants))) "Total unique users")
      (,(length (seq-uniq (seq-mapcat (lambda (o) (plist-get o :talking)) meeting-participants))) "Total unique talking"))))
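
The function just returns a list of (value description) pairs, so something quick like this (a hypothetical snippet, not part of emacsconf-extract.el) flattens it into the summary shown below:

(mapconcat (lambda (row)
             ;; Each row is (VALUE DESCRIPTION).
             (format "%d %s" (car row) (cadr row)))
           (emacsconf-extract-bbb-report)
           "\n")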

31 Number of meetings analyzed
62 Max number of simultaneous users
6 Max number of simultaneous meetings
27 Max number of people in one meeting
84 Total unique users
36 Total unique talking

The number of simultaneous users is pretty manageable. Most people watch the stream, which we broadcast via Icecast, so those viewers aren't reflected in these numbers. I think we tended to have somewhere between 100 and 200 viewers on Icecast.

For that kind of usage, some hosting options are:

  • BigBlueButton hosting:

    | Host               | Monthly | Concurrent users | Notes                                                                                                 |
    |--------------------+---------+------------------+-------------------------------------------------------------------------------------------------------|
    | BiggerBlueButton   | USD 40  | 150              | I'd need to check if we can have more than 10 created rooms if only at most 10 are used concurrently |
    | Web Hosting Zone   | USD 49  | 100              |                                                                                                       |
    | Myna Parrot        | USD 60  | 75               | USD 150/month + USD 15 setup fee if we want to use our own URL                                       |
    | BigBlueButton.host | USD 85  | 80               |                                                                                                       |
    | BigBlueMeeting     | USD 125 | 100              |                                                                                                       |
    | BBB On Demand      |         |                  | 8 vCPU 32 GB RAM: USD 1.20/hour, USD 0.05/hour when stopped: USD 86 for 3 days                        |
    | BBB On Demand      |         | 100              | USD 2.40/hour: USD 173 for 3 days                                                                     |
  • Virtual private server: We'd need to set up and manage this ourselves. We could probably run it for one week before to give speakers time to do their tech-checks and one week after to give me time to pull the recordings. The other servers are on Linode, so it might make sense to keep it there too and manage it all in one place.

    | Type                       | Monthly |                                                      |
    |----------------------------+---------+------------------------------------------------------|
    | dedicated 8 GB 4-core      | USD 72  | USD 0.108/hour, so USD 36 if we run it for two weeks |
    | dedicated CPU 16 GB 8-core | USD 144 | USD 0.216/hour, so USD 72 if we run it for two weeks |

It would be nice if we could just do the upgrade and get BigBlueButton back onto our current server (and, while we're at it, fix up that server with a proper SMTP setup so that it can send out things like password reminder e-mails). That said, the current BigBlueButton server was donated by a now-defunct organization, so it might be a good idea to have a backup plan for it anyway.

It would also be nice to add it to our Ansible configuration so that we could install BigBlueButton that way, maybe based on ansible-role-bigbluebutton. But again, not my current cup of tea, so it will need to wait until someone can step up to do it or I get around to it.

The Free Software Foundation feels strongly about Service as a Software Substitute (SaaSS). They're okay with virtual private servers, but I'm not sure how far their moral objection goes when it comes to using and paying for free/libre/open source software as a hosted service, like BigBlueButton. I'm personally okay with paying for services, especially if they're based on free software. Since EmacsConf is committed to using free software and not requiring people to use non-free software, that might be something the other organizers can weigh in on. If someone feels strongly enough about it, maybe they'll work on it. It can be hard enough for people to find time for the things they like, so if no one particularly likes doing this sort of thing, I'm okay with scaling down or paying for something that's ready to go.

Anyway, at least we have the numbers for decisions!


EmacsConf backstage: Using Spookfox to publish YouTube and Toobnix video drafts

| emacsconf, emacs, spookfox, youtube, video

I ran into quota limits when uploading videos to YouTube with a command-line tool, so I uploaded videos by selecting up to 15 videos at a time using the web-based interface. Each video was a draft, though, and I was having a hard time updating its visibility through the API. I think it eventually worked, but in the meantime, I used this very hacky hack to look for the "Edit Draft" button and click through the screens to publish them.

emacsconf-extract-youtube-publish-video-drafts-with-spookfox: Look for drafts and publish them.
(defun emacsconf-extract-youtube-publish-video-drafts-with-spookfox ()
  "Look for drafts and publish them."
  (while (not (eq (spookfox-js-injection-eval-in-active-tab
                   "document.querySelector('.edit-draft-button div') != null" t) :false))
    (progn
      (spookfox-js-injection-eval-in-active-tab
       "document.querySelector('.edit-draft-button div').click()" t)
      (sleep-for 2)
      (spookfox-js-injection-eval-in-active-tab
       "document.querySelector('#step-title-3').click()" t)
      (when (spookfox-js-injection-eval-in-active-tab
             "document.querySelector('tp-yt-paper-radio-button[name=\"PUBLIC\"] #radioLabel').click()" t)
        (spookfox-js-injection-eval-in-active-tab
         "document.querySelector('#done-button').click()" t)
        (while (not (eq  (spookfox-js-injection-eval-in-active-tab
                          "document.querySelector('#close-button .label') == null" t)
                         :false))
          (sleep-for 1))

        (spookfox-js-injection-eval-in-active-tab
         "document.querySelector('#close-button .label').click()" t)
        (sleep-for 1)))))

Another example of a hacky Spookfox workaround was publishing the unlisted videos. I couldn't figure out how to properly authenticate with the Toobnix (PeerTube) API to change the visibility of videos. PeerTube's web interface is built with Angular, so using .click() on the input elements didn't seem to trigger anything. I found out that I needed to use .dispatchEvent(new Event('input')) to get the visibility dropdown to display its options. (source)

emacsconf-extract-toobnix-publish-video-from-edit-page: Messy hack to set a video to public and store the URL.
(defun emacsconf-extract-toobnix-publish-video-from-edit-page ()
  "Messy hack to set a video to public and store the URL."
  (interactive)
  (spookfox-js-injection-eval-in-active-tab "document.querySelector('label[for=privacy]').scrollIntoView(); document.querySelector('label[for=privacy]').closest('.form-group').querySelector('input').dispatchEvent(new Event('input'));" t)
  (sit-for 1)
  (spookfox-js-injection-eval-in-active-tab "document.querySelector('span[title=\"Anyone can see this video\"]').click()" t)
  (sit-for 1)
  (spookfox-js-injection-eval-in-active-tab "document.querySelector('button.orange-button').click()" t)(sit-for 3)
  (emacsconf-extract-store-url)
  (shell-command "xdotool key Alt+Tab sleep 1 key Ctrl+w Alt+Tab"))

It's a little nicer using Spookfox to automate browser interactions than using xdotool, since I can get data back out of the page, too. I could also have used Puppeteer from either Python or NodeJS, but it's nice staying in Emacs Lisp. Spookfox has some JavaScript limitations (it can't close windows, for example), so I might still use bits of xdotool or Puppeteer to work around that. Still, it's nice to now have an idea of how to talk to Angular components.
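
For example, here's a minimal sketch of reading a value back out of the page; document.title is just a stand-in for whatever you actually want to grab, and this assumes Spookfox is already connected to the active tab:

;; Evaluate a JavaScript expression in the active tab and get the result
;; back as Lisp data. `document.title' is only a placeholder here.
(spookfox-js-injection-eval-in-active-tab "document.title" t)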