Scaling a BigBlueButton server down to a 1 GB node between uses

| geek, tech, emacsconf, emacs

Now that we've survived EmacsConf, I've been looking into running a BigBlueButton server so that various Emacs meetups can use it if they like instead of relying on Jitsi or other free video-conferencing services. (I spent some time looking into Galene, but I'm not quite sure that's ready for our uses yet, like this issue that LibrePlanet ran into with recording.)

BigBlueButton requires a server with at least 4 CPU cores and 8 GB of RAM to even start up, and it doesn't like to share with other services. This costs about USD 48+tax/month on Linode or USD 576+tax/year, which is not an efficient use of funds yet. I could delete it after each instance, but I've been having a hard time properly restoring it from backup after deploying to a new IP address. bbb-conf --setip doesn't seem to catch everything, so I was still getting curl errors related to verifying the certificate.

A reasonable in-between is to run it on Linode's lowest plan (1 core, 1 GB RAM; USD 60+tax for the year) in between meetups, and then spin things up for maybe 6-12 hours around each meetup. If I go with the 4-core / 8 GB setup, that would be an extra USD 0.43-0.86 per meetup, which is eminently doable. I could even go with the recommended configuration of 8 cores and 16 GB memory on a dedicated CPU plan (USD 0.216/hour, so USD 1.30-2.59 per meetup). This was the approach we used while preparing for EmacsConf. Since I didn't have a lot of programming time, I scaled the node up to 4 cores / 8 GB RAM whenever I had time to work on it, and scaled it back down to 1 GB at the end of each working session. I scaled it up to a dedicated 8-core / 16 GB plan for EmacsConf itself, during which we used roughly half of the CPU capacity to host a maximum of 107 simultaneous users over 7 meetings.
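To sanity-check those numbers, here's a quick back-of-the-envelope calculation, assuming the shared 4-core / 8 GB plan bills at roughly USD 0.072/hour:

```shell
# Estimate the extra cost of scaling up for a 6- or 12-hour meetup window
for HOURS in 6 12; do
    awk -v rate=0.072 -v hours="$HOURS" \
        'BEGIN { printf "%d hours: USD %.2f\n", hours, rate * hours }'
done
```

That prints USD 0.43 for 6 hours and USD 0.86 for 12 hours, which is where the range above comes from.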

I reviewed my BigBlueButton setup notes in the EmacsConf organizers notebook and the 2024 notebook and set up a Linode instance under my account, so that I can handle the billing and also so that Amin Bandali doesn't get spammed by all the notifications (up, down, up, down…). And then I'll be able to just scale it up when EmacsConf comes around again, which is nice.

Anyway, BBB refuses to install on a machine with fewer than 4 cores or 8 GB of RAM, but once you set it up, it'll valiantly thrash around even on an underpowered server, which makes working with the server over SSH a lot slower. Besides, that's not friendly to other people using the same server. I wanted to configure the services so that they would only run on a server of the correct size. It turns out that systemd lets you specify either ConditionMemory or ConditionCPUs in the unit configuration file, and that you can use files ending in .conf in a directory named like yourservicename.service.d to override part of the configuration. Clear examples were hard to find, so I wanted to share these notes.

Since ConditionMemory is specified in bytes (e.g. 8000000000), I found ConditionCPUs easier to read.
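For reference, each drop-in override ends up looking like this (using bbb-web.service as an example; the file can be named anything ending in .conf):

```ini
# /etc/systemd/system/bbb-web.service.d/require-cpu.conf
[Unit]
# Skip starting this unit unless the machine has at least 4 CPUs
ConditionCPUs=>=4
```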

I used these commands to check if I'd gotten the syntax right:

systemd-analyze condition 'ConditionCPUs=>=4'

and then I wrote this script to set up the overrides:

CPUS_REQUIRED=4
for ID in coturn.service redis-server.service bigbluebutton.target multi-user.target bbb-graphql-server.service haproxy.service bbb-rap-resque-worker.service bbb-webrtc-sfu.service bbb-fsesl-akka.service bbb-webrtc-recorder.service bbb-pads.service bbb-export-annotations.service bbb-web.service freeswitch.service etherpad.service bbb-rap-starter.service bbb-rap-caption-inbox.service bbb-apps-akka.service bbb-graphql-actions.service postgresql@14-main.service; do
    mkdir -p /etc/systemd/system/$ID.d
    printf "[Unit]\nConditionCPUs=>=$CPUS_REQUIRED\n" > /etc/systemd/system/$ID.d/require-cpu.conf
done
systemctl daemon-reload
systemd-analyze verify bigbluebutton.target

It seems to work. When I use linode-cli to resize to the testing size, BigBlueButton works:

linode-cli linodes resize $BBB_ID --type g6-standard-4 --allow_auto_disk_resize false
sleep 5m    # give the resize migration time to finish
linode-cli linodes boot $BBB_ID
ssh root@bbb.emacsverse.org "cd ~/greenlight-v3; docker compose restart"
notify-send "Should be ready"

And when I resize it down to a 1 GB nanode, BigBlueButton doesn't get started and the VPS is nice and responsive when I SSH in.

echo Powering off
linode-cli linodes shutdown $BBB_ID
sleep 30    # wait for the shutdown to complete
echo "Resizing BBB node to nanode, dormant"
linode-cli linodes resize $BBB_ID --type g6-nanode-1 --allow_auto_disk_resize false

So now I'm going to coordinate with Ihor Radchenko about when he might want to try this out for OrgMeetup, and I can talk to other meetup organizers to figure out times. People will probably want to test things before announcing it to their meetup groups, so we just need to schedule that. This is BigBlueButton 3.0, and I'm not 100% confident in the setup: we had some technical issues with some EmacsConf speakers even though we'd done a tech check with them before going live with their session, and I'm not sure what happened there.

I'm still a little nervous about accidentally forgetting to downscale the server and running up a bill, but I've scheduled downscaling with the at command before, so that's helpful. If it turns out to be something we want to do regularly, I might even be able to use a cronjob from my other server so that it happens even if my laptop is off, and maybe set up a backup nginx server with a friendly message (and maybe a list of upcoming meetups) in case people connect before it's been scaled up. Anyway, I think that's a totally good use of part of the Google Open Source Peer Bonus I received last year.
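If this does become a regular thing, the downscaling could live in a crontab on that other server (a sketch; the script path and schedule here are made up):

```crontab
# m  h  dom mon dow  command
# Scale the BBB node back down late Wednesday night, well after the meetup ends
30 23  *   *   3     /root/bin/bbb-scale-down.sh
```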

As an aside, you can change a room's friendly_id to something actually friendly. In the Rails console (docker exec -it greenlight-v3 bundle exec rails console), you could do something like this:

Room.find_by(friendly_id: "CURRENT_ROOM_ID").update_attribute(:friendly_id, "NEW_CUSTOM_ID")

Anyway, let me know if you organize an Emacs meetup and want to give this BigBlueButton instance a try!

You can comment with Disqus or you can e-mail me at sacha@sachachua.com.