Categories: streaming


Quick notes on livestreaming to YouTube with FFmpeg on a Lenovo X230T

| video, youtube, streaming, ffmpeg, yay-emacs

[2024-01-05 Fri]: Updated scripts

Text from the sketch

Quick thoughts on livestreaming

Why:

  • work out loud
  • share tips
  • share more
  • spark conversations
  • (also get questions about things)

Doable with ffmpeg on my X230T:

  • streaming from my laptop
  • lapel mic + system audio,
  • second screen for monitoring

Ideas for next time:

  • Overall notes in Emacs with outline, org-timer timestamped notes; capture to this file
  • Elisp to start/stop the stream → find old code
  • Use the Yeti? Better sound
  • tee to a local recording
  • grab screenshot from SuperNote mirror?

Live streaming info density:

  • High: Emacs News review, package/workflow demo
  • Narrating a blog post to make it a video
  • Categorizing Emacs News, exploring packages
  • Low: Figuring things out

YouTube can do closed captions for livestreams, although accuracy is low. Videos take a while to be ready to download.

Experimenting with working out loud

I wanted to write a report on EmacsConf 2023 so that we could share it with speakers, volunteers, participants, donors, related organizations like the Free Software Foundation, and other communities. I experimented with livestreaming via YouTube while I worked on the conference highlights.

It's a little over an hour long and probably very boring, but it was nice of people to drop by and say hello.

The main parts are:

  • 0:00: reading through other conference reports for inspiration
  • 6:54: writing an overview of the talks
  • 13:10: adding quotes for specific talks
  • 25:00: writing about the overall conference
  • 32:00: squeezing in more highlights
  • 49:00: fiddling with the formatting and the export

It mostly worked out, aside from a brief moment of "uhhh, I'm looking at our private conf.org file on stream". Fortunately, the e-mail addresses that were shown were the public ones.

Technical details

Setup:

  • I set up environment variables and screen resolution:

      # From pacmd list-sources | egrep '^\s+name'
      LAPEL=alsa_input.usb-Jieli_Technology_USB_Composite_Device_433035383239312E-00.mono-fallback
      YETI=alsa_input.usb-Blue_Microphones_Yeti_Stereo_Microphone_REV8-00.analog-stereo
      SYSTEM=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor
      # MIC=$LAPEL
      # AUDIO_WEIGHTS="1 1"
      MIC=$YETI
      AUDIO_WEIGHTS="0.5 0.5"
      OFFSET=+1920,430
      SIZE=1280x720
      SCREEN=LVDS-1  # from xrandr
      xrandr --output $SCREEN --mode 1280x720
    
  • I switch to a larger size and a light theme. I also turn consult previews off to minimize the risk of leaking data through buffer previews.
    my-emacsconf-prepare-for-screenshots: Set the resolution, change to a light theme, and make the text bigger.
    (defun my-emacsconf-prepare-for-screenshots ()
      (interactive)
      (shell-command "xrandr --output LVDS-1 --mode 1280x720")
      (modus-themes-load-theme 'modus-operandi)
      (my-hl-sexp-update-overlay)
      (set-face-attribute 'default nil :height 170)
      (keycast-mode))
    

Testing:

ffmpeg -f x11grab -video_size $SIZE -i :0.0$OFFSET -frames:v 1 -y /tmp/test.png; display /tmp/test.png
ffmpeg -f pulse -i $MIC -f pulse -i $SYSTEM -filter_complex amix=inputs=2:weights=$AUDIO_WEIGHTS:duration=longest:normalize=0 -y /tmp/test.mp3; mpv /tmp/test.mp3
DATE=$(date "+%Y-%m-%d-%H-%M-%S")
ffmpeg -f x11grab -framerate 30 -video_size $SIZE -i :0.0$OFFSET -f pulse -i $MIC -f pulse -i $SYSTEM -filter_complex "amix=inputs=2:weights=$AUDIO_WEIGHTS:duration=longest:normalize=0" -c:v libx264 -preset fast -maxrate 690k -bufsize 2000k -g 60 -vf format=yuv420p -c:a aac -b:a 96k -flags +global_header -f flv -y "/home/sacha/recordings/$DATE.flv"

Streaming:

DATE=$(date "+%Y-%m-%d-%H-%M-%S")
ffmpeg -f x11grab -framerate 30 -video_size $SIZE -i :0.0$OFFSET -f pulse -i $MIC -f pulse -i $SYSTEM -filter_complex "amix=inputs=2:weights=$AUDIO_WEIGHTS:duration=longest:normalize=0[audio]" -c:v libx264 -preset fast -maxrate 690k -bufsize 2000k -g 60 -vf format=yuv420p -c:a aac -b:a 96k -y -f tee -map 0:v -map '[audio]' -flags +global_header  "/home/sacha/recordings/$DATE.flv|[f=flv]rtmp://a.rtmp.youtube.com/live2/$YOUTUBE_KEY"

To restore my previous setup:

my-emacsconf-back-to-normal: Go back to a more regular setup.
(defun my-emacsconf-back-to-normal ()
  (interactive)
  (shell-command "xrandr --output LVDS-1 --mode 1366x768")
  (modus-themes-load-theme 'modus-vivendi)
  (my-hl-sexp-update-overlay)
  (set-face-attribute 'default nil :height 115)
  (keycast-mode -1))

Ideas for next steps

I can think of a few workflow tweaks that might be fun:

  • a stream notes buffer on the right side of the screen for context information, timestamped notes to make editing/review easier (maybe using org-timer), and so on. I experimented with some streaming-related code in my config, so I can dust that off and see what that's like. I also want an org-capture template for it so that I can add notes from anywhere; there's a rough sketch after this list.
  • a quick way to add a screenshot from my Supernote to my Org files
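
Here's a very rough sketch of what that capture template might look like. The "v" key, the stream-notes.org path, and the use of org-timer-value-string are placeholders I haven't settled on, and the timer needs to be started with org-timer-start for the timestamps to mean anything.

(require 'org-timer)  ; for org-timer-value-string in the %(...) expansion
;; Sketch only: the "v" key and the stream-notes.org path are placeholders.
(with-eval-after-load 'org-capture
  (add-to-list 'org-capture-templates
               '("v" "Stream note with timer" plain
                 (file "~/sync/orgzly/stream-notes.org")
                 ;; Each note starts with the relative timer value.
                 "- %(org-timer-value-string) %?")))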

I think I'll try going through an informal presentation or Emacs News as my next livestream experiment, since that's probably higher information density.


Getting live speech into Emacs with Deepgram's streaming API

| speechtotext, emacs, speech, streaming

This is a quick demonstration of using Deepgram's streaming API to do speech recognition live. It isn't as accurate as OpenAI Whisper but since Whisper doesn't have a streaming API, it'll do for now. I can correct misrecognized words manually. I tend to talk really quickly, so it displays the words per minute in my modeline. I put the words into an Org Mode buffer so I can toggle headings with avy and cycle visibility. When I'm done, it saves the text, JSON, and WAV for further processing. I think it'll be handy to have a quick way to take live notes during interviews or when I'm thinking out loud. Could be fun!

I'm still getting some weirdness where the mode sometimes turns on when I don't expect it, so that's something to look into. Maybe I won't use it as a mode for now. I'll just use my-live-speech-start and my-live-speech-stop.

General code

(defvar my-live-speech-buffer "*Speech*")
(defvar my-live-speech-process nil)
(defvar my-live-speech-output-buffer "*Speech JSON*")

(defvar my-live-speech-functions
  '(my-live-speech-display-in-speech-buffer
    my-live-speech-display-wpm
    my-live-speech-append-to-etherpad)
  "Functions to call with one argument, the recognition results.")

(defun my-live-speech-start ()
  "Turn on live captions."
  (interactive)
  (with-current-buffer (get-buffer-create my-live-speech-buffer)
    (unless (process-live-p my-live-speech-process)
      (let ((default-directory "~/proj/deepgram-live"))
        (message "%s" default-directory)
        (with-current-buffer (get-buffer-create my-live-speech-output-buffer)
          (erase-buffer))
        (setq my-live-speech-recent-words nil
              my-live-speech-wpm-string "READY ")
        (setq my-live-speech-process
              (make-process
               :command '("bash" "run.sh")
               :name "speech"
               :filter 'my-live-speech-json-filter
               :sentinel #'my-live-speech-process-sentinel
               :buffer my-live-speech-output-buffer)))
      (org-mode))
    (display-buffer (current-buffer))))

(defun my-live-speech-stop ()
  "Turn off live captions."
  (interactive)
  (if (process-live-p my-live-speech-process)
      (kill-process my-live-speech-process))
  (setq my-live-speech-wpm-string nil))

;; (define-minor-mode my-live-speech-mode
;;  "Show live speech and display WPM.
;; Need to check how to reliably turn this on and off."
;;  :global t :group 'sachac
;;  (if my-live-speech-mode
;;      (my-live-speech-start)
;;    (my-live-speech-stop)
;;    (setq my-live-speech-wpm-string nil)))

;; based on subed-mpv::client-filter
(defun my-live-speech-handle-json (line)
  "Process the JSON object in LINE."
  (run-hook-with-args 'my-live-speech-functions (json-parse-string line :object-type 'alist)))

(defun my-live-speech-process-sentinel (proc event)
  (when (string-match "finished" event)
    (my-live-speech-stop)
    ;(my-live-speech-mode -1)
    ))

(defun my-live-speech-json-filter (proc string)
  "Insert STRING into PROC's output buffer and process complete lines of JSON."
  (when (buffer-live-p (process-buffer proc))
    (with-current-buffer (process-buffer proc)
      (let* ((proc-mark (process-mark proc))
             (moving (= (point) proc-mark)))
        ;;  insert the output
        (save-excursion
          (goto-char proc-mark)
          (insert string)
          (set-marker proc-mark (point)))
        (if moving (goto-char proc-mark))
        ;; process and remove all complete lines of JSON (lines are complete if ending with \n)
        (let ((pos (point-min)))
          (while (progn (goto-char pos)
                        (end-of-line)
                        (equal (following-char) ?\n))
            (let* ((end (point))
                   (line (buffer-substring pos end)))
              (delete-region pos (+ end 1))
              (with-current-buffer (get-buffer my-live-speech-buffer)
                (my-live-speech-handle-json line)))))))))

Python code based on the Deepgram streaming test suite:

Very rough app.py
# Based on streaming-test-suite
# https://developers.deepgram.com/docs/getting-started-with-the-streaming-test-suite

import pyaudio
import asyncio
import json
import os
import websockets
from datetime import datetime
import wave
import sys

startTime = datetime.now()
key = os.environ['DEEPGRAM_API_KEY']
live_json = os.environ.get('LIVE_CAPTIONS_JSON', True)  # any non-empty value set in the environment counts as true
all_mic_data = []
all_transcripts = []
all_words = []
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 16000
CHUNK = 8000

audio_queue = asyncio.Queue()
REALTIME_RESOLUTION = 0.250
SAMPLE_SIZE = 0

def save_info():
    global SAMPLE_SIZE
    base = startTime.strftime('%Y%m%d%H%M')
    wave_file_path = os.path.abspath(f"{base}.wav")
    wave_file = wave.open(wave_file_path, "wb")
    wave_file.setnchannels(CHANNELS)
    wave_file.setsampwidth(SAMPLE_SIZE)
    wave_file.setframerate(RATE)
    wave_file.writeframes(b"".join(all_mic_data))
    wave_file.close()
    with open(f"{base}.txt", "w") as f:
        f.write("\n".join(all_transcripts))
    with open(f"{base}.json", "w") as f:
        f.write(json.dumps(all_words))
    if live_json:
        print(f'{{"msg": "🟢 Saved to {base}.txt , {base}.json , {base}.wav", "base": "{base}"}}')
    else:
        print(f"🟢 Saved to {base}.txt , {base}.json , {base}.wav")

# Used for microphone streaming only.
def mic_callback(input_data, frame_count, time_info, status_flag):
    audio_queue.put_nowait(input_data)
    return (input_data, pyaudio.paContinue)

async def run(key, method="mic", format="text", **kwargs):
    deepgram_url = f'wss://api.deepgram.com/v1/listen?punctuate=true&smart_format=true&utterances=true&encoding=linear16&sample_rate=16000'
    async with websockets.connect(
        deepgram_url, extra_headers={"Authorization": "Token {}".format(key)}
    ) as ws:
        async def sender(ws):
            try:
                while True:
                    mic_data = await audio_queue.get()
                    all_mic_data.append(mic_data)
                    await ws.send(mic_data)
            except websockets.exceptions.ConnectionClosedOK:
                await ws.send(json.dumps({"type": "CloseStream"}))
                if live_json:
                    print('{"msg": "Closed."}')
                else:
                    print("Closed.")
        async def receiver(ws):
            """Print out the messages received from the server."""
            global all_words
            first_message = True
            first_transcript = True
            transcript = ""
            async for msg in ws:
                res = json.loads(msg)
                if first_message:
                    first_message = False
                try:
                    # handle local server messages
                    if res.get("msg"):
                        if live_json:
                            print(json.dumps(res))
                        else:
                            print(res["msg"])
                    if res.get("is_final"):
                        transcript = (
                            res.get("channel", {})
                            .get("alternatives", [{}])[0]
                            .get("transcript", "")
                        )
                        if transcript != "":
                            if first_transcript:
                                first_transcript = False
                            if live_json:
                                print(json.dumps(res.get("channel", {}).get("alternatives", [{}])[0]))
                            else:
                                print(transcript)
                            all_transcripts.append(transcript)
                            all_words = all_words + res.get("channel", {}).get("alternatives", [{}])[0].get("words", [])
                        # if using the microphone, close stream if user says "goodbye"
                        if method == "mic" and "goodbye" in transcript.lower():
                            await ws.send(json.dumps({"type": "CloseStream"}))
                            if live_json:
                                print('{"msg": "Done."}')
                            else:
                                print("Done.")
                    # handle end of stream
                    if res.get("created"):
                        save_info()
                except KeyError:
                    print(f"🔴 ERROR: Received unexpected API response! {msg}")

        # Set up microphone if streaming from mic
        async def microphone():
            audio = pyaudio.PyAudio()
            stream = audio.open(
                format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK,
                stream_callback=mic_callback,
            )

            stream.start_stream()

            global SAMPLE_SIZE
            SAMPLE_SIZE = audio.get_sample_size(FORMAT)

            while stream.is_active():
                await asyncio.sleep(0.1)

            stream.stop_stream()
            stream.close()

        functions = [
            asyncio.ensure_future(sender(ws)),
            asyncio.ensure_future(receiver(ws)),
        ]

        functions.append(asyncio.ensure_future(microphone()))
        if live_json:
            print('{"msg": "Ready."}')
        else:
            print("🟢 Ready.")
        await asyncio.gather(*functions)

def main():
    """Entrypoint for the example."""
    # Parse the command-line arguments.
    try:
        asyncio.run(run(key, "mic", "text"))
    except websockets.exceptions.InvalidStatusCode as e:
        print(f'🔴 ERROR: Could not connect to Deepgram! {e.headers.get("dg-error")}')
        print(
            f'🔴 Please contact Deepgram Support (developers@deepgram.com) with request ID {e.headers.get("dg-request-id")}'
        )
        return
    except websockets.exceptions.ConnectionClosedError as e:
        error_description = f"Unknown websocket error."
        print(
            f"🔴 ERROR: Deepgram connection unexpectedly closed with code {e.code} and payload {e.reason}"
        )

        if e.reason == "DATA-0000":
            error_description = "The payload cannot be decoded as audio. It is either not audio data or is a codec unsupported by Deepgram."
        elif e.reason == "NET-0000":
            error_description = "The service has not transmitted a Text frame to the client within the timeout window. This may indicate an issue internally in Deepgram's systems or could be due to Deepgram not receiving enough audio data to transcribe a frame."
        elif e.reason == "NET-0001":
            error_description = "The service has not received a Binary frame from the client within the timeout window. This may indicate an internal issue in Deepgram's systems, the client's systems, or the network connecting them."

        print(f"🔴 {error_description}")
        # TODO: update with link to streaming troubleshooting page once available
        # print(f'🔴 Refer to our troubleshooting suggestions: ')
        print(
            f"🔴 Please contact Deepgram Support (developers@deepgram.com) with the request ID listed above."
        )
        return

    except websockets.exceptions.ConnectionClosedOK:
        return

    except Exception as e:
        print(f"🔴 ERROR: Something went wrong! {e}")
        save_info()
        return


if __name__ == "__main__":
    sys.exit(main() or 0)

The Python script sends the microphone stream to Deepgram and prints out the JSON output. The Emacs Lisp code starts an asynchronous process, reads the JSON output, displays the transcript, and calculates the WPM based on the words. run.sh just loads the venv for this project (requirements.txt based on the streaming test suite) and then runs app.py, since some of the Python library versions conflict with other things I want to experiment with.
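
run.sh itself is nothing fancy; it's roughly along these lines (sketch only, and the venv directory name is a guess):

#!/bin/bash
# Sketch of run.sh: activate the project's virtualenv, then hand off to app.py.
# The "venv" directory name is an assumption.
cd "$(dirname "$0")" || exit 1
source venv/bin/activate
exec python3 app.py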

I also added my-live-speech-wpm-string to my mode-line-format manually using Customize, since I wanted it displayed on the left side instead of getting lost when I turn keycast-mode on.
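
If you'd rather do it in Lisp than through Customize, something roughly like this should have the same effect (where exactly it goes in mode-line-format is a matter of taste):

;; Roughly equivalent to the Customize change: show the WPM string near
;; the front of the mode line. Marking the variable as risky lets the
;; face set by my-live-speech-wpm-string show through.
(put 'my-live-speech-wpm-string 'risky-local-variable t)
(unless (memq 'my-live-speech-wpm-string (default-value 'mode-line-format))
  (setq-default mode-line-format
                (cons 'my-live-speech-wpm-string
                      (default-value 'mode-line-format))))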

I'm still a little anxious about accidentally leaving a process running, so I check with ps aux | grep python3. Eventually I'll figure out how to make sure everything gets properly stopped when I'm done.
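
One option I haven't tried yet is hooking the cleanup into Emacs shutdown, something like this untested sketch:

;; Untested sketch: stop the speech process when Emacs exits so the
;; Python process doesn't keep running in the background.
(add-hook 'kill-emacs-hook
          (lambda ()
            (when (process-live-p my-live-speech-process)
              (my-live-speech-stop))))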

Anyway, there it is!

Display in speech buffer

(defun my-live-speech-display-in-speech-buffer (recognition-results)
  (with-current-buffer (get-buffer-create my-live-speech-buffer)
    (let-alist recognition-results
      (let* ((pos (point))
             (at-end (eobp)))
        (goto-char (point-max))
        (unless (eolp) (insert "\n"))
        (when .msg
          (insert .msg "\n"))
        (when .transcript
          (insert .transcript "\n"))
        ;; scroll to the bottom if being displayed
        (if at-end
            (when (get-buffer-window (current-buffer))
              (set-window-point (get-buffer-window (current-buffer)) (point)))
          (goto-char pos))))))

(defun my-live-speech-toggle-heading ()
  "Toggle a line as a heading."
  (interactive)
  (with-current-buffer (get-buffer my-live-speech-buffer)
    (display-buffer (current-buffer))
    (with-selected-window (get-buffer-window (get-buffer my-live-speech-buffer))
      (let ((avy-all-windows nil))
        (avy-goto-line 1))
      (org-toggle-heading 1))))

(defun my-live-speech-cycle-visibility ()
  "Get a quick overview."
  (interactive)
  (with-current-buffer (get-buffer my-live-speech-buffer)
    (display-buffer (current-buffer))
    (if (eq org-cycle-global-status 'contents)
        (progn
          (run-hook-with-args 'org-cycle-pre-hook 'all)
          (org-fold-show-all '(headings blocks))
          (setq org-cycle-global-status 'all)
          (run-hook-with-args 'org-cycle-hook 'all))
      (run-hook-with-args 'org-cycle-pre-hook 'contents)
      (org-cycle-content)
      (setq org-cycle-global-status 'contents)
      (run-hook-with-args 'org-cycle-hook 'contents))))

Display words per minute

(defvar my-live-speech-wpm-window-seconds 15 "How many seconds to calculate WPM for.")
(defvar my-live-speech-recent-words nil "Words spoken in the last `my-live-speech-wpm-window-seconds' seconds.")
(defvar my-live-speech-wpm nil "Current WPM.")
(defvar my-live-speech-wpm-colors  ; haven't figured out how to make these work yet
  '((180 :foreground "red")
    (170 :foreground "yellow")
    (160 :foreground "green")))
(defvar my-live-speech-wpm-string nil "Add this somewhere in `mode-line-format'.")
(defun my-live-speech-wpm-string ()
  (propertize
   (format "%d WPM " my-live-speech-wpm)
   'face
   (cdr (seq-find (lambda (row) (> my-live-speech-wpm (car row))) my-live-speech-wpm-colors))))

(defun my-live-speech-display-wpm (recognition-results)
  (let-alist recognition-results
    (when .words
      ;; calculate WPM
      (setq my-live-speech-recent-words
            (append my-live-speech-recent-words .words nil))
      (let ((threshold (- (assoc-default 'end (aref .words (1- (length .words))))
                          my-live-speech-wpm-window-seconds)))
        (setq my-live-speech-recent-words
              (seq-filter
               (lambda (o)
                 (>= (assoc-default 'start o)
                     threshold))
               my-live-speech-recent-words))
        (setq my-live-speech-wpm
              (/
               (length my-live-speech-recent-words)
               (/ (- (assoc-default 'end (aref .words (1- (length .words))))
                     (assoc-default 'start (car my-live-speech-recent-words)))
                  60.0)))
        (setq my-live-speech-wpm-string (my-live-speech-wpm-string))))))

Append to EmacsConf Etherpad

(defvar my-live-speech-etherpad-id nil)
(defun my-live-speech-append-to-etherpad (recognition-results)
  (when my-live-speech-etherpad-id
    (emacsconf-pad-append-text my-live-speech-etherpad-id (concat " " (assoc-default 'transcript recognition-results)))))

This is part of my Emacs configuration.

Emacs Conf video tech notes: jit.si, twitch.tv, livestreamer, ffmpeg

| emacs, geek, ffmpeg, streaming

Last week’s Emacs Conf was fantastic. There were lots of people at the in-person event in San Francisco, and people could also watch the stream through twitch.tv and ask questions through IRC. There were remote speakers and in-person speakers, and that mix even worked for the impromptu lightning talks sprinkled throughout the day.

This is how the tech worked:

  • Before the conference started, the organizers set up a laptop for streaming on twitch.tv/emacsconf. This was hooked up to the main display (a large television with speakers). They also configured the account to record and archive videos. In the free account, recorded videos are available for 14 days.
  • Remote speakers were brought in using the Jitsi open source video conferencing system, using the public servers at meet.jit.si. This was on the same computer that did the twitch.tv streaming, so people watching the stream could see whatever was shared through Jitsi. Organizers read out questions from the in-person audience and from the IRC channel. The audio from Jitsi wasn’t directly available through twitch.tv, though; instead, it was picked up by the laptop’s microphone.
  • Local speakers either used the streaming laptop to go to a specific webpage they wanted to talk about, or joined the Jitsi web conference using Google Chrome or Chromium so that they could share their screen. The organizers muted the second Jitsi client to avoid audio feedback loops.

That worked out really well. There were more than a hundred remote viewers. As one of them, I can definitely rate the experience as surprisingly smooth.

All that’s left now is to figure out how to make a more lasting archive of the Emacs Conf videos. As it turns out, twitch.tv and other online tools don’t make it easy to download stream recordings that are longer than three hours. Fortunately, livestreamer can handle the job. Here’s what I did to download the stream data from one of the recordings of EmacsConf:

livestreamer -o emacsconf-1.ts --hls-segment-threads 4 http://www.twitch.tv/emacsconf/v/13421774 best
ffmpeg -i emacsconf-1.ts -acodec copy -absf aac_adtstoasc -vcodec copy emacsconf-1.mp4

I normally use Camtasia Studio to edit videos, but for some reason, it kept flaking out on me today. After the umpteenth crash, I decided to keep things simple by using ffmpeg to extract the relevant part of the video. To extract a segment, you can use -ss to specify the start time and -t to specify the duration. Here’s a sample command:

ffmpeg -i emacsconf-1.mp4 -ss 1:18:06.11 -t 0:03:32.29 -c:v copy -c:a copy emacsconf-engine-mode.mp4

Your version of ffmpeg might have a -to option, which would let you specify the end time instead of using -t to specify duration.
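
For example, the same clip with an explicit end time would look something like this (the end time is just the start time plus the duration from above):

ffmpeg -i emacsconf-1.mp4 -ss 1:18:06.11 -to 1:21:38.40 -c:v copy -c:a copy emacsconf-engine-mode.mp4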

I’m coordinating with the other organizers to see if there’s a better way to process the videos, so that’s why we haven’t released them publicly yet. (Soon!) It would be nice to improve the audio, especially for some of the talks, and maybe it would be good to add overlays or zoom in as well. The on-site organizers captured backup videos and screen recordings, too, so we might want to edit some of those clips into the streamed recording. One of the organizers has access to better video editing tools, so we’ll try that out.

Anyway, those were the commands that helped me get started with command-line conversion and editing of Twitch.tv recorded videos. Hope they come in handy for other people too.

For more info about EmacsConf 2015, check out http://emacsconf2015.org/. There’ll probably be an announcement there once the videos are up. =)

Hat tip to Reddit and superuser.com for tips.

My new Google Hangouts On Air checklist, plus upcoming Nov 29 Q&A on learning

| process, youtube, streaming

Google Hangouts On Air is a quick, free way to have a videocast with up to 10 participants and as many passive watchers as you want, thanks to streaming through YouTube. The stream is about 20 seconds delayed and the commenting interface is still kinda raw, but as a quick way to set up and broadcast video chats, it’s hard to beat that.

I picked up a lot of great ideas from Pat Flynn’s first Q&A Hangout. He used ChatWing to set up a chat room that everyone could join, and the experience was much smoother than using CommentTracker or something like that.

Here’s my new Google Hangouts On Air workflow for the Emacs Chats series I’ve been doing. Since the Emacs crowd is fairly technical, I used IRC as my chat room, with a web interface for others who didn’t have an IRC client handy. (Naturally, I used ERC to chat on IRC from within Emacs.) Having other people around worked out really well, because I could take a break and ask other people’s questions. =)

2013-11-03 My new Google Hangouts workflow

The other new thing I tried this time around was starting the broadcast really early (like, half an hour early) and setting it to share my screen with the coming-soon information, which meant that I could post the streaming URL in lots of different places.

I’d like to expand this to doing regular Hangouts On Air Q&As or conversations. How about we use the sach.ac/live URL to point to the next Hangout On Air I’ve scheduled? As of this writing, this will be a Q&A on November 29 on learning and note-taking. We’ll probably stream it over YouTube and have a chat room for discussion/Q&A. Want to pick my brain? See you then!

(Want more one-on-one help? Book a Helpout session – there’s a nominal charge to keep slots available instead of letting no-shows book them all.)