Topic - Workflows
Workflows described elsewhere:
Most of the functions are in my Emacs configuration.
Capturing and processing thoughts
On the go:
- I add the thought as a note in my Inbox.org using Orgzly Revived.
- Syncthing synchronizes it with my laptop.
From within Emacs, I use org-capture to save the note to my Inbox.org.
From my web browser, I select the text and use org-capture-extension to save it.
Eventually, I review my Inbox.org:
- I flesh out some thoughts or delete them.
- Sometimes I use org-refile to move a thought somewhere.
- Sometimes I turn something into a blog post right in my inbox.
Reading books
Toronto Public Library has sooooo many books. I rarely buy books since my backlog of borrowable books is pretty much infinite.
Reading e-books:
- I like using Libby to read books on my iPad. I highlight interesting quotes and key concepts as I read.
- Then I export the book highlights as JSON and format them as Org Mode so that I can add them to my books.org file. This organizes them by chapter.
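The JSON-to-Org step could look something like this. This is a rough sketch: the field names (title, highlights, chapter, quote, note) are guesses, and Libby's actual export format may differ, so adjust the property access to match the real JSON.

```javascript
// Sketch: convert exported book highlights (JSON) into an Org Mode
// subtree, grouped by chapter. The input shape here is an assumption;
// adjust the field names to match what Libby actually exports.
function libbyToOrg(book) {
  const byChapter = new Map();
  for (const h of book.highlights) {
    const chapter = h.chapter || 'Uncategorized';
    if (!byChapter.has(chapter)) byChapter.set(chapter, []);
    byChapter.get(chapter).push(h);
  }
  const lines = [`* ${book.title}`];
  for (const [chapter, highlights] of byChapter) {
    lines.push(`** ${chapter}`);
    for (const h of highlights) {
      lines.push('#+begin_quote', h.quote, '#+end_quote');
      if (h.note) lines.push(`- Note: ${h.note}`);
    }
  }
  return lines.join('\n');
}
```

The Map preserves insertion order, so chapters come out in the order they first appear in the export.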
When I read paper books, I type or handwrite short quotes. If I want to save a long quote and I don't feel like typing it in:
- I use the camera on my Android phone to focus on the text.
- I use Google Lens to recognize the text.
- I select the text and share it with Orgzly Revived.
- Syncthing synchronizes my phone with my computer.
- When I'm back in Org Mode in Emacs, I copy it to my books.org file.
Once I have the raw notes in books.org:
- I select quotes to share in Mastodon toots or blog posts.
- I often make a book sketchnote. Sometimes I draw these as I read the book, and sometimes I do them afterwards.
I have a script to help me manage the three library cards we have. It renews library books and lists the ones I need to return.
NodeJS+Puppeteer script for renewing Toronto Public Library books
/**
   Totally rough script for managing multiple library cards and
   searching/requesting books from the Toronto Public Library. Might be
   too idiosyncratic to use as is, but could be a starting point for
   your own script.
   -----------------------------------------------------------------------------------
   Copyright 2022 Sacha Chua <sacha@sachachua.com>

   Permission is hereby granted, free of charge, to any person obtaining
   a copy of this software and associated documentation files (the
   "Software"), to deal in the Software without restriction, including
   without limitation the rights to use, copy, modify, merge, publish,
   distribute, sublicense, and/or sell copies of the Software, and to
   permit persons to whom the Software is furnished to do so, subject to
   the following conditions:

   The above copyright notice and this permission notice shall be
   included in all copies or substantial portions of the Software.

   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
   MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
   NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
   BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
   ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
   CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
   SOFTWARE.
*/
const puppeteer = require('puppeteer');
const moment = require('moment');
const process = require('process');
const fetch = require('node-fetch');
const cheerio = require('cheerio');
const readline = require('readline');
const fs = require('fs');
require('dotenv').config();

var cards;
if (process.env['CARD_FILE']) {
  cards = JSON.parse(fs.readFileSync(process.env['CARD_FILE']));
} else {
  throw 'CARD_FILE should point to a JSON file with an array of the form [{"id": "00000000000000", abbrev: "nickname", pin: "0000", limit: 50}, ...]';
}

var command = process.argv[2];
// Renew items that are due within this number of days, or before a specified date (YYYY-MM-DD)
var threshold = process.env['LIBRARY_THRESHOLD'] || '2';
var debug = process.env['DEBUG'];
const jsonFile = process.env['JSON_FILE']; // file to write library data into in JSON format
const orgFile = process.env['ORG_FILE']; // file to write Org Mode format info to
const searchParams = 'N=37751+37918+20206&Ns=p_pub_date_sort&Nso=0'; // regular English books that can be checked out, sorted by pub date

var today = moment().format('YYYY-MM-DD');
if (!threshold) { threshold = '2'; }
if (threshold.match(/^[0-9]+$/)) {
  threshold = moment().add(threshold, 'days').format('YYYY-MM-DD');
}

exports.login = async function(page, card) {
  console.debug('logging in...');
  await page.goto('https://account.torontopubliclibrary.ca/signin?redirect=%2Fcheckouts');
  await page.waitForSelector('#userID');
  try {
    console.debug('entering info...');
    await page.click('#userID');
    await page.keyboard.type(card.id);
    await page.click('#password');
    await page.keyboard.type(card.pin);
    await page.click('#form_signin .button');
    await page.waitForNavigation({ waitUntil: 'networkidle2' }).catch(err => {});
    console.debug('ready...');
    if (await page.$('#sign-out-link').catch(err => {})) {
      return true;
    }
  } catch (err) {
    console.log(err);
  }
};

exports.renew = async function(page, itemIDs) {
  console.debug('renewing...');
  if (itemIDs.length == 0) return [];
  let list = [];
  for (let i = 0; i < itemIDs.length; i++) {
    let selector = 'label[for=item_' + itemIDs[i] + ']';
    await page.waitForSelector(selector);
    await page.evaluate((selector) => {
      document.querySelector(selector).closest('tr').querySelector('button').click();
    }, selector);
    await page.waitForResponse((resp) => resp.url().match(/renewals/)).then(async (res) => {
      let data = await res.json();
      list.push(data);
    });
  }
  return list;
};

function sortByDue(a, b) {
  if (a.dueDate < b.dueDate) return -1;
  if (a.dueDate > b.dueDate) return 1;
  return 0;
}

function summarizeData(card) {
  if (!card.data) { card.data = {}; }
  if (card.data.holds) {
    card.data.readyForPickup = card.data.holds.filter((o) => o.status == 'READY');
  } else {
    card.data.readyForPickup = [];
  }
  if (card.data.checkouts) {
    card.data.checkouts = card.data.checkouts.map((o) => {
      o.abbrev = card.abbrev;
      o.toReturn = (o.dueDate <= ((o.item.format.type == 'DVD') ? today : threshold));
      return o;
    });
    card.data.returns = card.data.checkouts.filter(o => o.toReturn).sort(sortByDue);
  } else {
    card.data.returns = [];
  }
  return card;
}

async function logInAndGetData(page, card) {
  let userData = {};
  const getUserData = new Promise((resolve, reject) => {
    page.on('response', async (resp) => {
      if (!resp.url().match(/rest/)) { return false; }
      const url = resp.url();
      if (url.match(/charges$/)) {
        console.debug('got charge data...');
        let m = url.match(/users\/([0-9]+)\//);
        userData.systemID = m[1];
        userData.charges = await resp.json();
      } else if (url.match(/notifications$/)) {
        console.debug('got notifications...');
        userData.notifications = await resp.json();
      } else if (url.match(/holds\/current$/)) {
        console.debug('got holds...');
        userData.holds = await resp.json();
      } else if (url.match(/checkouts$/)) {
        console.debug('got checkouts...');
        userData.checkouts = await resp.json();
      } else if (url.match(/renewals$/)) {
        console.debug('got renewals...');
        let data = await resp.json();
        if (!data?.errorCode) {
          userData.checkouts = userData.checkouts.map((o) => {
            if (o.id == data.id) { return data; } else { return o; }
          });
        }
      }
      if (userData.charges && userData.notifications && userData.holds && userData.checkouts) {
        card.data = userData;
        resolve(card);
      }
      return true;
    });
  });
  await exports.login(page, card);
  return await getUserData;
}

async function processUser(page, card) {
  const data = await logInAndGetData(page, card);
  card = summarizeData(card);
  if (card.data.returns?.length > 0) {
    let renewals = await exports.renew(page, card.data.returns.filter((o) => o.renewalsRemaining > 0).map((o) => o.id));
    console.log(renewals);
  }
  return summarizeData(card);
}

const checkoutLimit = 50;

function url(item) {
  return 'https://torontopubliclibrary.ca' + (item.url || item.item.url);
}

function formatReport(cards) {
  cards = cards.map(summarizeData);
  let totalPickups = cards.reduce((prev, o) => prev + o.data.readyForPickup.length, 0);
  let totalReturns = cards.reduce((prev, o) => prev + o.data.returns.length, 0);
  let earliestPickup = cards.reduce((prev, o) => {
    return o.data.readyForPickup.reduce((p2, hold) => {
      if (!p2 || hold.readyDateExpiration < p2) {
        return hold.readyDateExpiration;
      } else {
        return p2;
      }
    }, prev);
  }, undefined);
  let otherReturns = cards.reduce((prev, c) => {
    return prev.concat(c.data.checkouts?.filter((o) => !o.toReturn));
  }, []).sort(sortByDue).map(formatCheckout).join("\n");
  let allCheckouts = cards.reduce((prev, o) => prev + o.data?.checkouts?.length, 0);
  let inTransit = cards.reduce((prev, card) => prev.concat(card.data?.holds?.filter((h) => h.intransit).map((o) => {
    o.abbrev = card.abbrev;
    return o;
  })), []).map((o) => `${o.abbrev} ${o.item?.title} ${url(o)}`).join("\n");
  let holds = cards.reduce((prev, o) => prev.concat(o.data?.holds.map((h) => {
    h.abbrev = o.abbrev;
    return h;
  })), []);
  let deadline = moment(earliestPickup).format('YYYY-MM-DD ddd');
  let pickupByUser = cards.map((c) => {
    let space = (c.limit || checkoutLimit) - c.data.checkouts?.length || 0;
    let spaceReport = (c.data.checkouts ? ((space > 0) ? space : ' NO SPACE') : 'UNKNOWN');
    let holds = c.data.readyForPickup?.map((hold) => `- ${moment(hold.readyDateExpiration).diff(today, 'days')} ${hold.item.title} ${url(hold)}`).join("\n") || '';
    let returns = c.data.returns?.map(formatCheckout).join("\n") || '';
    let activeHolds = c.data?.holds?.filter((h) => h.status == 'PENDING');
    return `** ${c.abbrev}: ${c.data.readyForPickup.length} / ${space} - transit ${c.data?.holds?.filter((h) => h.intransit).length} - active ${activeHolds.length} - return ${c.data.returns.length} - checked out ${c.data.checkouts.length} - holds ${c.data.holds.length}${holds ? "\n** Pick up\n" + holds : ''}${returns ? "\n** Returns\n" + returns : ''}`;
  }).join("\n\n");
  return `* Library
DEADLINE: <${deadline}>
Date of report: ${today}
Total for pickup: ${totalPickups}${earliestPickup ? ' in ' + moment(earliestPickup).diff(today, 'days') : ''} - return ${totalReturns} - checked out: ${allCheckouts}
${pickupByUser}${inTransit ? `\n* In transit\n${inTransit}` : ''}
* Other returns
${otherReturns}
* Active holds
${holds.filter((o) => o.status == 'PENDING').map((o) => `- ${o.abbrev} ${o.item.title} ${o.queuePosition}/${o.queueLength}x${o.item.circulatingCopies} ${url(o)}`).join("\n")}
`;
}

function formatCheckout(o) {
  return `- [ ] ${o.abbrev} ${moment(o.dueDate).diff(today, 'days')}-${o.renewalsRemaining || 'X'} ${o.item.title} ${o.id} https://torontopubliclibrary.ca${o?.item?.url}`;
}

async function scrapeData() {
  const browser = await puppeteer.launch({ headless: debug != 'chrome', args: ['--no-sandbox'] });
  for (let card of cards) {
    const context = await browser.createIncognitoBrowserContext();
    const page = await context.newPage();
    await processUser(page, card);
  }
  browser.close();
  if (jsonFile) {
    fs.writeFileSync(jsonFile, JSON.stringify(cards));
  }
  return cards;
}

async function search(text) {
  let page = 0;
  const maxPages = 10;
  const process = async function(url) {
    const body = await fetch(url).then((res) => res.text());
    const $ = cheerio.load(body);
    const links = $('.title a').map(function() {
      return '- ' + $(this).attr('href').replace(/^.*R=/, '') + "\t"
        + $(this).text().trim().replace(' : ', ': ') + "\t"
        + $(this).closest('.description').find('.format-year').text().trim().replace(/[ \t\r\n]+/g, ' ')
        + ' https://www.torontopubliclibrary.ca' + $(this).attr('href');
    }).toArray();
    if ($('li.pagination-next a').length > 0) {
      page = page + 1;
      if (page < maxPages) {
        let remainder = await process('https://www.torontopubliclibrary.ca' + $('li.pagination-next a').attr('href'));
        return links.concat(remainder);
      } else {
        return links;
      }
    } else {
      return links;
    }
  };
  let results;
  if (text.match(/http/)) {
    console.debug(text);
    results = await process(text);
  } else {
    results = await process('https://www.torontopubliclibrary.ca/search.jsp?' + searchParams + '&Ntt=' + encodeURIComponent(text));
  }
  console.log(results.reverse().join("\n"));
}

// https://stackoverflow.com/questions/10623798/how-do-i-read-the-contents-of-a-node-js-stream-into-a-string-variable
function streamToString(stream) {
  const chunks = [];
  return new Promise((resolve, reject) => {
    stream.on('data', (chunk) => chunks.push(Buffer.from(chunk)));
    stream.on('error', (err) => reject(err));
    stream.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
  });
}

async function readTitleIDs() {
  const input = await streamToString(process.stdin);
  return [...input.split("\n").map((o) => {
    let m = o.match(/^(?:[- \t]*)([0-9]+)/);
    return m && m[1];
  }).filter((o) => o)];
}

async function requestItems(card) {
  const titleIDs = await readTitleIDs();
  const browser = await puppeteer.launch({ headless: debug != 'chrome', args: ['--no-sandbox'] });
  const page = await browser.newPage();
  const data = await logInAndGetData(page, card);
  await titleIDs.reduce(async (prev, val) => {
    await prev;
    const url = 'https://www.torontopubliclibrary.ca/placehold?titleId=' + val;
    console.log(url);
    try {
      await page.goto(url);
      await page.waitForSelector('#hold-button input');
      await page.click('#hold-button input');
      return await page.waitForNavigation({ waitUntil: 'networkidle2' }).catch(err => { console.log(err); });
    } catch (err) {
      console.log(err);
    }
  }, Promise.resolve());
  await browser.close();
}

if (require.main === module) {
  (async () => {
    if (!command || command == 'help') {
      console.log(`scrape: load data from library website, renew items
search <keywords>...: search for English-language books
request <abbrev>: take a text file with lines starting with IDs (as from search) and request them using the card that matches ABBREV
report: print cached data
If no operation is specified, print out a report of currently-saved books.
DEBUG=1 is handy because sometimes Puppeteer gets stuck, so it's nice to be able to click on buttons.
`);
      return;
    } else if (command == 'scrape') {
      await scrapeData();
      let report = formatReport(cards);
      if (process.env['ORG_FILE']) {
        fs.writeFileSync(orgFile, report);
      }
      console.log(report);
    } else if (command == 'search') {
      if (process.argv[3].match('http')) {
        await search(process.argv[3]);
      } else {
        await search(process.argv.slice(3).map((o) => '"' + o + '"').join(' '));
      }
    } else if (command == 'request') {
      const abbrev = process.argv[3];
      await requestItems(cards.find((o) => o.abbrev == abbrev));
    } else if (command == 'report') {
      if (jsonFile) {
        try {
          let cards = require(jsonFile);
          let report = formatReport(cards);
          if (orgFile) {
            fs.writeFileSync(orgFile, report);
          }
          console.log(report);
        } catch (err) {
          console.log('Could not read ' + jsonFile + ', please scrape the data first');
        }
      } else {
        console.log('Not caching library data. Please set JSON_FILE and scrape the data.');
      }
    }
  })();
}
Reading blogs
I've been using NetNewsWire on my iPad. It's easy to add new blogs to it with the share command. When I come across an interesting quote, I:
- Use the Share menu to open the post in Google Chrome.
- Select the text and use "Copy Link with Highlight".
- Share the selected text to Ice Cubes for Mastodon.
- Format it as a blockquote with > and add quotation marks.
- Paste in the URL to the text fragment.
- Post the toot.
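"Copy Link with Highlight" produces a URL with a #:~:text= text fragment. Here's a rough sketch of what assembling such a link and formatting the quote as a toot might look like; the helper names are made up for illustration, and real text-fragment links can carry extra prefix/suffix context that this simple version omits.

```javascript
// Sketch: build a text-fragment link (the kind Chrome's "Copy Link
// with Highlight" produces) and format a quote as a blockquote for a
// toot. Helper names are hypothetical.
function textFragmentUrl(pageUrl, quote) {
  // A minimal text fragment: just the quoted text, percent-encoded.
  return pageUrl + '#:~:text=' + encodeURIComponent(quote);
}

function formatToot(quote, pageUrl) {
  // Wrap the quote in quotation marks, prefix each line with "> ",
  // then append the link to the text fragment.
  const blockquote = ('"' + quote + '"').split('\n').map((l) => '> ' + l).join('\n');
  return blockquote + '\n\n' + textFragmentUrl(pageUrl, quote);
}
```

For example, formatToot('hello world', 'https://example.com/post') yields a "> " blockquote followed by https://example.com/post#:~:text=hello%20world.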
I set up NetNewsWire to synchronize with FreshRSS in a Docker container. Sometimes I use FreshRSS's web interface on my Linux laptop. I haven't yet figured out how to get elfeed and elfeed-protocol to properly synchronize with FreshRSS via the Fever protocol.
I like using minifeed.net and indieblog.page/all to find new blogs.
Drawing, processing my sketches, and writing about them
Lately, I've been drawing my sketches using Noteful on my iPad. I use this coloured dot grid I made for the SuperNote A5X as the background, and I have a landscape version as well.
I use a sketch ID of the form YYYY-MM-DD-NN. The number comes from my journaling system and allows me to uniquely identify the images.
I usually export the current page as an image and save it to Dropbox so that it gets synchronized with my laptop. Sometimes if I want to be able to resize, recolour, or animate elements, I'll export it as a PDF so that I can convert it to PNG.
Once I have the image file, I process it using Emacs Lisp. The code is generally in the Supernote section of my configuration. I can open the latest sketch with the my-latest-sketch function that checks my ~/Dropbox/sketches and ~/Dropbox/Supernote/EXPORT directories for the latest sketch, or I can just process it directly with my-supernote-process-latest. Here's how the code processes the sketch:
- my-image-recognize: Use Google Cloud Vision to recognize the handwriting and convert it to text. Save the results in a .txt with the same base name as the image file.
- my-sketch-rename: If the file has an ID, rename the file based on the journal entry with that ID.
- Is it an SVG or PDF? (Convert PDFs to SVG using pdftocairo.) my-sketch-svg-prepare:
  - Remove backgrounds
  - Change colour references to hex
  - Add a solid white background
  - Change fill attributes to style
  - Replace colours (useful for converting grayscale sketches from the Supernote)
- Is it a PNG or JPG?
  - Replace colours (useful for converting grayscale sketches from the Supernote)
  - Rotate and crop as needed.
- my-image-store: If the file is properly renamed and tagged, store the file in my ~/sync/private-sketches directory if it has the private tag, or in my ~/sync/sketches directory otherwise. Syncthing copies the public directory to the server.
- Recreate the JSON for sketches.sachachua.com based on the synchronized sketches so that it's available from my sketch viewer.
Once the JSON has been recreated, sketches.sachachua.com/YYYY-MM-DD-NN can redirect to sketches based on their IDs (ex: https://sachachua.com/2025-02-26-06).
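Rebuilding that JSON could look something like this sketch, which maps sketch IDs (YYYY-MM-DD-NN) to filenames so an ID can redirect to the matching image. The filename convention and index shape here are assumptions, not the actual implementation.

```javascript
// Sketch: build a JSON-able index from sketch filenames that start
// with an ID of the form YYYY-MM-DD-NN, so that an ID like
// 2025-02-26-06 can be resolved to the matching sketch file.
function buildSketchIndex(filenames) {
  const index = {};
  for (const name of filenames) {
    const m = name.match(/^(\d{4}-\d{2}-\d{2}-\d{2})/);
    if (m) index[m[1]] = name; // later files with the same ID win
  }
  return index;
}
```

A redirect handler could then look up the requested path's ID in this index and send the browser to the stored filename.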
my-supernote-process-latest also opens the associated .txt file with the converted text. I edit that. Useful shortcuts:
- M-<up> and M-<down>: move paragraphs around
- M-S-<up> and M-S-<down>: move lines around
- M-O: I've mapped this to join-line.
After I correct the text, I usually draft a blog post using my-write-about-sketch. This creates an Org Mode subtree in my posts.org file with a reference to the image and a details block that includes the text. I use the custom my_details block I defined using org-special-block-extras so that the text is inside a <details><summary>...</summary>...</details> element.
From there, it's just the usual blog post workflow. After I publish the blog, sachachua.com/YYYY-MM-DD-NN redirects to the blog post instead. (ex: https://sachachua.com/2025-03-26-01)
Writing a blog post
I draft blog posts using Org Mode in Emacs. I usually start writing them in either my posts.org or my inbox.org.
I like to add lots of links. I have custom Org Mode link types to make it easy to link to blog posts, sections of my Emacs configuration, project files, sketches, Emacs Lisp functions, and more. Then I can use C-c l (which I've bound to org-store-link) to store the link and C-c C-l to insert it.
When I'm ready to post, I use C-c e s 1 1 to export using my-org-11ty-export, which wraps around ox-11ty.el.
I use the 11ty static site generator to turn those HTML and JSON files into a blog with reverse-chronological archives, category pages, feeds, and so on.
Here's my rough workflow:
- Keep npx eleventy --serve running in the background, using .eleventyignore to make rebuilds reasonably fast. (serve in this Makefile)
- Export the subtree with C-c e s 1 1, which uses org-export-dispatch to call my-org-11ty-export with the subtree.
- After about 10 seconds, use my-org-11ty-copy-just-this-post and verify.
- Use my-mastodon-11ty-toot-post to compose a toot. Edit the toot and post it.
- Check that the EXPORT_MASTODON property has been set.
- Export the subtree again, this time with the front matter.
- Publish my whole blog: (generate-all in this Makefile)
  - Replace the contents of the .eleventyignore file with a shorter one.
  - Generate the blog.
  - Update the index using Pagefind.
  - Rsync to the server.
  - Restart the Nginx server just in case.
  - Restore the longer .eleventyignore file so that dev builds are faster.
Compiling Emacs News
For Emacs News, my workflow goes like this:
- Upvote posts on Reddit.
- Add YouTube videos to a playlist.
- Update the Emacs Calendar.
- Generate the Emacs News list for the week by evaluating the block after "Actually generate the section".
- Announce GNU ELPA packages on info-gnu-emacs using my-announce-elpa-package.
- Collect links from Mastodon.
- Collect links from other sites.
- Check for duplicates with my-emacs-news-check-duplicates.
- Categorize links with my-org-categorize-emacs-news/body.
- Review and incorporate some of the emacs-devel links e-mailed by Andrés Ramiréz.
- Publish the Emacs News blog post.
- Publish the news as plain text, HTML, and attached Org file using my-org-share-news.
- Use my-tweet-emacs-news to post on Mastodon. I usually post on Bluesky as well. I'm probably going to phase out posting on X.
Emacs News functions are generally defined in sachac/emacs-news: Weekly Emacs news (index.org).
Updating emacs.tv
- Select the subtree for the latest Emacs News.
- Call M-x emacstv-add-from-org.
- Open videos.org.
- Add tags if I feel like organizing things.
- Use M-x emacstv-build to sort the file and update various feeds.
- npm run build
- git commit -m "update" -a; git push
Routines
- Daily
- Get the kiddo through the day
- Play the piano
- Go for a long walk or bike ride
- Write or draw
- Draw moment of the day using Noteful on my iPad
- Update my web-based journal
- Check on my mom
- Weekly
- Mondays:
- Saturdays:
- Make sure my time records look sensible (no open-ended records, etc.)
- my-org-prepare-weekly-review
- Other time during the week
- Check in with my consulting client
- Monthly
- my-org-prepare-monthly-review
- Update now page
- Prepare invoice for consulting
- Yearly
- August:
- Write and draw an annual review
- February:
- Write and draw an annual review for A+
- Celebrate A+'s birthday
Life
- I like using a Load 75 cargo bike to get around with the kiddo. I usually bring a Bakkie Bag so that I can tow A+'s bike when she gets tired.
- I'm slowly working on an Org Mode inventory of various things in the house.
- W- and I use OurGroceries to coordinate the grocery list.