worth doing even if you don't feel like you can draw well
really, I just draw stick figures
good for your own thoughts and other people's
own thoughts:
non-linear
visual metaphors & organizers can be helpful
can be a launchpad for more details
other people's thoughts: distill key points from a talk, book, etc. using my understanding
visual cues make it easy to see important things first
doodling is fun
IDs help with linking (ex: 2024-10-17-02)
How I use sketchnotes:
Flesh out an idea, especially during non-computer time
Sketch talks or books to make them easier to review
Optical character recognition (Google Cloud Vision API, etc.) to blog text: I edit this to provide a good text alternative in blog posts
My evil plan
Sketchnotes are very shareable
People are always looking for visuals to add.
When people share them, they usually tell me about it
I get to find out what else people are thinking about & learning from.
More learning! More fun!
It's also a nice way to give back to people who've shared what they learned
Then they might share more!
I've been enjoying using sketchnotes as an idea
launchpad for audio braindumps or blog posts, as a
quick way to review the key points of a book or
talk, and as a way to participate in the larger
conversation. It's easy for me to link to sketches
and extract the text within them.
Someday I'll probably improve my ability to search
for the text within sketches. Right now, I just go
by filenames and the text in my blog posts. I can
probably make something that goes through the text
annotations in the JSON files from Google Cloud
Vision, or maybe I can turn them into a text file
that can be updated when I write a blog post. Hmm,
that actually sounds pretty straightforward, I
should go do that…
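For example, a first pass could be as simple as the following sketch, which assumes the Google Cloud Vision JSON files sit next to the sketches and have a top-level fullTextAnnotation field (the function name, directory, and Org-style output are just placeholders, not my actual workflow):

;; Rough sketch, not a polished workflow: collect the recognized text
;; from the Google Cloud Vision JSON files into one searchable file.
;; The function name and paths here are placeholders.
(defun my-sketch-text-index (dir output-file)
  "Write the fullTextAnnotation text of every JSON file in DIR to OUTPUT-FILE."
  (interactive "DSketch directory: \nFOutput file: ")
  (with-temp-file output-file
    (dolist (json-file (directory-files dir t "\\.json$"))
      (let* ((data (with-temp-buffer
                     (insert-file-contents json-file)
                     (json-parse-buffer :object-type 'alist)))
             (text (alist-get 'text (alist-get 'fullTextAnnotation data))))
        (when text
          (insert "* " (file-name-base json-file) "\n" text "\n"))))))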
tldr (2167 words): I can make animating presentation maps easier by
writing my own functions for the Emacs text editor. In this post, I
show how I can animate an SVG element by element. I can also add IDs
to the paths and use CSS to build up an SVG with temporary highlighting
in a Reveal.js presentation.
Text from the sketch
PNG: Inkscape: trace
Supernote (e-ink)
iPad: Adobe Fresco
Convert PDF to SVG with Inkscape (Cairo option) or pdftocairo
PNG / Supernote PDF: Combined shapes. Process
Break apart, fracture overlaps
Recombine
Set IDs
Sort paths -> Animation style 1
Adobe Fresco: individual elements in order; landscape feels natural
Animation styles
Animation style 1: Display elements one after another
Animation style 2: Display elements one after another, and also show/hide highlights
Table: slide ID, IDs to add, temporary highlights -> Reveal.js: CSS with transitions
Ideas for next steps:
Explore graphviz & other diagramming tools
Frame-by-frame SVGs
on include
write to files
FFmpeg crossfade
Recording Reveal.js presentations
Use OCR results?
I often have a hard time organizing my thoughts into a linear
sequence. Sketches are nice because they let me jump around and still
show the connections between ideas. For presentations, I'd like to
walk people through these sketches by highlighting different areas.
For example, I might highlight the current topic or show the previous
topics that are connected to the current one. Of course, this is
something Emacs can help with. Before we dive into it, here are quick
previews of the kinds of animation I'm talking about:
Getting the sketches: PDFs are not all the same
Let's start with getting the sketches. I usually export my sketches as
PNGs from my Supernote A5X. But if I know that I'm going to animate a
sketch, I can export it as a PDF. I've recently been experimenting
with Adobe Fresco on the iPad, which can also export to PDF. The PDF I
get from Fresco is easier to animate, but I prefer to draw on the
Supernote because it's an e-ink device (and because the kiddo usually
uses the iPad).
If I start with a PNG, I could use Inkscape to trace the PNG and turn
it into an SVG. I think Inkscape uses autotrace behind the scenes. I
don't usually put my highlights on a separate layer, so autotrace will
make odd shapes.
It's a lot easier if you start off with vector graphics in the first
place. I can export a vector PDF from the SuperNote A5X and either
import it into Inkscape using the Cairo option or use the command-line
pdftocairo tool.
I've been looking into using Adobe Fresco, which is a free app
available for the iPad. Fresco's PDF export can be converted to an SVG
using Inkscape or pdftocairo. What I like about the output of this
app is that it gives me individual elements as their own paths and
they're listed in order of drawing. This makes it really easy to
animate by just going through the paths in order.
Animation style 1: displaying paths in order
Here's a sample SVG file that pdftocairo creates from an Adobe Fresco
PDF export:
Adobe Fresco also includes built-in time-lapse, but since I often like
to move things around or tidy things up, it's easier to just work with
the final image, export it as a PDF, and convert it to an SVG.
I can make a very simple animation by setting the opacity of all the
paths to 0, then looping through the elements to set the opacity back
to 1 and write that version of the SVG to a separate file.
From how-can-i-generate-png-frames-that-step-through-the-highlights:
my-animate-svg-paths: Add one path at a time. Save the resulting SVGs to OUTPUT-DIR.
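Here's a simplified sketch of that approach (not the exact function): set every path's opacity to 0, then reveal the paths one by one, saving a numbered SVG frame after each step.

;; Simplified sketch of the idea behind my-animate-svg-paths:
;; hide all the paths, then reveal them one at a time,
;; saving a numbered SVG frame after each one.
(defun my-animate-svg-paths-sketch (filename output-dir)
  "Add one path at a time from FILENAME.  Save the resulting SVGs to OUTPUT-DIR."
  (let* ((dom (car (xml-parse-file filename)))
         (paths (dom-by-tag dom 'path))
         (i 0))
    (dolist (path paths)
      (dom-set-attribute path 'opacity "0"))
    (dolist (path paths)
      (dom-set-attribute path 'opacity "1")
      (setq i (1+ i))
      (with-temp-file (expand-file-name (format "frame-%03d.svg" i) output-dir)
        (svg-print dom)))))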
Neither Supernote nor Adobe Fresco give me the original stroke
information. These are filled shapes, so I can't animate something
drawing it. But having different elements appear in sequence is fine
for my purposes. If you happen to know how to get stroke information
out of Supernote .note files, or know of an iPad app that exports nice
single-line SVGs that have stroke direction, I would love to hear
about it.
Identifying paths from Supernote sketches
When I export a PDF from Supernote and convert it to an SVG, each
color is a combined shape with all the elements. If I want to animate
parts of the image, I have to break it up and recombine selected
elements (Inkscape's Ctrl-k shortcut) so that the holes in shapes are
properly handled. This is a bit of a tedious process and it usually
ends up with elements in a pretty random order. Since I have to
reorder elements by hand, I don't really want to animate the sketch
letter-by-letter. Instead, I combine them into larger chunks like
topics or paragraphs.
The following code takes the PDF, converts it to an SVG, recolours
highlights, and then breaks up paths into elements:
my-sketch-convert-pdf-and-break-up-paths: Convert PDF to SVG and break up paths.
(defun my-sketch-convert-pdf-and-break-up-paths (pdf-file &optional rotate)
  "Convert PDF to SVG and break up paths."
  (interactive (list (read-file-name
                      (format "PDF (%s): "
                              (my-latest-file "~/Dropbox/Supernote/EXPORT/" "pdf"))
                      "~/Dropbox/Supernote/EXPORT/"
                      (my-latest-file "~/Dropbox/Supernote/EXPORT/" "pdf")
                      t
                      nil
                      (lambda (s) (string-match "pdf" s)))))
  (unless (file-exists-p (concat (file-name-sans-extension pdf-file) ".svg"))
    (call-process "pdftocairo" nil nil nil "-svg" (expand-file-name pdf-file)
                  (expand-file-name (concat (file-name-sans-extension pdf-file) ".svg"))))
  (let ((dom (xml-parse-file (expand-file-name (concat (file-name-sans-extension pdf-file) ".svg"))))
        highlights)
    (setq highlights (dom-node 'g '((id . "highlights"))))
    (dom-append-child dom highlights)
    (dolist (path (dom-by-tag dom 'path))
      ;; recolor and move
      (unless (string-match (regexp-quote "rgb(0%,0%,0%)") (or (dom-attr path 'style) ""))
        (dom-remove-node dom path)
        (dom-append-child highlights path)
        (dom-set-attribute
         path 'style
         (replace-regexp-in-string
          (regexp-quote "rgb(78.822327%,78.822327%,78.822327%)")
          "#f6f396"
          (or (dom-attr path 'style) ""))))
      (let ((parent (dom-parent dom path)))
        ;; break apart
        (when (dom-attr path 'd)
          (dolist (part (split-string (dom-attr path 'd) "M " t " +"))
            (dom-append-child
             parent
             (dom-node 'path `((style . ,(dom-attr path 'style))
                               (d . ,(concat "M " part))))))
          (dom-remove-node dom path))))
    ;; remove the use
    (dolist (use (dom-by-tag dom 'use))
      (dom-remove-node dom use))
    (dolist (use (dom-by-tag dom 'image))
      (dom-remove-node dom use))
    ;; move the first g down
    (let ((g (car (dom-by-id dom "surface1"))))
      (setf (cddar dom)
            (seq-remove (lambda (o)
                          (and (listp o) (string= (dom-attr o 'id) "surface1")))
                        (dom-children dom)))
      (dom-append-child dom g)
      (when rotate
        (let* ((old-width (dom-attr dom 'width))
               (old-height (dom-attr dom 'height))
               (view-box (mapcar 'string-to-number (split-string (dom-attr dom 'viewBox))))
               (rotate (format "rotate(90) translate(0 %s)" (- (elt view-box 3)))))
          (dom-set-attribute dom 'width old-height)
          (dom-set-attribute dom 'height old-width)
          (dom-set-attribute dom 'viewBox (format "0 0 %d %d" (elt view-box 3) (elt view-box 2)))
          (dom-set-attribute highlights 'transform rotate)
          (dom-set-attribute g 'transform rotate))))
    (with-temp-file (expand-file-name (concat (file-name-sans-extension pdf-file) "-split.svg"))
      (svg-print (car dom)))))
You can see how the spaces inside letters like "o" end up being black.
Selecting and combining those paths fixes that.
If there are shapes that touch, I need to draw lines and fracture the
shapes in order to break them apart.
The end result should be an SVG with the different chunks that I might
want to animate, but I need to identify the paths first. You can
assign object IDs in Inkscape, but this is a bit of an annoying
process since I haven't figured out a keyboard-friendly way to set
object IDs. I usually find it easier to just set up an Autokey
shortcut (or AutoHotkey in Windows) to click on the ID text box so
that I can type something in.
Autokey script for clicking
import time
x, y = mouse.get_location()
# Use the coordinates of the ID text field on your screen; xev can help
mouse.click_absolute(3152, 639, 1)
time.sleep(1)
keyboard.send_keys("<ctrl>+a")
mouse.move_cursor(x, y)
Then I can select each element, press the shortcut key, and type an ID
into the textbox. I might use "t-…" to indicate the text for a map
section, "h-…" to indicate a highlight, and name arrows by
specifying their start and end.
To simplify things, I wrote a function in Emacs that will go through
the different groups that I've made, show each path in a different
color and with a reasonable guess at a bounding box, and prompt me for
an ID. This way, I can quickly assign IDs to all of the paths. The
completion is mostly there to make sure I don't accidentally reuse an
ID, although it can try to combine paths if I specify the ID. It saves
the paths after each change so that I can start and stop as needed.
Identifying paths in Emacs is usually much nicer than identifying them
in Inkscape.
my-svg-identify-paths: Prompt for IDs for each path in FILENAME.
(defun my-svg-identify-paths (filename)
  "Prompt for IDs for each path in FILENAME."
  (interactive (list (read-file-name "SVG: " nil nil
                                     (lambda (f) (string-match "\\.svg$" f)))))
  (let* ((dom (car (xml-parse-file filename)))
         (paths (dom-by-tag dom 'path))
         (vertico-count 3)
         (ids (seq-keep (lambda (path)
                          (unless (string-match "path[0-9]+" (or (dom-attr path 'id) "path0"))
                            (dom-attr path 'id)))
                        paths))
         (edges (window-inside-pixel-edges (get-buffer-window)))
         id)
    (my-svg-display "*image*" dom nil t)
    (dolist (path paths)
      (when (string-match "path[0-9]+" (or (dom-attr path 'id) "path0"))
        ;; display the image with an outline
        (unwind-protect
            (progn
              (my-svg-display "*image*" dom (dom-attr path 'id) t)
              (setq id (completing-read
                        (format "ID (%s): " (dom-attr path 'id))
                        ids))
              ;; already exists, merge with existing element
              (if-let ((old (dom-by-id dom id)))
                  (progn
                    (dom-set-attribute
                     old
                     'd
                     (concat (dom-attr (dom-by-id dom id) 'd)
                             " " ;; change relative to absolute
                             (replace-regexp-in-string "^m" "M"
                                                       (dom-attr path 'd))))
                    (dom-remove-node dom path)
                    (setq id nil))
                (dom-set-attribute path 'id id)
                (add-to-list 'ids id)))
          ;; save the image just in case we get interrupted halfway through
          (with-temp-file filename
            (svg-print dom)))))))
Then I can animate SVGs by specifying the IDs. I can reorder the paths
in the SVG itself so that I can animate it group by group, like the
way that the Adobe Fresco SVGs were animated element by element.
The way it works is that the my-svg-reorder-paths function removes
elements and re-adds them following the specified list of IDs, so
everything's ready to go for step-by-step animation.
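Here's a rough sketch of that reordering idea (just the core, not the full function): detach each listed element and append it again so that document order matches the order I want to reveal things in. It ignores groups and transforms, so treat it as an illustration.

;; Rough sketch of the reordering idea, not the full my-svg-reorder-paths:
;; remove each listed element and append it again so that document order
;; matches the order in IDS.
(defun my-svg-reorder-paths-sketch (filename ids output)
  "Reorder the elements in FILENAME according to IDS and write the result to OUTPUT."
  (let ((dom (car (xml-parse-file filename))))
    (dolist (id ids)
      (let ((node (car (dom-by-id dom id))))
        (when node
          (dom-remove-node dom node)
          (dom-append-child dom node))))
    (with-temp-file output
      (svg-print dom))))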
Animation style 2: Building up a map with temporary highlights
I can also use CSS rules to transition between opacity values for more
complex animations. For my EmacsConf 2023 presentation, I wanted to
make a self-paced, narrated presentation so that people could follow
hyperlinks, read the source code, and explore. I wanted to include a
map so that I could try to make sense of everything. For this map, I
wanted to highlight the previous sections that were connected to the
topic for the current section.
I used a custom Org link to include the full contents of the SVG
instead of just including it with an img tag.
#+ATTR_HTML: :class r-stretch
my-include:~/proj/emacsconf-2023-emacsconf/map.svg?wrap=export html
my-include-export: Export PATH to FORMAT using the specified wrap parameter.
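The my-include link type is my own, so here's only a rough sketch of how such a link can be defined with org-link-set-parameters; the real my-include-export handles the ?wrap=export html option and other parameters, while this version just inlines the file contents when exporting to an HTML-derived backend.

;; Rough sketch of a custom "my-include" link type; the real my-include-export
;; handles the ?wrap=... options. Here we just inline the file contents when
;; exporting to an HTML-based backend such as the one Reveal.js exports use.
(org-link-set-parameters
 "my-include"
 :export (lambda (path _description back-end &optional _info)
           (let ((file (car (split-string path "\\?"))))
             (when (org-export-derived-backend-p back-end 'html)
               (with-temp-buffer
                 (insert-file-contents file)
                 (buffer-string))))))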
I wanted to be able to specify the entire sequence using a table in
the Org Mode source for my presentation. Each row had the slide ID, a
list of highlights in the form prev1,prev2;current, and a
comma-separated list of elements to add to the full-opacity view.
Reveal.js adds a "current" class to the slide, so I can use that as a
trigger for the transition. I have a bit of Emacs Lisp code that
generates some very messy CSS, in which I specify the ID of the slide,
followed by all of the elements that need their opacity set to 1, along
with the highlights that will be shown with an animated transition.
my-reveal-svg-progression-css: Make the CSS.
(defun my-reveal-svg-progression-css (map-progression &optional highlight-duration)
  "Make the CSS.
MAP-PROGRESSION should be a list of lists with the following format:
((\"slide-id\" \"prev1,prev2;cur1\" \"id-to-add1,id-to-add2\") ...)."
  (setq highlight-duration (or highlight-duration 2))
  (let (full)
    (format
     "<style>%s</style>"
     (mapconcat
      (lambda (slide)
        (setq full (append (split-string (elt slide 2) ",") full))
        (format "#slide-%s.present path { opacity: 0.2 }%s { opacity: 1 !important }%s"
                (car slide)
                (mapconcat (lambda (id) (format "#slide-%s.present #%s" (car slide) id))
                           full
                           ", ")
                (my-reveal-svg-highlight-different-colors slide)))
      map-progression
      "\n"))))
Since it's automatically generated, I don't have to worry about it
once I've gotten it to work. It's all hidden in a
results drawer. So this CSS highlights specific parts of the SVG with
a transition, and the highlight changes over the course of a second or
two. It highlights the previous names and then the current one. The
topics I'd already discussed would be in black, and the topics that I
had yet to discuss would be in very light gray. This could give people
a sense of the progress through the presentation.
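For example, with made-up slide and element IDs, the rows of that table end up as a call like this, which returns the <style> block that goes into the results drawer:

;; Hypothetical slide and element IDs, just to show the shape of the data:
;; each entry is ("slide-id" "previous1,previous2;current" "ids-to-add").
(my-reveal-svg-progression-css
 '(("intro" ";t-intro" "t-intro")
   ("overview" "t-intro;t-overview" "t-overview")
   ("details" "t-intro,t-overview;t-details" "t-details,h-details")))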
As a result, as I go through my presentation, the image appears to
build up incrementally, which is the effect that I was going for.
I can test this by exporting only my map slides:
Graphviz, mermaid-js, and other diagramming tools can make SVGs. I
should be able to adapt my code to animate those diagrams by adding
other elements in addition to path. Then I'll be able to make
diagrams even more easily.
Since SVGs can contain CSS, I could make an SVG equivalent of the
CSS rules I used for the presentation, maybe calling a function with
a Lisp expression that specifies the operations (ex:
("frame-001.svg" "h-foo" opacity 1)). Then I could write frames to
SVGs.
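A rough sketch of what that could look like (this doesn't exist yet; it just applies each operation to a fresh copy of the base SVG and writes out the frame, so effects don't accumulate):

;; Sketch of the idea, not working code from my config: each operation is
;; (output-file element-id attribute value); apply it to a copy of the
;; base SVG and write the frame.
(defun my-svg-write-frames-sketch (base-svg operations)
  "Apply OPERATIONS to copies of BASE-SVG, writing one frame per operation."
  (dolist (op operations)
    (let* ((dom (car (xml-parse-file base-svg)))
           (node (car (dom-by-id dom (elt op 1)))))
      (when node
        (dom-set-attribute node (elt op 2) (format "%s" (elt op 3))))
      (with-temp-file (elt op 0)
        (svg-print dom)))))

;; e.g. (my-svg-write-frames-sketch "map.svg" '(("frame-001.svg" "h-foo" opacity 1)))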
FFmpeg has a crossfade filter. With a little bit of figuring out, I
should be able to make the same kind of animation in webm form
that I can include in my regular videos instead of using Reveal.js
and CSS transitions.
I've also been thinking about automating the recording of my
Reveal.js presentations. For my EmacsConf talk, I opened my
presentation, started the recording with the system audio and the
screen, and then let it autoplay the presentation. I checked on it
periodically to avoid the screensaver/energy saving things from
kicking in and so that I could stop the recording when it's
finished. If I want to make this take less work, one option is to
use ffmpeg's "-t" argument to specify the expected duration of the
presentation so that I don't have to manually stop it. I'm also
thinking about using Puppeteer to open the presentation, check when
it's fully loaded, and start the process to record it - maybe even
polling to see whether it's finished. I haven't gotten around to it
yet. Anyhow, those are some ideas to explore next time.
As for animation, I'm still curious about the possibility of
finding a way to access the raw stroke information, if it's even
available, from my Supernote A5X (difficult because it's a
proprietary data format) or finding an app for the iPad that exports
single-line SVGs that use stroke information instead of fill. That
would only be if I wanted to do those even fancier animations that
look like the whole thing is being drawn for you. I was trying to
figure out if I could green screen the Adobe Fresco timelapse videos
so that even if I have a pre-sketch to figure out spacing and remind
me what to draw, I can just export the finished elements. But
there's too much anti-aliasing and I haven't figured out how to do
it cleanly yet. Maybe some other day.
I use Google Cloud Vision's text detection engine to convert my
handwriting to text. It can give me bounding polygons for words or
paragraphs. I might be able to figure out which curves are entirely
within a word's bounding polygon and combine those automatically.
It would be pretty cool if I could combine the words recognized by
Google Cloud Vision with the word-level timestamps from speech
recognition so that I could get word-synced sketchnote animations
with maybe a little manual intervention.
Anyway, those are some workflows for animating sketches with Inkscape
and Emacs. Yay Emacs!
Inspired by Arne Bab (who mentioned being inspired by my sketches)
I've been drawing daily moments since 2023-03-20. Nothing fancy, just
a quick reminder of our day.
I draw while the kiddo watches a bedtime video. Sometimes she suggests
a moment to draw, or flips through the pages and laughs at the
memories.
I also have my text journal (occasionally with photos) and my time
tracker. It doesn't take a lot of time to update them, and I like what
they let me do.
I like this. It makes the path visible. I'm looking forward to seeing
what this is like after years
I used to draw and write monthly reviews. I'd like to get back to
those. They help with the annual reviews, too.
phone: review sketches, jot keywords on phone
computer: draw sketch, braindump, blog
Right now I put 12 days on one A5.
Week? nah, not really needed
More details? longer to review, though. Redirect drawing to monthly notes
Still working on shaping the day/week more proactively. A+ likes to
take the lead, so maybe it's more like strewing.
If you're viewing this on my blog, you might be able to click on the
links below to open them in a viewer and then swipe or use arrow keys
to navigate.
I want to make it easier to process the sketchnotes I make on my
Supernote. I write IDs of the form yyyy-mm-dd-nn to identify my
sketches. To avoid duplicates, I get these IDs from the web-based
journaling system I wrote. I've started putting the titles and tags
into those journal entries as well so that I can reuse them in
scripts. When I export a sketch to PNG and synchronize it, the file
appears in my ~/Dropbox/Supernote/EXPORT directory on my laptop.
Then it goes through this process:
I retrieve the matching entry from my journal
system and rename the file based on the title and tags.
If there's no matching entry, I rename the file based on the ID.
If there are other tags or references in the sketch, I add those to the filename as well.
I recolor it based on the tags, so parenting-related posts are a little purple, tech/Emacs-related posts are blue, and things are generally highlighted in yellow otherwise.
I move it to a directory based on the tags.
If it's a private sketch, I move it to the directory for my private sketches.
If it's a public sketch, I move it to the directory that will eventually get synchronized to sketches.sachachua.com, and I reload the list of sketches after some delay.
#!/usr/bin/python3
# -*- mode: python -*-
# (c) 2022-2023 Sacha Chua (sacha@sachachua.com) - MIT License
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import os
import json
import re
import requests
import time
from dotenv import load_dotenv
# Import the Google Cloud client libraries
from google.cloud import vision
from google.cloud.vision_v1 import AnnotateImageResponse
import sys
sys.path.append("/home/sacha/proj/supernote/")
import recolor  # noqa: E402  # muffles flake8 error about import

load_dotenv()

# Set the folder path where the png files are located
folder_path = '/home/sacha/Dropbox/Supernote/EXPORT/'
public_sketch_dir = '/home/sacha/sync/sketches/'
private_sketch_dir = '/home/sacha/sync/private-sketches/'

# Initialize the Google Cloud Vision client
client = vision.ImageAnnotatorClient()
refresh_counter = 0


def extract_text(client, file):
    json_file = file[:-3] + 'json'
    # TODO Preprocess to keep only black text
    with open(file, 'rb') as image_file:
        content = image_file.read()
    # Convert the png file to a Google Cloud Vision image object
    image = vision.Image(content=content)
    # Extract handwriting from the image using the Google Cloud Vision API
    response = client.document_text_detection(image=image)
    response_json = AnnotateImageResponse.to_json(response)
    json_response = json.loads(response_json)
    # Save the response to a json file with the same name as the png file
    with open(json_file, "w") as f:
        json.dump(json_response, f)


def maybe_rename(file):
    # TODO Match on ID
    json_file = file[:-3] + 'json'
    with open(json_file, 'r') as f:
        data = json.load(f)
    # Extract the text from the json file
    text = data['fullTextAnnotation']['text']
    # Check if the text contains a string matching the regex pattern
    pattern = r'(?<!ref:)[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{2}'
    match = re.search(pattern, text)
    if match:
        # Get the matched string
        matched_string = match.group(0)
        new_name = matched_string
        from_zid = get_journal_entry(matched_string).strip()
        if from_zid:
            new_name = matched_string + ' ' + from_zid
        tags = get_tags(new_name, text)
        if tags:
            new_name = new_name + ' ' + tags
        ref = get_references(text)
        if ref:
            new_name = new_name + ' ' + ref
        print('Renaming ' + file + ' to ' + new_name)
        # Rename the png and json files to the matched string
        new_filename = os.path.join(os.path.dirname(file), new_name + '.png')
        rename_set(file, new_filename)
        return new_filename


def get_tags(filename, text):
    tags = re.findall(r'(^|\W)#[ \n\t]+', text)
    return ' '.join(filter(lambda x: x not in filename, tags))


def get_references(text):
    refs = re.findall(r'!ref:[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{2}', text)
    return ' '.join(refs)


def get_journal_entry(zid):
    resp = requests.get('https://' + os.environ['JOURNAL_USER']
                        + ':' + os.environ['JOURNAL_PASS']
                        + '@journal.sachachua.com/api/entries/' + zid)
    j = resp.json()
    if j and not re.search('^I thought about', j['Note']):
        return j['Note']


def get_color_map(filename, text=None):
    if text:
        together = filename + ' ' + text
    else:
        together = filename
    if re.search(r'#(parenting|purple|life)', together):
        return {'9d9d9d': '8754a1', 'c9c9c9': 'e4c1d9'}  # parenting is purplish
    elif re.search(r'#(emacs|geek|tech|blue)', together):
        return {'9d9d9d': '2b64a9', 'c9c9c9': 'b3e3f1'}  # geeky stuff in light/dark blue
    else:
        return {'9d9d9d': '884636', 'c9c9c9': 'f6f396'}  # yellow highlighter, dark brown


def rename_set(old_name, new_name):
    if old_name != new_name:
        old_json = old_name[:-3] + 'json'
        new_json = new_name[:-3] + 'json'
        os.rename(old_name, new_name)
        os.rename(old_json, new_json)


def recolor_based_on_filename(filename):
    color_map = get_color_map(filename)
    recolored = recolor.map_colors(filename, color_map)
    # possibly rename based on the filename
    new_filename = re.sub(' #(purple|blue)', '', filename)
    rename_set(filename, new_filename)
    recolored.save(new_filename)


def move_processed_sketch(file):
    global refresh_counter
    if '#private' in file:
        output_dir = private_sketch_dir
    elif '#' in file:
        output_dir = public_sketch_dir
        refresh_counter = 3
    else:
        return file
    new_filename = os.path.join(output_dir, os.path.basename(file))
    rename_set(file, new_filename)
    return new_filename


def process_file(file):
    json_file = file[:-3] + 'json'
    # Check if a corresponding json file already exists
    if not os.path.exists(json_file):
        extract_text(client, file)
    if not re.search('[0-9]{4}-[0-9]{2}-[0-9]{2}-[0-9]{2} ', file):
        file = maybe_rename(file)
    recolor_based_on_filename(file)
    move_processed_sketch(file)


def process_dir(folder_path):
    global processed_files
    # Iterate through all png files in the specified folder
    files = sorted(os.listdir(folder_path))
    for file in files:
        if file.endswith('.png') and '_' in file:
            print("Processing ", file)
            process_file(os.path.join(folder_path, file))


def daemon(folder_path, wait):
    global refresh_counter
    while True:
        process_dir(folder_path)
        time.sleep(wait)
        if refresh_counter > 0:
            refresh_counter = refresh_counter - 1
            if refresh_counter == 0:
                print("Reloading sketches")
                requests.get('https://' + os.environ['JOURNAL_USER'] + ':' + os.environ['JOURNAL_PASS']
                             + '@sketches.sachachua.com/reload?python=1')


if __name__ == '__main__':
    # Create a set to store the names of processed files
    processed_files = set()
    if len(sys.argv) > 1:
        if os.path.isdir(sys.argv[1]):
            folder_path = sys.argv[1]
            daemon(folder_path, 300)
        else:
            for f in sys.argv[1:]:
                process_file(f)
    else:
        daemon(folder_path, 300)
I'm contemplating writing some annotation tools to make it easier to
turn the detected text into useful text for searching or writing about,
since the sketches throw off the recognition (misrecognized text,
low confidence) and the columns mess up the line wrapping. Low priority, though.
My handwriting (at least for numbers) is probably simple enough that I
might be able to train Tesseract OCR to process that someday. And who
knows, maybe some organization will release a pre-trained model for
offline handwriting recognition that'll be as useful as OpenAI Whisper
is for audio files. That would be neat!
The SuperNote lets me draw with black, dark gray (0x9d), gray
(0xc9), or white. I
wanted to make it easy to recolor them, since a little splash of
colour makes sketches more fun and also makes them easier to pick out
from thumbnails. Here's the Python script I wrote:
#!/usr/bin/python3
# Recolor PNGs
#
# (c) 2022 Sacha Chua (sacha@sachachua.com) - MIT License
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import numpy as np
import os
import csv
import argparse
from PIL import Image

DARK_GRAY = 0x9d
GRAY = 0xc9
HEADER_GRAY = 0xca
WHITE = 0xfe
color_dict = {}


def color_to_tuple(color_dict, s):
    if s in color_dict:
        s = color_dict[s]
    s = s.lstrip('#')
    if s == '.':
        return (None, None, None)
    elif len(s) == 2:
        return (int(s, 16), int(s, 16), int(s, 16))
    else:
        return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))


def load_color_dict(filename):
    dict = {}
    with open(os.path.expanduser(filename), newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter=',', quotechar='"')
        for row in reader:
            dict[row[0]] = row[1]
    return dict


def remove_grid(input):
    if isinstance(input, str):
        im = Image.open(input).convert('RGB')
    else:
        im = input
    data = np.array(im)
    freq = get_colors_by_freq(input)
    print(freq)
    return Image.fromarray(data)


def map_colors(input, color_map):
    if isinstance(input, str):
        im = Image.open(input).convert('RGB')
    else:
        im = input
    data = np.array(im)
    red, green, blue = data[:, :, 0], data[:, :, 1], data[:, :, 2]
    for from_c, to_c in color_map.items():
        from_r, from_g, from_b = color_to_tuple(color_dict, from_c)
        to_r, to_g, to_b = color_to_tuple(color_dict, to_c)
        mask = (red == from_r) & (green == from_g) & (blue == from_b)
        data[:, :, :3][mask] = [to_r, to_g, to_b]
    return Image.fromarray(data)


def set_colors_by_freq(input, color_list):
    if isinstance(input, str):
        im = Image.open(input).convert('RGB')
    else:
        im = input
    data = np.array(im)
    red, green, blue = data[:, :, 0], data[:, :, 1], data[:, :, 2]
    sorted_colors = get_colors_by_freq(input)
    freq = iter(color_list.split(','))
    for i, f in enumerate(freq):
        if f != '.':
            to_r, to_g, to_b = color_to_tuple(color_dict, f)
            by_freq = sorted_colors[i][1]
            if isinstance(by_freq, np.uint8):
                mask = (red == by_freq) & (green == by_freq) & (blue == by_freq)
            else:
                mask = (red == by_freq[0]) & (green == by_freq[1]) & (blue == by_freq[2])
            data[:, :, :3][mask] = [to_r, to_b, to_g]
    return Image.fromarray(data)


def color_string_to_map(s):
    color_map = {}
    colors = iter(args.colors.split(','))
    for from_c in colors:
        to_c = next(colors)
        color_map[from_c] = to_c
    return color_map


def get_colors_by_freq(input):
    if isinstance(input, str):
        im = Image.open(input).convert('RGB')
    else:
        im = input
    colors = im.getcolors(im.size[0] * im.size[1])
    return sorted(colors, key=lambda x: x[0], reverse=True)


def print_colors(input):
    sorted_colors = get_colors_by_freq(input)
    for x in sorted_colors:
        if x[0] > 10:
            if isinstance(x[1], np.uint8):
                print('%02x %d' % (x[1], x[0]))
            else:
                print(''.join(['%02x' % c for c in x[1]]) + ' %d' % x[0])


def process_file(input):
    print(input)
    if args.preview:
        output = None
    else:
        output = args.output if args.output else input
        if os.path.isdir(output):
            output = os.path.join(output, os.path.basename(input))
    im = Image.open(input).convert('RGB')
    if args.colors:
        im = map_colors(im, color_string_to_map(args.colors))
    elif args.freq:
        im = set_colors_by_freq(im, args.freq)
    else:
        print_colors(im)
        exit(0)
    if args.preview:
        im.thumbnail((700, 700))
        im.show()
    elif output:
        im.save(output)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description='Recolor a PNG.',
        formatter_class=argparse.RawTextHelpFormatter,
        epilog="If neither --colors nor --freq are specified, "
        + "display the most frequent colours in the image.")
    parser.add_argument('--colors', help="""Comma-separated list of RGB hex values in the form of old,new,old,new
Examples:
9d,ffaaaa,c9,ffd2d2 - reddish
c9,ffea96 - yellow highlighter
c9,d2d2ff - light blue
""")
    parser.add_argument('--freq', help="Color replacements in order of descending frequency (ex: .,ffea96). .: use original color")
    parser.add_argument('--csv', help="CSV of color names to use in the form of colorname,hex")
    parser.add_argument('--preview', help="Preview only", action='store_const', const=True)
    parser.add_argument('input', nargs="+", help="Input file")
    parser.add_argument('--output', help="Output file. If not specified, overwrite input file.")
    args = parser.parse_args()
    color_dict = load_color_dict(args.csv) if args.csv else {}
    for input in args.input:
        process_file(os.path.join(os.getcwd(), input))
I don't think in hex colours, so I added a way to refer to colours by
names. I converted this list of Copic CSS colours to a CSV by copying
the text, pasting it into a file, and doing a little replacement. It's
not complete, but I can copy selected colours from this longer list. I
can also add my own. The CSV looks a little like this:
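For illustration, the rows are just colorname,hex pairs; the names below are made up, but the hex values are ones my scripts use:

lightyellow,f6f396
darkblue,2b64a9
purple,8754a1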
It doesn't do any fuzzing or clustering of similar colours, so it
won't work well on antialiased images. For the simple sketches I make
with the SuperNote, though, it seems to work well enough.
I can preview my changes with something like ./recolor.py ~/sketches/"2022-08-02-01 Playing with my drawing workflow #supernote #drawing #workflow #sketching #kaizen.png" --csv colors.csv --freq .,lightyellow --preview, and then I can
take the --preview flag off to overwrite the PNG.
I've had my SuperNote A5X for a month now, and I really like it.
Text from my sketch
I use it for:
untangling thoughts
sketchnoting books
planning
drafting blog posts
drawing
A- uses it for: (she's 6 years old)
practising cursive
doing mazes and dot-to-dots
drawing
reading lyrics
Things I'm learning:
Exporting PNGs at 200% works well for my workflow. I rename them in
Dropbox and upload them to sketches.sachachua.com.
Carefully copying & deleting pages lets me preserve page numbers. I use lassoed titles for active thoughts and maintain a manual index for other things.
Layouts:
Landscape: only easier to review on my laptop
Portrait columns: lots of scrolling up and down
Portrait rows: a little harder to plan, but easier to review
Many books fit into one page each.
Google Lens does a decent job of converting my handwriting to text (print or cursive, even with a background). Dropbox → Google Photos → Orgzly → Org
Draft blog posts go into new notebooks so that I can delete them once converted.
The Supernote helps me reclaim a lot of the time I spend waiting for A-. A digital notebook is really nice. Easy to erase, rearrange, export… It works well for me.
Part of my everyday carry kit
Ideas for growth:
Settle into monthly pages, bullet journaling techniques
Practise drawing; use larger graphic elements & organizers, different shades
Integrate into Zettelkasten
I put my visual book notes and visual library notes into a Dropbox
shared folder so that you can check them out if you have a
Supernote. If you don't have a Supernote, you can find my visual book
notes at sketches.sachachua.com. Enjoy!
W- was happy with his SuperNote A5X, so I ordered one for myself on
July 18. The company was still doing pre-orders because of the
lockdowns in China, but it shipped out on July 20 and arrived on July
25, which was pretty fast.
I noticed that the org-epub export makes verse blocks look
double-spaced on the SuperNote, probably because <br> tags are
getting extra spacing. I couldn't figure out how to fix it with CSS,
so I've been hacking around it by exporting it as a different class
without the <br> tags and just using { white-space: pre }. I also
ended up redoing the templates I made in Inkscape, since the gray I
used was too light to see on the SuperNote.
It was very tempting to dive into the rabbithole of interesting
layouts on /r/supernote and various journaling resources, but I still
don't have much time, so there's no point in getting all fancy about
to-do lists or trackers at the moment. I wanted to focus on just a
couple of things: untangling my thoughts and sketching. Sketchnoting
books would be a nice bonus (and I actually managed to do one on paper
during a recent playdate), but that can also wait until I have more
focused time.
I've had the A5X for five days and I really like it. Writing with the
Lamy pen feels like less work than writing with a pencil or regular
pen. It's smooth but not rubbery. I've still been drawing in landscape
form because that feels a little handier for reviewing on my tablet or
writing about on my blog, but I should probably experiment with
portrait form at some point.
So far, I've:
sketched out my thoughts
I used to use folded-over 8x14" paper to
sketch out two thoughts, but scanning them was a bit of a pain.
Sometimes I used the backs of our writing practice sheets in order
to reduce paper waste, but then scanning wasn't always as clean. I
really like using the SuperNote to sketch out thoughts like this
one. It's neat, and I can get the note into my archive pretty easily.
sketched stuff from life
This is easier if I take a quick
reference picture on my phone. I could probably even figure out some
kind of workflow for making that available as a template for tracing.
received many kiddo drawings
A- loves being able to use the
eraser and lasso to modify her drawings. Novelty's probably another
key attraction, too. She's made quite a few drawings for me, even
experimenting with drawing faces from the side like the way she's
been seeing me practice doing.
received many kiddo requests
A- likes to ask me to draw things.
She enjoys tracing over them in another layer. More drawing practice
for both of us!
used it to help A- practise coding, etc.
A- wanted to do some
coding puzzles with her favourite characters. I enjoyed being able
to quickly sketch it up, drawing large versions and then scaling
down as needed.
played a game of chess
I drew chess pieces just to see if I
could, and we ended up using those to play chess. I should share
these and maybe add other games as well.
referred to EPUBs and PDFs
I put our favourite songs and poems on
it. I've also started using org-chef to keep a cookbook.
doodled sketch elements
boxes, borders, little icons, people…
Probably should organize these and share them too.
I've figured out how to publish sketches by using my phone to rotate
them and sync them with my online sketches. Now I'm playing around
with my writing workflow to see if I can easily post them to my blog.
At some point, I think I'll experiment with using my phone to record
and automatically transcribe some commentary, which I can pull into
the blog post via some other Emacs Lisp code I've written. Whee!