Lacking Natural Simplicity

Random musings on books, code, and tabletop games.

Failing at Ada Again

Back on May 20th, 2019 I ordered a copy of Programming in Ada by John Barnes and spent some time reading it and working with Ada. I get interested in Ada again periodically. In theory, I ought to like programming in Ada — it's one of the last remaining widely used members of the Pascal language family, which I like. There is a distribution of GNAT Community Edition for macOS that bundles the GNAT Ada compiler and some useful libraries. But I could never get comfortable.

  • GNAT Community Edition for macOS doesn't include GtkAda, so I couldn't easily write GUI programs.

  • Getting programming libraries meant going back to the old download-and-install-it-from-scratch-yourself method. I find it much easier to use systems like Chicken Scheme's Eggs Unlimited or OCaml's opam to find and install software.

  • Homebrew, a package manager for macOS, doesn't include GNAT or GtkAda. The situation is better in MacPorts and much better on Fedora, but not good on Ubuntu. I think there are fewer packaged Ada libraries across the open source Unixes in general.

  • The lack of a garbage collector is annoying.

  • The type system is strict and not very flexible.

  • And, especially irritating to me, I tried using the Ada mode from GNU ELPA and it didn't indent Ada code properly.

I think that, in my current programming environment, using Ada is still a difficult proposition.

Converting my emacs-lisp repository to Git and putting it online

I've had a Mercurial repository for my Emacs Lisp initialization files since Thursday, Oct 29, 2009, but had actually used it very little. Recently I had occasion to untangle my initialization files somewhat — they had over 12,000 lines of code when I started; I've reduced that to 6,137 lines and switched over to using ELPA packages for as much as I can.

I decided I'd put it in one of the online repository hosting services, and since I'm already using GitHub for my blog I decided to put it there. But that meant converting it to Git. I used directions based on hg-fast-export, which seemed to work fine.

Fixing more broken links in this blog

I did another round of searching for broken internal links on this blog and fixing them. It took several hours. Along the way I annotated some old blog posts with notes from the present and fixed typos. I think the blog is in pretty good shape now, though.

Apple Podcasts doesn't recognize my iPod Shuffle

Well, after the most recent upgrade to macOS Catalina my iPod Shuffle is no longer recognized by Apple Podcasts, so I have no way of loading more podcasts onto it. This is a pity, because it was the only MP3 player that I actually liked, though I liked the second-generation one better than the fourth-generation. It had no screen, which is an advantage since I mostly listen to podcasts in the car. The controls were very intuitive and were designed to be easy to operate without looking at them. They were also reasonably hard to press by mistake. I shall miss it. Oh, cruel modern world, to have cast it out!

I also had what appears to be an atypical workflow for the Shuffle. In iTunes I would manually pick the episodes of the podcasts I was interested in and download them, then copy them manually to the Shuffle. After I listened to them on the Shuffle I'd delete the downloads manually in iTunes. I never automatically synced the contents of my Shuffle from iTunes. With the second-generation Shuffle I could copy the files onto the Shuffle in the specific order I wanted. The reason I didn't like the fourth-generation Shuffle was that it organized the files alphabetically by podcast name and then by episode release date, and you had to hit a specific button and listen to it enumerate the podcasts to move from one podcast to another.

I have another MP3 player, a SanDisk Clip Sport (manual) which I tried some time ago and didn't like as much as the Shuffle. My main problem with it was that it was harder to operate without looking at it: it has a screen, which you have to look at sometimes, and its controls are not as easy to use by touch alone. The controls are also much more sensitive to pressure than the controls on the Shuffle, so it is easier to activate them accidentally. (I've fast-forwarded when my arm accidentally touched the controls while carrying something near my chest.) And, of course, the interface for copying podcasts onto the Clip Sport is just copying files into the proper podcast folder on the external drive the Clip Sport shows up as. So the “listened to” status and how far into each podcast you've gotten are not things a podcatcher application can help you track. The Clip Sport does remember which podcast you were listening to and where in it you had reached, but only for that one: it's really easy to slip with the forward button and move on to the next podcast, and when you use the reverse button to move back the Clip Sport has forgotten where in the previous podcast you were. Sigh.

Oh well. I guess I'll have to give the Clip Sport another try. Luckily I had exported my podcast feeds from iTunes before I upgraded to macOS Catalina — I had 122 podcast subscriptions. I installed gpodder on my macOS laptop to manage the podcasts. (I was glad a version of it was available as a macOS application.) It told me that several of my subscriptions no longer had valid feeds, or simply weren't there, so now I have 105 podcast subscriptions. To download a podcast I can right-click on the episode name in the episode list and choose “Download”. (It would be convenient if the not-downloaded/downloading/downloaded indicator icon were an active button to download an episode with one click.) There is an option, when you right-click on a podcast, to open the download directory for that podcast, which makes it easy to copy the files to the Clip Sport. Of course, they end up on the Clip Sport alphabetically by filename, and some podcasts have really weird filenames — I'd swear some of them were UUIDs. There is a gpodder option to rename the files to the title of the episode, which is good when episodes include the episode number in the title.

Playing with Hashlife

C.P. found Robert Smith's implementation of Bill Gosper's Hashlife algorithm and wanted my help with running it. I cloned the repo it was in and ran sbcl in the hashlife directory. Then I entered:

(asdf:operate 'asdf:load-op 'charmlife)

That resulted in a “Component CHARMLIFE not found” error in sbcl. I thought that was odd — the system definition was right there in the current directory. I looked at asdf:*central-registry*, and the only thing in it was the quicklisp directory. The ASDF howto showed an example of setting asdf:*central-registry*:

(setf asdf:*central-registry*
  ;; Default directories, usually just the ``current directory''
  '(*default-pathname-defaults*

    ;; Additional places where ASDF can find
    ;; system definition files
    #p"/home/foo/lib/systems/"
    #p"/usr/share/common-lisp/systems/"))
Noticing that the new value included the symbol *default-pathname-defaults*, I guessed that its absence from mine was why ASDF couldn't find the system in the current directory. So I added it and tried again. This time ASDF couldn't find cl-charms. I guessed it was in Quicklisp and used Quicklisp to load it. That worked. Then I looked at charmlife.lisp, found the main function, and figured out how to run the program and how to interact with it while it was running.

Here's what I had to do:

(setf asdf:*central-registry*
      (cons '*default-pathname-defaults*
            asdf:*central-registry*))
(ql:quickload "cl-charms")
(asdf:operate 'asdf:load-op 'charmlife)

Looping on 'dnf -y system-upgrade download' until it succeeds

Fedora 31 is out, and, fool that I am, I'm upgrading to it. Unfortunately, my DSL connection is slow and my DSL modem/router is flaky. With over 3000 packages to download,

dnf -y system-upgrade download --refresh --releasever=31

is bound to die in the middle at least once, if not multiple times, and with more than 6 hours estimated for the download to run I can't sit watching it and restarting it every time it dies.

I tried running dnf as the body of a while loop, but I was unable to interrupt it with C-c when I did want to kill it, since dnf caught the SIGINT and the loop restarted the dnf command before I could C-c the shell.

Here's the script I came up with to work around the issue:

tryfedoraupdate

#! /usr/bin/bash
while ! dnf -y system-upgrade download --refresh --releasever=31
do
    read -t 30 -p "Continue? (y/n) " reply
    if (($?>128)); then
        echo "timed out, continuing..."
    elif [[ "$reply" =~ [Nn] ]]; then
        echo "Exiting..."
        exit 1
    else
        echo "Continuing..."
    fi
done
This way I can stop the script easily, but if it dies itself it will continue after a timeout.

Ulisses Spiele updates Heliograph Space 1889 pdfs with bookmarks and document outlines

Today I got notices from Ulisses Spiele via DriveThruRPG that each of the Heliograph Space 1889 PDFs had been updated, saying:

“An updated PDF with bookmarks and meta information is now available.”

I'm a little surprised to see this at this late date — is there that much demand for these PDFs? Still, I'm pleased to get the updates, and glad there's still interest. I've never tried the Ubiquity-based version, and though I've been thinking about buying the Savage Worlds versions of those adventures, the original system has charms of its own.

Converting my PyBlosxom blog into a Nikola blog

Yesterday I decided to try blogging again. I started writing a post on a hosted blogging site, but that was like wading through a rotting whale corpse. Instead I decided to use GitHub Pages and the static blog/site generator Nikola to generate the content, editing reStructuredText (ReST) files.

I wrote my first post and it was good! Using ReST again was much better than editing in a GUI, and having the blog hosted by GitHub Pages was more restful than running a machine hosting a website myself.

But then I thought of all the posts I had in my old blog, from before I stopped running a machine hosting a website. They were all written in ReST — maybe I could put them up on my new blog?

I took a couple three hours and wrote a shell script to find the old PyBlosxom files and feed them into a Python script that I also wrote. Along the way I made sure the files all had #published and #tags lines, in that order, immediately following the title line.
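That header layout can be sketched as a quick check. This is an illustrative helper, not the script I actually used; the function name and messages are my own:

```python
# Sketch of the header check: verify that a PyBlosxom-style entry has a
# title line followed immediately by '#published ' and '#tags ' lines.

def check_entry_headers(text):
    """Return a list of problems found in one entry's header block."""
    lines = text.splitlines()
    problems = []
    if len(lines) < 3:
        problems.append('entry too short to have title, #published, and #tags')
        return problems
    if not lines[1].startswith('#published '):
        problems.append('second line does not start with #published')
    if not lines[2].startswith('#tags '):
        problems.append('third line does not start with #tags')
    return problems

if __name__ == '__main__':
    good = 'A Title\n#published 2009-10-29 12:00:00\n#tags blogging\n\nBody.\n'
    bad = 'A Title\n#tags blogging\n#published 2009-10-29 12:00:00\n\nBody.\n'
    print(check_entry_headers(good))   # []
    print(check_entry_headers(bad))
```

Running something like this over all the entry files makes it easy to spot the stragglers before conversion.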

Here's the shell script:

drive-pyblox-to-nikola

#! /usr/bin/env bash

(cd ~/myblog &&
     find notentries/ entries/ -type f -name \*.rst |
         pyblox-to-nikola)

Here's the python script:

pyblox-to-nikola

#! /usr/bin/env python3.7
import os
import os.path
import sys
from datetime import datetime
# datetime.strptime ('2019-11-05 20:32:24', '%Y-%m-%d %H:%M:%S')
# dt.strftime ('%Y-%m-%d %H:%M:%S UTC-05:00')
entries_prefix = 'entries/'
notentries_prefix = 'notentries/'
published_prefix = '#published '
tags_prefix = '#tags '
files_read = 0
for filename in sys.stdin:
    filename = filename.rstrip ()
    basename = os.path.basename  (filename)
    dirname = os.path.dirname (filename)
    if dirname.startswith (entries_prefix):
        category = dirname[len(entries_prefix):]
    elif dirname.startswith (notentries_prefix):
        category = dirname[len(notentries_prefix):]
    else:
        category = ''
    (slug, _) = os.path.splitext (basename)
    print ('filename: %s\nbasename: %s\ndirname: %s\ncategory: %s\nslug: %s' %
           (filename, basename, dirname, category, slug))
    inf = open (filename, 'r')
    files_read = files_read + 1
    title = inf.readline ()
    title = title.rstrip ()
    published = inf.readline ()
    published = published.strip ()
    if published.startswith (published_prefix):
        published = published[len(published_prefix):]
    else:
        raise Exception ('Unknown line should be #published', published)
    published_date = datetime.strptime (published, '%Y-%m-%d %H:%M:%S')
    nikola_date = published_date.strftime ('%Y-%m-%d %H:%M:%S UTC-05:00')
    datepath = published_date.strftime ('%Y/%m/%d')
    newdir = os.path.join ('/Users/tkb/nikola/newblog/posts', datepath)
    os.makedirs (newdir, exist_ok=True)
    tags = inf.readline ()
    tags = tags.rstrip ()
    if tags.startswith (tags_prefix):
        tags = tags[len(tags_prefix):]
    else:
        raise Exception ('Unknown line should be #tags', tags)
    tags = tags.lower ()
    outfname = os.path.join (newdir, basename)
    print ('outfname: %s' % outfname)
    outf = open (outfname, 'w')
    outf.write ('.. title: %s\n' % title)
    outf.write ('.. slug: %s\n' % slug)
    outf.write ('.. date: %s\n' % nikola_date)
    outf.write ('.. tags: %s\n' % tags)
    outf.write ('.. category: %s\n' % category)
    outf.write ('.. link: \n')
    outf.write ('.. description: \n')
    outf.write ('.. type: text\n')
    outf.write ('\n')
    for line in inf:
        outf.write (line)
    inf.close ()
print ('\n\nFiles Read: %d' % files_read)

There were 810 reStructuredText files to process. Once that was done, I had to work through those files multiple times finding all the broken internal links, since many of them were absolute links to my old blog or other pages on my old website. I did grep-find in Emacs multiple times to find all the occurrences of my old website's hostname (which went through a couple of variations over time), then looked for site-relative links that started with /~tkb, a tedious but not too difficult process.
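Outside Emacs, that kind of scan can be sketched in a few lines of Python. The hostname patterns below are placeholders, not my website's actual old hostnames:

```python
# Sketch of the broken-link hunt: scan the text of a .rst file for old
# absolute links and site-relative /~tkb links, reporting line numbers.
import re

# Placeholder patterns; substitute the old site's real hostnames.
OLD_PATTERNS = [
    re.compile(r'https?://old-host\.example\.com'),
    re.compile(r'/~tkb'),
]

def find_suspect_links(text):
    """Return (line_number, line) pairs containing suspect links."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in OLD_PATTERNS):
            hits.append((lineno, line))
    return hits

if __name__ == '__main__':
    sample = ('see http://old-host.example.com/blog/a.html\n'
              'nothing suspicious here\n'
              'and a relative link: /~tkb/essays/b.html\n')
    for lineno, line in find_suspect_links(sample):
        print(lineno, line)
```

Walking the posts directory and running this over each file gives a checklist of lines to fix by hand.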

Getting nxml-mode in emacs to validate DocBook 5 documents

I have occasion to edit and build a DocBook 5 document under both macOS Catalina and Fedora 30.

On macOS I've used Homebrew to install the docbook, docbook-xsl, libxslt (for xsltproc), and fop formulas, and changed my PATH to include the directory where brew installed xsltproc, which will then use /usr/local/etc/xml/catalog to find files; in that catalog brew installed links to the DocBook schemas and XSL stylesheets.

On Fedora I've used dnf to install the docbook5-schemas, docbook5-style-xsl, and fop packages.

The document builds fine on both OSes, but Emacs doesn't validate it properly against the DocBook RELAX NG schemas, because the *.rnc files supplied with Emacs (26.3 on macOS, 26.2 on Fedora) are for DocBook 4. However, Emacs will look at a schemas.xml file in the same directory as the file you are editing to find the *.rnc files. Unfortunately, of course, the DocBook 5 schemas live in different locations on macOS with brew and on Fedora.

So I wrote a bash script, generate-schemas-xml, which uses xmlcatalog to look up the local file corresponding to the URI of the RELAX NG compact schema and generates a schemas.xml file with that location substituted in:

generate-schemas-xml

#! /usr/bin/env bash

# Assumes the canonical DocBook 5.0 RNC schema URI; "" makes xmlcatalog
# consult the default XML catalog on each OS.
schema_location="$(xmlcatalog "" "http://docbook.org/xml/5.0/rng/docbook.rnc" |
    grep "^file:///" | sed 's#^file://##')"

cat >schemas.xml <<EOF
<locatingRules xmlns="http://thaiopensource.com/ns/locating-rules/1.0">
  <namespace ns="http://docbook.org/ns/docbook" uri="$schema_location"/>
</locatingRules>
EOF

Then I had my Makefile generate the schemas.xml file if it was missing.

It was more complicated than I'd have liked, but it does work.