Tools

Dear Microsoft,


Congratulations on completely destroying the sync ability of the iOS version of OneNote. Your mother must be so proud.

Update: Surprisingly, it still worked on my iPhone while being totally borked on my iPad. None of the fixes people have been suggesting in the forums (lots of people are hitting this bug this week) worked for me. Failing to authenticate for sync has actually been an issue with the iPad version of OneNote for quite a while, but in the past, force-quitting the app was sufficient to fix it. This time, a nuke-and-pave of the app looks necessary but not sufficient; I’m not actually sure what eventually persuaded it to start working again, but I suspect it was the animal sacrifice.

An app whose functionality depends on reliable sync needs to sync reliably. I migrated everything over from Evernote and let my paid subscription lapse because they were ignoring the core functionality of “sync my notes between phone/laptops/tablets”. Your recent attempt to provide the (not-quite-the-) same (poor) user experience on all platforms is the sort of development diversion that cost them customers.

Oh, and if you really want to make the user experience the same, add the “Recent Notes” tab to the desktop clients. It’s one of the most useful features of the mobile clients, and completely missing from the full app. And bring the Mac client up to feature parity with Windows, maybe?

Update: Happened again on 6/6. I had to delete the app, re-download it, and then re-sync all my notebooks. WTF, MS?

"Oh, FFS, MasterCook!"


MasterCook, currently at version 15, is still the best recipe management software around, mostly because it supports sub-recipes. Most recipe-database software maintainers will give you blank stares when you mention this, even the ones who claim to import MasterCook format; some of them don’t even know about sub-title support in ingredient lists. While the software has changed hands several times over the past 25 years, functionally it hasn’t changed much since version 6. The licensed cookbooks come and go, but OS compatibility is the most significant improvement. (disclaimer: I haven’t tested the pretty clouds in v15 yet)

There are tens of thousands of recipes on the Internet in the two major MC export formats, MXP and MX2. I recently dug up one of the biggest to play with, which is only available through The Wayback Machine.

MXP is a text file meant to be printed out in a fixed-width font, but the format is well-structured enough that it’s easy to import into other software, with some minor loss of information. If you’ve downloaded any recipes off the Internet in the past 20 years, you’ve probably seen the string “* Exported from MasterCook *”.

MX2, introduced in 1999’s MasterCook 5, is not XML. Yes, it looks like XML, and even has an external DTD schema, but trying to feed it through standard XML tools will trigger explosions visible from half a mile. If you want to work with it, your best bet is the swiss-army-knife conversion tool cb2cb. Windows-only, written in Java, and “quirky”, but it handles both MXP and MX2, as well as some other formats, and has built-in cleanup and merge support. Pity it’s not open source, because I suspect there are dozens of comments with some variation of “Oh, for fuck’s sake, MasterCook!”.

What’s wrong with the “XML” and DTD?

  1. The XML header line is invalid. This:
    <?xml version="1.0" standalone="yes" encoding="ISO-8859-1"?>
    must be changed to:
    <?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
  2. The following two characters are not escaped in attributes: " and &.
  3. Non-printable characters in text (ASCII 2, 5, 17, 18, and 31, in particular).
  4. The mx2.dtd file supplied in every version since 1999 has obviously never been tested, because it is incorrect and incomplete, in several different ways.

Of course, anyone who knows me will correctly guess that I’ve gone to the trouble to fix all of these problems, with a Perl script that massages MX2 into proper UTF-8 XML that validates against a corrected mx2.dtd; part of that script dates back to my old cookbook project from 2002, so yes, this is the first step to reviving that. The script uses xmllint to fix the encoding and double-check that it’s valid XML. I’ve validated over 450 converted MX2 files against the corrected DTD, a total of around 120,000 recipes.
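A minimal sketch of that cleanup looks something like this (illustrative only; the real script does considerably more, and quotes inside attribute values need a stateful pass that a one-line regex can't provide):

#!/usr/bin/perl
# Sketch of the MX2 fixup: repair the XML declaration, strip stray
# control characters, and escape bare ampersands. Reads MX2 on stdin,
# writes on stdout; quote-escaping inside attributes is omitted.
use strict;
use warnings;

while (my $line = <STDIN>) {
    # Problem 1: replace the invalid XML declaration.
    $line =~ s{^<\?xml[^>]*\?>}
              {<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>};

    # Problem 3: delete the non-printable characters.
    $line =~ tr/\x02\x05\x11\x12\x1f//d;

    # Problem 2 (partially): escape any & that isn't already an entity.
    $line =~ s/&(?![A-Za-z#][A-Za-z0-9]*;)/&amp;/g;

    print $line;
}

Piping the result through xmllint --encode UTF-8 - takes care of the character-set conversion and flags anything the regexes missed.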

Update: When converting MXP to MX2, many of the options in cb2cb mangle the output. Best to turn them all off and do some basic cleanup with a script like this one, which splits directions on CRLF pairs and safely moves most of the non-direction text into Notes. There are still a few rare errors in the conversion process, but in my case that amounted to 4 ingredient lines in over 10,000 recipes, detected by their failure to validate during the XML conversion.
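The core of that splitting can be sketched in a few lines of Perl (assuming the MX2 DirT direction elements from the DTD; moving the non-direction text into Notes takes more heuristics than fit here):

#!/usr/bin/perl -0777 -p
# Sketch: split each <DirT> directions block on CRLF pairs so that
# every paragraph becomes its own element. Moving stray text into the
# recipe's Notes is left out; that part needs real heuristics.
s{<DirT>(.*?)</DirT>}{
    join "\n", map { "<DirT>$_</DirT>" }
        grep { /\S/ } split /\r?\n\r?\n/, $1
}gse;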

Pictures and spoilers and smartypants


When the world was young, and this “blogging” thing was new, I maintained my site by hand, typing new content into index.html as I thought of it. Then I spent a great deal of time customizing MovableType to suit my needs, and used it for the next 14 years.

One of the common plugins was SmartyPants, which turned scruffy old typewriter quotes into pretty curved ones. As a long-time type nerd, of course I had to use it. The MT implementation was pretty good, and only rarely guessed wrong about open quotes. The one used by Hugo is, unfortunately, always wrong in a specific case that I use quite often: quotations that start with an ellipsis. For those, I’ve had to go through the archives and manually insert the Unicode zero-width space character &#8203; after the opening quote.
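If you'd rather script that than hand-edit, a one-liner along these lines does the same insertion (the path is illustrative, and it will also hit the rare closing quote that happens to precede an ellipsis):

perl -CSD -i -pe 's/"(?=\.{3}|\x{2026})/"&#8203;/g' content/post/*.md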

I never used MT’s web form for posting content, because, like so many other people have discovered, it’s too easy to lose an hour of work with a single mis-click or fumble-finger. Ecto was a great tool until it just stopped working one day (long after it stopped being supported), with only one quirk: at random intervals it would lose track of the UTF-8 encoding, and post garbage instead of kanji. A refresh would always fix the problem, so it was just a minor annoyance.

When it stopped working, I switched to MarsEdit, which is an excellent tool, and if I could easily connect it to Hugo, I would. As it is, I’ve gone back to running Emacs in a terminal window, with Perl/Bash scripts and Makefiles wrapped around an assortment of command-line tools.

For images, I insist on supplying proper height and width attributes so that the browser can lay out the page properly while waiting for the download. Hugo can automatically insert those for pictures stored locally, but I upload mine to an S3 bucket with s3cmd, so I run each one through ImageMagick’s convert for cleanup and resizing, then Guetzli for JPEG conversion.
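In script form, the pipeline looks something like this (a sketch only; the bucket name, size, and quality settings are illustrative, not my real ones):

#!/usr/bin/perl
# Sketch of the image pipeline: clean up and resize with convert,
# recompress with guetzli, upload with s3cmd. Bucket, size, and
# quality below are examples.
use strict;
use warnings;

my $bucket = "s3://example-bucket/img";    # illustrative
for my $src (@ARGV) {
    (my $jpg = $src) =~ s/\.\w+$/.jpg/;
    system("convert", $src, "-strip", "-resize", "1280x1280>", "tmp.png") == 0
        or die "convert failed: $src\n";
    system("guetzli", "--quality", "90", "tmp.png", $jpg) == 0
        or die "guetzli failed: $src\n";
    system("s3cmd", "put", "--acl-public", $jpg, "$bucket/$jpg") == 0
        or die "upload failed: $jpg\n";
    unlink "tmp.png";
}

The uploaded images are then embedded with this shortcode: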

{{ $link := (.Get "link" | default (.Get "href"))}}
{{ $me := . }}
<div align="center" style="padding:12pt">
  {{if $link}}
    <a href="{{$link}}">
  {{end}}
  <img
    {{ range (split "src width height class title alt" " ") }}
      {{ if $me.Get . }}
        {{. | safeHTMLAttr}}="{{$me.Get .}}"
      {{end}}
    {{end}}
  >
  {{if $link}}
    </a>
  {{end}}
</div>

None of the arguments are mandatory (even src, without which there’s not much point), but it will add any of the listed ones if you’ve supplied them, and allow you to add a link with either “link” or “href”. This can be embedded in the new spoiler shortcode I wrote yesterday (which relies on Bootstrap’s collapse.js):

{{ $id := substr (md5 .Inner) 0 16 }}
{{ $label := (.Get 0 | default "view/hide") }}
{{ $class := (index .Params 1 | default "") }}
<div class="collapse {{$class}}" id="collapse{{$id}}">{{ .Inner }}</div>
<p><a role="button" class="btn btn-default btn-sm"
  data-toggle="collapse" href="#collapse{{$id}}"
  aria-expanded="false" aria-controls="collapse{{$id}}">{{$label}}</a>
</p>

The results look like this, and yes, the picture behind the NSFW tag is NSFW:

{{< spoiler NSFW >}}
{{< blogpic 
  src="https://dotclue.s3.amazonaws.com/img/tumblr_o3wrl58ICr1rlk3g8o1_1280.jpg"
  width="560" height="420"
  class="img-rounded img-responsive"
>}}
{{< /spoiler >}}
…well, Not Safe For Waterfowl, anyway…

It took about 30 seconds to convert my Gelbooru mass-posting script to generate shortcodes instead of HTML, so my most-recent cheesecake post was done this way. Now that I have the NSFW shortcode, I’ll likely include some racier images in the next one…

At some point I’ll pull out all my scripts and customizations into a demo blog on GitHub, so that I have something to point to when someone asks how to do something that is either not directly supported in Hugo (like monthly archive pages), or is just poorly documented (“damn near everything”).

Jacking up the license plates...


…and changing the car.

Welcome to the first non-trivial update to this blog since 2003. Things are still in flux, but I’m officially retiring the old co-lo WebEngine server in favor of Amazon EC2. After running continuously for fourteen years, its 500MHz Pentium III (with 256MB of RAM and a giant 80GB disk!) can take a well-deserved rest.

The blog is a complete replacement as well, going from MovableType 2.64 to Hugo 0.19, with ‘responsive’ layout by Bootstrap 3.3.7. A few Perl scripts converted the export format over and cleaned it up. Let’s Encrypt allowed me to move everything to SSL, which breaks a few graphics, mostly really old YouTube embeds, but cleanup can be done incrementally as I trip over them.

Comments don’t work right now, because Hugo is a static site generator. I’ve worked out how I want to do it (no, not Disqus), but it might be a week or so before it’s in place. All the old comments are linked correctly, at least.

Do I recommend Hugo? TL;DR: Not today.

Getting out of the co-lo has been on my to-do list for years, but I never got around to it, for two basic reasons:

  1. I was hung up on the idea of upgrading to newer blogging software.

  2. I didn’t feel like running the email server any more, and didn’t like the hosting packages that were compatible with MT and other non-PHP blogging tools.

In the end, I went with G-Suite (“Google Apps for Work”) for $5/month. Unlike the hundreds of vendor-specific email addresses I maintain at jgreely.com, I’ve only ever used one here, and all the other people who used to have accounts moved on during W’s first term.

Next up, working comments!

Update

Actually, next turned out to be getting the top-quote to update randomly. The old method was a cron job that used wget to log into the MT admin page and request an index rebuild, which, given the tiny little CPU, had gotten rather slow over the years, so it only ran every 15 minutes.

The site is now published by running hugo on my laptop and rsyncing the output, so it’s not feasible (or sensible) to update the quotes by rebuilding the entire site. Instead, I wrote a tiny Perl script that regexes the current quotes out of all the top-level pages of the site, shuffles them, and reinserts them into those pages. It takes about half a second.
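In outline, it's something like this (a sketch; the markup the real script matches is different, and the topquote div and glob here are invented for illustration):

#!/usr/bin/perl
# Sketch of the quote shuffler: collect the current top-quote from
# each index page, shuffle the pool, and write one back into each.
# The <div class="topquote"> marker and the glob are illustrative.
use strict;
use warnings;
use List::Util qw(shuffle);

my @pages = glob("public/index.html public/page/*/index.html");
my (%text, @quotes);
for my $page (@pages) {
    local (@ARGV, $/) = ($page);    # slurp the file
    $text{$page} = <>;
    push @quotes, $1
        if $text{$page} =~ m{<div class="topquote">(.*?)</div>}s;
}
@quotes = shuffle @quotes;
for my $page (@pages) {
    last unless @quotes;
    my $quote = shift @quotes;
    $text{$page} =~ s{(<div class="topquote">).*?(</div>)}{$1$quote$2}s;
    open my $out, ">", $page or die "$page: $!";
    print $out $text{$page};
    close $out;
}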

Since there are ~350 pages, there will be decent variety even if I don’t post for a few days and regenerate the set. If I wanted to get fancy, I could parse the full quotes page and shuffle that into the indexes, guaranteeing a different quote on each page (as long as the number of quotes exceeds the number of pages, which means I can add about 800 blog entries before I need to add more quotes. :-)

Thoroughly Random Blogging


More than usual, I mean. I’ve been playing with the static site generator Hugo as a way to move this blog and its comments out of Movable Type.

After clearing the initial hurdle of incomplete and inconsistent Open Source documentation (pro tip: if a project starts numbering versions from 0.1 instead of 1.0, it’s safe to assume that there’s no tech writer on the team), the next step is adding a theme to render your site. There’s no default theme, just half a dozen different recommended ones of varying complexity and compatibility. Short version: I’m not sure Hugo currently has layout functionality equivalent to Movable Type 2.x from 2003, much less any of the modern tools; it might, it’s just that hard to find out.

There’s some support for basic pagination, something that’s always been missing here (and which is partially responsible for the long delay when adding comments), but the built-in paginator includes a link for every page, which is pretty painful when you have 200+ pages. If I get the time, I’ll have to dust off my Go and send them a patch to make it behave sensibly with large numbers.

Rendering all ~3,800 entries (counting quotes and sidebar microblogs) and ~3,500 comments takes about 12 seconds on my laptop, but that’s still too long for iterative testing, and the OS open-file limit makes it impossible to test with the live-rebuild feature of the built-in web server.

So I wrote a quick Bash script to retrieve N random articles from Wikipedia and format them the way Hugo expects, as Markdown with TOML metadata. Why Bash? Because the official Wikipedia API for efficiently retrieving articles and their metadata using generators and continues is either broken or incomprehensible to me, since I spent two hours at it and got a never-ending list of complete and partial articles. So I just looped over the “https://en.wikipedia.org/wiki/Special:Random” URL and piped the output through Pandoc. Rather than pulling in the real metadata, I just generate dates and categories in Bash. Now I can quickly generate a small site with multiple sections and simple categorization, and it’s trivial to add more features like series, tags, authors, etc. [in fact, I did!]
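For illustration only, here's the same loop sketched in Perl rather than the original Bash (everything in it is invented test data: the paths, the categories, the dates, just as in the shell version):

#!/usr/bin/perl
# Sketch: fetch N random Wikipedia articles, convert each to Markdown
# with pandoc, and wrap it in TOML front matter for Hugo. The dates
# and categories are fabricated, as in the Bash original.
use strict;
use warnings;
use POSIX qw(strftime);

my $n    = shift // 50;
my @cats = qw(history science art places people);
for my $i (1 .. $n) {
    my $md = `curl -sL https://en.wikipedia.org/wiki/Special:Random | pandoc -f html -t markdown`;
    my $date = strftime("%Y-%m-%dT12:00:00Z",
                        localtime(time - int(rand 1000) * 86400));
    my $file = sprintf("content/post/random-%04d.md", $i);
    open my $out, ">", $file or die "$file: $!";
    print $out qq{+++\n},
               qq{title = "Random article $i"\n},
               qq{date = "$date"\n},
               qq{categories = ["$cats[rand @cats]"]\n},
               qq{+++\n\n},
               $md;
    close $out;
}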

(relevant only to Hugo users after the jump…)


Time or Money?


When you start thinking that 60-grit sandpaper isn’t coarse enough, maybe that hand plane wasn’t such a bargain after all.

Seriously, I can understand not factory-polishing the sole until it’s mirror-bright, but when you don’t even machine it to be vaguely flat near the mouth, I think you’ve cut costs just a bit too much.

[Update: after more than half an hour (plus cooling time) on the belt sander with an 80-grit belt, it’s almost completely flat. Unfortunately, the last of the deep factory-supplied scratches are just in front of the mouth, so it needs a little more work. So, yeah, cheap planes are no bargain; the Chinese manufacturer put a brushed finish on the sole to hide their poor machining.

Also, while I’m on the subject, Spyderco’s ceramic bench stones aren’t even close to flat. I bought them a long time ago and was never really satisfied with the results, but I’d just assumed that they shipped in decent condition. Nope; I pulled them out while I was working on the plane, checked for daylight with my engineer’s square, and started lapping them on a DMT Extra-Coarse diamond stone. It takes a lot of work to clean them up.]

It's such a great idea...


…that it’s been on the market for at least 10 years.

A few weeks back, Jabrwok mentioned in the comments that he was thinking of making a doweling/loose-tenon jig based on this video. I just watched it (perhaps his most annoyingly-presented video since he rebranded himself), and immediately recognized it as the Rockler Doweling Jig. $20 and better-constructed; wonder why he didn’t mention it…

Tool Tips


While the circular saw is a perfectly reasonable rainy-day indoor tool, the random orbit sander is not. Not unless you want to use fine sawdust to locate all the cobwebs in the garage, and convert the “do not close on objects” sensor on the garage door into a “do not close” sensor. My crystal ball suggests that I should put off the rest of the sanding until Saturday morning, when the back yard will be less soggy.

“Need a clue, take a clue,
 got a clue, leave a clue”