Tuesday, October 15 2002

My other PDA is made of wood pulp

I was sitting in a meeting today with my iBook, and a late arrival walked up and asked if we could chat for a bit afterwards. Normally when he needs to talk to me, it’s either to pick my brain on some operational aspect of the service, or see if I’m willing to commit my team to supporting the latest project that’s steam-rollering down on him.

In this case, he wanted to talk about Stickies. Seems he’s been writing a replacement for the current OS X version, and decided that I might have useful input to offer. Something to do with the cascade of 30+ notes covering almost every square inch of my screen, taking full advantage of multiple fonts, text color, note colors, minimization, etc. Also physical post-its attached to flat surfaces on the laptop.

(Continued on Page 97)

Saturday, July 12 2003

The Passing of Tom Emmett

[excerpted from John M. Browning, American Gunmaker, by John Browning and Curt Gentry. © 1964 by the Browning Co. and Curt Gentry.]

The Brownings depended on Tom Emmett for all odd jobs, either at the store or in their homes. He professed no specialized skill but would tackle any job and get it done. On this day he was up on a stepladder near the ceiling of the shop, by the line shaft, taking measurements. His job kept him near the shaft for so short a time that he did not ask to have the power shut off. Nobody paid any attention to what he was doing, except John. He remarked to Ed, “Tom shouldn’t be working up there with the power on.” Ed looked over his shoulder and said, “Oh, he’ll be through in a minute, and I need the lathe.” It happened just then, while John was looking straight at Tom.

(Continued on Page 92)

Tuesday, July 15 2003

Mystery Hill on the level

It happened in 1980, I think. My father and I were vacationing in Michigan, in the general vicinity of Manistee, when some of the local kids told us about a special place they were calling Mystery Hill, where if you put your car in neutral, it would roll uphill.

It’s a fairly common optical illusion that often results in the creation of a cheesy tourist trap. By happy coincidence, on the day we went out to see it, my father had a toolbox in the back of his truck. It contained a carpenter’s level. We set it down on the allegedly-uphill road and let the universe reveal the truth of the matter.

Thursday, July 24 2003

Color combinations for web sites

I’ve stumbled across two interesting tools recently. The first is the Mac application ColorDesigner, which generates random color combinations with lots of tweakable knobs. The second is Cal Henderson’s online color-blindness simulator, designed to show you how your choice of text and background colors will appear to someone with color-deficient vision.

I decided to try to merge the two into a single web page, using Mason and Cal’s Graphics::ColorDeficiency Perl module. It’s not done yet, but it’s already fun to play with: random web colors.

Right now, the randomizer is hardcoded, but as soon as I get a chance, I’ll be adding a form to expose all of the parameters.
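The Mason page itself isn’t reproduced here, but the randomizer half of the idea is tiny. Here’s a sketch in Python (the real page uses Perl); the palette size and seed are illustrative parameters, not anything from the actual tool:

```python
import random

def random_hex_color(rng):
    # One random sRGB color as a CSS hex string, e.g. "#3fa2c8".
    return "#{:06x}".format(rng.randrange(0x1000000))

def random_palette(n=3, seed=None):
    # n independent random colors: say, text, background, and accent.
    rng = random.Random(seed)
    return [random_hex_color(rng) for _ in range(n)]
```

Feeding combinations like these through a color-deficiency simulator is exactly the merge described above.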

Thursday, June 24 2004

…sticks where it’s supposed to…

Let’s say, hypothetically speaking, that one had recently had an unpleasant encounter with some pavement. And, purely for the sake of argument, let’s say that the clothing one was wearing mostly protected one’s body from being damaged by this encounter, but allowed a relatively small patch of skin to be, in the vernacular, “rubbed raw.”

What over-the-counter remedies would one find best suited to dealing with this situation? My list (which isn’t at all hypothetical, more’s the pity):

  • Bayer aspirin
  • Bactine Pain Relieving Cleansing Wipes
  • Nexcare Non-Stick Pads (which, I’m pleased to report, do not, in fact, stick)
  • Nexcare Micropore Paper First Aid Tape (which sticks to hairy arms without subsequently removing said hair)
  • Band-Aid Hurt-Free Cleansing & Infection Protection Foam (which penetrates more deeply than the wipes)

First-aid products might not be a sexy market, but they’ve improved a lot since I last fell off of a two-wheeled vehicle, sometime in the early Seventies.

Friday, July 2 2004

New toy: Motorola v600

Okay, I don’t really have much use for the camera side of my new cellphone; I’m a quality snob who thinks his 5-megapixel digicam is adequate for nothing more than 4x6 snapshots and web galleries, and I’m more interested in switching to larger film than to digital. Still, when you buy a new toy, in this case replacing my Ericsson T68 to get better reception and MP3 ringtones, you should at least try out the features.

How’s the camera? Functional for quick, on-the-spot documentation, but nothing more. For instance, when I was leaving the Reno Hilton (lame casino, skip the steakhouse, eat at Asiana) Wednesday morning, I spotted a big Harley parked on the sidewalk next to a large sign that boldly stated “No motorcycle parking on sidewalk.” That would have been worth a quick snap.

It takes 640x480 pictures, and claims to offer a 4x zoom. Zoom, my ass. This is pure marketing-speak. The viewfinder is what zooms; the resulting picture is either a 320x240 or 160x120 crop. Quality is nothing to write home about, but sufficient for amusement.
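For anyone curious, the arithmetic behind that claim is just cropping; a sketch (not Motorola’s firmware, obviously):

```python
def digital_zoom_crop(width, height, factor):
    # "Digital zoom" on a fixed sensor keeps 1/factor of each dimension
    # and scales the crop up to fill the viewfinder. No new detail appears.
    return width // factor, height // factor

print(digital_zoom_crop(640, 480, 2))  # (320, 240)
print(digital_zoom_crop(640, 480, 4))  # (160, 120)
```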

Other than that, the phone’s features are quite nice. It has the usual mix of vibrate, speakerphone, Bluetooth, GPRS, games, messaging, etc., and adds MP3 ringtones with quite reasonable fidelity. The reception is also living up to its promise so far, giving me a much stronger signal inside my house, where the Ericsson was prone to dropping calls unless I stood in the sweet spot facing the correct direction.

Motorola doesn’t support Macs for their phones, and Apple hasn’t added SyncML support to iSync, but they still work together over Bluetooth. You can copy phonebook entries, MP3s, and pictures back and forth, and with Ross Barkman’s modem scripts and configuration database, it was easy to set up GPRS and configure my PowerBook to use the phone for wireless Internet access.

And if you call me, everyone nearby will be blessed with the sound of The Carol of the Old Ones. I briefly considered the orgasm scene from When Harry Met Sally, a classic geek sound file, but I still remember what happened when we used it as the out-of-paper noise on our NeXT printer, and my boss tried to print a large document while carrying on a phone conversation with his very young daughter.

It’s easy to switch to a secondary ringtone, so I’m thinking the opening song from Hand Maid May would work nicely.

Thursday, July 8 2004

Apple’s Dashboard: sample gadget

I’m not really a programmer, though I’ve been a Perl hacker since ’88, after discovering v1.010 and asking Larry Wall where the rest of the patches were (his reply: “wait a week for 2.0”). If I’m anything, I’m a toolsmith; I mostly write small programs to solve specific problems, and usually avoid touching large projects unless they’re horribly broken in a way that affects me, and no one else can be persuaded to fix them on my schedule.

So what does this have to do with learning Japanese? Everything. I’m in the early stages of a self-study course (the well-regarded Rosetta Stone software; “ask me how to defeat their must-insert-CD-to-run copy-protection”), and authorities agree that you must learn to read using the two phonetic alphabets, Hiragana (ひらがな, used for native Japanese words) and Katakana (カタカナ, used for foreign words). A course that’s taught using Rōmaji (phonetic transcriptions using roman characters) gives you habits that will have no value in real life; Rōmaji is not used for much in Japan.

So how do you learn two complete sets of 46 symbols plus their variations and combinations, as well as their correct pronunciations? Flashcards!
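One fact that makes kana flashcard software pleasant to write: Unicode lays out the two syllabaries in parallel, with each katakana letter sitting exactly 0x60 code points above its hiragana twin. A sketch of the conversion (this covers the standard letters, not every archaic edge case):

```python
def hira_to_kata(text):
    # Shift each hiragana letter (U+3041..U+3096) up by 0x60 to get
    # the matching katakana; leave everything else untouched.
    return "".join(
        chr(ord(ch) + 0x60) if "\u3041" <= ch <= "\u3096" else ch
        for ch in text)

print(hira_to_kata("ひらがな"))  # ヒラガナ
```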

The best software I’ve found for this is a Classic-only Mac application called Kana Lab (link goes direct to download), which has a lot of options for introducing the character sets, and includes recordings of a native speaker pronouncing each one. I’ve also stumbled across a number of Java and JavaScript kana flashcards, but the only one that stood out was LanguageBug, which works on Java cellphones (including my new Motorola v600).

When the misconceptions about Apple’s upcoming Dashboard feature in OS X 10.4 were cleared up (sorry, Konfabulator, it will kill your product not by being a clone, but simply by being better), I acquired a copy of the beta (why, yes, I am a paid-up member of the Apple Developer Connection) and took a look, with the goal of building a functional, flexible flashcard gadget.

Unfortunately, I’ve spent the past few years stubbornly refusing to learn JavaScript and how it’s used to manipulate HTML using the DOM, so I had to go through a little remedial course. I stopped at a Barnes & Noble on Sunday afternoon and picked up the O’Reilly JavaScript Pocket Reference and started hacking out a DHTML flashcard set, using Safari 1.2 under Panther as the platform.

Note: TextEdit and Safari do not a great DHTML IDE make. It worked, but it wasn’t fast or pretty, especially for someone who was new to JavaScript and still making stupid coding errors.

I got it working Tuesday morning, finished off the configuration form Wednesday afternoon, and squashed a few annoying bugs Wednesday night. Somewhere in there I went to work. If you’re running Safari, you can try it out here; I’ve made no attempt to cater to non-W3C DOM models, so it won’t work in Explorer or Mozilla.

There’s a lot more it could do, but right now you can select which character sets to compare, which subsets of them to include in the quiz, and you can make your guesses either by clicking with the mouse or pressing the 1-4 keys on the keyboard. I’ve deliberately kept the visual design simple, not just because I’m not a graphic designer, but also to show how Apple’s use of DHTML as the basis for gadgets makes it possible for any experienced web designer to come in and supply the chrome.
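The quiz mechanics underneath are small enough to sketch. The table below is a hypothetical stand-in for the full character-set data, and the function mirrors the pick-one-of-four interaction described above:

```python
import random

# A tiny stand-in for the full kana-to-romaji tables.
KANA = {"あ": "a", "い": "i", "う": "u", "え": "e", "お": "o",
        "か": "ka", "き": "ki", "く": "ku", "け": "ke", "こ": "ko"}

def make_question(mapping, rng, n_choices=4):
    # Pick a prompt character, then n_choices candidate readings,
    # exactly one of them correct, in shuffled order.
    kana = rng.choice(sorted(mapping))
    wrong = [v for k, v in mapping.items() if k != kana]
    choices = rng.sample(wrong, n_choices - 1) + [mapping[kana]]
    rng.shuffle(choices)
    return kana, choices, mapping[kana]
```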

So what does it take to turn my little DHTML web page into a Dashboard gadget?

(Continued on Page 2024)

Monday, July 12 2004

Why I just deleted Konfabulator

It lasted about fifteen minutes on my laptop. Why? First, because the supplied widgets were primarily designed to be pretty. The weather and calendar widgets are translucent; you can’t make them not be translucent, even if you have wallpaper on your screen that makes them unreadable. The (thankfully not translucent) to-do list doesn’t allow you to edit in-place; anything you want to do with a to-do item involves popping up a bog-standard Mac dialog box and clicking “Okay”, which pretty much renders it useless as a “quick! write that down!” tool or organizing tool. Most of the other standard widgets are similarly long on chrome and short on function, to the point that I have trouble remembering them mere minutes after trying them out.

I was already underwhelmed by the contents of their user-submitted widget gallery, so I’m left with no possible reason to purchase this product, nor can I imagine it ever becoming a significant commercial success. This renders the whole “Apple stole our idea” and “Dashboard was designed to be a Konfabulator killer” claims completely moot. Konfabulator in its current form could never have made its way onto the desktops of a significant percentage of Mac users; it’s just not that interesting.

Will there be a lot of high-chrome, low-content Dashboard gadgets? Sure; as the man said, 90% of everything is crap. The difference is that you don’t need to learn a proprietary development environment to create gadgets for Dashboard. Hell, you don’t even need to learn JavaScript; Dashboard will cheerfully run Flash applications with a trivial DHTML wrapper. You can also embed Java applications, QuickTime videos, etc.

Konfabulator can’t do any of that.

If, for instance, I wanted to build a nice kana/kanji chart around this remarkable collection of QuickTime videos that demonstrate the correct stroke order for the entire hiragana and katakana syllabaries as well as all 1,945 Jōyō kanji, I could (and likely will, if only for my personal use), because a Dashboard gadget is just a web page, and web pages can have embedded QuickTime videos.

The closest thing they’ve got over in Konfab-land is the new Kanji-A-Day widget, which uses /usr/bin/curl to scrape a Japanese web site and import its content into a (cough) pretty window. Maybe that’s the one that will justify the $25 they want for the product…

Sunday, November 14 2004

Still waiting for Java

Gamer friend Scott just discovered that the reason he was having so much trouble with PCGen under Linux was that the JVM was defaulting to a rather small heap size, effectively thrashing the app into oblivion when he tried to print.

Now, while it’s true that PCGen is as piggy as a perl script when it comes to building complex data structures in memory, it’s still fundamentally a straightforward application, and yet it exceeds the default maximum heap settings. He had plenty of free RAM, gigs of free VM, and here was Sun’s Java, refusing to use any of it unless he relaunched the application with a command-line override (a larger -Xmx heap limit). Doing so not only fixed printing, it made the entire application run substantially faster. Feh.

I’d noticed a slowdown with recent versions of PCGen on my Mac as well, but Apple was good enough to compile their JVM with defaults sufficient to at least make it run completely. Sure enough, though, increasing the default heap settings makes it run faster, by eliminating a whole bunch of garbage collection.

In other words, with Java, Sun has managed to replicate the Classic MacOS annoyance of adjusting memory allocation on a per-application basis, and made it cross-platform!

PCGen is still the only major Java app I have any use for on a regular basis, although there’s another one that has recently entered my arsenal of special-purpose tools, Multivalent. I have no use for 99% of its functionality, but it includes robust tools for splitting, merging, imposing, validating, compressing, and uncompressing PDF files, as well as stripping the copy/print/etc limitations from any PDF you can open and read.

There’s another Java application out there that might join the list sometime soon, Dundjinni, but first the manufacturers have to finish porting it from Windows to the Mac…

Wednesday, April 6 2005

fun with Google Maps

Looks like they ripped out my old apartment building in Columbus and replaced it with something bigger and better. Good thing, too, since it burned down at least once, while I was living there (hmm, now there’s an old Usenet post I should resurrect here; I used to use it as a great counter-example to the “ban guns because they make domestic squabbles fatal” argument).

The concrete canyon Brian and I lived in before that is still there, though, and probably unchanged. Trigger-happy towing, unsafe parking, and next door to a neighborhood pool; a bunch of kids once broke into my car just to steal the change from my ashtray so they could get in. Cute girls running around, though, and our storage closet was big enough to hide a dozen illegal immigrants in. Two dozen if they were close friends.

America’s Largest Community Of Brick Homes hasn’t changed a bit. Nearly 50,000 houses based on four floorplans, so you always knew where the bathrooms were at your friends’ houses. Not the easiest neighborhood to deliver pizza in, especially in Domino’s 30-minutes-or-free days, but the tips were always good.

As for the first home I remember, Old Powell Road is almost unrecognizable. They got rid of the sharp curves that used to send cars into the abandoned gravel pit (that, along with date-rape attempts, was the most common reason someone would knock on our front door after dark), the formerly-toxic landfill appears to be capped and made pretty, but it’s still sparsely developed. The house is long-gone, but I knew that already.

These days home looks like this:

Home sweet home

Friday, July 8 2005

Photoshop tips

Apropos of nothing, I thought I’d mention that the two most recently posted pictures here were resized in Photoshop CS, using the new(-ish) Bicubic Sharper resampling method, available in the Image Size dialog box. I hadn’t seen any mention of it until about two weeks ago, and had been using Mac OS X’s command-line tool sips for quick resizing.

Bicubic Sharper is much better than the standard Photoshop resizing, sips, or iPhoto. It’s particularly good for rendered images with fine detail. I’ve been working on a RoboRally tile set for Dundjinni, creating my basic floor texture with Alien Skin Eye Candy 5: Textures. Dundjinni expects 200x200 tiles, but Eye Candy renders best at larger sizes. Resizing down from 800x800 using the straight Bicubic method produced an unusable image. Bicubic Sharper? Dramatically better.

I found the tip in a discussion of photo-processing workflow, which makes sense. For a long time, photographers have been making Unsharp Mask the final step in their workflows, because if they sharpened at full size, the slight softness introduced by resizing for print or web use would force them to use Unsharp Mask again, which tends to look pretty nasty. Integrating it into the resizing algorithm takes advantage of the data you’re discarding, reducing the chance of introducing distracting artifacts.

Saturday, August 6 2005

Sake cheat card

After trying out a few types of sake and doing a little reading on the subject, I decided to gather up all of the useful information commonly printed on labels and menus, and arrange it on a double-sided 3x5 card. It was as much an excuse to play with the new version of Adobe Illustrator as anything else, but it should come in handy the next time I try to figure out what to buy at Mitsuwa.

Tuesday, October 4 2005

Audio cleanup?

While my Japanese class is going well so far (the pace is a bit slow, due in part to the overhead of community-college drop/add handling), I’ve found one serious annoyance: the audio CDs are crap.

There’s nothing wrong with the content; the material is presented clearly by native speakers, and the original mastering was well-done. Unfortunately, it was mastered for cassette tape, and the CDs were apparently converted from that format. How well was this done? Here’s a thousand words on the subject:

Situational Functional Japanese Audio CDs

That’s what it looked like when I loaded a track into Audacity. I can crank up the gain, but then the hiss becomes objectionable, and Audacity’s noise filter introduces some rather obnoxious artifacts, even at its gentlest setting. I’ll be using these CDs until March, so it’s worth a little time and money to me to get them cleaned up. Any recommendations for a good tool?
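The reason simple gain can’t rescue a tape-sourced master is that it scales signal and hiss by the same factor, leaving the signal-to-noise ratio untouched. Illustrative numbers, not measurements from these CDs:

```python
from math import log10

def snr_db(signal_rms, noise_rms):
    # Signal-to-noise ratio in decibels.
    return 20 * log10(signal_rms / noise_rms)

before = snr_db(0.1, 0.01)          # quiet master over tape hiss
after = snr_db(0.1 * 8, 0.01 * 8)   # same track with +18 dB of gain
print(before, after)                # identical: gain helps nothing
```

That’s why real cleanup needs a noise-reduction filter rather than a volume knob, and why a bad filter just trades hiss for artifacts.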

Friday, October 21 2005

Dodge Caravan

I got sent to Denver (okay, Longmont) for a few days to do some setup work for our office here, and the folks at Budget begged me to take a free upgrade to a minivan, to help alleviate their shortage of smaller vehicles.

Anyway, it’s got a gutless engine, poor sound insulation, and a whopping big blind spot when you’re merging, but otherwise it’s actually a pretty nice travel-mobile. Very comfortable to drive, plenty of room for adult passengers, and decent handling for its size. It’s no comparison to my Lexus RX-300, but still, a lot better than I expected.

Sunday, March 26 2006

Short Review: Nisus Writer Express

If your (Mac-only) word-processing needs fit within Writer Express’s feature list, the generally sensible UI will make it a superior alternative to Word. Within its limitations, it’s an excellent, usable program.

However, if you need table support that’s better than an ancient version of Netscape, real Word interoperability, or precision layout tools, look elsewhere. For now, at least; they’re working hard to improve the product.

Note to people with fond memories of the Mac OS Classic Nisus Writer: Express implements a subset of the old features, along with a bunch of new ones.

Customizing for Usability, Bad Haiku Edition

I’m doing 45 minutes of cardio (most) every day on my LifeFitness 5500 elliptical cross-trainer. Doctor’s orders. I like working out on this machine, and it’s certainly good for me, but I’ve always had a problem occupying my mind. In the past, I’ve simply listened to music on my iPod, generally a PopTarts mix (or, more recently, JPopTarts). Studying kanji and vocabulary for my Japanese class would be an ideal use of this time, but I never ordered the optional magazine stand, and it doesn’t look like they make it any more.

So, I stopped at an office supply store and bought the only non-ridiculous copy-holder they sold. Just setting it on top of the crosstrainer worked fairly well, but hid the display. I really needed it to sit above the display section, but there was no obvious way to accomplish this feat. And then, a moment of clarity:

How to attach this…
What mounting system will work?
Ah! Some gaffer tape!

Saturday, April 8 2006

“What do you do with a B6 notebook?”

(note: for some reason, my brain keeps trying to replace the last two words in the subject with “drunken sailor”; can’t imagine why)

Kyokuto makes some very nice notebooks. Sturdy covers in leather or plastic, convenient size, and nicely formatted refill pages. I found them at MaiDo Stationery, but Kinokuniya carries some of them as well. I like the B6 size best for portability; B5 is more of an office/classroom size, and A5 just seems to be both too big and too small. B6 is also the size that Kodansha publishes all their Japanese reference books in, including my kanji dictionary, which is a nice bonus.

[This is, by the way, the Japanese B6 size rather than the rarely-used ISO B-series. When Japan adopted the ISO paper standard, the B-series looked just a wee bit too small, so they redefined it to have 50% larger area than the corresponding A-series size. Wikipedia has the gory details.]
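The 50% figure checks out against the published sheet sizes; for example, comparing A4 with the two flavors of B4:

```python
# Sheet areas in mm^2. JIS B-series sheets have 1.5x the area of the
# matching A-series size; ISO B sits at the geometric mean, about 1.41x.
a4 = 210 * 297        # A4: 210 x 297 mm
jis_b4 = 257 * 364    # JIS B4: 257 x 364 mm
iso_b4 = 250 * 353    # ISO B4: 250 x 353 mm

print(jis_b4 / a4)    # about 1.50
print(iso_b4 / a4)    # about 1.41
```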

I really like the layout of Kyokuto’s refill paper. So much so, in fact, that I used PDF::API2::Lite to clone it. See? The script is a little rough at the moment, mostly because it also does 5mm grid paper, 20x20 tategaki report paper, and B8/3 flashcards, and I’m currently adding kanji practice grids with the characters printed in gray in my Kyoukasho-tai font. I’ll post it later after it’s cleaned up.
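The script itself is Perl and not posted yet, but the heart of any such generator is just unit conversion: PDF user space is 72 points per inch. A sketch of the 5mm grid arithmetic, with the page size and margins as assumed inputs:

```python
MM = 72 / 25.4  # points per millimeter in PDF user space

def grid_lines(page_w_mm, page_h_mm, step_mm=5, margin_mm=10):
    # X and Y rule positions in points for a square grid inside the
    # margins; the drawing layer (PDF::API2::Lite or anything else)
    # just strokes a line at each position.
    xs = [(margin_mm + i * step_mm) * MM
          for i in range(int((page_w_mm - 2 * margin_mm) // step_mm) + 1)]
    ys = [(margin_mm + i * step_mm) * MM
          for i in range(int((page_h_mm - 2 * margin_mm) // step_mm) + 1)]
    return xs, ys

xs, ys = grid_lines(128, 182)  # JIS B6: 128 x 182 mm
```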

Why, yes, I was stuck in the office today watching a server upgrade run. However did you guess?

On a related note, am I the only person in the world who thinks that it’s silly to spend $25+ on one of those gaudy throwaway “journals” that are pretty much the only thing you can find in book and stationery stores these days? Leather/wood/fancy cover, magnet/strap/sticks to hold it shut, handmade/decorated (possibly even scented) papers, etc, etc. No doubt the folks who buy these things also carry a fountain pen with which to engrave their profound thoughts upon the page.

Or just to help them impress other posers.

Friday, May 19 2006

Automating PDF cleanup with Acrobat and AppleScript

As I mentioned earlier, I’m generating lots of PDF files that don’t work in Preview.app, and are also a tad on the large side. Resolving this problem requires the use of Adobe Acrobat and Acrobat Distiller. Automating this solution requires AppleScript. AppleScript is evil.

Just in case anyone else wants to do something like this from the command line, here’s what I ended up with, which is run as “osascript pdfcleaner.scpt myfile.pdf”:

on run argv
	-- Build an absolute path to the PDF named on the command line.
	set input to POSIX file ((system attribute "PWD") & "/" & (item 1 of argv))
	set output to replace_chars(input as string, ".pdf", ".ps")
	-- Re-save the PDF as PostScript...
	tell application "Adobe Acrobat 7.0 Standard"
		open alias input
		save the first document to file output using PostScript Conversion
		close all docs saving no
	end tell
	-- ...then re-distill the PostScript into a clean, smaller PDF.
	tell application "Acrobat Distiller 7.0"
		Distill sourcePath POSIX path of output
	end tell
	-- Clear the HFS file type and creator code so Preview.app gets the file.
	set nullCh to ASCII character 0
	set nullFourCharCode to nullCh & nullCh & nullCh & nullCh
	tell application "Finder"
		set file type of input to nullFourCharCode
		set creator type of input to nullFourCharCode
	end tell
end run
on replace_chars(this_text, search_string, replacement_string)
	set AppleScript's text item delimiters to the search_string
	set the item_list to every text item of this_text
	set AppleScript's text item delimiters to the replacement_string
	set this_text to the item_list as string
	set AppleScript's text item delimiters to ""
	return this_text
end replace_chars

[I wiped out the file type and creator code to make sure that the resulting PDFs opened by default with Preview.app, not Acrobat; I swiped that code from Daring Fireball. The string-replace function came from Apple’s AppleScript sample site.]

Thursday, June 1 2006


I want a better text editor. What I really, really want, I think, is GNU Emacs circa 1990, with Unicode support and a fairly basic Cocoa UI. What I’ve got now is the heavily-crufted modern GNU Emacs supplied with Mac OS X, running in Terminal.app, and TextEdit.app when I need to type kanji into a plain-text file.

So I’ve been trying out TextWrangler recently, whose virtues include being free and supporting a reasonable subset of Emacs key-bindings. Unfortunately, the default configuration is J-hostile, and a number of settings can’t be changed for the current document, only for future opens, and its many configuration options are “less than logically sorted”.

What don’t I like?

First, the “Documents Drawer” is a really stupid idea, and turning it off involves several checkboxes in different places. What’s it like? Tabbed browsing with invisible tabs; it’s possible to have half a dozen documents open in the same window, with no visual indication that closing that window will close them all, and the default “close” command does in fact close the window rather than a single document within it.

Next, I find the concept of a text editor that needs a “show invisibles” option nearly as repulsive as a “show invisibles” option that doesn’t actually show all of the invisible characters. Specifically, if you select the default Unicode encoding, a BOM character is silently inserted at the beginning of your file. “Show invisibles” won’t tell you; I had to use /usr/bin/od to figure out why my furiganizer was suddenly off by one character.

Configuring it to use the same flavor of Unicode as TextEdit and other standard Mac apps is easy once you find it in the preferences, but fixing damaged text files is a bit more work. TextWrangler won’t show you this invisible BOM character, and /usr/bin/file doesn’t differentiate between Unicode flavors. I’m glad I caught it early, before I had dozens of allegedly-text files with embedded mojibake (garbled characters). The fix is to do a “save as…”, click the Options button in the dialog box, and select the correct encoding.
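For anyone bitten by the same thing: you don’t strictly need od to spot the culprit. A few lines of Python, for instance, will name the BOM that “show invisibles” hides:

```python
import codecs

def describe_bom(raw):
    # Report which byte-order mark, if any, starts a byte string.
    for bom, name in [(codecs.BOM_UTF8, "UTF-8"),
                      (codecs.BOM_UTF16_LE, "UTF-16 LE"),
                      (codecs.BOM_UTF16_BE, "UTF-16 BE")]:
        if raw.startswith(bom):
            return name
    return None

print(describe_bom("\ufeffかな".encode("utf-8")))  # UTF-8
print(describe_bom(b"plain ascii"))               # None
```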

Basically, over the course of several days, I discovered that a substantial percentage of the default configuration settings either violated the principle of least surprise or just annoyed the living fuck out of me. I think I’ve got it into a “mostly harmless” state now, but the price was my goodwill; where I used to be lukewarm about the possibility of buying their higher-end editor, BBEdit, now I’m quite cool: what other unpleasant surprises have they got up their sleeves?

By contrast, I’m quite fond of their newest product, Yojimbo, a mostly-free-form information-hoarding utility. It was well worth the price, even with its current quirks and limitations.

Speaking of quirks, my TextWrangler explorations yielded a fun one. One of its many features, shared with BBEdit, is a flexible syntax-coloring scheme for programming languages. Many languages are supported by external modules, but Perl is built in, and their support for it is quite mature.

Unfortunately for anyone writing an external parser, Perl’s syntax evolved over time, and was subjected to some peculiar influences. I admit to doing my part in this, as one of the first people to realize that the arguments to the grep() function were passed by reference, and that this was really cool and deserved to be blessed. I think I was also the first to try modifying $a and $b in a sort function, which was stupid, but made sense at the time. By far the worst, however, from the point of view of clarity, was Perl poetry. All those pesky quotes around string literals were distracting, you see, so they were made optional.

This is still the case, and while religious use of use strict; will protect you from most of them, there are places where unquoted string literals are completely unambiguous, and darn convenient as well. Specifically, when an unquoted string literal appears in list context followed by the syntactic sugar "=>" [ex: (foo => "bar")], and when it appears in scalar context surrounded by braces [ex: $x{foo}].

TextWrangler and BBEdit are blissfully unaware of these “bareword” string literals, and make no attempt to syntax-color them. I think that’s a reasonable behavior, whether deliberate or accidental, but it has one unpleasant side-effect: interpreting barewords as operators.

Here’s the stripped-down example I sent them, hand-colored to match TextWrangler’s incorrect parsing:


use strict;

my %foo;
$foo{a} = 1;
$foo{x} = 0;

my %bar = (y=>1,z=>1,x=>1);

$foo{y} = f1() + f2() + f3();

sub f1 {return 0}
sub f2 {return 1}

sub f3 {return 2}

Sunday, November 12 2006


This seems like a nice tool for syntax-coloring code. I rarely feel the need for this feature myself, but it’s nice when it works, and this one works a lot better than BBEdit’s, although it shares a less-extreme version of the coloring bug I found when I was testing TextWrangler.

I don’t have nice things to say about the download/install process and documentation, though; it looks like the Python community is trying to come up with something similar to CPAN, but it doesn’t seem to be ready for general release yet.

Tuesday, November 21 2006

Truth in advertising

Spotted this at Borders today. I like products that match their descriptions…

Lighted Magnifier

The name is effective, though. When I pointed it out to Jeff so he could laugh at it too, the woman in line behind him asked to see it, and ended up buying one.

It’s probably nice, but I think the Zelco Lumifier is better for carrying around. It’s my furigana tool.

Friday, December 22 2006

The Guardian of my World

A while back, I mentioned that I was tinkering with jQuery for updating my pop-up furigana. This dovetails nicely with my attempts to improve my Japanese reading skills, which currently involve working my way through Breaking into Japanese Literature and ボクのセカイをまもるヒト.

The first one is a parallel text with all vocabulary translated on the same page. I wish he’d formatted it a bit differently, and my teacher isn’t pleased with some of the translation, but it’s a useful learning tool, and there’s a free companion audiobook on the web site.

The second is the first in a new light novel series from Nagaru Tanigawa, also responsible for The Melancholy of Haruhi Suzumiya, and it includes furigana for almost all of the kanji. My goal is to read it, not translate it, but I have to look up an awful lot of vocabulary, and there’s not enough room on the page to annotate.

So I’m typing it in, and using a Perl script to add my shiny new pop-up furigana.
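The script itself is Perl; as a sketch of the transformation, here’s the same idea in Python, with a hypothetical word《reading》 input markup (the real input format may differ) turned into HTML ruby annotations:

```python
import re

def add_furigana(text):
    # Naively wrap each word《reading》 pair in <ruby> markup so the
    # reading can pop up (or render above the base text).
    return re.sub(r"(\S+?)《(.+?)》",
                  r"<ruby>\1<rt>\2</rt></ruby>", text)

print(add_furigana("漢字《かんじ》を読む"))
# <ruby>漢字<rt>かんじ</rt></ruby>を読む
```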

(and, yes, I’m deliberately over-annotating; I don’t actually need many of those annotations, but someone else might, and it’s not that much work)

[Update: I should mention that I’m using Jim Breen’s translation server to speed up the glossing process. The parser gets lost occasionally, but it’s still very helpful, often finding idiomatic phrases that cover several words.]

Oh, here’s the cover, courtesy of Amazon:

(Continued on Page 2658)

Monday, January 22 2007

Starting anew…

About six months ago, The Former Employer With Whom I Signed A Non-Disparagement Agreement decided to close their field offices and consolidate everything at the main office in Kirkland. Some folks were asked to relocate, some were laid off immediately, and a Lucky Few were asked to stay around for a while to manage the transition.

I fell into the third group, with the promise of a reasonable quantity of extra cash should I complete my tasks to their satisfaction. This cash was in fact received on schedule, so I have no immediate plans to test their tolerance for disparagement.

We said our goodbyes at the end of 2006, and I spent the first week of 2007 in Las Vegas, courtesy of a “three-free-nights” offer at the Luxor. While I was out there, Ooma, the company many former co-workers had already fled to, called me up to arrange interviews. I went in on the 10th, went back to meet the CEO on the 15th, accepted their offer on the 16th, flew home to Ohio to quickly see my family on the 19th, and started work today.

What do we do at Ooma? Can’t tell you. Ask again in (can’t tell you).

Thursday, May 3 2007

Best $25 we’ve spent recently

This is a remarkably useful gadget that’s paid for itself several times over in the past month. What it does: connect an IDE or SATA drive via USB 2.0, no enclosure required. It’s faster to work with the bare drive when you just need to grab some data from a failed machine or scrub a disk before reuse or service.

Monday, December 10 2007

Ooma update

Now available from Amazon.

Wednesday, December 12 2007

Don’t fear the Washlet

(all vacation entries)

…just be sure to check its aim.

Toto Washlet with expert mode

Tuesday, December 18 2007

Apple Aperture: sluggish but useful

[Update: Grrr. Aperture won’t let you update or create GPS EXIF tags, and the only tool that currently works around the problem only supports tagging images interactively, one at a time, in Google Earth. Worse, not only do you have to update the SQLite database directly, you have to update the XML files that are used if the database ever has to be rebuilt.]

I’ve played with Aperture in the past, but been put off by the terrible performance and frequent crashes. Coming back from Japan, though, I decided to give the latest version a good workout, and loaded it up with more than a thousand image files (which represented about 850 distinct photos, thanks to the RAW+JPEG mode on my DSLR).

On a MacBook with a 2GHz Core Duo and 2GB of RAM, there’s a definite wait-just-a-moment quality to every action I take, but it’s not long enough to be annoying, except when it causes me to overshoot on the straighten command. The fans quickly crank up to full speed as it builds up a backlog of adjustments to finalize, but background tasks don’t have any noticeable impact on the GUI response.

My biggest annoyance is the lack of a proper Curves tool. I’m used to handling exposure adjustments the Photoshop way, and having to split my attention between Levels, Exposure, Brightness, Contrast, and Highlights & Shadows is a learning experience. I think I’ve managed so far, and my Pantone Huey calibrates the screen well enough to make things look good.

I have three significant wishes: finer-grained control over what metadata is included in an export, real boolean searches, and the ability to batch-import metadata from an external source. Specifically, I want to run my geotagger across the original JPEG images, then extract those tags and add them to the managed copies that are already in Aperture’s database. Aperture is scriptable, so I can do it, but I hate writing AppleScripts. I could have geotagged them first, but for some reason Mac OS X 10.4.11 lost the ability to mount my Sony GPS-CS1 as a flash drive, and I didn’t have a Windows machine handy to grab the logs. [Sony didn’t quite meet the USB mass-storage spec with this device; when it was released, it wouldn’t work on PowerPC-based Macs at all, and even now it won’t mount on an Asus Eee PC]

For the simple case of negating a keyword in a search, there’s a technique that mostly works: the IPTC Keywords field is constantly updated to contain a comma-separated list of the keywords you’ve set, and it has a “does not contain” search option. This works as long as none of your keywords is a substring of any other.

I’ll probably just write a metadata-scrubber in Perl. That will let me do things that application support will never do, like optionally fuzz the timestamps and GPS coordinates if I think precise data is too personal. The default will simply be to sanitize the keyword list; I don’t mind revealing that a picture is tagged “Japan, Hakone, Pirate Ship”, but the “hot malaysian babes” tag is personal.

Thursday, January 10 2008

Dear Nolobe,

[Update: just received an apology for the mistake, an updated license key, and a partial refund to bring my price down to the current $39 promotion.]

[Update: I can’t currently recommend this application, for the simple reason that I made the mistake of buying it four days before the release of 9.0, and they charge $29 for the upgrade. Until March, it’s only $39 for a brand-new license, but if I want 9.0, my total cost ends up being $88, which is more than the app is worth. Worse, the updater offered me the new version without mentioning the fact that it would revert to a trial license and require new payment. Fortunately, I was able to revert to 8.5.4.]

Your file-transfer app, Interarchy, is very nice. I particularly appreciate its solid support for Amazon S3. In the latest version, the thing I like most is the fact that permissions settings for uploads are now an honest-to-gosh preference, rather than being buried in some pulldown menu.

I question your decision to make the new version look like the unholy love-child of Finder and Safari, however, especially since your Bookmarks Bar and Side Bar are only cosmetically related to their inspiration, and share none of their GUI behaviors. It looks like a duck, and it sort of quacks like a duck, but it’s really just a cartoon duck, and not worth eating.

And I haven’t the slightest idea why you thought it would be a good idea to have the first item on the Bookmarks Bar be a menu containing every URL in the user’s personal Address Book. Considering that the user can’t rearrange or remove items on the Bookmarks Bar, you’re wasting an awful lot of valuable real estate on a very marginal feature.

Sunday, January 13 2008

Two-car garage, Kyoto-style

(all vacation entries)
private parking in Kyoto

Wednesday, January 30 2008

Dear Sony,

[Update: Thanks, guys; the check is in the mail. More new-camera-porn here.]

Now that you’re releasing a 24+ megapixel full-frame 35mm CMOS sensor, don’t you feel a little stupid for making some of your high-end Zeiss lenses for the Alpha line APS-C-only? I doubt you’ve actually sold many of them, given the price and scarce distribution, but still, you had to know that full-frame was a requirement for a serious player in the DSLR market, and your recent announcements show that you’re not just keeping the low end of the old Minolta lineup.

Just to be clear on this: if you put that sensor into a body that’s the equivalent of Minolta’s 7 or 9 series (pleasepleaseplease a 9!), you’ve got a customer here already waiting in line.

Wednesday, April 16 2008

Dear Google,

I like Google Earth. I even pay for the faster performance and enhanced features. A few things, though:

  • Why can’t I keep North at the top of the screen? I hate constantly double-clicking the “N” in the gaudy navigation scroll-wheel.
  • Why do you auto-enable new layers in my view, so that, for instance, I suddenly see every golf course on the planet, even though I had that entire category disabled?
  • Why can’t I switch between different sets of enabled layers?
  • Why is the “Google Earth Community” layer such a dumping ground of unsorted crap? For instance, what value does this have to anyone who’s not an airline pilot? Or this, where points scattered around the globe are all labeled, “here’s my collection of 4,728 placemarks”.

I’m sure I can come up with more if I think about it for a bit…

[Update: ah, press ‘n’ for north, ‘r’ for a total view reset, and then figure out how to fix all of the KMZ files that were broken by the upgrade]

Monday, May 19 2008

Importing furigana into Word

Aozora Bunko is, more or less, the Japanese version of Project Gutenberg. As I’ve mentioned before, they have a simple markup convention to handle phonetic guides and textual notes. The notes can get a bit complicated, referring to obsolete kanji and special formatting, but the phonetic part is simple to parse.

I can easily convert it to my pop-up furigana for online use (which I think is more useful than the real thing at screen resolution), but for my reading class, it would be nice to make real furigana to print out. A while back I started tinkering with using Word’s RTF import for this, but gave up because it was a pain in the ass. Among other problems, the RTF parser is very fragile, and syntax errors can send it off into oblivion.

Tonight, while I was working on something else, I remembered that Word has an allegedly reasonable HTML parser, and IE was the first browser to support the HTML tags for furigana. So I stripped the RTF code out of my script, generated simple HTML, and sent it to Word. Success! Also a spinning beach-ball for a really long time, but only for the first document; once Word loaded whatever cruft it needed, that session would convert subsequent HTML documents quickly. It even obeys simple CSS, so I could set the main font size and line spacing, as well as the furigana size.

Two short Perl scripts: shiftjis2utf8 and aozora-ruby.
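The aozora-ruby script itself isn’t reproduced here, but the heart of the conversion is just two substitutions: one for readings with an explicit ｜ base marker, and one for readings that attach to the preceding kanji run. A Python approximation of the idea (not the actual script):

```python
import re

def aozora_to_html_ruby(line):
    """Convert Aozora Bunko furigana markup to HTML <ruby> tags,
    which Word's HTML importer turns into real furigana."""
    # Explicit base: ｜base《reading》 (the base may contain kana)
    line = re.sub(r'｜([^《｜]+)《([^》]+)》',
                  r'<ruby>\1<rt>\2</rt></ruby>', line)
    # Implicit base: the kanji run immediately before 《reading》
    line = re.sub(r'([\u4e00-\u9fff\u3005]+)《([^》]+)》',
                  r'<ruby>\1<rt>\2</rt></ruby>', line)
    return line

print(aozora_to_html_ruby('昨日《きのう》は｜晴れ《はれ》'))
# → <ruby>昨日<rt>きのう</rt></ruby>は<ruby>晴れ<rt>はれ</rt></ruby>
```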

[Note that Aozora Bunko actually supplies XHTML versions of their texts with properly-tagged furigana, but they also do some other things to the text that I don’t want to try to import into Word, like replacing references to obsolete kanji with PNG files.]

Sunday, June 15 2008

Ooma goes retail!

We’ve been taking it slow, but we’ve finally got a retail trial running. If you’re in the Los Angeles area, you can now see our product in person before buying one, at Best Buy.

“Now throw the switch and let us begin the battle for the planet.”
— The Brain

Thursday, July 24 2008

Sony Reader firmware update, finally!

[Update: sample picture of a PDF with kanji and furigana below the fold]

Quite a while ago, Sony promised to update their e-ink reader (the 505 model, at least; owners of the original 500 are SOL) to support Adobe Digital Editions (the emerging DRM ebook standard), as well as fix a lot of bugs and in general support the product. People have been wondering if the update would ever happen, or if the features would only show up in a new model. The recent UK release of the 505 was a head-scratcher as well, since it came without any announcement of the overdue update.

It took a while, but it’s here (more precisely, it’s linked from here; there’s no direct download link). Lots of other improvements, including SDHC compatibility and… (wait for it)… kanji in PDF files! You still need to use one of the hacks to see Chinese and Japanese text in text files and menus, but now that there’s a real firmware installer for the 505, you can recover from bad hacks.

Looks good so far.

[Update: the PDF reflow works pretty well for straightforward text-heavy PDFs with sensible internal layout. That is, the order the text was generated in the PDF file is the order it will appear; it doesn’t understand “columns” as such. Unfortunately, the Microsoft Word equation editor violates this constraint, and furigana in Word is implemented as an equation. Net result: Japanese PDFs may turn into crap when you ask the reader to reflow them, so you should format them for its page size.

This also means that graphics-heavy PDF files can’t be resized at all. Maps and complex diagrams must be converted to JPG to be useful, because the PDF viewer still doesn’t scroll, and the resize button is always a reflow button now.

Generally, the UI is much faster (except the date-entry screen, which is glacial), and page-turning is slightly faster. The only EPUB-format document I’ve tried turned out to be very graphics-heavy, which basically locked up the device during rendering. I haven’t tried an SDHC card, but people are reporting very mixed results. I’m loving the kanji support in PDFs, and look forward to trying an updated version of the Unicode font hack to get kanji working in text files as well.]

(Continued on Page 3066)

Wednesday, July 30 2008

Make More People!

I’m doing some load-testing for our service, focusing first on the all-important Christmas Morning test: what happens when 50,000 people unwrap their presents, find your product, and try to hook it up. This was a fun one at WebTV, where every year we rented CPUs and memory for our Oracle server, and did a complicated load-balancing dance to support new subscribers while still giving decent response to current ones. [Note: it is remarkably useful to be able to throw your service into database-read-only mode and point groups of hosts at different databases.]

My first problem was deciphering the interface. I’d never worked with WSDL before, and it turns out that the Perl SOAP::WSDL package has a few quirks related to namespaces in XSD schemas. Specifically, all of the namespaces in the XSD must be declared on the wsdl:definitions element to avoid “unbound prefix” errors, and then you have to write a custom serializer to reinsert the namespaces after wsdl2perl.pl gleefully strips them all out for you.

Once I could register one phony subscriber on the test service, it was time to create thousands of plausible names, addresses, and (most importantly) phone numbers scattered around the US. Census data gave me a thousand popular first and last names, as well as a comprehensive collection of city/state/zip values. Our CCMI database gave me a full set of valid area codes and prefixes for those zips. The only thing I couldn’t find a decent source for was street names; I’m just using a thousand random last names for now.

I’m seeding the random number generator with the product serial number, so that 16728628 will always be Elisa Wallace on W. Westrick Shore in Crenshaw, MS 38621, with a number in the 662 area code.
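The trick is using the serial number as the PRNG seed, so a subscriber’s identity is a pure function of the serial. A Python sketch with stand-in data pools (the real lists came from Census and CCMI data; these names and the street-address format are placeholders of mine):

```python
import random

# Tiny stand-in pools; the real ones held a thousand names each,
# plus a full city/state/zip/area-code table.
FIRST = ['Elisa', 'James', 'Maria', 'Robert']
LAST = ['Wallace', 'Nguyen', 'Ortiz', 'Klein']
PLACES = [('Crenshaw', 'MS', '38621', '662'),
          ('Dayton', 'OH', '45402', '937')]

def phony_subscriber(serial):
    """Seed a private PRNG with the serial number, so the same
    serial always generates the same fake identity."""
    rng = random.Random(serial)
    city, state, zip5, npa = rng.choice(PLACES)
    # Exchanges start at 200 to stay plausible under NANP rules.
    phone = '%s-%03d-%04d' % (npa, rng.randint(200, 999),
                              rng.randint(0, 9999))
    street = '%d %s St.' % (rng.randint(1, 9999), rng.choice(LAST))
    return (rng.choice(FIRST), rng.choice(LAST),
            street, city, state, zip5, phone)
```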

Over the next few days, I’m going to find out how many new subscribers I can add at a time without killing the servers, as well as how many total they can support without exploding. It should be fun.

Meanwhile, I can report that Preview.app in Mac OS X 10.5.4 cheerfully handles converting a 92,600-page PostScript file into PDF. It took about fifteen minutes, plus a few more to write it back out to disk. I know this because I just generated half a million phony subscribers, and I wanted to download the list to my Sony Reader so I could scan through the output. I know they all have unique phone numbers, but I wanted to see how plausible they look. So far, not bad.

The (updated! yeah!) Sony Reader also handles the 92,600-page PDF file very nicely.

[Update: I should note that the “hook it up” part I’m referring to here is the web-based activation process. The actual “50,000 boxes connect to our servers and start making phone calls” part is something we can predict quite nicely based on the data from the thousands of boxes already in the field.]

Wednesday, August 20 2008

Three points define a plane…

…four points define a wobble. Some months back, I left myself a note to buy the Manfrotto Modo Pocket camera stand when it finally reached the US. I had taken their tabletop tripod with me to Japan, but hadn’t used it much because of the overhead: pull it out of the bag, find a dinner-plate-sized surface to set it up on, take the shot.

I didn’t bother buying any of the other “quickie” mini-tripods that are out there, because most of them struck me as gimmicks first, stabilizers second. The Modo Pocket, though, looked eminently practical:

  • Small enough to be left on the camera while it’s in your pocket, with a passthrough socket to mount on a larger tripod or monopod.
  • Usable open or closed.
  • Solidly constructed, like most Manfrotto products.
  • A design that derives its cool looks directly from its functionality.
  • It’s even a nice little fidget toy.

What it isn’t is a tripod. If you put a three-legged camera stand down on a surface, it might end up at an odd angle, or even fall over if there’s too much height variation between the legs, but it’s not going to wobble. A four-legged stand is going to wobble on any surface that’s not perfectly flat, and is also going to be subject to variations in manufacture.

The legs on my shiny new Modo Pocket are about two sheets of paper off from being perfectly aligned, which means that it can wobble a bit during long exposures. Adjusting it to perfection is trivial, but even once it’s perfectly aligned on perfectly flat surfaces, it won’t be that way out in the real world.

It can’t be, because it has four fixed-length legs. This is a limitation, not a flaw. Just like it’s not designed to work with an SLR and a superzoom (it would fall over in a heartbeat), it’s not designed to replace a tripod. It’s designed to help the camera in your pocket grab a sharp picture quickly, before you lose the chance. I expect to get some very nice, sharp pictures with this gadget, and I don’t regret the $30 in the least.

Saturday, September 6 2008

Dear Open Source community,

This is the sort of attitude that makes me want to bitch-slap some sense into the lot of you.

(Continued on Page 3104)

Tuesday, December 2 2008

Yeah, we’re geeks; deal with it.

An Ooma Christmas

Monday, April 27 2009

Dictionary update

[Update update: I’ve made a small change to add the full JMnedict name dictionary; a lot of things that used to be in Edict/JMdict have been moved over to this much-larger secondary dictionary, and I finally got around to integrating it. The English translations aren’t searchable yet, mostly because I need to rework the form and add the kanji dictionary to Xapian as well, so that I have J↔E, N↔E, and K↔E.]

One downside of moving a lot of stuff onto my new shared-hosting account is that I have to give up a lot of control over what’s running. Not only do I have to work through an Apache .htaccess file instead of reconfiguring the server directly, but I can’t run my own servers on their machine.

So, goodbye Sphinx search engine, hello Xapian (thanks, Pixy). While it suffers from a lack of documentation between “baby’s first search” and “211-page C++ API document”, it has a lot to offer, and doesn’t require a server. One thing it has is a full-featured query parser, so you can create searches like “pos:noun usage:common lunch -keyword:vulgar” to get common lunch-related nouns that don’t include sexual slang (such as the poorly-attributed usage of ekiben as a sexual position). That allows me to use the same tagging for the E-J searches that I use in SQLite for the J-E searches. [note: everything’s just filed under “keyword:” in this first pass, and the valid values are the same as the advanced-search checkboxes]

I need a full-text search to do English-Japanese, because the JMdict data isn’t really designed for it. There are hooks in the XML schema, but they’re not used yet. As a result, my search results are a bit half-assed, which makes the new query support useful for refining the results. I can also split out the French, German, and Russian glosses into their own correctly-stemmed searches; with Sphinx, there was one primary body field to search, so all the glosses were lumped together. With a small code change, I can tag each gloss with the correct ISO language code and index them correctly.

The new version is now live on jgreely.net/dict, which means I should be able to move that domain over to the shared-hosting account soon.

Once I figured out how to use Xapian (through the Search::Xapian Perl module, of course), replacing Sphinx and adding the keyword support took a few minutes and maybe half a page of code, total. In theory, I could use it for the J-E searches as well, but I’d lose the ability to put wildcards anywhere in the search string, which comes in handy when I’m trying to track down obscure or obsolete words.

One thing I haven’t figured out is why I can’t use add_term with kanji arguments; both Xapian and Perl are working entirely in Unicode, but passing non-ASCII arguments to add_term throws an error. The workaround is to set the stemmer to “none” and use index_text, and that’s fast enough that I don’t need to worry about it right now.

The most annoying thing about the Xapian documentation is how well-hidden the prefix support is. The details aren’t in the API at all; you can learn how to add them to a term generator or query parser, but the really useful explanation is over in the Omega docs.

Things that suck

  1. 10% packet loss on my DSL line.
  2. Three hours diagnosing the problem so that I could convince tech support it wasn’t my equipment. And, yes, I even had a spare DSL modem lying around.
  3. At least four hours spent on the phone with support at various levels, mostly spent listening to Muzak and repeating parts of item #2.
  4. Being told that it will be 11 days before someone can physically come out and check the lines, since resetting the DSLAM didn’t fix it.
  5. Discovering that every other service provider in the area (cable, wireless, etc) has at least a 5-day lead time, and juicy up-front costs for the required gear.

Sunday, June 14 2009

Abbyy FineReader Pro 9.0, quick tests

I’ve gotten pretty good at transcribing Japanese stories and articles, using my DS Lite and Kanji sonomama to figure out unfamiliar kanji words, but it’s still a slow, error-prone process that can send me on half-hour detours to figure out a name or obsolete character. So, after googling around for a while, I downloaded the free 15-day demo of FineReader Pro and took it for a spin. Sadly, this is Windows software, so I had to run it in a VMware session; the only Mac product that claims kanji capability has terrible reviews and shows no sign of recent updates.

First test: I picked up a book (Nishimura’s murder mystery collection Ame no naka ni shinu), scanned a two-page spread at 600-dpi grayscale, and imported it into FineReader. I had to shut off the auto-analysis features, turn on page-splitting, and tell it the text was Japanese. It then correctly located the two vertically-written pages and the horizontally-written header, deskewed the columns (neither page was straight), recognized the text, and exported to Word. Then I found the option to have it mark suspect characters in the output, and exported to Word again. :-)

Results? Out of 901 total characters, there were 10 errors: 6 cases of っ as つ, one あ as ぁ, one 「 as ー, one 呟 as 眩, and one 駆 recognized as 蚯. There were also two extra “.” inserted due to marks on the page, and a few places where text was randomly interpreted as boldface. Both of the actual kanji errors were flagged as suspect, so they were easy to find, and the small-tsu error is so common that you might as well check all large-tsu in the text (in this case, the correct count should have been 28 っ and 4 つ). It also managed to locate and correctly recognize 3 of the 9 instances of furigana in the scan, ignoring the others.

I’d already typed in that particular section, so I diffed mine against theirs until I had found every error. In addition to FineReader’s ten errors, I found two of mine, where I’d accepted the wrong kanji conversion for words. They were valid kanji for those words, but not the correct ones, and multiple proofreadings hadn’t caught them.

The second test was a PDF containing scanned pages from another book, whose title might be loosely translated as “My Youth with Ultraman”, by the actress who played the female team member in the original series. I’d started with 600-dpi scans, carefully tweaked the contrast until they printed cleanly, then used Mac OS X Preview to convert them to a PDF. It apparently downsampled them to something like 243 dpi, but FineReader was still able to successfully recognize the text, with similar accuracy. Once again, the most common error was small-tsu, the kanji errors were flagged as suspect, and the others were easy to find.

For amusement, I tried Adobe Acrobat Pro 9.1’s language-aware OCR on the same PDF. It claimed success and looked good on-screen, but every attempt to export the results produced complete garbage.

Both tests were nearly best-case scenarios, with clean scans, simple layouts, and modern fonts at a reasonable size. I intend to throw some more difficult material at it before the trial expires, but I’m pretty impressed. Overall, the accuracy was 98.9%, but when you exclude the small-tsu errors, it rises to 99.6%, and reaches 99.8% when you count only actual kanji errors.
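The arithmetic behind those percentages, using the error counts from the first test:

```python
# 901 characters recognized; 10 total errors, of which 6 were the
# small-tsu confusion and 2 were actual kanji substitutions.
total = 901
for label, errors in [('all errors', 10),
                      ('excluding small-tsu', 4),
                      ('kanji errors only', 2)]:
    accuracy = 100.0 * (total - errors) / total
    print('%s: %.1f%%' % (label, accuracy))
# prints 98.9%, 99.6%, and 99.8% respectively
```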

List price is $400, but there’s a competitive upgrade available for customers with a valid license for any OCR software for $180. Since basically every scanner sold comes with low-quality OCR software, there’s no reason for most people to spend the extra $220. They use an activation scheme to prevent multiple installs, but it works flawlessly in a VMware session, so even if I didn’t own a Mac, that’s how I’d install it.

[updates after the jump]

(Continued on Page 3362)

Friday, June 19 2009

Using Abbyy FineReader Pro for Japanese OCR

[Update: if you save your work in the FineReader-specific format, then changes you make after that point will automatically be saved when you exit the application; this is not clear from the documentation, and while it’s usually what you want, it may lead to unpleasant surprises if you decide to abandon changes made during that session.]

After several days with the free demo, in which I tested it with sources of varying quality and tinkered with the options, I bought a license for FineReader Pro 9.0 (at the competitive upgrade price for anyone who owns any OCR product). I then spent a merry evening working through an album where the liner notes were printed at various angles on a colored, patterned background. Comments follow:

  • Turn off all the auto features when working with Japanese text.
  • In the advanced options, disable all target fonts for font-matching except MS Mincho and Times New Roman. Don’t let it export as MS Gothic; you’ll never find all of the ー/一 errors.
  • Get the cleanest 600-dpi scan you can. This is sufficient for furigana-sized text on a white background.
  • Set the target language to Japanese-only if your source is noisy or you’re sure there’s no random English in the text. Otherwise, it’s safe to leave English turned on.
  • Manually split and deskew pages if the separation isn’t clean in the scan.
  • Adjust the apparent resolution of scans to set the output font size, before you tell it to recognize the text.
  • Manually draw recognition areas if there’s anything unusual about your layout.
  • Rearrange the windows to put the scan and the recognized text side-by-side.
  • Don’t bother with the spell-checker; it offers plausible alternative characters based on shape, but if the correct choice isn’t there, you have to correct it in the main window anyway. Just right-click as you work through the document to see the same data in context.
  • You can explicitly save in a FineReader-specific format that preserves the entire state of your work, but it creates a new bundle each time, and it won’t overwrite an existing one with the same name. This makes it very annoying when you want to simply save your progress as you work through a long document; each new save includes a complete copy of the scans, which adds up fast.
  • If you figure out how to get it to stop deleting every full-width kanji whitespace character, let me know; it’s damned annoying when you’re trying to preserve the layout of a song.
  • Once you’ve told it to recognize the text, search the entire document for these common errors:
    • っ interpreted as つ and vice-versa
    • ー interpreted as 一 and vice-versa; check all other nearby katakana for “small-x as x” errors while you’re at it
    • 日 interpreted as 曰
    • Any English-style punctuation other than “!”, “:”, “…”, or “?”; most likely, they should be the katakana center-dot, but it might have torn a character apart into random fragments (rare, unless your background is noisy).
    • The digits 0-9; if your source is noisy, random kanji and kana can be interpreted as digits, even when English recognition is disabled.
  • Delete any furigana it happens to recognize, unless you’re exporting to PDF; it just makes a mess in Word.
  • In general, export to Word as Formatted Text, with the “Keep line breaks” and “Highlight uncertain characters” options turned on.
  • If your text is on a halftoned background and you’re getting a lot of errors, load up the scan in Photoshop, use the Strong Contrast setting in Curves, then try out the various settings under Black & White until you find one that gets rid of most of the remaining halftone dots (I had good luck with Neutral Density). After that, you can Despeckle to get rid of most of the remaining noise, and use Curves again to force the text to a solid black.
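Most of that checklist can be mechanized. A Python sketch of a post-OCR linter; the suspect lists are mine, assembled from the errors above, so extend to taste:

```python
import re

# (pattern, reason) pairs for characters worth eyeballing after OCR.
SUSPECTS = [
    ('[っつッツ]', 'small vs. large tsu: check each against the scan'),
    ('[ー一]', 'long-vowel mark vs. the kanji for "one"'),
    ('曰', 'usually a misread 日'),
    ('[0-9]', 'in a noisy scan, digits may be misread kana or kanji'),
    ("[.,;()'-]", 'ASCII punctuation: often a mangled katakana middle dot'),
]

def lint(text):
    """Return (line, column, character, reason) for every suspect."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pattern, reason in SUSPECTS:
            for m in re.finditer(pattern, line):
                hits.append((lineno, m.start() + 1, m.group(), reason))
    return sorted(hits)
```

Run it over the exported text and work through the hits with the scan beside you; it deliberately flags every っ/つ, since that error is common enough that you might as well check them all.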

Tuesday, July 7 2009


VLC hits version 1.0. Now they can start working on the user interface!

Thursday, July 9 2009


Just for amusement…

ORK4 QRcode

Friday, July 24 2009

Dear Bryan O’Sullivan,

Here’s your definitive manual’s complete comparison of Perforce to Mercurial:

Perforce has a centralised client/server architecture, with no client-side caching of any data. Unlike modern revision control tools, Perforce requires that a user run a command to inform the server about every file they intend to edit.

The performance of Perforce is quite good for small teams, but it falls off rapidly as the number of users grows beyond a few dozen. Modestly large Perforce installations require the deployment of proxies to cope with the load their users generate.

In order, I say, “bullshit”, “feature”, “buy a server, dude”, and “you’re doing it wrong”.

In fairness, the author admits up front that his comments about other tools are based only on his personal experience and biases, and the inline comments for this section point out its flaws. Still, it’s clear that his personal experience with Perforce was… limited. Also, he’s either not aware of the features it has that Mercurial lacks, or simply discounts them as “not relevant to the way Our Kind Of People work”.

I’m not criticizing the tool itself, mind you; I’ve tried out several distributed SCMs in the past few years, and Mercurial seems to be fast, stable, easily extensible, and well-supported. I’m switching several of my Japanese projects to it from Bazaar, and it cleanly imported them. It handles Unicode file names and large files a lot better, which were causing me grief in the other tool.

There are things I can’t do in Mercurial that I do in Perforce, though, and some of them will likely never be possible, given the design of the tool. [Update: for-instance deleted; it appears that if you always use the -q option to hg status, you avoid walking the file system, and you can set it as a default option on a per-repository basis. If the rest of the commands play nice, that will work. The real value of explicit checkouts, even in that example, is the information-sharing, something that devs often value less than Operations does.]

Thursday, July 30 2009

Ah, Emacs…

Emacs 23 natively uses Unicode. This means I can run it in a Terminal window, like God intended, and still have full Japanese support. Previous versions did funky Shift-JIS conversions that made its behavior… “eccentric” on a Mac, especially with cut-and-paste.

Now all I have to do is strip out all of the cruft from the elisp directory, and I’ll have the perfect text editor. Actually, it’ll be easier to delete everything and just add back the non-cruft as needed. There’s not much that I don’t consider cruft, so it will be pretty darn small.

[side note: a release comment says something to the effect that the internal encoding is a superset of Unicode with four times the space, which would make it a 34-bit system. WTF? Update: ah, I see; UTF-32 has a lot of empty space, with only a bit over 20 bits allocated in the Unicode standard. UTF-8 was also designed with considerable headroom, which is no surprise, given that it was invented during dinner by Ken Thompson.]

Tuesday, November 24 2009

Shock the Monkey

In this widely-linked news story, a team of researchers has reported success at curing erectile dysfunction with shockwaves. When describing how much force is being applied to the penis, they chose a very revealing comparison:

“These are very, very low energy shock waves,” Vardi said. Each shockwave applied roughly 100 bar of pressure — some 20 times the air pressure in a bottle of champagne, but less than the pressure exerted by a woman in stiletto heels who weighs 132 lbs. (60 kg).

Apparently medical research is now being performed in full dominatrix gear. Who knew?

Thursday, June 17 2010

Back from the dead…

Fontographer 5.0 is out. I knew they’d done a cleanup release after acquiring the old code, but I hadn’t expected the FontLab folks to do major new development on it.

Thursday, December 16 2010

“Okay, is the light red or green?”

Many years ago, I was working setup at a trade show, and the network guy asked me to run down to the other end of the conference hall and check out a piece of equipment for him. When I got there, he called me on the radio and asked me what color the blinkenlights were.

“Oh, you didn’t know I’m partially color-blind.”

Today, Dan Kaminsky has released a new iPhone/Android app that does real-time color filtering to allow you to compensate for these problems. I don’t have any compatible devices at the moment, but they seem to have matured enough that it will be worth buying one soon, and this will be a must-buy app.

Sunday, February 20 2011

PDF metadata on Kindle

PDF version 1.5 doesn’t work for metadata (apparently because it compresses objects to reduce the output size); save as 1.3 for it to be parsed correctly, and you’ll still need to set the filename to the title you want displayed in the main book listing, even though the device actually parses it out of the file to display on the detail page. Blech.

You can insert the metadata with pdftk as per bloovis, or some other tools (the full version of Adobe Acrobat works great, but is not exactly free…). LaTeX users can use a sledgehammer to swat this fly with the hyperref package, but you’ll need to use dvipdfmx -V3 to downrev the PDF output to 1.3.
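For the pdftk route, the update_info operation consumes the same plain-text format that its dump_data operation emits; a minimal metadata file looks something like this (the values are placeholders), applied with “pdftk in.pdf update_info meta.txt output out.pdf”:

```
InfoKey: Title
InfoValue: My Book Title
InfoKey: Author
InfoValue: Some Author
```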

Sony got their PDF software from Adobe (for the DRM, mostly), so their Readers don’t have this problem. Sadly, this means that a file generated for the Kindle will display much slower on the Sony, since the object-compression is quite useful.

Thursday, February 24 2011

Automagic JIS/ShiftJIS/EUC to UTF8

Finally got sick of constantly dealing with the variety of encoding schemes used for Japanese text files. I still convert everything to UTF-8 before any serious use, but for just looking at a random downloaded file, I wanted to eliminate a step.

less supports input filters with the LESSOPEN environment variable, but you need something to put into it. Turns out the Perl Encode::Guess module works nicely for this, and now I no longer care if a file is JIS, ShiftJIS, CP932, EUC-JP, or UTF-8. Code below the fold.
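The actual script is below the fold, and uses Perl’s Encode::Guess; purely as a sketch of the idea, here’s the same trick in Python, trying candidate codecs in a fixed order:

```python
#!/usr/bin/env python3
"""LESSOPEN input filter for Japanese text files.

Sketch only: try candidate codecs in order and emit UTF-8.  The ordering
is heuristic -- an EUC-JP file that happened to be valid UTF-8 would be
misidentified, for example.
"""
import sys

# iso2022_jp first (JIS text is pure 7-bit, so it would also "decode" as
# UTF-8); cp932 is a superset of ShiftJIS, so one entry covers both.
CODECS = ("iso2022_jp", "utf-8", "euc_jp", "cp932")

def to_utf8(raw: bytes) -> bytes:
    for codec in CODECS:
        try:
            return raw.decode(codec).encode("utf-8")
        except UnicodeDecodeError:
            continue
    return raw  # unrecognized: pass the bytes through untouched

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1], "rb") as f:
        sys.stdout.buffer.write(to_utf8(f.read()))
```

Hooked up with something like export LESSOPEN='|/path/to/jless.py %s' in your shell startup.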

(Continued on Page 3729)

Friday, February 25 2011

Appending metadata to a PDF file

The Kindle has generally excellent support for reading PDF files, but absolutely terrible support for displaying embedded metadata. If FOO5419.pdf contains properly-specified Title and Author fields, it will appear on your Kindle as, you guessed it, FOO5419. It might show the Author on the right-hand side of the screen, and it might show Title and Author on the detail screen, but likely not.

It will work if you generate PDF version 1.3 with a self-contained Info dictionary (that is, “/Title(My Book)”, but not “13 0 obj (My Book) … /Title 13 0 R”). It will work if you do an append-only update to a v1.3 file in Adobe Acrobat Pro. It will work if you do a rewrite of a v1.3 file with pdftk.

What should work, for all PDF files, is an append-only update that uses only v1.3-ish features to create a self-contained Info dictionary. I hadn’t hacked PDF by hand since 1993, but I dusted off my reference manuals and wrote a script that correctly implements the spec.

It doesn’t work on a Kindle. Acrobat sees my data, Mac OS X Preview sees it, pdftk sees it, and every other tool I’ve tried agrees that my script generates valid PDF files with updated metadata. However, if I use my tool and then ask pdftk to convert the append-only update into a rewrite, the Kindle can see it (but only if it started out as v1.3).

I therefore declare their parser busted. The actual PDF viewer works fine, but whatever cheesy hack they’re using to quickly scan for metadata, it ain’t the good cheese.

Monday, April 4 2011

Lego that book!

But what’s it for?

(Continued on Page 3757)

Tuesday, May 24 2011

Car update

So the Camry Hybrid crossed 6,000 miles yesterday, just in time for me to drop it off for its first service. Average mileage over that period settled down to a pleasant 38.2 miles/gallon on Regular. My only complaint at the moment is that when the service-me-now timer goes off, the convenient in-dash display of range, mileage, etc, is overridden; you can get it back for a few seconds, and scroll through the different displays, but it always reverts to MAINT REQUIRED. I could find no way to reset the timer; I could add a dozen categories of new timers, but not clear one that’s already gone off.

For amusement, while I was waiting at the dealership, I sat behind the wheel of a Prius 4-door hatchback. Well, the idea was amusing, anyway; the actual experience was distinctly uncomfortable. Nice storage space with the rear seats down, though.

Wednesday, September 14 2011

Tokyo Surfing

[Note: this is one of those “braindump so I don’t miss a step when I tell someone how to do it” posts]

Let’s say that you’ve come across a web site that refuses to serve up its content to people located outside of a certain geographical region. For instance, “Japan” (or UK for BBC streams, etc).

There are two basic ways to go about this: pointing your web browser at an HTTP/HTTPS proxy service that’s located in Japan, or opening a VPN connection to a server in Japan. I chose the second method, in part because it isn’t limited to web traffic (allowing you to do things like bypass your ISP’s outgoing SMTP blocking), and in part because I already knew how.

My weapons of choice were Amazon EC2, OpenVPN (free Community Edition, easy-rsa, OpenVPN GUI for Windows, and Tunnelblick for Mac), and DynDNS plus ddclient.
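The gory details follow on the next page; as a flavor of what the client side ends up looking like, a minimal OpenVPN client config (the hostname and certificate file names are placeholders):

```
client
dev tun
proto udp
remote vpn.example.dyndns.org 1194
ca ca.crt
cert client.crt
key client.key
# send all traffic through the tunnel, not just the VPN subnet
redirect-gateway def1
```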

(Continued on Page 3888)

Saturday, March 3 2012

Goofy Mecab error

When I ran the third Louie book through my custom-reader scripts (being nearly halfway through book 2…), it warned me about a conjugation pattern it didn’t know how to handle. This happens occasionally, since my de-conjugator is based on a limited sample of Mecab output, but the word it was complaining about was a real surprise: the yodan verb 戦ふ (written “tatakafu”, but pronounced “tatakau”), conjugated into 戦はない.

The sentence was “人の死なない戦はない”, which should be read as “Hito no shinanai ikusa wa nai”. For some reason, the context matcher did not correctly determine that “人の死なない” was a clause modifying the noun “戦”, and instead fell back all the way to a pre-1946 classical conjugation of the modern verb 戦う, which would have translated into the nonsensical “person’s won’t die won’t fight”. One of the many reasons human translators still have jobs!

(the sentence actually means “this is not a battle in which no one dies”, or perhaps “there are no wars where no one dies”; I’ll have to look at the context when I get there)

Thursday, March 15 2012

I can haz Gatsby?

The product is actually quite useful, and I wish I’d brought back a few more packs from Kyoto, since none of the Japanese stores around here seems to carry it.

It’s basically a mentholated wet-nap, good for cooling down after a workout, sold by a company that markets primarily to pretty-boys. And, yes, there are worse versions of the ad.

Thursday, June 7 2012

Odd layout limitation in Word: columns

Microsoft Word has very good standards-based HTML import. It’s much more capable than OpenOffice, and advanced layout can be set with custom CSS. It’s robust enough that the majority of Word features will survive a round-trip through the HTML exporter. Your brain may explode if you try to edit the exported HTML, which IMHO is only useful for figuring out how to use their custom CSS; for sanity and source control, write in HTML, lay out in Word, print to PDF.

[and only use LaTeX if you need to do tricky things like post-process DVI and AUX files to generate cross-referenced vocabulary lists and delete content without repaginating…]

[Update: an odd limit to the importer is handling indentation of nested lists. At a certain point, it reverts to left-indented]

(Continued on Page 4031)

Friday, June 22 2012

Okay, I think I’m set for a while

All these years, I didn’t know they sold it in this size:

Simichrome polish 8.82oz can

[Update: and for the serious user, 1000 gram can.]

Sunday, July 29 2012

Things you can do more easily in Word than in InDesign…

[Update: “…without spending an extra $180 on a third-party tool that unlocks hidden, unsupported functionality”]

Lay out a sentence that contains a mixture of English and Japanese.

In Word, you can say “use this font for Japanese characters only”, automatically leaving the rest of the sentence in a more-appropriate font. If you want to do this in InDesign, you must assign a character class to each string of Japanese text, or else lay out the whole sentence in the same Japanese-capable font.

And that character class will not be applied if the sentence is used in a running header. Which means that you cannot use character-class-based styling in text that will be used as a header.

The workaround, which doesn’t work, is to use position-based nested styles in the header.

The workaround for the workaround, which doesn’t work, is to use regular-expression-based styling in the header. You can do something half-assed with regexps in a normal paragraph style, but the exact same regexp that works in the body text doesn’t work in a header style; the regexps are apparently applied before the variable substitution (which, come to think of it, is likely the problem with nested styles as well).

You can probably do Word-style font-mixing in the Japanese version of InDesign, along with vertical text, furigana, and all of the other things Word gives you in all versions, but I can’t buy that in the US. And, frankly, it’s far too expensive to ever consider trying to import a copy just to get potentially prettier printouts than Word.

[Update: it is claimed in a number of places that all of the Japanese functionality is present in the US version of InDesign, but that none of it is exposed in the UI. So, if someone sent you a document made in the Japanese version, you could print it, but not edit it. This suggests that it would be possible to export such a document to either the Tagged Text or XML formats and do some scripting work.]

Friday, August 3 2012

Useful iOS app: Systematic

I wanted something simple: an app that allowed me to enter a list of tasks and how frequently I want to do them (daily, twice a week, etc), and sort the ones I’ve been neglecting to the top. It should show me when I last did them, and have a calendar view showing my historical performance. And it doesn’t really need to do anything else. Systematic doesn’t have the calendar view yet, but it does everything else, and it’s dead simple.

You have two buttons at the top of the screen: add a task, and edit the task list. Below that is your list of tasks, with the do-soon ones at the top. Tapping on any task starts a timer that tracks how much time you’ve spent on it, and you can stop, pause, restart, or adjust the time spent. Your progress and deadline show up in small print on the task button.

In the editor, you name the task, select an icon, a frequency (once, daily, weekly, monthly), a duration (from 5 minutes to 50 hours), a repeat count (1-50 times per period), and a deadline. So, I can say that I want to practice Go-San-Go three times a week for ten minutes per session, with my success evaluated on Sundays.

And that’s it. Until the author adds the calendar view, you can only see your previous session for each task, but it uses Core Data for storage, which means everything is stored in a simple SQLite schema, and the DB itself is available from the File Sharing pane in iTunes, so it’s trivial to extract the data yourself.
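Since it’s Core Data on SQLite, the copied-out database can be poked at with any SQLite client; the table and column names are whatever the app’s data model generates (Core Data prefixes them with Z), so the first step is just listing what’s there. A sketch:

```python
import sqlite3

def list_tables(path):
    """Return the names of all tables in a SQLite database file."""
    con = sqlite3.connect(path)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master"
            " WHERE type = 'table' ORDER BY name").fetchall()
    finally:
        con.close()
    return [name for (name,) in rows]
```

From there it’s ordinary SELECTs against whichever Z-prefixed table turns out to hold the sessions.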

$2.99, designed for iPhone-sized screen; I suspect it just looks huge on an iPad right now.

Sunday, August 5 2012

Hacking Illustrator with JavaScript

You can do some entertaining and evil things to an Illustrator document with Scriptographer. For instance, I implemented a static version of the XScreenSaver module Interaggregate in about 80 lines of code, which by itself isn’t terribly practical, but being able to generate hundreds of randomly-sized circles each with their own vectors and calculate their intersections over time does suggest some interesting art-hackery.

Thursday, August 30 2012

LibreOffice: becoming useful

I just installed the latest Mac-native version of LibreOffice, and found that the HTML import is now mostly usable, not only correctly handling encodings and most CSS-based layout, but even recognizing Word-specific CSS and flagging it using the document-review functionality (sadly, it still ignores ruby tagging for furigana, but ruby is basically broken anyway). Also, the Draw module imports CorelDraw documents back to version 7, with most features intact (I still need version 3 and 4, but I can work around that with an old copy of 7 that I have running in a virtual).

The basic functionality has been there for a while, but quirkiness, lack of stability, and iffy interoperability were always problems, and it looks like the Libre team is serious about addressing them, which didn’t seem to be the case in the OpenOffice days.

Wednesday, September 12 2012

Dear CAS Hanwei,

Katanas are held in their scabbards by friction. I should not have to tell you this.

Monday, October 8 2012

Kindle Paperwhite

Mine arrived today, with the leather cover. First take, I love the high-resolution front-lit screen, and the redraw is fast enough to make the touchscreen navigation workable, if still a bit pokey. My custom PDF Japanese novels look great, and page-turning performance is excellent.

It’s a bit sluggish at handling PDFs with lots of line graphics (like the JNTO tourist guides), but better than my old 3rd-gen, and the multitouch gestures work reasonably well for zooming and navigating large images, and quite well for zooming in on text.

The front-lighting is almost perfect, only becoming irregular at the very bottom of the screen, which in ordinary use is simply the status line. However, if you switch to landscape, with the default margins it can intrude into the text a bit, so a slight negative.

The wireless setup fails to correctly handle WPA2 Enterprise EAP-TTLS/PAP; it lets me set everything correctly, but my Radius logs show it still trying to use EAP-MD5. Minor nuisance, since I didn’t buy the 3G version, but I can work around it.

The worst thing I can say about it right now is that they shipped a crappy USB cable that kept losing the connection while I had it plugged into my computer. Visually it’s identical to the cable from my older Kindle, but that one fits fine, and this one is flaky.

Oh, and they added an onscreen Japanese keyboard. I’ll have to play with that later, but it seems to work.

The leather smartcover feels nice in the hand, and does the auto-on/off trick the kids are so fond of today.

However, attempting to queue up a bunch of my books for download made Kindle go boom:

(Continued on Page 4093)

Tuesday, April 16 2013

Good cutlery shops in Kyoto

If you’re in Kyoto and looking for good Japanese-style kitchen knives, pocket knives, or woodworking tools, Minamoto no Hisahide has excellent stuff and reasonable prices. They’re in the Teramachi shopping arcade off of Shijo-dori, right around the corner from Nishiki Tenmangu shrine (which, by the way, is why the food/kitchen street that runs west from here is called Nishiki Market).

Aritsugu, not far away, is a high-end shop with excellent handmade knives and hammered-copper pots and pans. I don’t like anyone enough to buy gifts there, and I really couldn’t justify filling my luggage with heavy copper that would never get used, so I only window-shopped there.

[Update: I found the receipt, and the third knife shop not only wasn’t in the Teramachi arcade, it wasn’t even in Kyoto! No wonder I never found it again. It was actually the Ichimonji outlet on the Doguyasuji kitchen street in Osaka’s Namba district. We’d stopped in there the first day we were in town, so my memories were quite blurred by the end of the trip.] There’s another knife shop on Teramachi, where I picked up a very nice (and quite affordable!) damascus nakiri for a friend, as well as some of the standard-grade Higo no Kami pocket knives, but at the moment, I can’t find the name. I’ll have to hunt through my receipts.

Sunday, May 5 2013

Evernote iOS gotcha

Evernote is an extremely useful cross-platform application, allowing you to keep lightly-formatted documents in sync across Windows, Mac, iOS, and Android devices. Heck, they even support Blackberry, Windows Phone, and Windows RT tablets, and if you’re masochistic enough to run a Linux desktop, you can at least run it in Chrome.

The basic product is free, and most of their money seems to come from an array of partnerships rather than the small monthly fee for premium use. The friends I know who use it mostly don’t even know there is a premium option; they just like the convenient syncing.

The feature that made premium useful for me was offline notebooks; my phone and laptop are usually online, but I tend to leave the wireless off on my Sony Android tablet unless I’m actively using it, because it drains the battery. However, it turns out that there’s another feature that is really, really useful, and that allows you to recover from an annoying issue in the iOS client.

I was using my iPhone to make a small change to a long note that was filled with images, and I wanted to remove some gratuitous formatting from a paragraph. When you pull up the formatting panel, there are two buttons side by side: “Simplify” and “Plain Text”. If you accidentally hit the second one, all formatting including embedded images is removed from the note, and there’s no undo. If your phone has a data connection, your change will sync up as soon as you close the note, and wipe out the good version everywhere else.

(technically, there is one level of undo, but most people don’t know that “shake the device” is the iOS gesture for “undo typing/delete”; I certainly never would have guessed it after two years with an iPhone and several more running apps on an iPod Touch, because 90% of apps that implement shake do something else with it, and it’s usually something stupid that I want to turn off. Coincidentally, a lot of people apparently would love to turn off “shake to undo”…)

Fortunately, one of the other features Evernote premium gives you is version history; if the good version was ever synced up, you can get it back… from the desktop or web clients, at least; this feature hasn’t been implemented in iOS yet. It’s also possible to use offline editing to modify the good version that’s cached on another device, and generate a sync conflict that preserves both versions.

If you don’t have premium, your only real option is generating a sync conflict by editing on another device before closing the note on the iOS device.

Why was I messing with the formatting in the first place? Because Evernote’s cross-platform nature often results in some really hideous font and text-size issues when you paste things in on the different clients. I have no idea what’s going to happen when I paste text into it.

Tuesday, May 7 2013

Roughen, Expand, Simplify

Words to live by, possibly even outside the context of producing a distressed look in Illustrator.

(and the infectious “snap to pixel grid” setting needs to die in a fire)

[Update: Wow, they really broke the actions support in CS5.x; simple operations do not work in the standard “accelerated” mode, and you can’t just tell the damn thing to always use step-by-step mode]

Thursday, May 9 2013

Ink Pickpocket Boy

The Google translation of this is amusing, but easily understood: 墨すり小僧 (sumi-suri kozou) means “ink-rubbing apprentice”. However, there are several godan verbs that conjugate as “suri”, with meanings that include printing, shaving, frosting, rubbing, and… picking pockets. Kozou isn’t quite as versatile, but youngster/errand boy/apprentice still leaves you plenty of room to guess the wrong context.

Also, this 30-minute process shows you why a lot of people buy their calligraphy ink as bottles of liquid these days.

Sunday, June 16 2013

Adobe Swatch Exchange file format

The only things I can really add to this excellent description of the format are:

  • the code for grayscale is actually “Gray”, not “GRAY”, although it’s possible some software will accept either.
  • all values are stored big-endian.
  • numbers are single-precision floating point (in Perl pack() terms, “f>”), strings are UTF-16 with a trailing NUL word.
  • LAB colors are in the range 0 to 1, -128 to 127, -128 to 127; no adjustment is necessary.
  • Apparently none of the generated files he examined included the end-palette chunk, which has type 0xC0020000 and does not include a name field.
  • In actual parsing, Illustrator ignores the end-palette chunk anyway, though; all colors have to be part of some group when imported, so they’re added to the most recently named group.
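Pulling those notes together, here’s a sketch of a writer/reader pair in Python rather than the Perl my scripts use; the block layout (a 16-bit block type of 0x0001 for a color entry, followed by a 32-bit byte length) is the usual community description of the format, so treat it as an assumption:

```python
import struct

def pack_color(name, model, values):
    """Pack one color block: name, 4-char model code, float components."""
    uname = (name + "\0").encode("utf-16-be")        # trailing NUL word
    body = struct.pack(">H", len(name) + 1) + uname  # length in words
    body += model.ljust(4).encode("ascii")           # "RGB ", "Gray"...
    body += struct.pack(">%df" % len(values), *values)  # big-endian floats
    body += struct.pack(">H", 2)                     # 2 = "normal" color
    return struct.pack(">HI", 0x0001, len(body)) + body

def pack_ase(colors):
    """Write an ASE file: "ASEF" signature, version 1.0, count, blocks."""
    blocks = b"".join(pack_color(*c) for c in colors)
    return b"ASEF" + struct.pack(">HHI", 1, 0, len(colors)) + blocks

FLOATS = {"RGB": 3, "LAB": 3, "CMYK": 4, "Gray": 1}

def parse_ase(data):
    """Read the color blocks back out; group blocks are just skipped."""
    assert data[:4] == b"ASEF"
    _major, _minor, nblocks = struct.unpack_from(">HHI", data, 4)
    off, colors = 12, []
    for _ in range(nblocks):
        btype, blen = struct.unpack_from(">HI", data, off)
        off += 6
        if btype == 0x0001:
            nlen, = struct.unpack_from(">H", data, off)
            raw = data[off + 2 : off + 2 + 2 * nlen]
            name = raw.decode("utf-16-be").rstrip("\0")
            p = off + 2 + 2 * nlen
            model = data[p : p + 4].decode("ascii").strip()
            values = struct.unpack_from(">%df" % FLOATS[model], data, p + 4)
            colors.append((name, model, list(values)))
        off += blen
    return colors
```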

With those additions, my little Perl script is capable of reading everything that comes with Illustrator or is generated by the current version of the Kuler service. Piping the output of my ase2txt script into txt2ase produces identical files, so I’m pretty sure I’ve got everything right.

For fun, I even added the ability to sort swatches by lightness in the L*ab space, and merge in color names using the closest match in Aubrey Jaffer’s collection of color dictionaries (using the conversion and distance formulas from EasyRGB).
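“Closest match” here is plain Euclidean distance in L*a*b* space (CIE76, which is the basic delta E formula EasyRGB documents); a sketch, with a toy dictionary:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def closest_name(lab, dictionary):
    """Find the named color nearest to lab in a {name: (L, a, b)} dict."""
    return min(dictionary, key=lambda name: delta_e76(lab, dictionary[name]))
```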

Combining the NBS/ISCC dictionary with the results of the XKCD color survey produces a quite reasonable set of names (except for the NBS-ISCC definition of “black”, which might be useful for surface colors, but is useless for monitors). The Resene paint colors offer excellent coverage, but the names are just too eccentric for general description (ex: jon, shark, zeus, cello, haiti, nero, merlin, etc).

Thursday, June 20 2013

The fine art of Calling Bullshit

Fast Design magazine writes a puffy little piece on the decline of wood in consumer electronics design, filled with quotes about the mystical and spiritual qualities of this natural material that make it ill-suited to modern use.

Commenter Bradley Gawthrop calls bullshit:

Wood stuff hasn’t been made at scale by craftsmen who whisper at trees in a very very long time. Go to a Thomasville factory and see for yourself. It’s treated like any other industrial material, it’s just more expensive.

The reason electronics engineers don’t use wood is because it’s poorly suited to the product. It’s not rigid in thin cross sections, it doesn’t hold tight tolerances well over humidity changes, it’s expensive to procure (especially in China), and manufacturing with it is expensive, not because it requires hand craftsmanship (it doesn’t) but because all wood manufacturing is subtractive. It can’t be molded and cast and extruded the way glass and aluminum and plastic can be. It arrives in unpredictably sized slabs which have to be milled and milled and milled again until you reach the desired shape.

Also, we manufacture all this stuff in China, where they don’t even have enough timber for their own domestic use.

Friday, July 5 2013

More stupid emacs defaults

Emacs 23 adds line-move-visual, on by default, which changes the behavior of previous-line and next-line commands to take you to the next row on the screen. That is, if your line wrapped because it was too long to fit in the window, Control-N takes you to whatever position in that line happens to be one row below the current position.

This is the way the NeXT-derived widgets in Mac OS X implemented their “emacs-like” editing, but it’s not the way Emacs has ever worked, which makes it a baffling choice for a default behavior. Especially since it breaks keyboard macros for line-by-line operations, something quickly noticed by users (update: actually, the official release notes don’t even mention a tenth of what they changed; you just have to guess, apparently).

So, using Emacs as an actual text editor now requires, at minimum:

;; never pick a major mode based on the file’s name or contents
(defun set-auto-mode (&optional foo) (interactive "p") (fundamental-mode))
;; start new buffers in fundamental-mode
(setq-default initial-major-mode 'fundamental-mode)
;; ignore mode-setting local variables embedded in files
(setq-default enable-local-variables nil)
;; make TAB insert a literal tab instead of indenting
(global-set-key (kbd "TAB") 'self-insert-command)
;; leave line endings alone
(setq-default inhibit-eol-conversion t)
;; restore logical-line movement for C-n/C-p
(setq line-move-visual nil)

[and why didn’t I notice when I installed it on my laptop several years ago? Because I reverted to the vendor-supplied Emacs at some point (I no longer recall why), and the Linux distros on our servers only recently upgraded to it]

Thursday, August 29 2013

“Shut up and take my money”

Jeff Atwood and Weyman Kwong are making a sturdy programmer’s keyboard with silent mechanical keyswitches. While I enjoy the ear-shattering clatter of my current mechanical keyboard, I’m less fond of the shoddy physical construction and poor multi-keypress handling (and, of course, I’d swallow broken glass before dealing with the assholes at Matias ever again), so this is definitely on my must-buy list.

Sunday, October 13 2013

The death of the camcorder?

I was in two large Costcos this weekend, with their displays all set up for the holiday season. The only camcorder they stocked was the GoPro. Everything else was a digicam advertising 1080p HD video support, in one of the usual form factors. Up to 42x optical zoom, although for the most part the ISO rating didn’t go high enough to compensate for the f/6-ish aperture at the long end.

For all the trash talk about the iPhone killing off the digicam, it looks like the digicam killed the camcorder first.

Tuesday, December 31 2013

Fitbit Force

My health has been… peculiar for the last several months, and it’s quite frustrating to be repeatedly told that the latest round of tests came back negative. Generally good, since it means that the most comprehensive physical I’ve ever had says that the major systems are working perfectly, but it also means that we still don’t know what’s causing the problem that’s left me horribly short of breath and both physically and mentally fatigued.

When we finally got around to the sleep clinic, the take-home sleep study came back “inconclusive” after two weeks, and they scheduled an in-lab study two weeks later, with results coming two weeks after that.

Several of my friends have been Fitbit fans for quite a while, so while this was going on, I pre-ordered the new Force model, which in addition to steps and stairs, tracks sleep time and disruptions (based on movement during the night). It also has “social” features like auto-shaming, which I will never be taking advantage of.

(Continued on Page 4358)

Friday, February 21 2014

Fitbit Force recall

Fitbit has started a voluntary recall of the Force fitness tracker, due to people experiencing allergic reactions. Initially, the reactions were thought to be caused by indifferent cleaning practices leading to sweat and grime buildup, but they’re now thinking it’s a combination of allergies to the nickel content and to the glue used to bond the wristband.

I’m betting on the glue, because while I haven’t had an allergic reaction, mine developed a strong, foul odor that definitely wasn’t due to lack of cleaning. In fact, the Japanese Type Cover 2 that I bought for my Surface Pro 2 arrived with the exact same odor, although not quite as strong.

I started an RMA to have mine exchanged for a new one, but when they said it would take 4+ weeks to process, I decided to see if I could deodorize both the Fitbit and the Type Cover myself. I was successful, so I canceled the RMA a few days ago.

My weapon of choice was a pound of activated carbon, purchased in the fishtank aisle at the local pet shop. I buried both gadgets in the stuff for several days. For the Fitbit, I taped over the display and the charging port (being sure to cover the two small holes for the altimeter), and for the Type Cover, I taped a paper towel over it.

Fitbit is working on a new model, and I hope that in addition to switching materials, they come up with a new clasp design that’s less likely to catch on clothing and pop open.

Thursday, February 27 2014

Surface Pro 2

So, right after Christmas, I caught the brief window where the Microsoft Surface Pro 2 was back in stock in my preferred configuration (8GB RAM, 256GB SSD), and bought one, along with the only keyboard cover that was available in a non-hideous color, the first-generation Touch Cover. Since they still haven’t released the Power Cover that gives you real keys and an extra 50% battery life, I made do with that for a while, and then got a good price on the second-generation Type Cover at Amazon Japan.

[Note: for the last few releases, it’s been a lot less painful to switch keyboard types in Windows; you used to have to hack the registry when you had US Windows and a Japanese keyboard, now you can just add the correct layout and manually switch. It still can’t auto-detect different keyboards the way the Mac has been doing for a long time, but it’s progress.]

There are a few quirks that have been discussed in the many reviews of the Surface Pro, but it’s genuinely good hardware, marred only by the immaturity of Windows’ handling of high-resolution displays. Basically, every piece of software that isn’t rebuilt to use the HiDPI APIs (apparently incomplete; at least, that’s Adobe’s explanation for why they’re having so much trouble) will be scaled to make the text readable, and this breaks all sorts of layouts. For many applications, your choices are “big and fuzzy”, “way too small”, and occasionally “missing most menus and dialog text” (yes, that means you, FontExplorer Pro). “Way too small” is particularly annoying with a touchscreen, but the pen and trackpad have the resolution to handle tiny targets. And the problem goes away if you connect an external HDMI display.

My only complaints about the keyboard covers have to do with the trackpad. First, there’s no way to shut off tapping. There’s an app that claims to offer this feature, but it simply doesn’t work on the Pro 2, and there’s no hint of an update. On any tap-enabled trackpad, I’m constantly mis-clicking while trying to move the pointer across the screen, and it drives me nuts. They’re just too damn sensitive about the amount of pressure required to “tap”.

The second problem with the trackpad is that it often doesn’t work if you plug a USB device in. Because Microsoft’s own USB/Ethernet adapter is a 10Mbit USB2 device, I bought a third-party USB3 gigabit adapter that also includes a 3-port hub. It works great, but if I plug in the adapter and then wake up the tablet, the keyboard cover doesn’t get enough power to run the trackpad. Reverse the order and all is well.

Typical battery life is 8+ hours, unless I’m playing Skyrim, in which case I get a bit over 4. It never gets uncomfortably warm, and the fans are nice and quiet. The two-position kickstand is a nice upgrade over the first-generation Pro, and makes it possible to play Skyrim in bed on a lap desk. The speakers are quite loud for a tablet, and better than most laptops I’ve used.

It’s fantastic for Illustrator since the last update, but until Adobe gets the resolution problems sorted out, Photoshop is annoying to use, both because you need a hack to make the icons visible, and because the 64-bit version has issues with the Pro 2’s graphics drivers. Lightroom is fine, and InDesign is reportedly working well, too. [all of these being the pay-to-play CC versions, which is a rant for another day. Let’s just say there are some cranky pros out there annoyed by a combination of incompatible changes and workflow-crippling bugs]

The “app” market is, as expected, filled with iPaddish crap. I’ve deleted most of the apps that I’ve tried, and I haven’t found a lot of good ones to try. If I had to choose between a standard Surface and an iPad, I’d buy the iPad and complain about it; instead, I get to enjoy the Pro 2.

How do I feel about Windows 8.1? It was designed for a tablet, works well on one, and sucks elsewhere. There are some compatibility issues compared to Windows 7 (VPN software, assorted third-party drivers, etc.). On the little netbook I upgraded, I needed to hunt down a Start-menu replacement to make it tolerable; not good, just tolerable.

Oh, and how did I pay for it? A friend sold off a bunch of my old Magic: The Gathering cards on eBay. Just a handful of high-value cards paid for the tablet, keyboard, gigabit adapter, HDMI adapter, and a new Bluetooth mouse, with money left over. We still need to go through the rest of my cards and put them all up as a big batch. And then see if anyone wants to buy a big batch of INWO, black-border Jyhad, XXXenophile, etc…

Thursday, March 27 2014

Microsoft Surface Power Cover

Microsoft finally released the Surface Power Cover recently, and it was worth the wait.

There is only one downside: the keys aren’t backlit the way they are on the standard Touch and Type Covers. I can’t imagine why they did this, since the set of people who want significantly more battery life but don’t want a backlighting option has to be pretty small.

Physically, it’s twice as thick and twice as heavy as the standard Type Cover, giving my Surface Pro 2 more of a netbook feel to carry, but not unpleasantly so. It came with a warning label telling you to make sure you have all the latest software updates before attaching it, but since I preordered mine the moment they flipped the switch on the Microsoft site, I got it the day before the official release date, and the firmware updates didn’t show up until the next day. It worked fine anyway, and after the firmware update, the tablet and cover battery levels were tracked separately.

How well does it work? Well, I just finished 45 minutes on the elliptical with a ripped Bluray disc playing, at 2/3 volume and 100% brightness, with WiFi turned on. The system reported that I had just over 15 hours of battery life left, and based on how the Pro 2 has performed the past few months, I believe it.

Wednesday, February 4 2015

No, your other left!

Dear DeWalt,

The instruction manual for the DWP611 compact router is quite clear and straightforward, with one slight exception:

Dewalt: your other left

Tuesday, February 17 2015

Virtualizing my dead MacBook Pro

My laptop died recently after 5+ years of loyal service. Between the fact that pretty much all the Macs are due for a refresh soon, the fact that I only reluctantly upgraded to Mountain Lion a while back and have no desire to migrate my very stable environment to the iPad-and-Helvetica beta known as Yosemite, and the fact that I just don’t want to spend $3-4K right now, I had the office buy me one instead. My official work laptop had just turned 8 years old, so it seemed a reasonable request.

They didn’t want to spend $3-4K either, so now I’ve got a 13-inch Retina MacBook Pro with a Core i5 and 512GB of SSD rather than the i7 with 1TB that I wanted. Got the 16GB of RAM, at least, which makes it possible to allocate 6-8GB for a VMware session containing my old hard drive. This allows me to split off my work and personal environments (which wasn’t a problem when I owned the hardware…). I’m waiting on a new USB3 enclosure for my old 1TB SSD, so at the moment I’m running the virtual on a Western Digital 2TB USB3 drive, and the spinning disk makes things take a bit longer than I’d like. Fully functional, though.

I have only a few things to complain about migrating my old environment into VMware Fusion:

  1. vmware-vmx can take several minutes to exit after suspending the virtual machine and completely exiting VMware Fusion; this prevents me from ejecting the external drive. Specifically, the logs show that “pagefile sync to disk” starts running after the GUI shows the virtual as suspended, and doesn’t finish until 2-3 minutes later; the GUI doesn’t seem to know about this, and cheerfully exits.
  2. random resolution changes every time I switched between fullscreen and windowed (fixed by manually editing the preferences file to include pref.autoFitGuestToWindow = "FALSE" and pref.autoFitFullScreen = "stretchGuestToHost" and then forcing it to use the non-Retina screen resolution).
  3. iTunes crashes immediately (known problem with workaround: sudo nvram boot-args='vmw_gfx_caps=0').
  4. Photoshop CS5.5 doesn’t seem to be working correctly; the most obvious flaw is the lack of item highlighting in menus. Illustrator seems to be fine, though. (update: the iTunes workaround also fixes Photoshop)
  5. Aperture doesn’t work, because it spits on the emulated graphics card.
  6. collision between real and virtual Mission Control hot corners in fullscreen; not much I can do about that one, it seems.
  7. installation hell: I needed a Mac virtual to bootstrap the copy of my old drive into VMware container format, and every single one of the painstakingly-saved installers I have for Lion, Mountain Lion, Mavericks, and Yosemite failed at the end of the install. Re-downloading Mountain Lion fixed that, “somehow”, which led to the next problem, which was incredibly slow copy speeds in SuperDuper. The trial copy of Carbon Copy Cloner worked, although it was originally going to take forever, too, because of a Yosemite bug that I had to work around.
  8. The Yosemite bug: if you create your user account as part of the Yosemite installation, and link it with iCloud, then a mandatory security policy is set that forces screen-locking after five minutes of idle time. This cannot be disabled, even by shutting off iCloud and breaking the link to the account. Performance of your VMware session goes to hell when the screen is locked, which I consider another OS bug. There are only two fixes: create a new user account that has never known the whip-hand of iCloud, or install Caffeine from the App Store to fake out the idle timer.
  9. General Mac cruftiness: far too many preferences are tied to your hardware ID. Some of the stuff I had to reconfigure was just stupid.
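For the record, the preference edit in item 2 goes into VMware Fusion’s global preferences file, which on my machine lives at ~/Library/Preferences/VMware Fusion/preferences (check the path on yours; Fusion versions have moved it around). The two lines are plain text, exactly as written:

```
pref.autoFitGuestToWindow = "FALSE"
pref.autoFitFullScreen = "stretchGuestToHost"
```

Quit Fusion completely before editing, or it’s liable to rewrite the file on exit and clobber your changes.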
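And for item 1, until VMware fixes the GUI, the safest workaround I’ve found is to just wait for the vmware-vmx process to actually go away before ejecting. A minimal sketch, assuming the VM lives on a volume called “VM Drive” (that name is hypothetical; substitute your own):

```shell
#!/bin/sh
# Wait for vmware-vmx to finish its background "pagefile sync to disk"
# before ejecting the external drive holding the virtual machine.

wait_for_exit() {
  # Poll until no process with this exact name is left running.
  while pgrep -x "$1" > /dev/null; do
    sleep 5
  done
  return 0
}

wait_for_exit vmware-vmx

# "/Volumes/VM Drive" is a hypothetical mount point; adjust for your setup.
VOLUME="/Volumes/VM Drive"
if [ -d "$VOLUME" ]; then
  diskutil eject "$VOLUME"
fi
```

Crude, but it beats yanking the drive while the hypervisor is still flushing the pagefile.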

I’ve migrated most of the work stuff over to the physical machine already, and with Homebrew and Perlbrew I’m almost fully functional again, and no longer need to carry a Mac Mini back and forth every day. I need to carry an external drive now, though, along with Thunderbolt-to-Ethernet and Thunderbolt-to-Firewire adapters. And a USB optical drive for those Special Occasions…