Web

No, I did not know that


Amazon.com home page: “Did You Know Amazon.com Sells NCAA Garden Gnomes?”

No, nor did I know that they sold matching Cheerleader Garden Gnomes.

Dear Amazon,


Fail.

[concept] A tech blog that doesn't suck


  1. The use of marketing phrases to describe the release of a product shall be forbidden. In particular, the word "drop" shall be reserved for articles announcing that a product has been removed from the market.
  2. Press releases for objects which are not only unreleased, but nonexistent, shall be clearly labeled "concept art," "design project," "spec work," "wishful thinking," "investor pitch," "violates second law of thermodynamics," or some other appropriate, factual description.
  3. Press releases that contain neither release dates nor specifications shall be considered to fall into one of the above categories.
  4. Factual information contained in press releases shall be clearly presented above the fold.
  5. Embedded links to manufacturers and products shall go directly to the most appropriate page on the most official source available at the time. The practice of burying the source to generate additional ad impressions shall not be tolerated.
  6. Contributors shall demonstrate at least a grade-school grasp of English composition.
  7. Except when discussing products and services specifically designed for adult-only audiences, contributors shall write for a general audience, not a locker room.
  8. Contributors and editors shall not suffer a shill to live.

Ruined for life


After far too many years online, I was initially unable to parse the following photograph correctly:


More fun with furigana and jQuery


When I first started playing with pop-up furigana, I was aware of the official method of specifying them in HTML, using the RUBY, RB, and RT tags. They’re only supported in IE, though, and the standard half-size presentation simply doesn’t make sense for the low resolution of displays, even with good anti-aliasing.

Some folks are using them anyway, like the University of Virginia Library Japanese Text Initiative, which is another good source of free literature. If you’re not running IE (or the Firefox extension that they say works), the furigana degrade relatively gracefully into full-sized kana in parentheses following the glossed word, with no indication of how many of the preceding kanji are being glossed.

Tonight, I had the sudden urge to adapt my system to work with the will-eventually-work-in-other-browsers RUBY tags. This turned out to be pretty easy, for the simple case. I just added this code right before my gloss script:

$(document).ready(function(){
	$("ruby").each(function(){
		// Pull the base text and its reading out of the RUBY element.
		var rb = $(this).children("rb").eq(0).text();
		var rt = $(this).children("rt").eq(0).text();
		// Rebuild it as a glossed span; the reading pops up via the title.
		var gloss = $('<span class="gloss">' + rb + '</span>');
		gloss.attr('title', rt);
		$(this).replaceWith(gloss);
	});
});

Dear Google,


I like Google Earth. I even pay for the faster performance and enhanced features. A few things, though:

  • Why can't I keep North at the top of the screen? I hate constantly double-clicking the "N" in the gaudy navigation scroll-wheel.
  • Why do you auto-enable new layers in my view, so that, for instance, I suddenly see every golf course on the planet, even though I had that entire category disabled?
  • Why can't I switch between different sets of enabled layers?
  • Why is the "Google Earth Community" layer such a dumping ground of unsorted crap? For instance, what value does this have to anyone who's not an airline pilot? Or this, where points scattered around the globe are all labeled, "here's my collection of 4,728 placemarks".

I’m sure I can come up with more if I think about it for a bit…

[update: ah, press ‘n’ for north, ‘r’ for a total view reset, and then figure out how to fix all of the KMZ files that were broken by the upgrade]

Safari Cookies


Safari now uses a completely different method of storing cookies, which unfortunately means that the only decent management tool I ever found, Cocoa Cookies, doesn’t work any more.

So I rolled my own:

(/usr/libexec/PlistBuddy -c print ~/Library/Cookies/Cookies.plist |
  awk '/Domain = / {x++; print x-1, $0}' |
  awk '!/mee.nu|amazon/ {print $1}' |
  sort -rn | sed -e 's/^/delete :/';
 echo save; echo quit) |
/usr/libexec/PlistBuddy ~/Library/Cookies/Cookies.plist

Note that you really don’t want to run this as-is, and probably want something more robust than a shell one-liner anyway. The bits that matter are:

  1. Run "/usr/libexec/PlistBuddy -c print" to dump all your cookies in an easily parsed format.
  2. The array of cookies is zero-based.
  3. The array shrinks as you delete things from it with "delete :N", so you want to start at the end and work backward.
  4. The original file isn't altered until you send a "save".
  5. Safari seems to write this file out whenever you get a cookie, and notices when it's changed on disk.
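
The steps above can be sketched without touching a real cookie file. The dump below is just a stand-in for PlistBuddy's "-c print" output (hypothetical domains), so only the indexing and reverse-deletion arithmetic is exercised:

```shell
#!/bin/sh
# Stand-in for the PlistBuddy dump: one "Domain = ..." line per cookie.
dump='    Domain = doubleclick.net
    Domain = amazon.com
    Domain = example-tracker.com
    Domain = mee.nu'

# Number each cookie from zero (step 2), drop the keepers, then reverse
# the order (step 3) so deleting never shifts a not-yet-deleted index.
cmds=$(echo "$dump" |
  awk '/Domain = / {x++; print x-1, $0}' |
  awk '!/mee.nu|amazon/ {print $1}' |
  sort -rn |
  sed -e 's/^/delete :/')

echo "$cmds"
# prints "delete :2" then "delete :0"
```

Piping those commands (plus "save" and "quit") into PlistBuddy is all the real version adds.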

Slight change to the site...


So, the downside to adding jQuery to all my pages is that, with my carefully throttled bandwidth, it ended up adding significantly more time to the page load than you’d expect. This was recently explained very clearly over on Surfin’ Safari.

As a result, I moved all the JS libraries over to Amazon S3, where I’m already hosting my pictures. This turned out to be a bad idea, because while their service is very quick, every once in a while it fails to deliver a page. And if the jQuery library doesn’t get loaded, my comment-spam trap becomes lethal.

The system I came up with a while ago, which has proven to be 100% effective, is to set the form-submission URL to “imacommentspammer”, and use JavaScript to replace it with the real URL once the page finishes loading. My log-scanning script checks the Apache logs for this and other “interesting” URLs, and immediately adds the associated IP address to the firewall’s block list. Spammers that scan the static HTML pages never see the correct URL, so into the trap they go.

The unfortunate side effect was that if S3 failed to deliver the jQuery library, any attempt to post a comment resulted in my site vanishing from your view of the Internet. That’s a little extreme even for me, so I added a second step: the form submit button is disabled in the HTML, and enabled by the same script that fixes the URL.
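
The log-scanning half of the trap boils down to pulling client IPs out of any request for the decoy URL. This is only a sketch against a fake Apache common-log excerpt (the addresses and the firewall step are stand-ins; the real script also watches other "interesting" URLs):

```shell
#!/bin/sh
# Two fake common-log lines: one spammer hitting the decoy URL, one
# legitimate visitor. In the common log format, field 1 is the client IP.
log='10.0.0.5 - - [date] "POST /imacommentspammer HTTP/1.1" 404 209
10.0.0.9 - - [date] "GET /index.html HTTP/1.1" 200 1542'

# Anyone requesting the decoy never saw the JavaScript-fixed form URL,
# so every matching IP is a bot. sort -u collapses repeat offenders.
bad=$(echo "$log" | awk '/imacommentspammer/ {print $1}' | sort -u)

echo "$bad"
# prints "10.0.0.5"
```

In the real version, each printed address would be fed to the firewall's block list (minus anything on the whitelist) instead of echoed.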

[I noticed this because the script tried to ban me; fortunately, I have a whitelist for just such occasions.]

“Need a clue, take a clue,
 got a clue, leave a clue”