Amazon.com home page: “Did You Know Amazon.com Sells NCAA Garden Gnomes?”
No, nor did I know that they sold matching Cheerleader Garden Gnomes.
Fail.
After far too many years online, I was initially unable to parse the following photograph correctly:
When I first started playing with pop-up furigana, I was aware of the official method of specifying them in HTML, using the RUBY, RB, and RT tags. They’re only supported in IE, though, and the standard half-size presentation simply doesn’t make sense for the low resolution of displays, even with good anti-aliasing.
Some folks are using them anyway, like the University of Virginia Library Japanese Text Initiative, which is another good source of free literature. If you’re not running IE (or the Firefox extension that they say works), the furigana degrade relatively gracefully into full-sized kana in parentheses following the glossed word, with no indication of how many of the preceding kanji are being glossed.
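For reference, the official markup looks something like this (the word and reading here are just an example, and the optional RP elements are what supply the fallback parentheses in browsers that don't render ruby):

<ruby><rb>東京</rb><rp>（</rp><rt>とうきょう</rt><rp>）</rp></ruby>

A ruby-capable browser stacks the reading in half-size type above the base text; everything else just renders it inline as 東京（とうきょう）.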
Tonight, I had the sudden urge to adapt my system to work with the will-eventually-work-in-other-browsers RUBY tags. This turned out to be pretty easy, for the simple case. I just added this code right before my gloss script:
$(document).ready(function(){
  $("ruby").each(function(){
    var rb = $(this).children("rb").eq(0).text();
    var rt = $(this).children("rt").eq(0).text();
    // wrap the base text in a new element carrying the reading as its title;
    // the wrapper element here is an assumption -- use whatever element/class
    // your gloss script actually keys on
    var gloss = $('<span>' + rb + '</span>');
    gloss.attr('title', rt);
    $(this).replaceWith(gloss);
  });
});
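The net effect: each RUBY element collapses into its base text with the reading stored in the title attribute, something along the lines of <span title="とうきょう">東京</span>, ready for the same pop-up treatment as any other gloss (assuming, again, that the wrapper element above matches what the gloss script looks for).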
I like Google Earth. I even pay for the faster performance and enhanced features. A few things, though:
I’m sure I can come up with more if I think about it for a bit…
[update: ah, press ‘n’ for north, ‘r’ for a total view reset, and then figure out how to fix all of the KMZ files that were broken by the upgrade]
Safari now uses a completely different method of storing cookies, which unfortunately means that the only decent management tool I ever found, Cocoa Cookies, doesn’t work any more.
So I rolled my own:
(/usr/libexec/PlistBuddy -c print ~/Library/Cookies/Cookies.plist |
  awk '/Domain = / {x++;print x-1,$0}' |
  awk '!/mee.nu|amazon/{print $1}' |
  sort -rn | sed -e 's/^/delete :/';
  echo save; echo quit) |
/usr/libexec/PlistBuddy ~/Library/Cookies/Cookies.plist
Note that you really don’t want to run this as-is, and probably want something more robust than a shell one-liner anyway. The bits that matter are: the PlistBuddy print dump shows one “Domain = ” line per cookie, the first awk numbers those lines starting from zero, the second awk throws away the domains I want to keep and passes along only the index, and the sort -rn makes sure the delete commands run from the highest index down, so that removing one entry doesn’t shift the indexes of the ones still waiting to be removed.
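In other words, the left half of the pipeline just writes a little command script for the second PlistBuddy to execute; with made-up indexes, the text flowing across that last pipe looks something like this:

delete :41
delete :17
delete :3
save
quit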
So, the downside to adding jquery to all my pages is that, with my carefully throttled bandwidth, it ended up adding significantly more time to the page load than you’d expect. This was recently explained very clearly over on Surfin’ Safari.
As a result, I moved all the JS libraries over to Amazon S3, where I’m already hosting my pictures. This turned out to be a bad idea, because while their service is very quick, every once in a while it fails to deliver a page. And if the jquery library doesn’t get loaded, my comment-spam trap becomes lethal.
The system I came up with a while ago, that has proven to be 100% effective, is to set the form-submission URL to “imacommentspammer”, and use JavaScript to replace it with the real URL once the page finishes loading. My log-scanning script checks the Apache logs for this and other “interesting” URLs, and immediately adds the associated IP address to the firewall’s block list. Spammers that scan the static HTML pages never see the correct URL, so into the trap they go.
The unfortunate side-effect was that if S3 failed to deliver the jquery library, any attempt to post a comment resulted in my site vanishing from your view of the Internet. That’s a little extreme even for me, so I added a second step: the form submit button is disabled in the HTML, and enabled by the same script that fixes the URL.
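Roughly, the fix-up looks something like this (the form id and the real submission URL below are placeholders; only the “imacommentspammer” decoy is the real thing):

$(document).ready(function(){
  // the static page ships with action="imacommentspammer" and a disabled
  // submit button; both get fixed up only after the page (and jquery)
  // finish loading, so bots scraping the raw HTML never see the real URL
  var form = $("#comment-form");            // placeholder id
  form.attr("action", "/post-comment");     // placeholder for the real URL
  form.find(":submit").removeAttr("disabled");
});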
[I noticed this because the script tried to ban me; fortunately, I have a whitelist for just such occasions.]