"Don't lecture me about opinions I don't actually have."
— J.D. Vance

Of course, to truly pilot this mech, she should be wearing a sailor suit…
Remember when Britney Spears was a really cute teenage girl? And then a few weeks later she was the sex kitten, a position from which she ruled the world until she was old enough to make her own career decisions and revealed that she’s about as bright and sophisticated as Tom Cruise is sane? I actually saw this magazine cover three times at stores without recognizing her.
It makes me (mostly) appreciate the Japanese idol system, where both the industry and the fans will cheerfully ostracize a star for even slightly tarnishing her image.
This is a long-winded way of saying that I expect Maki’s sex-kitten phase to last much longer than Britney’s, with no trailer-trash serial weddings to disrupt the fun.
Here’s her latest video. J like.
“…is a new Teenage Mutant Ninja Turtles movie!”
I’m not entirely sure why, but it must be so. Perhaps it’s the unique vision of writer/director Kevin Munroe, as displayed in his previous hit, the D-grade console game Freaky Flyers. [disclaimer: I’ve never heard of this game, and apparently neither has anyone else]
Nice rendering in the trailer, though.
[this is the full story behind my previous entry on the trouble with tar pipelines]
One morning, I logged in from home to check the status of our automated nightly builds. The earliest builds had worked, but most of the builds that started after 4am had failed, in a disturbing way: the Perforce (source control) server was reporting data corruption in several files needed for the build.
This was server-side corruption, but with no obvious cause. Nothing in the Perforce logs, nothing in dmesg output, nothing in the RAID status output. Nothing. Since the problem started at 4am, I used find to see what had changed, and found that 66 of the 600,000+ versioned data files managed by our Perforce server had changed between 4:01am and 4:11am, and the list included the files our nightly builds had failed on. There were no checkins in this period, so there should have been no changed files at all.
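The search was something along these lines; the marker timestamps are placeholders for the actual date:

    cd ~perforce/really_important_source_code
    touch -t 200601010401 /tmp/window-start   # 4:01am marker, placeholder date
    touch -t 200601010411 /tmp/window-end     # 4:11am marker
    find . -type f -newer /tmp/window-start ! -newer /tmp/window-end -print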
A quick look at the contents of the files revealed the problem: they were all truncated. Not to 0 bytes, but to some random multiple of 512 bytes. None of them contained any garbage, they just ended early. A 24-hour-old backup confirmed what they should have looked like, but I couldn’t just restore from it; all of those files had changed the day before, and Perforce uses RCS-style diffs to store versions.
[side note: my runs-every-hour backup was useless, because it kicked off at 4:10am, and cheerfully picked up the truncated files; I have since added a separate runs-every-three-hours backup to the system]
I was stumped. If it was a server, file system, RAID, disk, or controller error, I’d expect to see some garbage somewhere in a file, and truncation at some other size, perhaps 1K or 4K blocks. Then one of the other guys in our group noticed that those 66 files, and only those 66 files, were now owned by root.
Hmm, is there a root cron job that kicks off at 4am? Why, yes, there is! And it’s… a backup of the Perforce data! Several years ago, someone wrote a script that does an incremental backup of the versioned data to another server mounted via NFS. My hourly backups use rsync, but this one uses tar.
Badly:
    cd ~perforce/really_important_source_code
    find . -mtime -1 -print > $INTFILES
    tar cpf - -T $INTFILES | (cd /mountpoint/subdir; tar xpf -)
Guess what happens when you can’t cd to /mountpoint/subdir, for any reason…
Useful information for getting yourself out of this mess: Perforce proxy servers store their cache in the exact same format as the main server, and even if they don’t contain every version, as long as someone has requested the current tip-of-tree revision through that proxy, the diffs will all match. Also, binary files are stored as separate numbered files compressed with the default gzip options, so you can have the user who checked it in send you a fresh copy. Working carefully, you can quickly (read: “less than a day”) get all of the data to match the MD5 checksums that Perforce maintains.
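For a single damaged binary revision, the repair looks roughly like this; the depot path and revision number are illustrative:

    # Binary revisions are stored as numbered gzip files in a ",d"
    # directory next to the depot path; recompress the fresh copy
    # with default gzip options and drop it in place:
    cd ~perforce/really_important_source_code/some/path/file,d
    gzip -c /tmp/fresh_copy_from_user > 1.5.gz

    # Then have Perforce confirm the content matches its stored MD5:
    p4 verify //depot/some/path/file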
And then you replace that backup script with a new one…
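When that cd fails, the second tar doesn't stop; it extracts into whatever the current directory happens to be, which is the source tree the first tar is still reading, and since cron runs the script as root, that would explain both the root-owned files and the truncation at 512-byte tar block boundaries. A minimal sketch of a safer version, with illustrative paths rather than the actual script:

    #!/bin/sh
    # Sketch of a replacement nightly backup; paths are illustrative.
    SRC=~perforce/really_important_source_code
    DST=/mountpoint/subdir
    INTFILES=/tmp/p4backup.$$

    # Bail out before touching anything if either directory is unusable.
    cd "$DST" || exit 1
    cd "$SRC" || exit 1

    find . -mtime -1 -print > "$INTFILES"

    # The && means the extracting tar can never run in the wrong place:
    # if the cd fails, the right side of the pipe dies instead of
    # unpacking the archive on top of the source tree.
    tar cpf - -T "$INTFILES" | (cd "$DST" && tar xpf -)

    rm -f "$INTFILES"

Better still, switch it to rsync, which is what the hourly backups were already doing.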
I am here, at the corner of Boronda and N. Main in Salinas. I wish to go here, to a parking garage in Palo Alto.
Yahoo thinks I should take US-101 to CA-85 to I-280 to Page Mill Road to Alma to University. 76.2 miles, 78 minutes.
Mapquest thinks I should take US-101 to University. 74.32 miles, 78 minutes.
I recently upgraded my car’s GPS navigation system, replacing the 2001 software with the 2005 version. Before the upgrade, it thought I should take US-101 to Oregon Expressway to Middlefield to University, which (perhaps accidentally) recognizes that University is a lousy place to get off the highway. It was a good route.
Imagine my surprise when the four-years-newer firmware proposed the following route: US-101 to Espinosa to Castroville Road to CA-1 to CA-17 to CA-85 to US-101 to University. 78.25 miles, 95 minutes.
This is not a “scenic route” option; the car thinks it’s offering good advice, despite the fact that you can persuade it to admit that Yahoo’s and Mapquest’s suggestions are both shorter and faster. They’ve significantly changed the way they weight different roads, and I haven’t figured out where the “don’t be stupid” button is.
And I need to, because this is a mind-bogglingly stupid route, starting with the very first turn. Espinosa is a two-lane highway with heavy farm traffic, and getting onto it from US-101 requires making a left turn across southbound traffic, from a full stop. When you eventually make it to Castroville, the speed limit in town is 25mph. CA-1 is two lanes of lovely coastal highway up to Santa Cruz, with lots of trucks struggling to navigate the hills and curves. CA-17 is a very pretty—and ridiculously crowded—drive through the Santa Cruz Mountains. CA-85 isn’t nearly as bad as it used to be, but even with the recent improvements, merging back onto US-101 at the north end can be messy during rush hour.
Unfortunately, my ability to control routing decisions is limited to flavor (“shortest”, “fastest”, “maximize highway”) and “avoid this road”. I don’t want to tell it to avoid Espinosa, because when it’s not rush hour, it’s the fastest route to the Borders in Seaside, and it’s always the fastest way back from the coast.
[update: when calculating the B-to-A route, the new version of the software agrees with the old one, and combining that with the Mapquest picture gives me a pretty good clue about what’s going on. I think US-101 between Salinas and Watsonville is being weighted as a non-highway road, making Espinosa/Castroville the shortest path to a highway. Either the old data gave that stretch of 101 a better rating, or the new software optimizes for starting road conditions at the expense of overall trip quality.
So, if I tell it to avoid the stretch of Castroville Road just before CA-1 North, it should change the weights enough to send me up 101 when going to Palo Alto without interfering with route planning to Seaside. The potential downside is that it might try to route me up 101 to CA-17 to reach Santa Cruz, but that depends on just how heavily it weights the “avoid this road” markers.]
[update: oh, this is getting good. I set an “avoid this area” marker on Castroville Road just past the 156 south exit, so it wouldn’t interfere with routing to Seaside and Monterey. The car recommended taking 156 south to the next exit to get onto CA-1.
So I moved the marker a bit further down the road, past that exit, and the car took 156 south a bit further to reach CA-1. So I moved the “avoid this area” marker onto CA-1, and the car routed through Hollister.
Along San Juan Grade Road. This is a paved goat path running through the Gabilan Mountains.
However, if I remove all of the “avoid” markers and set a waypoint along 101 near San Juan Bautista, the car gives me a perfectly sensible route, which it admits is shorter and faster than all previous recommendations (74 miles, 69 minutes).]
Great job getting Flash Player 9 out the door, after absorbing Macromedia. Pity it doesn’t install correctly on a Mac.
I ran the installer, and it launched Safari when it finished, taking me to your “gee, isn’t Flash cool?” page. Which didn’t load, because I didn’t have Flash. So I ran the installer again. And again. Still no Flash.
So I deleted the old copy from /Library/Internet Plug-Ins, and ran the installer again. It failed, because it didn’t have permission to write to that directory. Yes, it’s true; when you want to create files that require administrative privileges, your installer actually has to request administrative privileges. It can’t just hope that the user is logged in as root, or has foolishly made that directory world-writable.
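In the meantime, the workaround is to do the installer's job by hand; the source path is a guess, since it depends on where the installer unpacked the plugin:

    # Copy the plugin into place with the admin privileges the
    # installer never asked for (source path illustrative):
    sudo cp -R "/Volumes/Install Flash Player 9/Flash Player.plugin" \
        "/Library/Internet Plug-Ins/"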
The good news is that the folks at via.net rebooted something and got the ping times down from 500ms to 80ms. The bad news is that I’m still seeing 30% packet loss to every machine in their data center, so there’s more work to be done.
[update: ah, 17ms, no packet loss. much better]
[update: apparently there was a DDoS attack on one of the other servers they host.]
I just admire how thoroughly the photographer covered this little World Cup promotional tie-in. Two girls, two bikinis, two DVDs, seventy pictures. Yeah, that sounds about right.