Setuid bits at half mast

Dennis Ritchie has died.

"I feared that the committee would decide to go with their previous decision unless I credibly pulled a full tantrum."
    -- dmr@alice.UUCP

Bash Quiz

What is the result of feeding this expression to a Bash shell?


More fun with Anaconda

After I thought I had a decent script for figuring out what packages Anaconda would install from a Fedora 10 DVD, I decided to test it against reality. Reality made the script cry like a little girl, so it was back to the drawing board.

The problem, simply put, was that I had over-estimated the internal consistency of the data. Here’s what I learned in the process of producing a 100% match between my script and an actual default install of Fedora 10:

  1. Packages listed in comps.xml don't necessarily exist, even if they're marked mandatory (like iprutils).
  2. Conditional packages in comps.xml often depend on packages that are not themselves listed, but get included during dependency resolution (all of the OpenOffice language packs, for instance).
  3. Packages often require themselves (more than two-thirds of perl's requirements are met by... perl).
  4. Some packages specify a requirement for files installed by a package rather than features provided by that package (such as /usr/bin/perl; in an amusing note, this file is required by, but not a feature provided by, perl).
  5. A few packages require files that are not installed by any package, and that's not considered an error.
  6. It's not unusual for multiple packages to satisfy the same requirement, and when they do, Anaconda chooses the one with the shortest name. Seriously.
  7. ...unless it's obsoleted by the other one, as in the case of the synaptics driver.
  8. RPM doesn't actually care about all this nonsense; when it wants to know what libraries a package depends on, it opens it up and runs ldd on the contents.
  9. This is required exactly once during a default Fedora 10 install, to discover the fact that totem-mozplugin requires mozplugger. I had to fake that one.
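Rules 6 and 7 above (shortest name wins, unless the winner is obsoleted) can be sketched as a toy model. This is not Anaconda's actual code; the function, the provider names, and the obsoletes map are all made up for illustration:

```python
def pick_provider(providers, obsoletes):
    """Toy model of the tiebreak described above: among packages that
    satisfy a requirement, drop any candidate obsoleted by another
    candidate, then break ties by shortest name."""
    # Rule 7: a candidate obsoleted by a fellow candidate is out.
    live = [p for p in providers
            if not any(p in obsoletes.get(q, ()) for q in providers if q != p)]
    # Rule 6: shortest name wins (fall back to all candidates if
    # everything was obsoleted).
    return min(live or providers, key=len)

# Two drivers provide the same requirement, but the longer-named one
# obsoletes the shorter-named one, as with the synaptics driver:
print(pick_provider(["synaptics", "xorg-x11-drv-synaptics"],
                    {"xorg-x11-drv-synaptics": {"synaptics"}}))
# → xorg-x11-drv-synaptics
```

Without the obsoletes entry, `synaptics` would win on name length alone, which is exactly the behavior rule 6 describes.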

At some point, this knowledge will be put to use upgrading my EEE PC from Fedora 9, but now that I can declare victory and stop tinkering with the script for a while, I’m going to go finish the Japanese novel I’m currently working my way through (60 pages down, 200 to go).

Dear Macports port maintainers,

Um, “fail”.

building 'pycurl' extension
creating build/temp.macosx-10.3-i386-2.5
creating build/temp.macosx-10.3-i386-2.5/src
-DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
 -I/opt/local/include -I/opt/local/include/python2.5
 -c src/pycurl.c -o
unable to execute -DNDEBUG: No such file or directory
error: command '-DNDEBUG' failed with exit status 1
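My guess at the failure mode (not verified against the Portfile): distutils lost the compiler name entirely, so the first *flag* got executed as the command. A minimal reproduction of that shape of failure:

```shell
# Hypothetical repro: an empty compiler variable makes the shell treat
# the first flag as the command to run, just like the log above.
CC=""                                  # the compiler name that went missing
$CC -DNDEBUG -c src/pycurl.c || true   # fails: "-DNDEBUG: command not found"
```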

Two packets enter, one packet leaves!

Okay, I’m stumped. We have a ReadyNAS NV+ that holds Important Data, accessed primarily from Windows machines. Generally, it works really well, and we’ve been pretty happy with it for the last few months.

Monday, the Windows application that reads and writes the Important Data locked up on the primary user’s machine. Cryptic error messages that decrypted to “contact service for recovering your corrupted database” were seen.

Nightly backups of the device via the CIFS protocol worked fine. Reading and writing to the NAS from a Mac via CIFS worked fine. A second Windows machine equipped with the application worked fine, without any errors about corrupted data. I left the user working on that machine for the day, and did some after-hours testing that night.

The obvious conclusion was that the crufty old HP on the user’s desk was the problem (it had been moved on Friday), so I yanked it out of the way and temporarily replaced it with the other, working Windows box.

It didn’t work. I checked all the network connections, and everything looked fine. I took the working machine back to its original location, and it didn’t work any more. I took it down to the same switch as the NAS, and it didn’t work. My Mac still worked fine, though, so I used it to copy all of the Important Data from the ReadyNAS to our NetApp.

Mounting the NetApp worked fine on all machines in all locations. I can’t leave the data there long-term (in addition to being Important, it’s also Confidential), but at least we’re back in business.

I’m stumped. Right now, I’ve got a Mac and a Windows machine plugged into the same desktop gigabit switch (gigabit NICs everywhere), and the Mac copies a 50MB folder from the NAS in a few seconds, while the Windows machine gives up after a few minutes with a timeout error. The NAS reports:

smbd.log: write_data: write failure in writing to client. Error Connection reset by peer
smbd.log: write_data: write failure in writing to client. Error Broken pipe

The only actual hardware problem I ever found was a loose cable in the office where the working Windows box was located.

[Update: It’s being caused by an as-yet-unidentified device on the network. Consider the results of my latest test: if I run XP under Parallels on my Mac in shared (NAT) networking mode, it works fine; in bridged mode, it fails exactly like a real Windows box. Something on the subnet is passing out bad data that Samba clients ignore but real Windows machines obey. The NetApp works because it uses licensed Microsoft networking code instead of Samba.]

[8/23 Update: A number of recommended fixes have failed to either track down the offending machine or resolve the problem. The fact that it comes and goes is more support for the “single bad host” theory, but it’s hard to diagnose when you can’t run your tools directly on the NAS.

So I reached for a bigger hammer: I grabbed one of my old Shuttles that I’ve been testing OpenBSD configurations on, threw in a second NIC, configured it as an ethernet bridge, and stuck it in front of the NAS. That gave me an invisible network tap that could see all of the traffic going to the NAS, and also the ability to filter any traffic I didn’t like.

Just for fun, the first thing I did was turn on the bridge’s “blocknonip” option, to force Windows to use TCP to connect. And the problem went away. I still need to find the naughty host, but now I can do it without angry users breathing down my neck.]
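For the curious, the bridge in the update above can be sketched roughly like this on an OpenBSD of that era, where brconfig(8) configures bridges (the `em0`/`em1` interface names are assumptions; substitute your own NICs):

```shell
# /etc/hostname.bridge0 — transparent two-port bridge in front of the NAS
add em0
add em1
# "blocknonip" drops non-IP frames on a member port, which is what
# forced the Windows clients back onto SMB over TCP/IP:
blocknonip em0
blocknonip em1
up
```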

Ah, a bit of sanity...

It should really be called World Domination 050, because it’s providing remedial education that the student should have had before coming to college, but it’s a start:

Linux on the desktop has been a year or two away for over a decade now, and there are reasons it's not there yet. To attract nontechnical end-users, a Linux desktop must work out of the box, ideally preinstalled by the hardware vendor.


When somebody with a degree in finance or architecture can grab a Linux laptop and watch episodes of The Daily Show off of Comedy Central's website without a bearded Linux geek walking them through an elaborate hand-configuration process first, maybe we'll have a prayer.


You can't win the desktop if you don't even try. Right now, few in the Linux world are seriously trying. And time is running out.


Unfortunately "good" isn't the same as "ready to happen". The geeks of the world would like a moonbase too, and it's been 30 years without progress on that front. Inevitability doesn't guarantee that something will happen within our lifetimes. The 64-bit transition is an opportunity to put Linux on the desktop, but right now it's still not ready. If the decision happened today, Linux would remain on the sidelines.

[Update: as usual, those wacky kids on Slashdot just don’t get it.]

No comment...

Mark Shuttleworth, Ubuntu guy, on Linux success:

"If we want the world to embrace free software, we have to make it beautiful. I’m not talking about inner beauty, not elegance, not ideological purity... pure, unadulterated, raw, visceral, lustful, shallow, skin deep beauty.

We have to make it gorgeous. We have to make it easy on the eye. We have to make it take your friend’s breath away."

This just in: Ubuntu is crap

Well, at least in the area of configuration, maintenance, and release management, the current version shows its dark roots. Before anyone speaks up, I’ll say that I’m generally happy with using FC5 and RedHat Enterprise on our servers at work, but someone had recommended Ubuntu server as a possible base OS for virtualizing my personal machine with VMWare Server.

It installed correctly, but wouldn’t boot. The solution I located required the following steps:

  1. boot from the install CD
  2. discover the command-line option for booting in rescue mode
  3. guess the partition name to mount as / for rescue
  4. open a root shell
  5. manually add a nameserver to /etc/resolv.conf
  6. type apt-get install linux-686
  7. say yes to all the dependencies this requires
  8. watch and wait
  9. reboot
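Condensed into a session, the fix looked roughly like this (the partition and nameserver below are stand-ins, not what my machine actually used):

```shell
# At the install CD's boot prompt, the rescue-mode option I had to dig up:
#   boot: rescue
# Once the rescue environment gives you a root shell on the guessed
# root partition (here /dev/sda1, an assumption):
echo "nameserver 192.168.1.1" >> /etc/resolv.conf  # stand-in resolver
apt-get install linux-686   # pulls in the kernel the installer left out
reboot
```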

“Fixed in next release,” supposedly, but between that early-warning sign and some of the obvious eccentricities I tripped over, I don’t think I’ll bother with it.

“Need a clue, take a clue,
 got a clue, leave a clue”