So, the ISP who hosts jgreely.com has been sending email since February announcing an upcoming transition to a new platform.
On November 2nd, they sent one that said “we may not get to your domain before our November 28th deadline, so if you don’t want it to be shut off, you might want to run our migration tool yourself and do your own testing.”
On November 15th, they sent a friendly reminder.
On November 16th, they said the migration had been completed successfully, and I should now update my registrar with their new name servers.
Not being an idiot, I queried the new servers, and found: no MX record, no A records, and only one lonely little CNAME pointing ftp.jgreely.com to (nonexistent) www.jgreely.com. The new IP address, available only from their web console, did not listen for SMTP, POP, or IMAP, but a manual connection to port 80 showed that my trivial home page was there. The control panel also showed that my mail config had been modified, but that no data had been copied over from the old server (someone clearly doesn’t understand how IMAP works…).
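For anyone stuck doing the same sanity check, a TCP connect test is the quickest way to see which services actually answer on a new address. A minimal sketch, stdlib only; the address in the comment is a placeholder, not the real server:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds before the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical address): check the services the new box should offer.
# for name, port in [("smtp", 25), ("http", 80), ("pop3", 110), ("imap", 143)]:
#     print(name, "open" if port_open("192.0.2.10", port) else "closed")
```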
There is no published support email address. Their online chat never connects. I spent 72 minutes on hold waiting for someone to pick up, and ten minutes explaining the problem to an arrogant moron. I demanded he escalate the call, and he put me on hold for another 20 minutes. I explained the problem again, in detail, and this guy understood, and said they’d regenerate the zone file and it would be fine in a little while.
And, oh-by-the-way, since the transition of my domain was marked complete in their system, the old server could be shut down at any time. But if I noticed it and called, they’d be happy to turn it back on for a little while.
Two hours later, dig still shows no MX, no A, and one pointless CNAME.
Oh, and the “obsolete platform” had shell access; the shiny new one does not. It does, however, have a lot of overpriced add-on services, like “backup/restore” (!), SEO optimization, blahblahblah. And while on hold for over an hour, they kept telling me how paid audio and video services would “keep customers on my site longer”, and other bullshit.
If they don’t get their shit together Real Soon Now, I’ll name and shame them. And, of course, move the account elsewhere. Maybe I’ll just host it on Amazon and run it myself.
[Update: DNS and old email finally showed up. I haven’t switched yet, since it takes a while for name server changes to propagate, and I still don’t really trust these clowns. First, I’m going to back up my mail archives, then switch the IMAP config to point to the old IP address, then create a brand-new account that points to the new IP address, so I don’t lose days of incoming mail.]
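The “back up my mail archives” step is scriptable with nothing but the standard library. A sketch, assuming plain IMAP4-over-SSL; the host and credentials in the usage comment are placeholders:

```python
import imaplib
import re
from pathlib import Path

def safe_name(mailbox):
    """Turn an IMAP mailbox name into a filesystem-safe directory name."""
    return re.sub(r"[^A-Za-z0-9._-]+", "_", mailbox) or "INBOX"

def backup_mailbox(imap, mailbox, outdir):
    """Fetch every message in `mailbox` as raw RFC 822 text into outdir/<mailbox>/."""
    dest = Path(outdir) / safe_name(mailbox)
    dest.mkdir(parents=True, exist_ok=True)
    imap.select(mailbox, readonly=True)  # read-only: don't touch flags on the old server
    _status, data = imap.uid("SEARCH", None, "ALL")
    for uid in data[0].split():
        _status, msg = imap.uid("FETCH", uid, "(RFC822)")
        (dest / f"{uid.decode()}.eml").write_bytes(msg[0][1])

# Usage (placeholder host/credentials, not the real server):
# imap = imaplib.IMAP4_SSL("old-server.example.com")
# imap.login("user", "password")
# backup_mailbox(imap, "INBOX", "mail-backup")
```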
I disabled my Hurricane Electric IPv6 tunnel for now, because it was breaking Netflix and Tumblr. Netflix assumes that the existence of any tunnel means you’re trying to bypass regional content restrictions, and Tumblr breaks because browsers prefer IPv6 when it’s offered, but the tunnel simply can’t handle the massively parallel image loading of a typical endlessly scrolling cheesecake tumblr.
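The preference is easy to see from getaddrinfo(): on most dual-stack systems the IPv6 results sort first, and an application can sidestep a broken v6 path by pinning the address family. A small sketch; the host name in the comment is just illustrative:

```python
import socket

def addresses(host, family=socket.AF_UNSPEC):
    """Resolve host and return (family, address) pairs in the OS's preference order."""
    infos = socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM)
    return [(fam, sockaddr[0]) for fam, _type, _proto, _name, sockaddr in infos]

# addresses("tumblr.com") will typically list AAAA results first on a
# dual-stack machine; addresses("tumblr.com", socket.AF_INET) forces
# A records only, ignoring the (possibly broken) IPv6 path.
```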
It was an interesting experiment that came in handy when I went to set up a test IPv6 config at work, and I can get native IPv6 at home now just by asking, but there’s really no point. I have a static /29, so until they pry it out of my cold, dead hands, there’s no benefit to me.
The Perforce conference was entertaining and educational. San Francisco is a hole, but we were pretty much in the hotel all the time, except for a party/“dinner” and a walk down the street to the training facility for those of us who took classes as well.
When I first mentioned the conference, David commented about how one of his clients who tested it had serious performance issues and felt it “really, really disliked binaries being checked in”. This would come as news to the many companies who talked about checking in all their build artifacts, and to the large game companies who check in hundreds of gigabytes of digital assets for all their projects. (or even the last three companies I’ve worked at…) Git and Mercurial have problems with large files and big repositories, but Perforce? Nah.
(not that there aren’t companies who’ll sell you solutions to speed up Perforce, but that means things like “massive parallel checkouts in seconds” and “scaling to petabyte repos”)
My only complaint about the show was the limited amount of vendor swag. They tied everything into a “social” app where you checked in by scanning QR codes and built up points by posting chatty little updates. This was of course gamed to the point that the only actual prize, an Oculus Rift, was won by the person who relentlessly spammed the app with “social” updates. Most of us were there to actually pay attention to presentations and corner the development team, so it wasn’t much of a contest. I figure I’ll have our new sales rep scrounge up some of the leftover swag when she comes out to meet the team.
For me personally, I brought home a lot of good information about how to improve our current server and integrate our wanna-git devs. Their current interim gitlab-to-p4d shim is working for a lot of people, but I have to work around some issues to use it in our environment (being a little too git-like, it bypasses some of the security features in Perforce, which I can’t allow).
9/10: would kill bad robots again.
After weeks of occasional mystery outages on our office network, lasting minutes to hours, always ending as mysteriously as they started, this morning I was able to get into the router and capture something that looks an awful lot like a smoking gun: connection attempts to port 80 on a single IP address from 725,000+ machines around the world.
The catch? The destination address wasn’t on our network. It belongs to an ISP in Spain.
So, somehow, our ISP’s global routing table decided to forward this attack to us. Given that their response to the previous outages was “gee, looks fine to us”, I’m looking forward to eavesdropping on our network manager’s conversation with them.
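Quantifying that kind of flood is straightforward once you have flow data. Given a dump of `src_ip dst_ip dst_port` lines (the format and field order here are illustrative, not what any particular router emits), counting distinct sources per target is a few lines:

```python
from collections import defaultdict

def sources_per_target(lines, port=80):
    """Count distinct source IPs hitting `port` on each destination IP.

    Expects whitespace-separated lines of: src_ip dst_ip dst_port
    """
    targets = defaultdict(set)
    for line in lines:
        fields = line.split()
        if len(fields) < 3:
            continue  # skip malformed lines
        src, dst, dport = fields[0], fields[1], fields[2]
        if dport == str(port):
            targets[dst].add(src)
    return {dst: len(srcs) for dst, srcs in targets.items()}
```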
Due to a combination of company growth, whiny devs, and SOX compliance, I’m off to the Perforce Conference, aka “Merge”, April 11th-15th.
My dinner plans for the week mostly consist of “avoiding aggressive bums who piss on the sidewalk”, so unless I run into someone I know, I’ll either eat at the hotel or send out for pizza.
You know the one thing that’s really, truly gotten better about system administration? Now, when you’re sitting in the computer room poking at a sick server and checking the status of hours-long processes with lsof, strace, and tcpdump, you can watch Doctor Who on your phone.
Does it strike anyone else as odd to see recommendations on how to secure your privacy from someone whose only accomplishment in life was stealing confidential data? It’s a bit like asking a cat how to store tuna; his motives and expertise are not aligned with your interests.
When a disk fails in a RAID array, the primary risk associated with replacing it is that another disk will fail before the replacement is fully populated. At which point you’ve lost all your data.
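Back-of-the-envelope, assuming independent failures and a constant annualized failure rate (the 4% AFR and 24-hour rebuild below are made-up illustration numbers, not measurements from my disks):

```python
def second_failure_risk(surviving_disks, rebuild_hours, afr=0.04):
    """P(at least one surviving disk fails during the rebuild window),
    assuming independent failures at a constant annualized failure rate."""
    hours_per_year = 24 * 365
    # Chance a single disk fails during the rebuild window.
    p_disk = 1 - (1 - afr) ** (rebuild_hours / hours_per_year)
    # Chance that at least one of the survivors does.
    return 1 - (1 - p_disk) ** surviving_disks

# e.g. 5 surviving disks in a 6-disk RAID5, 24-hour rebuild, 4% AFR:
# second_failure_risk(5, 24) — small, but decidedly nonzero, which is
# why you sweat until the rebuild finishes.
```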
So you can understand my concern yesterday morning when, as I was walking into the computer store to buy a replacement SSD for a machine that had failed unexpectedly, I got email from a NAS reporting a failed RAID5 disk, and discovered that I had two servers to fix.
The good news is that the RAID array finished rebuilding successfully while I was rebuilding the server that needed the SSD replaced.
The bad news is that as soon as I finished the long drive home, I got email that the brand-new disk I’d just installed failed. Crib death is possible, but this time the GUI wasn’t responding reliably either, and a root shell on the NAS got hung when I ran dmesg. Which means it was the 5-year-old NAS itself failing, and the disks were probably fine. If I could get them swapped into an identical chassis. That part will have to wait until Tuesday, since while I could buy something today, Amazon Marketplace can’t get me a ReadyNAS Pro 6 on Labor Day.
I’d be more upset if the NFS mounts weren’t still working, allowing me to copy most of the data off to random free space elsewhere. I haven’t quite come up with 8.3TB yet, but a lot of that is archived logs that may have to wait.
Oh, and the original, unrelated SSD replacement? I’m still babysitting that one, too, since the system involved is a fairly gross hack, held together with twist-ties and bubblegum.
My holiday weekend is going just rosy, thanks. How’s yours?