I just got email touting the new “Adobe XD CC” app, which promises:
“The future of experience design. No experience required.”
This sounds like exactly what happened when Apple set fire to their decades of usability studies and ~~sucked on a pistol~~ Boldly Invented New Paradigms.
Several years ago, the personally-owned MacBook Pro (Togetsukyō) that I used for work went flaky, and I didn’t feel like spending the three grand or so that it would take to buy a fully-tricked-out replacement. So I had my boss buy me the best he could get approved, which was the 13-inch model with 16 GB of RAM and a 512GB SSD. I named it Hello!Party, inspired by Scott’s ‘favorite’ song, and carefully downsized onto the smaller SSD.
It served me well for three years, but around the same time AppleCare gave out, so did the right-side USB port. So he ordered me a new one, and the approval process didn’t require so many compromises, so I ended up with the 15-inch touchbar model with 16 GB and a 1 TB SSD, which I named Macchi: it’s not headless, but it’s formidably proportioned. Apart from the mind-bogglingly terrible keyboard and the mostly-just-annoying touchbar, it’s a spiffy thing, and the extra space gives plenty of room for VMware Fusion virtuals.
But there was a problem: a bunch of data from Tog never fit onto H!P, and was only available in an archive on the house server, a refurbished Mini named Melwin. I’d also decided to keep a cleaner separation between work and home, so I had two accounts on H!P: one with all my work stuff, one with personal email, iTunes, Yojimbo, etc. For more fun, H!P was bound to our AD domain, so the work account had funky permissions. And for still more fun, when I got Macchi, I only moved the work account, so I had to carry H!P if I wanted access to personal stuff.
Ten days ago, I noticed that Apple’s refurbished store had just gotten a new batch of 12-inch MacBooks in stock, which seemed like the perfect opportunity to clean up my personal/work accounts once and for all. Buying refurbs directly from Apple is actually the best way to get a reliable machine with a full warranty, as clearly explained by the folks at MacRumors. Brand new hardware is a crap shoot (especially models released in the past 3-4 months), and no one else’s refurbs are eligible for AppleCare. It’s got a Core i7, 16 GB of RAM, and a 512GB SSD, and I kept the name Hello!Party.
TL/DR, I’ve spent the last three days figuring out how to merge the best bits from multiple accounts on three machines to create a fully-populated personal account where everything actually works and has the correct ownership, while stripping out the last of the personal stuff from the current work machine.
(well, the naughty personal stuff, for sure; unlike most of the people who certified that they’d read the employee handbook this year, I actually read the employee handbook…)
Side note: after I scrubbed and reinstalled the 13-inch MBP, the right-side USB port seems to work again. Sigh.
One thing I’m doing with the new Synology NAS is making sure that everything is successfully migrated from my ancient Infrant ReadyNAS NV+.
There are two basic reasons for this:
The NV+ uses a non-standard power supply, and both of the ones we had at the office eventually burned out, requiring a temporary swap of mine until the data could be retrieved and migrated elsewhere. Mine’s still good, but if it goes…
While the firmware has been updated to cope with the most famous SMB security hole, it’s otherwise an ancient version of Debian on a custom SPARC chip, and even with the RAM upgraded to 1GB, it’s painfully slow at serving up files. It has decent write speeds, but when it comes time to get your terabyte of data back off, it takes forever, especially if you’ve got lots of little files.
I figure the copies should finish by the weekend. Maybe. On the bright side, it’s so slow that the Synology has plenty of bandwidth left to handle copies of every other old drive in the house…
After letting it chug along overnight, it’s averaging a steady 2 MB/s. With 500 GB to go, that’s just about 3 more days. This is so ridiculous that I had to double-check that it really is getting a full-duplex gigabit connection and not falling back to something like 10-megabit/half. No, it’s not the network; it’s just that mind-bogglingly slow. When this is all done, I’m going to reset it to factory defaults and do some testing.
I grabbed a spare 1TB USB drive, formatted it as EXT3, mounted it on the old ReadyNAS, and told it to back up the largest of the volumes (301 GB). It’s rapidly catching up to the aborted rsync job. As a bonus, the built-in backup job uses Perl. 😜
The 301 GB sneakernet finished considerably faster than the rsync job, and my other ReadyNAS just took a few hours, so I now have all my eggs in one basket.
And my basket is now running:

    find . -not -name '.*' -type f -size +4096 -print0 | xargs -0 md5sum

to figure out just how much duplication there is in the ~6 TB of files (not counting the 2 TB of Acronis, SuperDuper, and Time Machine images…). I figure there will be at least six copies of this video:
Update: nine copies of it. 😊
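To turn that checksum list into an actual duplicate report, one approach (a sketch; the demo files and paths below are made up, and it assumes GNU uniq for the -w/-D options) is to sort on the hash field and print only the lines whose hash repeats:

```shell
# Build a tiny demo tree with two identical files and one unique file
# (stand-ins for the real NAS volumes).
mkdir -p demo/a demo/b
echo "same bytes" > demo/a/file1
echo "same bytes" > demo/b/file2
echo "different"  > demo/a/file3

# Checksum everything, sort by hash, then print every line whose first
# 32 characters (the md5 hash) occur more than once -- the duplicates.
find demo -type f -print0 | xargs -0 md5sum | sort | uniq -w32 -D
```

Each group of adjacent lines in the output shares a hash, so eyeballing (or further scripting) the groups tells you exactly which paths are copies of each other.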
Just bought a new NAS for home, and decided on a Synology DS918+ with 4 10TB drives ($539 + 4 x $310). Why not another ReadyNAS? A combination of price and vague dissatisfaction with the ones I’ve used in the past; I may write that up sometime.
Why not FreeNAS? Because I didn’t feel like building one from scratch right now (as much as I like the idea of a ZFS-based NAS), and the prebuilt unit we once bought from iXsystems ended up going back due to being a piece of junk. Both Synology and ReadyNAS use BTRFS as their filesystem format these days, which offers a lot of what you get with ZFS without the need to occasionally resort to command-line incantations. (“Not That There’s Anything Wrong With That!”)
Drive installation was painless (simple snap-in hot-swap trays), and while I found the “desktop” web GUI a bit overdone, everything works well. The biggest annoyance was figuring out which of the “private cloud” packages to add, because they recently changed all that, resulting in some confusion. (short version: only install the Drive package and desktop/mobile clients, and open TCP ports 5000, 5001, and 6690; also use the builtin LetsEncrypt support and set everything to require SSL)
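Since EZ-Internet couldn’t configure the OpenBSD router anyway (see below), the rules end up hand-written. A sketch of the relevant pf.conf fragment, where the interface name and the NAS address are assumptions, not my actual config:

```
# /etc/pf.conf fragment -- forward Synology Drive traffic to the NAS
ext_if     = "em0"              # external interface (assumption)
nas        = "192.168.1.10"     # the Synology's LAN address (assumption)
syno_ports = "{ 5000, 5001, 6690 }"

pass in on $ext_if proto tcp to port $syno_ports rdr-to $nas
```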
The “EZ-Internet” cloud/firewall config was useless; it’s just a UPnP wrapper, and when it realized that it couldn’t auto-configure my OpenBSD router, the only help it offered was “hey, you should open some ports”, with no indication of which ones were actually required for the installed packages (see above).
Side note: I was amused and pleased that Drive, their latest, greatest personal cloud solution, required installing the Perl package. 😜
I went with their ‘hybrid’ RAID config, SHR-1, because it resizes better when you add more drives or swap in larger drives. This gives me 26 TB in usable space (9.1 * 3 - overhead), which is plenty for now. Down the road, if/when media, disk images, and automated backups start to fill that up, I’ll add the DX517 expansion chassis and another 5 10TB drives and bring it up to 52 TB usable.
If you’re following along at home, you may wonder why adding 5 drives doesn’t give closer to 70 TB, and the answer is paranoia. SHR-1 uses a single parity drive, which means you can only afford to lose one disk. This is generally not a huge problem if you have a spare on-hand and swap it in immediately, but there’s a non-trivial risk that another drive will fail while the first one is rebuilding.
If you think about it, this is even more likely when you buy all your RAID disks at once from the same manufacturing batch, so you really want two parity disks and a hot spare, so that the system can start rebuilding as soon as one disk fails, and can survive losing another one during the rebuild. Having only one data disk in a four-disk chassis isn’t terribly useful, so for now I’m running in a cheaper, less-paranoid configuration. When I’m sure that I like the Synology enough to really rely on it, I’ll buy the expansion and convert the RAID to SHR-2 with a hot spare. And buy a cold spare disk as well.
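The raw arithmetic behind those two figures can be checked in a couple of lines; these are the pre-overhead numbers (a “10 TB” drive is really about 9.1 TiB once you convert decimal to binary units), which the filesystem then shaves down to the 26 and 52 quoted above:

```shell
# Usable TiB for N data drives of "10 TB" (~9.1 TiB) each, before
# filesystem overhead.  SHR-1 on 4 drives leaves 3 data drives;
# SHR-2 plus a hot spare on 9 drives leaves 6.
awk 'BEGIN {
    per_drive = 10 * 1e12 / 2^40     # 10 TB expressed in TiB, ~9.09
    printf "SHR-1, 4 bays:          %.1f TiB\n", 3 * per_drive
    printf "SHR-2 + spare, 9 bays:  %.1f TiB\n", 6 * per_drive
}'
```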
Additional performance enhancements I can add include bonding the two 1-gigabit ports together, bumping the memory (official max 8GB, but there are reports that 16GB works), and adding SSD cache drives. That last is specifically why I chose the 918+, since it has a pair of M.2 slots on the bottom, and some of their other models require you to buy an expansion card first.
Building the volume was quick, but it took ~16.25 hours to run the initial parity consistency check, so performance was sub-optimal until that finished. The GUI was occasionally a bit sluggish during that time.
Next up: setting up dedicated Time Machine volumes for the Macs and testing their Windows backup client.
Oh, and I named it Index.
First Time Machine backup complete. Just because I was curious how well it would work, I backed up 425 GB over wireless, which took about 7.5 hours.
Just picked up the smaller Orbi bundle at Costco. This is the SKU they’ve added recently (RBK22-100NAS) that only has two units (“router” and “satellite”, both with ethernet backhaul); I didn’t really need a 3-pack of the original model, just one on each floor.
The hardest part of the setup was switching off the builtin NAT and running it in AP mode; you can’t do it from the iOS app. The second hardest was discovering that the app artificially limits you to short passwords; the web GUI will let you enter up to 63 characters, as expected for WPA2-PSK.
Preliminary results look good. I may tweak the placement of the units (I just grabbed the first available power and ethernet, since the old wireless is still running), and turn on the optional beamforming, etc. At the very least, I should get better performance on my front porch.
I figure it’ll take me a few days to find all my wireless devices and switch them over. :-)
The optional beamforming is off by default for a reason. It apparently has disconnect issues.
Reading through the manual for an LG TV, I came across the following line in the notes:
When connecting via a wired LAN, it is recommended to use a CAT 7 cable.
This must be for the Skynet upgrade, so when your TV asks you to upgrade to a 10 gigabit switch and add multiple fiber drops to the house, just say “no”.
I figure it’ll start crying like a little girl when I plug it into the 5-year-old gigabit switch that connects to the 50Mb/s cable line through the house’s Cat 5 wiring.
“…is a delicate matter”.
There’s a new MacBook Pro waiting on my desk when I get there today. It needs a name.
My first MBP at this company was named Exodar. Once I purchased a much-faster personal one with a Japanese keyboard, named Togetsukyou, I mostly used it for testing and virtual machines. Both flaked out years ago, and almost precisely three years ago they bought me a new one, which I named HelloParty. That one is down to one working USB port, which I should be able to get fixed under AppleCare after I migrate to the new one (giving me a good test machine again).
Other entries in my stable are Mone (Raspberry Pi), Melwin (Mac Mini), Courier (Surface Pro 2), Ririka (old Asus gaming laptop), Bentenmaru (less-old Asus gaming desktop), and Akatsuki (OpenBSD router).
My first thought, since it has the non-tactile touch-strip that replaces the top row of the keyboard (Escape key included), was to call it NoEscape, but that would get depressing after a while.
Currently I’m thinking either Macchi or Sakie. Leaning a bit toward Sakie, because as a laptop it won’t be running headless…
Any other suggestions?
Sort-of-contest extended, because it turns out the Mac that arrived wasn’t mine! I was getting ready to start migrating all my data over, and mentioned being sad about only getting the 512GB SSD instead of the 1TB I wanted, and one of the other guys said, “no, we got approval for the 1TB SSD; this must be someone else’s.”
So, I bought a Raspberry Pi 3 recently, and since I had no immediate plans to dabble in hardware-hacking, went with the official starter kit, so that I’d get a known tested international power supply, a decent case, and a cute little compact keyboard. (also a three-button mouse and an HDMI cable)
Everything worked perfectly out of the box, including auto-detecting the 1920x1200 resolution of my little Eyoyo 10″ monitor. Then I ran the software updates, rebooted, and *poof*, no more display. After confirming that the Pi still booted, grabbing the latest image, and booting in safe mode, I discovered that the only way to get a fully-updated Raspbian Jessie or Stretch install to show video on the Eyoyo was to add something like this to /boot/config.txt:

    hdmi_force_hotplug=1          # insist there's a monitor there
    hdmi_ignore_edid=0xa5000080   # don't query the monitor
    hdmi_group=2                  # use monitor-style resolutions
    hdmi_mode=69                  # use 1920x1200, 60Hz
    hdmi_drive=2                  # turn on HDMI audio
But then I can’t plug it into anything else without either blindly booting to safe mode or ssh-ing in, changing the config, and rebooting. After a fruitless (heh) search of forums and FAQs, I went through the download archives and found the first Jessie release after the Pi 3 came out (2016-02-26). It worked perfectly. A quick binary search between that and the last Jessie release revealed that the last release that correctly auto-detected my monitor was 2017-01-11. None of the Stretch releases work, and the release notes for Jessie 2017-02-16 don’t have anything that screams “hey we broke the EDID parsing”.
But that appears to be what they did. Adding one line to my config re-enabled auto-detection on the Eyoyo:

    avoid_edid_fuzzy_match=1
It came up as 720p, but that’s better than “blank”. Adding this commonly-FAQd line brought it to a more reasonable 1680x1050, while still allowing it to work with other monitors:
So, a quite pleasant out-of-the-box experience, a disaster of an update, and the recovery process boils down to “mount the SD card on your PC/Mac, Google for help, then blindly tinker with /boot/config.txt until you get it working again.”
I suppose this is one way to find all the future sysadmins in your fifth-grade classroom…
I do have one specific project in mind for work. Now that we’re in a new building with lots and lots of windows, I should be able to get a decent view of the sky and build Pi-based stratum 1 ntp servers.
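The usual recipe for that (a sketch only; the GPS-hat-plus-gpsd setup and the fudge values are assumptions, not a tested config) is a GPS receiver with PPS output feeding gpsd, with ntpd reading gpsd’s shared-memory refclocks:

```
# /etc/ntp.conf fragment -- gpsd shared-memory refclock (driver 28)
# SHM unit 0: NMEA time from the GPS (coarse, needs a fudge offset)
server 127.127.28.0 minpoll 4 maxpoll 4
fudge  127.127.28.0 time1 0.130 refid GPS

# SHM unit 1: the PPS pulse (precise)
server 127.127.28.1 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.1 refid PPS
```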
Oh, and I named it Mone. Because inside every Pi is delicious cake.
I just installed FreeBSD 12-CURRENT, which doesn’t support wireless, but gave me a completely different OS to test against. It bootstraps itself using the same sort of config.txt, and sure enough, it also needed avoid_edid_fuzzy_match=1 to work with my monitor.
Just to round things out, I installed Ubuntu MATE as well, and since it’s not as up-to-date as Raspbian, it auto-detects fine. It’s subtly broken in the typical Ubuntu way (can’t run the GUI software updater, and updating from CLI broke several things, including Firefox), so I won’t use it for anything. I expect that it’ll pick up the EDID bug in the next release.
By the way, I’m booting all this stuff off of a 5-pack of 16GB MicroSD cards, stored in this cute little holder. This little Anker USB3 card reader is the fastest and most useful I’ve found for imaging MicroSD cards and mounting the /boot partition to fix the config.txt.
So, OpenSUSE has a 64-bit build. It installed cleanly, so I let it run a whole bunch of updates, and then I decided to see if the performance was better for things like watching video. So I opened up Firefox and went to YouTube. Or, more precisely, I tried to go to YouTube, because Firefox absolutely refused to open the page, claiming that it used outdated encryption that was evil and fattening and probably voted for Trump. There appears to be no way to say, “just fucking open the site, okay?”.