The entire Moderation Team for the Rust programming language has resigned. This came as a surprise to everyone who didn’t know that Rust had a moderation team, or what it was contributing to the development of the language.
The answer is, of course, Toxic CoC Syndrome:
Remarks that moderators find inappropriate, whether listed in the code of conduct or not, are also not allowed.
In other words, “we’re the Tone Police, and whatever we say goes”.
And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could’ve communicated better — remember that it’s your responsibility to make your fellow Rustaceans comfortable.
So if someone says “your pull request is worthless garbage that breaks the build”, they are required to apologize and stop harassing you over your earnest desire to decolonize the language by renaming all problematic functions.
For more fun, this CoC incorporates by reference the “Citizen Code of Conduct”, which includes this hilarious little gem:
No weapons will be allowed at [COMMUNITY_NAME] events, community spaces, or in other spaces covered by the scope of this Code of Conduct. Weapons include but are not limited to guns, explosives (including fireworks), and large knives such as those used for hunting or display, as well as any other item used for the purpose of causing injury or harm to others.
Since the “covered spaces” of the Rust-y CoC primarily consist of Discord channels, online forums, and git check-ins, I’m assuming they’re referring to weaponized emoji here:
I’m amused to see Ghostscript described as a small library, but deeply annoyed that a critical “run arbitrary code on your machine” vulnerability disclosed sometime last year is still unfixed, despite it having been verified and bug bounties paid out.
The new proof-of-concept exploit is only 20 lines of Python.
Since it’s embedded in all sorts of software, you may not know that you’re affected by this hole, or for how long you’ll be vulnerable. All it takes is for something in your daily workflow to decide to render a downloaded SVG file via GS.
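If ImageMagick happens to be the thing in your workflow that shells out to Ghostscript, the usual belt-and-suspenders mitigation is to deny its GS-backed coders in `policy.xml` (location varies by install; `/etc/ImageMagick-6/policy.xml` is typical — a sketch, verify paths and format names against your own build):

```xml
<!-- Deny ImageMagick's Ghostscript-backed coders entirely.
     These lines go inside the <policymap> element of policy.xml. -->
<policy domain="coder" rights="none" pattern="PS" />
<policy domain="coder" rights="none" pattern="PS2" />
<policy domain="coder" rights="none" pattern="PS3" />
<policy domain="coder" rights="none" pattern="EPS" />
<policy domain="coder" rights="none" pattern="PDF" />
<policy domain="coder" rights="none" pattern="XPS" />
```

That doesn’t help anything that invokes GS directly, of course; for that you’re stuck auditing what’s on your machine.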
My home router (an old Shuttle running OpenBSD 6.3) went down due to bad blocks on /var. I had recent backups, but my cold spare still had a 5.x install on it, so after manually fscking the old machine enough to get it back online for the day, I downloaded a fresh copy of OpenBSD 6.9, installed it, copied over all the config files, and swapped it into place.
It didn’t work. More precisely, everything worked except sending traffic out the public interface to the Internet. I couldn’t reach the gateway. I could ssh into the router from my laptop over the private interface just fine, though.
Thinking perhaps that I’d outsmarted myself by trying to preserve the MAC address of the old server to deal with the common cable-modem issue of fixating on a specific MAC, I removed that clause from the config and rebooted both router and modem.
That didn’t help, so I fired up tcpdump on the public interface to see if there was anything showing up at all, and everything started working.
Kill tcpdump, packets stop. Start it back up, packets flow. In other words, everything works perfectly as long as the public interface is in promiscuous mode. This isn’t one I’ve run into before in my 15-ish years of managing OpenBSD routers, or even the 6 years I’ve owned this Shuttle DS61.
I’m going to have to swap a new SSD into the other router (identical hardware), install the same configs, and do some testing. Which is a lot easier if I’m online, so for now,
nohup tcpdump -n -i re0 -w /dev/null icmp &
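To keep the workaround alive across a reboot until I sort out the real problem, the same command can go in /etc/rc.local, which OpenBSD runs at the end of multi-user boot (a sketch; the interface name, like the workaround itself, is obviously specific to this box):

```
# /etc/rc.local -- park a tcpdump on re0 so the interface stays
# in promiscuous mode; the icmp filter keeps bpf churn minimal.
/usr/sbin/tcpdump -n -i re0 -w /dev/null icmp >/dev/null 2>&1 &
```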
The experience of being suddenly forced to shut down Lightroom in the middle of an editing session because you decided that I wasn’t logged into your Clown service any more is sub-optimal. Also a pretty good way to encourage customers to stop paying you a monthly fee and look elsewhere for software.
As I mentioned earlier, I’ve been doing some simple load-testing of Jira instances using Gatling. Detailed sample code after the jump, because I couldn’t find anyone else’s and I’ve got decent pagerank.
Adobe CC periodically removes downloaded fonts that it decides you’re not using… in Adobe CC. Using them in other applications doesn’t count, apparently. To get them back from the Clown, you can either completely deactivate the family, switch menus, and reactivate, or click the Clown-down icon for each and every font file that’s been removed.
In other news, Adobe will finally be abandoning support for Type 1 fonts in their product line in an upcoming release. In J-specific news, Adobe doesn’t include my go-to poster font Barmeno in their Clown, and it’s $300 from Berthold. I think I’ll try converting it with Fontforge first…
(Adobe disguises the location of their Clown font files, so to use them in my PDF::Cairo scripts, I have a Perl script that symlinks them into directories scanned by Fontconfig)
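The idea fits in a few lines of anything; here’s a rough sketch of the equivalent in Python. The CoreSync path is where recent CC releases on macOS have stashed activated fonts as hidden, extensionless dot-files — an assumption you should verify on your own machine, since Adobe moves it around:

```python
#!/usr/bin/env python3
"""Symlink Adobe CC's hidden Clown fonts into a Fontconfig-scanned dir."""
from pathlib import Path

# Assumptions: recent CC on macOS hides activated fonts as dot-files
# under CoreSync; adjust both paths for your machine.
ADOBE_DIR = Path.home() / "Library/Application Support/Adobe/CoreSync/plugins/livetype/.r"
FONT_DIR = Path.home() / ".fonts/adobe-clown"  # anywhere Fontconfig scans

FONT_DIR.mkdir(parents=True, exist_ok=True)
for src in sorted(ADOBE_DIR.glob(".*")):
    if not src.is_file():
        continue
    # Give the link an .otf extension so font tools will deign to open it.
    dst = FONT_DIR / (src.name.lstrip(".") + ".otf")
    if not dst.exists():
        dst.symlink_to(src)
```

Follow up with `fc-cache -f` if Fontconfig doesn’t notice the new links on its own.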
There are several lessons to be learned from the Samsung Blu-ray player fiasco, in which pretty much their entire product line turned into a useless pile of e-waste.
You don’t know what your Internet-connected appliances are doing, and the manufacturer won’t tell you. Customer service probably doesn’t even know about most of it.
The people designing your appliances often don’t think about or thoroughly test boot or update processes.
XML makes a terrible config-file format. Ditto YAML and Apple’s Plist format (both of which are just as complex and unforgiving as XML).
When I was at WebTV, every client release meeting included someone who had precise statistics on how many devices were bricked by each previous release, how much it cost to replace them, and the effect on customer churn. This neatly negated the efforts by development and marketing to take shortcuts with QA.
On the service side, we were usually able to just roll back to a previous code or content release within a few minutes of detecting a problem, but there were occasional out-of-band updates, as well as external dependencies. One that bypassed QA one night was an update to the XML config file that controlled ad rotation on the home page. As each ad server retrieved the new file and parsed it, they locked up. When I traced the appropriate process, I saw it spinning in a tight loop trying to parse a comment; someone had manually removed one ad from the rotation. At least, that’s what they thought they’d done, with their limited understanding of XML syntax.
In our case, the code checked for errors, but never got there because it was stuck in an infinite loop; the Samsung startup code simply didn’t check for errors. If the file was syntactically valid, of course it must be semantically valid.
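The exact edit that night is lost to history, but the classic way to “comment out” one entry and end up with a file no conforming parser will touch is to leave a double hyphen inside the comment — XML flatly forbids `--` within comments, nested comments included. A short demonstration (any conforming parser rejects it the same way; ours just chose to spin instead):

```python
import xml.etree.ElementTree as ET

good = "<ads><ad id='1'/><ad id='2'/></ads>"
# Someone "removes" ad 2 by wrapping it in a comment containing "--":
bad = "<ads><ad id='1'/><!-- disabled -- was ad 2: <ad id='2'/> --></ads>"

ET.fromstring(good)  # parses fine
try:
    ET.fromstring(bad)
except ET.ParseError as e:
    print("ParseError:", e)  # well-formedness violation: '--' in a comment
```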
The latest “branded” vulnerability that’s getting hysterical coverage is “Thunderspy”, in which all your data are belong to us if your computer has a Thunderbolt port. In less than five minutes. With only $400 in off-the-shelf hardware.
Except the details of the story contradict that. First is the assumption that your powered-down computer is available to the attacker for long enough that they can crack the case and reflash the Thunderbolt port’s firmware; five minutes on a desktop, maybe, but most laptops? A quick look at the sites that crack them open and test for repairability suggests that it’s not going to be as easy as the claimed “unscrew the backplate, attach a device momentarily, reprogram the firmware, reattach the backplate”.
Second is the assumption that the attacker will be able to return when your computer is sleeping and exfiltrate your data through the compromised port. Admittedly, Thunderbolt is fast at data transfer, but how many trips do you have to make before you find it in the right state?
The mitigation strategy is simply “power down or hibernate”. Even after compromising your ports, physical access to a powered-up or sleeping computer is required to access your encrypted data. (if your data wasn’t encrypted, they didn’t need a hardware hack to steal it in the first place)
The researcher’s branding agent does offer a second scenario that’s at least plausible: find a not-currently-plugged-in Thunderbolt peripheral (monitor, etc) that has previously been connected to your computer, steal the 64-bit ID code that was used to establish a trust arrangement, flash that to a naughty data-exfiltration device, and then plug it into your awake-or-sleeping computer.
Mitigation strategy? “power down or hibernate”.
Or use a Mac, which apparently is only vulnerable if it’s been rebooted into Windows with Boot Camp and then put to sleep.
So, if you care enough about security to fully encrypt your laptop, but care so little about security that you casually leave it running unattended or just put it to sleep for convenience, and you don’t notice when it was power-cycled while you were out of the room, then this can be used to steal all your data.
That pretty much restricts the vulnerable population to senior executives at tech companies. The rest of us are safe.
(and, yes, state actors can easily accomplish this, but we already knew that they were compromising unattended phones and laptops to spy on foreign executives and politicians, especially in Corona-chan’s motherland)