I forgot to bitch about this when I first saw it…
New in Emacs 26.1:
** The Emacs server now has socket-launching support. This allows
socket based activation, where an external process like systemd can
invoke the Emacs server process upon a socket connection event and
hand the socket over to Emacs. Emacs uses this socket to service
emacsclient commands. This new functionality can be disabled with the
configure option '--disable-libsystemd'.
** A systemd user unit file is provided. Use it in the standard way:
'systemctl --user enable emacs'.
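For anyone who actually wants this, the advertised workflow looks something like the following (a sketch, assuming a running systemd user session; only the enable command comes from the NEWS entry, the rest is ordinary emacsclient usage):

```shell
# Enable and start the user-level Emacs daemon shipped with 26.1
systemctl --user enable emacs
systemctl --user start emacs
# Subsequent emacsclient invocations talk to that daemon
emacsclient -t /etc/hosts
```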
Honestly, I never saw the attraction of emacsclient in the first place. I open text editors in terminal windows, like Zod intended, and I edit text files in them. My entire .emacs file is devoted to turning off all ‘features’ unrelated to editing text.
I should mention that Emacs also has launchd integration on the Mac, which I’ll never use. It’s the systemd part that bugs me; it’s like what you’d get if you crossed kudzu and cockroaches. Note: do not mention this within earshot of Lennart Poettering. He might try it!
ProtectHome considered harmful.
Seriously, WTF? I looked at four recently-kickstarted CentOS 7.x servers and said, “hey, /home ended up on the small partition, so I’ll move it to a bigger one”. I could not do this: apparently every daemon running with ProtectHome holds /home busy inside its private mount namespace, so the partition can’t be unmounted.
Removing this bullshit from two daemon configurations (NetworkManager? chronyd?) and rebooting managed to fix it on two of them, but not on the other two, and they were all kickstarted with the same config (not a great config, but it wasn’t done by me, and blowing them away and starting over would undo recent work by external con$ultant$).
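For the record, the per-unit knob can be flipped back without hand-editing the shipped unit files; a sketch, assuming chronyd is one of the offenders (the drop-in path is the standard systemd convention):

```shell
# Create /etc/systemd/system/chronyd.service.d/override.conf containing:
#
#   [Service]
#   ProtectHome=no
#
# then reload and restart:
systemctl edit chronyd        # opens an editor on the drop-in above
systemctl daemon-reload
systemctl restart chronyd
```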
After some unknown action on your server has silently deleted most repo/wiki directories for a group (~git/git-data/repositories/$group/$project.git), how do I tell it that I have restored the data from my hourly backups? Currently it shows “The repository for this project does not exist”.
Honestly, it looks like something tried to delete the entire group and aborted 2/3 of the way through.
Ah, the answer is gitlab-rake cache:clear; now, about how they were deleted in the first place…
Given the recent news about large dumps of user-account data from various hacked sites, I downloaded the full list of records for my main email domain from HaveIBeenPwned, and found nothing new or interesting. Just the Adobe, LinkedIn, Kickstarter, and Dropbox hacks from several years ago.
Oddly, none of the email addresses used by Honor Hacker and friends in attempts to extort bitcoin show up in their DB, even though one of those was actually a legit closed account (I briefly had a LiveJournal account for commenting, with a unique name and strong password, and the “hacker” included the correct password).
The amusing one was that the “Onliner Spambot” collection from 2017 had a confirmed hit for user “xoratmusoqxee” at my domain. That one doesn’t even show up in my spam, despite being at least as plausible as “hand04”, “quinones12”, “bain66”, “Donnell4Stark”, or the ever-popular “ekgknmfylvtl” (seriously, my spam folder gets daily messages directed to that username, all of them in Japanese).
…to my sanity.
Manager set up a Perforce client on his Windows box, then we changed the directory that was set for its root. We could not get p4v to use the new directory. Even deleting the workspace, restarting the client, refreshing the workspaces, and creating a brand new workspace with the same name didn’t work. It still thought the files should be located in the non-existent directory from the earlier incarnation of the client.
We had to use a different client name to avoid this over-aggressive local cache of data it had no business caching in the first place.
Also, to make the process more “fun” (read: tedious), the client-editing window kept spontaneously resizing itself to be slightly taller than the screen, every time we opened it or tried to resize it to fit.
“Embrace the healing power of ‘and’”.
The latest in shutdown theater is expiring SSL certs for government web sites. Either they didn’t bother to order new certs for all the sites they knew were expiring soon, or they deliberately didn’t install them.
Reminder: today’s is the first paycheck that’s delayed for federal employees. Any work they’ve skipped the past few weeks has been by choice.
When booting OpenBSD 6.3 (at least), the /etc/rc startup script reads /root/.profile. This can produce some rather entertaining boot failures, including things like syslogd timing out on startup, preventing you from getting any log data about what might be wrong…
I’m quite certain this wasn’t the case in earlier releases, but I’m not sure when it crept in.
# Simple confirmation:
echo sleep 60 >> /root/.profile
reboot
# It will take an extra ~8 minutes to boot
It looks like they try to work around this by setting HOME=/ in /etc/rc, and having a separate /.profile, but it doesn’t work; it still reads /root/.profile.
Ah, there it is! /etc/rc.d/rc.subr:
...
rc_start() {
	${rcexec} "${daemon} ${daemon_flags}"
}
...
[ -z "${daemon_user}" ] && daemon_user=root
...
rcexec="su -l -c ${daemon_class} -s /bin/sh ${daemon_user} -c"
So, anything executed from a proper start/stop rc script gets executed in a fresh su -l session (even if it’s running as root), and that resets $HOME.
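The underlying behavior is just login-shell semantics: a login shell re-reads the target user’s .profile and resets $HOME. A minimal sketch that reproduces it, substituting sh -l for su -l (which would need root):

```shell
# Demonstrate that a login shell sources $HOME/.profile on startup,
# which is why each 'su -l' in rc.subr re-reads /root/.profile.
tmp=$(mktemp -d)
echo 'MARKER=from_profile' > "$tmp/.profile"
# Launch a login shell with $HOME pointed at the fake home directory;
# it sources .profile before running the -c command.
out=$(HOME="$tmp" sh -l -c 'echo $MARKER' 2>/dev/null | tail -n 1)
echo "$out"    # → from_profile
```

Stuff a sleep into that .profile and every rc.d daemon start pays the toll, hence the extra minutes at boot.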
The machine I was upgrading pre-dates the rc.d scripts, so it didn’t have the problem.
Sometime this morning, someone rebooted a KVM server. We don’t know who, yet, but this would only have been a minor problem if it weren’t for the fact that another unknown someone had accidentally deleted the disk images for some of the VMs running on that server.
A month ago. No one noticed because they kept running on the unlinked open files…
Daily full backups are your friend!
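(Backups were the real rescue here, but worth noting: as long as some process still holds a deleted file open, the data is recoverable through /proc on Linux. A sketch of the general technique, with made-up file names:)

```shell
# A deleted file survives on disk until the last open descriptor closes.
tmp=$(mktemp -d)
echo "precious vm disk" > "$tmp/disk.img"
exec 3< "$tmp/disk.img"      # hold it open, like a running VM would
rm "$tmp/disk.img"           # "accidentally" delete the image
# The inode is still reachable via the holding process's /proc entry:
cp "/proc/$$/fd/3" "$tmp/recovered.img"
exec 3<&-                    # once every holder closes it, it's gone for real
recovered=$(cat "$tmp/recovered.img")
echo "$recovered"            # → precious vm disk
```

In real life you’d locate the holder with lsof +L1 (open files with zero links) or ls -l /proc/&lt;pid&gt;/fd, and copy the image back out before anything restarts.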
NetworkManager: threat or menace?
Seriously, who even configures a server with it, and who came up with the idea of instantly taking down the interfaces the moment you save ifcfg-eth0 to switch from NM to static config? Fortunately, IPMI meant that I didn’t have to physically plug a monitor into the server to get back in.
(although Neal did have to plug in the IPMI interface for me; the perils of setting up new servers in a hurry…)
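For anyone else bitten: the usual way out on CentOS 7 is to take the interface away from NetworkManager before touching the file. A sketch with made-up addresses (these are the stock ifcfg keys; adjust to taste):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- static, NM hands off:
#
#   DEVICE=eth0
#   ONBOOT=yes
#   BOOTPROTO=none
#   IPADDR=192.0.2.10
#   PREFIX=24
#   GATEWAY=192.0.2.1
#   NM_CONTROLLED=no
#
# Then swap the services (over IPMI, if you're smart):
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
systemctl restart network
```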