It felt lonely in here, so I got Isso working for comments. Easy to
nuke-and-pave if I don’t like it, at least. The whole “Python
virtualenv” experience was a real pain in the ass, though: pip install
repeatedly claimed to have installed all the dependencies, and pip list
called bullshit on that.
I’ll probably have to put Monit on the server in case it crashes, but that can wait.
It’s supposedly possible to have Isso dynamically update the comment count in the article metadata block, but I just spent about an hour failing to get it to work, thanks to Isso’s and Hugo’s overlapping limitations.
The real killer was Hugo’s variable scoping. When you render content in a
list context, you’re really fully rendering each page in its own
context and then including the results. So, inside a template, variables
like $.URL refer to the individual article’s context,
as if you were currently writing out that one article to disk. And of
the two completely different ways you can set variables, one of them
is strictly block-scoped, and the other is strictly page-scoped. You
can’t pass either one down into a partial template.
(There’s a partial-with-arguments called a shortcode, but that’s a completely different beast, and I’m not sure it’s either effective or efficient to replace all your partials with shortcodes.) UPDATE: completely impossible, in fact; shortcodes don’t work in template files, and partial templates don’t work in content files. They’re completely different things with completely different behaviors. Instead, you have to construct a custom dictionary and pass it into a partial template, which is butt-ugly and error-prone.
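For the record, the dictionary workaround looks something like this; the partial name and dict keys here are made up for illustration, not copied from my actual templates:

```go-html-template
{{/* in a list template: bundle what the partial needs into a dict */}}
{{ partial "comment-count.html" (dict "Permalink" .Permalink "Title" .Title) }}

{{/* layouts/partials/comment-count.html: read it back out of the dict */}}
<a href="{{ .Permalink }}#isso-thread">comments for {{ .Title }}</a>
```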
So, yeah, no comment-count on the home page at the moment.
I wrote my own bitty jQuery function to use Isso’s API directly and insert the comment count on page-load. It would be nice if the API returned “0” instead of a 404 error when there aren’t any comments, though.
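Server-side, the same idea looks roughly like this sketch; the Isso base URL and the exact shape of the count endpoint are assumptions, so check your own install:

```shell
#!/bin/sh
# Fetch a comment count from a (hypothetical) Isso install, treating
# curl failures -- including the 404 you get for zero comments -- as "0".
ISSO="https://comments.example.com"   # assumption: your Isso base URL

normalize_count() {
  # anything that isn't a plain number (empty string, error body) becomes 0
  case "$1" in
    ''|*[!0-9]*) echo 0 ;;
    *)           echo "$1" ;;
  esac
}

comment_count() {
  raw=$(curl -sf "$ISSO/count?uri=$1") || raw=''
  normalize_count "$raw"
}
```

The normalization is the interesting part: -f makes curl fail silently on the 404, and the fallback turns that failure into the “0” the API should have returned in the first place.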
Welcome to the first non-trivial update to this blog since 2003. Things are still in flux, but I’m officially retiring the old co-lo WebEngine server in favor of Amazon EC2. After running continuously for fourteen years, its 500MHz Pentium III (with 256MB of RAM and a giant 80GB disk!) can take a well-deserved rest.
The blog is a complete replacement as well, going from MovableType 2.64 to Hugo 0.19, with ‘responsive’ layout by Bootstrap 3.3.7. A few Perl scripts converted the export format over and cleaned it up. Let’s Encrypt allowed me to move everything to SSL, which breaks a few graphics, mostly really old YouTube embeds, but cleanup can be done incrementally as I trip over them.
Comments don’t work right now, because Hugo is a static site generator. I’ve worked out how I want to do it (no, not Disqus), but it might be a week or so before it’s in place. All the old comments are linked correctly, at least.
Do I recommend Hugo? TL/DR: Not today.
Getting out of the co-lo has been on my to-do list for years, but I never got around to it, for two basic reasons:
I was hung up on the idea of upgrading to newer blogging software.
I didn’t feel like running the email server any more, and didn’t like the hosting packages that were compatible with MT and other non-PHP blogging tools.
In the end, I went with G-Suite (“Google Apps for Work”) for $5/month. Unlike the hundreds of vendor-specific email addresses I maintain at jgreely.com, I’ve only ever used one here, and all the other people who used to have accounts moved on during W’s first term.
Next up, working comments!
Actually, next turned out to be getting the top-quote
to update randomly. The old method was a cron job that used
to log into the MT admin page and request an index rebuild, which,
given the tiny little CPU, had gotten rather slow over the years,
so it only ran every 15 minutes.
Since the site is now published by running
hugo on my laptop and
rsyncing the output, it’s not feasible or sensible to update the
quotes by rebuilding the entire site. So I wrote a tiny Perl script
that regexes the current quotes out of all the top-level pages for
the site, shuffles them, and reinserts them into those pages. It
takes about half a second.
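A rough shell equivalent of the idea looks like this; the <!--quote--> marker and the *.html layout are my inventions for illustration, not the real script’s conventions (and the real script is Perl):

```shell
#!/bin/sh
# Sketch of the quote reshuffle: pull the marked quote line out of every
# top-level page, shuffle the lot, and write one back into each page.
shuffle_quotes() {
  dir="$1"
  # collect the current quote line from every page, in random order
  quotes=$(grep -h '<!--quote-->' "$dir"/*.html | shuf)
  i=1
  for page in "$dir"/*.html; do
    q=$(printf '%s\n' "$quotes" | sed -n "${i}p")
    # replace the whole marked line with a freshly shuffled quote
    sed -i.bak "s|<!--quote-->.*|$q|" "$page" && rm -f "$page.bak"
    i=$((i + 1))
  done
}
```

Since it only touches one marked line per file, it never has to re-render anything, which is why the whole pass finishes in a fraction of a second.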
Since there are ~350 pages, there will be decent variety even if I don’t post for a few days and regenerate the set. If I wanted to get fancy, I could parse the full quotes page and shuffle that into the indexes, guaranteeing a different quote on each page (as long as the number of quotes exceeds the number of pages), which means I can add about 800 blog entries before I need to add more quotes. :-)
More than usual, I mean. I’ve been playing with the static site generator Hugo as a way to move this blog and its comments out of Movable Type.
After clearing the initial hurdle of incomplete and inconsistent Open Source documentation (pro tip: if a project starts numbering versions from 0.1 instead of 1.0, it’s safe to assume that there’s no tech writer on the team), the next step is adding a theme to render your site. There’s no default theme, and half a dozen different recommended ones of varying complexity and compatibility. Short version: I’m not sure Hugo currently has layout functionality equivalent to Movable Type 2.x from 2003, much less any of the modern tools; it might, it’s just that hard to find out.
There’s some support for basic pagination, something that’s always been missing here (and which is partially responsible for the long delay when adding comments), but the built-in paginator includes a link for every page, which is pretty painful when you have 200+ pages. If I get the time, I’ll have to dust off my Go and send them a patch to make it behave sensibly with large numbers.
Rendering all ~3,800 entries (counting quotes and sidebar microblogs) and ~3,500 comments takes about 12 seconds on my laptop, but that’s still too long for iterative testing, and the OS open-file limit makes it impossible to test with the live-rebuild feature of the built-in web server.
So I wrote a quick Bash script to retrieve N random articles from Wikipedia and format them the way Hugo expects, as Markdown with TOML metadata. Why Bash? Because the official Wikipedia API for efficiently retrieving articles and their metadata (using generators and continues) is either broken or incomprehensible to me; I spent two hours at it and got a never-ending list of complete and partial articles. So I just looped over the “https://en.wikipedia.org/wiki/Special:Random” URL and piped the output through Pandoc. Rather than pulling in the real metadata, I just generate dates and categories in Bash. Now I can quickly generate a small site with multiple sections and simple categorization, and it’s trivial to add more features like series, tags, authors, etc. [in fact, I did!]
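The core of that script is just a loop like this sketch; the output paths, fake dates, and TOML fields are my guesses at what Hugo wants, not a copy of the real script, and it needs curl and pandoc on the PATH:

```shell
#!/bin/sh
# Generate N fake Hugo articles from random Wikipedia pages.
front_matter() {
  # emit minimal TOML metadata for one article: title, date, category
  printf '+++\ntitle = "%s"\ndate = "%s"\ncategories = ["%s"]\n+++\n' \
    "$1" "$2" "$3"
}

fetch_article() {
  # follow the Special:Random redirect and convert the HTML to Markdown
  curl -sL 'https://en.wikipedia.org/wiki/Special:Random' |
    pandoc -f html -t markdown
}

generate() {
  n="$1"; outdir="${2:-content/post}"
  mkdir -p "$outdir"
  i=0
  while [ "$i" -lt "$n" ]; do
    i=$((i + 1))
    {
      front_matter "Test article $i" "2017-01-$(printf '%02d' "$i")" "testing"
      fetch_article
    } > "$outdir/test-$i.md"
  done
}
```

Faking the metadata locally is the whole trick: the dates and categories only need to be plausible enough to exercise Hugo’s section and taxonomy pages.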
(relevant only to Hugo users after the jump…)
“…it’s just someone else’s computer.”
[Update: it’s back for read access, so my pictures are online again, but apparently they’re still working on the write functionality, which must be painful for the many services that rely on S3. I don’t need writes unless I’m uploading new pictures, so I’m good. Amusingly, one key item that was broken all morning was Amazon’s service status page. Because it’s hosted on S3. Someone had to make some manual edits just to provide basic info about the problem.]
This is a terrible list of possible security questions.
How many of these questions are trivial to answer for anyone with even a small social-media presence? How many of the rest are things that people over 30 probably don’t even remember? And seriously, zip code? Date of birth? Last four digits of SSN? To stop someone from trying to break into my brokerage account?!?
How about these, instead?
Don’t answer in the comments, or I’ll steal your identity…
I hacked Gifify to generate MP4 files instead of animated GIFs. Let’s see how this embeds:
This is 256 KB. 5.5 MB GIF version, created with the exact same parameters, below the fold.
Google just announced that their new translation engine now handles Japanese. Let’s see what it does with the exact same text I fed to Office 2007 about seven years ago, the first scene of Kyōtarō Nishimura’s murder mystery, Ame no naka ni shinu. TLDR: it’s quite a bit better, although the pronouns are all over the place, and the “embroidered sleeves” bit is hilarious:
It was raining.
It is cold winter rain. It was close to the sleeves.
Even if it enters at night, there is no sign of stopping. Because of that, if it passed ten o’clock, the embroidery figures rapidly decreased.
Even when that man pressed his belly with one hand and came out from behind the alley, there was no sign of a person in the rain.
He was a middle-aged man. The tired suit was wet with rain and was dark.
The man gripped by the utility pole with one hand. However, as if suddenly bravely fumbled, it crumbled and broke into a wet pavement.
From around the man’s belly, red blood is blowing out. The blood is raining.
Said the man. However, the low screams have been erased by the sound of rain.
With a splash of water, a taxi passed by. The driver took a look at the man who fell over and down, but he thought he was drunk. Just dropping the speed a bit, I passed by.
The man lifted his face and looked around. There are no figures of people anywhere. I opened my mouth, but it seemed that no voice cried for help.
Blood still continues to flow. The face of a man gradually lost his blood and went.
The man tried to write something on the pavement with a fingertip stained in blood. However, the raining lasting will erase it.
Despair seemed to have caught him.
The man wants to let you know something. However, there are no signs of people, and letters written on pavement are erased by rain.
Also, the taxi passed by. However, for men, there was no power left to raise their hands.
The man looked at his palm stained with blood with a blank eye. The fingertips were stiff.
The man slowly folded the little finger of his left hand. On top of that, my thumb was broken and piled up. The index finger, middle finger, ring finger stretched out.
, The man muttered with a small voice. But it was not almost a voice. The man fired off his last power and stretched his left hand. Just say that you would like someone to see.
But no one had seen it.
There was the darkness of the night, only the rain continued.
Amazon.com surprised me a bit today:
Google tells me it’s Turkish meaning “suggested for you”, and after several hours, it’s still there in a fresh browser session, despite the rest of the UI still being in English.
[Update: still there a day later, for several of my friends as well. I don’t see an easy way to contact Amazon to find out if they know.]