More precisely, flash meter back, from the dead.
After I bought the little LitraPro LED light and started playing with it, I dug out my Minolta Flash Meter V (no actual digging was involved, but lots of box-shifting and rummaging over the course of about a week), only to discover that I’d left the battery inside the last time I used it. Several years ago.
Vinegar and a q-tip cleaned out the visible corrosion in the battery compartment, but when I took the back off, I found more, and as soon as the vinegar started bubbling it off, the negative wire snapped off completely.
Fortunately we have a highly skilled hardware team at the office, so I begged pretty-please, and Todd was willing to remove the old wire (the corrosion had wicked its way all the way to the main board) and solder on a new one.
Which is good, because the only real big-name company left in the flash-meter biz is Sekonic, and now I don’t have to spend ~$220 on a cool new one.
Oh, and as soon as I found the bad battery in the Minolta, I continued digging until I found my spot meter, and that one had been stored correctly without a battery. Whew.
Last week I took another look at the various “digital pen” products, and once again couldn’t find one that was worth buying. I like the idea of the Livescribe, etc., but none of them seem to actually work very well, with poor ergonomics, poor performance, poor support, or “all of the above”.
So I took a look at Rocketbook, which is a series of notebooks with custom paper, marked up to be scanned in with an iPhone/Android app and sent to your email account and/or various cloud services. OCR is not handled by Rocketbook, so unless you send it to a cloud service that does that, you get the image only.
If you use any of the upload options, you can’t really rely on confidentiality, of course, but you can always leave the destination checkboxes blank and use your phone’s native sharing services to store the scans. (Note that the email goes through a third-party service, then to Rocketbook’s server, then to your email provider, rather than just using the phone’s API.)
It looked straightforward, and since the app is free and there are sample PDFs available for download, I could try it out before buying. It recognized their markup quickly and reliably, and produced a decent image, so I went ahead and ordered the Everlast notebook, which has a special paper that turns FriXion erasable pens into wet-erase markers. In theory, the Everlast pages can be reused forever, unlike the 5-6 reuses their heat-erasable product claims.
I’ve had it for a few days now, and after adding a pen loop to keep the supplied FriXion pen handy (3” strip of gaffer tape, with a 1.75” strip stuck across the middle of the sticky side, leaving room at both ends to attach it to the sturdy edge of the cover), I quite like it. I have some color FriXion pens as well, and it captures them nicely.
But of course I want to print my own, and not just the dot-grid they supply. The first hurdle was that the PDF that works just fine on my office color laser printer is unscannable when printed on my home inkjet. Seriously, if I place two printouts side by side, the laser-printed one is recognized instantly, while the inkjet version leaves the app fumbling for several minutes before it figures out that it’s a black box with a QR code in the lower right corner.
I’ve ripped the PDF apart in Illustrator, so I know there’s no hidden magic that’s not reproducing correctly on the inkjet, but somehow it makes a difference. The ink is just as black, the paper is just as white, the resolution is just fine. One thing I did discover is that there are several different versions of the free PDFs, and the one I originally tested with has a relatively narrow black border. The most recent one has a wider border that works better on an inkjet, but someone at the company hacked it together, so it isn’t really an 8.5x11 page, and the destination icons are bitmaps.
My first attempt at a custom Rocketbook PDF is here, and replaces their dot-grid with an isometric grid. This one’s still a bit finicky on the inkjet, but a lot better than their original PDF.
I did it in Perl with PDF::API2::Lite, so I can tweak it until I figure out exactly what their app is looking for. My guess is that the “V” section in the QR code indicates paper type, and the app has a lookup table containing the aspect ratio and relative location of the destination icons, but that by itself can’t explain the difference between inkjet and laser printouts.
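For the curious, here’s a stripped-down sketch of the generation side. This is not my actual script: the margins, border weight, and dot pitch are guesses, it omits the QR code and destination icons the app also needs, and an isometric dot lattice stands in for the line grid to keep the clipping math out of the way.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use PDF::API2::Lite;

    # US Letter, in points (72/inch).
    my ( $w, $h ) = ( 8.5 * 72, 11 * 72 );
    my $pdf = PDF::API2::Lite->new;
    $pdf->page( $w, $h );

    # Heavy black frame; the app appears to key on this border, and a
    # wider stroke scans more reliably from inkjet output.
    $pdf->strokecolor('black');
    $pdf->linewidth(12);
    $pdf->rect( 24, 24, $w - 48, $h - 48 );
    $pdf->stroke;

    # Isometric lattice: columns $dx apart, rows $dx * sqrt(3)/2 apart,
    # with every other row shifted by half a column.
    my $dx = 18;                          # 1/4" horizontal pitch
    my $dy = $dx * sqrt(3) / 2;
    $pdf->fillcolor('#bbbbbb');
    my $row = 0;
    for ( my $y = 48; $y <= $h - 48; $y += $dy, $row++ ) {
        my $shift = $row % 2 ? $dx / 2 : 0;
        for ( my $x = 48 + $shift; $x <= $w - 48; $x += $dx ) {
            $pdf->circle( $x, $y, 0.75 );    # small gray dot
            $pdf->fill;
        }
    }

    $pdf->saveas('isometric.pdf');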
Update: Just tested with the new version of the app, and it’s much more reliable at scanning inkjet prints of both their PDFs and mine.
Now that I’ve had the Arlo Pro cameras for a few months, what do I think?
The pros:

Setup and placement are easy, although I recommend buying better anchors for their dome mounts.
The cameras trigger reliably and record video and audio with plenty of quality for their intended use.
Battery life is excellent at the default settings, and they recharge at a reasonable speed.
Notifications are usually quite quick, to the point that when I get home, the alert arrives on my phone before I can unlock the front door. I have one camera set to high sensitivity, and when it goes off in the morning, I know a cat has shown up (or the wind is strong enough to blow the bamboo in front of the camera).
The week of cloud storage from the free base tier is sufficient for my needs.
Viewing, managing, and downloading recorded videos is quick and easy.
The cons:

The app/website frequently reports that the cameras and base station are not reachable, and insists that they must not be connected to the Internet. Never mind that the base station is pingable at all times and I can see it sending traffic to their servers. This appears to be a problem with dropped connections at their end.
Connecting to the cameras can take a very long time even when the app/website can reach them, and often times out. For instance, just now it took me nearly ten minutes of trying to get it to connect to one of my cameras, and it repeatedly claimed my base station must be “offline”.
Even when the app does connect, I’ve never gotten the intercom functionality to work. For instance, when the neighborhood kids were playing hide-and-seek, I couldn’t tell the kid who kept setting off my cameras to go hide somewhere else.
Since you can only configure the base station and cameras through the app/website, all administration is blocked when it claims you’re “offline”.
When notifications don’t arrive instantly, they can show up hours later, but the app doesn’t tell you which videos are new; they’re silently sorted in with the ones that showed up on time. Basically, when you get an alert on your phone, you have no idea if something just happened or if an hours-old video finally showed up.
The USB storage is basically useless. When I recently pulled out the drive, it had video from March and July, but nothing in between. And the only way to view what it recorded is to log into the app, hope it connects successfully, tell it to stop writing to the drive, then connect it to your PC.
A security camera that sometimes alerts you promptly is not terribly useful.
I suspect their servers are overloaded, and all of the problems they blame on my (rock-solid) Internet connection are on their end. I also suspect that the service would magically improve if I upgraded to a paid tier…
Bottom line, if your primary requirement is prompt, reliable notification of security events, buy something else right now. The Arlo Pro will record the event, but you might not find out about it for hours, and might not be able to get a real-time view of the scene without wasting several minutes waiting for a successful connection (which can require force-quitting the app).
If, like me, you’re mostly interested in package deliveries and wandering cats, it’s flaky but acceptable. Hopefully they’ll resolve these problems with server, client, and app updates, but right now it’s pretty Beta.
Update: The app and website no longer show my base station and cameras constantly going offline, so that’s one major Con removed. I haven’t retested the notification, USB drive, or intercom issues yet, but if those have been sorted out as well, I’ll be much happier with the product.
The gentle slope on the top of my new clothes dryer came as quite a surprise to me.
And to the open bottle of fabric softener.
On the bright side, the floor in the laundry room smells very nice now.
By the way, did it occur to anyone that with all of the advanced controls on your user interface, you could implement sound controls more sophisticated than “completely silent” and “play a jaunty ‘done’ tune and echo every keypress REALLY REALLY LOUD”?
Ongoing MasterCook molesting now up on github. I took advantage of the very restricted formatting of their XML-ish MX2 format to write cat/grep/ls tools, as well as a quick-fix for a common format issue in files imported from MXP format. It would be trivial to write those sorts of tools for the XML versions, but working in MX2 is handy for things you plan to re-import to MasterCook (since I haven’t written an xmltomx2 converter yet…).
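As an illustration of why the rigid formatting makes this easy, here’s roughly what the “ls” tool boils down to. This is a sketch, not the real script; the one-tag-per-line layout and the <RcpE name="..."> convention are observations from the files I have, not a published spec.

    #!/usr/bin/perl
    # Usage: mx2ls [pattern] file.mx2 [...]
    use strict;
    use warnings;

    # If more than one argument, treat the first as a filter regex.
    my $pat = @ARGV > 1 ? shift @ARGV : '';

    while ( my $line = <> ) {
        # MasterCook writes each recipe's opening tag on its own line,
        # so a line-oriented match is all we need -- no XML parser
        # required (or, given that MX2 is not XML, even possible).
        next unless $line =~ /<RcpE\s+name="([^"]*)"/;
        my $name = $1;
        print "$name\n" if $pat eq '' or $name =~ /$pat/i;
    }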
One of the challenges with Hugo is that, out of the box, it doesn’t do anything. Create a site, fill it with content, run the generator, and you get… nothing. You need to download or create a theme in order to actually render your content; there isn’t one built into the site-creator, although several volunteers are working on something (much the same way that usable documentation is largely a volunteer effort).
It is not immediately obvious that the theme gallery is sorted by update date, so that the farther down the list you go, the less likely they are to work. There’s a top-level set of feature tags, but they’re applied by the theme authors, and don’t include useful things like “scales beyond 100 pages”.
As part of my ongoing MasterCook molesting, I decided to take the now-sane XML files and render them to Hugo’s mix of TOML and Markdown, generating a static cookbook site with sections and categories. Having done some experimentation in response to a forum post, I knew that a site with 56,842 pages would take several minutes to build, so I grabbed the simple, clean Zen theme and fired it off.
And waited. And waited. And watched the memory usage climb to over 40GB of compressed pages.
The Hugo developers pride themselves on rendering speed, but when I checked the disk, it was taking upwards of a second to render a single content page. Looking at one of them made it obvious why: the theme designer included every content page in the dropdown menus and sidebar. It had honestly never occurred to him that someone might have more than about 8 categories with about 20 pages each. In fairness, this is a port of a Drupal theme, and the original might have had the same problem.
After modifying the templates to only use the first 20 from each category, I got the site to render in about 10 minutes. The category menu looks horrible, because I split the recipes up alphabetically into chunks of about a thousand, and the theme only allocated enough space for about 2/3 of them, with the rest covering the title field. The actual recipe rendering is excellent, including the handling of sub-recipes and referenced recipes.
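The template fix itself is a single idiom: cap each taxonomy loop with Hugo’s first function. Something like this (not Zen’s actual markup, just the shape of the change):

    {{/* sidebar: list at most 20 pages per category instead of all of them */}}
    {{ range $term, $entries := .Site.Taxonomies.categories }}
      <h4>{{ $term }}</h4>
      <ul>
      {{ range first 20 $entries.Pages }}
        <li><a href="{{ .RelPermalink }}">{{ .Title }}</a></li>
      {{ end }}
      </ul>
    {{ end }}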
I could modify the Zen theme until it did everything right, or spend several hours rebuilding a small sample site with other themes until I found one that required less work, but once you’ve built one theme from scratch, it’s just faster and easier to do that than to try to use any of the pre-built themes. Their real value is as examples of “how do you do this in Hugo”, which you can’t generally find in the documentation.
There are also quite a few working code snippets in the forums (some provided by me; problem-solving is kinda my thing, if you haven’t guessed by now), but with so much of the code under active development, any forum example more than a few months old is likely to be wrong now.
It’ll be a while before I bring the cookbook back up, since this is definitely a copious-free-time project, and not only do I have to knock together a theme and set up search (most likely Xapian Omega again, since I’m fresh on it), but also molest the recipe data and impose some consistency on categorization, tagging, and ingredient naming. Currently it has 782 distinct categories, many of which differ by only a few characters, and about 2/3 of them should really be tags instead. All of these issues should really be fixed in the MX2 files, so that they can be cleanly imported back into MasterCook, but since that’s not XML, the scripting is a little more “interesting”.
Tentatively, I’m going to start with my blog theme, since I’ve already tested it at scale (and learned that large taxonomies are a significant bottleneck). I can strip out a lot of the blog-specific stuff without much effort, and I’ve already done the work to switch over to dropdown menus for categorization, so the only real trick will be embedding any referenced recipes in a hidden DIV at the bottom of each page, and setting up a print-only stylesheet that hides the nav and exposes the embedded recipes. The references are already turned into links to the appropriate recipe’s page, thanks to the builtin relref shortcode.
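The print-only part is just a few CSS rules; a sketch, with made-up class names (the real theme will use whatever I end up calling things):

    /* screen: hide the sub-recipes embedded at the bottom of the page */
    .embedded-recipes { display: none; }

    /* print: hide the chrome, reveal the embedded recipes */
    @media print {
      nav, .sidebar { display: none; }
      .embedded-recipes { display: block; }
    }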
Congratulations on completely destroying the sync ability of the iOS version of OneNote. Your mother must be so proud.
Update: Surprisingly, it still worked on my iPhone while being totally borked on my iPad. None of the fixes people have been suggesting in the forums (lots of people hitting this bug this week) fixed it. Failing to authenticate for sync has actually been an issue with the iPad version of OneNote for quite a while, but in the past, force-quitting the app was sufficient to fix it. It looks like a nuke-and-pave of the app is necessary but not sufficient; I’m not actually sure what eventually persuaded it to start working again, but I suspect it was the animal sacrifice.
An app whose functionality depends on reliable sync needs to sync reliably. I migrated everything over from Evernote and let my paid subscription lapse because they were ignoring the core functionality of “sync my notes between phone/laptops/tablets”. Your recent attempt to provide the (not-quite-the-) same (poor) user experience on all platforms is the sort of development diversion that cost them customers.
Oh, and if you really want to make the user experience the same, add the “Recent Notes” tab to the desktop clients. It’s one of the most useful features of the mobile clients, and completely missing on the full app. And bring the Mac client up to feature parity with Windows, maybe?
Update: Happened again on 6/6. I had to delete the app, re-download it, and then re-sync all my notebooks. WTF, MS?
MasterCook, currently at version 15, is still the best recipe management software around, mostly because it supports sub-recipes. Most recipe-database software maintainers will give you blank stares when you mention this, even the ones who claim to import MasterCook format; some of them don’t even know about sub-title support in ingredient lists. While the software has changed hands several times over the past 25 years, functionally it hasn’t changed much since version 6. The licensed cookbooks come and go, but OS compatibility is the most significant improvement. (disclaimer: I haven’t tested the pretty clouds in v15 yet)
There are tens of thousands of recipes on the Internet in the two major MC export formats, MXP and MX2. I recently dug up one of the biggest to play with, which is only available through The Wayback Machine.
MXP is a text file meant to be printed out in a fixed-width font, but the format is well-structured enough that it’s easy to import into other software, with some minor loss of information. If you’ve downloaded any recipes off the Internet in the past 20 years, you’ve probably seen the string “* Exported from MasterCook *”.
MX2, introduced in 1999’s MasterCook 5, is not XML. Yes, it looks like XML, and even has an external DTD schema, but trying to feed it through standard XML tools will trigger explosions visible from half a mile. If you want to work with it, your best bet is the swiss-army-knife conversion tool cb2cb. Windows-only, written in Java, and “quirky”, but it handles both MXP and MX2, as well as some other formats, and has built-in cleanup and merge support. Pity it’s not open source, because I suspect there are dozens of comments with some variation of “Oh, for fuck’s sake, MasterCook!”.
What’s wrong with the “XML” and DTD? For starters, the XML declaration is malformed: the standalone attribute has to come after encoding, and a file that depends on an external DTD shouldn’t claim standalone="yes" anyway. The first line below is what MasterCook emits; the second is the corrected form:

    <?xml version="1.0" standalone="yes" encoding="ISO-8859-1"?>
    <?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
The mx2.dtd file supplied in every version since 1999 has obviously never been tested, because it is incorrect and incomplete, in several different ways.

Of course, anyone who knows me will correctly guess that I’ve gone to the trouble to fix all of these problems, with a Perl script that massages MX2 into proper UTF-8 XML that validates against a corrected mx2.dtd; part of that script dates back to my old cookbook project from 2002, so yes, this is the first step to reviving that. The script uses xmllint to fix the encoding and double-check that it’s valid XML.

I’ve validated over 450 converted MX2 files against the corrected DTD, a total of around 120,000 recipes.
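The xmllint step is nothing exotic; with placeholder file names, it’s essentially:

    # re-serialize the massaged file as UTF-8
    xmllint --encode UTF-8 massaged.xml > cookbook.xml
    # validate against the corrected DTD, printing only errors
    xmllint --noout --dtdvalid mx2.dtd cookbook.xml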
Update: When converting MXP to MX2, many of the options in cb2cb mangle the output. Best to turn them all off, and do some basic cleanup with a script like this one, which splits directions on CRLF pairs and safely moves most of the non-direction text into Notes. There are still a few rare errors in the conversion process, but in my case that amounted to 4 ingredient lines in over 10,000 recipes, detected by their failure to validate during the XML conversion.
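The splitting half of that cleanup is simple enough to sketch. Tag names here follow the MX2 files I’ve seen (direction steps as <DirT> elements); treat that, and the whole-file regex approach, as assumptions rather than the real script.

    #!/usr/bin/perl
    use strict;
    use warnings;

    local $/;            # slurp the whole file
    my $mx2 = <>;

    # Break any direction element containing embedded CRLF pairs into
    # one <DirT> element per step, dropping empty fragments.
    $mx2 =~ s{<DirT>(.*?)</DirT>}{
        join "\n",
            map  { "<DirT>$_</DirT>" }
            grep { /\S/ }
            split /\r\n/, $1;
    }gse;

    print $mx2;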