This cookbook was built to prototype my ideas on how to design dynamic web sites. It's also more convenient and portable than MasterCook (which I rather like, even though it only runs on Windows).
Most of the recipes were imported from free online archives in MasterCook's text-export format. There are currently 99 collections containing a total of 10,548 recipes, with a lot of duplicates. I'll clean it up soon.
The Googlish logo is just for my personal amusement, and isn't intended to infringe on their trademarks, trade dress, copyrighted look-and-feel, or other protected intellectual property. For a really cool parody, go to Cthuugle.
The index has been built with SWISH-E's stemming option, which makes searching for 'egg' and 'eggs' equivalent. There's an implicit 'and' between the words in a search, which you can override by separating words with 'or'. There's no support for phrase searching. You can also limit a search to specific fields, e.g. category=import.
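To make that behavior concrete, here's a small sketch (hypothetical, not the site's actual code) of how the implicit 'and' could be inserted before a query is handed to SWISH-E. Field-limited terms like category=import pass through unchanged:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper: join the user's words with an implicit "and",
# leaving an explicit "or"/"and" alone and passing field=value terms
# (e.g. category=import) through untouched.
sub build_query {
    my ($input) = @_;
    my @out;
    for my $w (split ' ', $input) {
        my $lw = lc $w;
        if ($lw eq 'or' or $lw eq 'and') {
            push @out, $lw;                 # explicit operator: keep as-is
        } else {
            push @out, 'and'                # implicit "and" between terms
                if @out && $out[-1] ne 'or' && $out[-1] ne 'and';
            push @out, $w;
        }
    }
    return join ' ', @out;
}

print build_query('egg noodles'), "\n";          # egg and noodles
print build_query('egg or eggs'), "\n";          # egg or eggs
print build_query('category=import egg'), "\n";  # category=import and egg
```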
I already had a parser for MasterCook's MX2 XML export format, and I didn't want to write another one for a format that's considerably less precise, so I used MC-Tagit. After converting several thousand recipes (only a few dozen of which required hand-correction), I discovered that it doesn't quite write valid MX2 files: it writes hundreds of valid MX2 documents in one file, with a plain-text header. A little massaging with Perl, though, and it's fine.
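The massaging step can be sketched roughly like this; the header text and document contents below are illustrative stand-ins for MC-Tagit's real output, and the trick is simply to split just before each XML declaration and drop the leading plain-text chunk:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split one big file into its individual MX2 documents.  Each document
# starts with its own <?xml ...?> declaration, so split on a lookahead
# that keeps the declaration with its document, then discard the
# plain-text header that precedes the first one.
sub split_mx2 {
    my ($text) = @_;
    my @chunks = split /(?=<\?xml)/, $text;
    shift @chunks if @chunks && $chunks[0] !~ /^<\?xml/;   # drop the header
    return @chunks;
}

my $blob = "MC-Tagit conversion log (illustrative header)...\n"
         . "<?xml version=\"1.0\"?><mx2><RcpE name=\"One\"/></mx2>\n"
         . "<?xml version=\"1.0\"?><mx2><RcpE name=\"Two\"/></mx2>\n";
my @docs = split_mx2($blob);
print scalar(@docs), " documents\n";   # 2 documents
```

Each chunk is then a self-contained document that the existing MX2 parser can handle.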
… although it was a little surprising to see 300 pages of output when I searched for "category=import". I'll have to go back and re-tag those sometime. With a Perl script, of course.
My own recipes are stored in DTF format, which is based on my as-yet-unreleased Data::TextFields Perl module. DTF is plain text with simple keyword handling, capable of encoding all the useful bits of MasterCook's XML schema while still being easy to type.
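Since DTF and Data::TextFields are as-yet-unreleased, everything below is guesswork: a 'Keyword: value' layout with indented continuation lines is one plausible reading of "plain text with simple keyword handling", and this parser is a hypothetical sketch, not the real module:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical DTF-style parser: "Keyword: value" lines start a field
# (repeatable, so Ingredient can appear many times), and indented lines
# continue the most recent field.
sub parse_dtf {
    my ($text) = @_;
    my (%rec, $key);
    for my $line (split /\n/, $text) {
        if ($line =~ /^(\w+):\s*(.*)$/) {
            ($key, my $val) = ($1, $2);
            push @{ $rec{$key} }, $val;
        } elsif (defined $key && $line =~ /^\s+(\S.*)/) {
            push @{ $rec{$key} }, $1;    # continuation of the current field
        }
    }
    return \%rec;
}

my $rec = parse_dtf(<<'END');
Title: Deviled Eggs
Ingredient: 6 eggs
Ingredient: 2 tbsp mayonnaise
Directions:
    Hard-boil the eggs.
    Halve, mash the yolks, and refill.
END

print $rec->{Title}[0], "\n";                            # Deviled Eggs
print scalar @{ $rec->{Ingredient} }, " ingredients\n";  # 2 ingredients
```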
Both recipe formats are parsed into Perl data structures and stored on disk with the Cache::FileCache module. Each collection is in a separate cache namespace, and a lookup table is stored in the "_guid" namespace. Each collection is also indexed separately with SWISH-E, and the indexes are merged together for the main search page. I've left hooks for searching individual collections and having private collections for logged-in users, but I haven't written it all yet.
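A sketch of that layout using Cache::FileCache's API (a namespace per collection, plus the "_guid" lookup table); the collection name, GUID, and recipe fields here are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Cache::FileCache;

# One cache namespace per collection, plus a "_guid" namespace that
# maps a recipe's GUID back to the collection it lives in.
my $collection = Cache::FileCache->new({ namespace => 'desserts' });
my $guids      = Cache::FileCache->new({ namespace => '_guid' });

my $recipe = { name => 'Deviled Eggs', guid => 'abc-123' };
$collection->set($recipe->{guid}, $recipe);   # parsed Perl structure to disk
$guids->set($recipe->{guid}, 'desserts');     # lookup-table entry

# Later, resolve a GUID without knowing its collection up front:
my $ns    = $guids->get('abc-123');
my $found = Cache::FileCache->new({ namespace => $ns })->get('abc-123');
print $found->{name}, "\n";                   # Deviled Eggs
```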
Why use a slightly awkward combination of SWISH-E and Cache::FileCache? Because I want to design web pages, not become a MySQL DBA. There are all sorts of things you can do with a real database server, but if I don't need to do those things, why use one? By not introducing another complex system that likes to listen on the network, I avoid having to track Yet Another set of security patches. And as far as efficiency goes, most of the lookups can be avoided completely with Mason's built-in component caching; there are scaling limits, but it's already handling more than 10,000 recipes, and most of the datastores I build with it will be smaller.
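In HTML::Mason, the component-caching hook is $m->cache_self: a component can serve its own cached output and skip the datastore lookup entirely. A rough sketch of the idiom (the expiry value and the load_recipe helper are hypothetical):

```perl
<%init>
# Serve the cached copy of this component's output if it's still fresh;
# otherwise fall through, hit the datastore, and cache the result.
return if $m->cache_self(expire_in => '1 hour');
my $recipe = load_recipe($guid);   # hypothetical datastore lookup
</%init>
<h1><% $recipe->{name} %></h1>
```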
In any case, most of the awkwardness will be remedied once I finish my generic "datastore" wrapper script, bringing together the caching, the full-text indexing, and my DTF format.
“Did I leave anyone out?”