Mecab/Unidic has fairly strict ideas about morphemes. For instance, in the older Ipadic morphological dictionary, 日本語 is one unit, “Nihongo = Japanese language”, while in Unidic, it’s 日本+語, “Nippon + go = Japan + language”. This has some advantages, but there are two significant disadvantages, both related to trying to look up the results in the JMdict J-E dictionary.
First, I frequently need to concatenate N adjacent morphemes before looking them up, because the resulting lexeme may have a different meaning. For instance, Unidic considers the common expression 久しぶりに to consist of the adjective 久しい, the suffix 振り, and the particle に. It also thinks that the noun 日曜日 is a compound of the two nouns 日曜 and 日, neglecting the consonant shift (rendaku) that voices the second one from ひ to び.
Second, there are a fair number of cases where Unidic considers something a morpheme that JMdict doesn’t even consider a distinct word. For instance, Unidic has 蹌踉き出る as a single morpheme, while JMdict and every other dictionary I’ve got considers it to be the verb 蹌踉めく (to stagger) plus the auxiliary verb 出る (to come out). The meaning is obvious if you know what 蹌踉めく means, but I can’t automatically break 蹌踉き出る into two morphemes and look them up separately, because Unidic thinks it’s only one.
To fix the second problem, I’m going to need to add a bit of code to the end of my lookup routines that says, “if I still haven’t found a meaning for this word, and it’s a verb, and it ends in a common auxiliary verb, then strip off the auxiliary, try to de-conjugate it, and run it through Mecab again”. I haven’t gotten to this yet, because it doesn’t happen too often in a single book.
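When I do, the fallback might look something like this sketch (Python with the mecab-python3 bindings; COMMON_AUX_VERBS and lookup() are placeholders for the real auxiliary list and my actual JMdict search):

```python
import MeCab

tagger = MeCab.Tagger()  # assumes mecab-python3 with a Unidic dictionary

# Hypothetical list; the real one would be whatever auxiliaries turn up
# fused into single morphemes in Unidic's output.
COMMON_AUX_VERBS = ("出る", "出す", "込む", "かける", "始める")

def aux_fallback(surface, lookup):
    """lookup() stands in for the JMdict search; it returns None on a miss."""
    for aux in COMMON_AUX_VERBS:
        if surface.endswith(aux) and len(surface) > len(aux):
            stem = surface[: -len(aux)]
            # Run the stripped stem back through Mecab and take the lemma
            # it reports (field 7 of Unidic's feature CSV) as the
            # de-conjugated dictionary form.
            node = tagger.parseToNode(stem).next  # skip the BOS node
            if node is None or not node.surface:
                continue
            fields = node.feature.split(",")
            lemma = fields[7] if len(fields) > 7 else stem
            base = lookup(lemma)
            if base is not None:
                return base, lookup(aux)
    return None
```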
To fix the first problem, I start by trying to look up an entire clause as a single word, then trim off the last morpheme and try again, repeating until I either get a match or run out of morphemes. I had built up an elaborate and mostly-successful set of heuristics to prevent false positives, but they had the side effect of also eliminating many perfectly good compounds and expressions. And JMdict actually has quite a few lengthy expressions and set phrases, so even half-assed heuristics were worthwhile, and making them better would pay off quickly.
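The trimming loop itself is simple; a sketch, with lookup() again standing in for the JMdict search:

```python
def longest_match(morphemes, lookup):
    """Greedy longest match: try the whole run of surface forms, then
    trim one morpheme off the right end and try again.

    `morphemes` is a list of surface strings; lookup() returns None
    on a miss."""
    for end in range(len(morphemes), 0, -1):
        candidate = "".join(morphemes[:end])
        entry = lookup(candidate)
        if entry is not None:
            # Return the match and how many morphemes it consumed.
            return candidate, entry, end
    return None
```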
Today, while working on something else entirely, I realized that there was a simple way (conceptually simple, that is; implementation took a few tries) to eliminate a lot of false positives and a lot of the heuristics: pass the search results back through Mecab and see if it produces the same number of morphemes with the same parts of speech.
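A sketch of that consistency check, assuming mecab-python3 and Unidic’s feature layout (pos1 is the first field of the feature CSV):

```python
import MeCab

tagger = MeCab.Tagger()  # assumes mecab-python3 with a Unidic dictionary

def same_shape(headword, original_pos):
    """Accept a candidate JMdict headword only if re-parsing it yields
    the same number of morphemes with the same coarse parts of speech
    as the original parse (original_pos is that parse's list of pos1
    values)."""
    pos_list = []
    node = tagger.parseToNode(headword)
    while node:
        if node.surface:  # skip the BOS/EOS nodes, which have no surface
            pos_list.append(node.feature.split(",")[0])
        node = node.next
    return pos_list == original_pos
```

A stricter version could compare lemmas as well, though as the example below shows, that would sometimes reject perfectly good matches.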
So, given a string of morphemes like (い, て, も, 立っ, て, も, い, られ, ない, ほど, に, なっ, て, いる), on the sixth pass I look up いても立ってもいられない and find a match (居ても立っても居られない, “unable to contain oneself”) that breaks down into the same number of morphemes of the same type. There’s still a chance it could have chosen a wrong word somewhere (and, in fact, it did; the initial い was parsed as 行く rather than 居る, so a stricter comparison would have failed), but as a heuristic, it works much better than everything else I’ve tried, and it has found some pretty impressive matches.
I’ve updated the samples I created a few weeks ago to use the new parsing. Even on that short story, it makes a few sections clearer.
[Update: one of the rare false positives from this approach: 仲には and 中には break down the same, and since the second one is in JMdict as an expression, doing a kana-only lookup on なかには will falsely apply the meaning “among (them)” to 仲には. Because of variations in orthography, I have to allow for kana-only lookups, especially for expressions, but fortunately this sort of false match is rare and easy to catch while reading.]