Yet Another Word Puzzle
Dr. Dobb's Portal
November 2007 article on the DDJ site
As I’ve confessed in the past, I’m a sucker for word puzzles. My recent post on a Will Shortz puzzle from NPR’s Morning Edition ended up provoking a surprising amount of comment, much of it in the vein of “Watch me solve it better, faster, and with more style using language XXX.”
I certainly enjoyed watching other people solve the problem, and found their solutions instructive. As the XP crowd has figured out, we programmers spend too much time working on our own problems and not enough time watching how other people work. There’s a lot to learn, both good and bad, from getting a peek inside another person’s head.
Out of the Blue
Which brings me to the puzzle at the center of this article.
In what at first seemed to be an incident completely unrelated to word play, I had a pleasant email exchange with Beth Katz, who was teaching a Data Structures class at Millersville University. I happened to look at Beth’s current homework assignment for her class, and you can imagine my reaction when I saw the problem she had posted for her class:
We define word reduction as removing a single letter from a word while leaving the remaining letters in their original order so that the resulting sequence of characters is also a word. A good word can be reduced step-by-step until all that is left is a or i. Your program will answer the question: what is the biggest word in the given dictionary that can be reduced?
Beth gave a short example of a good word, planets:
planets
plants
pants
pant
ant
an
a
As you can see, you can remove one letter at a time, and each time you are left with a valid word one character shorter.
This makes for an interesting problem indeed. I’ve read that the average English speaker has a vocabulary of perhaps 15,000 to 20,000 words, but many reasonable word lists have upwards of 100,000 English words. How many of these words qualify as good words?
As I discussed in the previous word puzzle article, the highly evolved pattern matching facility in the human mind is often pretty good at solving these problems, and I think this is the case (in a limited way) for this particular problem. If I give you a word (like planets, above), I think you’d be able to find a possible reduction path quickly, subject to the limitations of your own vocabulary.
But the human mind is not so good at certain variations on the same problem. Asking you for the biggest word that fits this pattern presents you with an almost impossible task. Basically, it requires you to be able to iterate through the words you know, ordered by length, and test each one. Unless you are subject to some pretty incredible flashes of insight, I think you’re going to need a computer for this.
Going Bottom Up
Maybe the feeling isn’t universal among programmers, but when I look at a problem, my first instinct is usually to try a top-down approach. For this problem, a top-down approach would mean identifying the longest words in the dictionary, then attempting to decompose them into successively shorter words.
This approach will work, but a little mental analysis shows that it might be a little resource heavy. Imagine that you are decomposing a 10 letter word by taking away one letter at a time. In the worst case, all 10 of the resulting 9 letter strings might be in the dictionary, then as many as 90 8 letter words, then 720 7 letter words, and so on. Although in the general case you might only find one or two matches, particularly at the long lengths, even the potential for factorial growth leaves some room for concern.
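To make the cost concrete, a naive top-down test might look something like this sketch (is_good_word and dictionary are names invented for the sketch, with dictionary standing in, as a std::set, for whatever container holds the complete word list; there is no memoization, so the same substrings can get re-tested many times):

#include <set>
#include <string>

// Can 'word' be reduced, one letter at a time, all the way down to "a" or "i"?
// Every removal that produces a dictionary word triggers a recursive test,
// which is where the potential factorial blowup comes from.
bool is_good_word( const std::string & word,
                   const std::set<std::string> & dictionary )
{
    if ( word == "a" || word == "i" )
        return true;
    for ( size_t j = 0 ; j < word.size() ; j++ ) {
        std::string shorter = word;
        shorter.erase( j, 1 );
        if ( dictionary.count( shorter ) != 0 && is_good_word( shorter, dictionary ) )
            return true;
    }
    return false;
}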
So I took a shot at a bottom-up approach instead. It isn’t usually my first choice, but I think you’ll see that in this case it yields a much more satisfactory and efficient solution to the problem.
The Inner Loop
For this program to succeed, it must terminate its processing with a container that holds the longest good words in the dictionary. For this particular problem, my choice of container is the C++ hash_set, which is non-standard but universally implemented.

My bottom-up approach means that I will fill in the hash_set container for words of length 1 first, then words of length 2, and so on. For reasons of efficiency, it works out better if I keep a separate hash_set for each word length, so the hash_set objects are actually stored in a map that is indexed on the word size:

std::map<size_t,hash_set<std::string> > good_words

To fill in the hash for size i, I need a loop that iterates over all words of size i, removing one character at a time and then testing to see if the result is a good word of size i-1. If it is, I add it to good_words[ i ]. When I’m done iterating over all words of that size, I have a hash_set that contains all good words of that size, and I can move up to the next larger size.
So if we’re testing words of size i, the innermost part of the loop will look like this:
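(Sketched here from the description that follows: word is the dictionary word currently being tested, i is its length, and good_words is the map declared above. The listing in the downloadable source may differ in detail.)

// Try removing each character of 'word' in turn. If any removal produces
// a good word of size i-1, then 'word' itself is a good word of size i.
for ( size_t j = 0 ; j < word.size() ; j++ ) {
    std::string candidate = word;
    candidate.erase( j, 1 );
    if ( good_words[ i - 1 ].find( candidate ) != good_words[ i - 1 ].end() ) {
        good_words[ i ].insert( word );
        break;  // one successful reduction is enough
    }
}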
In this inner part of the loop, we repeatedly remove a character from the word, testing to see if the resulting word appears in the list of words saved in the previous size hash set. If a match occurs, the word is inserted into the hash set for the current word size, and the loop breaks.
This bottom-up approach seems like the ideal way to build my lists of good words for a couple of reasons. First, I don’t go to the expense of adding words to the hash sets unless they are good words. Second, determining that a given word is a good word only requires a test against the words at level i-1; I don’t have to test for a complete path down to ‘a’ or ‘i’.
Building The Input Data
In the previous section I mentioned that the innermost loop was going to be called as I iterated over all the dictionary words of a given size. So how do I get all the words of a given size?
The first thing that might occur to you is that you could read in all the words from the dictionary, then sort them by size. This approach would work, but given that the standard sort routines available to you in the C++ library are all going to work in O(n·lgn) time, it might get kind of expensive as the dictionary grows to hundreds of thousands of words.
The good news is that with this data set we’re in a position to take advantage of a linear sort. Yes, we can sort data and do substantially better than O(n·lgn) when we know that the data to be sorted is constrained to a small set of values.

In this case, I just create one linked list for each word size, and as I read the input file, I add each input word to the front of the appropriate linked list. This would be a true linear-time algorithm if I constrained the word length to a fixed maximum, say 25, but for convenience I actually store the lists in a map that looks like this:
std::map<size_t,std::list<std::string> > words_by_length
As a result the input code runs in close to linear time. The actual loop that reads in the data is nice and simple:
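(A sketch of that loop; I’m assuming the dictionary file has already been opened as a std::ifstream called input.)

// Read one whitespace-delimited word at a time and push it onto the front
// of the list for its length. operator[] creates an empty list on first use.
std::string word;
while ( input >> word )
    words_by_length[ word.size() ].push_front( word );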
Once all the words are read in, I can access the list of words of a given size with a simple map lookup: words_by_length[ i ].
Word Lists
One final detail before I can test my program - I need some lists of words! Lists of words are not hard to come by, although coming up with a suitable one for this exercise might take some work.
One of the first places to look is on your local Linux system. My system has a list of words in /usr/share/dict/words, which is used by the spell checker application, and possibly by the password app. On one of my systems, this dictionary has a whopping 483,524 words, which means it is packed with obscure words. Just as an example, a typical 11-character good word and its derivation found using /usr/share/dict/words yields this head-scratching sequence:
abranchiate
branchiate
branchiae
branchia
branchi
branch
ranch
rach
ach
ch
h
Probably the first thing you want to do with that file is go through and remove all the single letter words except ‘a’ and ‘i’, but even so, you’re going to be boggled by some of what you see.
Another good alternative is the collection of word lists distributed on Project Gutenberg as the Moby Word Lists. This includes a wide variety of lists of various sizes.
Beth Katz had several good dictionaries listed along with her homework assignment, including one called kids.dict that is nice and short, making it good for debugging runs.
Finally, Google searches for “word lists” will turn up many other good choices.
Wrapping It Up
Once I had the bottom-up good word builder working with the file-based word list, I was ready to put it all together. The core of main() now looks like this:
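(In sketch form; the command line handling here is my own guess, not necessarily what the downloadable source does.)

int main( int argc, char * argv[] )
{
    if ( argc < 2 )
        return 1;              // need a dictionary file name
    read_words( argv[ 1 ] );   // fill words_by_length from the dictionary file
    build_up_words();          // populate good_words, one word size at a time
    print_good_words();        // report the longest good words found
    return 0;
}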
Procedure read_words() simply reads all the words from the file into the map of list containers called words_by_length, as described earlier. Then build_up_words() collects all the good words and stores them, organized by size, in the map of hash_set containers called good_words. The core of that routine was described earlier.

I show the results of the top two levels in print_good_words(), which is simple enough to read up on in the source code.
The Results
By now you are no doubt dying to see some results from the program. But first, to scope the size of the problem, here’s the output from the program as the good_words hashes are populated on a run against SINGLE.TXT, a 354,985 word database:
Loading words from: SINGLE.TXT
Loaded 354985 words from SINGLE.TXT
Found 431 eligible words out of 431 total at length 2
Found 2075 eligible words out of 2092 total at length 3
Found 6213 eligible words out of 6758 total at length 4
Found 11322 eligible words out of 15047 total at length 5
Found 12495 eligible words out of 28473 total at length 6
Found 8939 eligible words out of 40094 total at length 7
Found 4295 eligible words out of 49528 total at length 8
Found 1210 eligible words out of 51216 total at length 9
Found 174 eligible words out of 43964 total at length 10
Found 20 eligible words out of 36082 total at length 11
Found 0 eligible words out of 28009 total at length 12
So that means about 13% of the words in this vocabulary were good words. I’m a little surprised that it’s that high. To add some sanity to the mix, I removed all the single character words from SINGLE.TXT with the exception of ‘a’ and ‘i’, and the ratio went down to a more reasonable 8%.
You can also see, as you would expect, that the proportion of good words goes down at each level. At lengths 2, 3, and 4 nearly all words are good words, but by the time we get to length 11, we’re down to less than one-tenth of one percent good.
Even with my modified version of SINGLE.TXT, you’re bound to get plenty of esoteric words when working your way through the derivation of an 11- or 10-character good word. Of the 18 words of eleven characters, the derivation that works best with my vocabulary would be the following:
sparklingly
sparkingly
sparingly
springly
springy
spring
sprig
prig
pig
pi
i
With the more manageable scrabble.dict dictionary, containing 79,340 words, some of the first sequences that pop out include:
shopping hopping hoping oping ping pig pi i
breaches beaches baches aches aces ace ae a
marchese marches arches aches aces ace ae a
prawning pawning awning awing wing win in i
stablest stalest stales tales ales als as a
bravoing braving raving ravin rain ain in i
failings filings flings lings lins ins is i
relaters elaters elates elate late ate ae a
roadster roaster raster rater rate ate ae a
semioses semises seises seise seis sis is i
clambers lambers lamber lamer lame lam am a
claviers clavers lavers avers aves ave ae a
shrieves shrives shives hives hies his is i
stalkier talkier talker taker take tae ae a
statutes statues states tates ates ate ae a
swarming warming waring wring ring rin in i
brambled rambled ambled amble able ale ae a
stratous stratus status stats tats tas as a
paddlers paddles padles pales ales als as a
thirling tirling tiring iring ring rin in i
trucking trucing truing ruing ring rin in i
brawlier brawler bawler baler bale ale ae a
frilling filling filing fling ling lin in i
carouses arouses arouse arose arse are ae a
No doubt there are still plenty of obscure words here, but remember, this dictionary is probably composed of at least 50% words that aren’t in your working vocabulary.
Efficiency
When run against a word list with 350K+ entries on my anemic notebook computer, it takes almost 10 seconds for the program to terminate, including display time. The vast majority of that time is spent checking words for goodness, which requires removing characters from each word one at a time, then checking to see if the shorter results are in the previous level’s hash set.
Obviously, if you want to optimize this program for better performance, that’s the place to do it. My guess would be that the std::string erase() member used to remove characters from a word is probably far from optimal, and could be replaced by a hand-coded routine designed to do the same task with much greater speed.
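For instance, something along these lines might cut down on the copying and shifting that erase() does on a fresh copy of each word; remove_char is my own name for the idea, not a routine from the downloadable source:

// Build 'word' minus the character at position 'skip' in 'buffer', which is
// reused from call to call so its capacity only has to grow once.
inline void remove_char( const std::string & word, size_t skip, std::string & buffer )
{
    buffer.assign( word, 0, skip );                       // everything before the gap
    buffer.append( word, skip + 1, std::string::npos );   // everything after the gap
}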
Issues With Non-Standard Library Functions
Because hashed containers did not make it into the original C++ standard, there is a somewhat higher level of peril when using them. Problems ranging from syntactic inconsistency to lack of performance guarantees definitely make hash_set and hash_map second class citizens compared to the other standard containers. I saw a good example of this when I first started work on this article.
When solving this problem, the first thing that seemed obvious to me was that we were going to be storing references to dictionary words in hash tables. And I thought it might be interesting to see how well the C++ hash classes were going to be able to handle input data with hundreds of thousands of words.
I thought a good test program would be one that simply reads in the text file and adds it to a hash set:
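(A sketch of that kind of test, with a running counter printed so the pauses described below are easy to spot; under Visual C++ 2005 the class lives in namespace stdext, and other compilers put it elsewhere.)

#include <hash_set>   // non-standard header; its location varies by compiler
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    stdext::hash_set<std::string> words;   // stdext:: under Visual C++ 2005
    std::ifstream input( "SINGLE.TXT" );
    std::string word;
    size_t count = 0;
    while ( input >> word ) {
        words.insert( word );
        if ( ++count % 1024 == 0 )
            std::cout << count << "\r" << std::flush;   // running counter
    }
    std::cout << "\nInserted " << count << " words\n";
    return 0;
}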
Under Visual C++ .NET 2005 on a fairly slow laptop, I immediately saw a nasty problem with the Microsoft implementation of hash_set<T>. Every time the counter hit an even power of two, there was a bit of a pause. The pause grew longer and longer as the count grew larger, until by the time I was up to 128K words, it stretched out to many seconds.
Lesson learned. Hash table resizing can be expensive under some implementations of this non-standard class. Resizing a hash table can be difficult in any implementation, but the additional requirements imposed on C++ library containers add significantly to the work that must be done at this point.
I hoped that I would find a reserve() method or a constructor option that would let me preallocate a hash_set with perhaps 200K buckets, but this doesn’t seem to be possible with Microsoft’s implementation. The good news is that the hash_set replacement in TR1, unordered_set, will impose a requirement that conforming libraries allow for a bucket count as part of the container’s constructor.
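For example, under a TR1-conforming library something along these lines should set aside the buckets up front; the header name and namespace still vary a bit from compiler to compiler, and the 200,000 figure is just my guess at a comfortable size for this word list:

#include <tr1/unordered_set>   // just <unordered_set> on some compilers
#include <string>

// Ask for roughly 200K buckets at construction time, before any insertions.
std::tr1::unordered_set<std::string> words( 200000 );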
It turned out not to be too important, however. As I worked on the implementation of the algorithm, I drastically reduced the number of strings that were stored in any one hash, making this a moot point.
Source Code
You can download the project here, including project files for Visual Studio 2003 and 2005, and a simple Makefile for gcc. The code has been checked under gcc 3.4, but I make no claims that it will work with all later versions of the library.
References
Ward, Grady. “Moby Word Lists by Grady Ward - Project Gutenberg.” Project Gutenberg. 12 Nov. 2007. http://www.gutenberg.org/etext/3201

“ISO/IEC JTC1/SC22/WG21 - The C++ Standards Committee.” Open Standards. 13 Nov. 2007. http://www.open-std.org/jtc1/sc22/wg21/