0.4.2 (2006-12-31):
- Added more overloading: (token ^ message) results in a new token which will raise ParseError with `message' if it fails to match.
- Removed the `scanMultiple' and `take' methods.
- Added a `TakeToken` token type, which is initialized with a length, and matches exactly that number of characters.
- As usual, optimized/updated examples to take advantage of new features. sexp.py is down to 34 lines, from 77 in the original release!
- Removed the `quotey' example. Turns out there's a much simpler way to do that with Python's re module. That's what I get for having worked with PHP for so long, eh?
- Speaking of PHP and examples -- added an example of parsing PHP's serialize format. 29 zesty lines!
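To illustrate the two additions above, here's a rough, self-contained sketch of how (token ^ message) and TakeToken behave. This is not ZestyParser's actual source; the match() signature and class internals are simplified assumptions made up for the example.

```python
class ParseError(Exception):
    pass

class Token:
    def match(self, data, cursor):
        raise NotImplementedError

    def __xor__(self, message):
        # (token ^ message): a new token that raises ParseError(message)
        # instead of quietly failing to match.
        return ErrorToken(self, message)

class ErrorToken(Token):
    def __init__(self, token, message):
        self.token, self.message = token, message

    def match(self, data, cursor):
        result = self.token.match(data, cursor)
        if result is None:
            raise ParseError(self.message)
        return result

class TakeToken(Token):
    def __init__(self, length):
        self.length = length

    def match(self, data, cursor):
        # Match exactly `length` characters, or fail.
        if cursor + self.length > len(data):
            return None
        return data[cursor:cursor + self.length], cursor + self.length

take3 = TakeToken(3) ^ 'expected at least 3 characters'
print(take3.match('abcdef', 0))  # ('abc', 3)
```

On shorter input, take3 raises ParseError('expected at least 3 characters') rather than returning a failure value.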

0.4.1 (2006-12-28):
- Tiny bugfix. Zoop! (That's what I get when I make a last-minute change that I assume will work and don't test it, eh?)

0.4.0 (2006-12-27): Big update! Big, compatibility-breaking update!
- Added a CallbackFor function intended to be used as a decorator. Write @CallbackFor(someToken) above a function, and the function will be replaced with that token, itself becoming the token's callback.
- To save space when you don't need certain information, callbacks can now take between one and three arguments. If three, it will be passed, as usual, the parser, the data, and the original cursor. If two, it will be passed the parser and the data. If one, it will simply be passed the data.
- Added some overloading magic. Now you can construct CompositeTokens by chaining other tokens together with the | operator, and TokenSequences by chaining other tokens together with the + operator.
- More overloading magic: (token >> callable) results in a copy of `token' with its callback set to `callable'. Useful for concisely specifying a callback for the outcome of a + or | sequence, since you don't get direct access to the initializer in such cases.
- Removed `name' as the first argument to tokens' initializers. It was not very useful, in retrospect. If you do need something like that for debugging, you can pass a name in the `name' keyword argument.
- Several optimizations. Now zestier than ever!
- Added RawToken class, which matches only an exact string. Faster than using regex matching if you're only looking for a constant.
- ZestyParser.scan() now returns the matching data or callback result directly, rather than as the second item in a tuple (the first being the token object). Instead, you can access the last-matched token via ZestyParser's `last' property.
- ZestyParser.addTokens() can also take keyword arguments; each value will be taken as a token to be added, and its key will be set as its name.
- Added ZestyParser.skip() method, which takes one token as an argument, attempts to match it, and returns a boolean indicating whether it matched. Can be faster than using scan() if you're just doing something like skipping whitespace.
- Added an example for parsing BitTorrent's bencode format.
- Rewrote calcy.py example yet again. Zestiness (and readability) increased!
- Improved sexp.py example similarly.
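A toy sketch of the overloading magic described above: | builds an alternative, + builds a sequence, and >> attaches a callback to a copy of the token. The class names come from this changelog, but the internals and the match() convention are illustrative assumptions, not ZestyParser's code.

```python
import copy

class Token:
    callback = None

    def __or__(self, other):
        return CompositeToken([self, other])

    def __add__(self, other):
        return TokenSequence([self, other])

    def __rshift__(self, callable_):
        # (token >> callable): a copy with its callback set.
        new = copy.copy(self)
        new.callback = callable_
        return new

    def _post(self, result):
        return self.callback(result) if self.callback else result

class RawToken(Token):
    def __init__(self, string):
        self.string = string

    def match(self, data, cursor):
        if data.startswith(self.string, cursor):
            return self._post(self.string), cursor + len(self.string)
        return None

class CompositeToken(Token):
    def __init__(self, tokens):
        self.tokens = tokens

    def match(self, data, cursor):
        # First alternative that matches wins.
        for t in self.tokens:
            r = t.match(data, cursor)
            if r is not None:
                value, cursor = r
                return self._post(value), cursor
        return None

class TokenSequence(Token):
    def __init__(self, tokens):
        self.tokens = tokens

    def match(self, data, cursor):
        # All parts must match, one after another.
        values = []
        for t in self.tokens:
            r = t.match(data, cursor)
            if r is None:
                return None
            value, cursor = r
            values.append(value)
        return self._post(values), cursor

greeting = (RawToken('hi') | RawToken('hello')) + RawToken('!') >> ''.join
print(greeting.match('hello!', 0))  # ('hello!', 6)
```

Note that Python's operator precedence works in our favor here: + binds tighter than >>, so the callback applies to the whole sequence without extra parentheses.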

0.3.0 (2006-12-21):
- The previous distributions didn't actually include the examples. Oho!
- Added a ParseError exception, for programs to raise when they encounter something they weren't expecting.
- Added a coord() method to ZestyParser, returning the current row and column of the cursor. Mainly useful for the ParseError exception, but abstract enough to be used in other ways.
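For a sense of what coord() computes, here's a plausible standalone sketch. Whether ZestyParser's rows and columns are 1-based is an assumption; the function here is not the library's implementation.

```python
def coord(data, cursor):
    """Return the (row, column) of `cursor` within `data`, both 1-based."""
    row = data.count('\n', 0, cursor) + 1
    last_newline = data.rfind('\n', 0, cursor)
    column = cursor - last_newline  # correct even when rfind returns -1
    return row, column

text = 'first line\nsecond line'
print(coord(text, 0))   # (1, 1)
print(coord(text, 11))  # (2, 1)
```

Handy for ParseError messages like "unexpected token at line 2, column 1".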

0.2.0 (2006-12-18):
- Added some worthwhile internal abstractions.
- Rewrote calcy.py example to use new features and be a bit more concise.
- Changed NotMatched to an exception. Duh.
- Added built-in ReturnDirect token callback. I'm not going to bother explaining it here, because I'm only writing this as of 0.4.0, by which point ReturnDirect is obsolete.
- Added addToken method on ZestyParser class, allowing code to refer to tokens by name instead of by reference; useful for mutually-recursive tokens.
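To show why by-name registration helps with mutually recursive tokens, here's a deliberately simplified sketch. The Parser class, the tokens-as-plain-functions convention, and the grammar are all invented for this example; only the idea of resolving a name at scan time reflects the feature above.

```python
class Parser:
    def __init__(self):
        self.tokens = {}

    def addToken(self, name, token):
        self.tokens[name] = token

    def scan(self, data, cursor, token):
        # A string is looked up in the registry at scan time, so a token
        # can reference another token that is registered later.
        if isinstance(token, str):
            token = self.tokens[token]
        return token(self, data, cursor)

# Two mutually referring rules: an "item" is a digit or a parenthesized item.
def digit(p, data, cursor):
    if cursor < len(data) and data[cursor].isdigit():
        return data[cursor], cursor + 1
    return None

def parens(p, data, cursor):
    if not data.startswith('(', cursor):
        return None
    inner = p.scan(data, cursor + 1, 'item')  # by name, not by reference
    if inner is None:
        return None
    value, cursor = inner
    if not data.startswith(')', cursor):
        return None
    return value, cursor + 1

def item(p, data, cursor):
    return digit(p, data, cursor) or parens(p, data, cursor)

p = Parser()
p.addToken('item', item)
print(p.scan('((7))', 0, 'item'))  # ('7', 5)
```

Because parens() names 'item' instead of holding a reference, the recursion resolves lazily and neither definition has to come first.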

0.1.0 (2006-12-17):
- Initial release.
