Namely, in the file named ‘foo’, on the first line, change the 9th token from ‘true’ to ‘false’. These patches are ordinary hunk patches applied after a tokenization step: the tokenizer first converts each line into multiple lines, one token per line, and a line-based diff then produces token-level hunks. This is similar to how wdiff operates.
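As a minimal sketch of this idea, the snippet below uses Python's `difflib` as the line-based differ and whitespace splitting as a stand-in tokenizer (a real tokenizer would likely also preserve whitespace and punctuation boundaries). The input line and its token positions are invented for illustration; the 9th token happens to be ‘true’.

```python
import difflib

def tokenize(line):
    # Hypothetical tokenizer: split on whitespace so each token
    # becomes its own "line" for the line-based diff algorithm.
    return line.split()

old = "if ( a == b && c == true )"
new = "if ( a == b && c == false )"

# Running an ordinary line diff over the token streams yields a
# token-level (word-level) hunk, in the spirit of wdiff.
hunks = list(difflib.unified_diff(tokenize(old), tokenize(new),
                                  lineterm=""))
for h in hunks:
    print(h)
```

The resulting hunk marks only the changed token (`-true` / `+false`), with surrounding tokens as context, rather than replacing the whole line.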

It’s possible that word-based hunks are simply a special case of character-based hunks, but I found no documentation about the latter.