Tuesday, February 9, 2010

Spell Checking in HTML

I recently completed a spell-checking module as part of a larger project for an HTML-based editor. The project posed some interesting challenges; here are my experiences, the problems I encountered, and how I solved them.

The first step of spell checking a block of text is determining what the text actually is. For a simple text document, this is easy: the text is nothing more than a string of characters. For an HTML document, however, it is not so simple. Markup introduces a great deal of extra structure to deal with: we want to check the text as it would appear to a human, not to a computer.

In other words, it is one thing to detect the word "html" in a string of words: "blah blah blah, html, blah blah blah," but quite another thing to detect the same word as it would appear to a human if embedded in a complex DOM structure: <p><b>h</b><i>t</i><em>m</em><strong>l</strong></p>.

One obvious way of solving this problem is to derive the textual value of a given parent element. This is quite easy: innerText works in IE, and textContent works in Mozilla. A human-readable version of a DOM element's text can thus be obtained:

var parent = document.body;
var text = parent.textContent; // for Mozilla
if (text === undefined) {
  text = parent.innerText; // for IE
}

However, this leaves a more significant problem to deal with: once we determine which words are misspelled, how will we re-associate those words with the correct DOM structures? If "html" comes back as being incorrect, we will need to know which text nodes originally produced the word, for how else will we correct it? I have a suspicion that this could be solved with a clever application of TextRange objects in IE and Range objects in Mozilla, but that isn't the route I took (working with IE's TextRange is a headache all by itself anyway).

My basic strategy is to walk through the DOM and derive two parallel structures: one contains the text as it would appear to a user, and the other maps that "human" version back onto the original DOM nodes.

Rendering the HTML as text is not exactly straightforward either; one cannot simply concatenate the values of the individual text nodes. A number of HTML elements introduce white space for humans, such as <p> or <div>, or even some inline ones, like <br>. Others do not introduce white space, such as <b>, <i>, or <em>. Further rules apply as well: <b style="display: block;"> is a <b> that adds white space even though <b> normally wouldn't.

To resolve this, I spent some time empirically testing each suspicious HTML element to determine whether it causes visible white space to be inserted inside a word. Using this list and a handful of other rules (floated nodes always introduce white space, display: block nodes always introduce white space, and so on), I construct a text string out of the DOM that more or less resembles the words as a human would see them. Notably, an exact representation of white space is not necessary; I don't need to render tables as tables, I just need to know that a </td><td> causes a break in a word while a </span><span> doesn't.
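To give a flavor of the check involved, here is a simplified sketch. The addsWhitespace name and the abbreviated tag list are mine, for illustration; the real list is longer, and a real version would consult computed styles (getComputedStyle in Mozilla, currentStyle in IE) rather than only inline styles:

// Hypothetical helper: does this element introduce visible white
// space between the text on either side of it?
var BREAKING_TAGS = { P: 1, DIV: 1, BR: 1, TD: 1, TR: 1, LI: 1, TABLE: 1 };

function addsWhitespace(el) {
  // Styling wins over the tag name: floats and display: block
  // always introduce white space. (Only inline styles are checked
  // here; real code must consult the computed style.)
  var style = el.style;
  if (style && (style.display === 'block' ||
                style.cssFloat || style.styleFloat)) {
    return true;
  }
  return BREAKING_TAGS[el.nodeName] === 1;
}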

As I construct this string, I keep track of each text node that I encounter in the DOM, and meticulously record the starting and ending indices of that node's textual content in my generated string.

As an example, consider the following DOM fragment, with text nodes shown explicitly:

<p>
  <b>
    <textNode 1>h</textNode 1>
  </b>
  <i>
    <textNode 2>t</textNode 2>
  </i>
  <em>
    <textNode 3>m</textNode 3>
  </em>
  <strong>
    <textNode 4>l</textNode 4>
  </strong>
  <textNode 5> is complex</textNode 5>
</p>

After parsing the above, I would end up with a structure similar to this:

result = {
  html: [
    { node: <textNode 1>, start: 0, end:  1 },
    { node: <textNode 2>, start: 1, end:  2 },
    { node: <textNode 3>, start: 2, end:  3 },
    { node: <textNode 4>, start: 3, end:  4 },
    { node: <textNode 5>, start: 4, end: 15 }
  ],
  text: 'html is complex'
}
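A recursive walk along these lines will produce that structure. The following is only a sketch (it glosses over white-space collapsing and the other rules described above), but it shows the shape of the approach, assuming the addsWhitespace helper from earlier:

function buildTextMap(parent) {
  var result = { html: [], text: '' };

  function walk(node) {
    if (node.nodeType === 3) { // text node
      var start = result.text.length;
      result.text += node.nodeValue;
      result.html.push({ node: node, start: start, end: result.text.length });
    } else if (node.nodeType === 1) { // element node
      if (addsWhitespace(node)) { result.text += ' '; }
      for (var i = 0; i < node.childNodes.length; i++) {
        walk(node.childNodes[i]);
      }
      if (addsWhitespace(node)) { result.text += ' '; }
    }
  }

  walk(parent);
  return result;
}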

Once this process is complete, I split the result.text value into a list of unique individual words. Locating each word is easily accomplished through regular expressions, although special provisions must be made for hyphenated words and contractions. Unicode "smart quote" characters and their brethren are also normalized into regular ASCII quotes. This list is finally sent up to the server, where all the real work happens.
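The exact expressions are tied up in the module, but the tokenization step looks something like this sketch (the character classes and normalizations shown are illustrative, not exhaustive):

// Normalize curly quotes to ASCII, then extract words, allowing
// internal apostrophes and hyphens ("don't", "well-known").
var text = result.text
  .replace(/[\u2018\u2019]/g, "'")
  .replace(/[\u201C\u201D]/g, '"');

var words = text.match(/[A-Za-z]+(?:['-][A-Za-z]+)*/g) || [];

// Reduce the list to unique words before sending it to the server.
var seen = {}, unique = [];
for (var i = 0; i < words.length; i++) {
  if (!seen.hasOwnProperty(words[i])) {
    seen[words[i]] = true;
    unique.push(words[i]);
  }
}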

Given any list of words, there are several methods to perform spell checking upon it. For my purposes, I've implemented a dictionary-based approach, as this gives me a greater sense of confidence and considerably more control than statistical analysis. Locating misspelled words is therefore a trivial matter of determining whether each provided word is in the dictionary or not.

However, it is not enough to know which words are incorrect: I must also offer spelling suggestions. I accomplished this by implementing the Damerau-Levenshtein distance algorithm, which was more fun than I've had in code in quite some time. An in-depth explanation can be found on Wikipedia, but put simply, the algorithm compares any two words and produces a numeric score: the number of single-character insertions, deletions, substitutions, and adjacent transpositions needed to turn one word into the other. The lower the score, the more similar the words.
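The production version of the algorithm lives on the server as C# (more on that below), but the core recurrence is compact enough to sketch in JavaScript. This is the "optimal string alignment" formulation, where each insertion, deletion, substitution, or adjacent transposition costs one edit:

// Illustrative JavaScript sketch; the production implementation is C#.
function damerauLevenshtein(a, b) {
  var d = [], i, j;
  for (i = 0; i <= a.length; i++) { d[i] = [i]; }  // delete i chars
  for (j = 0; j <= b.length; j++) { d[0][j] = j; } // insert j chars

  for (i = 1; i <= a.length; i++) {
    for (j = 1; j <= b.length; j++) {
      var cost = a.charAt(i - 1) === b.charAt(j - 1) ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,       // deletion
        d[i][j - 1] + 1,       // insertion
        d[i - 1][j - 1] + cost // substitution (or match)
      );
      // Transposition of two adjacent characters.
      if (i > 1 && j > 1 &&
          a.charAt(i - 1) === b.charAt(j - 2) &&
          a.charAt(i - 2) === b.charAt(j - 1)) {
        d[i][j] = Math.min(d[i][j], d[i - 2][j - 2] + 1);
      }
    }
  }
  return d[a.length][b.length];
}

// damerauLevenshtein('hmtl', 'html') === 1 (one transposition)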

Armed with this capability, I am able to generate spelling suggestions by calculating the Damerau-Levenshtein distance between each misspelled word and each word in the dictionary. The algorithm is quite speedy, but the dictionary is quite large, so some optimizations are in order: for instance, only dictionary words of a length comparable to the misspelled word are tested, since a candidate whose length differs by more than the allowed distance can never match.
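In sketch form, that loop might look like this (the maxDistance threshold and the array-based dictionary are illustrative; the real dictionary lives in SQL Server, as described below):

function suggest(word, dictionary, maxDistance) {
  var suggestions = [];
  for (var i = 0; i < dictionary.length; i++) {
    var candidate = dictionary[i];
    // Skip candidates that can't possibly be within maxDistance edits.
    if (Math.abs(candidate.length - word.length) > maxDistance) {
      continue;
    }
    if (damerauLevenshtein(word, candidate) <= maxDistance) {
      suggestions.push(candidate);
    }
  }
  return suggestions;
}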

While I am given to understand that a relational database is not ideal for this, I have implemented the algorithm as a C# assembly running inside SQL Server 2005. Additional optimizations are no doubt possible, but I am already able to process comparatively large documents in under a second with this method.

As a result of all this, the server sends back a list of misspelled words. The JavaScript code takes these words and locates each one within the result.text string that it previously constructed. For each word's matching indices in result.text, it scans through the list of nodes in result.html until it finds the text node or nodes that made up that word.
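That scan amounts to an interval search over the recorded indices; in sketch form:

// Find the entries in result.html whose [start, end) ranges overlap
// the word located at [wordStart, wordEnd) within result.text.
function nodesForWord(result, wordStart, wordEnd) {
  var matches = [];
  for (var i = 0; i < result.html.length; i++) {
    var entry = result.html[i];
    if (entry.start < wordEnd && entry.end > wordStart) {
      matches.push(entry);
    }
  }
  return matches;
}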

Finally! The code now knows which words are misspelled AND which nodes correspond to those words. From here, it is a relatively simple matter of presenting this information to the user and letting them decide how to handle it. For the time being, I've chosen to provide a dialog box, similar to how MS Word's spell checker operates. This gives the user familiar buttons: "Ignore," "Ignore All," "Add to Dictionary," and so on. I am rather tempted to go back and implement a Google-style interface, wherein the incorrect words are highlighted and the user is given a context menu instead. However, there are many more features left to complete, and I have to move on at some point.

As soon as the user decides on a new spelling for a word, I alter each text node that composed the original word, updating it to contain the new values instead. Should a word be split across multiple text nodes, I fill in the letters one at a time, from left to right, letting the final node be the one to expand or contract if there is a difference in the number of letters.
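In sketch form, and assuming for simplicity that each matched node contains only characters of the misspelled word, the replacement looks like this (real code must splice into nodeValue at the correct offsets when a node also holds surrounding text):

function replaceWord(entries, correction) {
  var offset = 0;
  for (var i = 0; i < entries.length; i++) {
    var node = entries[i].node;
    var isLast = (i === entries.length - 1);
    // Earlier nodes keep their original length; the last node takes
    // whatever remains, expanding or contracting as needed.
    var piece = isLast
      ? correction.substring(offset)
      : correction.substring(offset, offset + node.nodeValue.length);
    node.nodeValue = piece;
    offset += piece.length;
  }
}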

And that's all there is to it!
