In information theory and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (i.e. insertions, deletions or substitutions) required to change one word into the other. It is named after Vladimir Levenshtein, who considered this distance in 1965.[1]

In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural language translation based on translation memory.

The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons.

The Hamming distance, by contrast, allows only substitutions; hence, it applies only to strings of the same length.

Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied.
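To make the parameterized notion concrete, here is a minimal sketch in Python of an edit distance with caller-supplied operation costs; the function name and default costs are illustrative choices, not part of any standard definition (with all costs equal to 1 it reduces to the ordinary Levenshtein distance):

```python
def weighted_edit_distance(s, t, ins_cost=1, del_cost=1, sub_cost=1):
    """Minimum total cost to turn s into t, with per-operation costs."""
    m, n = len(s), len(t)
    # d[i][j] = minimum cost to transform the first i characters of s
    # into the first j characters of t
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * del_cost          # delete every source character
    for j in range(1, n + 1):
        d[0][j] = j * ins_cost          # insert every target character
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = min(d[i - 1][j] + del_cost,       # deletion
                              d[i][j - 1] + ins_cost,       # insertion
                              d[i - 1][j - 1] + sub_cost)   # substitution
    return d[m][n]
```

Setting a substitution cost higher than a deletion plus an insertion effectively forbids substitutions, since the algorithm will always prefer the cheaper pair of operations.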

This is a straightforward, but inefficient, recursive pseudocode implementation of a LevenshteinDistance function that takes two strings, s and t, together with their lengths, and returns the Levenshtein distance between them:
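A minimal sketch of that recursive approach in Python (indexing the strings directly rather than passing lengths, since Python strings carry their own length):

```python
def lev(s, t):
    """Naive recursive Levenshtein distance: exponential time,
    for illustration only."""
    if not s:
        return len(t)                     # insert all remaining characters of t
    if not t:
        return len(s)                     # delete all remaining characters of s
    if s[0] == t[0]:
        return lev(s[1:], t[1:])          # first characters match: no cost
    return 1 + min(lev(s[1:], t),         # deletion from s
                   lev(s, t[1:]),         # insertion into s
                   lev(s[1:], t[1:]))     # substitution
```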

Unfortunately, this straightforward recursive implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times.

A more efficient method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible prefixes might be stored in an array d[][], where d[i][j] is the distance between the first i characters of string s and the first j characters of string t. The table is easy to construct one row at a time, starting with row 0. When the entire table has been built, the desired distance is d[len_s][len_t]. While this technique is significantly faster, it consumes on the order of len_s * len_t more memory than the straightforward recursive implementation.
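One way to avoid the recomputation without building the table explicitly is to cache the recursive calls; a sketch in Python using functools.lru_cache (an implementation choice for illustration, equivalent in effect to the prefix array described above):

```python
from functools import lru_cache

def lev_memo(s, t):
    # d(i, j) = distance between the first i characters of s and the
    # first j characters of t; results are cached, so each (i, j)
    # pair is computed only once.
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        if s[i - 1] == t[j - 1]:
            return d(i - 1, j - 1)       # matching characters: no cost
        return 1 + min(d(i - 1, j),      # deletion
                       d(i, j - 1),      # insertion
                       d(i - 1, j - 1))  # substitution
    return d(len(s), len(t))
```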

Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed.

This is a straightforward pseudocode implementation for a function LevenshteinDistance that takes two strings, s of length m, and t of length n, and returns the Levenshtein distance between them:

function LevenshteinDistance(char s[1..m], char t[1..n]):
    // for all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t;
    // note that d has (m+1)*(n+1) values
    declare int d[0..m, 0..n]

    set each element in d to zero

    // source prefixes can be transformed into empty string by
    // dropping all characters
    for i from 1 to m:
        d[i, 0] := i

    // target prefixes can be reached from empty source prefix
    // by inserting every character
    for j from 1 to n:
        d[0, j] := j

    for j from 1 to n:
        for i from 1 to m:
            if s[i] = t[j]:
                d[i, j] := d[i-1, j-1]        // no operation required
            else:
                d[i, j] := minimum(d[i-1, j] + 1,    // a deletion
                                   d[i, j-1] + 1,    // an insertion
                                   d[i-1, j-1] + 1)  // a substitution

    return d[m, n]

Note that this implementation does not fit the definition precisely: it always prefers matches, even if insertions or deletions provided a better score. This is equivalent; it can be shown that for every optimal alignment (which induces the Levenshtein distance) there is another optimal alignment that prefers matches in the sense of this implementation.[4]
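The pseudocode above translates almost line for line into Python; a sketch (Python strings are 0-indexed, so s[i - 1] corresponds to the pseudocode's s[i]):

```python
def levenshtein_distance(s, t):
    m, n = len(s), len(t)
    # d[i][j] holds the Levenshtein distance between the first i
    # characters of s and the first j characters of t
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i              # drop all characters of the source prefix
    for j in range(1, n + 1):
        d[0][j] = j              # insert every character of the target prefix
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            if s[i - 1] == t[j - 1]:
                d[i][j] = d[i - 1][j - 1]           # no operation required
            else:
                d[i][j] = min(d[i - 1][j] + 1,      # a deletion
                              d[i][j - 1] + 1,      # an insertion
                              d[i - 1][j - 1] + 1)  # a substitution
    return d[m][n]
```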

Two examples of the resulting matrix (in each cell, the value is derived from its upper, left, or upper-left neighbour according to the recurrence above):

Distance between "kitten" and "sitting" (d = 3):

         k  i  t  t  e  n
      0  1  2  3  4  5  6
   s  1  1  2  3  4  5  6
   i  2  2  1  2  3  4  5
   t  3  3  2  1  2  3  4
   t  4  4  3  2  1  2  3
   i  5  5  4  3  2  2  3
   n  6  6  5  4  3  3  2
   g  7  7  6  5  4  4  3

Distance between "Saturday" and "Sunday" (d = 3):

         S  a  t  u  r  d  a  y
      0  1  2  3  4  5  6  7  8
   S  1  0  1  2  3  4  5  6  7
   u  2  1  1  2  2  3  4  5  6
   n  3  2  2  2  3  3  4  5  6
   d  4  3  3  3  3  4  3  4  5
   a  5  4  3  4  4  4  4  3  4
   y  6  5  4  4  5  5  5  4  3

The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.