I have a standard {-1,+1} machine learning problem. The main difference is that the data points are binary strings, so their proximity is measured by Hamming distance. Can SVM be applied in this case? Which SVM library is best suited for this task?
I've just realized that Hamming distance is L1-distance, so which SVM library can correctly handle L1-distance between data points? – Allllex Apr 05 '11 at 11:47
You might be interested in http://alumni.cs.ucr.edu/~hli/paper/hli04tis.pdf or any of the references at https://en.wikipedia.org/wiki/String_kernel. – Danica May 04 '13 at 02:43
4 Answers
If a kernel k is positive definite, then for any pair of examples x and z the determinant of the Gram matrix is non-negative:
| k(x, x)  k(x, z) |
| k(z, x)  k(z, z) |  =  k(x, x)k(z, z) - k(x, z)^2 >= 0
For a distance (Hamming distance included) the following properties hold for any x, y, z:
1) non-negativity: d(x, z) >= 0, and d(x, z) = 0 <=> x = z
2) symmetry: d(x, z) = d(z, x)
3) triangle inequality: d(x, z) <= d(x, y) + d(y, z)
Taking k to be the Hamming distance, property 1) gives:
a) k(x, x) = k(z, z) = 0
But for k to be a positive definite kernel we need:
b) k(x, x)k(z, z) - k(x, z)^2 >= 0
Substituting a) into b):
-k(x, z)^2 >= 0
k(x, z)^2 <= 0
For real values this forces k(x, z) = 0 for every pair, yet the Hamming distance between distinct strings is strictly positive. The Hamming distance is therefore not a valid kernel.
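The determinant argument above can be checked numerically. A minimal sketch in plain Python (the strings "aab" and "baa" are taken from the example discussed below):

```python
def hamming(x, z):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(x, z))

x, z = "aab", "baa"

# Determinant of the 2x2 Gram matrix when k is the Hamming distance:
# k(x,x)k(z,z) - k(x,z)^2 with k(x,x) = k(z,z) = 0
gram_det = hamming(x, x) * hamming(z, z) - hamming(x, z) ** 2
print(gram_det)  # -4: negative, so the Hamming distance is not a PD kernel
```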
Unless I'm missing something, I think it is a valid kernel, because it is an inner product in the following space: K("aab","baa") = [0,1,0,1,1,0] \dot [1,0,0,1,0,1].
This is a nice way to define a feature map for a kernel, but it is not the Hamming distance. The Hamming distance between "aab" and "baa" is 2 (the first and third characters differ), but
[0,1,0,1,1,0] \dot [1,0,0,1,0,1] = 1.
Even if the Hamming distance is not positive definite it can still be plugged into an SVM, but you certainly lose the guarantee of solving a convex optimization problem.

One thing you can do with many distance metrics is put them in a Gaussian kernel: `e^{-(d(x, y) / sigma)^2}`. This is a kernel if `d` is the L2 distance (proof e.g. [here](http://cs.nyu.edu/~dsontag/courses/ml12/slides/lecture6.pdf)). It also works, of course, with any metric which can be isometrically embedded into L2. Unfortunately, it seems that L1 is not exactly embeddable into L2: [these notes](http://www.cs.princeton.edu/courses/archive/spr05/cos598B/lecture11.pdf) describe an algorithm with `O(sqrt{log n} log log n)` distortion, and mention there is a lower bound less than `log log n`. – Danica May 04 '13 at 02:29
That said, in practice you may be able to get good results by finding the matrix of Hamming distances on your samples, passing them through a Gaussian, and then projecting this to be positive semidefinite (either through zeroing out negative eigenvalues from its eigendecomposition, or adding to the diagonal an amount equal to the most-negative eigenvalue; the former [tends to work better in practice](http://icml2008.cs.helsinki.fi/papers/531.pdf) but the latter is easier to compute). Alternatively, you could use an approximate embedding into L2 like the one above and use any kernel on that. – Danica May 04 '13 at 02:35
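The repair recipe from the comment above (Gaussian of the Hamming distances, then eigenvalue clipping) can be sketched as follows. This is only an illustration with made-up example strings; `psd_gaussian_hamming` is a hypothetical helper name, not part of any library:

```python
import numpy as np

def hamming(x, z):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(x, z))

def psd_gaussian_hamming(strings, sigma=1.0):
    """Pass pairwise Hamming distances through a Gaussian, then project
    the result to the nearest PSD matrix by zeroing negative eigenvalues
    (the first of the two repair strategies mentioned above)."""
    D = np.array([[hamming(a, b) for b in strings] for a in strings], float)
    K = np.exp(-((D / sigma) ** 2))
    w, V = np.linalg.eigh(K)       # K is symmetric, so eigh applies
    w = np.clip(w, 0.0, None)      # zero out any negative eigenvalues
    return V @ np.diag(w) @ V.T    # PSD by construction

K = psd_gaussian_hamming(["aab", "baa", "abb", "bba"])
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # no negative eigenvalues remain
```

The resulting matrix can then be fed to any SVM implementation that accepts a precomputed Gram matrix.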
This is probably best handled by using an SVM library that allows you to create a custom kernel function (e.g. libSVM, SVMlight, scikit-learn). Then you would have to write a Hamming distance function to compute the distance between two strings and plug it in as the kernel function.
The only problem is, I'm not sure the Hamming distance actually is a kernel, i.e. that it satisfies Mercer's condition. It's obviously symmetric, but I don't know whether it's positive definite.
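As a sketch of the custom-kernel route in scikit-learn: `SVC` accepts a callable kernel, so strings can be encoded as rows of symbol codes and a Hamming-based kernel passed in. The Gaussian-of-Hamming form is used here (rather than the raw distance) because of the positive-definiteness concern above; the data and function names are made up for illustration:

```python
import numpy as np
from sklearn.svm import SVC

def gaussian_hamming_kernel(X, Y, sigma=1.0):
    """Callable kernel for scikit-learn: Gaussian of the Hamming distance.
    X and Y are 2-D arrays whose rows are symbol codes of equal-length strings."""
    D = (X[:, None, :] != Y[None, :, :]).sum(axis=2)  # pairwise Hamming distances
    return np.exp(-((D / sigma) ** 2))

def encode(strings):
    """Turn equal-length strings into rows of character codes."""
    return np.array([[ord(c) for c in s] for s in strings])

X = encode(["aab", "baa", "abb", "bba"])  # toy training data
y = [1, -1, 1, -1]
clf = SVC(kernel=gaussian_hamming_kernel).fit(X, y)
print(clf.predict(encode(["aba"])))  # label for an unseen string
```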

This paper proposes a kernel for Hamming distances measured between categorical features. It's simply a matter of replacing the Euclidean distance in the standard exponential kernel with Hamming.
It is also possible to combine Euclidean and Hamming distances into one kernel, which would be good for datasets with a mix of continuous and discrete variables.
The good news is that they also prove that this kernel is indeed positive definite (on page 14).
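A minimal sketch of that construction, with the Euclidean distance in the exponential kernel replaced by Hamming (`hamming_rbf` is a hypothetical name and the strings are made-up examples; see the paper for the general categorical-feature form and the positive-definiteness proof):

```python
import numpy as np

def hamming(x, z):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(x, z))

def hamming_rbf(strings, gamma=0.5):
    """Exponential kernel with Hamming in place of Euclidean distance:
    k(x, z) = exp(-gamma * d_H(x, z))."""
    return np.array([[np.exp(-gamma * hamming(a, b)) for b in strings]
                     for a in strings])

K = hamming_rbf(["aab", "baa", "abb"])
print(np.linalg.eigvalsh(K).min() > 0)  # positive definite on this example
```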

As StompChicken says, it is unclear whether the Hamming distance is a valid kernel.
Unless I'm missing something, I think it is a valid kernel, because it is an inner product in the following space: K("aab","baa") = [0,1,0,1,1,0] \dot [1,0,0,1,0,1].
After understanding this "encoding" you can use any SVM library that supports a linear kernel, and encode your strings as in the previous example.
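A minimal sketch of that encoding (the alphabet ordering is chosen so the vectors match the "aab"/"baa" example above; `one_hot` is an illustrative helper, not a library function):

```python
import numpy as np

def one_hot(s, alphabet="ba"):
    """Encode a string position-by-position as indicator vectors,
    as in the "aab" / "baa" example above."""
    idx = {c: i for i, c in enumerate(alphabet)}
    v = np.zeros(len(s) * len(alphabet))
    for pos, c in enumerate(s):
        v[pos * len(alphabet) + idx[c]] = 1.0
    return v

u, w = one_hot("aab"), one_hot("baa")
print(u.tolist())      # [0, 1, 0, 1, 1, 0]
print(w.tolist())      # [1, 0, 0, 1, 0, 1]
print(int(u @ w))      # 1: the number of matching positions
```

Note that this inner product counts *matching* positions, i.e. it equals the string length minus the Hamming distance, which is why a linear kernel on these vectors works even though the Hamming distance itself may not be a kernel.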
