WUP(s1, s2) = 2*dLCS.depth / ( min_{dlcs in dLCS}(s1.depth - dlcs.depth) + min_{dlcs in dLCS}(s2.depth - dlcs.depth) + 2*dLCS.depth ), where dLCS(s1, s2) = argmax_{lcs in LCS(s1, s2)}(lcs.depth).

- min score = 0.0
- max score = 1.0
- error score = -1.0
- acceptable pos pairs = [['n', 'n'], ['v', 'v']]
- use all senses = true
- use root node = true
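The two minimized terms in the denominator are the edge distances from s1 and s2 down to the deepest LCS, so the score can be computed directly from three quantities. A minimal sketch in Python (the depth and distance values are made-up inputs, not taken from WordNet):

```python
def wup(lcs_depth, dist1, dist2):
    """Wu-Palmer similarity.

    lcs_depth: depth of the deepest LCS (dLCS.depth)
    dist1: s1.depth - dlcs.depth (edges from s1 down to the LCS)
    dist2: s2.depth - dlcs.depth (edges from s2 down to the LCS)
    """
    return 2 * lcs_depth / (dist1 + dist2 + 2 * lcs_depth)

# Identical synsets: both distances are 0, so the score is the maximum, 1.0.
print(wup(5, 0, 0))   # -> 1.0
# Distinct synsets: score falls toward 0 as the distances grow.
print(wup(3, 2, 4))   # -> 0.5
```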

JCN(s1, s2) = 1 / jcn_distance(s1, s2), where jcn_distance(s1, s2) = IC(s1) + IC(s2) - 2*IC(LCS(s1, s2)); when jcn_distance is 0, it is replaced by -Math.log_e( (freq(LCS(s1, s2).root) - 0.01D) / freq(LCS(s1, s2).root) ), a small non-zero distance that avoids division by zero and yields a very large (effectively infinite) similarity.

- min score = 0.0
- max score = Infinity
- error score = -1.0
- acceptable pos pairs = [['n', 'n'], ['v', 'v']]
- use all senses = true
- use root node = true
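The zero-distance fallback above can be sketched as follows; the information-content and frequency values are invented inputs for illustration, not real WordNet statistics:

```python
import math

def jcn(ic1, ic2, ic_lcs, root_freq=1000):
    """Jiang-Conrath similarity as 1 / jcn_distance.

    ic1, ic2: information content of the two synsets
    ic_lcs:   information content of their LCS
    root_freq: corpus frequency of the LCS's root (used only
               in the zero-distance fallback; value is an assumption)
    """
    distance = ic1 + ic2 - 2 * ic_lcs
    if distance == 0:
        # Substitute a tiny positive distance so similarity is huge but finite.
        distance = -math.log((root_freq - 0.01) / root_freq)
    return 1 / distance

print(jcn(3.0, 2.0, 1.0))   # ordinary case: 1 / 3
print(jcn(2.0, 2.0, 2.0))   # zero-distance case: very large score
```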

LCH(s1, s2) = -Math.log_e( path_length(s1, s2) / ( 2 * max_depth(pos) ) ), where path_length(s1, s2) is the length of the shortest path between the two synsets.

- min score = 0.0
- max score = Infinity
- error score = -1.0
- acceptable pos pairs = [['n', 'n'], ['v', 'v']]
- use all senses = true
- use root node = true
- max depth N = 20
- max depth V = 14

LIN(s1, s2) = 2*IC(LCS(s1, s2)) / (IC(s1) + IC(s2)).

- min score = 0.0
- max score = 1.0
- error score = -1.0
- acceptable pos pairs = [['n', 'n'], ['v', 'v']]
- use all senses = true
- use root node = true
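A one-line sketch of the ratio above, again with invented information-content values:

```python
def lin(ic1, ic2, ic_lcs):
    """Lin similarity: ratio of shared to total information content."""
    return 2 * ic_lcs / (ic1 + ic2)

print(lin(3.0, 1.0, 1.0))   # -> 0.5
print(lin(2.0, 2.0, 2.0))   # identical synsets -> 1.0
```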

RES(s1, s2) = IC(LCS(s1, s2)).

- min score = 0.0
- max score = Infinity
- error score = -1.0
- acceptable pos pairs = [['n', 'n'], ['v', 'v']]
- use all senses = true
- use root node = true

PATH(s1, s2) = 1 / path_length(s1, s2).

- min score = 0.0
- max score = 1.0
- error score = -1.0
- acceptable pos pairs = [['n', 'n'], ['v', 'v']]
- use all senses = true
- use root node = true

LESK(s1, s2) = sum_{s1' in linked(s1), s2' in linked(s2)}(overlap(s1'.definition, s2'.definition)).

- min score = 0.0
- max score = Infinity
- error score = -1.0
- acceptable pos pairs = [['a', 'a'], ['a', 'r'], ['a', 'n'], ['a', 'v'], ['r', 'a'], ['r', 'r'], ['r', 'n'], ['r', 'v'], ['n', 'a'], ['n', 'r'], ['n', 'n'], ['n', 'v'], ['v', 'a'], ['v', 'r'], ['v', 'n'], ['v', 'v']]
- use all senses = true
- use stemmer = false
- use stop words = false
- normalize score = false
- word weighting = false
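The double sum above can be sketched with a naive bag-of-words overlap (real Lesk variants typically weight multi-word phrase overlaps more heavily; the glosses below are invented):

```python
def overlap(def1, def2):
    """Naive gloss overlap: count of shared tokens between two definitions."""
    w1, w2 = def1.lower().split(), def2.lower().split()
    return sum(min(w1.count(w), w2.count(w)) for w in set(w1))

def lesk(defs1, defs2):
    """Sum gloss overlaps over all pairs of linked-synset definitions."""
    return sum(overlap(d1, d2) for d1 in defs1 for d2 in defs2)

print(overlap("a domestic animal", "a wild animal"))   # shared: "a", "animal" -> 2
print(lesk(["a domestic animal"], ["a wild animal", "kept as a pet"]))
```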

The computational cost of HSO is relatively high, since a recursive search is performed over subtrees in the horizontal, upward, and downward directions.

HSO(s1, s2) = const_C - path_length(s1, s2) - const_k * num_of_changes_of_directions(s1, s2)

- min score = 0.0
- max score = 16.0
- error score = -1.0
- acceptable pos pairs = [['a', 'a'], ['a', 'r'], ['a', 'n'], ['a', 'v'], ['r', 'a'], ['r', 'r'], ['r', 'n'], ['r', 'v'], ['n', 'a'], ['n', 'r'], ['n', 'n'], ['n', 'v'], ['v', 'a'], ['v', 'r'], ['v', 'n'], ['v', 'v']]
- use all senses = true
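The medium-strong case of the formula above can be sketched as follows; the constants const_C = 8 and const_k = 1 are the commonly cited choices and are an assumption here, as are the path values:

```python
CONST_C = 8   # assumption: commonly cited value
CONST_K = 1   # assumption: commonly cited value

def hso_medium_strong(path_length, direction_changes):
    """Hirst-St-Onge medium-strong score:
    const_C - path_length - const_k * direction_changes."""
    return CONST_C - path_length - CONST_K * direction_changes

# A short, straight path scores higher than a longer, twistier one.
print(hso_medium_strong(2, 0))   # -> 6
print(hso_medium_strong(4, 2))   # -> 2
```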
