Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
412909 | Neurocomputing | 2010 | 14 Pages |
Self-organizing neural network models have recently been extended to more general data structures, such as sequences or trees. We empirically compare three recursive models of the self-organizing map—SOMSD, MSOM and RecSOM—on three tree data sets of increasing complexity: binary syntactic trees, ternary linguistic propositions and 5-ary graphical data. We evaluate the models in terms of four proposed measures focusing on a unit's memory depth and on the capability to differentiate among the trees, together with two measures introduced earlier, focusing on the statistics of label distribution and the spatiotemporal information encoded in the maps. Concentrating on the models' ability to differentiate among the trees, we find that the models differ in how they balance the effects of leaves and of tree structure in achieving this task. SOMSD turns out to have the highest sensitivity to structural information, which affects the tree clustering. With respect to input differentiation, only RecSOM can assign unique output representations to different inputs in all three data sets. We also argue that MSOM, despite the commutativity (with respect to children) of the operation that calculates the context representation, has the potential to differentiate binary trees with permuted children.
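The commutativity mentioned above can be illustrated with a minimal sketch. The functions `child_merge` and `node_context`, the averaging over children, and the merge parameter `beta` are all assumptions for illustration, not the paper's exact formulation: in MSOM-style models the context descriptor blends a winner's weight and context vectors, and if a node's context is formed by a commutative combination (here, an average) over its children, permuting the children cannot change it.

```python
import numpy as np

def child_merge(weight, context, beta=0.5):
    # Hypothetical MSOM-style merge of a child winner's weight and
    # context vectors into a single descriptor (beta is an assumed
    # merge parameter, not taken from the paper).
    return (1.0 - beta) * weight + beta * context

def node_context(children, beta=0.5):
    # children: list of (weight, context) pairs, one per subtree winner.
    # Averaging is commutative, so the result is order-insensitive.
    merges = [child_merge(w, c, beta) for w, c in children]
    return np.mean(merges, axis=0)

rng = np.random.default_rng(0)
kids = [(rng.random(3), rng.random(3)) for _ in range(2)]

# The context of a node with children (a, b) equals that with (b, a):
ctx_ab = node_context(kids)
ctx_ba = node_context(kids[::-1])
print(np.allclose(ctx_ab, ctx_ba))
```

This order-insensitivity is exactly why one might expect MSOM to conflate binary trees with permuted children; the abstract's point is that, despite it, MSOM retains some potential to tell such trees apart.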