Conference paper

Learning vector quantization for multimodal data


Authors list: Hammer, B; Strickert, M; Villmann, T

Publication year: 2002

Pages: 370-376

Journal: Lecture notes in computer science

Volume number: 2415

ISSN: 0302-9743

ISBN: 3-540-44074-7

DOI Link: https://doi.org/10.1007/3-540-46084-5_60

Conference: 12th International Conference on Artificial Neural Networks (ICANN 2002)

Publisher: Springer


Abstract
Learning vector quantization (LVQ) as proposed by Kohonen is a simple and intuitive, though very successful prototype-based clustering algorithm. Generalized relevance LVQ (GRLVQ) constitutes a modification which obeys the dynamics of a gradient descent and allows an adaptive metric utilizing relevance factors for the input dimensions. As iterative algorithms with local learning rules, LVQ and its modifications crucially depend on the initialization of the prototypes. They often fail for multimodal data. We propose a variant of GRLVQ which introduces ideas of the neural gas algorithm, incorporating a global neighborhood coordination of the prototypes. The resulting learning algorithm, supervised relevance neural gas, is capable of learning highly multimodal data, whereby it shares the benefits of a gradient dynamics and an adaptive metric with GRLVQ.
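To make the abstract's terminology concrete, the following is a minimal sketch of the relevance-weighted distance that GRLVQ-style methods adapt, together with a basic LVQ1-style prototype update. It is an illustration only, not the paper's algorithm: GRLVQ and supervised relevance neural gas perform gradient descent on a cost function involving the closest correct and incorrect prototypes (with a neural-gas neighborhood ranking), whereas this sketch uses a fixed relevance vector and the simpler winner-take-all LVQ1 rule. All function names are hypothetical.

```python
import numpy as np

def weighted_sq_dist(x, w, lam):
    """Relevance-weighted squared distance: sum_i lam_i * (x_i - w_i)^2."""
    return float(np.sum(lam * (x - w) ** 2))

def lvq1_step(x, y, prototypes, labels, lam, lr=0.1):
    """One LVQ1-style update with a fixed relevance vector `lam`.

    The winning (closest) prototype is moved toward x if its label
    matches y, and pushed away otherwise. Returns the winner's index.
    """
    dists = [weighted_sq_dist(x, w, lam) for w in prototypes]
    j = int(np.argmin(dists))
    sign = 1.0 if labels[j] == y else -1.0
    prototypes[j] += sign * lr * lam * (x - prototypes[j])
    return j
```

In GRLVQ the vector `lam` is itself adapted by gradient descent, so irrelevant input dimensions receive small relevance factors; the neural-gas extension additionally updates all prototypes of the correct class with neighborhood-rank-dependent strength, which is what makes the method robust to initialization on multimodal data.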




Citation Styles

Harvard Citation style: Hammer, B., Strickert, M. and Villmann, T. (2002) Learning vector quantization for multimodal data, Lecture notes in computer science, 2415, pp. 370-376. https://doi.org/10.1007/3-540-46084-5_60

APA Citation style: Hammer, B., Strickert, M., & Villmann, T. (2002). Learning vector quantization for multimodal data. Lecture notes in computer science, 2415, 370-376. https://doi.org/10.1007/3-540-46084-5_60


Last updated on 2025-06-06 at 11:37