Conference paper

Learning vector quantization for multimodal data


Authors: Hammer, B.; Strickert, M.; Villmann, T.

Year of publication: 2002

Pages: 370-376

Journal: Lecture Notes in Computer Science

Volume: 2415

ISSN: 0302-9743

ISBN: 3-540-44074-7

DOI: https://doi.org/10.1007/3-540-46084-5_60

Conference: 12th International Conference on Artificial Neural Networks (ICANN 2002)

Publisher: Springer


Abstract
Learning vector quantization (LVQ) as proposed by Kohonen is a simple and intuitive, though very successful, prototype-based clustering algorithm. Generalized relevance LVQ (GRLVQ) constitutes a modification which obeys the dynamics of a gradient descent and allows an adaptive metric utilizing relevance factors for the input dimensions. As iterative algorithms with local learning rules, LVQ and its modifications crucially depend on the initialization of the prototypes. They often fail for multimodal data. We propose a variant of GRLVQ which introduces ideas of the neural gas algorithm, incorporating a global neighborhood coordination of the prototypes. The resulting learning algorithm, supervised relevance neural gas, is capable of learning highly multimodal data, whereby it shares the benefits of a gradient dynamics and an adaptive metric with GRLVQ.
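The adaptive metric mentioned in the abstract weights each input dimension by a relevance factor. A minimal illustrative sketch of one GRLVQ-style update step is given below; the function name, learning rates, and the LVQ2.1-like attract/repel rule are assumptions for illustration, not the paper's exact update rule.

```python
import numpy as np

def grlvq_step(x, y, prototypes, labels, lam, lr_w=0.1, lr_l=0.01):
    """One GRLVQ-style update (illustrative sketch, not the paper's exact rule).

    Uses the relevance-weighted squared distance
        d_lambda(x, w) = sum_i lam_i * (x_i - w_i)**2
    and moves the closest correct prototype toward x while pushing the
    closest wrong prototype away from x.
    """
    # relevance-weighted squared distances to all prototypes
    d = ((prototypes - x) ** 2 * lam).sum(axis=1)
    correct = labels == y
    j_plus = np.where(correct)[0][np.argmin(d[correct])]     # closest correct prototype
    j_minus = np.where(~correct)[0][np.argmin(d[~correct])]  # closest wrong prototype

    # attract the correct prototype, repel the wrong one
    prototypes[j_plus] += lr_w * lam * (x - prototypes[j_plus])
    prototypes[j_minus] -= lr_w * lam * (x - prototypes[j_minus])

    # adapt relevances: decrease weight on dimensions where the correct
    # prototype is far and the wrong one is near, then renormalize
    lam -= lr_l * ((x - prototypes[j_plus]) ** 2 - (x - prototypes[j_minus]) ** 2)
    lam = np.clip(lam, 0.0, None)
    lam /= lam.sum()
    return prototypes, lam
```

The supervised relevance neural gas variant proposed in the paper additionally ranks all correct prototypes by distance and updates each with a neighborhood-decayed strength, which removes the dependence on prototype initialization for multimodal data.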







Citation styles

Harvard style: Hammer, B., Strickert, M. and Villmann, T. (2002) Learning vector quantization for multimodal data, Lecture Notes in Computer Science, 2415, pp. 370-376. https://doi.org/10.1007/3-540-46084-5_60

APA style: Hammer, B., Strickert, M., & Villmann, T. (2002). Learning vector quantization for multimodal data. Lecture Notes in Computer Science, 2415, 370-376. https://doi.org/10.1007/3-540-46084-5_60


Last updated 2025-06-06 at 11:37