Conference paper

Relevance LVQ versus SVM


Authors: Hammer, B; Strickert, M; Villmann, T

Year of publication: 2004

Pages: 592-597

Journal: Lecture Notes in Computer Science

Volume: 3070

ISSN: 0302-9743

ISBN: 3-540-22123-9

DOI: https://doi.org/10.1007/978-3-540-24844-6_89

Conference: 7th International Conference on Artificial Intelligence and Soft Computing

Publisher: Springer


Abstract
The support vector machine (SVM) is one of the most successful current learning algorithms, with excellent classification accuracy on large real-life problems and a strong theoretical background. However, an SVM solution is given by a non-intuitive classification in terms of extreme values of the training set, and the size of an SVM classifier scales with the number of training data. Generalized relevance learning vector quantization (GRLVQ) has recently been introduced as a simple though powerful extension of basic LVQ. Unlike the SVM, it provides a very intuitive classification in terms of prototypical vectors, the number of which is independent of the size of the training set. Here, we discuss GRLVQ in comparison to the SVM and point out beneficial theoretical properties which are similar to those of the SVM, while GRLVQ provides sparse and intuitive solutions. In addition, the competitive performance of GRLVQ is demonstrated in an experiment from computational biology.
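Note (not part of the paper): the abstract's central contrast is that GRLVQ classifies by the nearest prototype under a relevance-weighted distance, whereas an SVM decision depends on support vectors drawn from the training set. The following minimal Python sketch illustrates only the GRLVQ decision rule; the function name, toy data, and relevance values are illustrative assumptions, not code from the authors.

    import numpy as np

    def grlvq_classify(x, prototypes, labels, relevances):
        # Relevance-weighted squared Euclidean distance to each prototype:
        #   d_lambda(x, w) = sum_i lambda_i * (x_i - w_i)^2,
        # with lambda_i >= 0 and sum_i lambda_i = 1 (learned during GRLVQ
        # training; assumed fixed in this sketch).
        dists = np.sum(relevances * (prototypes - x) ** 2, axis=1)
        # Winner-takes-all: the sample receives the label of the closest prototype.
        return labels[np.argmin(dists)]

    # Toy usage (hypothetical values): two prototypes in 2-D; the relevance
    # profile marks the second feature as uninformative.
    prototypes = np.array([[0.0, 0.0], [1.0, 5.0]])
    labels = np.array([0, 1])
    relevances = np.array([1.0, 0.0])
    print(grlvq_classify(np.array([0.9, 0.1]), prototypes, labels, relevances))  # prints 1

Because classification depends only on the fixed set of prototypes and the relevance profile, the classifier's size is independent of the number of training samples, which is the sparsity argument made in the abstract.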




Citation styles

Harvard style: Hammer, B., Strickert, M. and Villmann, T. (2004) 'Relevance LVQ versus SVM', Lecture Notes in Computer Science, 3070, pp. 592-597. https://doi.org/10.1007/978-3-540-24844-6_89

APA style: Hammer, B., Strickert, M., & Villmann, T. (2004). Relevance LVQ versus SVM. Lecture Notes in Computer Science, 3070, 592-597. https://doi.org/10.1007/978-3-540-24844-6_89


Last updated 2025-06-06 at 12:05