Conference paper
Author list: Hammer, B; Strickert, M; Villmann, T
Year of publication: 2004
Pages: 592-597
Journal: Lecture Notes in Computer Science
Volume: 3070
ISSN: 0302-9743
ISBN: 3-540-22123-9
DOI link: https://doi.org/10.1007/978-3-540-24844-6_89
Conference: 7th International Conference on Artificial Intelligence and Soft Computing
Publisher: Springer
Abstract:
The support vector machine (SVM) constitutes one of the most successful current learning algorithms, with excellent classification accuracy on large real-life problems and a strong theoretical background. However, an SVM solution is given by a non-intuitive classification in terms of extreme values of the training set, and the size of an SVM classifier scales with the number of training data. Generalized relevance learning vector quantization (GRLVQ) has recently been introduced as a simple yet powerful expansion of basic LVQ. Unlike the SVM, it provides a very intuitive classification in terms of prototypical vectors, the number of which is independent of the size of the training set. Here, we discuss GRLVQ in comparison to the SVM and point out its beneficial theoretical properties, which are similar to those of the SVM, while providing sparse and intuitive solutions. In addition, the competitive performance of GRLVQ is demonstrated in an experiment from computational biology.
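To illustrate the prototype-based classification the abstract describes, the following is a minimal Python sketch of the GRLVQ decision rule at inference time: an input is assigned the label of the nearest prototype under a relevance-weighted squared distance. The function and variable names are illustrative, not taken from the paper, and the prototypes and relevance weights are assumed to come from an already trained model.

```python
import numpy as np

def grlvq_classify(x, prototypes, labels, relevances):
    """Assign x the label of the nearest prototype under the
    relevance-weighted squared distance
        d_lambda(x, w) = sum_i lambda_i * (x_i - w_i)**2.
    prototypes, labels, relevances are assumed to come from a
    trained GRLVQ model (hypothetical names, for illustration only).
    """
    dists = np.sum(relevances * (prototypes - x) ** 2, axis=1)
    return labels[np.argmin(dists)]

# Toy usage: two prototypes; the second feature is deemed irrelevant,
# so only the first feature influences the decision.
prototypes = np.array([[0.0, 0.0], [1.0, 5.0]])
labels = np.array([0, 1])
relevances = np.array([1.0, 0.0])  # nonnegative relevance weights, summing to 1
print(grlvq_classify(np.array([0.9, 0.1]), prototypes, labels, relevances))  # -> 1
```

Note that the size of this classifier is fixed by the number of prototypes, independent of the training set size, which is the contrast with the SVM that the abstract draws.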
Citation styles
Harvard citation style: Hammer, B., Strickert, M. and Villmann, T. (2004) 'Relevance LVQ versus SVM', Lecture Notes in Computer Science, 3070, pp. 592-597. https://doi.org/10.1007/978-3-540-24844-6_89
APA citation style: Hammer, B., Strickert, M., & Villmann, T. (2004). Relevance LVQ versus SVM. Lecture Notes in Computer Science, 3070, 592-597. https://doi.org/10.1007/978-3-540-24844-6_89