Core Set Sequential Feed-Forward Neural Networks

International Journal of Recent Engineering Science (IJRES)
  
© 2015 by IJRES Journal
Volume 2, Issue 5
Year of Publication: 2015
Authors: Shuxia Lu, Chenxu Zhu, Yangfan Zhou
DOI: 10.14445/23497157/IJRES-V2I5P104

How to Cite?

Shuxia Lu, Chenxu Zhu, Yangfan Zhou, "Core Set Sequential Feed-Forward Neural Networks," International Journal of Recent Engineering Science, vol. 2, no. 5, pp. 22-26, 2015. Crossref, https://doi.org/10.14445/23497157/IJRES-V2I5P104

Abstract
A core set sequential feed-forward neural networks (CS-SFFN) approach is proposed to address the classification of large datasets. In the first stage, the core set is obtained efficiently using the generalized core vector machine (GCVM) algorithm. In the second stage, sequential feed-forward neural networks (SFFN) are used to perform the classification. A strategy proposed within CS-SFFN is to take the hidden-layer input weights from a subset of the core set (the input strategy). Experiments show that CS-SFFN achieves performance comparable to SV-SFFN and EM-ELM.
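The two-stage procedure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: greedy farthest-point sampling stands in for GCVM's minimum-enclosing-ball core-set construction, and the function names (`select_core_set`, `train_cs_sffn`), the tanh activation, and the least-squares output-weight solve are assumptions in the style of ELM training.

```python
import numpy as np

def select_core_set(X, n_core, rng):
    # Stage 1 (surrogate): greedy farthest-point selection as a simplified
    # stand-in for GCVM's core-set construction. Each step adds the sample
    # farthest from the points already chosen.
    idx = [int(rng.integers(len(X)))]
    for _ in range(n_core - 1):
        dist = np.min(np.linalg.norm(X[:, None] - X[idx], axis=2), axis=1)
        idx.append(int(np.argmax(dist)))
    return np.array(idx)

def train_cs_sffn(X, y, n_core=20, seed=0):
    # Stage 2 sketch: hidden-layer input weights are core-set samples
    # (the "input strategy"); output weights are fit by least squares.
    rng = np.random.default_rng(seed)
    core = X[select_core_set(X, n_core, rng)]
    H = np.tanh(X @ core.T)                      # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return core, beta

def predict(core, beta, X):
    return np.tanh(X @ core.T) @ beta
```

On a two-class problem, `np.sign(predict(core, beta, X))` would give the predicted labels; the point of the sketch is only that the hidden nodes are anchored to core-set samples rather than drawn at random.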

Keywords
Core vector machine; Core set; Error minimized extreme learning machine; Support vector sequential feed-forward neural networks; Sequential approximations

Reference
[1] G. B. Huang, Q. Y. Zhu, C. K. Siew, Extreme learning machine: theory and applications, Neurocomputing, 70, 2006, 489–501.
[2] R. Zhang, Y. Lan, G. B. Huang, Z. B. Xu, Universal approximation of extreme learning machine with adaptive growth of hidden nodes, IEEE Transactions on Neural Networks and Learning Systems, 23(2), 2012, 365–371.
[3] C. Panagiotakopoulos, P. Tsampouka, The Margitron: a generalized perceptron with margin, IEEE Transactions on Neural Networks, 22(3), 2011, 395–407.
[4] L. Yujian, L. Bo, Y. Xinwu, F. Yaozong, L. Houjun, Multiconlitron: a general piecewise linear classifier, IEEE Transactions on Neural Networks, 22(2), 2011, 276–289.
[5] X. Peng, Y. Wang, Geometric algorithms to large margin classifier based on affine hulls, IEEE Transactions on Neural Networks, 23(2), 2012, 236–246.
[6] J. Platt, Fast training of support vector machines using sequential minimal optimization, in B. Schölkopf, C. Burges, A. Smola (Eds.), Advances in Kernel Methods – Support Vector Learning, MIT Press, Cambridge, MA, 1999, 185–208.
[7] D. Achlioptas, F. McSherry, B. Schölkopf, Sampling techniques for kernel methods, Advances in Neural Information Processing Systems, 14, 2002, 335–342.
[8] I. W. Tsang, J. T. Kwok, P. M. Cheung, Core vector machines: fast SVM training on very large data sets, Journal of Machine Learning Research, 6, 2005, 363–392.
[9] I. W. Tsang, J. T. Kwok, J. M. Zurada, Generalized core vector machines, IEEE Transactions on Neural Networks, 17(5), 2006, 1126–1140.
[10] W. J. Hu, F. L. Chung, S. T. Wang, The maximum vector angular margin classifier and its fast training on large datasets using a core vector machine, Neural Networks, 27, 2012, 60–73.
[11] G. Feng, G. B. Huang, Q. Lin, R. Gay, Error minimized extreme learning machine with growth of hidden nodes and incremental learning, IEEE Transactions on Neural Networks, 20, 2009, 1352–1357.
[12] E. Romero, D. Toppo, Comparing support vector machines and feedforward neural networks with similar hidden-layer weights, IEEE Transactions on Neural Networks, 18, 2007, 959–963.
[13] L. Guo, J. H. Hao, M. Liu, An incremental extreme learning machine for online sequential learning problems, Neurocomputing, 128, 2014, 50–58.
[14] A. Frank, A. Asuncion, UCI Machine Learning Repository, 2010. http://archive.ics.uci.edu/ml.