A straightforward implementation of a GPU-accelerated ELM in R with NVIDIA graphic cards

  1. Alia-Martinez, M. 1
  2. Antonanzas, J. 1
  3. Antonanzas-Torres, F. 1
  4. Pernía-Espinoza, A. 1
  5. Urraca, R. 1
  1. Universidad de La Rioja, Logroño, Spain (ROR: https://ror.org/0553yr311)

Journal:
Lecture Notes in Computer Science

ISSN: 0302-9743

Year of publication: 2015

Volume: 9121

Pages: 656-667

Type: Article

DOI: 10.1007/978-3-319-19644-2_54
Scopus: 2-s2.0-84958529316
WoS: WOS:000363689900054


Abstract

General-purpose computing on graphics processing units (GPGPU) is a promising technique for coping with today's computational challenges, given the suitability of GPUs for parallel processing. Several libraries and functions have been released to promote the use of GPUs in real-world problems. However, many of these packages require deep knowledge of GPU architecture and low-level programming, so end users have trouble exploiting the advantages of GPGPU. In this paper, we focus on GPU acceleration of a prediction technique especially suited to big datasets: the extreme learning machine (ELM). The aim of this study is to develop a user-friendly library in the open-source R language and to release the code at https://github.com/maaliam/EDMANS-elmNN-GPU.git, so that R users can freely use it with the only requirement of an NVIDIA graphics card. The most computationally demanding operations were identified through a sensitivity analysis. As a result, only matrix multiplications were executed on the GPU, since they account for around 99% of total execution time. A speedup of up to 15 times was obtained with this GPU-accelerated ELM in the most computationally expensive scenarios. Moreover, the applicability of the GPU-accelerated ELM was also tested in a typical model-selection case, in which genetic algorithms were used to fine-tune an ELM and thousands of models must be trained. In this case, a speedup of 6 times was still obtained. © Springer International Publishing Switzerland 2015.
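
As an illustration of the approach described in the abstract, the following is a minimal sketch of an ELM whose hidden-layer matrix multiplications are offloaded to the GPU. It assumes the gputools R package (whose gpuMatMult provides a CUBLAS-backed matrix product on NVIDIA cards) as the GPU backend; the function names elm_train_gpu and elm_predict_gpu are illustrative and are not the API of the released EDMANS-elmNN-GPU library.

# Minimal ELM sketch with GPU-offloaded matrix multiplications.
# Assumes the 'gputools' package and an NVIDIA card; the function
# names below are illustrative, not the published EDMANS-elmNN-GPU API.
library(gputools)
library(MASS)    # for ginv(), the Moore-Penrose pseudoinverse

elm_train_gpu <- function(X, Y, n_hidden = 1000) {
  n_features <- ncol(X)
  # Random input weights and biases (the ELM does not tune these)
  W <- matrix(rnorm(n_features * n_hidden), n_features, n_hidden)
  b <- runif(n_hidden, -1, 1)
  # Hidden-layer activations: the X %*% W product is the costly step,
  # so it is computed on the GPU via gpuMatMult()
  H <- 1 / (1 + exp(-(gpuMatMult(X, W) +
                      matrix(b, nrow(X), n_hidden, byrow = TRUE))))
  # Output weights via the pseudoinverse of H (kept on the CPU here)
  beta <- ginv(H) %*% Y
  list(W = W, b = b, beta = beta)
}

elm_predict_gpu <- function(model, X_new) {
  H <- 1 / (1 + exp(-(gpuMatMult(X_new, model$W) +
                      matrix(model$b, nrow(X_new), length(model$b), byrow = TRUE))))
  gpuMatMult(H, model$beta)
}

# Usage: fit on a synthetic regression problem
X <- matrix(rnorm(5000 * 20), 5000, 20)
Y <- matrix(X %*% rnorm(20) + rnorm(5000), ncol = 1)
model <- elm_train_gpu(X, Y, n_hidden = 500)
pred  <- elm_predict_gpu(model, X)

This sketch mirrors the design choice reported in the abstract: only the dense matrix products are sent to the GPU, while the remaining steps (random weight generation, activation, pseudoinverse) stay on the CPU, since the multiplications dominate the execution time for large hidden layers.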