A GPU implementation for LBG and SOM training

Yi Xiao, Chi Sing Leung, Tze-Yui Ho, Ping-Man Lam

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Vector quantization (VQ) is an effective technique applicable in a wide range of areas, such as image compression and pattern recognition. The most time-consuming procedure of VQ is codebook training, and two of the most frequently used training algorithms are LBG and the self-organizing map (SOM). Nowadays, desktop computers are usually equipped with programmable graphics processing units (GPUs), whose parallel data-processing ability is ideal for accelerating codebook training. Although some GPU algorithms for LBG training exist, their implementations suffer from a large amount of data transfer between CPU and GPU and a large number of rendering passes within each training iteration. This paper presents a novel GPU-based implementation for LBG and SOM training. More specifically, we utilize the random write ability of the vertex shader to reduce the overheads mentioned above. Our experimental results show that our approach can run four times faster than the previous approach. © 2010 Springer-Verlag London Limited.
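To make the codebook-training step concrete, the following is a minimal CPU reference sketch of the LBG (generalized Lloyd) iteration the abstract refers to: each training vector is assigned to its nearest codeword, and each codeword is then moved to the centroid of its assigned vectors. This is an illustrative NumPy sketch, not the authors' GPU implementation; the function name `lbg_train` and its parameters are assumptions.

```python
import numpy as np

def lbg_train(data, codebook_size, iters=20, seed=0):
    """Train a VQ codebook with the LBG (generalized Lloyd) iteration.

    data: (n, d) array of training vectors.
    Returns the trained codebook and the final assignment labels.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    idx = rng.choice(len(data), codebook_size, replace=False)
    codebook = data[idx].astype(float)
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest codeword under squared Euclidean distance.
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each codeword to the centroid of its cell;
        # leave a codeword unchanged if its cell is empty.
        for k in range(codebook_size):
            members = data[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels
```

The GPU version described in the paper parallelizes exactly these two steps; the assignment step maps naturally onto per-vector parallel work, while the centroid update is where the vertex shader's random-write (scatter) ability avoids extra rendering passes and CPU-GPU transfers.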
Original language: English
Pages (from-to): 1035-1042
Journal: Neural Computing and Applications
Volume: 20
Issue number: 7
DOIs
Publication status: Published - Oct 2011

Research Keywords

  • Graphics processing units
  • LBG, SOM
  • Vector quantization
