Samragh2017

|author=Mohammad Samragh and Mohsen Imani and Farinaz Koushanfar and Tajana Rosing
|abstract=<p>Neural networks are machine learning models that have been successfully used in many applications. Due to the high computational complexity of neural networks, deploying such models on embedded devices with severe power/resource constraints is troublesome. Neural networks are inherently approximate and can be simplified. We propose LookNN, a methodology to replace floating-point multiplications with look-up table search. First, we devise an algorithmic solution to adapt conventional neural networks to LookNN such that the model's accuracy is minimally affected. We provide experimental results and theoretical analysis demonstrating the applicability of the method. Next, we design enhanced general-purpose processors for searching look-up tables: each processing element of our GPU has access to a small associative memory, enabling it to bypass redundant computations. Our evaluations on the AMD Southern Islands GPU architecture show that LookNN results in 2.2-fold energy saving and 2.5-fold speedup running four different neural network applications with zero additive error. For the same four applications, if we tolerate an additive error of less than 0.2%, LookNN can achieve an average of 3-fold energy improvement and 2.6-fold speedup compared to the traditional GPU architecture.</p>
|year=2017
|booktitle=Design Automation and Test in Europe (DATE)
|title=LookNN: Neural Network with No Multiplication
|entry=inproceedings
|date=2017-01-01
}}
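The abstract's core idea — trading floating-point multiplies for searches in a precomputed product table over quantized operands — can be sketched as follows. This is a toy illustration under an assumed uniform 16-level quantization scheme; the level count, quantization method, and all function names are illustrative assumptions, not the paper's actual algorithm or its GPU associative-memory implementation:

```python
import numpy as np

# Assumed setup: weights and activations share one small set of
# quantization levels, so every possible product can be precomputed once.
def build_table(levels):
    # table[i, j] = levels[i] * levels[j] -- computed once, reused everywhere
    return np.outer(levels, levels)

def quantize(x, levels):
    # Map each value to the index of its nearest quantization level.
    return np.abs(x[:, None] - levels[None, :]).argmin(axis=1)

levels = np.linspace(-1.0, 1.0, 16)   # 16 levels (an illustrative choice)
table = build_table(levels)           # 16x16 product look-up table

w = np.array([0.3, -0.7, 0.5, 0.9])   # toy weight vector
a = np.array([-0.2, 0.4, 0.8, -0.6])  # toy activation vector

# Dot product via table lookups: no floating-point multiply per element,
# only index lookups and additions.
idx_w, idx_a = quantize(w, levels), quantize(a, levels)
approx = table[idx_w, idx_a].sum()
exact = float(w @ a)
```

With coarse quantization the lookup result only approximates the exact dot product; the paper's "zero additive error" and "less than 0.2%" regimes correspond to how aggressively the table is shrunk relative to the network's tolerance.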




Email: farinaz@ucsd.edu

Address:
Electrical & Computer Engineering
University of California, San Diego
9500 Gilman Drive, MC 0407
Jacobs Hall, Room 6401
La Jolla, CA 92093-0407

Lab Location: EBU1-2514
University of California San Diego
9500 Gilman Dr, La Jolla, CA 92093