Difference between revisions of "Mirhoseini2016perform"

From ACES

|keywords=Machine Learning
|abstract=<p>We propose Perform-ML, the first Machine Learning (ML) framework for analysis of massive and dense data that customizes the algorithm to the underlying platform to achieve optimized resource efficiency. Perform-ML creates a performance model quantifying the computational cost of iterative analysis algorithms on a pertinent platform in terms of FLOPs, communication, and memory, which characterize runtime, storage, and energy. The core of Perform-ML is a novel parametric data projection algorithm, called Elastic Dictionary (ExD), that enables versatile and sparse representations of the data that help minimize performance cost. We show that Perform-ML can achieve the optimal performance objective, according to our cost model, by platform-aware tuning of the ExD parameters. An accompanying API ensures automated applicability of Perform-ML to various algorithms, datasets, and platforms. Proof-of-concept evaluations of massive and dense data on different platforms demonstrate more than an order-of-magnitude improvement in performance compared to the state-of-the-art, within guaranteed user-defined error bounds.</p>
|month=6
|year=2016
|booktitle=Design Automation Conference (DAC)
|title=Perform-ML: Performance Optimized Machine Learning by Platform and Content Aware Customization
|entry=inproceedings
|date=2016-06-01
}}
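
The abstract describes a platform-aware cost model (FLOPs, communication, memory) and the tuning of ExD projection parameters against a user-defined error bound. The following is a minimal, hypothetical sketch of that idea only; the function names, weights, and formulas are illustrative assumptions and are not taken from the paper or its API.

<syntaxhighlight lang="python">
# Hypothetical sketch of a platform-aware cost model in the spirit of the
# abstract. None of these names, weights, or formulas come from the paper.

def projection_cost(n_samples, n_features, dict_size, sparsity, platform):
    """Rough cost of projecting an n_samples x n_features dataset onto a
    dictionary of dict_size atoms, with 'sparsity' the fraction of nonzero
    coefficients. Returns a single weighted 'performance cost' scalar."""
    flops = 2.0 * n_samples * n_features * dict_size                      # projection work
    memory = n_samples * dict_size * sparsity + n_features * dict_size    # coefficients + dictionary
    communication = n_samples * dict_size * sparsity                      # data moved between nodes
    return (platform["flop_weight"] * flops
            + platform["mem_weight"] * memory
            + platform["comm_weight"] * communication)


def tune_parameters(n_samples, n_features, candidates, platform, error_of, error_bound):
    """Pick the (dict_size, sparsity) pair with the lowest modeled cost whose
    approximation error stays within the user-defined bound."""
    feasible = [c for c in candidates if error_of(*c) <= error_bound]
    return min(feasible,
               key=lambda c: projection_cost(n_samples, n_features, c[0], c[1], platform))


if __name__ == "__main__":
    # Toy platform profile and a toy error model that favors larger, denser dictionaries.
    platform = {"flop_weight": 1e-9, "mem_weight": 1e-6, "comm_weight": 1e-7}
    candidates = [(64, 0.05), (128, 0.10), (256, 0.20)]
    error_of = lambda dict_size, sparsity: 1.0 / (dict_size * sparsity)
    print(tune_parameters(10**6, 10**3, candidates, platform, error_of, error_bound=0.2))
</syntaxhighlight>

In this toy run the smallest dictionary violates the error bound and the largest one costs the most FLOPs, so the middle setting is selected; the actual Perform-ML framework derives its cost terms and tuning procedure from the paper's own model.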

Revision as of 04:43, 4 September 2021

Mirhoseini2016perform
entry: inproceedings
author: Azalia Mirhoseini and Bita Darvish Rouhani and Ebrahim M. Songhori and Farinaz Koushanfar
title: Perform-ML: Performance Optimized Machine Learning by Platform and Content Aware Customization
booktitle: Design Automation Conference (DAC)
month: 6
year: 2016
url: dl.acm.org/citation.cfm?id=2898060

