Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 03764nam a22005415i 4500
001 | | | 978-3-031-01766-7
003 | | | DE-He213
005 | | | 20240730163717.0
007 | | | cr nn 008mamaa
008 | | | 220601s2020 sz \| s \|\|\|\| 0\|eng d
020 | | | _a9783031017667 _9978-3-031-01766-7
024 | 7 | | _a10.1007/978-3-031-01766-7 _2doi
050 | | 4 | _aTK7867-7867.5
072 | | 7 | _aTJFC _2bicssc
072 | | 7 | _aTEC008010 _2bisacsh
072 | | 7 | _aTJFC _2thema
082 | 0 | 4 | _a621.3815 _223
100 | 1 | | _aSze, Vivienne. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut _980111
245 | 1 | 0 | _aEfficient Processing of Deep Neural Networks _h[electronic resource] / _cby Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel S. Emer.
250 | | | _a1st ed. 2020.
264 | | 1 | _aCham : _bSpringer International Publishing : _bImprint: Springer, _c2020.
300 | | | _aXXI, 254 p. _bonline resource.
336 | | | _atext _btxt _2rdacontent
337 | | | _acomputer _bc _2rdamedia
338 | | | _aonline resource _bcr _2rdacarrier
347 | | | _atext file _bPDF _2rda
490 | 1 | | _aSynthesis Lectures on Computer Architecture, _x1935-3243
505 | 0 | | _aPreface -- Acknowledgments -- Introduction -- Overview of Deep Neural Networks -- Key Metrics and Design Objectives -- Kernel Computation -- Designing DNN Accelerators -- Operation Mapping on Specialized Hardware -- Reducing Precision -- Exploiting Sparsity -- Designing Efficient DNN Models -- Advanced Technologies -- Conclusion -- Bibliography -- Authors' Biographies.
520 | | | _aThis book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics, such as energy efficiency, throughput, and latency, without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas.
650 | | 0 | _aElectronic circuits. _919581
650 | | 0 | _aMicroprocessors. _980112
650 | | 0 | _aComputer architecture. _93513
650 | 1 | 4 | _aElectronic Circuits and Systems. _980113
650 | 2 | 4 | _aProcessor Architectures. _980114
700 | 1 | | _aChen, Yu-Hsin. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut _980115
700 | 1 | | _aYang, Tien-Ju. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut _980116
700 | 1 | | _aEmer, Joel S. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut _980117
710 | 2 | | _aSpringerLink (Online service) _980118
773 | 0 | | _tSpringer Nature eBook
776 | 0 | 8 | _iPrinted edition: _z9783031000638
776 | 0 | 8 | _iPrinted edition: _z9783031006388
776 | 0 | 8 | _iPrinted edition: _z9783031028946
830 | | 0 | _aSynthesis Lectures on Computer Architecture, _x1935-3243 _980119
856 | 4 | 0 | _uhttps://doi.org/10.1007/978-3-031-01766-7
912 | | | _aZDB-2-SXSC
942 | | | _cEBK
999 | | | _c84901 _d84901
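
The table above is a MARC 21 bibliographic record: tags, indicators, and `_x`-prefixed subfields. Below is a minimal sketch of reading such a record programmatically, assuming it has been exported from the catalogue as binary MARC and that the `pymarc` library is available; the file name `record.mrc` is hypothetical, and the tags queried mirror the ones shown above (020 $a, 024 $a, 245 $a, 100/700 $a, 650 $a).

```python
# Sketch only: parse the record above with pymarc, assuming a binary
# MARC 21 export. "record.mrc" is a hypothetical file name.
from pymarc import MARCReader

with open("record.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # ISBN(s) from 020 $a and DOI(s) from 024 $a (the $2 reads "doi")
        isbns = [f["a"] for f in record.get_fields("020")]
        dois = [f["a"] for f in record.get_fields("024")]
        # Title proper from 245 $a
        title = record.get_fields("245")[0]["a"]
        # Main entry (100 $a) plus added author entries (700 $a)
        authors = [f["a"] for f in record.get_fields("100", "700")]
        # Topical subjects: 650 fields whose second indicator is 0 (LCSH);
        # indicator access may differ slightly across pymarc releases
        subjects = [f["a"] for f in record.get_fields("650")
                    if f.indicators[1] == "0"]
        print(title)
        print("Authors:", "; ".join(authors))
        print("ISBNs:", isbns, "DOIs:", dois)
        print("Subjects:", subjects)
```

Accessor names and indicator handling vary a little between pymarc releases, so treat the snippet as illustrative rather than version-exact.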