Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 03960nam a22005655i 4500
001 | | | 978-3-319-04561-0
003 | | | DE-He213
005 | | | 20200421112228.0
007 | | | cr nn 008mamaa
008 | | | 140125s2014 gw \| s \|\|\|\| 0\|eng d
020 | | | _a9783319045610 _9978-3-319-04561-0
024 | 7 | | _a10.1007/978-3-319-04561-0 _2doi
050 | | 4 | _aTA1637-1638
050 | | 4 | _aTA1634
072 | | 7 | _aUYT _2bicssc
072 | | 7 | _aUYQV _2bicssc
072 | | 7 | _aCOM012000 _2bisacsh
072 | | 7 | _aCOM016000 _2bisacsh
082 | 0 | 4 | _a006.6 _223
082 | 0 | 4 | _a006.37 _223
100 | 1 | | _aWang, Jiang. _eauthor.
245 | 1 | 0 | _aHuman Action Recognition with Depth Cameras _h[electronic resource] / _cby Jiang Wang, Zicheng Liu, Ying Wu.
264 | | 1 | _aCham : _bSpringer International Publishing : _bImprint: Springer, _c2014.
300 | | | _aVIII, 59 p. 32 illus., 9 illus. in color. _bonline resource.
336 | | | _atext _btxt _2rdacontent
337 | | | _acomputer _bc _2rdamedia
338 | | | _aonline resource _bcr _2rdacarrier
347 | | | _atext file _bPDF _2rda
490 | 1 | | _aSpringerBriefs in Computer Science, _x2191-5768
505 | 0 | | _aIntroduction -- Learning Actionlet Ensemble for 3D Human Action Recognition -- Random Occupancy Patterns -- Conclusion.
520 | | | _aAction recognition is an enabling technology for many real world applications, such as human-computer interaction, surveillance, video retrieval, retirement home monitoring, and robotics. In the past decade, it has attracted a great amount of interest in the research community. Recently, the commoditization of depth sensors has generated much excitement in action recognition from depth sensors. New depth sensor technology has enabled many applications that were not feasible before. On one hand, action recognition becomes far easier with depth sensors. On the other hand, the drive to recognize more complex actions presents new challenges. One crucial aspect of action recognition is to extract discriminative features. The depth maps have completely different characteristics from the RGB images. Directly applying features designed for RGB images does not work. Complex actions usually involve complicated temporal structures, human-object interactions, and person-person contacts. New machine learning algorithms need to be developed to learn these complex structures. This work enables the reader to quickly familiarize themselves with the latest research in depth-sensor based action recognition, and to gain a deeper understanding of recently developed techniques. It will be of great use for both researchers and practitioners who are interested in human action recognition with depth sensors. The text focuses on feature representation and machine learning algorithms for action recognition from depth sensors. After presenting a comprehensive overview of the state of the art in action recognition from depth data, the authors then provide in-depth descriptions of their recently developed feature representations and machine learning techniques, including lower-level depth and skeleton features, higher-level representations to model the temporal structure and human-object interactions, and feature selection techniques for occlusion handling.
650 | | 0 | _aComputer science.
650 | | 0 | _aUser interfaces (Computer systems).
650 | | 0 | _aImage processing.
650 | | 0 | _aBiometrics (Biology).
650 | 1 | 4 | _aComputer Science. |
650 | 2 | 4 | _aImage Processing and Computer Vision. |
650 | 2 | 4 | _aBiometrics. |
650 | 2 | 4 | _aUser Interfaces and Human Computer Interaction. |
700 | 1 | | _aLiu, Zicheng. _eauthor.
700 | 1 | | _aWu, Ying. _eauthor.
710 | 2 | | _aSpringerLink (Online service)
773 | 0 | | _tSpringer eBooks
776 | 0 | 8 | _iPrinted edition: _z9783319045603
830 | | 0 | _aSpringerBriefs in Computer Science, _x2191-5768
856 | 4 | 0 | _uhttp://dx.doi.org/10.1007/978-3-319-04561-0
912 | | | _aZDB-2-SCS
942 | | | _cEBK
999 | | | _c57840 _d57840