000 06085nam a22006735i 4500
001 978-3-642-15998-5
003 DE-He213
005 20240730202054.0
007 cr nn 008mamaa
008 100910s2010 gw | s |||| 0|eng d
020 _a9783642159985
_9978-3-642-15998-5
024 7 _a10.1007/978-3-642-15998-5
_2doi
050 4 _aQA76.9.N38
072 7 _aUYQL
_2bicssc
072 7 _aCOM073000
_2bisacsh
072 7 _aUYQL
_2thema
082 0 4 _a006.35
_223
245 1 0 _aMultilingual and Multimodal Information Access Evaluation
_h[electronic resource] :
_bInternational Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010, Proceedings /
_cedited by Maristella Agosti, Nicola Ferro, Carol Peters, Maarten de Rijke, Alan Smeaton.
250 _a1st ed. 2010.
264 1 _aBerlin, Heidelberg :
_bSpringer Berlin Heidelberg :
_bImprint: Springer,
_c2010.
300 _aXIII, 145 p. 21 illus.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aInformation Systems and Applications, incl. Internet/Web, and HCI,
_x2946-1642 ;
_v6360
505 0 _aKeynote Addresses -- IR between Science and Engineering, and the Role of Experimentation -- Retrieval Evaluation in Practice -- Resources, Tools, and Methods -- A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages -- A New Approach for Cross-Language Plagiarism Analysis -- Creating a Persian-English Comparable Corpus -- Experimental Collections and Datasets (1) -- Validating Query Simulators: An Experiment Using Commercial Searches and Purchases -- Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation -- Experimental Collections and Datasets (2) -- MapReduce for Information Retrieval Evaluation: "Let's Quickly Test This on 12 TB of Data" -- Which Log for Which Information? Gathering Multilingual Data from Different Log File Types -- Evaluation Methodologies and Metrics (1) -- Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements -- On the Evaluation of Entity Profiles -- Evaluation Methodologies and Metrics (2) -- Evaluating Information Extraction -- Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation -- Automated Component-Level Evaluation: Present and Future -- Panels -- The Four Ladies of Experimental Evaluation -- A PROMISE for Experimental Evaluation.
520 _aIn its first ten years of activities (2000-2009), the Cross-Language Evaluation Forum (CLEF) played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain, such as cross-language question answering, image and geographic information retrieval, interactive retrieval, and many more. It also promoted the study and implementation of appropriate evaluation methodologies for these diverse types of tasks and media. As a result, CLEF has been extremely successful in building a wide, strong, and multidisciplinary research community, which covers and spans the different areas of expertise needed to deal with the spread of CLEF tracks and tasks. This constantly growing and almost completely voluntary community has dedicated an incredible amount of effort to making CLEF happen and is at the core of the CLEF achievements. CLEF 2010 represented a radical innovation of the "classic CLEF" format and an experiment aimed at understanding how "next generation" evaluation campaigns might be structured. We had to face the problem of how to innovate CLEF while still preserving its traditional core business, namely the benchmarking activities carried out in the various tracks and tasks. The consensus, after lively and community-wide discussions, was to make CLEF an independent four-day event, no longer organized in conjunction with the European Conference on Research and Advanced Technology for Digital Libraries (ECDL), where CLEF had been running as a two-and-a-half-day workshop. CLEF 2010 thus consisted of two main parts: a peer-reviewed conference - the first two days - and a series of laboratories and workshops - the second two days.
650 0 _aNatural language processing (Computer science).
_94741
650 0 _aUser interfaces (Computer systems).
_911681
650 0 _aHuman-computer interaction.
_96196
650 0 _aInformation storage and retrieval systems.
_922213
650 0 _aData mining.
_93907
650 0 _aApplication software.
_9169941
650 0 _aComputational linguistics.
_96146
650 1 4 _aNatural Language Processing (NLP).
_931587
650 2 4 _aUser Interfaces and Human Computer Interaction.
_931632
650 2 4 _aInformation Storage and Retrieval.
_923927
650 2 4 _aData Mining and Knowledge Discovery.
_9169942
650 2 4 _aComputer and Information Systems Applications.
_9169943
650 2 4 _aComputational Linguistics.
_96146
700 1 _aAgosti, Maristella.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9169944
700 1 _aFerro, Nicola.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9169945
700 1 _aPeters, Carol.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9169946
700 1 _ade Rijke, Maarten.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9169947
700 1 _aSmeaton, Alan.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
_9169948
710 2 _aSpringerLink (Online service)
_9169949
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783642159978
776 0 8 _iPrinted edition:
_z9783642159992
830 0 _aInformation Systems and Applications, incl. Internet/Web, and HCI,
_x2946-1642 ;
_v6360
_9169950
856 4 0 _uhttps://doi.org/10.1007/978-3-642-15998-5
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
912 _aZDB-2-LNC
942 _cELN
999 _c96853
_d96853