ACMMM 2014, 22nd ACM International Conference on Multimedia, November 3-7, 2014, Orlando, Florida, USA
      
  This paper introduces a framework for establishing links between related media fragments within a collection of videos. A set of analysis techniques is applied for extracting information from different types of data. Visual-based shot and scene segmentation is performed for defining media fragments at different granularity levels, while visual cues are detected from keyframes of the video via concept detection and optical character recognition (OCR). Keyword extraction is applied on textual data such as the output of OCR, subtitles and metadata. This set of results is used for the automatic identification and linking of related media fragments. The proposed framework exhibited competitive performance in the Video Hyperlinking sub-task of MediaEval 2013, indicating that video scene segmentation can provide more meaningful segments, compared to other decomposition methods, for hyperlinking purposes.
Type:
        Conference
      City:
        Orlando
      Date:
        2014-11-03
      Department:
        Data Science
      Eurecom Ref:
        4398
      Copyright:
        © ACM, 2014. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACMMM 2014, 22nd ACM International Conference on Multimedia, November 3-7, 2014, Orlando, Florida, USA http://dx.doi.org/10.1145/2647868.2655041