Feature Content Extraction in Videos Using Dynamic Ontology Rule Approach

International Journal of Computer & Organization Trends (IJCOT)
© 2014 by IJCOT Journal
Volume - 4 Issue - 6
Year of Publication : 2014
Authors : CH. Vengaiah, S. Venu Gopal
DOI : 10.14445/22492593/IJCOT-V15P312

Citation

CH. Vengaiah, S. Venu Gopal, "Feature Content Extraction in Videos Using Dynamic Ontology Rule Approach", International Journal of Computer & Organization Trends (IJCOT), V4(6):28-33 Nov - Dec 2014, ISSN:2249-2593, www.ijcotjournal.org. Published by Seventh Sense Research Group.

Abstract

The recent rise of video-based applications has created a demand for extracting semantic content from videos. Raw data and low-level features alone are not sufficient to fulfill user needs; a deeper understanding of the content at the semantic level is required. Currently, manual techniques, which are inefficient, subjective, costly in time, and limiting to querying capabilities, are being used to bridge the gap between low-level representative features and high-level semantic content. Existing work proposes an ontology-based fuzzy video semantic content model that makes use of spatial/temporal relations in event and concept definitions. This metaontology definition offers a wide-domain-applicable rule construction standard that lets the user build an ontology for a given domain. This is not optimal, since it depends on the user's domain selection and is restricted to the metaontology alone. Digital video databases have become more pervasive, and finding video clips quickly in large databases is a major challenge. Because of the nature of video, accessing the content of a video is difficult and time-consuming. With today's content-based video systems, there is a major gap between the user's information needs and what the system is able to offer. Therefore, intelligent means of interpreting video content, semantic annotation, and retrieval are necessary topics of research. In this work, we treat the semantic interpretation of video content as annotation tags for video clips, providing a retrieval-driven and use-oriented semantics extraction, annotation, and retrieval model for a video content database management system. The system design employs an algorithm based on object relations, and it can present the defined semantics with fast real-time computation. The content of a video is analyzed in terms of low-level features extracted from the clip.
These features primarily comprise color, shape, and texture. In this work, we present a novel interactive system based on a visual paradigm in which low-level features play an important role in video retrieval, using an autocorrelation feature extraction process. Autocorrelation is the correlation between observations at different times; the set of autocorrelation coefficients arranged as a function of the separation in time is the sample autocorrelation function.
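The paper does not give the exact formulation it uses, but the standard sample autocorrelation function it refers to can be sketched as follows. The function name and its application to a per-frame feature sequence are illustrative assumptions, not the authors' implementation:

```python
def sample_autocorrelation(x, max_lag):
    """Sample autocorrelation r_k of a sequence x for lags 0..max_lag.

    r_k = sum_{t=0}^{n-k-1} (x[t]-m)(x[t+k]-m) / sum_{t=0}^{n-1} (x[t]-m)^2
    where m is the sample mean. x could be, e.g., a per-frame low-level
    feature value (illustrative assumption).
    """
    n = len(x)
    m = sum(x) / n
    denom = sum((v - m) ** 2 for v in x)  # variance term (unnormalized)
    acf = []
    for k in range(max_lag + 1):
        num = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
        acf.append(num / denom)
    return acf  # acf[0] is always 1.0
```

For retrieval, such coefficients computed over a feature time series could serve as a compact temporal signature of a clip.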

References

[1] Yakup Yildirim, Adnan Yazici, and Turgay Yilmaz, “Automatic Semantic Content Extraction in Videos Using a Fuzzy Ontology and Rule-Based Model,” IEEE Trans. Knowledge and Data Eng., vol. 25, no. 1, pp. 47-61, 2013.
[2] M. Petkovic and W. Jonker, “An Overview of Data Models and Query Languages for Content-Based Video Retrieval,” Proc. Int’l Conf. Advances in Infrastructure for E-Business, Science, and Education on the Internet, Aug. 2000.
[3] M. Petkovic and W. Jonker, “Content-Based Video Retrieval by Integrating Spatiotemporal and Stochastic Recognition of Events,” Proc. IEEE Int’l Workshop Detection and Recognition of Events in Video, pp. 75-82, 2001.
[4] L. Bai, S.Y. Lao, G. Jones, and A.F. Smeaton, “Video Semantic Content Analysis Based on Ontology,” IMVIP ’07: Proc. 11th Int’l Machine Vision and Image Processing Conf., pp. 117-124, 2007.
[5] G.G. Medioni, I. Cohen, F. Bre´mond, S. Hongeng, and R. Nevatia, “Event Detection and Analysis from Video Streams,” IEEE Trans. Pattern Analysis Machine Intelligence, vol. 23, no. 8, pp. 873-889, Aug. 2001.
[6] S. Hongeng, R. Nevatia, and F. Bre´mond, “Video-Based Event Recognition: Activity Representation and Probabilistic Recognition Methods,” Computer Vision and Image Understanding, vol. 96, no. 2, pp. 129-162, 2004.
[7] A. Hakeem and M. Shah, “Multiple Agent Event Detection and Representation in Videos,” Proc. 20th Nat’l Conf. Artificial Intelligence (AAAI), pp. 89-94, 2005.