I want to know how to query motion information from a cricket match video, and how to play the video from a specific point after the query.
Thread: content based video analysis
Query how!? And to what purpose?
Video (or any other kind of) analysis charts a particular set of parameters along a certain set of scales, with the intention of qualifying or operating on the signal in a manner that isn't natively obvious without the analysis.
This is for my final-year project. I hope to query a cricket match video through motion detection: for example, if we want to quickly look at the wickets that have fallen, we should be able to query only the frames where a wicket falls and play just those parts. I am trying to do the project in C#. I tried converting the video file into MPEG-7, but it isn't working. Are there any other ways to do the project?
1. You cannot "convert to MPEG-7" because, as I mentioned in that other thread, the video content itself is not part of MPEG-7; only the METADATA is (and possibly the container). Regardless, putting something "in MPEG-7" won't get you any closer to your goal.
2. If you have the opportunity to have a constrained, locked-down camera + zoom setting THE WHOLE TIME, you could use something like what was mentioned in another thread about object matching in AviSynth...
You are talking about MACHINE VISION. This requires giving the computer an understanding of what an "object" is (based on size, shape, color, position, pattern, etc.) and then combining motion vector analysis, image isolation/masking, threshold detection, and frame marking to get where you want. Beyond my capabilities. But do a Google search on "machine vision" and you might get some good pointers (though most of that stuff is at universities, where they are keeping a lid on the code to their real goodies until they can cash in on it).
Edit: funnily enough, MPEG-4 has certain options that could have taken advantage of this very thing (object layering, composition adjustment...), but that part of the spec hasn't really been publicly expanded upon.
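To make the "motion analysis + threshold detection + frame marking" idea above concrete, here is a minimal sketch using plain frame differencing. It is in Python with NumPy purely for illustration (the thread's project is in C#, and the thresholds and function names here are made up, not from the thread); real machine-vision systems would use motion vectors or object models instead of raw pixel differences.

```python
import numpy as np

def motion_score(prev_frame, frame, diff_threshold=25):
    """Fraction of pixels whose absolute difference from the previous
    frame exceeds diff_threshold (simple frame differencing)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > diff_threshold).mean())

def detect_motion_frames(frames, diff_threshold=25, motion_threshold=0.05):
    """Mark the indices of frames showing significant motion
    relative to the previous frame."""
    flagged = []
    for i in range(1, len(frames)):
        if motion_score(frames[i - 1], frames[i], diff_threshold) > motion_threshold:
            flagged.append(i)
    return flagged

# Tiny synthetic demo: two identical frames, then one where an "object" appears.
f0 = np.zeros((64, 64), dtype=np.uint8)
f1 = f0.copy()               # no motion between f0 and f1
f2 = f0.copy()
f2[10:30, 10:30] = 255       # a bright block appears -> lots of changed pixels
print(detect_motion_frames([f0, f1, f2]))   # -> [2]
```

The hard part the posters mention is exactly what this sketch skips: deciding that a flagged frame is a *wicket* rather than just any motion, which needs object-level understanding on top of the raw motion signal.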
Did I understand this right?
a. a video of a cricket match
b. a facility/program that lets you 'see' the motion vectors in the video (probably not hard when using libavcodec)
now you want to:
a. add descriptions to each frame (based on the motion vectors <- this is the hard part, because you need some sort of specification to categorize the frames; I guess combining SIFT with the motion vectors might work...)
b. later run a query on these descriptions to get the frames that match the content you want based on your query (that's easy)
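Step (b) really is the easy part. Assuming step (a) has somehow produced a label per frame (the labels and frame numbers below are invented for illustration, not derived from any real annotation scheme), the query layer can be as simple as:

```python
# Hypothetical per-frame annotations produced by step (a):
# frame index -> event label.
annotations = {
    120: "delivery",
    450: "wicket",
    451: "wicket",
    900: "boundary",
    1320: "wicket",
}

def query_frames(annotations, label):
    """Return the sorted frame indices whose description matches the query."""
    return sorted(idx for idx, desc in annotations.items() if desc == label)

print(query_frames(annotations, "wicket"))   # -> [450, 451, 1320]
```

Everything interesting is hidden inside building `annotations` in the first place; once that mapping exists, querying it is a one-liner in any language, C# included.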
Hi Cu Selur,
Yes, you have understood it correctly, and I need to play the queried frames as well. Can you help me with this project?
Like I wrote, libav (maybe ffms) should provide the facility to a. decode and visualize the content and b. provide methods to get (and visualize) the motion vectors, so I would recommend asking in one of the IRC channels where people related to libav hang out.
Other than that, it's 'just' figuring out how to connect the collected data with some kind of meta model that represents the semantic content you are interested in. (Like I wrote, this is probably the hard part.)
-> Since I haven't done motion vector or object recognition related work in years and never really used C# (I'm more of a C++ guy), I'm not really the guy to help.
PS: Cu is an acronym for 'see you' and not part of my name...
PPS: http://www.codeproject.com/Articles/10248/Motion-Detection-Algorithms might be interesting...
Last edited by Selur; 5th Apr 2012 at 08:13.