Description

Despite recent successes of small object search in images, the search and localization of actions in crowded videos remains a challenging problem because of (1) the large variations of human actions and (2) the intensive computational cost of searching the video space. To address these challenges, we propose a fast action search and localization method that supports relevance feedback from the user. By characterizing videos as spatio-temporal interest points and building a random forest to index and match these points, our query matching is robust and efficient. To enable efficient action localization, we propose a coarse-to-fine subvolume search scheme, which is several orders of magnitude faster than the existing video branch-and-bound search. Challenging cross-dataset searches of several actions validate the effectiveness and efficiency of our method.
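
The description does not spell out how the random forest index is built or queried, but the general idea of indexing interest-point descriptors with randomized trees so that a query point only has to be compared against a small leaf of candidates can be illustrated with the following Python sketch. All names here (RandomProjectionTree, RandomForestIndex, the 162-dimensional descriptor size) are hypothetical and chosen only for illustration; this is not the authors' implementation.

import numpy as np

class RandomProjectionTree:
    """One tree of the index: recursively split descriptors on a random
    dimension at the median, so a lookup reaches a small candidate leaf."""

    def __init__(self, descriptors, indices, leaf_size=20, rng=None):
        self.rng = rng or np.random.default_rng()
        self.root = self._build(descriptors, indices, leaf_size)

    def _build(self, X, idx, leaf_size):
        if len(idx) <= leaf_size:
            return {"leaf": idx}
        dim = self.rng.integers(X.shape[1])      # random split dimension
        thresh = np.median(X[idx, dim])          # split at the median value
        left = idx[X[idx, dim] <= thresh]
        right = idx[X[idx, dim] > thresh]
        if len(left) == 0 or len(right) == 0:    # degenerate split: stop here
            return {"leaf": idx}
        return {"dim": dim, "thresh": thresh,
                "left": self._build(X, left, leaf_size),
                "right": self._build(X, right, leaf_size)}

    def query(self, x):
        node = self.root
        while "leaf" not in node:
            node = node["left"] if x[node["dim"]] <= node["thresh"] else node["right"]
        return node["leaf"]                      # indices of candidate matches


class RandomForestIndex:
    """Forest of independently randomized trees; the union of their leaf
    candidates approximates the nearest database interest points."""

    def __init__(self, descriptors, n_trees=8, leaf_size=20, seed=0):
        rng = np.random.default_rng(seed)
        all_idx = np.arange(len(descriptors))
        self.descriptors = descriptors
        self.trees = [RandomProjectionTree(descriptors, all_idx, leaf_size, rng)
                      for _ in range(n_trees)]

    def match(self, query_descriptor, k=5):
        cands = np.unique(np.concatenate(
            [t.query(query_descriptor) for t in self.trees]))
        d = np.linalg.norm(self.descriptors[cands] - query_descriptor, axis=1)
        return cands[np.argsort(d)[:k]]          # k closest candidates only


if __name__ == "__main__":
    # Hypothetical database of 10,000 spatio-temporal interest point descriptors.
    db = np.random.rand(10000, 162).astype(np.float32)
    index = RandomForestIndex(db)
    print(index.match(db[42]))                   # point 42 should rank first

Matching a query point this way costs one tree traversal per tree plus a handful of exact distance computations, rather than a linear scan over every interest point in the database, which is what makes this style of indexing attractive for large video collections.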
