
Search for: All records

Creators/Authors contains: "Lan, S."

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

  1. For many applications with limited computation, communication, storage, and energy resources, there is an imperative need for computer vision methods that can select an informative subset of the input video for efficient processing at or near real time. In the literature, there are two relevant groups of approaches: generating a "trailer" for a video, or fast-forwarding while watching/processing the video. The first group is supported by video summarization techniques, which require processing of the entire video to select an important subset for showing to users. In the second group, current fast-forwarding methods depend on either manual control or automatic adaptation of playback speed, which often do not present an accurate representation and may still require processing of every frame. In this paper, we introduce FastForwardNet (FFNet), a reinforcement learning agent that takes inspiration from video summarization but does fast-forwarding differently. It is an online framework that automatically fast-forwards a video and presents a representative subset of frames to users on the fly. It does not require processing the entire video, only the portion selected by the fast-forward agent, which makes the process very computationally efficient. The online nature of the proposed method also enables users to begin fast-forwarding at any point of the video. Experiments on two real-world datasets demonstrate that the method provides a better representation of the input video (about 6%-20% improvement in coverage of important frames) with far less processing (more than 80% reduction in the number of frames processed). [A minimal code sketch of this skip-and-process loop appears after this results list.]
  2. Free, publicly-accessible full text available March 1, 2023
  3. Free, publicly-accessible full text available December 1, 2022
  4. Free, publicly-accessible full text available September 1, 2022
  5. Free, publicly-accessible full text available August 1, 2022
  6. Free, publicly-accessible full text available August 1, 2022
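The FFNet abstract in item 1 describes an online skip-decision loop: an agent looks only at the frame it currently lands on, decides how far to jump, and never touches the skipped frames. The Python sketch below illustrates that control flow under stated assumptions; the SkipPolicy class, its skip choices, and the uniform random choice are hypothetical placeholders, not the paper's actual FFNet architecture, features, or training procedure.

```python
import numpy as np

# Hypothetical stand-in for a learned fast-forward policy. A trained agent
# would map the current frame's features to the skip that maximizes expected
# coverage of important frames; a uniform random choice keeps this runnable.
class SkipPolicy:
    def __init__(self, skip_choices=(1, 5, 10, 25), seed=0):
        self.skip_choices = skip_choices
        self.rng = np.random.default_rng(seed)

    def choose_skip(self, frame_features):
        # Placeholder decision: ignore the features, pick a skip uniformly.
        return int(self.rng.choice(self.skip_choices))


def fast_forward(frames, policy):
    """Online fast-forwarding: only the frames the agent lands on are
    processed; everything the agent jumps over is never examined."""
    shown = []  # indices of the representative subset presented to the user
    i = 0
    while i < len(frames):
        features = frames[i]          # per-frame features (precomputed here)
        shown.append(i)               # present this frame on the fly
        i += policy.choose_skip(features)  # jump ahead; skipped frames cost nothing
    return shown


if __name__ == "__main__":
    video = np.random.rand(500, 128)  # 500 frames of 128-d features
    subset = fast_forward(video, SkipPolicy())
    print(f"processed {len(subset)} of {len(video)} frames")
```

Because the loop advances by the agent's chosen skip rather than frame by frame, total work scales with the number of frames shown, not the video length, which is the source of the processing reduction the abstract reports; the loop can also start at any index, matching the online property described.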