
Search for: All records

Creators/Authors contains: "Lan, S."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative) interval.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. For many applications with limited computation, communication, storage and energy resources, there is an imperative need of computer vision methods that could select an informative subset of the input video for efficient processing at or near real time. In the literature, there are two relevant groups of approaches: generating a "trailer" for a video or fast-forwarding while watching/processing the video. The first group is supported by video summarization techniques, which require processing of the entire video to select an important subset for showing to users. In the second group, current fast-forwarding methods depend on either manual control or automatic adaptation of playback speed, which often do not present an accurate representation and may still require processing of every frame. In this paper, we introduce FastForwardNet (FFNet), a reinforcement learning agent that gets inspiration from video summarization and does fast-forwarding differently. It is an online framework that automatically fast-forwards a video and presents a representative subset of frames to users on the fly. It does not require processing the entire video, but just the portion that is selected by the fast-forward agent, which makes the process very computationally efficient. The online nature of our proposed method also enables the users to begin fast-forwarding at any point of the video. Experiments on two real-world datasets demonstrate that our method can provide better representation of the input video (about 6%-20% improvement on coverage of important frames) with much less processing requirement (more than 80% reduction in the number of frames processed).
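The core idea of the abstract — an agent that, at each processed frame, chooses how far to jump ahead, so only a fraction of frames is ever touched — can be sketched as a minimal online loop. The skip actions, toy policy, and frame counts below are hypothetical stand-ins for illustration; they are not the paper's actual network, reward, or training procedure.

```python
# Sketch of an online fast-forward loop in the spirit of FFNet:
# at each processed frame the agent picks a skip length and jumps,
# so only the selected subset of frames is ever processed.

SKIP_ACTIONS = [1, 5, 10, 25]  # candidate jump sizes (hypothetical)

def fast_forward(num_frames, choose_skip, start=0):
    """Return indices of the frames the agent actually processes.

    choose_skip(frame_idx) -> a positive skip length.
    `start` can be any frame, mirroring the online property that
    fast-forwarding may begin anywhere in the video.
    """
    processed = []
    i = start
    while i < num_frames:
        processed.append(i)
        i += choose_skip(i)
    return processed

# Toy stand-in policy: crawl through "important" stretches, leap elsewhere.
def toy_policy(i):
    return 1 if (i % 100) < 10 else 25

frames = fast_forward(1000, toy_policy)
print(len(frames), "of 1000 frames processed")
```

In the paper the skip decision comes from a learned Q-function over frame features; here a fixed rule stands in for it, but the processing saving comes from the same mechanism: unselected frames are never decoded or scored.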
  2. Free, publicly-accessible full text available February 1, 2024
  3. Free, publicly-accessible full text available February 1, 2024
  4. Free, publicly-accessible full text available February 1, 2024
  5. Free, publicly-accessible full text available February 9, 2024
  6. Abstract: Partons traversing the strongly interacting medium produced in heavy-ion collisions are expected to lose energy depending on their color charge and mass. We measure the nuclear modification factors for charm- and bottom-decay electrons, defined as the ratio of yields, divided by the number of binary nucleon–nucleon collisions, in $$\sqrt{s_{\textrm{NN}}}=200$$ GeV Au+Au collisions to p+p collisions ($$R_{\textrm{AA}}$$), or in central to peripheral Au+Au collisions ($$R_{\textrm{CP}}$$). We find the bottom-decay electron $$R_{\textrm{AA}}$$ and $$R_{\textrm{CP}}$$ to be significantly higher than those of charm-decay electrons. Model calculations including mass-dependent parton energy loss in a strongly coupled medium are consistent with the measured data. These observations provide evidence of mass ordering of charm and bottom quark energy loss when traversing the strongly coupled medium created in heavy-ion collisions.
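The definition in the abstract — the Au+Au yield divided by the number of binary nucleon–nucleon collisions, normalized to the p+p yield — reduces to a one-line formula. The numeric values below are illustrative placeholders only, not measured yields from the analysis.

```python
# Nuclear modification factor as defined in the abstract:
#   R_AA = (yield_AA / N_coll) / yield_pp
# If a nucleus-nucleus collision were a simple superposition of
# N_coll independent nucleon-nucleon collisions, R_AA would be 1;
# medium-induced energy loss suppresses it below 1.

def r_aa(yield_aa, n_coll, yield_pp):
    """Compute R_AA from an AA yield, N_coll, and a pp reference yield."""
    return (yield_aa / n_coll) / yield_pp

# Placeholder numbers for illustration (not data from the paper):
print(r_aa(yield_aa=1000.0, n_coll=100.0, yield_pp=10.0))  # no modification
print(r_aa(yield_aa=400.0, n_coll=100.0, yield_pp=10.0))   # suppression
```

$$R_{\textrm{CP}}$$ follows the same pattern, with the peripheral Au+Au yield (scaled by its own $$N_{\textrm{coll}}$$) playing the role of the reference; the mass-ordering result in the abstract corresponds to bottom-decay electrons sitting less far below 1 than charm-decay electrons.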