

Search for: All records

Creators/Authors contains: "Wang, Oliver"


  1. Mazurek, Michelle; Sher, Micah (Ed.)

    Web tracking by ad networks and other data-driven businesses is often privacy-invasive. Privacy laws, such as the California Consumer Privacy Act, aim to give people more control over their data. In particular, they provide a right to opt out from web tracking via privacy preference signals, notably Global Privacy Control (GPC). GPC holds the promise of enabling people to exercise their opt-out rights on the web. Broad adoption of GPC hinges on its usability. In a usability survey we find that 94% of the participants would turn on GPC, indicating a need for such an efficient and effective opt-out mechanism. 81% of the participants in our survey also have a correct understanding of what GPC does, ensuring that their intent is accurately represented by their choice. The effectiveness of GPC depends on whether websites' GPC compliance can be enforced. A site's GPC compliance can be analyzed based on privacy flags, such as the US Privacy String, which is used on many sites to indicate the opt-out status of a web user. Leveraging the US Privacy String for GPC purposes, we implement a proof-of-concept browser extension that successfully and correctly analyzes sites' GPC compliance at a rate of 89%. We further implement a web crawler for our browser extension, demonstrating that our analysis approach is scalable. We find that many sites do not respect GPC opt-out signals despite being legally obligated to do so. Only 54/464 (12%) of sites with a US Privacy String opt out users after having received a GPC signal.

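    For readers unfamiliar with the signals this abstract refers to, the sketch below illustrates the kind of compliance check it describes: after a GPC signal is delivered (sent as the Sec-GPC: 1 request header and exposed to scripts as navigator.globalPrivacyControl), a site's US Privacy String should record the user as opted out of sale. This is a minimal Python illustration of the IAB US Privacy String format; the function and variable names are hypothetical and not taken from the authors' extension.

    ```python
    # Minimal sketch, assuming the IAB "US Privacy" (CCPA) string format:
    # 4 characters = version, explicit notice, opt-out of sale, LSPA,
    # each of the last three being Y, N, or "-".
    # Names below are illustrative, not the authors' implementation.

    def usp_opt_out_status(usp_string: str) -> str:
        """Interpret the opt-out-of-sale flag (third character) of a US Privacy String."""
        if not usp_string or len(usp_string) != 4 or usp_string[0] != "1":
            return "invalid"
        flag = usp_string[2].upper()
        return {"Y": "opted_out", "N": "not_opted_out", "-": "not_applicable"}.get(flag, "invalid")


    def honors_gpc(usp_after_signal: str) -> bool:
        """A site is counted as honoring GPC if, after the browser sends the
        Sec-GPC: 1 header, its US Privacy String records an opt-out of sale."""
        return usp_opt_out_status(usp_after_signal) == "opted_out"


    if __name__ == "__main__":
        print(honors_gpc("1YYN"))  # True  -> site records the GPC opt-out
        print(honors_gpc("1YNN"))  # False -> site ignores the GPC signal
        print(honors_gpc("1---"))  # False -> string says CCPA does not apply
    ```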
  2. While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on ImageNet classification have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new dataset of human perceptual similarity judgments. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins on our dataset. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
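    As a rough illustration of the "deep features as a perceptual metric" idea described above, the sketch below computes an LPIPS-style distance from ImageNet-pretrained VGG16 activations via torchvision: activations at several ReLU layers are unit-normalized per channel, squared differences are averaged over space, and the per-layer terms are summed. The learned linear calibration weights from the paper are omitted, and the layer choice is illustrative rather than the authors' exact configuration.

    ```python
    # Minimal sketch, assuming PyTorch + torchvision (>= 0.13 for the Weights API).
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16, VGG16_Weights

    # Feature indices of the ReLU layers tapped here:
    # relu1_2, relu2_2, relu3_3, relu4_3, relu5_3 (illustrative choice).
    TAP_LAYERS = {3, 8, 15, 22, 29}

    vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def _features(x: torch.Tensor) -> list[torch.Tensor]:
        """Run x through VGG16's convolutional stack, collecting tapped activations."""
        feats = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in TAP_LAYERS:
                feats.append(x)
        return feats

    def deep_feature_distance(img0: torch.Tensor, img1: torch.Tensor) -> torch.Tensor:
        """img0, img1: (N, 3, H, W) tensors, ideally ImageNet-normalized.
        Returns a per-image distance; larger means less perceptually similar."""
        dist = 0.0
        for f0, f1 in zip(_features(img0), _features(img1)):
            # Unit-normalize each spatial position's feature vector across channels.
            f0 = F.normalize(f0, dim=1)
            f1 = F.normalize(f1, dim=1)
            # Squared difference, summed over channels, averaged over space.
            dist = dist + ((f0 - f1) ** 2).sum(dim=1).mean(dim=(1, 2))
        return dist

    if __name__ == "__main__":
        # Smoke test on random tensors; real images should be ImageNet-normalized first.
        a = torch.rand(1, 3, 224, 224)
        b = a + 0.05 * torch.randn_like(a)
        print(deep_feature_distance(a, b))
    ```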