Search results: All records where Creators/Authors contains "Kosar, Tevfik"

  1. The rapid growth of data produced by scientific instruments, the Internet of Things (IoT), and social media has brought data transfer performance and resource consumption to the attention of the research community. The network infrastructure and end systems that enable this extensive data movement consume a substantial amount of electricity, measured in terawatt-hours per year. Managing energy consumption within the core networking infrastructure is an active research area, but there is limited work on reducing power consumption at the end systems during active data transfers. This paper presents a novel two-phase dynamic throughput and energy optimization model that uses an offline decision-search-tree-based clustering technique to encapsulate and categorize historical data transfer log information, and an online search optimization algorithm to find the application- and kernel-layer parameter combination that maximizes the achieved data transfer throughput while minimizing energy consumption. Our model also incorporates an ensemble method to reduce aleatoric uncertainty in finding optimal application- and kernel-layer parameters during the offline analysis phase. The experimental evaluation results show that our decision-tree-based model outperforms the state-of-the-art solutions in this area, achieving 117% higher throughput on average while consuming 19% less energy at the end systems during active data transfers. (A simplified sketch of such an online parameter search appears after this list.)
  2. Adaptive bitrate (ABR) algorithms aim to make optimal bitrate decisions under dynamically changing network conditions to ensure a high quality of experience (QoE) for users during video streaming. However, most existing ABR algorithms share the limitations of predefined rules and incorrect assumptions about streaming parameters. They also fall short in considering perceived quality in their QoE models, target higher bitrates regardless, and ignore the corresponding energy consumption. Together, these shortcomings lead to additional energy consumption and become a burden, especially for mobile device users. This paper proposes GreenABR, a new deep reinforcement learning-based ABR scheme that optimizes energy consumption during video streaming without sacrificing user QoE. GreenABR employs a standard perceived quality metric, VMAF, and real power measurements collected through a streaming application. GreenABR's deep reinforcement learning model makes no assumptions about the streaming environment and learns how to adapt to dynamically changing conditions in a wide range of real network scenarios. GreenABR outperforms existing state-of-the-art ABR algorithms by saving up to 57% in streaming energy consumption and 60% in data consumption while achieving up to 22% more perceptual QoE, thanks to up to 84% less rebuffering time and near-zero capacity violations. (A simplified sketch of an energy-aware ABR reward appears after this list.)
  3. With the proliferation of data movement across the Internet, global data traffic per year has already exceeded the zettabyte scale. The network infrastructure and end-systems facilitating this vast data movement consume an extensive amount of electricity, measured in terawatt-hours per year. This massive energy footprint costs the world economy billions of dollars, partly due to the energy consumed at the network end-systems. Although extensive research has been done on managing power consumption within the core networking infrastructure, there is little research on reducing power consumption at the end-systems during active data transfers. This paper presents a novel cross-layer optimization framework, called Cross-LayerHLA, that minimizes energy consumption at the end-systems by applying machine learning techniques to historical transfer logs and extracting the hidden relationships between different parameters affecting both performance and resource utilization. It utilizes offline analysis to improve online learning and dynamic tuning of application-level and kernel-level parameters with minimal overhead. This approach minimizes end-system energy consumption while maximizing data transfer throughput. Our experimental results show that Cross-LayerHLA outperforms other state-of-the-art solutions in this area.
  4. In parallel with big data processing and analysis dominating the usage of distributed and cloud infrastructures, the demand for distributed metadata access and transfer has increased. In many application domains, the volume of data generated exceeds petabytes, while the corresponding metadata amounts to terabytes or even more. In this paper, we propose a novel solution for efficient and scalable metadata access for distributed applications across wide-area networks, dubbed SMURF. Our solution combines novel pipelining and concurrent transfer mechanisms with reliability, provides distributed continuum caching and prefetching strategies to sidestep fetching latency, and achieves scalable and high-performance metadata fetch/prefetch services in the cloud. We also study the phenomenon of semantic locality in real trace logs, which has not been well utilized in metadata access prediction. We implement our predictor based on this observation and compare it with three existing state-of-the-art prefetch schemes on Yahoo! Hadoop audit traces. By effectively caching and prefetching metadata based on the access patterns, our continuum caching and prefetching mechanism greatly improves the local cache hit rate and reduces the average fetching latency. We replayed approximately 20 million metadata access operations from real audit traces, on which our system achieved 80% prefetch prediction accuracy and reduced the average fetch latency by 50% compared to the state-of-the-art mechanisms. (A simplified sketch of access-pattern-driven prefetching appears after this list.)
  5. With the emergence of the data deluge, the energy footprint of global data movement has surpassed 100 terawatt-hours, costing the world economy more than 20 billion US dollars. During an active data transfer, depending on the number of hops between the source and destination, the networking infrastructure consumes between 10% and 75% of the total energy, and the rest is consumed by the end systems. Even though there has been extensive research on reducing power consumption in the networking infrastructure, the work focusing on saving energy at the end systems has been limited to the tuning of a few application-level parameters. In this paper, we introduce a novel cross-layer optimization framework which jointly considers application-level and kernel-level parameters to minimize energy consumption without sacrificing transfer throughput. We present three different algorithms which can dynamically tune the CPU frequency level, the number of active CPU cores, the number of active transfer threads, the number of parallel TCP streams, and the level of transfer command pipelining to achieve different user-set goals. Experimental results show that our proposed algorithms outperform the state-of-the-art solutions, achieving up to 80% higher throughput while consuming 48% less energy. (A simplified sketch of the CPU frequency and core-count knobs appears after this list.)
  6. Mobile data traffic will exceed PC Internet traffic by 2020. As the number of smartphone users and the amount of data transferred per smartphone grow exponentially, limited battery power is becoming an increasingly critical problem for mobile devices that depend on network I/O. Despite the growing body of research on power management techniques for mobile devices at the hardware layer as well as the lower layers of the networking stack, there has been little work focusing on saving energy at the application layer for mobile systems during network I/O. In this paper, we propose a novel technique, called FastHLA, that can achieve significant energy savings at the application layer during mobile network I/O without sacrificing performance. FastHLA is based on historical log analysis and real-time dynamic tuning of mobile data transfers to achieve the optimization goal. FastHLA can increase data transfer throughput by up to 10X and decrease energy consumption by up to 5X compared to state-of-the-art HTTP/2.0 transfers.
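
For result 1 (and, in the same spirit, results 3, 5, and 6), here is a minimal sketch, in Python, of an online search over application- and kernel-layer transfer parameters that scores each candidate by a weighted throughput/energy objective. The parameter names, candidate grid, toy probe model, and weights are all illustrative assumptions, not the papers' actual algorithms; a real implementation would seed and prune the search using the offline decision-tree clusters learned from historical transfer logs and would measure real throughput and energy.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class TransferParams:          # hypothetical parameter tuple
    streams: int               # parallel TCP streams
    threads: int               # active transfer threads
    cpu_freq_ghz: float        # CPU frequency level

def probe_transfer(p: TransferParams):
    """Toy stand-in for a short probe transfer: returns synthetic
    (throughput_mbps, energy_joules) instead of real measurements."""
    throughput = min(p.streams * p.threads * 120.0, 1000.0)       # saturating link
    energy = 5.0 + 2.0 * p.cpu_freq_ghz * (p.threads + p.streams)
    return throughput, energy

def online_search(candidates, alpha=1.0, beta=0.5):
    """Pick the setting maximizing alpha*throughput - beta*energy,
    a stand-in for a joint throughput/energy objective."""
    best, best_score = None, float("-inf")
    for p in candidates:
        throughput, energy = probe_transfer(p)
        score = alpha * throughput - beta * energy
        if score > best_score:
            best, best_score = p, score
    return best

# Candidate grid; an offline clustering phase would normally prune this.
grid = [TransferParams(s, t, f)
        for s, t, f in product([1, 2, 4, 8], [1, 2, 4], [1.2, 2.0, 2.8])]
print(online_search(grid))
```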
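For result 2 (GreenABR), here is a minimal sketch of the kind of per-chunk reward an energy-aware, reinforcement-learning ABR agent could optimize: a perceptual-quality term (e.g., VMAF), a smoothness penalty, a rebuffering penalty, and an energy penalty. The weights and the functional form are illustrative assumptions, not GreenABR's published reward.

```python
def abr_reward(vmaf, prev_vmaf, rebuffer_s, energy_j,
               w_quality=1.0, w_smooth=0.5, w_rebuffer=4.0, w_energy=0.3):
    """Per-chunk reward for an energy-aware ABR agent (illustrative weights).

    vmaf       : perceptual quality of the chunk just played (0-100)
    prev_vmaf  : quality of the previous chunk (for a smoothness penalty)
    rebuffer_s : stall time incurred while fetching this chunk, in seconds
    energy_j   : measured device energy spent downloading/decoding, in joules
    """
    quality = w_quality * vmaf
    smoothness_penalty = w_smooth * abs(vmaf - prev_vmaf)
    rebuffer_penalty = w_rebuffer * rebuffer_s
    energy_penalty = w_energy * energy_j
    return quality - smoothness_penalty - rebuffer_penalty - energy_penalty

# Example: a high-quality chunk with a short stall and moderate energy cost.
print(abr_reward(vmaf=85.0, prev_vmaf=78.0, rebuffer_s=0.2, energy_j=12.0))
```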
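For result 4 (SMURF), here is a minimal sketch of access-pattern-driven metadata prefetching: a first-order successor predictor feeding a small LRU cache. Both classes are generic illustrations, not SMURF's continuum caching or its semantic-locality predictor.

```python
from collections import Counter, OrderedDict, defaultdict

class LRUMetadataCache:
    """Tiny LRU cache for metadata entries (illustrative)."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)      # mark as most recently used
            return True                        # cache hit
        return False                           # miss: caller fetches remotely

    def put(self, key, value=None):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used

class SuccessorPrefetcher:
    """Predicts the next metadata path as the most frequent successor of
    the current path seen so far, then prefetches it into the cache."""
    def __init__(self, cache):
        self.cache = cache
        self.successors = defaultdict(Counter)
        self.prev = None

    def access(self, path):
        hit = self.cache.get(path)
        if not hit:
            self.cache.put(path)               # simulate a remote fetch
        if self.prev is not None:
            self.successors[self.prev][path] += 1
        if self.successors[path]:              # prefetch most likely next path
            predicted = self.successors[path].most_common(1)[0][0]
            self.cache.put(predicted)
        self.prev = path
        return hit
```

Replaying a trace through SuccessorPrefetcher.access() and counting the returned hits gives a simple cache hit rate for comparing prefetching policies.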
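For result 5, here is a minimal sketch of two of the kernel-level knobs the abstract mentions, CPU frequency and the number of active cores, as they are typically exposed through Linux sysfs (cpufreq and CPU hotplug). The paths assume a typical cpufreq-enabled kernel, writes require root, and cpu0 often cannot be taken offline; the paper's algorithms decide when and how to move these knobs, which is not shown here.

```python
from pathlib import Path

CPU_SYSFS = Path("/sys/devices/system/cpu")      # standard Linux location

def current_freq_khz(cpu: int = 0) -> int:
    """Read a core's current frequency (kHz) from cpufreq sysfs."""
    return int((CPU_SYSFS / f"cpu{cpu}/cpufreq/scaling_cur_freq").read_text())

def cap_max_freq_khz(khz: int, cpu: int = 0) -> None:
    """Lower a core's maximum frequency by writing scaling_max_freq (root)."""
    (CPU_SYSFS / f"cpu{cpu}/cpufreq/scaling_max_freq").write_text(str(khz))

def set_governor(name: str, cpu: int = 0) -> None:
    """Select a cpufreq governor such as 'powersave' or 'performance' (root)."""
    (CPU_SYSFS / f"cpu{cpu}/cpufreq/scaling_governor").write_text(name)

def set_core_online(cpu: int, online: bool) -> None:
    """Enable or disable a core via the CPU hotplug interface (root)."""
    (CPU_SYSFS / f"cpu{cpu}/online").write_text("1" if online else "0")
```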