Title: Handling Uncertainty in Geo-Spatial Data
An inherent challenge in any dataset containing spatial and/or temporal information is uncertainty due to various sources of imprecision. Accounting for the impact of this uncertainty is paramount when estimating the reliability (confidence) of any query result derived from the underlying input data. To deal with uncertainty, solutions have been proposed independently in the geo-science and the data-science research communities. This interdisciplinary tutorial bridges the gap between the two communities by providing a comprehensive overview of the challenges involved in dealing with uncertain geo-spatial data, by surveying solutions from both research communities, and by identifying similarities, synergies, and open research problems.
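To make the notion of query-result confidence concrete, the following is a minimal sketch (not taken from the tutorial) that estimates, via Monte Carlo sampling, the probability that a point with uncertain location satisfies a spatial range query. The Gaussian error model and all parameter values are illustrative assumptions.

    # Hypothetical sketch: confidence of a range query over an uncertain point.
    # Assumes a 2D Gaussian positional error model (an illustrative choice).
    import numpy as np

    def range_query_confidence(mean_xy, sigma, rect, n_samples=100_000, seed=0):
        """Probability that a point with Gaussian positional uncertainty falls
        inside an axis-aligned query rectangle (xmin, ymin, xmax, ymax)."""
        rng = np.random.default_rng(seed)
        samples = rng.normal(loc=mean_xy, scale=sigma, size=(n_samples, 2))
        xmin, ymin, xmax, ymax = rect
        inside = (
            (samples[:, 0] >= xmin) & (samples[:, 0] <= xmax) &
            (samples[:, 1] >= ymin) & (samples[:, 1] <= ymax)
        )
        return inside.mean()

    # A GPS fix near the edge of the query window: the answer is not a crisp
    # yes/no but a confidence value attached to the query result.
    print(range_query_confidence(mean_xy=(10.0, 20.0), sigma=0.5,
                                 rect=(9.0, 19.0, 10.2, 21.0)))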
Award ID(s):
1637541
NSF-PAR ID:
10036539
Author(s) / Creator(s):
Date Published:
Journal Name:
33rd IEEE International Conference on Data Engineering (ICDE)
Page Range / eLocation ID:
1467 to 1470
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Adoption of data- and compute-intensive research in geosciences is hindered by the same social and technological barriers as in other science disciplines - we're humans after all. As a result, many of the new opportunities to advance science in today's rapidly evolving technology landscape are not approachable by domain geoscientists. Organizations must acknowledge and actively mitigate these intrinsic biases and knowledge gaps in their users and staff. Over the past ten years, CyVerse (www.cyverse.org) has carried out the mission "to design, deploy, and expand a national cyberinfrastructure for life sciences research, and to train scientists in its use." During this time, CyVerse has supported and enabled transdisciplinary collaborations across institutions and communities, overseen many successes, and encountered failures. Our lessons learned in user engagement, both social and technical, are germane to the problems facing the geoscience community today. A key element of overcoming social barriers is to set up an effective education, outreach, and training (EOT) team to drive initial adoption as well as continued use. A strong EOT group can reach new users, particularly those in under-represented communities, reduce power-distance relationships, and mitigate users' uncertainty avoidance toward adopting new technology. Timely user support across the life of a project, based on mutual respect between the developers' and researchers' different skill sets, is critical to successful collaboration. Without support, users become frustrated and abandon research questions whose technical issues require solutions that are 'simple' from a developer's perspective but unknown to the scientist. At CyVerse, we have found there is no one solution that fits all research challenges. Our strategy has been to maintain a system of systems (SoS) where users can choose 'lego blocks' to build a solution that matches their problem. This SoS ideology has allowed CyVerse users to extend and scale workflows without becoming entangled in problems that reduce productivity and slow scientific discovery. Likewise, CyVerse addresses the handling of data through its entire lifecycle, from creation to publication to future reuse, supporting community-driven big data projects and individual researchers.
  2.
    The Chicxulub impact crater, on the Yucatán Peninsula of México, is unique. It is the only known terrestrial impact structure that has been directly linked to a mass extinction event and the only terrestrial impact with a global ejecta layer. Of the three largest impact structures on Earth, Chicxulub is the best preserved. Chicxulub is also the only known terrestrial impact structure with an intact, unequivocal topographic peak ring. Chicxulub's role in the Cretaceous/Paleogene (K-Pg) mass extinction and its exceptional state of preservation make it an important natural laboratory for the study of both large impact crater formation on Earth and other planets and the effects of large impacts on the Earth's environment and ecology. Our understanding of the impact process is far from complete, and despite more than 30 years of intense debate, we are still striving to answer the question of why this impact was so catastrophic.

    During International Ocean Discovery Program (IODP) and International Continental Scientific Drilling Program (ICDP) Expedition 364, Paleogene sedimentary rocks and lithologies that make up the Chicxulub peak ring were cored to investigate (1) the nature and formational mechanism of peak rings, (2) how rocks are weakened during large impacts, (3) the nature and extent of post-impact hydrothermal circulation, (4) the deep biosphere and habitability of the peak ring, and (5) the recovery of life in a sterile zone. Other key targets included sampling the transition through a rare midlatitude Paleogene sedimentary succession that might include Eocene and Paleocene hyperthermals and/or the Paleocene/Eocene Thermal Maximum (PETM); the composition and character of suevite, impact melt rock, and basement rocks in the peak ring; the sedimentology and stratigraphy of the Paleocene–Eocene Chicxulub impact basin infill; the geo- and thermochronology of the rocks forming the peak ring; and any observations from the core that may help constrain the volume of dust and climatically active gases released into the stratosphere by this impact. Petrophysical properties measurements on the core and wireline logs acquired during Expedition 364 will be used to calibrate geophysical models, including seismic reflection and potential field data, and the integration of all the data will calibrate models for impact crater formation and environmental effects.

    The drilling directly contributes to IODP Science Plan goals. Climate and Ocean Change: How does Earth's climate system respond to elevated levels of atmospheric CO2? How resilient is the ocean to chemical perturbations? The Chicxulub impact represents an external forcing event that caused a 75% species-level mass extinction. The impact basin may also record key hyperthermals within the Paleogene. Biosphere Frontiers: What are the origin, composition, and global significance of subseafloor communities? What are the limits of life in the subseafloor? How sensitive are ecosystems and biodiversity to environmental change? Impact craters can create habitats for subsurface life, and Chicxulub may provide information on potential habitats for life, including extremophiles, on the early Earth and other planetary bodies. Paleontological and geochemical studies at ground zero will document how large impacts affect ecosystems and biodiversity. Earth Connections/Earth in Motion: What mechanisms control the occurrence of destructive earthquakes, landslides, and tsunami?
Drilling into the uplifted rocks that form the peak ring will be used to ground-truth numerical simulations and to model impact-generated tsunami, and deposits on top of the peak ring and around the Gulf of México will inform us about earthquakes, landslides, and tsunami generated by Chicxulub. These data will collectively help us understand how impact processes are recorded in the geologic record and their potential hazards. IODP Expedition 364 was a Mission Specific Platform expedition designed to obtain subseabed samples and downhole logging measurements from the post-impact sedimentary succession and the peak ring of the Chicxulub impact crater. A single borehole (Hole M0077A) was drilled into the Chicxulub impact crater on the Yucatán continental shelf, recovering core from 505.70 to 1334.69 meters below seafloor (mbsf) with ~99% core recovery. Downhole logs were acquired for the entire depth of the borehole.
  3.
    Large-scale real-time analytics services continuously collect and analyze data from end-user applications and devices distributed around the globe. Such analytics requires data to be transferred over the wide-area network (WAN) to data centers (DCs) capable of processing the data. Since WAN bandwidth is expensive and scarce, it is beneficial to reduce WAN traffic by partially aggregating the data closer to end-users. We propose aggregation networks for performing aggregation on a geo-distributed edge-cloud infrastructure consisting of edge servers and transit and destination DCs. We identify a rich set of research questions aimed at reducing the traffic costs in an aggregation network. We present an optimization formulation for solving these questions in a principled manner, and use insights from the optimization solutions to propose an efficient, near-optimal practical heuristic. We implement the heuristic in AggNet, built on top of Apache Flink. We evaluate our approach using a geo-distributed deployment on Amazon EC2 as well as a WAN-emulated local testbed. Our evaluation using real-world traces from Twitter and Akamai shows that our approach is able to achieve a 47% to 83% reduction in traffic cost over existing baselines without any compromise in timeliness.
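    The core idea, partial aggregation at the edge before data crosses the WAN, can be illustrated with a toy sketch. This is not the authors' AggNet/Flink implementation; the two-server topology, the record counts, and the keyed-count query are all made-up assumptions for illustration.

        # Toy sketch: how partial aggregation at edge servers shrinks WAN
        # traffic for a keyed count query (illustrative, not AggNet itself).
        from collections import Counter

        def wan_records_without_aggregation(edge_streams):
            # Every raw record crosses the WAN to the destination DC.
            return sum(len(stream) for stream in edge_streams)

        def wan_records_with_aggregation(edge_streams):
            # Each edge server first collapses its stream into one partial
            # count per key, so only (key, count) pairs cross the WAN.
            partials = [Counter(stream) for stream in edge_streams]
            wan_records = sum(len(p) for p in partials)
            # The destination DC merges the partials into the final result.
            final = Counter()
            for p in partials:
                final.update(p)
            return wan_records, final

        streams = [["a", "b", "a"] * 1000, ["b", "c"] * 1500]  # two edge servers
        print(wan_records_without_aggregation(streams))   # 6000 raw records
        wan, result = wan_records_with_aggregation(streams)
        print(wan, dict(result))                          # 4 aggregated records

    The same count is recovered at the destination DC, but only four (key, count) pairs cross the WAN instead of six thousand raw records; the interesting research questions arise when edges forward through transit DCs with heterogeneous link costs.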
  4.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.

    Introduction: This decade has seen an ever-growing number of scientific fields benefitting from the advances in machine learning technology and tooling. More recently, this trend reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is even further restricted in the context of any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally in IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.

    The challenge: enabling wide participation. With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles: (1) No participant should need to have in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist). (2) No participant should need to be an expert data scientist. (3) No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently). (4) No participant should need to provide their own computing resources. In addition to the above, our platform should further guide participants through the entire process from sign-up to model submission, facilitate collaboration, and provide instant feedback to the participants through data visualisation and intermediate online leaderboards.

    The platform. The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant that allowed users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run the code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data storage to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participant's code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.

    Figure 1: High-level architecture of the challenge platform.

    Measuring success. The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, with a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran algorithms on IBM's infrastructure WML. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performances that would reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was to understand whether we managed to encourage participation from non-expert data scientists.

    Figure 2: Primary occupation as reported by challenge participants.

    Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in data science, software engineering, and neuroscience combined.

    Conclusion. Given the growing complexity of data science problems and increasing dataset sizes, it is imperative to enable collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
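    The starter-kit structure described above, with dedicated slots for custom pre-processing, a model, and post-processing, can be sketched as follows. This is a hypothetical skeleton, not the IBM notebook: the stand-in functions, the naive amplitude-based score, the threshold, and the toy data shape are all invented for illustration, and the WML/COS integration and TUH data format are not reproduced.

        # Hypothetical starter-kit-style skeleton (illustrative only).
        import numpy as np

        def preprocess(eeg_window: np.ndarray) -> np.ndarray:
            # Participants plugged custom pre-processing in here; as a
            # stand-in, z-score each channel of the window.
            mean = eeg_window.mean(axis=-1, keepdims=True)
            std = eeg_window.std(axis=-1, keepdims=True) + 1e-8
            return (eeg_window - mean) / std

        def model(features: np.ndarray) -> float:
            # Stand-in "model": mean absolute amplitude as a naive seizure
            # score. Teams replaced this slot with TensorFlow/PyTorch models
            # executed on WML's shared GPUs.
            return float(np.abs(features).mean())

        def postprocess(score: float, threshold: float = 1.0) -> int:
            # Map the raw score to a binary seizure/no-seizure label.
            return int(score > threshold)

        # Toy 2-channel, 256-sample EEG window in place of the TUH recordings.
        window = np.random.randn(2, 256)
        print(postprocess(model(preprocess(window))))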