Title: Data Analytics and Computational Thinking Skills in Construction Engineering and Management Education: A Conceptual System
Data analytics and computational thinking are essential for processing and analyzing data from sensors and presenting the results in formats suitable for decision-making. However, most undergraduate construction engineering and management students struggle to understand the required computational concepts and workflows because they lack the theoretical foundations. This has resulted in a shortage of a skilled workforce equipped with the competencies required to develop sustainable solutions with sensor data. End-user programming environments give students a means to execute complex analyses through visual programming mechanics. With end-user programming, students can more easily formulate problems, logically organize and analyze sensor data, represent data through abstractions, and adapt the results to a wide variety of problems. This paper presents a conceptual system, based on end-user programming and grounded in the Learning-for-Use theory, that can equip construction engineering and management students with the competencies needed to implement sensor data analytics in the construction industry. The system allows students to specify algorithms by directly interacting with data and objects in order to analyze sensor data and generate information to support decision-making in construction projects. An envisioned scenario demonstrates the potential of the system to advance students' data analytics and computational thinking skills. The study contributes to existing knowledge on the application of computational thinking and data analytics paradigms in construction engineering education.
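To make the kind of workflow described above concrete, here is a minimal Python sketch (not taken from the paper) of a sensor-analytics pipeline that an end-user programming environment might generate behind the scenes; the file name, column names, and threshold are hypothetical.

```python
# Illustrative only: a minimal sensor-analytics pipeline of the kind a
# visual, end-user programming environment might generate behind the scenes.
# The file name, column names, and threshold are hypothetical.
import pandas as pd

def summarize_sensor_data(csv_path: str, threshold: float = 85.0) -> pd.DataFrame:
    """Load timestamped sensor readings, aggregate hourly, flag exceedances."""
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])
    hourly = (
        df.set_index("timestamp")
          .resample("1h")["temperature_f"]
          .agg(["mean", "max"])
    )
    # Abstraction step: reduce raw readings to a decision-ready flag.
    hourly["exceeds_limit"] = hourly["max"] > threshold
    return hourly

if __name__ == "__main__":
    report = summarize_sensor_data("jobsite_sensors.csv")
    print(report[report["exceeds_limit"]])
```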
Award ID(s):
2111003
NSF-PAR ID:
10415328
Author(s) / Creator(s):
Date Published:
Journal Name:
Construction Research Congress 2022
Page Range / eLocation ID:
204 to 213
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Systems understanding is a skill required to solve many of the world's most important problems, from climate change to immunotherapy to social decision-making. However, these problems also require communication among experts with diverse skill sets and academic backgrounds. Our long-term goal is to facilitate systems understanding across a range of disciplines through end-user computational modeling tools. This paper presents the Ceptre Editor, a structure editor for the rule-based programming language Ceptre. The Ceptre Editor runs in the browser and offers a visual interface and integrated development environment for Ceptre, following design recommendations from end-user programming, with the goals of providing discoverable affordances for program construction and maintaining syntactic well-formedness at each edit state. We performed a preliminary evaluation of the tool through a qualitative study, assessing the editor's effectiveness at helping users understand and extend a system model, and found promising results regarding learnability and mental model accuracy.
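Ceptre code itself does not appear in the abstract; as a rough illustration of the execution model underlying rule-based languages like Ceptre, here is a minimal Python sketch of multiset rewriting, with the state and rules invented for the example.

```python
# Illustrative sketch (not from the paper): multiset rewriting, the execution
# model behind rule-based languages such as Ceptre. Rules consume facts from
# a multiset state and produce new facts; execution stops at quiescence.
from collections import Counter
import random

# Hypothetical rules: (name, facts consumed, facts produced)
RULES = [
    ("eat", Counter({"hungry": 1, "food": 1}), Counter({"full": 1})),
    ("nap", Counter({"full": 1}),              Counter({"rested": 1})),
]

def applicable(state: Counter, needs: Counter) -> bool:
    return all(state[f] >= n for f, n in needs.items())

def step(state: Counter) -> Counter:
    choices = [r for r in RULES if applicable(state, r[1])]
    if not choices:
        return state  # quiescence: no rule can fire
    name, consumed, produced = random.choice(choices)
    print(f"fired: {name}")
    return state - consumed + produced

state = Counter({"hungry": 1, "food": 2})
for _ in range(5):
    state = step(state)
print(dict(state))
```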
2. Sustainable provision of food, energy, and clean water requires understanding of the interdependencies among systems as well as the motivations and incentives of farmers and rural policy makers. Agriculture lies at the heart of interactions among food, energy, and water systems. It is an increasingly energy-intensive enterprise, but it is also a growing source of energy. Agriculture places large demands on water supplies, while poor practices can degrade water quality. Each of these interactions creates opportunities for modeling driven by sensor-based and qualitative data collection to improve the effectiveness of system operation and control in the short term as well as investments and planning for the long term. The large volume and complexity of the data collected create challenges for decision support and stakeholder communication. The DataFEWSion National Research Traineeship program aims to build a community of researchers that explores, develops, and implements effective data-driven decision-making to efficiently produce food, transform primary energy sources into energy carriers, and enhance water quality. The initial cohort includes PhD students in agricultural and biosystems, chemical, and industrial engineering as well as statistics and crop production and physiology. The project aims to prepare trainees for multiple career paths such as research scientist, bioeconomy entrepreneur, agribusiness leader, policy maker, agriculture analytics specialist, and professor. The traineeship has four key components. First, trainees will complete a new graduate certificate to build competencies in fundamental understanding of interactions among food production, water quality, and bioenergy; data acquisition, visualization, and analytics; complex systems modeling for decision support; and the economics, policy, and sociology of the FEW nexus. Second, they will conduct interdisciplinary research on (a) technologies and practices to increase agriculture's contributions to energy supply while reducing its negative impacts on water quality and human health; (b) data science to increase crop productivity within the constraints of sustainable intensification; or (c) decision sciences to manage tradeoffs and promote best practices among diverse stakeholders. Third, they will participate in a new graduate learning community consisting of a two-year series of workshops that focus in alternate years on the context of the Midwest agricultural FEW nexus and on professional development. Fourth, they will have small-group experiences to promote collaboration and peer review. Each trainee will create and curate a portfolio that combines artifacts from coursework and research with reflections on the broader impacts of their work. Trainee recruitment emphasizes women and underrepresented groups.
3. The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem
Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†
* These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.
Introduction
This decade has seen an ever-growing number of scientific fields benefitting from advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further for any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.
The challenge: enabling wide participation
With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
1. No participant should need in-depth knowledge of the specific domain (i.e., no participant should need to be a neuroscientist or epileptologist).
2. No participant should need to be an expert data scientist.
3. No participant should need more than basic programming knowledge (i.e., no participant should need to learn how to process fringe data formats and stream data efficiently).
4. No participant should need to provide their own computing resources.
In addition to the above, our platform should further:
• guide participants through the entire process from sign-up to model submission,
• facilitate collaboration, and
• provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
The platform
The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information, such as timelines and challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for the invisible integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data storage location to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage from which it requested recorded data and to which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted in an automated way as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
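The starter kit itself is not reproduced in the abstract; the following Python sketch (not from the paper) mimics the notebook structure described above, with dedicated spots for pre-processing, modelling, and post-processing. Every helper name and the synthetic data are invented for illustration, and the actual kit's COS/WML integration is not shown.

```python
# Hypothetical sketch of a starter-kit-style notebook as described above.
# All helper functions, names, and data here are invented for illustration;
# the actual IBM starter kit and its COS/WML integration are not public.
import numpy as np

def load_eeg_segments(n_segments: int = 100, n_samples: int = 2500) -> np.ndarray:
    """Stand-in for the kit's data loader (which streamed data from COS)."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((n_segments, n_samples))

# --- Dedicated spot 1: custom pre-processing -------------------------------
def preprocess(segments: np.ndarray) -> np.ndarray:
    # Simple per-segment normalisation as a placeholder.
    return (segments - segments.mean(axis=1, keepdims=True)) / (
        segments.std(axis=1, keepdims=True) + 1e-8
    )

# --- Dedicated spot 2: machine learning model ------------------------------
def predict_seizure(segments: np.ndarray) -> np.ndarray:
    # Placeholder detector: flag segments with unusually high signal energy.
    energy = (segments ** 2).mean(axis=1)
    return (energy > np.percentile(energy, 90)).astype(int)

# --- Dedicated spot 3: post-processing --------------------------------------
def smooth_labels(labels: np.ndarray) -> np.ndarray:
    # Suppress isolated single-segment detections.
    out = labels.copy()
    for i in range(1, len(labels) - 1):
        if labels[i] and not labels[i - 1] and not labels[i + 1]:
            out[i] = 0
    return out

labels = smooth_labels(predict_seizure(preprocess(load_eeg_segments())))
print(f"flagged segments: {labels.sum()}")
```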
Figure 1: High-level architecture of the challenge platform.
Measuring success
The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for 6 months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran their algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best performing solutions reached seizure detection performance that could reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently in preparation for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.
Figure 2: Primary occupation as reported by challenge participants.
Out of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being a Software Engineer, and 2 people had expertise in Neuroscience. Figure 2 shows that participants had a variety of specialisations, including some that are in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.
Conclusion
Given the growing complexity of data science problems and ever-increasing dataset sizes, solving these problems requires enabling collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep-learning challenge for epileptic seizure detection. 87 IBM employees from several business units, including but not limited to IBM Research, with a variety of skills, including sales and design, participated in this highly technical challenge.
4. There have been numerous demands for enhancements in the way undergraduate learning occurs today, especially at a time when the value of higher education continues to be called into question (The Boyer 2030 Commission, 2022). One type of demand has been for the increased integration of subjects/disciplines around relevant issues/topics, with a more recent trend of seeking transdisciplinary learning experiences for students (Sheets, 2016; American Association for the Advancement of Science, 2019). Transdisciplinary learning can be viewed as a holistic way of working equally across disciplines, transcending individual disciplinary boundaries to form new conceptual understandings and develop new ways to address complex topics or challenges (Ertas, Maxwell, Rainey, & Tanik, 2003; Park & Son, 2010). This transdisciplinary approach is important because humanity's problems are not typically discipline-specific; they require a convergence of competencies that leads to innovative thinking across fields of study. However, higher education continues to be siloed, which makes the authentic teaching of converging topics, such as innovation, human-technology interactions, climate concerns, or harnessing the data revolution, organizationally difficult (Birx, 2019; Serdyukov, 2017). For example, working across a university's academic units to collaboratively teach, or co-teach, around topics of convergence is likely to be rejected by university systems built upon longstanding traditions. While disciplinary expertise is necessary and one of higher education's strengths, the structures and academic rigidity that come along with disciplinary silos can prevent modifications/improvements to the roles of academic units/disciplines that could better prepare students for the future of both work and learning. Balancing disciplinary structure with transdisciplinary approaches to solving problems and learning is a challenge that must be persistently addressed. These institutional challenges will continue to limit universities that seek to scale transdisciplinary programs and experiment with novel ways to enhance the value of higher education for students and society. This in turn restricts innovation in teaching and hinders the sharing of important practices across disciplines. To address these concerns, a National Science Foundation Improving Undergraduate STEM Education project team, which is the topic of this paper, has set the goal of developing, implementing, and testing an authentically transdisciplinary and scalable educational model in an effort to help guide the transformation of traditional undergraduate learning to span academic silos. This educational model, referred to as the Mission, Meaning, Making (M3) program, is specifically focused on teaching the crosscutting practices of innovation by (a) implementing co-teaching and co-learning from faculty and students across different academic units/colleges and (b) offering learning experiences spanning multiple semesters that immerse students in a community that can nourish both their learning and their innovative ideas. As a collaborative initiative, the M3 program is designed to synergize key strengths of an institution's engineering/technology, liberal arts, and business colleges/units to create a transformative undergraduate experience focused on the pursuit of innovation, one that reaches the broader campus community regardless of students' backgrounds or majors.
Throughout the development of this model, research was conducted to identify institutional barriers to creating such a cross-college program at a research-intensive public university, along with ways to address those barriers. While data can show that students value and enjoy transdisciplinary experiences, universities are not likely to be structured to support these educational initiatives, which will face challenges throughout their lifespan. These challenges can result from administration turnover, whereby mutual agreements across colleges may vanish; from continued disputes over academic territory; and from conflicts over resource allotments. Essentially, there may be little to no incentive for academic departments to engage in transdisciplinary programming within the existing structures of higher education. However, some insights and practices have emerged from this research project that can be useful in moving toward transdisciplinary learning around topics of convergence. Accordingly, the paper will highlight features of an educational model that spans disciplines, along with workarounds to current institutional barriers. The paper will also provide lessons learned related to 1) the potential pitfalls of educational programming becoming "un-disciplinary" rather than transdisciplinary, 2) ways to incentivize departments/faculty to engage in transdisciplinary efforts, and 3) new structures within higher education that can help faculty, students, and staff more easily converge to increase access to learning across academic boundaries.
5. As cloud-based web services become more and more capable, available, and powerful (CAP), data science and engineering are pulled toward the front line, because DATA can mean almost anything-as-a-service (XaaS) via Digital Archiving and Transformed Analytics. In general, a web service (via a website) serves customers with web documents in HTML, JSON, XML, and multimedia through interactive (request) and responsive (reply) exchanges for domain-specific problem solving over the Internet. In particular, a web service is deeply involved with UI & UX (user interface and user experience) plus considerate regulation of QoS (Quality of Service), which refers to both information synthesis and security, namely availability and reliability, for providential web services. This paper, based on the novel wiseCIO as a Platform-as-a-Service (PaaS), presents digital archiving and transformed analytics (DATA) via machine learning, one of the most practical aspects of artificial intelligence. Machine learning is the science of data analysis that automates analytical model building and online analytical processing (OLAP), enabling computers to act without being explicitly programmed, through CTMP. Computational thinking combined with manageable processing is thoroughly discussed and utilized for FAST solutions in a feasible, analytical, scalable, and testable approach. DATA is central to information synthesis and analytics (ISA), and digitized archives play a key role in transformed analytics on intelligence for business, education, and entertainment (iBEE). Case studies are discussed as applicable examples across broad fields where archival digitization is required for analytical transformation via machine learning, such as scalable ARM (archival repository for manageable accessibility), visual BUS (biological understanding from STEM), schooling DIGIA (digital intelligence governing instruction and administering), viewable HARP (historical archives & religious preachings), vivid MATH (mathematical apps in teaching and hands-on exercise), and SHARE (studies via hands-on assignment, revision and evaluation). As a result, wiseCIO promotes DATA service by providing ubiquitous web services of analytical processing via a universal interface and user-centric experience, in favor of the logical organization of web content and relational information groupings that are vital to an archivist's or librarian's ability to recommend and retrieve information for a researcher. More importantly, wiseCIO also plays a key role as a content management system and delivery platform with the capacity to host 10,000+ traditional web pages with great ease.
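As a generic illustration of the request/reply web-service pattern described above (not part of wiseCIO, whose implementation is not shown in the abstract), the following Python sketch serves a small JSON document using only the standard library; the port, handler, and payload are invented.

```python
# Generic sketch of the interactive (request) / responsive (reply) pattern
# described above, using only Python's standard library. The port, handler,
# and payload are invented for illustration and are not part of wiseCIO.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ArchiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply to any GET request with a small JSON document, as a web
        # service might when serving transformed-analytics results.
        body = json.dumps({"archive": "demo", "records": 3}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ArchiveHandler).serve_forever()
```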