

Title: Enhancing the quality and social impacts of urban planning through community-engaged operations research
While inquiry into operations research (OR) modeling of urban planning processes is long-standing, the OR discipline as a whole has not influenced urban planning practice, teaching, and scholarship at the level of other domains such as public policy and information technology. Urban planning presents contemporary challenges that are complex, multi-stakeholder, data-intensive, and ill-structured. Could an OR approach that focuses on the complex, emergent nature of cities and the institutional environment in which urban planning strategies are designed and implemented, and that puts citizen engagement and a critical approach at the center, enable urban planning to better meet these challenges? Based on a review of research and practice in OR and urban planning, we argue that a prospective and prescriptive approach to planning that is inductive in nature and embraces “methodological pluralism” and mixed methods can enable researchers and practitioners to develop effective interventions that are equitable and that reflect the concerns of community members and community-serving organizations. We discuss recent work in transportation, housing, and community development that illustrates the benefits of embracing an enhanced OR modeling approach both in the framing of the model and in its implementation, while bringing three cautionary themes to the fore. First, a mechanistic application of decision-modeling principles rooted in stylized representations of institutions and systems, using mathematics and computational methods, may not adequately capture the central role that human actors play in developing neighborhoods and communities. Second, as the mass adoption of automobiles decades ago and the auto-centric city design it produced show, technological innovations can have unanticipated negative social impacts. Third, the current COVID-19 pandemic shows that approaches based on science and technology alone are inadequate for improving community lives.
Therefore, we emphasize the important role of critical approaches, community engagement, and diversity, equity, and inclusion in planning approaches that incorporate decision modeling.
Journal Name: Environment and Planning B: Urban Analytics and City Science
Page Range or eLocation-ID: 1167 to 1183
Sponsoring Org: National Science Foundation
More Like this
  1. Obeid, Iyad; Selesnick, Ivan (Ed.)
    Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is a great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than 1 false alarm (FA) per 24 hours [3]. Some commercial tools recently claim to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time or online seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must always be causal and have minimal latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficients (LFCC) [6] features from each EEG channel separately. We use the hypotheses generated by the P1 model and create additional features that carry information about the detected events and their confidence. The P2 model uses these additional features and the LFCC features to learn the temporal and spatial aspects of the EEG signals using a hybrid convolutional neural network (CNN) and LSTM model. Finally, Phase 3 aggregates the results from both P1 and P2 before applying a final postprocessing step. The online system implements Phase 1 by taking advantage of the Linux piping mechanism, multithreading techniques, and multi-core processors.
To convert Phase 1 into an online system, we divide the system into five major modules: signal preprocessor, feature extractor, event decoder, postprocessor, and visualizer. The system reads 0.1-second frames from each EEG channel and sends them to the feature extractor and the visualizer. The feature extractor generates LFCC features in real time from the streaming EEG signal. Next, the system computes seizure and background probabilities using a channel-based LSTM model and applies a postprocessor to aggregate the detected events across channels. The system then displays the EEG signal and the decisions simultaneously using a visualization module. The online system uses C++, Python, TensorFlow, and PyQtGraph in its implementation. The online system accepts streamed EEG data sampled at 250 Hz as input. The system begins processing the EEG signal by applying a TCP montage [8]. Depending on the type of the montage, the EEG signal can have either 22 or 20 channels. To enable online operation, we send 0.1-second (25-sample) frames from each channel of the streamed EEG signal to the feature extractor and the visualizer. Feature extraction is performed sequentially on each channel. The signal preprocessor writes the sample frames into two streams to serve these modules. In the first stream, the feature extractor receives the signals using stdin. In parallel, as a second stream, the visualizer shares a user-defined file with the signal preprocessor. This user-defined file holds raw signal information as a buffer for the visualizer. The signal preprocessor writes into the file while the visualizer reads from it. Reading and writing into the same file poses a challenge: the visualizer can start reading while the signal preprocessor is still writing. To resolve this issue, we utilize a file locking mechanism in the signal preprocessor and visualizer.
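The file-locking pattern described above can be sketched in Python (one of the languages the system uses). This is an illustrative reimplementation assuming a POSIX platform; the function names and retry interval are hypothetical, not the authors' actual code:

```python
# Hedged sketch of advisory file locking between a writer (signal
# preprocessor) and a reader (visualizer) sharing one buffer file.
# Assumes a POSIX system with flock(2); names are illustrative.
import fcntl
import time

def locked_write(path, frame_bytes, retry_delay=0.01):
    """Append a sample frame to the shared signal file under an exclusive lock."""
    while True:
        with open(path, "ab") as f:
            try:
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except BlockingIOError:
                # Another process holds the lock; wait and retry.
                time.sleep(retry_delay)
                continue
            f.write(frame_bytes)
            fcntl.flock(f, fcntl.LOCK_UN)
            return

def locked_read(path, retry_delay=0.01):
    """Read the whole buffer under a shared lock (visualizer side)."""
    while True:
        with open(path, "rb") as f:
            try:
                fcntl.flock(f, fcntl.LOCK_SH | fcntl.LOCK_NB)
            except BlockingIOError:
                time.sleep(retry_delay)
                continue
            data = f.read()
            fcntl.flock(f, fcntl.LOCK_UN)
            return data
```

Because `flock` locks are advisory, both processes must cooperate by acquiring the lock before touching the file, which matches the lock/operate/release/wait cycle the abstract describes.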
Each of the processes temporarily locks the file, performs its operation, releases the lock, and tries to obtain the lock after a waiting period. The file locking mechanism ensures that only one process can access the file by prohibiting other processes from reading or writing while one process is modifying the file [9]. The feature extractor uses circular buffers to save 0.3 seconds or 75 samples from each channel for extracting 0.2-second or 50-sample long center-aligned windows. The module generates 8 absolute LFCC features where the zeroth cepstral coefficient is replaced by a temporal domain energy term. For extracting the rest of the features, three pipelines are used. The differential energy feature is calculated in a 0.9-second absolute feature window with a frame size of 0.1 seconds. The difference between the maximum and minimum temporal energy terms is calculated in this range. Then, the first derivative or the delta features are calculated using another 0.9-second window. Finally, the second derivative or delta-delta features are calculated using a 0.3-second window [6]. The differential energy for the delta-delta features is not included. In total, we extract 26 features from the raw sample windows which add 1.1 seconds of delay to the system. We used the Temple University Hospital Seizure Database (TUSZ) v1.2.1 for developing the online system [10]. The statistics for this dataset are shown in Table 1. A channel-based LSTM model was trained using the features derived from the train set using the online feature extractor module. A window-based normalization technique was applied to those features. In the offline model, we scale features by normalizing using the maximum absolute value of a channel [11] before applying a sliding window approach. Since the online system has access to a limited amount of data, we normalize based on the observed window. The model uses the feature vectors with a frame size of 1 second and a window size of 7 seconds. 
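A minimal Python sketch of the per-channel circular buffer and the window-based normalization described above. The buffer sizes follow the stated 0.3-second/75-sample and 0.2-second/50-sample figures at 250 Hz, but the class and function names are hypothetical, not the system's code:

```python
# Hedged sketch: circular buffer for one EEG channel plus window-based
# normalization (scaling by the max absolute value of the observed window,
# since the online system cannot see the whole recording).
import numpy as np
from collections import deque

BUF_LEN = 75   # 0.3 s at 250 Hz
WIN_LEN = 50   # 0.2 s center-aligned analysis window

class ChannelBuffer:
    """Holds the latest 0.3 s of one channel; old samples fall off the front."""
    def __init__(self):
        self.buf = deque(maxlen=BUF_LEN)

    def push(self, frame):
        self.buf.extend(frame)  # frame = 25 new samples (one 0.1-s frame)

    def window(self):
        """Return the center-aligned 0.2-s window once the buffer is full."""
        if len(self.buf) < BUF_LEN:
            return None
        samples = np.asarray(self.buf, dtype=float)
        start = (BUF_LEN - WIN_LEN) // 2
        return samples[start:start + WIN_LEN]

def window_normalize(x, eps=1e-8):
    """Scale by the max absolute value of this window only."""
    return x / (np.max(np.abs(x)) + eps)
```

The `deque(maxlen=...)` gives the circular-buffer behavior for free: pushing a new 25-sample frame silently discards the oldest 25 samples once the 75-sample capacity is reached.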
We evaluated the model using the offline P1 postprocessor to determine the efficacy of the delayed features and the window-based normalization technique. As shown by the results of experiments 1 and 4 in Table 2, these changes give us a comparable performance to the offline model. The online event decoder module utilizes this trained model for computing probabilities for the seizure and background classes. These posteriors are then postprocessed to remove spurious detections. The online postprocessor receives and saves 8 seconds of class posteriors in a buffer for further processing. It applies multiple heuristic filters (e.g., probability threshold) to make an overall decision by combining events across the channels. These filters evaluate the average confidence, the duration of a seizure, and the channels where the seizures were observed. The postprocessor delivers the label and confidence to the visualizer. The visualizer starts to display the signal as soon as it gets access to the signal file, as shown in Figure 1 using the “Signal File” and “Visualizer” blocks. Once the visualizer receives the label and confidence for the latest epoch from the postprocessor, it overlays the decision and color codes that epoch. The visualizer uses red for seizure with the label SEIZ and green for the background class with the label BCKG. Once the streaming finishes, the system saves three files: a signal file in which the sample frames are saved in the order they were streamed, a time segmented event (TSE) file with the overall decisions and confidences, and a hypotheses (HYP) file that saves the label and confidence for each epoch. The user can plot the signal and decisions using the signal and HYP files with only the visualizer by enabling appropriate options. For comparing the performance of different stages of development, we used the test set of TUSZ v1.2.1 database. It contains 1015 EEG records of varying duration. 
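The heuristic post-processing filters mentioned above (probability threshold, minimum event duration, average confidence) might look roughly like this in Python; the threshold and duration values are illustrative placeholders, not the published system's settings:

```python
# Hedged sketch of per-channel event post-processing: threshold the
# per-frame seizure posteriors, keep only sufficiently long runs, and
# report each event's average confidence. Parameter values are illustrative.
import numpy as np

def postprocess(posteriors, threshold=0.5, min_dur_s=1.0, frame_s=0.1):
    """Return (start_s, end_s, mean_confidence) tuples for detected events."""
    events = []
    above = posteriors >= threshold
    start = None
    # Append a False sentinel so a run reaching the end is still closed out.
    for i, a in enumerate(np.append(above, False)):
        if a and start is None:
            start = i
        elif not a and start is not None:
            dur = (i - start) * frame_s
            if dur >= min_dur_s:
                conf = float(np.mean(posteriors[start:i]))
                events.append((start * frame_s, i * frame_s, conf))
            start = None
    return events
```

A cross-channel decision stage would then merge overlapping per-channel events and apply further filters (e.g., how many channels agree), as the abstract describes.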
The any-overlap performance [12] of the overall system shown in Figure 2 is 40.29% sensitivity with 5.77 FAs per 24 hours. For comparison, the previous state-of-the-art model developed on this database performed at 30.71% sensitivity with 6.77 FAs per 24 hours [3]. The individual performances of the deep learning phases are as follows: Phase 1’s (P1) performance is 39.46% sensitivity and 11.62 FAs per 24 hours, and Phase 2 detects seizures with 41.16% sensitivity and 11.69 FAs per 24 hours. We trained an LSTM model with the delayed features and the window-based normalization technique for developing the online system. Using the offline decoder and postprocessor, the model performed at 36.23% sensitivity with 9.52 FAs per 24 hours. The trained model was then evaluated with the online modules. The current performance of the overall online system is 45.80% sensitivity with 28.14 FAs per 24 hours. Table 2 summarizes the performances of these systems. The performance of the online system deviates from the offline P1 model because the online postprocessor fails to combine the events as the seizure probability fluctuates during an event. The modules in the online system add a total of 11.1 seconds of delay for processing each second of the data, as shown in Figure 3. In practice, we also count the time for loading the model and starting the visualizer block. When we consider these facts, the system consumes 15 seconds to display the first hypothesis. The system detects seizure onsets with an average latency of 15 seconds. Implementing an automatic seizure detection model in real time is not trivial. We used a variety of techniques such as the file locking mechanism, multithreading, circular buffers, real-time event decoding, and signal-decision plotting to realize the system. A video demonstrating the system is available at: The final conference submission will include a more detailed analysis of the online performance of each module. 
ACKNOWLEDGMENTS
Research reported in this publication was most recently supported by the National Science Foundation Partnership for Innovation award number IIP-1827565 and the Pennsylvania Commonwealth Universal Research Enhancement Program (PA CURE). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.
REFERENCES
[1] A. Craik, Y. He, and J. L. Contreras-Vidal, “Deep learning for electroencephalogram (EEG) classification tasks: a review,” J. Neural Eng., vol. 16, no. 3, p. 031001, 2019.
[2] A. C. Bridi, T. Q. Louro, and R. C. L. Da Silva, “Clinical Alarms in intensive care: implications of alarm fatigue for the safety of patients,” Rev. Lat. Am. Enfermagem, vol. 22, no. 6, p. 1034, 2014.
[3] M. Golmohammadi, V. Shah, I. Obeid, and J. Picone, “Deep Learning Approaches for Automatic Seizure Detection from Scalp Electroencephalograms,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York, New York, USA: Springer, 2020, pp. 233–274.
[4] “CFM Olympic Brainz Monitor.” [Online]. Available: [Accessed: 17-Jul-2020].
[5] M. L. Scheuer, S. B. Wilson, A. Antony, G. Ghearing, A. Urban, and A. I. Bagic, “Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset,” J. Clin. Neurophysiol., 2020.
[6] A. Harati, M. Golmohammadi, S. Lopez, I. Obeid, and J. Picone, “Improved EEG Event Classification Using Differential Energy,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium, 2015, pp. 1–4.
[7] V. Shah, C. Campbell, I. Obeid, and J. Picone, “Improved Spatio-Temporal Modeling in Automated Seizure Detection using Channel-Dependent Posteriors,” Neurocomputing, 2021.
[8] W. Tatum, A. Husain, S. Benbadis, and P. Kaplan, Handbook of EEG Interpretation. New York City, New York, USA: Demos Medical Publishing, 2007.
[9] D. P. Bovet and M. Cesati, Understanding the Linux Kernel, 3rd ed. O’Reilly Media, Inc., 2005.
[10] V. Shah et al., “The Temple University Hospital Seizure Detection Corpus,” Front. Neuroinform., vol. 12, pp. 1–6, 2018.
[11] F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011.
[12] J. Gotman, D. Flanagan, J. Zhang, and B. Rosenblatt, “Automatic seizure detection in the newborn: Methods and initial evaluation,” Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 356–362, 1997.
  2. Reddy, S.; Winter, J.S.; Padmanabhan, S. (Ed.)
    AI applications are poised to transform health care, revolutionizing benefits for individuals, communities, and health-care systems. As the articles in this special issue aptly illustrate, AI innovations in healthcare are maturing from early success in medical imaging and robotic process automation, promising a broad range of new applications. This is evidenced by the rapid deployment of AI to address critical challenges related to the COVID-19 pandemic, including disease diagnosis and monitoring, drug discovery, and vaccine development. At the heart of these innovations is the health data required for deep learning applications. Rapid accumulation of data, along with improved data quality, data sharing, and standardization, enables development of deep learning algorithms in many healthcare applications. One of the great challenges for healthcare AI is effective governance of these data—ensuring thoughtful aggregation and appropriate access to fuel innovation and improve patient outcomes and healthcare system efficiency while protecting the privacy and security of data subjects. Yet the literature on data governance has rarely looked beyond important pragmatic issues related to privacy and security. Less consideration has been given to unexpected or undesirable outcomes of AI in healthcare, such as clinician deskilling, algorithmic bias, the “regulatory vacuum”, and lack of public engagement. Amidst growing calls for ethical governance of algorithms, Reddy et al. developed a governance model for AI in healthcare delivery, focusing on principles of fairness, accountability, and transparency (FAT), and trustworthiness, and calling for wider discussion. Winter and Davidson emphasize the need to identify underlying values of healthcare data and use, noting the many competing interests and goals for use of health data—such as healthcare system efficiency and reform, patient and community health, intellectual property development, and monetization.
Beyond the important considerations of privacy and security, governance must consider who will benefit from healthcare AI, and who will not. Whose values drive health AI innovation and use? How can we ensure that innovations are not limited to the wealthiest individuals or nations? As large technology companies begin to partner with health care systems, and as personally generated health data (PGHD) (e.g., fitness trackers, continuous glucose monitors, health information searches on the Internet) proliferate, who has oversight of these complex technical systems, which are essentially a black box? To tackle these complex and important issues, it is important to acknowledge that we have entered a new technical, organizational, and policy environment due to linked data, big data analytics, and AI. Data governance is no longer the responsibility of a single organization. Rather, multiple networked entities play a role and responsibilities may be blurred. This also raises many concerns related to data localization and jurisdiction—who is responsible for data governance? In this emerging environment, data may no longer be effectively governed through traditional policy models or instruments.
  3. Researchers, evaluators and designers from an array of academic disciplines and industry sectors are turning to participatory approaches as they seek to understand and address complex social problems. We refer to participatory approaches that collaboratively engage/partner with stakeholders in knowledge creation/problem solving for action/social change outcomes as collaborative change research, evaluation and design (CCRED). We further frame CCRED practitioners by their desire to move beyond knowledge creation for its own sake to implementation of new knowledge as a tool for social change. In March and May of 2018, we conducted a literature search of multiple discipline-specific databases seeking collaborative, change-oriented scholarly publications. The search was limited to peer-reviewed journal articles, with English language abstracts available, published in the last five years. The search resulted in 526 citations, 236 of which met inclusion criteria. Though the search was limited to English abstracts, all major geographic regions (North America, Europe, Latin America/Caribbean, APAC, Africa and the Middle East) were represented within the results, although many articles did not state a specific region. Of those identified, most studies were located in North America, with the Middle East having only one identified study. We followed a qualitative thematic synthesis process to examine the abstracts of peer-reviewed articles to identify practices that transcend individual disciplines, sectors and contexts to achieve collaborative change. We surveyed the terminology used to describe CCRED, setting, content/topic of study, type of collaboration, and related benefits/outcomes in order to discern the words used to designate collaboration, the frameworks, tools and methods employed, and the presence of action, evaluation or outcomes.
Forty-three percent of the reviewed articles fell broadly within the social sciences, followed by 26 percent in education and 25 percent in health/medicine. In terms of participants and/or collaborators in the articles reviewed, the vast majority of the 236 articles (86%) described participants, that is, those who the research was about or from whom data was collected. In contrast to participants, partners/collaborators (n=32; 14%) were individuals or groups who participated in the design or implementation of the collaborative change effort described. In terms of the goal for collaboration and/or for doing the work, the most frequently used terminology related to some aspect of engagement and empowerment. Common descriptors for the work itself were ‘social change’ (n=74; 31%), ‘action’ (n=33; 14%), ‘collaborative or participatory research/practice’ (n=13; 6%), ‘transformation’ (n=13; 6%) and ‘community engagement’ (n=10; 4%). Of the 236 articles that mentioned a specific framework or approach, the three most common were some variation of Participatory Action Research (n=30; 50%), Action Research (n=40; 16.9%) or Community-Based Participatory Research (n=17; 7.2%). Approximately a third of the 236 articles did not mention a specific method or tool in the abstract. The most commonly cited method/tool (n=30; 12.7%) was some variation of an arts-based method, followed by interviews (n=18; 7.6%), case study (n=16; 6.7%), or an ethnographic-related method (n=14; 5.9%). While some articles implied action or change, only 14 of the 236 articles (6%) stated a specific action or outcome. Most often, the changes described were: the creation or modification of a model, method, process, framework or protocol (n=9; 4%), quality improvement, policy change and social change (n=8; 3%), or modifications to education/training methods and materials (n=5; 2%).
The infrequent use of collaboration as a descriptor of partner engagement, coupled with few reported findings of measurable change, raises questions about the nature of CCRED. It appears that conducting CCRED is as complex an undertaking as the problems that the work is attempting to address.
  4. Disasters are becoming more frequent as the global climate changes, and recovery efforts require the cooperation and collaboration of experts and community members across disciplines. The DRRM program, funded through the National Science Foundation (NSF) Research Traineeship (NRT), is an interdisciplinary graduate program that brings together faculty and graduate students from across the university to develop new, transdisciplinary ways of solving disaster-related issues. The core team includes faculty from business, engineering, education, science, and urban planning fields. The overall objective of the program is to create a community of practice amongst the graduate students and faculty to improve understanding and support proactive decision-making related to disasters and disaster management. The specific educational objectives of the program are (1) context mastery and community building, (2) transdisciplinary integration and professional development, and (3) transdisciplinary research. The program’s educational research and assessment activities include program development, trainee learning and development, programmatic educational research, and institutional transformation. The program is now in its fourth year of student enrollment. Core courses on interdisciplinary research methods in disaster resilience are in place, engaging students in domain-specific research related to natural hazards, resilience, and recovery, and in methods of interdisciplinary and transdisciplinary collaboration. In addition to courses, the program offers a range of professional development opportunities through seminars and workshops. Since the program’s inception, the core team has expanded both the numbers of faculty and students and the range of academic disciplines involved in the program, including individuals from additional science and engineering fields as well as those from natural resources and the social sciences.
At the same time, the breadth of disciplines and the constraints of individual academic programs have posed substantial structural challenges in engaging students in the process of building interdisciplinary research identities and in building the infrastructure needed to sustain the program past the end of the grant. Our poster and paper will identify major program accomplishments, but also draw on interviews with students to examine the structural challenges and potential solution paths associated with a program of this breadth. Critical opportunities for sustainability and engagement have emerged through integration with a larger university-level center as well as through increased flexibility in program requirements and additional mechanisms for student and faculty collaboration.
  5. Abstract

    A growing number of cities are investing in green infrastructure to foster urban resilience and sustainability. While these nature-based solutions are often promoted on the basis of their multifunctionality, in practice, most studies and plans focus on a single benefit, such as stormwater management. This represents a missed opportunity to strategically site green infrastructure to leverage social and ecological co-benefits. To address this gap, this paper builds on existing modeling approaches for green infrastructure planning to create a more generalizable tool for comparing spatial tradeoffs and synergistic ‘hotspots’ for multiple desired benefits. I apply the model to three diverse coastal megacities: New York City, Los Angeles (United States), and Manila (Philippines), enabling cross-city comparisons for the first time. Spatial multi-criteria evaluation is used to examine how strategic areas for green infrastructure development across the cities change depending on which benefit is prioritized. GIS layers corresponding to six planning priorities (managing stormwater, reducing social vulnerability, increasing access to green space, improving air quality, reducing the urban heat island effect, and increasing landscape connectivity) are mapped and spatial tradeoffs assessed. Criteria are also weighted to reflect local stakeholders’ desired outcomes as determined through surveys and stakeholder meetings and combined to identify high priority areas for green infrastructure development. To extend the model’s utility as a decision-support tool, an interactive web-based application is developed that allows any user to change the criteria weights and visualize the resulting hotspots in real time.
The model empirically illustrates the complexities of planning green infrastructure in different urban contexts, while also demonstrating a flexible approach for more participatory, strategic, and multifunctional planning of green infrastructure in cities around the world.
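The weighted overlay at the heart of the spatial multi-criteria evaluation described above can be sketched as follows. The layer names, equal weights, and top-fraction hotspot rule here are illustrative assumptions, not the paper's actual criteria, weights, or thresholds:

```python
# Hedged sketch of spatial multi-criteria evaluation: each criterion is a
# normalized raster layer, weights reflect stakeholder priorities, and
# "hotspots" are the top fraction of cells on the weighted-sum surface.
import numpy as np

def weighted_hotspots(layers, weights, top_frac=0.1):
    """layers: dict of name -> 2-D array (normalized 0-1);
    weights: dict of name -> relative importance.
    Returns (score surface, boolean hotspot mask)."""
    names = sorted(layers)
    w = np.array([weights[n] for n in names], dtype=float)
    w = w / w.sum()                               # normalize weights to sum to 1
    stack = np.stack([layers[n] for n in names])  # shape (n_criteria, rows, cols)
    score = np.tensordot(w, stack, axes=1)        # per-cell weighted overlay
    cutoff = np.quantile(score, 1.0 - top_frac)   # top cells become hotspots
    return score, score >= cutoff
```

Re-running the function with a different `weights` dict is the batch analogue of the interactive slider adjustment the web application provides: changing the weights re-ranks the same criterion layers and shifts where the hotspots fall.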
