The rise in crime rates over the past few years is a major issue and a significant source of concern for police departments and law enforcement organizations. Crime severely harms the lives of victims and their communities in many places throughout the world; it is a matter of public order, and large cities in particular see substantial criminal activity. Many studies, media outlets, and websites publish statistics on crime and its contributing factors, such as population, unemployment, and poverty rate. This paper compares and visualizes crime data for four cities in the USA: Chicago, Baltimore, Dallas, and Denton. We assess the areas most significantly affected, broken down by zip code and by crime category. Because crime rates have shifted both upward and downward over time, these changes are compared against external factors such as population, unemployment, and poverty. The results show crime frequency and distribution across the four cities and provide valuable information about the complex relationship between social factors and criminal behavior. These outcomes can help police departments and law enforcement organizations better understand crime issues, map crime incidents onto a geographical map, and gain insight into the factors affecting crime, supporting resource deployment and their decision-making process.
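To make the kind of analysis described above concrete, the following is a minimal, hypothetical sketch (not the authors' actual code) of how per-zip-code incident counts could be joined with socioeconomic indicators and correlated using pandas; the file names and column names (`zip`, `category`, `population`, `unemployment_rate`, `poverty_rate`) are assumptions for illustration only.

```python
# Hypothetical sketch: aggregate crime incidents by zip code and relate them
# to socioeconomic factors. File and column names are illustrative only.
import pandas as pd

# Each row of crimes.csv is assumed to be one incident with a zip code and category.
crimes = pd.read_csv("crimes.csv")       # columns: city, zip, category, date
demo = pd.read_csv("demographics.csv")   # columns: zip, population, unemployment_rate, poverty_rate

# Count incidents per zip code and per crime category.
counts_by_zip = crimes.groupby("zip").size().rename("incident_count").reset_index()
counts_by_category = crimes.groupby(["zip", "category"]).size().unstack(fill_value=0)

# Join with demographic indicators and compute a crime rate per 1,000 residents.
merged = counts_by_zip.merge(demo, on="zip", how="inner")
merged["crime_rate_per_1k"] = 1000 * merged["incident_count"] / merged["population"]

# Correlate the crime rate with candidate external factors.
corr = merged[["crime_rate_per_1k", "unemployment_rate", "poverty_rate", "population"]].corr()
print(corr["crime_rate_per_1k"].sort_values(ascending=False))
```

A choropleth of `crime_rate_per_1k` keyed by zip code (built with any mapping library) would then produce the kind of geographical crime map the abstract describes.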
HS BRIDGES (https://bridgesuncc.github.io/bridges-hs/) is a collection of programming projects, including "student scaffolds" and "teacher walkthroughs", that use UNC Charlotte's BRIDGES Java Libraries (https://bridgesuncc.github.io/) to enable students to create visualizations of data structures and real-world data. In this demo, we show sample projects from the HS BRIDGES collection. We discuss the pedagogy behind the design of our instructional materials, the importance of our "teacher walkthroughs" as support for teachers who are new to computer science or new to teaching, and the meaningful learning outcomes that students achieve as they solve project problems. Programming agility and understanding of data structures flourish when engaging problem-solving challenges, scaffolded learning materials, and dynamic visualizations converge. Overall, we aim to engage session participants with HS BRIDGES projects during the session, and afterwards back home with their students. We have recently published our collection via the Web and are eager to share the joy of cool visualizations that make data come alive. This work is supported by NSF TUES and NSF IUSE.
Persistent homology is a method of data analysis based in the mathematical field of topology. Unfortunately, the run-time and memory complexities associated with computing persistent homology inhibit its general use for the analysis of big data. For example, the best tools currently available can process only a few thousand data points in R^3. Several studies have proposed using sampling or data reduction methods to attack this limit. While these approaches enable the computation of persistent homology on much larger data sets, the methods are approximate. Furthermore, while they largely preserve the large topological features, they generally miss reporting information about the small topological features present in the data set. While this abstraction is useful in many cases, there are data analysis needs where the smaller features are also significant (e.g., brain artery analysis). This paper explores a combination of data reduction and data partitioning to compute persistent homology on big data in a way that enables the identification of both large and small topological features of the input data set. To reduce the approximation errors that typically accompany data reduction for persistent homology, the described method also includes a mechanism of "upscaling" the data circumscribing the large topological features computed from the sampled data. The designed experimental method provides significant results for improving the scale at which persistent homology can be performed.
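As a rough illustration of the sample-then-partition idea (not the paper's actual pipeline), the sketch below subsamples a point cloud, computes persistence diagrams on the sample with the `ripser` package, and then recomputes persistence locally on spatial partitions so that small features missed by the global sample can still surface. The k-means-based partitioning and all parameter values are assumptions made for illustration.

```python
# Hypothetical sketch of combining data reduction (subsampling) with data
# partitioning for persistent homology. The k-means partitioning and the
# parameter choices are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans
from ripser import ripser  # assumes the ripser.py package is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 3))  # stand-in for a large point cloud in R^3

# Step 1: data reduction -- compute persistence on a random subsample.
# Large topological features tend to survive this step.
sample = X[rng.choice(len(X), size=1500, replace=False)]
global_dgms = ripser(sample, maxdim=1)["dgms"]

# Step 2: data partitioning -- split the full data spatially and compute
# persistence on each smaller piece, which can recover small local features.
n_parts = 8
labels = KMeans(n_clusters=n_parts, n_init=10, random_state=0).fit_predict(X)
local_dgms = [ripser(X[labels == k], maxdim=1)["dgms"] for k in range(n_parts)]

print("global H1 features:", len(global_dgms[1]))
print("local H1 features per partition:", [len(d[1]) for d in local_dgms])
```

Reconciling the global and local diagrams, and "upscaling" the data around the large features found on the sample, is where the paper's contribution lies; the sketch only shows the two ingredients it combines.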