Occupation Modularity and the Work Ecosystem
Occupations, like many other social systems, are hierarchical. They evolve with other elements within the work ecosystem, including technology and skills. This paper investigates the relationships among these elements using an approach that combines network theory and modular systems theory. A new method of using work-related data to build occupation networks and theorize occupation evolution is proposed. Using this technique, structural properties of occupations are discovered by way of community detection on a knowledge network built from labor statistics covering more than 900 occupations and 18,000 tasks. The occupation networks are compared across the work ecosystem as well as over time to understand the interdependencies between task components and the coevolution of occupations, tasks, technology, and skills. In addition, a set of conjectures is articulated based on observations from comparing occupation structures and their change over time.
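To make the network-building step concrete, the sketch below links occupations that share tasks and then runs community detection to expose modular structure. It is a minimal illustration, not the paper's implementation: the toy occupation-task records, the shared-task edge weighting, and the choice of networkx's greedy modularity algorithm are all assumptions made for illustration.

```python
# Minimal sketch: build an occupation network from shared tasks and
# detect communities. Data and algorithm choice are illustrative only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical O*NET-style records mapping occupations to task sets.
occupation_tasks = {
    "Journalists":         {"interview sources", "write articles", "edit copy"},
    "Technical Writers":   {"write articles", "edit copy", "document software"},
    "Software Developers": {"document software", "write code", "test code"},
}

# Occupations become nodes; edge weights count the tasks they share.
G = nx.Graph()
occupations = list(occupation_tasks)
for i, a in enumerate(occupations):
    for b in occupations[i + 1:]:
        shared = occupation_tasks[a] & occupation_tasks[b]
        if shared:
            G.add_edge(a, b, weight=len(shared))

# Community detection surfaces the modular structure of the network.
for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```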
- Publication Date:
- NSF-PAR ID: 10298088
- Journal Name: International Conference on Information Systems
- Sponsoring Org: National Science Foundation
More Like this
- As work changes, so does technology. The two coevolve as part of a work ecosystem. This paper suggests a way of plotting this coevolution by comparing the embeddings (high-dimensional vector representations) of textual descriptions of tasks, occupations, and technologies. Tight coupling between tasks and technologies, measured by the distances between vectors, is shown to be associated with high task importance. Moreover, tasks that are more prototypical in an occupation are more important. These conclusions were reached through an analysis of the 2020 data release of the Occupational Information Network (O*NET) from the U.S. Department of Labor, covering 967 occupations and 19,533 tasks. One occupation, journalism, is analyzed in depth, and conjectures are formed about the ways technologies and tasks evolve through both design and exaptation. (A minimal embedding-distance sketch appears after this list.)
- Obeid, I.; Selesnik, I.; Picone, J. (Eds.) The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors, ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and GPU should produce… (A seeding sketch for this kind of reproducibility appears after this list.)
- Computer labs are commonly used in computing education to help students reinforce the knowledge obtained in classrooms and to gain hands-on experience with specific learning subjects. While traditional computer labs are based on physical computer centers on campus, more and more virtual computer lab systems (see, e.g., [1, 2, 3, 4]) have been developed that allow students to carry out labs on virtualized resources remotely through the internet. Virtual computer labs make it possible for students to use their own computers at home, instead of relying on computer centers on campus, to work on lab assignments. However, they also make it difficult for students to collaborate, because students work remotely and there is little support for sharing and collaboration. This is in contrast to traditional computer labs, where students naturally feel the presence of their peers in a physical lab room and can easily work together and help each other if needed. Funded by NSF's Division of Undergraduate Education, this project develops a collaborative virtual computer lab (CVCL) environment to support collaborative learning in virtual computer labs. The CVCL environment leverages existing open-source collaboration tools and desktop-sharing technologies and adds new functions…
- We measure the labor-demand effects of two simultaneous forms of technological change: automation of production processes and consolidation of parts. We collect detailed shop-floor data from four semiconductor firms with different levels of automation and consolidation. Using the O*NET survey instrument, we collect novel task data for operator laborers that contains process-step-level skill requirements, including operations and control, near vision, and dexterity requirements. We then use an engineering process model to separate the effects of the distinct technological changes on these process tasks and operator skill requirements. Within an occupation, we show that aggregate measures of technological change can mask the opposing skill biases of multiple simultaneous technological changes. In our empirical context, automation polarizes skill demand, as routine, codifiable tasks requiring low and medium skills are executed by machines instead of humans, whereas the remaining and newly created human tasks tend to require low and high skills. Consolidation converges skill demand, as formerly divisible low- and high-skill tasks are transformed into a single indivisible task with medium skill requirements and a higher cost of failure. We conclude by developing a new theory for how the separability of tasks mediates the effect of technological change on skill demand by changing…
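The embedding comparison in the first related item above can be sketched as follows. The item does not specify which embedding model was used, so the sentence-transformers model, the example task and technology strings, and cosine similarity as the coupling measure are all stand-in assumptions rather than the authors' method.

```python
# Sketch: measure task-technology coupling as embedding similarity.
# Model choice and example texts are assumptions for illustration.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

tasks = [
    "Write news stories for publication",
    "Operate studio cameras during broadcasts",
]
technologies = [
    "Word processing software",
    "Video production equipment",
]

task_vecs = model.encode(tasks)
tech_vecs = model.encode(technologies)

# Higher cosine similarity = smaller distance = tighter coupling.
sim = cosine_similarity(task_vecs, tech_vecs)
for i, task in enumerate(tasks):
    j = sim[i].argmax()
    print(f"{task!r} couples most tightly with {technologies[j]!r} ({sim[i, j]:.2f})")
```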
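For the reproducibility baseline described in the Neuronix item (the same job on the same processor producing the same results), a common TensorFlow 2.x recipe is to pin every random seed and request deterministic kernels. This is a generic sketch under assumptions, not the cluster's actual configuration, and bit-for-bit agreement across CPU and GPU runs is not guaranteed even with these settings.

```python
# Generic reproducibility setup for TensorFlow 2.x (2.8+); the seed value
# is arbitrary and this is not the Neuronix cluster's configuration.
import os
import random

import numpy as np
import tensorflow as tf

SEED = 1337

os.environ["PYTHONHASHSEED"] = str(SEED)  # only effective if set before Python starts
random.seed(SEED)                          # Python RNG
np.random.seed(SEED)                       # NumPy RNG
tf.random.set_seed(SEED)                   # TensorFlow RNG

# Request deterministic op implementations where they exist.
tf.config.experimental.enable_op_determinism()
```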