
Title: Children use artifacts to infer others' shared interests
Artifacts – the objects we own, make, and choose – provide a source of rich social information. Adults use people’s artifacts to judge others’ traits, interests, and social affiliations. Here we show that 4-year-old children (N=32) infer others’ shared interests from their artifacts. When asked who had the same interests as a target character, children chose the character with a conceptually similar object to the target’s – an object used for the same activity – over a character with a perceptually similar object. When asked which person had the same arbitrary property (bedtime, birthday, or middle name), children did not systematically select either character, and most often reported that they did not know. Adults (N=32) made similar inferences, but differed in their tendency to use artifacts to infer friendships. Overall, by age 4, children show a sophisticated ability to make selective, warranted inferences about others’ interests based solely on their artifacts.
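For illustration only (the paper's analysis is not reported on this page), a two-alternative forced-choice result like this is typically evaluated against chance responding; the sketch below uses made-up counts to show such a test in Python.

    from scipy.stats import binomtest

    # Hypothetical counts for illustration only; the study's cell counts are
    # not reported on this page.
    n_children = 32
    n_chose_conceptual = 26  # children picking the conceptually matched character

    # Two-sided exact binomial test against the 50% chance level of a
    # two-alternative forced choice.
    result = binomtest(n_chose_conceptual, n=n_children, p=0.5)
    print(f"{n_chose_conceptual}/{n_children} chose the conceptual match, "
          f"p = {result.pvalue:.4f}")

With 32 children and two options, roughly 23 or more choices of the same character already departs from chance at p < .05 (two-sided).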
Authors:
Editors:
Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K.
Award ID(s):
1749551
Publication Date:
NSF-PAR ID:
10281027
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
ISSN:
1069-7977
Sponsoring Org:
National Science Foundation
More Like this
  1. Do children use objects to infer the people and actions that created them? We ask how children judge whether designs were socially transmitted (copied): do children use a simple perceptual heuristic (more similar = more likely copied), or do they make a rational, flexible inference (Bayesian inverse planning)? We found evidence that children use inverse planning to reason about artifacts’ designs: When children saw two identical designs, they did not always infer copying occurred. Instead, similarity was weaker evidence of copying when an alternative explanation ‘explained away’ the similarity. Thus, children inferred copying had occurred less often when designs were efficient (Exp1, age 7-9; N=52), and when there was a constraint that limited the number of possible designs (Exp2, age 4-5; N=160). When thinking about artifacts, young children go beyond perceptual features and use a process like inverse planning to reason about the generative processes involved in design.
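    To make the ‘explaining away’ logic concrete, here is a minimal Bayesian sketch, an assumed toy model rather than the authors' implementation: identical designs are strong evidence of copying only when independent invention of the same design is unlikely, so a constraint that shrinks the space of viable designs weakens that evidence.

        def p_copy_given_identical(prior_copy: float, n_viable_designs: int) -> float:
            """Posterior probability of copying after seeing two identical designs.

            If copying occurred, the designs match with probability 1. If they
            were made independently, they match by chance with probability
            1 / n_viable_designs (uniform choice over viable designs).
            """
            match_if_copy = 1.0
            match_if_independent = 1.0 / n_viable_designs
            numerator = prior_copy * match_if_copy
            denominator = numerator + (1.0 - prior_copy) * match_if_independent
            return numerator / denominator

        # Many viable designs: a match is strong evidence of copying (~0.95).
        print(p_copy_given_identical(prior_copy=0.5, n_viable_designs=20))
        # A constraint leaves only two viable designs: the match is
        # 'explained away' and the posterior drops (~0.67).
        print(p_copy_given_identical(prior_copy=0.5, n_viable_designs=2))

    Shrinking the viable design space from 20 options to 2 drops the posterior from about .95 to about .67, mirroring the qualitative pattern the children showed.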
  2. Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K. (Eds.)
    Listening to music activates representations of movement and social agents. Why? We ask whether high-level causal reasoning about how music was generated can lead people to link musical sounds with animate agents. To test this, we asked whether people (N=60) make flexible inferences about whether an agent caused musical sounds, integrating information from the sounds’ timing and from the visual context in which they were produced. Using a 2x2 within-subject design, we found evidence of causal reasoning: In a context where producing a musical sequence would require self-propelled movement, people inferred that an agent had been present causing the sounds. When the context provided an alternative possible explanation, this ‘explained away’ the agent, reducing the tendency to infer an agent was present for the same acoustic stimuli. People can use causal reasoning to infer whether an agent produced musical sounds, suggesting that high-level cognition can link music with social concepts.
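    The same explaining-away structure can be written as a noisy-OR causal model; the sketch below is an assumed illustration (the priors and cause strengths are invented, and this is not the authors' model) of how a visible alternative cause reduces the inferred probability of a hidden agent for identical sounds.

        from itertools import product

        def p_agent_given_sounds(prior_agent, prior_mechanism,
                                 strength_agent=0.95, strength_mechanism=0.95,
                                 leak=0.01):
            """Posterior P(agent | sounds) under a noisy-OR causal model."""
            joint_agent_and_sounds = 0.0
            joint_sounds = 0.0
            for agent, mechanism in product([0, 1], repeat=2):
                prior = ((prior_agent if agent else 1 - prior_agent) *
                         (prior_mechanism if mechanism else 1 - prior_mechanism))
                # Sounds occur if the agent, the mechanism, or a rare
                # background leak produces them (noisy-OR combination).
                p_sounds = 1 - ((1 - leak) *
                                (1 - strength_agent) ** agent *
                                (1 - strength_mechanism) ** mechanism)
                joint_sounds += prior * p_sounds
                if agent:
                    joint_agent_and_sounds += prior * p_sounds
            return joint_agent_and_sounds / joint_sounds

        # Context offers no alternative explanation: sounds implicate an agent.
        print(p_agent_given_sounds(prior_agent=0.3, prior_mechanism=0.05))  # ~0.88
        # Context supplies a plausible mechanism: the agent is 'explained away'.
        print(p_agent_given_sounds(prior_agent=0.3, prior_mechanism=0.9))   # ~0.33

    With a plausible mechanism in view, the posterior falls back near the prior (about .33 here), while without one the same sounds push it close to .9.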
  3. How do young children develop a concept of equity? Infants prefer dividing resources equally and expect others to make such distributions. Between the ages of 3–8, children begin to exhibit preferences to avoid inequitable outcomes in their distributions, dividing resources unequally if the result of that distribution is a more equitable outcome. Four studies investigated children’s developing preferences for generating equitable distributions, focusing on the mechanisms of this development. Children were presented with two characters with different amounts of resources, and then a third character who would distribute more resources to them. Three- to 8-year-olds were asked whether the third character should give an equal number of resources to the recipients, preserving the inequity, or an unequal number, creating an equitable outcome. Starting at age 7, children showed a preference for equitable distributions (Study 1, N = 144). Studies 2a (N = 72) and 2b (N = 48) suggest that this development is independent of children’s numerical competence. When asked to take the perspective of the recipient with fewer resources, 3- to 6-year-olds were more likely to make an equitable distribution (Study 3, N = 122). These data suggest that social perspective taking underlies children’s prosocial actions, and support the hypothesis that children’s spontaneous capacity to take others’ perspectives develops during the early elementary-school years.
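    For concreteness, a toy version of the distribution choice (hypothetical numbers, not the studies' stimuli) is sketched below: an equal split preserves the initial gap, while an unequal split equalizes the final totals.

        # Unequal starting endowments and a pot of new resources to hand out.
        have_a, have_b = 2, 6
        to_distribute = 8

        # Equal division preserves the gap between recipients.
        equal = (have_a + to_distribute // 2, have_b + to_distribute // 2)  # (6, 10)

        # Unequal division equalizes the final totals.
        total = have_a + have_b + to_distribute
        fair_share = total // 2
        give_a, give_b = fair_share - have_a, fair_share - have_b           # give 6 and 2
        equitable = (fair_share, fair_share)                                # (8, 8)

        print(f"equal split -> final {equal}")
        print(f"unequal split (give {give_a} and {give_b}) -> final {equitable}")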
  4. Obeid, Iyad; Selesnick, Ivan (Eds.)
    The Temple University Hospital EEG Corpus (TUEG) [1] is the largest publicly available EEG corpus of its type and currently has over 5,000 subscribers (we currently average 35 new subscribers a week). Several valuable subsets of this corpus have been developed, including the Temple University Hospital EEG Seizure Corpus (TUSZ) [2] and the Temple University Hospital EEG Artifact Corpus (TUAR) [3]. TUSZ contains manually annotated seizure events and has been widely used to develop seizure detection and prediction technology [4]. TUAR contains manually annotated artifacts and has been used to improve machine learning performance on seizure detection tasks [5]. In this poster, we will discuss recent improvements made to both corpora that are creating opportunities to improve machine learning performance.

    Two major concerns were raised when v1.5.2 of TUSZ was released for the Neureka™ 2020 Epilepsy Challenge: (1) the subjects contained in the training, development (validation), and blind evaluation sets were not mutually exclusive, and (2) high-frequency seizures were not accurately annotated in all files.

    Regarding (1), there were 50 subjects in dev, 50 subjects in eval, and 592 subjects in train. There was one subject common to dev and eval, five subjects common to dev and train, and 13 subjects common to eval and train. Though this does not substantially influence performance for the current generation of technology, it could become a problem as technology improves. We have therefore rebuilt the partitions of the data so that this overlap is removed. This required augmenting the evaluation and development sets with new subjects that had not previously been annotated, so that the sizes of these subsets remained approximately the same. Since these annotations were done by a new group of annotators, special care was taken to ensure that the new annotators followed the same practices as previous generations of annotators. Part of our quality control process was to have the new annotators review all previous annotations. This rigorous training, coupled with a strict quality control process in which annotators review a significant amount of each other's work, ensured high interrater agreement between the two groups (kappa statistic greater than 0.8) [6]. In the process of reviewing this data, we also decided to split long files into a series of smaller segments to facilitate processing. Some subscribers found it difficult to process long files using Python code, which tends to be very memory intensive, and we found it inefficient to manipulate these long files in our annotation tool. In this release, the maximum duration of any single file is limited to 60 minutes. This increased the number of edf files in the dev set from 1012 to 1832.

    Regarding (2), in discussions of several issues raised by a few subscribers, we discovered that some files only had low-frequency epileptiform events annotated (defined as events ranging in frequency from 2.5 Hz to 3 Hz), while others had annotated events containing significant frequency content above 3 Hz. Though not many files had this type of activity, it was enough of a concern to necessitate reviewing the entire corpus. An example of an epileptiform seizure event with frequency content higher than 3 Hz is shown in Figure 1. Annotating these additional events slightly increased the number of seizure events: in v1.5.2 there were 673 seizures, while in v1.5.3 there are 1239 events.

    One of the fertile areas for technology improvement is artifact reduction. Artifacts and slowing constitute the two major error modalities in seizure detection [3], which was a major reason we developed TUAR. It can be used to evaluate artifact detection and suppression technology as well as multimodal background models that explicitly model artifacts. An issue with TUAR was the practicality of the annotation tags used when there are multiple simultaneous events; an example of such an event is shown in Figure 2. In that section of the file, eye movement, electrode artifact, and muscle artifact events overlap. We previously annotated such events using a convention that included annotating background along with any artifact present; the artifacts would be annotated either with a single tag (e.g., MUSC) or a coupled artifact tag (e.g., MUSC+ELEC). When multiple channels have background, the tags become crowded and difficult to identify. This is one reason we now support a hierarchical annotation format using XML: annotations can be arbitrarily complex and support overlaps in time. Our annotators also reviewed specific eye movement artifacts (e.g., eye flutter, eyeblinks). Eye movements are often mistaken for seizures due to their similar morphology [7][8], and our improved understanding of ocular events has allowed us to annotate artifacts in the corpus more carefully.

    In this poster, we will present statistics on the newest releases of these corpora and discuss the impact these improvements have had on machine learning research. We will compare TUSZ v1.5.3 and TUAR v2.0.0 with previous versions of these corpora. We will release v1.5.3 of TUSZ and v2.0.0 of TUAR in Fall 2021 prior to the symposium.

    ACKNOWLEDGMENTS: Research reported in this publication was most recently supported by the National Science Foundation's Industrial Innovation and Partnerships (IIP) Research Experience for Undergraduates award number 1827565. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.

    REFERENCES:
    [1] I. Obeid and J. Picone, "The Temple University Hospital EEG Data Corpus," in Augmentation of Brain Function: Facts, Fiction and Controversy. Volume I: Brain-Machine Interfaces, 1st ed., vol. 10, M. A. Lebedev, Ed. Lausanne, Switzerland: Frontiers Media S.A., 2016, pp. 394-398. https://doi.org/10.3389/fnins.2016.00196.
    [2] V. Shah et al., "The Temple University Hospital Seizure Detection Corpus," Frontiers in Neuroinformatics, vol. 12, pp. 1-6, 2018. https://doi.org/10.3389/fninf.2018.00083.
    [3] A. Hamid et al., "The Temple University Artifact Corpus: An Annotated Corpus of EEG Artifacts," in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2020, pp. 1-3. https://ieeexplore.ieee.org/document/9353647.
    [4] Y. Roy, R. Iskander, and J. Picone, "The Neureka™ 2020 Epilepsy Challenge," NeuroTechX, 2020. [Online]. Available: https://neureka-challenge.com/. [Accessed: 01-Dec-2021].
    [5] S. Rahman, A. Hamid, D. Ochal, I. Obeid, and J. Picone, "Improving the Quality of the TUSZ Corpus," in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2020, pp. 1-5. https://ieeexplore.ieee.org/document/9353635.
    [6] V. Shah, E. von Weltin, T. Ahsan, I. Obeid, and J. Picone, "On the Use of Non-Experts for Generation of High-Quality Annotations of Seizure Events." [Online]. Available: https://www.isip.piconepress.com/publications/unpublished/journals/2019/elsevier_cn/ira. [Accessed: 01-Dec-2021].
    [7] D. Ochal, S. Rahman, S. Ferrell, T. Elseify, I. Obeid, and J. Picone, "The Temple University Hospital EEG Corpus: Annotation Guidelines," Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/tuh_eeg/annotations/.
    [8] D. Strayhorn, "The Atlas of Adult Electroencephalography," EEG Atlas Online, 2014. [Online]. Available:
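    As a minimal sketch of the partition repair described above (assuming a hypothetical partition/subject_id/session/file.edf directory layout; these are not the TUEG release scripts), one can verify that the train, dev, and eval subject sets are mutually exclusive:

        from pathlib import Path

        def subjects(partition_dir: str) -> set[str]:
            """Collect subject IDs: the first path component under the partition."""
            return {p.relative_to(partition_dir).parts[0]
                    for p in Path(partition_dir).rglob("*.edf")}

        train, dev, eval_ = subjects("train"), subjects("dev"), subjects("eval")

        # Every pair of partitions must share zero subjects.
        for name_a, a, name_b, b in [("train", train, "dev", dev),
                                     ("train", train, "eval", eval_),
                                     ("dev", dev, "eval", eval_)]:
            overlap = a & b
            assert not overlap, f"{name_a}/{name_b} share subjects: {sorted(overlap)}"
        print("partitions are subject-disjoint")

    Splitting at the subject level rather than the file level is what prevents a model from exploiting subject-specific signal characteristics that leak across partitions.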