

Title: You Say Brutal, I Say Thursday: Isn't It Obvious?
Award ID(s):
1821444
NSF-PAR ID:
10314436
Author(s) / Creator(s):
Date Published:
Journal Name:
PME-NA
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Sacristán, A.; Cortés-Zavala, J., & (Ed.)
    Programmatic collaborations involving mathematicians and educators in the U.S. have been valuable but complex (e.g., Heaton & Lewis, 2011; Bass, 2005; Bass & Ball, 2014). Sultan & Artzt (2005) offer necessary conditions (p. 53), including trust and helpfulness. Articles in Fried & Dreyfus (2014) and Bay-Williams (2012) describe outcomes from similarly collaborative efforts; however, there is a gap in the literature in attending to how race and gender intersect with issues of professional status, culture, and standards of practice. Arbaugh, McGraw, and Peterson (2020) contend that “the fields of mathematics education and mathematics need to learn how to learn from each other - to come together to build a whole that is greater than the sum of its parts” (p. 155). Further, they posit that the two must “learn to honor and draw upon expertise related to both similarities and differences” across disciplines or cultures. We argue that in order to do this, we must also take race, gender, and language into account. For example, words like trust or helpfulness can read very differently when viewed through personal and professional cultural, gender, or racial lenses. This poster shares personal vignettes from the perspectives of three collaborators (one black male mathematician, one white female mathematics educator, and one white woman who was trained as a mathematician but works as a mathematics educator) illustrating some of the complexity of collaboration.
  2. Social media users have long been aware of opaque content moderation systems and how they shape platform environments. On TikTok, creators increasingly use algospeak to circumvent unjust content restriction; that is, they change or invent words to prevent TikTok’s content moderation algorithm from banning their videos (e.g., "le$bean" for "lesbian"). We interviewed 19 TikTok creators about their motivations for and practices of using algospeak in relation to their experiences with TikTok’s content moderation. Participants largely anticipated how TikTok’s algorithm would read their videos and used algospeak to evade unjustified content moderation while ensuring that target audiences could still find their videos. We identify non-contextuality, randomness, inaccuracy, and bias against marginalized communities as major issues regarding freedom of expression, equality of subjects, and support for communities of interest. Drawing on creators’ use of algospeak, we argue for a need to improve contextually informed content moderation in order to valorize marginalized and tabooed audiovisual content on social media.

     