

Title: Steps toward Understanding the Design and Evaluation Spaces of Bot and Human Knowledge Production Systems
Bots and humans can combine in multiple ways in the service of knowledge production. Designers make choices about the purpose of the bots, their technical architecture, and their initiative. That is, they decide about functions, mechanisms, and interfaces. Together these dimensions suggest a design space for systems of bots and humans. These systems are evaluated along several criteria. One criterion is productivity. Another is their effects on human editors, especially newcomers. A third is sustainability: how they persist in the face of change. Design and evaluation spaces are described as part of an analysis of Wiki-related bots: two bots and their effects are discussed in detail, and an agenda for further research is suggested.
Award ID(s):
1717473 1745463 1442840
NSF-PAR ID:
10171256
Journal Name:
Wiki Workshop
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Preventing abuse of web services by bots is an increasingly important problem, as abusive activities grow in both volume and variety. CAPTCHAs are the most common means of thwarting bot activity, but they are often ineffective against bots and frustrating for humans. In addition, some recent CAPTCHA techniques diminish user privacy. Meanwhile, client-side Trusted Execution Environments (TEEs) are becoming increasingly widespread (notably, ARM TrustZone and Intel SGX), allowing trust to be established in a small part (trust anchor or TCB) of client-side hardware. This prompts the question: can a TEE help reduce, or entirely remove, the user burden of solving CAPTCHAs? In this paper, we design CACTI: CAPTCHA Avoidance via Client-side TEE Integration. Using client-side TEEs, CACTI allows legitimate clients to generate unforgeable rate-proofs demonstrating how frequently they have performed specific actions. These rate-proofs can be sent to web servers in lieu of solving CAPTCHAs. CACTI provides strong client privacy guarantees, since the information is sent only to the visited website and authenticated using a group signature scheme. Our evaluations show that the overall latency of generating and verifying a CACTI rate-proof is less than 0.25 seconds, while CACTI's bandwidth overhead is over 98% lower than that of current CAPTCHA systems.
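    The rate-proof flow this abstract describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the CACTI implementation: a keyed HMAC stands in for both the TEE attestation and the group signature scheme, and all names (`make_rate_proof`, `TEE_KEY`, and so on) are invented for this sketch.

    ```python
    import hashlib
    import hmac
    import json
    import time

    # Placeholder for key material that, in a real system, would live inside
    # the client-side TEE and never be visible to the normal OS.
    TEE_KEY = b"demo-attestation-key"

    def make_rate_proof(action: str, count: int, window_start: float) -> dict:
        """Client side: attest how often `action` was performed since window_start."""
        payload = {"action": action, "count": count, "window_start": window_start}
        msg = json.dumps(payload, sort_keys=True).encode()
        sig = hmac.new(TEE_KEY, msg, hashlib.sha256).hexdigest()
        return {"payload": payload, "sig": sig}

    def verify_rate_proof(proof: dict, max_rate: int) -> bool:
        """Server side: accept the request without a CAPTCHA only if the proof
        is authentic and the attested action count is within the allowed rate."""
        msg = json.dumps(proof["payload"], sort_keys=True).encode()
        expected = hmac.new(TEE_KEY, msg, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, proof["sig"]):
            return False
        return proof["payload"]["count"] <= max_rate

    proof = make_rate_proof("post_comment", count=3, window_start=time.time() - 60)
    print(verify_rate_proof(proof, max_rate=10))  # prints True
    ```

    A real deployment would keep the signing key inside the TEE and replace the HMAC with a group signature, so the server learns only that some legitimate TEE produced the proof, not which client.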
  3. Bots are increasingly being used for governance-related purposes in online communities, yet no instrument exists for measuring how users assess their beneficial or detrimental impacts. To support future human-centered and community-based research, we developed a new scale called GOVernance Bots in Online communiTies (GOV-BOTs) across two rounds of surveys on Reddit (N=820). We applied rigorous psychometric criteria to demonstrate the validity of GOV-BOTs, which contains two subscales: bot governance (4 items) and bot tensions (3 items). Whereas humans have historically expected communities to be composed entirely of humans, the social participation of bots as non-human agents now raises fundamental psychological, philosophical, and ethical questions. Addressing psychological impacts, our data show that perceptions of effective bot governance positively contribute to users' sense of virtual community (SOVC), whereas perceived bot tensions may only impact SOVC if users are more aware of bots. Finally, we show that users tend to experience the greatest SOVC across groups of subreddits, rather than within individual subreddits, suggesting that future research should carefully reconsider uses and operationalizations of the term community.

     
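    For illustration only: subscale scores for a multi-item instrument like GOV-BOTs are typically computed by averaging each subscale's Likert items. The item keys and response values below are invented placeholders, not the published GOV-BOTs items.

    ```python
    def subscale_mean(responses: dict, items: list) -> float:
        """Average the Likert responses (e.g. 1-7) for one subscale's items."""
        return sum(responses[i] for i in items) / len(items)

    # Hypothetical item keys matching the abstract's structure: a 4-item
    # bot-governance subscale and a 3-item bot-tensions subscale.
    GOVERNANCE_ITEMS = ["gov1", "gov2", "gov3", "gov4"]
    TENSION_ITEMS = ["ten1", "ten2", "ten3"]

    # Made-up example responses from one participant.
    responses = {"gov1": 6, "gov2": 5, "gov3": 6, "gov4": 7,
                 "ten1": 2, "ten2": 3, "ten3": 2}

    print(subscale_mean(responses, GOVERNANCE_ITEMS))  # prints 6.0
    print(subscale_mean(responses, TENSION_ITEMS))
    ```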
  4. Chenyang Lu (Ed.)

    The design and analysis of multi-agent human cyber-physical systems in safety-critical or industry-critical domains calls for an adequate semantic foundation capable of exhaustively and rigorously describing all emergent effects in the joint dynamic behavior of the agents that are relevant to their safety and well-behavior. We present such a semantic foundation. Our framework goes beyond previous approaches, which model the agent-local dynamic state as the state components under the agent's direct control plus beliefs about other agents (as previously suggested for understanding cooperative and rational behavior), by also including agent-local evidence and beliefs about the overall cooperative, competitive, or coopetitive game structure. We argue that this extension is necessary for rigorously analyzing systems of human cyber-physical systems, because humans are known to employ cognitive replacement models of system dynamics that are both non-stationary and potentially incongruent. These replacement models induce visible and potentially harmful effects on the agents' joint emergent behavior and on their interaction with cyber-physical system components.

     
  5.
    Software bots are used by Open Source Software (OSS) projects to streamline the code review process. Interfacing between developers and automated services, code review bots report continuous integration failures, code quality checks, and code coverage. However, the impact of such bots on maintenance tasks remains largely overlooked. In this paper, we study how project maintainers experience code review bots. We surveyed 127 maintainers about their expectations and their perception of the changes incurred by code review bots. Our findings reveal that the most frequent expectations include enhancing the feedback bots provide to developers, reducing the maintenance burden for developers, and enforcing code coverage. While maintainers report that bots satisfied their expectations, they also perceived unexpected effects, such as communication noise and newcomer dropout. Based on these results, we provide a series of implications for bot developers, as well as insights for future research.