Subversive AI Acceptance Scale (SAIA-8)
Overview
To resist the use of facial recognition by governments and corporations to surveil users through their personal images, researchers have created privacy-enhancing image filters that use adversarial machine learning. These "subversive AI" (SAI) image filters aim to defend users from facial recognition by distorting personal images in ways that are barely noticeable to humans but confusing to computer vision algorithms. SAI filters are limited, however, by the lack of rigorous user evaluations assessing their acceptability. We addressed this limitation by creating and validating a scale to measure user acceptance: the SAIA-8. In a three-step process, we applied a mixed-methods approach that closely adhered to best practices for scale creation and validation in measurement theory. First, to understand the factors that influence user acceptance of SAI filter outputs, we interviewed 15 participants. Interviewees disliked SAI filter outputs because of a perceived lack of usefulness and conflicts with their desired self-presentation. Using insights and statements from the interviews, we generated 106 potential items for the scale. Employing an iterative process with 215 crowdsourced participants, we arrived at the eight-item SAIA-8 scale. Finally, we performed a convergent validity study with 30 crowdsourced participants, the results of which suggest that the SAIA-8 is suitable for measuring user acceptance of privacy-enhancing image perturbations and can aid in prioritizing user acceptability when developing and evaluating new SAI filters.
Work under review at ACM CSCW 2024