Amid rising concern about students misusing AI, there is also a critical need to consider the harm AI might inflict on students themselves, particularly through surveillance. While educational institutions invest in AI detection software to catch students using AI tools, a quieter shift is underway in classrooms: the introduction of AI chatbots, which raises even greater concerns.
A notable example of this trend is Khan Academy, which has grown from a YouTube channel offering free instructional videos into one of the largest online education platforms for K-12 students. Khan Academy recently introduced a tutor bot named Khanmigo, first piloted at its private Khan Lab School in Palo Alto, California, and now being tested in public schools in Newark, New Jersey. Founder Sal Khan envisions chatbots giving every student in the United States a personalized, world-class tutor.
However, there is a significant difference between a human tutor and an AI assistant, and for children the stakes of that distinction are high. Educators are already voicing concerns that AI chatbots may give incorrect answers or undermine the learning process. When The Washington Post tested Khanmigo’s Harriet Tubman character, the chatbot offered wooden recitations of facts, often misattributed quotes to Tubman, and stuck to a narrow focus on the Underground Railroad. It also could not engage with topics such as reparations, which were debated in Tubman’s own time.
Moreover, the more opaque aspects of the technology are hard to evaluate. Khanmigo includes “guardrails” that monitor students for signs of self-harm, which raises questions about how accurate such assessments can be. Earlier tools based on language analysis have a poor track record: Education Week found that the school surveillance system Gaggle routinely flagged students simply for using the word “gay” or for making sarcastic comments, without distinguishing jokes from genuine threats.
AI chatbots may appear more sophisticated than basic keyword searches, but they can fail in more sophisticated ways. These tools reflect the biases in the data they are trained on and in the design choices their makers impose, much as facial recognition algorithms are significantly less accurate at identifying Black women than white men. Mental health surveillance AI, like the kind potentially built into Khan Academy’s software, is especially failure-prone because it tries to predict the future rather than merely recognize a face. A student’s offhand frustration, such as typing “this assignment is going to kill me” in a chat, could be misread by the software as a plan for self-harm.
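To see why such context-free flagging misfires, consider a minimal illustrative sketch of a keyword-based flagger. This is a hypothetical example written for this piece, not Gaggle’s or Khan Academy’s actual code, whose internals are not public; the keyword list and messages are invented to mirror the failures described above.

```python
# Illustrative sketch only: a naive keyword-based "risk" flagger of the kind
# described above. NOT the actual code of any real product; it simply shows
# how matching words without context produces false alarms and misses.

RISK_KEYWORDS = {"kill", "die", "hurt myself", "gay"}  # hypothetical list,
# echoing the reporting that ordinary identity words and jokes get flagged

def flag_message(message: str) -> bool:
    """Return True if any risk keyword appears, with no sense of context."""
    text = message.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

# Hyperbolic frustration is flagged exactly like a genuine threat:
print(flag_message("this assignment is going to kill me"))   # True  (false positive)
print(flag_message("I'm proud to be gay"))                    # True  (false positive)
print(flag_message("I feel hopeless and want to disappear"))  # False (missed)
```

A statistical classifier trained on biased data can reproduce these same failures in subtler, harder-to-audit ways, which is precisely the worry with surveillance embedded in a chatbot.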
If the “guardrails” in apps like Khanmigo make mistakes, students may face unwanted consequences such as police investigations or psychological interventions. For neurodivergent students, who already face many forms of human bias, a system trained on datasets of “normal” and “at-risk” students might read their differences as dangers. Worse still, students wrongly flagged as a threat to themselves or others have no way to prove their innocence.
Khan Academy says it has constraints and safety measures in place for its AI project, but the risks of flawed AI remain. Until schools can show that these systems work and do not discriminate, caution is essential.
Albert Fox Cahn: Founder and Executive Director of Surveillance Technology Oversight Project (S.T.O.P.)
Shruthi Sriram: Undergraduate at Boston College and Advocacy Intern at S.T.O.P.
Source: FAGEN WASANNI TECHNOLOGIES