
An AI security system’s failure to distinguish between a harmless bag of chips and a firearm has reignited concerns about over-policing and racial bias in American schools.
Story Highlights
- A student’s Doritos bag is mistaken for a gun by AI, leading to police intervention.
- The incident raises questions about the reliability and biases of AI security systems.
- Concerns about racial profiling are amplified due to the student’s background.
- Calls for a review of AI deployment in schools grow in Baltimore County.
AI Error Leads to Intense Police Response
On October 20, 2025, Taki Allen, a student at Kenwood High School in Baltimore County, Maryland, was mistakenly targeted by police after an AI-powered security system flagged his crumpled Doritos bag as a potential firearm. Allen was waiting outside the school after football practice when the system's error prompted a swift and forceful police response. He was handcuffed and searched, then released after officials determined no weapon was present.
The use of AI security systems in schools has been increasing across the United States as a measure to prevent school shootings. These systems are designed to scan real-time video feeds for potential weapons. However, Allen’s experience underscores the potential risks associated with these technologies, particularly regarding their accuracy and the potential for racial bias. The situation has sparked renewed debate about the role of AI in ensuring school safety versus the risks of over-policing minority students.
Armed officers held a student at gunpoint after an AI gun detection system mistakenly flagged a Doritos bag as a firearm.
"They made me get on my knees, put my hands behind my back, and cuff me."
— Dexerto (@Dexerto) October 23, 2025
Community and Official Responses
In response to the incident, Baltimore County officials have announced a review of the AI gun detection system's deployment at Kenwood High School. Allen and his family have described the event as traumatic, citing the fear and confusion of the police encounter. Allen said he feared for his life as officers pointed their guns at him.
School officials have pledged to reassess their security protocols and the technology’s implementation, emphasizing the need to prevent similar incidents in the future. The review process is expected to address the system’s accuracy and any potential biases that may have contributed to the misidentification of Allen’s bag.
Implications of AI in School Security
The incident at Kenwood High School has broader implications for the use of AI in educational settings, particularly concerning the balance between safety and civil liberties. Critics argue that reliance on AI can exacerbate existing inequities and lead to unnecessary confrontations, especially in racially diverse communities. The potential for legal or regulatory action against the technology vendor looms as discussions about the ethical deployment of AI in schools continue.
Moving forward, the incident calls for heightened scrutiny of AI security systems, demanding transparency and accountability from vendors. As communities grapple with these challenges, the need for improved testing, oversight, and bias mitigation in AI algorithms becomes increasingly apparent. The debate over AI’s role in school safety is likely to influence national policy discussions, prompting a reevaluation of technology-driven approaches to security.
Sources:
Student handcuffed after Doritos bag mistaken for a gun by school’s AI security system