A recent RAND American School Leader Panel survey (October 2024) found that 13% of K-12 principals reported incidents of bullying involving AI-generated deepfakes during the 2023-2024 and 2024-2025 school years. This finding illustrates how quickly deepfakes have entered schools, making their risks to student safety an immediate concern. The term “deepfake” is defined by Merriam-Webster as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
While some deepfakes are harmless or satirical, they are increasingly used to harass students and school staff, or to create false depictions that spread quickly online. Students may be targeted with explicit or defamatory content, putting them at risk for emotional harm, reputational damage, and exploitation. Even when the content is not photorealistic, it can still cause serious emotional and reputational harm to the victim and their family.
In May 2025, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks (“TAKE IT DOWN”) Act. This law establishes clear responsibilities for schools to act quickly, educate their communities, and coordinate with authorities to address these threats.
What is a Deepfake in Schools?
The TAKE IT DOWN Act defines a deepfake in the K-12 context primarily as a digital forgery: an intimate digital depiction of an identifiable individual created or altered using artificial intelligence (AI), software, or other technological means. In practice, this covers any image, video, or audio that shows a person saying or doing things they never did, often with striking realism. Key points include:
- The content can be AI-generated or altered from authentic images.
- Coverage is based on reasonable perception: if a reasonable person viewing it would see it as authentic, it falls under the law, even if it is not strictly photorealistic.
- The identifiable individual may be a minor or an adult, and identification may rest on unique features such as a face, voice, or birthmark.
- The law focuses on nonconsensual intimate depictions in schools, protecting students from harassment and exploitation.
- Exceptions include content shared with consent for legitimate educational or legal purposes, and self-created consensual material.
Schools’ Obligations Under the TAKE IT DOWN Act
Schools and districts have several specific responsibilities under the law:
- Establish Clear Policies and Procedures
- Schools must create efficient processes to receive, investigate, and act on reports of nonconsensual deepfake content involving students.
- Policies should be communicated to students, parents, and staff so everyone understands how to report incidents and what actions will be taken.
- Timely Removal of Content
- Upon receiving a takedown notice, schools must take steps to remove or restrict access to the reported content within 48 hours to reduce exposure and harm.
- Coordination with Platforms and Law Enforcement
- Schools must work with online platforms and law enforcement to ensure content is promptly removed and enforcement actions are carried out, especially when minors are involved.
- Education and Awareness
- Schools are obligated to educate students, parents, and staff about the dangers of deepfakes, legal protections, and the reporting process to create a safer digital environment.
- Liability and Accountability
- Schools may be held liable for failing to follow notice-and-action requirements. Transparent and prompt handling of incidents is critical to reduce risk and maintain trust.
- Codes of Conduct
- School policies need to explicitly prohibit creating or sharing deepfake content, and disciplinary measures should align with state laws.
How schools handle student offenders varies. Some schools use strict disciplinary measures such as suspension or expulsion, while others pursue restorative approaches focusing on education, accountability, and empathy.
State Laws: Additional Protections
Most states have laws that treat sexually explicit deepfakes of minors as seriously as child sexual abuse material (CSAM), often imposing felony charges, prison time, and financial penalties, with additional sanctions in repeat or aggravated cases.
Some states, including California, Louisiana, and New Jersey, have especially strict deepfake laws:
- California: Possession of deepfake CSAM can lead to up to 1 year in jail or fines for a first offense, with higher penalties for repeat offenders.
- Louisiana: Creation or possession can result in 5 years in prison and up to $10,000 in fines, increasing to 10 years and $50,000 if distributed or sold.
- New Jersey: Creating or sharing deceptive media carries criminal and civil penalties, including third- and fourth-degree crimes with potential incarceration and fines, as well as civil remedies such as actual damages, punitive damages, and attorney’s fees.
Takeaway
By combining federal requirements, state regulations, and local policies, schools can take a proactive and coordinated approach to prevent and respond to deepfake misuse. A strong, consistent approach allows schools to protect students’ well-being, uphold legal obligations, and foster a safer learning environment in the age of AI.