    “Deepfake Abuse Is Abuse”: UNICEF Sounds Alarm as AI Fuels a New Global Child-Exploitation Crisis


    Millions of children are at risk of exploitation and abuse as their images are manipulated through generative AI tools. Credit: Ludovic Toinel/Unsplash

    • by Oritro Karim (United Nations)
    • Tuesday, February 10, 2026
    • Inter Press Service

    UNITED NATIONS, February 10 (IPS) – New findings from the United Nations Children’s Fund (UNICEF) reveal that millions of children are having their images manipulated into sexualized content through the use of generative artificial intelligence (AI), fueling a fast-growing and deeply harmful form of online abuse. The agency warns that without strong regulatory frameworks and meaningful cooperation between governments and tech platforms, this escalating threat could have devastating consequences for the next generation.

    A 2025 report from The Childlight Global Child Safety Institute—an independent organization that tracks child sexual exploitation and abuse—shows a staggering rise in technology-facilitated child abuse in recent years, growing from 4,700 cases in the United States in 2023 to over 67,000 in 2024. A significant share of these incidents involved deepfakes: AI-generated images, videos, and audio engineered to appear realistic and often used to create sexualized content. This includes widespread “nudification”, where AI tools strip or alter clothing in photos to produce fabricated nude images.

    A joint study from UNICEF, Interpol, and End Child Prostitution in Asian Tourism (ECPAT) International, which examined the rates of child sexual abuse material (CSAM) circulated online across 11 countries, found that at least 1.2 million children had their images manipulated into sexually explicit deepfakes in the past year alone. This means roughly one in every 25 children—or one child in every classroom—has already been victimized by this emerging form of digital abuse.

    “When a child’s image or identity is used, that child is directly victimised,” a UNICEF representative said. “Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help. Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”

    A 2025 survey from the National Police Chiefs’ Council (NPCC) studied the public’s attitudes toward deepfake abuse, finding that such abuse had surged by 1,780 percent between 2019 and 2024. In a UK-wide representative survey conducted by Crest Advisory, nearly three in five respondents reported feeling worried about becoming victims of deepfake abuse.

    Additionally, 34 percent admitted to creating a sexual or intimate deepfake of someone they knew, while 14 percent had created deepfakes of someone they did not know. The research also found that women and girls are disproportionately targeted, with social media identified as the most common place where these deepfakes are spread.

    The study also presented respondents with a scenario in which a person creates an intimate deepfake of their partner, discloses it to them, and later distributes it to others following an argument. Alarmingly, 13 percent of respondents said this behavior should be both morally and legally acceptable, while an additional 9 percent expressed neutrality. NPCC also reported that those who considered this behavior to be acceptable were more likely to be younger men who actively consume pornography and agree with beliefs that would “commonly be regarded as misogynistic”.

    “We live in very worrying times, the futures of our daughters (and sons) are at stake if we don’t start to take decisive action in the digital space soon,” award-winning activist and internet personality Cally-Jane Beech told NPCC. “We are looking at a whole generation of kids who grew up with no safeguards, laws or rules in place about this, and now seeing the dark ripple effect of that freedom.”

    Deepfake abuse can have severe and lasting psychological and social consequences for children, often triggering intense shame, anxiety, depression, and fear. In a new report, UNICEF notes that a child’s “body, identity, and reputation can be violated remotely, invisibly, and permanently” through deepfake abuse, alongside risks of threats, blackmail, and extortion from perpetrators. Feelings of violation – paired with the permanence and viral spread of digital content – can leave victims with long-term trauma, mistrust, and disrupted social development.

    “Many experience acute distress and fear upon discovering that their image has been manipulated into sexualised content,” Afrooz Kaviani Johnson, Child Protection Specialist at UNICEF Headquarters, told IPS. “Children report feelings of shame and stigma, compounded by the loss of control over their own identity. These harms are real and lasting: being depicted in sexualised deepfakes can severely impact a child’s wellbeing, erode their trust in digital spaces, and leave them feeling unsafe even in their everyday ‘offline’ lives.”

    Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU), added that online abuse can also translate to physical harm.

    In a joint statement on Artificial Intelligence and the Rights of the Child, key UN entities, including UNICEF, ITU, the Office of the UN High Commissioner for Human Rights (OHCHR) and the UN Committee on the Rights of the Child (CRC), warned that among children, parents, caregivers and teachers, there was a widespread lack of AI literacy. This refers to the basic ability to understand how AI systems work and how to engage with them critically and effectively. This knowledge gap leaves young people especially vulnerable, making it harder for victims and their support systems to recognize when a child is being targeted, to report abuse, or to access adequate protections and support services.

    The UN also emphasized that a substantial share of responsibility lies with tech platforms, noting that most generative AI tools lack meaningful safeguards to prevent digital child exploitation.

    “From UNICEF’s perspective, deepfake abuse thrives in part because legal and regulatory frameworks have not kept pace with technology. In many countries, laws do not explicitly recognise AI‑generated sexualised images of children as child sexual abuse material (CSAM),” said Johnson.

    UNICEF is urging governments to ensure that definitions of CSAM are updated to include AI-generated content and “explicitly criminalise both its creation and distribution”. According to Johnson, technology companies should be required to adopt what he called “safety-by-design measures” and “child-rights impact assessments”.

    He stressed, however, that while essential, laws and regulations alone would not be enough. “Social norms that tolerate or minimise sexual abuse and exploitation must also change. Protecting children effectively will require not only better laws, but real shifts in attitudes, enforcement, and support for those who are harmed.”

    Commercial incentives further compound the problem, with platforms benefitting from increased user engagement, subscriptions, and publicity generated by AI image tools, creating little motivation to adopt stricter protection measures.

    As a result, tech companies often introduce guardrails only after major public controversies — long after children have already been affected. One such example is Grok, the AI chatbot for X (formerly Twitter), which was found generating large volumes of nonconsensual, sexualized deepfake images in response to user prompts. Facing widespread international backlash, X announced in January that Grok’s image-generation tool would be limited to X’s paid subscribers.

    Investigations into Grok are ongoing. The United Kingdom and the European Union have opened inquiries since January, and on February 3, prosecutors in France raided X’s offices as part of an investigation into the platform’s alleged role in circulating CSAM and deepfakes. X’s owner Elon Musk was summoned for questioning.

    UN officials have stressed the need for regulatory frameworks that protect children online while still allowing AI systems to grow and generate revenue. “Initially, we got the feeling that they were concerned about stifling innovation, but our message is very clear: with responsible deployment of AI, you can still make a profit, you can still do business, you can still get market share,” said a senior UN official. “The private sector is a partner, but we have to raise a red flag when we see something that is going to lead to unwanted outcomes.”

    IPS UN Bureau

    © Inter Press Service — All Rights Reserved. Original source: Inter Press Service
