A sleepy town in southern Spain is in shock after it emerged that AI-generated naked images of young local girls had been circulating on social media without their knowledge, reports BBC [1].
"When you are the victim of a crime, if you are robbed, for example, you file a complaint and you don't hide because the other person has caused you harm. But with crimes of a sexual nature the victim often feels shame and hides and feels responsible. So I wanted to give that message: it's not your fault."
Police are now investigating and according to reports, at least 11 local boys have been identified as having involvement in either the creation of the images or their circulation via the WhatsApp and Telegram apps.
"If you have a son you worry he might have done something like this; if you have a daughter, you're even more worried, because it's an act of violence."
Technology and Malevolence
This incident outlines the inherent challenges within our technologically-driven society. It highlights the vulnerabilities associated with the digital sphere, where the dangers of technological advancements and malicious intent is nearer that we imagine. The emergence of AI-generated image distortions, stemming largely from images harvested from social media repositories, represents an affront to privacy, outlining the imperative need to counter such abuses. Like all technologies, despite its remarkable potential, it is susceptible to misuse.
That said, technology is a broad term that encompasses the application of scientific knowledge and techniques to create tools, machines, systems, and processes that can solve problems or achieve goals.
- Technology can be seen as a neutral means that can be used for good or evil, depending on the intentions and actions of the users.
- Technology can also be seen as a force that shapes society and culture, influencing human values, behaviors, and relationships.
Malevolence is the quality or state of being malicious, evil, or ill-willed.
- Malevolence can be expressed through actions that intend to harm, injure, or destroy others, either physically or psychologically.
- Malevolence can also be manifested through attitudes that show hatred, contempt, or indifference towards the well-being of others.
Previously, I discussed how psychology can be used as a technology depending on its intent.
The Inherent Duality of Tech
The duality of technology is the crossroad between its progress and perils. Technology is an unparalleled catalyst for progress and innovation. It has revolutionised industries, enhanced our quality of life and facilitated great advancements in communication, healthcare, medicine and science.
At the same time, technology has also led to disruption and created new challenges. While automation and AI has reshaped industries, streamlined processes and boosts efficiencies; job natures have changed. This raises concerns of job displacements as well as talent shortage, and cost related to retaining and unemployment.
Additionally, although technology has connected us in ways previously unimaginable, the same connectivity has also given rise to various social issues and concerns about social isolations, addictions, and its potential impact on mental well-being among others.
The convenience afforded by technology often comes at a cost of privacy and the issue of information ownership. Personal data is collected at unprecendented rate, collected and analyzed for various reasons. Sometimes, these data are misused. We now usher into the era of misinformation, disinformation and mal-information. The ease and speed at which information can spread online poses a challenge to discerning fact from fiction. The question that begs to be answered: are we trading the protection of our information for convenience?
Various ethical dilemmas arises due to the use of technology. They may have paved the way for life-saving treatments and innovations. They also raised questions surrounding topics like genetics engineering and healthcare accessibiity; it addresses environmental challenges but also drives resource consumption and environmental degradation.
The Malevolent Potential of AI
At the heart of this issue that happened in Almendralejo lies the technology used to create the images, which is a form of artificial intelligence (AI). AI can perform tasks such as learning, reasoning, perception, and decision making, using data and algorithms. AI can also generate new content, such as images, texts, sounds, or videos, based on existing data or inputs.
Here, the AI is an instrumental catalyst in contemporary digital malevolence. It showcases the capacity for both constructive innovation and malevolent exploitation. AI algorithms, particularly those associated with deepfake creation, possess the capability to craft deceptive synthetic content with great precision. This technology, it seems, is the linchpin in the exploitation of digital mediums despite its great potential.
with great powers come great responsibilities |
---|
-
Deepfake Manipulation: Deepfake technology has been extensively exploited to create convincingly altered videos and audio recordings. For instance, a woman in Perth was shocked to discover a deepfake porn of herself 10 years ago when she googled for an image of herself [2]. In a separate unrelated case, it was widely reported earlier in 2020, that a deepfake bot on Telegram had used photos publicly posted online to generate fake pornographic images [3].
“You cannot win,” ... “This is something that is always going to be out there. It’s just like it’s forever ruined you.”
-
Chatbot Misuse: In March 2019, a Medium article [4] discussed the growing concern of chatbots and abuse. The article cited several cases where chatbots were subjected to verbal or sexual harassment by users who felt no remorse or accountability for their actions. The article also highlighted the psychological and emotional impact of such abuse on the chatbot developers and the potential legal and ethical issues that arise from it. Cybercriminals have employed AI-powered chatbots to engage in phishing attacks. Another notable example is the use of chatbots for many other malicious intents. In April 2023, a security researcher discovered that a popular chatbot app was vulnerable to “prompt injections”, which are commands that can make the chatbot ignore its safety guardrails and produce harmful or misleading content [5].
-
Algorithmic Bias: In 2020, a man in Detroit was arrested for allegedly stealing five watches after he was wrongfully identified by facial recognition software. There were two other similar cases. While all three cases were eventually dropped, some took almost a year, while another involved 10 days in jail. They all shared a common characteristic - all three cases involved fathers, and are Black [6].
When Williams was arrested, he told Melissa, Julia, and his 4-year-old daughter Rosie that he’d be right back, but he was held by police for 30 hours. Julia still cries when she sees video of her dad being arrested on their front lawn. Her parents wonder how much the experience affected her.
-
Surveillance and Privacy Invasions: Researchers from the Citizen Lab at the University of Toronto's Munk School have discovered that a spyware developed by an Israeli company called QuaDream infected certain individuals' phones [7]. The spyware gained access to victims' devices by sending iCloud calendar invitations, and these invitations came from operators of the spyware, likely government clients. Importantly, the victims were unaware of these calendar invitations because they were sent for past events, rendering them invisible to the targets. This type of attack is termed "zero-click" because it infects mobile phones without any action or interaction required from the users, such as clicking on malicious links
Understanding the Abuse
AI misuse refers to the use of AI systems in ways that are not aligned with human rights, humanitarian law, or ethical principles. According to this paper, AI misuse, can be categorised into the following:
- Data misuse: The use of data that is obtained, processed, or shared without proper consent, security, or quality.
- Model misuse: The use of AI models that are biased, inaccurate, or malicious.
- Application misuse: The use of AI applications that are harmful, deceptive, or illegal.
- System misuse: The use of AI systems that are vulnerable, unreliable, or unaccountable.
Beyond tangible losses, technological malevolence can inflict other types of harm such as:
- Direct harm: The harm that is caused by the immediate effects of AI misuse, such as physical injury, psychological distress, or financial loss.
- Indirect harm: The harm that is caused by the secondary or long-term effects of AI misuse, such as social disruption, environmental damage, or political instability.
- Intentional harm: The harm that is caused by the deliberate actions of malicious actors who use AI for malevolent purposes, such as cyberattacks, fraud, or propaganda.
- Unintentional harm: The harm that is caused by the accidental or unintended consequences of AI misuse, such as errors, failures, or biases.
In this incident, the girls affected by the incident suffered direct harm to their privacy, dignity, and consent. They had their photos taken from their own social media accounts and used without their permission to create nude images of them. They also had their images shared among local boys, some of whom tried to extort money from them by threatening to expose them. This caused them psychological distress, humiliation, and fear and can lead to anxiety, depression, and other mental health issues.
The incident also caused indirect harm to the social and cultural norms of the town where the girls lived. The town, Almendralejo, is a quiet and conservative place where such images are considered taboo and shameful. The incident disrupted the trust and harmony among the residents, and created a sense of insecurity and suspicion. It also damaged the reputation and image of the town as a peaceful and respectful community.
The incident was caused by the intentional actions of malicious actors who used AI for malevolent purposes. The actors used an application that generates an imagined image of a person without clothes on, based on a photo of them fully clothed. They also used WhatsApp and Telegram apps to distribute the images among local boys. They intended to hurt, harm, injure, or destroy the girls affected by the incident, either physically or psychologically.
The incident in Spain, epitomizes the exploitation of AI technology for malicious intent. Regardless of whether it was done for fun or out of ignorance by minors, what happened is a serious indication of a larger problem the society faces.
abuses of AI misuse |
---|
Protecting Against Malevolence
A proactive approach to mitigating technology-related malevolence necessitates a multi-pronged strategy. Strengthening digital literacy, enacting robust privacy protections, fostering awareness, and instigating legal recourse collectively contribute to resilience against such threats.
In this case, some of the measures that can be taken include:
-
Educating themselves and others about the potential risks and benefits of AI, and how to use it responsibly and ethically. They can also learn how to identify and report AI misuse, and how to seek help or redress if they are affected by it. Advocating for the development and implementation of laws, regulations, and standards that ensure the protection of human rights, privacy, and security in the use of AI. They can also demand transparency and accountability from the actors who create, deploy, or use AI systems, and hold them liable for any harm or damage caused by AI misuse.
-
Advocating for the development and implementation of laws, regulations, and standards that ensure the protection of human rights, privacy, and security in the use of AI. They can also demand transparency and accountability from the actors who create, deploy, or use AI systems, and hold them liable for any harm or damage caused by AI misuse. Empowering themselves and others to exercise their rights and choices in relation to AI, and to participate in the governance and oversight of AI systems. They can also seek to influence the design and development of AI systems that are aligned with their values, needs, and preferences.
-
Supporting the creation and adoption of ethical principles, guidelines, and best practices for the use of AI, such as those proposed by the Council of Europe, the European Commission2, or the Singapore Model AI Governance Framework. These include compatibility with fundamental rights, non-discrimination, maintaining quality and security, acting transparently, impartially and fairly and finally ensuring that users of AI are informed actors, in control of their choices.
-
Collaborating with other stakeholders, such as civil society organizations, researchers, policymakers, or industry representatives, to raise awareness, share knowledge, and promote solutions for preventing and addressing AI misuse. They can also join or support initiatives that aim to foster a culture of trust, respect, and responsibility in the use of AI.
a safer digital landscape |
---|
Safeguarding against technology-driven malevolence requires a united front. Communities, individuals, cybersecurity professionals, and legal entities all play pivotal roles in minimizing the impact of technological exploitation. Collaboration is crucial in the pursuit of a safer digital landscape.
References
[1] https://www.bbc.com/news/world-europe-66877718
[2] https://7news.com.au/news/aussie-woman-makes-shocking-discovery-after-googling-herself-c-10369809
[3] https://www.cnet.com/news/privacy/deepfake-bot-on-telegram-is-violating-women-by-forging-nudes-from-regular-pics/
[4] https://medium.com/ruuh-ai/chatbots-and-abuse-a-growing-concern-77f3775f93e6
[5] https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/
[6] https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/
[7] https://www.theguardian.com/technology/2023/apr/11/canadian-security-experts-warn-over-spyware-threat-to-rival-pegasus-citizen-lab