Human Rights and Artificial Intelligence: Recent Developments (Spring–Summer 2025)

15 min read · Sep 27, 2025

Artificial intelligence will not erode human rights by force, but rather by quiet normalization … when surveillance feels ordinary, when bias feels inevitable, and when freedom becomes something we no longer notice is missing. ~ Murat Durmus

Academic Research (Peer-Reviewed Papers)

  • AI Surveillance and Privacy: Hugo Chan & Noble Lo (Journal of Posthumanism, April 2025) examine how AI-powered surveillance tools (facial recognition, predictive policing algorithms, drones, smart sensors, etc.) pose a “significant and unprecedented” threat to the fundamental human right to privacy[1]. They highlight critical deficiencies in current laws and ethics — notably a lack of transparency, fairness, and accountability — which often marginalize vulnerable groups and establish “privatized systems of social control” through pervasive monitoring[2]. The authors warn that normalizing continuous AI surveillance could erode democratic society, creating a dystopian future where individuality and free choice are illusory[3]. To counter this, they advocate human-rights-centric safeguards — including privacy-by-design, algorithmic transparency, and human oversight — to ensure AI innovations respect human dignity, liberty, and democratic values[4].
  • Bias and Discrimination in AI Systems: Andrej Krištofík (International Journal for Court Administration, April 2025) discusses how algorithmic decision-making can replicate and amplify existing biases, using the judicial context as a case study. The research notes that data-driven AI models (e.g. bail or sentencing tools) trained on historical decisions may perpetuate racial and gender disparities, even when direct identifiers are removed[5][6]. The paper reviews various types of algorithmic bias and compares emerging technical remedies with legal standards (such as the European Court of Human Rights’ approach to bias)[7][8]. It concludes that addressing AI bias is crucial for fair and rights-respecting use of AI in high-stakes domains, suggesting that existing human rights frameworks (like the right to a fair trial) should inform the regulation of AI to prevent discrimination and uphold justice.
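
Krištofík’s warning that stripping direct identifiers does not remove bias can be made concrete with a small simulation. The sketch below is purely illustrative (the groups, the “district” proxy, and the denial rates are invented for this article, not drawn from the paper): a model that never sees the protected attribute still reproduces the historical disparity, because a correlated proxy feature carries the signal.

```python
# Toy demonstration of proxy bias; all names and numbers are invented.
import random

random.seed(0)

# Synthetic "historical" dataset. 'district' acts as a proxy:
# group A lives mostly in district 1, group B mostly in district 0.
data = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    district = 1 if (group == "A") == (random.random() < 0.9) else 0
    merit = random.random()
    # Historical decisions were biased: B was denied more often at equal merit.
    denied = merit < (0.3 if group == "A" else 0.5)
    data.append((group, district, denied))

# "Fairness through unawareness": drop `group` and predict a denial
# probability from the remaining feature, district, by copying history.
def denial_rate(district):
    rows = [r for r in data if r[1] == district]
    return sum(r[2] for r in rows) / len(rows)

rate = {0: denial_rate(0), 1: denial_rate(1)}

# Demographic-parity check: expected denial rate per (hidden) group.
for g in ("A", "B"):
    rows = [r for r in data if r[0] == g]
    predicted = sum(rate[r[1]] for r in rows) / len(rows)
    print(f"group {g}: expected denial rate from the 'blind' model = {predicted:.2f}")
# Prints roughly 0.34 vs 0.46: the gap survives removal of the attribute.
```

Technical debiasing therefore has to address proxy features and historically biased labels, which is why the paper pairs technical remedies with legal standards rather than treating identifier removal as a fix.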

NGO Reports and Briefings

  • Autonomous Weapons and Human Rights: Human Rights Watch — “A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making” (April 2025) — This report documents how “killer robots” (weapons that select targets without human control) could infringe on multiple human rights[9]. It finds that autonomous weapons would have extreme difficulty complying with obligations like the right to life (e.g. making arbitrary lethal decisions without human judgment), the right to peaceful assembly (unable to distinguish peaceful protesters, potentially chilling free expression), human dignity (dehumanizing people as data points), and non-discrimination (AI biases in training data could target marginalized groups)[10][11]. HRW, co-founder of the Stop Killer Robots coalition, calls for a new international treaty to ban or strictly regulate such systems — insisting on “meaningful human control” over the use of force to prevent digital dehumanization in war and law enforcement[12].
  • Big Tech Power and Human Rights: Amnesty International — “Breaking up with Big Tech” briefing (August 2025) — Amnesty warns that the dominance of five tech giants (Alphabet/Google, Meta, Microsoft, Amazon, Apple) has created “unchecked power” over the digital sphere, posing serious risks to human rights[13]. The briefing illustrates how these companies have built and maintained monopoly power across social media, app stores, cloud services, and other platforms, enabling pervasive data harvesting and profiling that is incompatible with privacy rights[14]. It also highlights cases of algorithmic bias and content moderation failures — e.g. inconsistent takedowns and opaque algorithms amplifying harmful content — which threaten freedom of expression and access to information[15][16]. Amnesty urges governments to treat reining in Big Tech’s dominance as a human rights imperative, using competition and regulatory tools to do so. Recommended actions include investigating tech firms’ anti-competitive practices that harm rights, breaking up companies when necessary, and scrutinizing the emerging generative AI sector for human rights risks[17]. The goal is to create an online environment where privacy, non-discrimination, and free expression are safeguarded by design[18][19].
  • Facial Recognition and Policing: Amnesty International (February 2025) also condemned Google’s reversal of its ban on AI for surveillance and weapons, calling it “a blow for human rights.” In a public statement, Amnesty noted that allowing Big Tech to supply AI for facial recognition, predictive policing, and military uses could accelerate abusive surveillance and repression. They joined other NGOs in stressing that such technologies, if unregulated, pose acute threats to the right to privacy and freedom of assembly and could facilitate unlawful profiling of minorities. Amnesty reiterated its calls for a moratorium or ban on real-time facial recognition in public spaces and for strict legal frameworks to ensure that AI use by governments and corporations does not undermine civil liberties (echoing the need for the “red lines” later taken up by UN experts)[20][21]. (Source: Amnesty USA News, Feb 6, 2025 “Google’s shameful decision… is a blow for human rights”)
  • Children’s Rights and AI: Human Rights Watch — dispatches (June–July 2024, continuing impact) — Though these dispatches predate the period under review, they raised alarms about how AI companies exploit personal data without consent, including children’s photos scraped from social media to train facial recognition and generative AI tools. Reports documented that neither children nor parents are aware that their images fuel AI systems, which violates privacy and dignity. In one case, Brazil’s government intervened to prevent Meta from using people’s Instagram photos to develop AI, citing concerns about human rights. These efforts by civil society in 2024 laid the groundwork for 2025’s intensified focus on data protection in AI — now reflected in new laws (like the EU AI Act’s provisions on data transparency) and calls for stronger consent and privacy safeguards globally.

(Additional NGO contributions include Human Rights Watch’s advocacy for AI accountability in the US — in a 2023 joint statement, HRW and 85+ groups urged Congress to address AI bias in hiring, social services, and policing that were already “denying people opportunities and civil rights”, especially in marginalized communities[22][23]. This civil society pressure anticipated many themes of 2025 policy debates.)

International Policy Developments

  • United Nations — Human Rights Council: In June 2025, UN human rights experts sounded the alarm about AI’s impact. The UN Working Group on Business and Human Rights presented a report to the Human Rights Council (59th session) urging that AI systems be procured and deployed only with robust human rights safeguards[24]. The Working Group warned of severe harms when governments or companies adopt AI without due diligence: women, children, and minorities are particularly at risk of outcomes like discrimination, invasions of privacy, and social exclusion from biased or unchecked AI tools[21]. The UN experts insisted that “States must act as responsible regulators, procurers, and deployers of AI” and set clear red lines against AI uses that are fundamentally incompatible with human rights — such as remote real-time facial recognition, mass surveillance of the public, or predictive policing algorithms[20]. They noted these applications intrinsically threaten rights to privacy, equality and freedom. The UN report also highlighted the fragmented global landscape of AI governance and called for urgent international cooperation to fill gaps. States and businesses should conduct human rights impact assessments, ensure transparency and accountability, and provide remedies for AI-driven abuses[25]. “AI systems are transforming our societies, but without proper safeguards, they risk undermining human rights,” the Working Group chair warned, underscoring that global standards (aligned with the UN Guiding Principles on Business and Human Rights) are needed to guide AI development in a rights-respecting direction[24][26].
  • European Union — The AI Act: The EU finalized its Artificial Intelligence Act in 2024, and implementation is underway in 2025. This landmark regulation — the world’s first comprehensive AI law — takes a rights-protective, “human-centric” approach to AI governance[27][28]. It uses a risk-based framework: AI systems posing an “unacceptable risk” to people’s safety or fundamental rights are outright banned[29]. Banned practices include social scoring of individuals by governments, AI-driven manipulation that exploits vulnerabilities, indiscriminate scraping of online data (or CCTV footage) to build facial recognition databases, and any use of real-time remote biometric identification (facial recognition) in public by law enforcement[30][31] — reflecting Europe’s stance against AI-enabled mass surveillance. Other AI uses are designated “high-risk” — for example, AI in critical infrastructure, education, employment (hiring algorithms), credit scoring, public benefits, law enforcement or migration control — because they can “pose serious risks to… fundamental rights.” These high-risk systems will be subject to strict obligations before and after deployment[32][33]. Developers must implement mitigation measures and oversight: e.g. rigorous risk assessments, high-quality training data to minimize bias and discriminatory outcomes, transparency and documentation requirements, human oversight mechanisms, and accountability provisions[34]. By mandating bias mitigation and banning certain harmful AI practices, the AI Act directly addresses issues of algorithmic discrimination, privacy, and surveillance. Some parts of the law have already taken effect (as of Feb 2025, the bans and certain obligations), with full application by 2026[35]. The EU’s initiative is influencing global AI policy, showcasing how legislation can promote trustworthy AI that “guarantees safety, fundamental rights and human-centric” innovation[36]. (A simplified sketch of this risk-tiering logic appears after this list.)
  • Council of Europe — Global AI Treaty: The Council of Europe (a 46-nation human rights body) spearheaded the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, which opened for signature on 5 September 2024. This is the first international, legally binding treaty on AI with a human-rights-centric focus[37]. Its aim is to ensure that AI systems’ entire lifecycle — from design and development to deployment — is fully consistent with human rights, democracy, and the rule of law[37]. Over the past year, the Convention has attracted 17 signatories, among them Council of Europe members such as the UK, the EU itself, observer states including Canada, Japan, and the USA, and the non-European state Uruguay, reflecting broad interest in a global AI governance framework[38]. In September 2025, a conference marking the treaty’s first anniversary gathered experts to discuss its implementation and the challenges ahead[39]. Key themes of the Convention include requirements for risk and impact assessments of AI, promoting transparency and accountability, and mechanisms for international cooperation on AI oversight. The treaty underscores the commitment to manage AI-related risks to human rights while encouraging technological innovation that aligns with democratic values[40]. As signatories move toward ratification and entry into force, the CoE AI Convention is poised to set a global baseline for rights-respecting AI governance.
  • Freedom Online Coalition — Joint Statement: In a June 2025 joint statement, the Freedom Online Coalition (an intergovernmental partnership of 34 countries committed to internet freedom) addressed AI’s implications for human rights. The statement acknowledged AI’s potential benefits but voiced alarm at its growing use to “suppress dissent, manipulate public discourse, amplify gender-based violence, enable unlawful… surveillance, and reinforce inequalities and discrimination”[41]. It noted that such trends — from automated censorship to deepfake-driven disinformation — are no longer isolated incidents but increasingly systemic, sometimes even embedded in law enforcement and governance with fewer checks and less transparency[41]. The Coalition welcomed global initiatives (like a new UN resolution on “trustworthy AI” and the Council of Europe Convention) and urged that all AI governance efforts remain rooted in international human rights law[42]. It emphasized protecting freedom of expression, assembly, privacy, and equality in the face of AI advancements[43][44]. The statement also highlighted the need to pay special attention to the rights of those most at risk of AI harms (notably women and girls, who face AI-enabled abuse, and marginalized communities facing algorithmic bias)[44][45]. It called on both governments and tech companies to incorporate human rights by design — for instance, urging private AI developers to follow the UN Guiding Principles on Business and Human Rights and adopt safety-by-design to prevent abuse[46]. This joint statement represents a collective policy stance that the rapid developments in AI must be met with equally robust human rights safeguards internationally.
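
To make the AI Act’s risk-based structure easier to see at a glance, here is the deliberately simplified sketch promised above. The tier names follow the Act’s public summaries (including the limited-risk transparency tier), but the category sets merely paraphrase examples quoted in this article; they are not the regulation’s legal text, and real classification under the Act depends on detailed annexes and context.

```python
# Illustrative-only model of the AI Act's risk tiers; not legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict pre- and post-deployment obligations"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no specific obligations"

# Example use cases per tier, paraphrased from the text above.
BANNED = {"social scoring", "exploitative manipulation",
          "untargeted facial image scraping",
          "real-time remote biometric id by police"}
HIGH_RISK = {"critical infrastructure", "education", "hiring",
             "credit scoring", "public benefits",
             "law enforcement", "migration control"}
TRANSPARENCY_ONLY = {"chatbot", "ai-generated media"}

def classify(use_case: str) -> RiskTier:
    """Map a use case to its (simplified) risk tier."""
    if use_case in BANNED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

for case in ("social scoring", "hiring", "chatbot", "spam filter"):
    tier = classify(case)
    print(f"{case!r}: {tier.name} -> {tier.value}")
```

The point of the tiering is that obligations scale with potential rights impact: a spam filter falls through to minimal risk, while a hiring tool inherits the full high-risk compliance burden and social scoring is excluded entirely.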

Key Issues and Themes

Several common themes emerge across these recent publications and policy updates:

  • AI and Surveillance: There is heightened concern about the use of AI for mass surveillance and its impact on privacy and other rights. Both researchers and rights monitors warn that advanced surveillance technologies (like facial recognition cameras, predictive policing algorithms, and AI-driven drones) can erode the right to privacy and chill civil liberties. For example, Chan & Lo describe AI-enhanced surveillance as creating a “digital Panopticon” that renders “traditional notions of privacy obsolete”[1]. UN experts similarly argue that real-time biometric surveillance in public is fundamentally incompatible with human rights and should be prohibited[20]. Without strict limits, pervasive AI monitoring could deter people from exercising free expression or assembling in public, undermining democracy. This consensus has driven calls for moratoriums or bans on certain surveillance uses of AI, and for privacy-by-design approaches in all AI systems[4].
  • Algorithmic Discrimination: Many recent reports highlight the risk of algorithmic bias leading to unlawful discrimination. AI systems trained on biased data or reflecting their developers’ biases can disproportionately harm marginalized groups. Amnesty International notes that Big Tech’s algorithms and content moderation practices often exhibit bias, with “algorithmic biases highlight[ing] the dangers” of concentrated power over the digital public sphere[15]. The Freedom Online Coalition warned that AI is already being used to “reinforce inequalities and discrimination” at scale[41] — for instance, biased hiring algorithms denying job opportunities or biased policing tools disproportionately targeting minority communities. The EU’s AI Act tackles this issue by requiring high-risk AI models to use high-quality, representative data to minimize discriminatory outcomes, and by outright banning AI that segregates or scores people in ways that violate equality rights[34][30]. There is a clear trend toward viewing algorithmic fairness as a human rights requirement, not just an ethical preference.
  • Privacy and Data Protection: Privacy violations are at the core of human rights critiques of AI. A common thread is that AI’s hunger for big data — often personal and sensitive data — has outpaced existing privacy protections. Amnesty International’s research points out that companies like Google and Meta track vast amounts of personal data (from location to intimate demographics) and that such data harvesting and profiling is “incompatible with the right to privacy”[14]. This sentiment is echoed in policy circles: the UN Working Group urged both governments and companies to conduct human rights due diligence to prevent privacy abuses in AI deployment[21][25]. Moreover, incidents like tech firms using children’s photos from social media to train AI have raised public outrage, reinforcing calls for stricter data protection and consent rules. As a response, new laws (like provisions in the EU AI Act and digital services laws) are starting to require greater transparency in AI training data and give individuals rights over how their data is used in AI systems. Privacy is recognized not only as an individual right but as a prerequisite for freedom of expression and autonomy in the AI age.
  • Freedom of Expression and Information: The impact of AI on freedom of expression has become a pressing topic, especially with the rise of generative AI and automated content moderation. Amnesty’s Breaking up Big Tech briefing describes how a handful of platforms’ AI-driven algorithms govern what information people see online — with opaque moderation and personalized feeds that can distort public discourse[16]. Documented examples include over-removal of legitimate content, “inconsistent moderation” policies, and automated amplification of inflammatory material for profit, all of which can silence certain voices while magnifying harmful speech[15]. Another dimension is AI-generated disinformation: the Freedom Online Coalition noted that AI systems are being used to manipulate content and spread false information, which can poison the information ecosystem and undermine fair elections[41]. Additionally, AI-powered tools have been linked to online harassment, such as deepfakes or bot-driven abuse targeting journalists, women, and activists — blurring the line between real and fake speech and posing novel threats to media freedom and safety. In response, international bodies and experts stress the need for transparency and accountability in how AI curates or moderates content. There are calls for platforms to explain algorithmic decisions, allow appeals for content takedowns, and for regulators to ensure that automated systems do not unjustifiably censor or privilege speech in violation of human rights norms. Balancing the fight against disinformation and hate speech with the protection of free expression is now a key challenge in AI governance.
  • Ethical AI, Accountability and Governance: Across the board, there is strong emphasis on embedding ethical principles and human oversight into AI. Rather than rely on voluntary tech sector promises, recent developments push for binding accountability frameworks. For example, Human Rights Watch advocates that any AI used in life-and-death situations (like weapons or policing) must have “meaningful human control” at all times[12]. The UN Working Group likewise insists that human rights due diligence — assessing and addressing AI’s impact on rights — should be mandatory for both public and private sectors[25]. A key aspect of ethical AI is transparency: people should know when AI is affecting them and how decisions are made. The EU AI Act includes transparency obligations (e.g. informing users when they interact with an AI, and requiring public disclosure for certain high-risk AI systems) as part of a “human-centric” approach[47]. Another aspect is remedy and redress: if AI causes harm, there should be avenues for people to challenge decisions or seek compensation, just as they would for human-caused rights violations[25]. At the international level, the creation of frameworks like the Council of Europe’s AI Convention indicates momentum toward multi-national standards for ethical AI use. These standards mirror long-standing human rights principles — requiring legality, necessity, proportionality, and accountability for any action that affects rights. Overall, the conversation has shifted from abstract AI “ethics” to concrete human rights-based governance: ensuring AI systems are subject to rule of law, oversight, and align with the values of human dignity, freedom, equality, and justice[42][40].
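
The transparency, oversight, and redress principles in the last bullet translate naturally into engineering practice. The following is a minimal sketch of one possible pattern; the DecisionRecord type, its field names, and the credit-scoring scenario are our own illustration, not a requirement of any law or report discussed here. The idea is simply that every automated decision is logged with its rationale and model version, and the affected person gets a hook to contest it, which re-opens the decision for human review.

```python
# Hypothetical pattern for auditable automated decisions (our illustration).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    subject_id: str
    outcome: str
    rationale: str                 # transparency: why the system decided this
    model_version: str             # accountability: which system version decided
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    human_reviewed: bool = False   # oversight: has a person signed off?
    appeal: Optional[str] = None   # redress: the affected person can contest

    def request_review(self, reason: str) -> None:
        """Redress hook: contesting a decision re-opens it for human review."""
        self.appeal = reason
        self.human_reviewed = False

# Hypothetical usage in a credit-scoring context.
record = DecisionRecord(
    subject_id="applicant-42",
    outcome="denied",
    rationale="score 0.38 below threshold 0.50 (inputs: income, tenure)",
    model_version="credit-risk-v3.1",
)
record.request_review("disputed income figure")
print(record)
```

Even this much structure gives a regulator, an auditor, or a court something concrete to examine, which is what accountability means once it leaves the realm of abstract principles.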

Conclusion

In summary, the past six months have underscored a rapidly growing consensus that artificial intelligence must be harnessed in a way that respects and strengthens human rights, rather than undermining them. Academic researchers are mapping the specific ways AI can threaten rights — from privacy erosion via surveillance, to bias and discrimination in automated decisions — and proposing rights-based fixes. NGOs like Amnesty International and Human Rights Watch are shining a spotlight on corporate and government practices, pressing for immediate reforms and moratoria where AI is deployed recklessly. International and regional bodies (the UN, EU, Council of Europe, among others) have started to move from principles to policy — crafting laws, treaties, and guidelines that place human rights considerations at the center of AI development and use[26][36]. Key human rights themes — privacy, equality, freedom of expression, dignity, and accountability — now dominate the global AI discourse, indicating that ethical AI is no longer just a technical question but a fundamental societal imperative. The flurry of recent publications and policy actions suggests that a framework of responsible, rights-respecting AI governance is beginning to take shape, though much work remains to implement and enforce these standards worldwide. By continuing to prioritize human rights in AI research, advocacy, and regulation, the international community aims to ensure that the benefits of AI can be enjoyed without compromising the rights and freedoms of individuals.

(This article was created in collaboration with AI.)

Sources:

· Chan, H. & Lo, N. (2025). “A Study on Human Rights Impact with the Advancement of Artificial Intelligence.” Journal of Posthumanism, 5(2)[1][2].

· Human Rights Watch (April 2025). “A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making.” Report [9][12].

· Amnesty International (Aug 2025). “Breaking up with Big Tech” — Briefing on Big Tech monopolies and human rights[13][17].

· Amnesty International (Feb 2025). Press Release: “Google’s reversal on AI for weapons and surveillance — a blow for human rights.”[20][21].

· UN Working Group on Business & Human Rights — Report on AI Procurement (HRC 59, June 2025) [21][20]; OHCHR News Release, 23 June 2025.

· Freedom Online Coalition (June 2025). Joint Statement on Artificial Intelligence and Human Rights[41][44].

· European Commission — EU AI Act (Regulation 2024/1689) overview[30][34]; Digital Strategy policy page, updated July 2025.

· Council of Europe (Sept 2025). Framework Convention on AI, Human Rights, Democracy and Rule of Law — First anniversary news[37][38].

· Krištofík, A. (2025). “Bias in AI (Supported) Decision Making: Old Problems, New Technologies.” International Journal for Court Administration, 16(1)[48][7].

· Additional reports and analyses by Human Rights Watch and Amnesty International on AI (2024–2025), as cited above[22][16].

[1] [2] [3] [4] (PDF) A Study on Human Rights Impact with the Advancement of Artificial Intelligence

https://www.researchgate.net/publication/390522969_A_Study_on_Human_Rights_Impact_with_the_Advancement_of_Artificial_Intelligence

[5] [6] [7] [8] [48] Bias in AI (Supported) Decision Making: Old Problems, New Technologies | International Journal for Court Administration

https://iacajournal.org/articles/10.36745/ijca.598

[9] [10] [11] [12] A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making | HRW

https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making

[13] [15] [17] [18] [19] Global: Amnesty launches ‘Breaking up with Big Tech’ briefing — Amnesty International

https://www.amnesty.org/en/latest/news/2025/08/amnesty-launches-breaking-up-with-big-tech-briefing/

[14] [16] Why are Big Tech companies a threat to human rights? — Amnesty International

https://www.amnesty.org/en/latest/news/2025/08/why-are-big-tech-companies-a-threat-to-human-rights/

[20] [21] [24] [25] [26] UN cautions govts to safeguard human rights in AI procurement | Biometric Update

https://www.biometricupdate.com/202506/un-cautions-govts-to-safeguard-human-rights-in-ai-procurement

[22] [23] US: Congress must regulate artificial intelligence to protect rights | Human Rights Watch

https://www.hrw.org/news/2023/10/17/us-congress-must-regulate-artificial-intelligence-protect-rights

[27] [28] [29] [30] [31] [32] [33] [34] [35] [36] [47] AI Act | Shaping Europe’s digital future

https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[37] [38] [39] [40] First Anniversary of the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — Artificial Intelligence

https://www.coe.int/en/web/artificial-intelligence/-/first-anniversary-of-the-council-of-europe-framework-convention-on-artificial-intelligence-and-human-rights-democracy-and-the-rule-of-law

[41] [42] [43] [44] [45] [46] Joint Statement on Artificial Intelligence and Human Rights (2025) — Freedom Online Coalition

https://freedomonlinecoalition.com/joint-statement-on-ai-and-human-rights-2025/


Written by Murat Durmus (CEO @AISOMA_AG)

CEO & Founder @AISOMA_AG | Author | #ArtificialIntelligence | #CEO | #AI | #AIStrategy | #Leadership | #Philosophy | #AIEthics | (views are my own)
