The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.
Israeli intelligence sources reveal use of ‘Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking militants
I find that one of the most frustrating kinds of AI hype is when people who are actually in a position to use their own expertise to push back instead give in to the FOMO and do the hype for tech companies. Today's case in point is a recent article in The Chronicle of Higher Education...
This essay is based in part on presentations given in the Spring and Summer of 2018 at the Creative AI Meetup at the Photographer’s Gallery in London, the University of Chicago’s Franke Institute for the Humanities, the Aarhus Institute of Advanced Studies in Denmark, INRS in Quebec, and the University of Warwick Centre for Interdisciplinary Methodologies Research Forum. It is the second part of a longer discussion about deep learning, the first part of which is in the essay, “Deep Learning as an Epistemic Ensemble”.
I wrote this essay for the printed magazine of the Elevate Festival 2024. On Friday, March 1st, at 2pm I will participate in a panel discussion there on the issue of “AI vs. Democracy,” which people can attend live, watch on stream, or view in a recording later.
The proliferating use of Artificial Intelligence Large Language Models (LLMs) presents new challenges we must address and new questions we must answer. For instance, what do we do when AI is wrong?
Yann Le Cun’s book “Quand la machine apprend” helps decode some of the mysteries of artificial intelligence by examining how the neurons of the brain work... the human brain, that is.
Silicon Valley utopians imagine AI solutions to ecological crisis, while being oblivious to the real material and ecological harms their fantasies wreak.
Algorithmic hiring is the use of tools based on Artificial Intelligence (AI) to find and select job candidates. Like other applications of AI, it is vulnerable to perpetuating discrimination. Considering technological, legal, and ethical aspects, the EU-funded FINDHR project will facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation.
Tl;dr: The harms from so-called AI are real and present and follow from the acts of people and corporations deploying automated systems. Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.
Well-known managers and experts such as Elon Musk warn of the risks of artificial intelligence. They want to halt its development; the technology, they argue, must serve people.
William Eden forecasts an AI winter. He argues that AI systems (1) are too unreliable and too inscrutable, (2) won’t get that much better (mostly due to hardware limitations) and/or (3) won’t be that profitable.
How do systems like ChatGPT work? Are they really “intelligent”? What happens when they are deployed at scale? What consequences would that have for libraries?
Here I collect a selected set of critical lenses on so-called 'AI', including the recently hyped ChatGPT. I hope these resources are useful for others as well, and help illuminate why we need to remain vigilant and resist the AI hype. I expect to be updating this blog as time passes. If you have…
J. Hennrich, E. Ritz, P. Hofmann, and N. Urbach. Capturing artificial intelligence applications’ value proposition in healthcare – a qualitative research study (2024)
P. Plackis-Cheng, T. Chalasani, and S. Palme. FINDHR Expert Reports, Fairness and Intersectional Non-Discrimination in Human Recommendation (FINDHR), Barcelona (December 2023)