Weeknotes 290 - designing agon in fair AI systems

This week, another round of AI news. I am thinking about applying agonistic pluralism to generative things and urban robots. Plus more-than-human news and events to visit.

Showing conflict of interactions of objects with agency - by Midjourney

Hi, y’all!

Last week, the saga around Sky and Scarlett continued, not contributing to the reputation of OpenAI. There is an interesting angle that popped into my mind while listening to a discussion in a podcast (one of the many that kept addressing the topic). We passed the typical uncanny valley tipping point with this voice. The uncanny valley graph is often only referred to for the cringe factor of robotics: trying to be as real as possible but missing the mark by just that last bit. The graph continues after that cringe point, however, indicating the moment it feels real for real. That is what the ‘Her moment’ refers to. Another interesting point is that the ‘Her’ reference made by OpenAI is weird in a way, as the movie does not end well in terms of human-AI relations. The benefit, though, is that we will now be prepared and not expect a perfect monogamous relationship with our robot friends. Or we will rethink what monogamy means in that context…

Triggered thought

I would like to connect the triggered thought to a discussion during the PhD defense of Kars Alfrink's work on contestable AI. The research has appeared here before via earlier papers; it is very valuable in the discourse on how to relate to AI systems and services. A too-brief summary, in my words: we should not focus on transparency to deal with the impact of AI, but on contestability, building democratic structures around AI, and especially giving human subjects agency in how they are treated based on the AI's interpretations of their interactions. A more extended definition is, of course, available via his website, contestable.ai.

Two topics from the defense triggered my thoughts. The first was the last question posed: Kars' research focuses on public AI, the systems applied by governments and the like. But many of the systems that influence our lives will be made by private organizations and companies. Is there a difference in the impact, the expectations, and the contestability tactics?

Private AI is part of, and embedded in, a different structure. Democratic systems may not be the go-to solution there, but it is too easy to propose market mechanics (“voting with your feet”). As the needed data and investment in computation are still very large, we can expect the big players to build the AI with the deepest impact (and user value). So regulation, like the European AI Act, is something to look into, more than market dynamics. That is the way to enforce contestability.

Another strategy, which can be part of regulation or sit next to it: arrange a form of interaction literacy to express the values. More concretely: with the Wijkbot project, for example, we aimed to create an urban robot prototype kit that can be used in the process of designing real-world interactions with urban robots, giving citizens ways to formulate their wishes and boundaries. Such two-way systems should be part of AI systems, to continuously calibrate and control their behavior and rulings. (For me, that is a key driver to continue working on the WijkbotKit as an empowerment and educational platform.)

The second thought is related. There was a very nice exchange with Liesbeth van Zoonen on embedding conflict into contestability, following Mouffe's thinking on agonistic pluralism, as referred to in Kars's work. In short, as I understood it, Liesbeth pointed out that in the referenced work, the conflicts relate to collectives, not individuals. Kars indicated that there is indeed a need for continuous research here. I was thinking of a way to treat interaction and conflict as if they are always proxies of collectives. There might be a relation with Wijkbot too (HCIMTAM :-) ), as we initiated a project for students at Industrial Design Engineering on the relation of individuals using Wijkbots in their services in their neighborhood as proxies, with the provocation that these urban robots might form their own collectives and ‘oppose’ the needs of the individual resident. How to deal with, and make use of, these opposing civic robots?

We organized a series of events back in 2017/2018 on Tech Solidarity, wondering how to build a grassroots community of tech workers in the Netherlands advancing the design and development of more just and egalitarian technology. To be honest, that is not an easy job. I think the angle of creating more collective awareness among designers might still be a potential strategy for AI systems. That can stimulate a different mindset and, with that, different outcomes in how the AI interacts with its subjects, building a real co-performing partnership… built on designing with collectives for collectives, based on new forms of democratic principles.

As always, this is just the beginning of the thinking… tbc.

For the subscribers or first-time readers (welcome!), thanks for joining! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and I have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things. And I organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week here, and I can also create more specific versions for you.

Notions from the news

Another week packed with AI, the good and the bad, or the interesting and the flaws…

Human-AI partnerships

The new Google AI Overview function is generating some storms by giving false and misleading answers. In that sense, it is comparable to earlier attempts. The problem might not be the answers themselves, but that it draws the wrong conclusions and presents them as truth.

Google’s “AI Overview” can give false, misleading, and dangerous answers
From glue-on-pizza recipes to recommending “blinker fluid,” Google’s AI sourcing needs work.
Google Is Playing a Dangerous Game With AI Search
The search giant’s new tool is answering questions about cancer, heart attacks, and Ozempic.
Google scrambles to manually remove weird AI answers in search
Google’s AI Overview launch showcases that the race for AI domination is perilous.

A new AI tool from Microsoft might face some backlash, too. It is not easy…

Microsoft’s New Recall AI Tool May Be a ‘Privacy Nightmare’
Plus: US surveillance reportedly targets pro-Palestinian protesters, the FBI arrests a man for AI-generated CSAM, and stalkerware targets hotel computers.

Is it an employee or a tool?

Coding With Devin: My New AI Programming Agent
Is it an employee or a tool?

Mapping the mind of Claude, by Anthropic.

Mapping the Mind of a Large Language Model
We have identified how millions of concepts are represented inside Claude Sonnet, one of our deployed large language models. This is the first ever detailed look inside a modern, production-grade large language model.

Readwise summarized an article on MIT’s EmTech conference as follows: “AI experts gathered at EmTech Digital 2024 to discuss the current state and future of artificial intelligence. Presentations covered topics like AI regulation, generative AI complexity, and the importance of human-centered AI systems.” This feels a bit underwhelming.

EmTech Digital 2024: A thoughtful look at AI’s pros and cons with minimal hype
At MIT conference, experts explore AI’s potential for “human flourishing” and the need for regulation.

This might be a short ride for Humane…

Humane Is for Sale, But Who Would Buy Them?
Link to: https://www.bloomberg.com/news/articles/2024-05-22/wearable-ai-startup-humane-is-said-to-explore-potential-sale

Scrolling through these AI papers, it feels like something I might want to dive into more deeply, or just see as an exploration of a visual language.

ICLR 2024 — Best Papers & Talks (ImageGen, Vision, Transformers, State Space Models) ft. Christian Szegedy, Ilya Sutskever, Durk Kingma
14 of the best papers out of the 2260 papers presented at the 2024 ICLR conference, in 4 sections covering Image Generation, Vision Learning, Extending Transformers, and State Space Models.

AI's impact on scientific research, according to Ethan Mollick.

Four Singularities for Research
The rise of AI is creating both crisis and opportunity

Only two weeks and we know for sure.


And next year in the AirPods Max?

Noise-canceling headphones use AI to let a single voice through
They could help wearers focus on specific voices in noisy environments, such as a friend in a crowd or a tour guide amid the urban hubbub.

Robotic performances

Matt's lovely train of thought on our fiddling nature as humans. “It’s a useful point to put into any industrial design brief. Make sure you can fiddle with it.” Why do I add it here to the robotic performances? I was just wondering how this would connect to the category of new generative things; should these provide us with that fiddling behaviour, just like the AI voice needs its uhs and ahs?

Immersive connectedness

Is it time to be nostalgic for a pandemic?

Let’s burst some bubbles (again)!
The Educationalist. By Alexandra Mihai

It has been a bit silent around the Vision Pro lately, but Apple’s VP of human interface design still believes in the shift.

Vision Pro a “new era of computing” says Apple design head
The Vision Pro will “redefine how we connect and create” says Apple vice president of design Alan Dye, as its operating system wins D&AD Awards’ top prize.

Some years back, ThingsCon looked into a trusted-technology trustmark for connected products: being certain that a product would not be bricked by its digital part before the product's lifetime was over. Spotify's Car Thing is an interesting case study. It is of course not so nice that the product will be bricked. But is the product really bricked, if the physical part is just a skin for the digital service? What is the product here?

Spotify Will Brick Every ‘Car Thing’ It Ever Sold
No refunds or trade-ins for customers. No plans to open source it. The weird little car gadget is going to die on December 9, and there’s nothing you can do about it.
Pluralistic: They brick you because they can (24 May 2024) – Pluralistic: Daily links from Cory Doctorow

Tech societies

AI laws are not only a European issue; states in the US are examining different regulation programs. Speaking of the EU AI Act…

Attempts to regulate AI’s hidden hand in Americans’ lives flounder in US statehouses
State lawmakers' first attempts at regulating discrimination from artificial intelligence have floundered in states across the country.
AI Regulation: And Now? – THE INTERNET OF THINGS

Although the total EV footprint might be smaller than that of a traditional ICE car, the battery is still an aspect to consider.

GM will recycle its EV battery scrap with Tesla co-founder’s company
Redwood says it has deals with most EV makers in the US.

A bit off-topic or maybe not? How does the cardboard box relate to our tech society?

World in a Box: Cardboard Media and the Geographic Imagination
Cardboard boxes hold a world of meaning — a geography of consumption, disposal, and reuse — that spans from Amazon to the Container Corporation of America.

Paper for the week

Nice to see a new article by some of my favorite thinkers in more-than-human design.

The making(s) of more-than-human design: introduction to the special issue on more-than-human design and HCI

This special issue explores the proposition that conventional human-centered design approaches may not adequately address the complex challenges we face, and that there is instead a need to ground design in more-than-human perspectives. This introduction outlines the evolving landscape of more-than-human design in the context of HCI. Articulating a series of emerging research trajectories, we aim to illuminate the transformative potential of more-than-human orientations to design, including how they both extend and depart from familiar lines of inquiry in HCI – for example, how designers are redefining data, interfaces, and responsibility, and reshaping posthuman knowledge through design.

Giaccardi, E., Redström, J., & Nicenboim, I. (2024). The making(s) of more-than-human design: introduction to the special issue on more-than-human design and HCI. Human–Computer Interaction, 1–16. https://doi.org/10.1080/07370024.2024.2353357

Looking forward

If you read this in time and you are around: this afternoon in Rotterdam, I will pitch the Wijkbot alongside other robot projects linked to Rotterdam. https://www.aanmelder.nl/robodam/wiki/1056442/programme

Next week, we will host a workshop with the Wijkbot at the PublicSpaces conference in Amsterdam (6 & 7 June). There is a code for a 10% discount: PSPROMO10.

Also, next week, on Tuesday, I will present on Generative Things at an evening on designing intelligent cultures with data and AI in Utrecht at CLEVER°FRANKE.

Other events: into the legal side of AI with ‘The Meta Case: Challenging discriminatory algorithms through legal means’, this Wednesday in Amsterdam. If you are near Enschede, the Dutch Innovation Days take place at the end of this week. And the ICAI day in Rotterdam on 5 June. And Data and Art Exchange, four performances at v2, at the end of this week.

Enjoy your week!

Buy Me a Coffee at ko-fi.com