Weeknotes 291 - the valuable friction of context
Showing sources is not enough; we need to design the encounter. Some thoughts. And the latest notions from the news, a paper on AI and democracy, and more.
Hi, y’all! Welcome to the new readers! Below I share a bit more background on this (weekly) newsletter.
These are busy weeks with events and more. Last week, I attended a community gathering of the expertise center systemic co-design. And I pitched the Wijkbot as a learning and empowerment platform at the Robodam meetup. I started preparing for two sessions this week, a presentation and a workshop. First, I will share thoughts on Generative Things at an evening meetup at CleverFranke; drop by if you are in Utrecht! And we will be doing a workshop with the WijkbotKit at the PublicSpaces conference this Thursday, using prototyping to dive into the meaning of urban robots for public space. I am looking forward to seeing what the learnings will be.
The role of AI and our relation to it are developing every week. I just listened (spread over a few separate moments) to the podcast episode of Lex Fridman with Roman Yampolskiy, which discussed, among other things, the dangers of superintelligence. What if AI takes the role of a manipulative dictator who will never leave? An uplifting thought to start with…
Triggered thoughts
The value of context in AI. The case of Google’s AI Overviews and its problems proves that we still need context to make sense of what we read. And that context needs to be in your face, part of the experience. Years ago, at the beginning of the digital era, when the framing was shifting from GUI to UX, the book “Don’t Make Me Think” was popular. It was a ‘bible’ for usability-driven interfaces: remove as much friction as possible. I have always been part of the design “school” that thinks friction should be part of the experience. Design for friction. Make people aware of the impact of their choices. This is also important in times of AI. It is not only about finding and presenting the sources alongside search results in an AI chat, like the difference between Perplexity and ChatGPT.
But Perplexity presents the sources as a backstory for those who look for them; the superficial reader will not dive into them. The setup is, of course, similar to academic writing: the references in an academic paper are rarely read and tracked down either. What is the difference between Perplexity and comparable presentations of AI-found results? Peer review: academic articles are peer-reviewed, so you can trust that there is rigor and that references are checked, dampening any doubts. It is not enough to add links to Perplexity’s response to the question; more is needed for trusted results.
The examples with Google’s AI Overviews make that clear. The sources are right, or rather, they exist. A source mentions that non-toxic glue keeps cheese from sliding off your pizza. The source was a joke, though; without the full context in your face, you might believe the end result. In a way, we are spoiled by media that are, in principle, doing the work of diving critically into what we read for us. Google seems to have made improvements.
Last week I shared an article on a new approach by Anthropic, mapping the mind of LLMs: Claude is not only presenting an answer but also reflecting on the answer and question in combination, doing a kind of peer review of its own answer. That feels like a good first step. It would be even better if there were not one reviewer but multiple, with different backgrounds. Building such a system is complex but possible. Add a human in the loop and it becomes even more balanced. It almost starts to look like the page-ranking ecosystem. Nothing wrong with that.
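To make that idea a bit more concrete, here is a minimal, purely hypothetical Python sketch of such a panel review: multiple reviewers with different backgrounds judge an answer, and a human in the loop can get the final say. The reviewer functions, names, and the simple majority rule are my own illustrative assumptions, not Anthropic’s approach or any real model API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Review:
    reviewer: str
    approves: bool
    comment: str

def panel_review(
    question: str,
    answer: str,
    reviewers: List[Callable[[str, str], Review]],
    human_check: Optional[Callable[[str, str, List[Review]], bool]] = None,
) -> bool:
    """Collect reviews from every reviewer; optionally let a human decide."""
    reviews = [review(question, answer) for review in reviewers]
    approvals = sum(r.approves for r in reviews)
    verdict = approvals > len(reviews) / 2          # simple majority of the panel
    if human_check is not None:                     # human in the loop gets the final say
        verdict = human_check(question, answer, reviews)
    return verdict

# Dummy reviewers standing in for models (or people) with different backgrounds.
def factual_reviewer(question: str, answer: str) -> Review:
    return Review("fact-checker", "glue" not in answer.lower(), "checks claims against sources")

def context_reviewer(question: str, answer: str) -> Review:
    return Review("context-checker", True, "checks whether the source was satire or serious")

if __name__ == "__main__":
    approved = panel_review(
        "How do I keep cheese on my pizza?",
        "Add some non-toxic glue to the sauce.",
        reviewers=[factual_reviewer, context_reviewer],
    )
    print("Answer approved by the panel:", approved)
```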
Researcher Emily Bender addresses this as “Information is Relational”.
In short, designing interactions with AI output requires more than sources to prevent accidents; it also requires an active representation of the output's context.
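As a thought sketch of what such an “active representation of context” could look like, here is a small, hypothetical data structure in Python: every answer carries its sources plus what kind of sources they are, so the interface can foreground that context instead of burying it as a backstory. The class and field names are illustrative assumptions, not any existing product’s API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Source:
    url: str
    kind: str            # e.g. "peer-reviewed", "news", "forum joke"
    reviewed: bool       # has anyone checked this source in this context?

@dataclass
class ContextualAnswer:
    question: str
    answer: str
    sources: List[Source] = field(default_factory=list)

    def render(self) -> str:
        """Render the answer with its context up front, not hidden behind a link list."""
        shaky = [s for s in self.sources if s.kind == "forum joke" or not s.reviewed]
        header = "Context warning: based partly on unreviewed or satirical sources.\n\n" if shaky else ""
        source_lines = "\n".join(f"- {s.url} ({s.kind})" for s in self.sources)
        return f"{header}{self.answer}\n\nSources:\n{source_lines}"

if __name__ == "__main__":
    answer = ContextualAnswer(
        question="How do I keep cheese on my pizza?",
        answer="One widely shared tip suggests adding non-toxic glue to the sauce.",
        sources=[Source("https://example.com/forum-thread", "forum joke", reviewed=False)],
    )
    print(answer.render())
```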
For the subscribers and first-time readers (welcome!), thanks for joining! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and I have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things, and I organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.
Notions from the news
The weekly look at remarkable chunks of news, divided into some themes.
Human-AI partnerships
We once did a project here on a transparent charging station as design fiction, a critical reflection on possible algorithmic choices. It makes sense to have AI help balance energy systems.
AI agents are the big promise: bots that start working for you, like Anthropic is offering. And Siri is expected to become an agent for your apps. WWDC has been given the title Action Packed, which might be an indication of that focus too.
Labeling AI as superhuman undermines the essence of being human. This is the claim of this piece.
Rolling back time with the newest technologies. The sound effects of a classic radio play no longer need to be made with sheets of metal, rice, or any other strange sound-making object. Just prompt Eleven Labs.
A positive form of human-AI partnership is the AI that cures us of our illnesses, organizing medical solutions and medical scientific proof.
We have had these art projects, and in the latest OpenAI demo of GPT-4o, there was also a mixture of real people and an artificial one. In that case, it was not a body double or proxy of a real human. It makes a lot of sense; the number of meetings will explode if you can attend multiple meetings in parallel 🙂
Will the good old (ok, not so old) voice agents return to our lives?
New stuff: Perplexity is introducing Pages. The concept of creating shareable knowledge spaces has been done before in other waves, and I am curious how this one will hold up.
Robotic performances
A Starbucks operated by robots; it might not be super shocking, but it is interesting to think about the impact on the brand experience, as the treatment at the counter is part of that.
The application of AI in supply chains before humans are in the loop.
OpenAI is restarting its robotics research group after a pause. GPT-4r to be expected?
Immersive connectedness
Oh, Magic Leap is still there. A good catch for Google?
Tech societies
I was not aware ICQ was still running. I have good memories of it as our first company-wide chat, bridging a couple of floors, at the end of the ’90s. I still remember the characteristic notification sound.
Chatbots are totally different nowadays and will become part of the good old workspace.
Changes in last-mile e-commerce will drive the logistics.
Will this be the year of AI influencing the elections in all kinds of ways? We can expect several stories on this for the rest of the year.
Politics: an AI office in Europe.
Paper for the week
This week, as we are in an election week, this might be an interesting paper to check: AI and Epistemic Risk for Democracy.
(…) AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, the substance may be increasingly a recursive byproduct of AI itself – built on what we might call “epistemic anachronism.” This paper argues that epistemic capture or lock-in and a corresponding loss of autonomy are pronounced risks, and it analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics.
Wihbey, John, AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge? (April 20, 2024). Available at SSRN: https://ssrn.com/abstract=4805026 or http://dx.doi.org/10.2139/ssrn.4805026
Looking forward
As mentioned above, I am looking forward to presenting at the meetup organized by CleverFranke in Utrecht and to the workshop with the Wijkbot on Thursday at the PublicSpaces conference in Amsterdam, making tangible how to think about urban robots.
NPO is organizing the Week van de Toekomst in Hilversum until 6 June. The future of media, that is, I think.
Tied to the European elections: the Future of AI in a political context, Thursday in Amsterdam.
Earlier, I was at the Digital Right House lecture; there is a follow-up this Monday. And on Tuesday, there is the work from Imagining Future Everydays in Eindhoven.
Finally, next week, Mozfest will be back in Amsterdam with Mozfest House. Good memories from last year’s edition.
Enjoy your week!