Weeknotes 291 - the valuable friction of context

Showing sources is not enough; design encounters. Some thoughts. And the latest notions from the news, a paper on AI and democracy, and more.

The valuable friction of context while exploring a service, according to Midjourney

Hi, y’all! Welcome to the new readers! Below I share a bit more background on this (weekly) newsletter.

These are busy weeks with events and more. Last week, I attended a community gathering of the expertise center for systemic co-design. And I pitched the Wijkbot as a learning and empowering platform at the Robodam meetup. I started preparing for two sessions this week, a presentation and a workshop. First, I will share thoughts on Generative Things at an evening meetup at CleverFranke. Drop by if you are in Utrecht! And we will be doing a workshop with the WijkbotKit at the PublicSpaces conference this Thursday, using prototyping to dive into the meaning of urban robots for public space. I am looking forward to seeing what we will learn.

The role of AI and our relation to it are developing every week. I just listened (in a few separate sittings) to the podcast of Lex Fridman with Roman Yampolskiy, which discussed, among other things, the dangers of superintelligence. What if AI takes the role of a manipulative dictator who will never leave? An uplifting thought to start with…

Triggered thoughts

The value of context in AI. The problems with Google’s AI Overviews prove that we still need context to make sense of what we read, and that the context needs to be in your face, part of the experience. Years ago, in the early days of the shift in framing from GUI to UX, the book “Don’t Make Me Think” was popular, a ‘bible’ for usability-driven interfaces: remove as much friction as possible. I have always been part of the design “school” that thinks friction should be part of the experience. Design for friction. Make people aware of the impact of their choices. This is also important in times of AI. It is not enough to merely look up and present the sources with search results in an AI chat, as in the difference between Perplexity and ChatGPT.

But Perplexity presents the sources as a backstory for those looking for them. The superficial reader will not dive into the sources. The setup is, of course, similar to academic writing: the references in an academic paper are rarely read and tracked down either. What is the difference between Perplexity and comparable presentations of AI-found results? Peer review: articles are peer-reviewed, so you can trust that there is rigor and that references are checked, quelling any doubts. Simply adding links to the response to a question in Perplexity is not enough; more is needed for trusted results.

The examples with Google AI Overviews make that clear. The sources are right, or rather, they exist. One source mentions that non-toxic glue keeps cheese from sliding off your pizza. That source was a joke, though; without the full context in your face, you might believe the end result. In a way, we are spoiled by media that are, in principle, doing the work of critically diving into what we read for us. It seems Google has since made improvements.

There was an earlier article on a new approach by Anthropic, mapping the mind of LLMs, that I shared last week: Claude is not only presenting an answer but reflecting on the answer and question in combination, doing a kind of peer review of its own answer. It feels like a good first step. It would be even better if there were not one reviewer but multiple, with different backgrounds. Building such a system is complex but possible. If we add a human in the loop, it becomes even more balanced. It almost starts to look like the page-ranking ecosystem. Nothing wrong there.
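To make the multiple-reviewers idea a bit more tangible, here is a minimal toy sketch (not Anthropic’s actual system): an answer only gets accepted when every “reviewer” with a different perspective approves, with an optional human check in the loop. The reviewer functions and their checks are hypothetical stand-ins; in practice each would be a separate model or prompt.

```python
# Toy sketch of multi-reviewer answer checking with an optional human in the loop.
# All names and checks are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Review:
    reviewer: str   # which "background" reviewed the answer
    approved: bool  # did this reviewer approve?
    note: str       # what the reviewer looked at

def review_answer(answer: str,
                  reviewers: List[Callable[[str], Review]],
                  human_check: Optional[Callable[[str], bool]] = None) -> bool:
    """Accept an answer only if every reviewer (and the human, if any) approves."""
    reviews = [reviewer(answer) for reviewer in reviewers]
    if not all(review.approved for review in reviews):
        return False
    return human_check(answer) if human_check else True

# Two toy reviewers with different "backgrounds":
def source_reviewer(answer: str) -> Review:
    ok = "http" in answer or "[source]" in answer
    return Review("source-checker", ok, "looked for a cited source")

def substance_reviewer(answer: str) -> Review:
    ok = len(answer.split()) > 5
    return Review("substance-checker", ok, "checked the answer has substance")

accepted = review_answer(
    "Glue does not belong on pizza; see [source] for food-safety guidance.",
    [source_reviewer, substance_reviewer],
    human_check=lambda a: "not" in a,  # toy human veto
)
```

The point of the sketch is the shape, not the checks: each reviewer encodes one perspective, and trust comes from their combination rather than from any single answer.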

Researcher Emily Bender addresses this as Information is Relational.

In short, designing interactions with AI output requires more than sources to prevent accidents; it also requires an active representation of the output's context.

For the subscribers or first-time readers (welcome!), thanks for joining! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and I have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things. I also organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.

Notions from the news

The weekly look into remarkable chunks of news, organized by theme.

Human-AI partnerships

We once did a project on a transparent charging station as design fiction, a critical reflection on possible algorithmic choices. It makes sense to have AI help balance energy systems.

How AI could change EV charging
A small study used AI to see what happens when you plug in an EV.

AI agents are the big promise. Start making bots work for you, as Anthropic is offering. And Siri is expected to become an agent for your apps. WWDC has the title Action Packed, which might be an indication of the focus too.

Anthropic’s AI now lets you create bots to work for you
Anthropic is releasing a tool that allows customers to build their own AI assistants.
iOS 18 (and AI) will give Siri much more control over your apps
Bixby did it first!

This piece claims that labeling AI as superhuman undermines the essence of being human.

The Danger Of Superhuman AI Is Not What You Think | NOEMA
The rhetoric over “superhuman” AI implicitly erases what’s most important about being human.

Turning back time with the newest technologies. The sound effects of a classic radio play no longer need to be made with sheets of metal, rice, or any other strange sound-making object. Just prompt ElevenLabs.

ElevenLabs’ AI generator makes explosions or other sound effects with just a prompt
Prompts can combine sound effects, voice generation, and music.

A positive form of human-AI partnership is AI that cures us of our illnesses, organizing medical solutions and medical scientific proof.

An AI tool for predicting protein shapes could be transformative for medicine, but it challenges science’s need for proof
Science has a need to verify results, but DeepMind’s protein prediction tool doesn’t work this way.
How Google’s new AI could revolutionize medicine
Google DeepMind’s AlphaFold 3 could be the future of drug discovery — and the journey to its creation started more than a century ago.

We have seen such art projects before, and the latest OpenAI demo of GPT-4o also mixed real people with an artificial one, though there it was not a body double or proxy of a real human. It makes a lot of sense; the number of meetings will explode if you can attend multiple meetings in parallel 🙂

Zoom CEO Eric Yuan wants AI clones in meetings
Why have fewer meetings when you could just send your AI clone instead?

Will the good old (ok, not so old) voice agents return to our lives?

Hi, AI: Our Thesis on AI Voice Agents | Andreessen Horowitz
Now is the time to reinvent the phone call. Thanks to gen AI, humans will spend time on the phone only when a call has value to them.

New stuff: Perplexity is introducing Pages. The concept of creating shareable knowledge spaces has been tried before in earlier waves, and I am curious how this one will hold up.

Robotic performances

A Starbucks operated by robots; it might not be super shocking, but it is interesting to think about the impact on the brand experience, as the treatment at the counter is part of that.

World’s only Starbucks where 100 service robots fulfill orders
At the South Korean tech giant’s headquarters, people can get a unique Starbucks experience with 100 robots serving customers daily.

The application of AI in supply chains before humans are in the loop.

Amazon’s Project PI AI looks for product defects before they ship
The AI scanner is live in “several” US Amazon warehouses.

OpenAI is restarting its robotics research group after a pause. GPT-4r to be expected?

OpenAI is restarting its robotics research group - The Robot Report
OpenAI is creating a new internal robotics research group after pulling back from robotics research in 2021.

Immersive connectedness

Oh, Magic Leap is still there. A good catch for Google?

Magic Leap is Google’s new mystery partner for XR headsets
Hint: lenses.

Tech societies

I was not aware ICQ was still running. I have good memories of our first company-wide chat, bridging a couple of floors at the end of the ’90s. I still remember the characteristic notification sound.

Chatbots are totally different nowadays and will become part of the good old workplace.

RIP ICQ: Remembering a classic messaging app that was way ahead of its time
ICQ will cease operations June 26. If you know, you know.
AI Agents Are Coming for Mundane—but Valuable—Office Tasks
Anthropic and other big AI startups are teaching chatbots “tool use,” to make them more useful in the workplace.

Changes in last-mile e-commerce will drive logistics.

A systematic literature review of last-mile E-commerce delivery in urban areas
A new paper examines the impact of e-commerce on urban last-mile distribution through a comprehensive analysis of scientific studies. A corpus of 317 publications spanning two decades was reviewed, identifying 111 pertinent sources. Utilizing bibliometric analysis and systematic assessment, the study comprehensively reveals the effects of e-commerce on last-mile delivery. Key findings encompass environmental, economic,…

Will this be the year of AI influencing the elections in all kinds of ways? We can expect several stories on this for the rest of the year.

It’s the AI Election Year
With over 60 countries holding elections in 2024, deepfakes and robocalls are being used to manipulate voters across the world. WIRED is tracking every instance of AI interference during this critical election year.

Politics: an AI office in Europe.

Press corner
Highlights, press releases and speeches

Paper for the week

This week, as we are in an election week, this might be an interesting paper to check: AI and Epistemic Risk for Democracy.

(…) AI technologies are trained on data from the human past, but democratic life often depends on the surfacing of human tacit knowledge and previously unrevealed preferences. Accordingly, as AI technologies structure the creation of public knowledge, the substance may be increasingly a recursive byproduct of AI itself – built on what we might call “epistemic anachronism.” This paper argues that epistemic capture or lock-in and a corresponding loss of autonomy are pronounced risks, and it analyzes three example domains – journalism, content moderation, and polling – to explore these dynamics.

Wihbey, John, AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge? (April 20, 2024). Available at SSRN: https://ssrn.com/abstract=4805026 or http://dx.doi.org/10.2139/ssrn.4805026

Looking forward

As mentioned above, I look forward to presenting at the meetup organized by CleverFranke in Utrecht and the workshop with the Wijkbot on Thursday at the PublicSpaces conference in Amsterdam. Making tangible what you can think about urban robots.

NPO is organizing the Week van de Toekomst in Hilversum until 6 June. The future of media that is, I think.

With the European elections approaching, there is a session on the Future of AI in a political context this Thursday in Amsterdam.

Earlier, I was at the Digital Rights House lecture; there is a follow-up this Monday. And on Tuesday, the work from Imagining Future Everydays is presented in Eindhoven.

Finally, next week, Mozfest House will be back in Amsterdam. I have good memories of last year’s edition.

Enjoy your week!

Buy Me a Coffee at ko-fi.com