Weeknotes 293 - apps as capabilities
Will this wave of mundane intelligence change our app model? And other news, events to visit.
Hi y’all! Welcome to the new readers. As every week, here are some thoughts on last week's news in the context of longer-lasting developments (from my perspective).
Last week was another week with a bigger event I attended: Mozfest House, for the second time in Amsterdam now, and I liked it a lot again. I met interesting people, some I already knew and some for the first time. It brings mixed emotions too, to be honest. Everyone shares an acute awareness of the tumultuous times we are living in, and panels discussing AI accountability during war are super important but set a dark tone. There were also enough critical but future-oriented sessions, like the presentations by the IAM Master on Responsible AI futures. I was also happy to hear Mona Chalabi talk about her work illustrating data. Data journalism is a powerful and necessary way to make sense of our complex world. Let me finally mention the interactive session by Branka Panic on the possible roles of AI in peacemaking. I have to say that my thoughts in the reflection round also went to the potential impact of chilling effects on peaceful minds and behaviors, but I might check out https://www.aiforpeace.org/
Last week I also checked out the student projects from the Imaginaries Lab, experiencing some possible futures. More concrete was the meetup and workshop on Relational Interfaces, part of the program Charging the Commons, with a special role for the learnings of Zoöps. With a fine group of researchers and designers, we discussed concepts for re-presenting, empathic, and mediating interfaces for a concrete commons-based initiative, Buitenplaats Brienenoord. It is definitely food for thought, to be continued in future newsletters.
On Monday we kicked off a new initiative of Rotterdam University of Applied Sciences, a Digital Social Innovation Lab that will emerge “op Zuid”, as they say. Different partners were present to explore first connections and ideas. I was connected both through the exploratory project I did on creative industries and proactive digital services for fighting poverty, and through the experiences and plans for follow-up from the Wijkbot project in Afrikaanderwijk. Speaking of Wijkbots, the videos from PublicSpaces are online, including the ‘unexpected’ appearance of the Wijkbot during the closing on Day 1.
That was a much longer look back than planned; let’s dive into some reflections…
Triggered thought
As expected, the real reflections on Apple Intelligence's announcements arrived in the media after last week’s newsletter; see below for some links. One specific thought was triggered by a sidenote in a video. A typical way of making sense of unpredictable futures.
The Two Minute Papers video mentioned something in passing that might be one of the second-order changes: why would we still need apps? If we can trigger functionality on our phone by giving it situational orders that combine different apps in one experience, we might lose the direct interface connection with the apps. Or, to put it differently, we will still have apps, but we will not use them on their own; we will use their capabilities in an integrated experience. Like making an appointment in a timeslot, connecting it to the current travel context and noticing other concurrent activities happening.
The apps are still needed to create structure and make sense for AI and humans (or vice versa). They structure our thinking and how we feed the system in distinct chunks of actions, preventing double planning, forgetting, or leaving too late. Apple has previously experimented with merging interfaces in media consumption by creating an overlay interface in Apple TV; this is now extended to everything you could use your pocket assistant for.
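To make the “apps as capabilities” idea a bit more tangible, here is a minimal sketch of how an app could declare one of its actions to a system-level assistant, loosely in the shape of Apple’s App Intents framework. The intent name, parameters, and dialog are my own illustrative assumptions, not anything Apple announced for Apple Intelligence specifically.

```swift
import AppIntents

// Hypothetical capability of a calendar-style app: booking a time slot.
// A system assistant could combine this with travel context from other
// apps into one integrated experience. Names and logic are illustrative.
struct BookTimeSlotIntent: AppIntent {
    static var title: LocalizedStringResource = "Book a time slot"

    @Parameter(title: "Start time")
    var start: Date

    @Parameter(title: "Duration in minutes")
    var minutes: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would write the appointment into its own model here;
        // this sketch only confirms the request back to the assistant.
        return .result(dialog: "Booked a \(minutes)-minute slot starting at \(start.formatted()).")
    }
}
```

The point is not this specific API, but the shift in the unit the assistant works with: from the app’s own screens to declared capabilities like this one, which it can weave into a single situated experience.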
And there is an interesting step to consider. In the examples, the merging of information is often on demand: a response is provided when the human operator requests something, and the AI serves you as conveniently and intelligently as possible. But the AI can also be the PA that keeps track of the planning of your life and starts planning for you, suggesting actions for you to take. That is fine and not even new, as it is already built into functionally contained actions like notifying you of the right time to leave. But what if it also suggests acting in a certain way? When the PA reminds you to answer someone before you forget, it makes you more attentive than you really are or would be. It makes me think of the classic PA in real life, or the archetypical executive secretary managing the agenda and making decisions upfront. All of this is based on delegated responsibility. There is a deliberate delegation here: you have too many tasks to perform to be bothered with basic communication, as long as these communication activities are secondary to the core decision-making process. It also works the other way around; people who want to reach a top executive expect that layer in between as a filter or barrier to pass (depending on the need). What will be the consequences for daily interactions between people without these socially structured expectations?
So the interesting question here is whether we will have all these kinds of filters in real-life contact, and how we will manage them. Will they remove personal contact? Will they take over agency and responsibility for our actions? These bigger questions might follow a truly well-made “AI for the rest of us.” The AI is never just a tool on its own; it always represents a system of (social) structures behind it. Even more than the literal supply chain of AI devices, as sketched so well in the Anatomy of AI project.
For the subscribers and first-time readers (welcome!), thanks for joining! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and I have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things, and I organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.
Notions from the news
As mentioned, Apple Intelligence is also this week's inspiration for reflections and beyond, like this very nice one by Matt Webb.
And others like Casey Newton, The Verge, 2, Ben Thompson, Wired, Will Knight, Edward Zitron, The Atlantic, Ethan Mollick
Human-AI partnerships
Delegating mundane tasks to AI to free ourselves for more meaningful work might be based on a misunderstanding. “Implicit in the promise of outsourcing and automation and time-saving devices is a freedom to be something other than what we ought to be.”
“AI tools are transforming how learning designers analyze and improve the learning experience design process”
Little is known about why the experiment ended; the first thing that pops into mind is wondering about the added value of a chatbot at the counter…
This can indeed become a core question: are you an NPC?
AI for good, to solve problems we cannot solve ourselves…
The recurring “AI and your job” is both a meme and a serious research field
How will AI design a robot?
AI expectations
A better way for Google to use AI power.
Robotic performances
Infantry dogs.
But why?
Will we get an Exolympics next to the Paralympics?
The party gear for the summer festivals
Better is not good enough…
Immersive connectedness
Adding datastreams to products will ignite new discussions on data privacy, especially in an era of potential humanlike nudging by AI.
I was wondering about the dashboard-wide new version of CarPlay that was announced years ago. Some news from WWDC:
Just before the perfect storm.
Personalised sonic environments
Are we entering a new thinness?
Tech societies
AGI predictions scrutinized
How to leverage AI?
No surprise that this is now reality. Already like Person of Interest?
Paper for the week
Anatomy of a Robotaxi Crash: Lessons from the Cruise Pedestrian Dragging Mishap
We then explore safety lessons that might be learned related to: recognizing and responding to nearby mishaps, building an accurate world model of a post-collision scenario, the inadequacy of a so-called "minimal risk condition" strategy in complex situations, poor organizational discipline in responding to a mishap, overly aggressive post-collision automation choices that made a bad situation worse, and a reluctance to admit to a mishap causing much worse organizational harm downstream.
Koopman, P. (2024). Anatomy of a Robotaxi Crash: Lessons from the Cruise Pedestrian Dragging Mishap. arXiv preprint arXiv:2402.06046.
Looking forward
I hope to create some time to update the Cities of Things website with the latest proposition and more insights into the activities of last year(s). We are also busy sketching a specific edition of ThingsCon in December on Generative Things. Expect more later this week (I hope).
At Gemaal op Zuid, Afrikaanderwijk Co-op is having an open day. The Wijkbots and Inzamelbot will also be present, including a short presentation.
Those who are around Basel might check this out on 20 June: the Machine Teaching Commons / Teaching Machine Commons Symposium. Or, in London, the IoT Meetup this Thursday.
Enjoy your week!