Weeknotes 271 - Large Action Model to connect artificial living with real outputs
This week, CES is a trigger for thoughts on AI and the physical world. And more on mundane robotics, embedded intelligence, and wandering humans.
Hi, y’all! Good to have you here again. And an especially warm welcome to the new subscribers who found the newsletter. Thanks, Erwin, for the shout-out. I have been following Erwin for a long time; he is one of the OG innovators who is not talking about it but doing it. He has a special gift for finding the latest and most interesting tools and just starting to build with them. Check out his newsletter (in Dutch).
This week, CES is dominating the news. The good old Consumer Electronics Show is balancing traditional consumer tech (screens, appliances) with a sense of the new hot thing. This year, there is AI in everything. Sometimes (or rather, often) it is silly, with weird ChatGPT integrations that do not make sense. AI vapourware. However, it is an interesting indication of intelligence merging into the physical world. The Verge does nice coverage in their podcast and videos. And the MKBHD studio offers a step-through-the-camera perspective. Oh, and robots seem to be becoming a thing in the home.
Triggered thoughts
I cannot avoid the Rabbit R1, the thing that tries to create a new interface to LLMs in a communication device. Or rather an action device, as they like to frame it. LAM stands for Large Action Model, a model to support you with all kinds of actions you would like to take. In an Apple-like keynote (but shorter), it shows all the usual suspect tasks to prove its usefulness (creating recipes from your fridge contents, planning a holistic holiday itinerary). That is not, per se, convincing. There is also a lot of opposition to the claims. Benedict Evans wrote a good column in his (paid) newsletter. He doubts that service providers like Uber will allow their business models to be broken by being bypassed through these kinds of devices. Another argument references poor service design:
There's also a risk of repeating some pretty basic fallacies. Some people in tech are bored with apps and bored with smartphones, and want to try new things, so they make a new device, even though the entire Rabbit demo was actually an app connected to a cloud service. You can say ‘we’re bored of apps!’ and Humane can say ‘we hate looking at screens!’ but I’m not sure any normal person feels like that.
It is another example of myopic product development, imho. It is another iteration of an Alexa, in a way. That said, I like some of the design choices and explorations in the physical device merged with the service. The Teenage Engineering touch in the industrial design is strong. It tries to make a new form, combining different interfaces of human and machine intelligence in a small pocket-sized device. In contrast to the Humane device, which focuses on voice and projections, this is much more tangible, both in the interactions with the push-to-talk button and the visible rotating eye, and in having the cute bunny (Rabbit) as interpreter. Using comic-like representations is part of a typical Asian design style. You also see it in many car interfaces that originate in China (Nio, Smart #1). Japan was already famous for it but did not export it much beyond its own borders.
The other interesting take is the aforementioned LAM, a Large Action Model that tries to create a new framing for using LLMs in a physical context. That will be super important. One of the biggest questions is whether we will indeed start using separate devices or whether this will become part of next-gen phones. If it can live up to the promised price (199 dollars with no extra subscription), it may become part of the mix of devices we will have in the future. As we now have iPhones and AirPods, we might have a different set, and the battle for a share of the device ecosystem will be on. The podcast Dithering has a similar take on this ecosystem of devices. And who knows, should we read the reorganisation of Google's hardware division as a signal in this direction too?
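To make the LAM idea a bit more tangible: you can think of it as an LLM that returns a structured action instead of prose, plus a layer of service connectors that execute that action in the real world. Below is a minimal sketch of that loop; all the names (`Action`, `plan_action`, the connector registry) are my own illustration and assumptions, not Rabbit's actual architecture.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical shape of a single action an "action model" could emit.
# (Illustrative only; not Rabbit's actual schema.)
@dataclass
class Action:
    service: str      # e.g. "music", "rides"
    operation: str    # e.g. "play", "order"
    arguments: dict   # operation-specific parameters

# Registry of service connectors: each turns an Action into a real side effect.
# This is exactly the layer Evans doubts service providers will keep open.
CONNECTORS: Dict[str, Callable[[Action], str]] = {
    "music": lambda a: f"Playing {a.arguments.get('track')}",
    "rides": lambda a: f"Booking a ride to {a.arguments.get('destination')}",
}

def plan_action(utterance: str) -> Action:
    """Stand-in for the action model itself.

    A real system would prompt an LLM to emit structured output
    (service/operation/arguments) instead of conversational prose;
    a keyword check keeps this sketch self-contained.
    """
    if "ride" in utterance:
        return Action("rides", "order", {"destination": "home"})
    return Action("music", "play", {"track": "something upbeat"})

def handle(utterance: str) -> str:
    """The full loop: utterance -> structured action -> executed connector."""
    action = plan_action(utterance)
    connector = CONNECTORS.get(action.service)
    if connector is None:
        return f"No connector available for '{action.service}'"
    return connector(action)

if __name__ == "__main__":
    print(handle("get me a ride home"))  # Booking a ride to home
    print(handle("play some music"))     # Playing something upbeat
```

Notice that the interesting part is not the model but the connector registry: Evans's critique lands precisely there, since the loop only works as long as services like Uber allow themselves to be called this way.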
The device sparked a lot of excitement, that is for sure; the first batch is sold out (pre-orders for Easter delivery; Easter, rabbit?).
Events to track
- 16-17 January - Rotterdam - OASC (Open Agile Smart Cities). I attended these some years ago in Brussels; this year it is apparently in Rotterdam.
- 17 January - Amsterdam - Sensemakers DIY meetup
- 18 January - online - Visualizing Cities - On Data And Design
- 21 January - Rotterdam - V2_ - Finissage {Class} Exhibition
- 24 January - Amsterdam - DDS showcase
- 24 January - Hilversum - Monitor Dutch Creative Industries
Notions from the news
It is remarkable to notice how a consumer tech fair like CES changes the focal point of the news. Robots and AI are everywhere, in a mundane way, as part of everyday objects and things. At the same time, silly things (aka gadgets) are back. Sometimes, it is hard to decide which category something fits in.
Some of the interesting robots can be found in the multiple overviews. Robots are buddies or companions, like Samsung's Ballie. Lenovo has a companion for lonely people. But there are, of course, also more functional-focused robotic devices.
Someone framed it as: at this year's CES, "everything connected" has been replaced with "everything AI". You can read this as another short-term fad, but it might change the way we use things for real, like shopping, or new products that are even more personalised.
The ultimate overview with a positive impression of the level of interesting things.
And Apple showed how to ‘steal the show’
Roles for AI
Multimodality in AI is a thing in 2024. And teaming up with AI as a team member is the other side of that coin.
Is an LLM good as a lie detector? Or a 'truth' detector? It will need serious scrutiny, though.
And will it trigger a new computing platform architecture?
And the impact on productivity: what it will bring is not yet certain.
The OpenAI store for custom AI bots is here. At the same time, Sam Altman is 'acting' conscious of the impact; a proven strategy to stress potential impact. AI experts are sincerely confused.
“AI is a lever that becomes a lens”. It's always nice to have Dan Shipper’s reflections.
What happened to the metaverse? AI took over rapidly.
Mundane robots
Yes, some humanoids are becoming more serious products. And in kitchens. And more and more, these robots are robotic things rather than traditional robots.
Autonomous driving
Is autonomous driving still happening? It might be, but slower. Or is the AI in cars not, per se, focused on self-driving after all? It was a theme at CES.
What is more 'disturbing' about this CyberRunner experiment: that it is so quick, or that it cheats to achieve a better outcome?
Digital life tooling
Good to see Tom Coates back blogging. I still need to dive into the details of the long post, but I am curious to find out more about Meta's intentions for creating a Fediverse integration of Threads.
Too bad: Artifact is shutting down. It became part of my mix of sources and tools to discover news articles for this newsletter.
The impact of 15-minute cities can be different than expected, at least in London.
Paper for the week
This seems even more relevant now, although it was written before the LLM wave.
People’s Councils for Ethical Machine Learning
The family of machine learning methods is not somehow inherently bad or dangerous, nor does implementing them signal any intent to cause harm. Nevertheless, the machine learning assemblage produces a targeting gaze whose algorithms obfuscate the legality of its judgments, and whose iterations threaten to create both specific injustices and broader states of exception. Given the urgent need to provide some kind of balance before machine learning becomes embedded everywhere, this article proposes people’s councils as a way to contest machinic judgments and reassert openness and discourse.
McQuillan, D. (2018). People's councils for ethical machine learning. Social Media + Society, 4(2), 2056305118768303.
See you next week!
We had a very successful end to the Wijkbot/Hoodbot project as part of the Afrikaanderwijk Coöperatie celebrations. The Wijkbot played an important role in opening the new Grondstoffenstation. The CityLab010 project has ended, but the intention is to continue developing the urban robot platform within the neighbourhood and beyond. We will be shaping plans this month. The same is true for the other Cities of Things project, a Collect|Connect Community Hub. I will give that more focus in the coming period. A new graduate student from IDE will create a deeper concept of the intelligent layer in the coming semester. I will keep you posted! And I will share more on some other projects I am involved in for creative industries programs.
Have a great week!