Weeknotes 279: robotic fashion as mild exoskeletons
How will robotics develop as mild exoskeletons and become a fashion item, creating a new balance and human-robotic governance? And the latest notions from the news.
Hi, y’all! For the new subscribers or first-time readers: welcome! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and I have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to do near-future explorations in the context of Cities of Things, and I organise ThingsCon. In the newsletter this week, next to the notions from the news on AI, robotics and beyond, I reflect on some thoughts triggered while reading a new book that connects directly (so it seems at 25% progress) to human-machine co-performance.
Triggered thought
I am reading a new book on robots, specifically the collaboration between humans and robots: The Heart and the Chip: Our Bright Future with Robots (more on the author, Daniela Rus). I just started, but I found this an interesting angle.
“(…) imagine a future in which our clothing will double as soft exoskeletons, monitoring our muscles and vital signs to enhance our abilities, alerting us to health problems, preventing dangerous falls, and much, much more.”
Later, the author dives a bit more into good old techno-optimism (as the title foreshadows), but what I like about this frame is the merging of human and robotic enhancements and the notion of keeping that enhancement a gradual improvement of human capabilities instead of a big power move. A consequence of these extended capabilities is a ‘marriage’ of the strengths of both humans and machines. Intelligent machines. She describes a world where mundane routine tasks are delegated, freeing up more time for humans to do ‘human stuff’, a familiar frame, of course.
What if we think it through, though: what kind of relationship do we want to have with these AIs? Are the AIs butlers, helpers or companions? Is the human to the new generative AI-powered machine as the creative director is to the designer? A creative director inspires and steers based on a vision, while the designer is the one making it into reality, with their own agency and contribution to shaping it. Or do we grow into totally new forms of relationships and authority models? Holacracy practices for robot-human communities? Agency is a key concept to think about nowadays, so much is clear.
How will this relationship develop over time? Would the robot that supports the family stay with you to help out when you become elderly? Or are these different ones? I'm curious to find out if the book will dive into these kinds of topics and what the conclusions will be. I'll keep you posted.
Notions from the news
In AI
Last week the arms race of GenAI models was sparked by the new version of Claude. This week Pi is trying to grab the attention. I have to say that I have not used the app since the first time I experimented with Pi, and I also still need to continue reading the book of the founder.
Is it interesting to follow the boardroom moves of OpenAI? It is a real-life soap, definitely, if you also include the battles of Musk with OpenAI, who announced he will open-source Grok.
The impact of AI-designed things, in general, is something to follow. Especially if it is designing fundamental parts of life, like proteins.
We see a rise in synthetic media and in fake storytelling, and not for nothing is there a lot of attention for this. I was triggered here not so much by those aspects but by the idea of simulating a reality built from raw elements. Take the example of food ’grams: plates of food presented as if you are in a restaurant, based on the meals you can order from a ghost kitchen. That feels different. “Generative models excel at making ideas obvious; they make it easy to manufacture obviousness, not truth.”
With new technology, new business concepts, and new gurus, we get new jargon. Evals.
A framework for AI alignment
Learn from doing. Possible by others. Training of LLMs from the ground up.
What is your favourite LLM system’s prompt?
Is AI accelerating the demise of the internet?
Some weeks ago, I was wondering if there is an end for prompt engineering. This feels like a follow-up.
Robotic creatures and behaviour
Tiny, very tiny ones. And creepy but social ones.
This will be a returning topic here: robotic ‘devices’ with generative AI embedded. Not always for a good reason, but it is possible.
New spatial reality and other futures
This research ‘paper’ by Modem triggers me to think about the role of gestures as a new form of language. The internet of touch might be back in a different form: touch at a distance. This is a topic I explored a lot back around 2015. Is it now part of our Google reality?
The Verge is still providing practical guides for your smart living at home.
Looking back at 1900s futurism with someone who does not believe in predicting the future.
Digital life
Is Twitch done indeed?
How does quantum security work? And how do you switch it on on your phone?
Street View won the privacy battle in Germany.
Paper for the week
Humanoid Locomotion as Next Token Prediction
As you might know, the core workings of generative AI and large language models are based on tokens: parts of words that are combined based on predicted probabilities. Would that also work for the physical movements of robots, or more specifically, humanoids?
We cast real-world humanoid control as a next token prediction problem, akin to predicting the next word in language. Our model is a causal transformer trained via autoregressive prediction of sensorimotor trajectories. To account for the multi-modal nature of the data, we perform prediction in a modality-aligned way, and for each input token predict the next token from the same modality.
Radosavovic, I., Zhang, B., Shi, B., Rajasegaran, J., Kamat, S., Darrell, T., ... & Malik, J. (2024). Humanoid Locomotion as Next Token Prediction.
Link: https://arxiv.org/abs/2402.19469
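To make that last idea concrete, here is a toy sketch of my own (not the paper’s code, and leaving out the transformer entirely): in a trajectory where observation and action tokens are interleaved, ‘modality-aligned’ prediction means each token’s training target is the next token of the same modality, not simply the next token in the sequence.

```python
# Toy illustration of modality-aligned next-token targets.
# tokens: an interleaved sensorimotor sequence of (modality, value) pairs,
# e.g. "o" for observation tokens and "a" for action tokens.

def modality_aligned_targets(tokens):
    """For each token, pair it with the next token of the SAME modality.

    Returns a list of (input_token, target_token) pairs; tokens without
    a later same-modality token get no pair.
    """
    pairs = []
    for i, (mod, val) in enumerate(tokens):
        # Scan forward for the next token sharing this modality.
        for nxt_mod, nxt_val in tokens[i + 1:]:
            if nxt_mod == mod:
                pairs.append(((mod, val), (nxt_mod, nxt_val)))
                break
    return pairs

# An interleaved observation/action trajectory:
seq = [("o", 1), ("a", 10), ("o", 2), ("a", 20), ("o", 3)]
print(modality_aligned_targets(seq))
# → [(('o', 1), ('o', 2)), (('a', 10), ('a', 20)), (('o', 2), ('o', 3))]
```

Note how the observation token (“o”, 1) is paired with the next observation, skipping the action in between; a plain language-model objective would instead predict whatever token comes next regardless of modality.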
Looking forward to this week
The last time I was in Austin for SXSW was in 2019. It feels more recent due to the COVID years. Normally I did a two-year cycle, so it might be about time to consider going again next year. I try to keep track of the vibe (shifts) from a distance, as that is an important value for me. The topics are not per se very new or unique, and the popular format of panels is not always stimulating for the depth of the stories. But the overall feeling about what is at stake, and having everyone together in one place, creates a vibrant atmosphere. I hope to synthesise it more next week. I keep track via the usual social channels and via a Dutch blog by Erwin that I mentioned last week already. Early impressions: AI is, of course, the talk of the town. As the proposals for sessions and talks are due in August, last year’s edition was not fully filled with the latest generative AI boost; this year it is. Happy to notice explorations on the impact of AI on our physical space and the relation of us living with intelligent digital twins.
So, to keep it closer to home or online, check out the talk by Simone Rebaudengo this Thursday evening (online), one of the designers of human-AI companionship I like to follow.
More down to earth: 15 March, Agency at Night in Rotterdam. And I am looking forward to seeing the works of Marina Abramovic.
Next week, a new event for a specific group is starting in Delft: a Home Assistant meetup. And General Seminar is back, a lovely intimate event format. It is a bit more expensive than before, with two timeslots, for evening or night owls.
Have a great week!