Thanks for landing here and reading my weekly newsletter. If you are new here, you can find a more extended bio on targetisnew.com. This newsletter is my personal weekly reflection on the news of the past week, through a lens of understanding the unpredictable futures of human-AI co-performances in a context of fully immersive connectedness, and the impact on society, organizations, and design. Don’t hesitate to reach out if you want to know more, or anything more specific.
Hi all!
Still tuning this format. I got some positive feedback, but let me know if you have suggestions! Last week, everyone returned from the holiday break, and together with CES happening, the usual selection of links has doubled… I hope I still managed to keep it concise :-)
What happened last week? ___As planned, the focus was on the report on the Civic Protocol Economy, and next to this, some “left-over” organization for ThingsCon (aka administration). I also discussed some plans for the coming year and caught up with two student teams working on the Hoodbot/Wijkbot workshop tooling.
What did I notice last week? ___Just after I sent last week’s newsletter, the most significant announcement of CES happened; NVIDIA took that honor with an AI in a box for consumers: accessible, and potentially creating a new way we will live with our AI buddies and butlers. Or with a critical friend. Enough to dive a bit deeper in a triggered thought (below).
Next to this AI in a box, Nvidia aims to offer tools for the automated world we will live in, creating virtual environments that can feed the intelligence of agentic things, with or without human learning data. DeepSeek’s new V3 model is apparently performing quite well. Gary Marcus contrasts ‘BSI’, Broad, Shallow Intelligence, with AGI; BSI is a better description of current AI, and probably will be for some time.
CES turned out to be all about AI (as expected) and consumer robotics. As always, it is hard to separate the interesting from the gimmicks (or even screaming silliness), but it is a great indicator of the vibe. Robotics as part of agentic AI, and more functional robotic things instead of humanoid (or animoid) ones, is promising. For now, though, there is still more focus on the social.
Robots are becoming more accessible and friendly, if somewhat silly, as we already saw last week.
Then there is the option to ‘add an AI to it’: AI-enhanced interactions and functions. Think of a new Philips Hue light bulb that gets augmented reality via the app; a signature case for the coming hybrid period.
Will mind-reading devices, such as the Omi device, make the connection between human and machine thinking? Or are they like the first iteration of the glasses that learn about us?
CES was also the place where the smart home is still a promise, or something that is becoming real. You can wonder whether it is now really more relevant, or whether it is a moving target, adapting to the latest new technology as a driver. Right now that is AI, and agentic AI makes a lot of sense here, too.
A cyber trust mark for IoT devices brings back the initiatives from some years ago on Better IoT and the Trustmark.
OpenAI is taking the initiative to build policies.
The biggest news, maybe, was the announcement that Zuckerberg would end fact-checking at Meta to please Trump and his followers. It is rather significant but, sadly, just another step in Meta's relationship with its users as subjects. It will face backlash, but it may destroy much of what connecting digital media can mean. It is not only dangerous; it also creates a chilling effect on the use of social tools. We need strategies to take back agency.
An interesting take is how this influences living in a synthetic reality. Do robots support us in learning new collaborations? A cafe in Japan is aiming at this type of partnership, combining robot service with human staff, with the robot as the instigator.
Triggered thought ___What if Meta’s move is the first step toward a complete overhaul of social media by bot actors, turning it into more of a lurking environment of bot-generated synthetic content? It is one of the outcomes of the move towards synthesizing machines that create synthetic models of the real world for robots to operate in. Where Nvidia is doing this for the physical world, the big social media parties are doing this in our digital world. What if this AI in a box is the new type of hub unlocking a synthetic reality layer, just as the internet router created a data layer over the last decades?
Dealing with synthetic versions of reality, mainly to shape our own expression, will soon become common, as you might expect.
Predicting what it will lead to is hard, as these future scenarios often differ from the expected developments…
Paper of this week ___A preprint looks into the behavior of agents versus human behavior: are agents nicer when dealing with the prisoner’s dilemma? Fontana, N., Pierri, F., & Aiello, L. M. (2024). Nicer Than Humans: How Do Large Language Models Behave in the Prisoner's Dilemma? arXiv preprint arXiv:2406.13605.
What about the coming week? ___I am very much looking forward to discussing the research in a roundtable and digesting everything into a first draft. The Rotterdam UX design students will present their end results, and I might have a look at the immersive environments expo that is also part of the civic interaction design research. For old times’ sake, I might check the TU Delft Dies Natalis on making sense of mobility. January is still light on events, I think, but Sensemakers is having a DIY session. Next week, there is a new session by Speculative Futures The Hague (in Rotterdam).