Chilling effects of AI as an exoskeleton

Weeknotes 332: Will we have AI as a mental exoskeleton? If so, will it crumple our thinking capabilities, or can we ensure it becomes a personal mental coach? Triggered by news from last week, next to multiple other topics captured below.

Interpretation by Midjourney; wondering where to go, fed by exo-intelligence awareness

Hi all!

What was the feeling of last week? A good week of work on several projects, preparing events, and attending one or two. And the continuous restlessness of outside factors. I am not usually one for big demonstrations, but I joined the march against rising fascism and racism last Saturday, especially to voice worries about the limiting of diversity, the breaking down of critical-thinking platforms and institutions, and the creation of a society of self-interest and hate toward others. We can build on collective powers and plurality. Tech can help, but it has been a double-edged sword, too. To circle back to the frame of this newsletter: design has the potential to shape a fruitful framework for collectivity and respect. I try to bring this to the projects I do and plan to do. Find out more about what is happening in this update.

What did happen last week?

The running projects: completing the report on Civic Protocol Economies, speaking to the designer for the final touches, and preparing the next phase. I also looked into the coming program of the Summer of Protocols and watched their town hall meeting, which added a lot to my reading list again.

The AI-report podcast (the renamed POKI podcast; a bit generic now, too bad) has silently grown into a 2.5-hour podcast. One of the topics covered last week was essentially generative things without the term being mentioned: vibe making, with tools like Lovable.dev, Bolt.new, and good old Replit.com. These are more about apps, but you can feel that vibe-made things are around the corner…

The exhibition Generative Things will get a Salon and a Speculative Design Workshop in April. The Meetups are up and running now.

For the RIOT 2025 publication, we received almost 20 article entries and have a partner for the unconference to launch the publication: 6 June in VONK Rotterdam. I expect to share more progress next week.

I am working on some proposals for the continued development of ThingsCon and Cities of Things. I hope they work out.

What did I notice last week?

You could not avoid Adolescence, of course. The making-of and reflective posts, etc., are as intense as ever. This is not really the place to discuss it. Still, it is super interesting how it can unlock such global (Northern?) engagement, both through the topic and through how the choice to shoot each episode in a single take adds an extra, intensifying layer to the experience. The making of the series becomes part of an intense, immersive experience. It contrasts with the tendency of the rest of media toward the hyper-synthesized and fake, so this is a form of ‘live’ feeling that we apparently need. Will this be a one-off, or a new format for these times?

Ok, that was more than I intended to say about it… Back to the ‘usual’ AI stuff.

  • Continuous stories on Apple Intelligence panic with management changes,
  • with vibe coding attention,
  • with Google visual coding upgrades,
  • with Claude doing web-search,
  • and with Nvidia stealing the show with a long but rich keynote full of new introductions, ending with their robot-empowering ‘heart-ware’, illustrated by a cute Disney creature.
  • Next to that, some more minor news. The robots are coming from China, with Level 3 self-driving Zeekr cars, but especially with endless (Reels/TikTok) videos of a Chinese humanoid doing breakdance moves and kung fu, making Americans potentially nervous. It is a compelling item for Fox News, at least.
  • Luckily, some reflective posts, as always. Like thinking about our relation to AGI and what we should aim for there. Check my triggered thoughts below.
  • Heavy use of ChatGPT is not good for your mental health, a study by OpenAI and MIT shows.

What triggered my thoughts?

What will the role of AI be as a potential "exoskeleton" for human intelligence? A mental exoskeleton. A future where we all have our own AI exoskeleton—an always-on, always-available augmented intelligence that assists us with various tasks. This represents the typical dream for Artificial General Intelligence (AGI), but is this what works best for us?

The Calculator Dilemma

We might face "the calculator effect." Are we unlearning essential skills if we rely on calculators instead of mental math? Or are we evolving to use better tools, making ourselves more capable overall? The same question applies to having augmented intelligence always available. Could our future resemble the one depicted in the movie "WALL-E," where humans became sedentary, constantly consuming media, growing physically unhealthy and disconnected from themselves?

A recent podcast discussed an alternative approach: having AI helpers focused on specific moments when truly needed. In this model, we deliberately activate the extra intelligent layer for a temporary task, then switch it off. We remain aware of what capabilities we are adding to ourselves, and of what we potentially contribute back to that intelligence. This connects to topics from previous newsletters about balancing human and AI capabilities in co-performance, leveraging each other's unique strengths. The key questions become: When do we switch AI on or off? What happens when we use a more focused version for specific tasks? Incidental support seems preferable.

However, another scenario is imaginable: AI could be designed to maximize our intelligence by stimulating our creativity and uniquely human skills, keeping us actively engaged rather than passive. A strange comparison, maybe, but hybrid cars come in different setups: some have the internal combustion engine (ICE) as the main drive and switch on the electric motor separately to lower fuel consumption; in other designs, the electric motor is the main drive and the ICE only generates electricity.

The "Chilling Effect"

Apart from the setup, it is even more important to think about the impact on ourselves: the so-called "chilling effect" that might be introduced when we are extended with intelligence that exceeds our own in certain ways, since it has a general access to everything that we can never have. How do we operate in a world where we're constantly aware of being near something smarter than us? This "chilling effect" creates a continuous feeling of being around an entity that knows more than you. Though no one forces you to adapt to it, you feel compelled to do so without fully understanding what you're adapting to. This can lead to stress or burnout. That comes directly back to the question: do we want AGI that's always smarter and always present, or do we prefer focused AI that excels at specific tasks we deliberately select? Do we understand why and how it's smarter than us in those areas? That is a different lens on transparency for AI black boxes. And it challenges us human-AI designers to build the right friction and honesty into the AI exoskeleton, so that we feel recognized and in (self)control.

What inspiring paper to share?

Let me share a slightly different paper this time: research into “Representation of BBC News content in AI Assistants”.

To better understand the news-related output from AI assistants we undertook research into four prominent, publicly available AI assistants – OpenAI’s ChatGPT; Microsoft’s Copilot; Google’s Gemini; and Perplexity. We wanted to know whether they provided accurate responses to questions about the news; and if their answers faithfully represented BBC news stories used as sources.

https://www.bbc.co.uk/aboutthebbc/documents/bbc-research-into-ai-assistants.pdf (via Elger’s newsletter)

Maybe interesting to combine with “Governance of Generative AI”, released around the same time, which intends to look a bit more under the hood.

This introductory article lays the foundation for understanding generative AI and underscores its key risks, including hallucination, jailbreaking, data training and validation issues, sensitive information leakage, opacity, control challenges, and design and implementation risks. It then examines the governance challenges of generative AI, such as data governance, intellectual property concerns, bias amplification, privacy violations, misinformation, fraud, societal impacts, power imbalances, limited public engagement, public sector challenges, and the need for international cooperation.

Araz Taeihagh, Governance of Generative AI, Policy and Society, 2025, puaf001, https://doi.org/10.1093/polsoc/puaf001

What are the plans for the coming week?

There is more to do for the projects I mentioned above, preparing for the Salon and SDW, and I need to decide before the end of the week whether I will submit a proposal. We (with Tomasz) are also doing a workshop at Smart & Social Fest on Wijkbot-inspired AI-Labkar designing, and I need to write an article for RIOT myself, too. I will start outlining this week…

Responsible AI is another hot topic. A large program is RAAIT, and there is a conference this Thursday that I will attend; I'll let you know how it was next week. New research colleague Martijn Arets is doing a webinar on ghost workers this Friday, and I hope to be able to check in at Ordinary Democracy and Digital Cities in Detroit. Finally, I will probably check out the New Current.

See you next week!

References with the notions

I try to avoid the weekly Trump trolling, as it is too cynical (and harmful).

Trump administration’s blockchain plan for USAID is a real head-scratcher
Whatever happens to USAID, it will apparently “leverage blockchain technology.”…

Human-AI partnerships

Apple Intelligence issues lead to a reshuffling of management.

OpenAI is reshuffling leadership, too, apparently so that Sam can focus more on the tech. The why will be discussed later, I guess. Earlier last week, in an interview with Ben Thompson, he was still talking business models, while the impact on well-being is also a relevant topic to address.

OpenAI reshuffles Sam Altman’s job once again
The company is shifting how its executive suite functions ahead of for-profit conversion.
An Interview with OpenAI CEO Sam Altman About Building a Consumer Tech Company
An interview with OpenAI CEO Sam Altman about building OpenAI and ChatGPT, and what it means to be an accidental consumer tech company.
OpenAI has released its first research into how using ChatGPT affects people’s emotional well-being
We’re starting to get a better sense of how chatbots are affecting us—but there’s still a lot we don’t know.

Anthropic is doing web search now.

Anthropic’s new AI search feature digs through the web for answers
Anthropic Claude just caught up with a ChatGPT feature from 2023—but will it be accurate?

Externalizing memory strategies.

The Waters of Lethe Flow From Our Digital Streams
The Convivial Society: Vol. 6, No. 2

The holy grail of predictions is… the weather. So every once in a while, AI promises breakthroughs.

AI-driven weather prediction breakthrough reported
Researchers say Aardvark Weather uses thousands of times less computing power and is much faster than current systems

Reframing the question a bit: how long do we want AI to be doing tasks without human intervention…

🔮 How soon will AI work a full day?
May be longer than we think

The weekly insightful post by Ethan Mollick, this week on the cybernetic teammate

The Cybernetic Teammate
Having an AI on your team can increase performance, provide expertise, and improve your experience

The workflow Matt is describing resembles mine for writing the weekly thoughts above. I use Lex instead of Diane though.

Diane, I wrote a lecture by talking about it
Posted on Thursday 20 Mar 2025. 831 words, 8 links. By Matt Webb.

Robotic performances

Nvidia is empowering robotic empathy

NVIDIA CEO Jensen Huang Keynote at GTC 2025
Experience GTC 2025 In-Person and Online March 17-21, San Jose

Level 3 self-driving from China

ZEEKR launches first door-to-door Level 3 autonomous driving technology
EV automaker ZEEKR held a live event in China today, unveiling several new forms of autonomous driving technology. The company…

Robotic overkill

Coffee-making robot breaks new ground for AI machines
An AI-powered robot that can prepare cups of coffee in a busy kitchen could usher in the next generation of intelligent machines, a study suggests.

Chinese humanoids doing tricks.

Chinese robot’s kung fu moves will make your jaw drop
A humanoid robot developed in China has transformed from a nimble dancer to performing kung fu moves with surprising precision and balance.

Immersive connectedness

SpatialLM is the next thing. Some say Apple Watches will get cameras to access AI functions, and Google has already launched a real-time capture mode. Combine those with SpatialLM…

manycore-research/SpatialLM-Llama-1B · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
Google is rolling out Gemini’s real-time AI video features
Gemini can use smartphone cameras to “see” now.
The Apple Watch may get cameras and Apple Intelligence
It will be a few years, though.

I loved my first smartwatch, the Pebble, but I don't think I am going back now.

Eight years later, new but familiar-looking PebbleOS watches appear
Shipping in July and December, with far more battery life and newer chips.

Makes sense for a connected devices company. All sensors are cameras in the future.

Hue accidentally leaks a new video doorbell
The Hue app has a guide to install a nonexistent product.

Tech societies

Handing over control to agents, or better not?

Why handing over total control to AI agents would be a huge mistake
When AI systems can control multiple sources simultaneously, the potential for harm explodes. We need to keep humans in the loop.

What does ‘open’ mean in the AI domain? How will that play out in agent realities?

What do we mean when we talk about ‘openness’ in (generative) AI?
This blog post is the result of writing a too-long email in response to someone who wanted to think through what ‘open’ means when it comes to AI.

Will job replacement arrive sooner than you think?

‘Gradually then suddenly’: Is AI job displacement following this pattern?
Many technological shifts have coincided with economic downturns. Will 2025 be the year AI not only augments jobs but starts to replace them?
DOGE’s ‘AI-first’ strategist is now the head of technology at the Department of Labor
The ex-Tesla engineer Thomas Shedd has quietly become the CIO of the labor department as mass layoffs and federal workforce automation continue.

If you are into (modern) AI history, find the source code for AlexNet from 2012.

You can now download the source code that sparked the AI boom
CHM releases code for 2012 AlexNet breakthrough that proved “deep learning” could work.

The future looking back on AI will have a place for the Character.AI case.

Mom horrified by Character.AI chatbots posing as son who died by suicide
Character.AI takes down bots bearing likeness of boy at center of lawsuit.

This week’s take on vibe coding: DIY apps.

DIY Apps Are the New Myspace
Forget social media. We’re building social software.

Describing living things as machines dismisses their complexities.

Living Things Are Not Machines (Also, They Totally Are) | NOEMA
Our formal models of life, computers and materials fail to tell the entire story of their capabilities and limitations.

The concept of abundance in relation to technology reminds me of the earlier attention for exponential organizations, etc. The authors make me curious, though.

Book review: “Abundance”
In which Ezra Klein and Derek Thompson offer a whole new way of thinking about political economy.