Reconciling AI Fragmentation and Design for Collectivity

Hi all!
So, there is a new Black Mirror series, and it has received mixed reviews. The first episode I watched was slickly produced and well-made, but the theme felt worn out and the story was predictable. Most viewers seem to think it is a good episode, though. I might check out one or two more.
What happened last week?
Last week was packed with three events centered around generative technology, social bubbles, and collective design. The week began with the ThingsCon Salon, featuring a new iteration of the exhibition of provotypes of generative things. I shared my first impressions in last week's newsletter.
One key insight that emerged was how generative interactions create an "in-between layer": a synthetic space between reality and a more dreamlike or subconscious realm. We discussed what role this layer could play in creating new interactions between people submerged in their own bubbles.
In the speculative design workshop, we discussed how generative technologies might create a synthetic layer in physical reality, a "dreamscape" that could potentially address social polarization; that would at least be a goal to strive for. We explored whether this generative layer, if made agentic, might help connect people by making social bubbles more porous.
On Wednesday, I attended a conference hosted by the Science and Technology Department of the Dutch National Police. The afternoon workshop, interestingly, also addressed social bubbles. With participants from police departments, research institutes, and other sectors, we discussed strategies for breaking these bubbles and fostering resilience while maintaining human values. A key insight was the need to shift design focus from individuals to collectives.
The week concluded with the Smart and Social Fest. In her opening speech, former European politician Marietje Schaake highlighted how Big Tech controls our services and data and increasingly influences knowledge creation through research funding, a subtle but significant impact.
The standout concept was again "designing for collectivity," a principle that connects directly to our civic protocol economy research. At our Labkar workshop, we explored how neighborhood technology centers could enable residents to prototype and develop technological solutions together, close to home and with minimal barriers.
On Easter Sunday, we visited an area in Utrecht where this kind of approach is, in a way, already being put into practice (here and here).
What did I notice last week?
Noticed in the news, that is, of course.
- o3 is the new model announced by OpenAI and hailed as another leap forward. It is not yet AGI, but…
- The memory skills of our AI buddies are becoming more capable.
- That triggers renewed discussions on the impact.
- Google announced a flash version of Gemini, and Microsoft a 1-bit co-pilot.
- Reflections on LLMs and creativity, on what AI is doing to us, and on agent engineering.
- In robotic performances: some embodied experiences, and robots completing a half marathon. Expect this to become part of daily life in China.
- Quirky content on Google Maps, just-in-time content as immersive experiences.
- What is the Vibe Coding Paradox?
- Tech and societal impact, more of it every week: AI popping bubbles, synthetic realities, modern sweatshops.
- Can there be too much transparency?
- What if AI writes the community notes? What would they be called?
- And more.
What triggered my thoughts?
Building on last week's "design for collectivity" theme, I've considered connections between seemingly disparate concepts.
A review of an art piece by Lev Manovich, shared on LinkedIn by the artist (thanks, Antoinette, for sharing), examines how AI language models create a new relationship between chaos and order. The artwork explores how LLMs generate coherent realities from fragmented, diverse sources, taking tokenized, atomized information and constructing seemingly cohesive representations.
This concept aligns with what Lev Manovich describes as the "formal logic of generative AI" being fragmentation itself. Manovich traces this back to Paul Baran, one of computer science's founding fathers, who discovered in the late 1950s that breaking messages into random parts actually facilitated their transmission—a principle that became foundational to the internet. This same logic applies to how AI functions: by fragmenting knowledge (whether scientific or cultural) and then processing this knowledge in stages, AI learns to produce various types of knowledge on demand. The art piece visualizes this fundamental process—how coherent outputs emerge from deliberately fragmented inputs.
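The Baran principle that Manovich points to can be sketched in a few lines: a message is broken into independently handled fragments, each carrying just enough metadata (a sequence number) for the whole to be reassembled, even though no single fragment is meaningful on its own and fragments may arrive in any order. A minimal toy illustration, not any real network protocol; all names are illustrative:

```python
import random

def fragment(message: str, size: int = 4) -> list[dict]:
    """Split a message into indexed fragments, as in packet switching."""
    return [{"seq": i, "data": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets: list[dict]) -> str:
    """Restore the original message regardless of arrival order."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = fragment("fragments can form a coherent whole")
random.shuffle(packets)     # fragments travel and arrive out of order
print(reassemble(packets))  # the coherent whole is restored
```

The same intuition carries over, loosely, to generative AI: tokenized fragments of knowledge are recombined into an output that reads as a coherent whole.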
When thinking about design for collectivity, it turns out to be far from trivial to keep the focus on the collective as such. Often, it easily becomes an organizational structure that facilitates individual interests; think of shared mobility concepts or energy cooperatives. This is not wrong, of course, but how does it connect with a collectivity beyond the individual drivers? Is there a link with fragmentation of the whole?
Here, I think it could be an interesting provocation to link collectivity to fragmentation. To reach notions of collectivity, deconstructing the fragments that form the whole is inspiring. How can we create an understanding of seeing things as fragmented wholes?
Something to explore further. It connects to the concept of predictive relations with things from my earlier research at TU Delft: much as LLMs provide the fragments and the capability to generate (that is, predict) the next interaction with things.
But maybe more actionable is the framework Indy Johar published this week: "restoring relational care". While AI fragments information, Johar suggests practices that reweave social connections, both in describing the processes of interaction and in creating places for encounter. These principles suggest that designing for collectivity is fundamentally relational. The fragmentation inherent in digital systems might be counterbalanced by these practices of care, creating a productive tension between technological atomization and social reconnection.
These different concepts deserve more attention, but time is up now. They will be continued at a later time (same place). Let me know if you have thoughts about deconstructing and reconstructing as a methodology for collective design.
What might be an inspiring paper to share?
A paper that critiques AI through the frame of computational capitalism: "Synthetic media and computational capitalism: towards a critical theory of artificial intelligence".
From the abstract: "This paper develops a critical theory of artificial intelligence, within a historical constellation where computational systems increasingly generate cultural content that destabilises traditional distinctions between human and machine production. [...] Through these contributions, I argue that we need new critical methods capable of addressing both the technical specificity of AI systems and their role in restructuring forms of life under computational capitalism."
Berry, D. M. (2025). Synthetic media and computational capitalism: towards a critical theory of artificial intelligence. AI & SOCIETY, 1-13.
What are the plans for the coming week?
This week is dedicated to research and future projects: continuing the Civic Protocol Economy research, setting up the design workshop for later this year, and shaping the format and program for the unconference that launches RIOT 2025 on 6 June. Also towards June, the generative things provotypes will get an immersive version to experience them in the city. We created the first ideas at the speculative design workshop, and a group of students will start exploring this in the minor Makerslab.
On Wednesday, I will attend the Sensemakers AMS evening with Dimitri Tokmetzis, and I will join Majid Iqbal's 4-week training program on Analogy, Abstraction and Reasoning. I will miss Creative Mornings in Amsterdam and Rotterdam. I hope I can join the Biomes to AI evening at v2, if the logistical planning allows…
References with the notions
Human-AI partnerships
The biggest news was the announcement of the new o3 model opening up to broader use. Some think it is a huge leap forward: not AGI yet, but maybe narrow AGI. And it definitely took over the leading role from Gemini.

The flash version of Gemini will be more integrated, more immersive AI.

A gaming co-pilot feels strange.

LLMs are not a threat to creativity, nor content theft, but a tool for reflecting on our own intentions, pushing us further.

What is AI doing for us? Or to us?

The silent war in agent engineering gets loud…

Robotic performances
Robots running half a marathon can be seen as a step forward, but also a sign that we are not there yet.

Hm. Not sure about this smart robot officer. For real, or an attention grab?

Flying saucers become flying speakers.

Precision farming.

Embodied AI, aka robotic things, is becoming part of daily life in China.

The follow-up to tinyML is the tiny LLM. 1-bit has gotten a new meaning.

Humanoid robots are coming. For you.
Immersive connectedness
The other face of Google Maps is a platform for quirky content.

I like the image triggered by "just-in-time content". You could say that personalisation of digital content was always just in time, but with generative AI it takes on a different meaning: adapting content to personal needs, existing or latent, is now real.

Purely on the visuals, I am linking this to the new season of The Last of Us, but maybe I should watch it first.

Some positive news from the real-world impact.

Have to think this through a bit more. “The Vibe Coding Paradox. The more frictionless execution becomes, the more it loses value.”

Tech societies
Popping bubbles with AI, that is a dream for some. It will not be easy, but a framing like a philosopher machine feels like a promising angle.

An analog speedometer shows differences between real and presented speed. How will that be in a future where everything digital is synthetic layered on top of the real?

Modern sweatshops are about AI data training.

Can there be too much transparency?

Geopolitics of this week: will China take over the 21st century?

Community notes are a bad thing insofar as they are intended to shift responsibility away from the platform, but what if community notes were AI-generated? Would that make for a transparent, contestable moderator?

Thanks for landing here and reading my weekly newsletter. If you are new here, you can find a more extended bio on targetisnew.com. This newsletter is my personal weekly reflection on the news of the past week, through a lens of understanding the unpredictable futures of human-AI co-performances in a context of fully immersive connectedness, and the impact on society, organizations, and design. Don't hesitate to reach out if you want to know more or, more specifically, about organizing speculative design workshops, explorative research, or community-connecting events.