
Beyond Tools: AI as Autonomous Learning Entity

A Ciphering robot blended with protocol governance by Midjourney

Hi all!

For those who were waiting at 7 am for this newsletter: sorry for the slight delay. No, this is not an April Fools' Day strategy (who would believe me today anyway); the reason is practical. Yesterday I needed to send in a proposal for a new Hoodbot project and was not able to complete the newsletter.


Thanks for landing here and reading my weekly newsletter. If you are new here, you can find a more extended bio on targetisnew.com. This newsletter is my personal weekly reflection on the news of the past week, through a lens of understanding the unpredictable futures of human-AI co-performances in a context of fully immersive connectedness, and the impact on society, organizations, and design. Don't hesitate to reach out if you want to know more, or something more specific.


What happened last week?

Apart from more world-shaking news, both from nature and crazy politicians, I had a solid week, with a mix of speaking to nice people, visiting a fruitful conference on responsible AI, finishing the setup of Hoodbot with Lisa, and more.

What did I notice last week?

  • The hottest news, or rather the news with the most traction, was without doubt the new GPT-4o image generation. It was a blessing in disguise for Trump c.s. that Signalgate faded a bit into the background… though that is of course too big to drop off the radar completely. As for the Signal chat group, the issue is tech-agnostic: the design for adding people to a group might be improved, but that is not the problem that needs to be solved. It's the people, stupid!
  • But about that new image generation thingy: I was very pleased by how it turned a quick sketch for the proposal I was writing into a realistic image. And with some tweaking I could even get the right atmosphere. I combined it with a prompt describing the project idea, and by going back and forth the prompt also became better. I fed a more fleshed-out prompt to Midjourney and was not disappointed by the result. The feel of the interactive installation was even a bit better than GPT-4o's, but the details were not right. The combination of the three outputs (including my own iPad-made sketch) gave a good impression. Let me know if you are curious ;)
  • I did not know Reve.art before; it delivered maybe better results than GPT-4o based on the prompt, but worse based on the sketch.
  • Gemini also got an updated model (2.5) but received much less attention. It seems good; some say it is even on par with or above the other models.
  • It seems clear that multimodal understanding is key for reasoning models. Upgrading visual understanding makes a lot of sense. Leveraging the natural context you have is also key, and that differs between OpenAI, Gemini, and xAI. Will the models develop different types of intelligence there?
  • xAI has now ‘bought’ X. That is of course not a surprise, as the data at X is a great asset for building knowledge graphs of human opinions. Not the most neutral opinions anymore nowadays… I am not hopeful, but there is of course the option to use the AI capabilities to bring more sense to radicalized people. More likely, the Tesla humanoids become the new workplace bullies…
  • Apple Intelligence got some minor updates in iOS 18.4, but for Europeans there is now much more access to functions like Visual Intelligence.
  • Anthropic is opening the black box of LLMs.
  • Nice thoughts by Matt on the domestication of humanoids, and other evolutions.

What triggered my thoughts?

Nate B Jones has a great take on the web that is taking shape now that chatbots have been added to the game. OpenAI's new visual capabilities and coding tools show this; AI cannot think on a conceptual level—not yet, at least. AI systems are thinking about the next step.

This does not disqualify the value of AI in its current form at all. It reveals the real new playing field: not only humans and computers, but humans, chatbots, and computers—or, as Nate rightly put it, multiple interactions.

I was at a conference on responsible AI, and in a session on conversational AI and digital humans I was triggered by this thought, too. The speakers talked about conversational AI and the user, the human, who needs to be served with the best tool to understand the human. All fine, but I asked the presenter whether they also thought about the other chatbot user, the insurance company employee (an insurance company was the use case). Is that not also a 'user' in the interaction with the chatbot? Isn't the AI here an entity of its own, learning things, shaping relations, and functioning as a learning engine for the employee just as much as it can solve problems for the client user?

I feel this needs more attention and more fleshing out. Nate's framing is a nice one here.

Connect this to a new artificial neuron model: infomorphic neurons that learn independently, similarly to biological neurons.

Novel artificial neurons learn independently and are more strongly modeled on their biological counterparts. A team of researchers from the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) has programmed these infomorphic neurons and constructed artificial neural networks from them. The special feature is that the individual artificial neurons learn in a self-organized way and draw the necessary information from their immediate environment in the network. The results were published in PNAS.
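The self-organized, local learning described above can be made a bit more concrete with a classic analogy. This is not the infomorphic model from the PNAS paper, just a minimal sketch of a neuron that learns using only locally available information (its own inputs and output), here via Oja's rule in Python:

```python
import numpy as np

class LocalNeuron:
    """A single neuron trained with Oja's rule: a local update that uses
    only the neuron's own inputs and output, no global error signal."""

    def __init__(self, n_inputs, lr=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=n_inputs)
        self.lr = lr

    def forward(self, x):
        return float(self.w @ x)

    def update(self, x):
        y = self.forward(x)
        # Oja's rule: Hebbian growth (y * x) with a decay term (y^2 * w)
        # that keeps the weight vector bounded -- all information is local.
        self.w += self.lr * y * (x - y * self.w)
        return y

# Train on correlated 2-D inputs; with no teacher, the weights drift
# toward the dominant direction of the input distribution.
rng = np.random.default_rng(1)
neuron = LocalNeuron(2)
for _ in range(5000):
    s = rng.normal()
    x = np.array([s, 0.5 * s]) + rng.normal(scale=0.05, size=2)
    neuron.update(x)

print(neuron.w)  # roughly aligned with the dominant input direction
```

The point of the sketch is the locality: each update only touches information the neuron itself can observe, which is the property the infomorphic work pushes much further.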

I like to connect that to another part of the session at the conference: the potential of digital humans in conversations with intelligent services. The angle presented was that digital humans better represent human emotions in a conversation and help understanding (or selling, of course). I am not sure about that. As it was presented, we are quick to see humanlike behavior in things with certain expressions and behaviors; we don't need a human-like visual representation for that. I think we can better use digital humans inversely: use the level of resolution to indicate how much the intelligence understands human things, or how much it is hallucinating. And learn en passant to recognize when real humans are more fake than synthetic ones (like influencers faking real life).
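That inverse use of digital humans, rendering fidelity as a trust signal, could be prototyped as a trivial mapping. All names and thresholds below are hypothetical illustrations, not an existing API:

```python
def avatar_fidelity(confidence: float) -> str:
    """Map a model's grounding confidence (0.0-1.0) to a rendering
    fidelity level: the less grounded the answer, the more abstract
    (lower-resolution) the digital human appears."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    if confidence >= 0.9:
        return "photorealistic"
    if confidence >= 0.6:
        return "stylized"
    if confidence >= 0.3:
        return "low-poly"
    return "wireframe"  # clearly synthetic: likely hallucinating

for c in (0.95, 0.7, 0.4, 0.1):
    print(c, "->", avatar_fidelity(c))
```

The design choice is that the avatar's visual honesty degrades before its verbal confidence does, so the human reads uncertainty at a glance.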

To bring it home: we must acknowledge that we are not using AI as a tool, but instead live with AI as a partner that does its own thing, and we can capitalize on this if we understand what these systems are learning.

What inspiring paper to share?

This week, a paper covering living labs, among others the Wijkbot as a living lab. Nice.

A living lab learning framework rooted in learning theories

Thus, this paper develops a framework that allows capturing learning in a living lab co-creative environment. In response to widespread calls for an epistemological basis for living labs, the study bases the framework on relevant learning theories.
The framework distinguishes content, capacity, and network as learning types; intentional or incidental as learning processes; and individual, team, and organization as learning levels.

Astha Bhatta, Heleen Vreugdenhil, Jill Slinger, A living lab learning framework rooted in learning theories, Environmental Impact Assessment Review, Volume 114, 2025, 107894, ISSN 0195-9255, https://doi.org/10.1016/j.eiar.2025.107894.
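To make the paper's taxonomy concrete, the three axes of the framework could be sketched as a small data model. This is my own illustration with hypothetical names, not code from the paper:

```python
from dataclasses import dataclass
from enum import Enum

class LearningType(Enum):
    CONTENT = "content"
    CAPACITY = "capacity"
    NETWORK = "network"

class LearningProcess(Enum):
    INTENTIONAL = "intentional"
    INCIDENTAL = "incidental"

class LearningLevel(Enum):
    INDIVIDUAL = "individual"
    TEAM = "team"
    ORGANIZATION = "organization"

@dataclass(frozen=True)
class LearningObservation:
    """One coded observation from a living-lab session, placed on the
    framework's three axes (type, process, level)."""
    type: LearningType
    process: LearningProcess
    level: LearningLevel
    note: str

# Hypothetical example: incidental network learning at team level.
obs = LearningObservation(
    LearningType.NETWORK,
    LearningProcess.INCIDENTAL,
    LearningLevel.TEAM,
    "Team discovered a new stakeholder during a Wijkbot street test.",
)
print(obs.type.value, obs.process.value, obs.level.value)
```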

What are the plans for the coming week?

This week I need to prepare for the ThingsCon Salon on 14 April: shaping the workshop and planning the logistics, the usual production work that is part of organizing events.

Working on projects: the new phase with Civic Protocol Economies will start. Some funding schemes for Hoodbot robot citizen voice function are being explored.

No big plans for events. There is a huge webinar (I expect) by Scott Galloway that I am curious about. For those who are attending Milan Design Week, based on the guide by Designboom, there is a lot of good old design stuff, if that is your thing. Google is launching an immersive platform, though, that might be interesting to track or check. The group shows look potentially interesting: Salone del Mobile turning to new forms of embodiment in design, and Fuorisalone is possibly the most interesting: “This edition delves into how design fosters relationships—between physical and digital realms, diverse cultures, humans and the environment, and individuals within communities.”

Have a great week!

References with the notions

Human-AI partnerships

Google rolled out Gemini 2.5, which got less attention because of the introduction of GPT-4o image generation. Google also introduced more down-to-earth AI travel-planning tools. The multimodal angle on reasoning is the most important one to track.

Google’s new experimental Gemini 2.5 model rolls out to free users
Google’s improved AI model is now available for free, but usage is limited.
Google unveils a next-gen family of AI reasoning models | TechCrunch
Google has unveiled Gemini 2.5, the company’s new family of AI reasoning models that will pause to ‘think’ before answering.
Google announces Maps screenshot analysis, AI itineraries to help you plan trips
Google wants to help you get away this summer with, you guessed it, AI.
No elephants: Breakthroughs in image generation
When Language Models Learn to See and Create
GPT-4o and the Art of AI Adoption
Plus: The anatomy of a viral tweet-creating prompt

It was a meme driving the news on the new release. Adjusting existing images makes it so much more accessible and imaginable for people, as Ben Thompson rightly said.

why is the internet freaking out about chatGPT’s studio ghibli-style AI images?
openAI introduces chatGPT-4o that can generate studio ghibli-style AI images, raising copyright infringement concerns.
ChatGPT’s Ghibli filter is political now, but it always was
More insults to life itself.

OpenAI’s Studio Ghibli meme factory is an insult to art itself
Sam Altman is promoting his new image generator by appropriating the work of one of the greatest living animators—who is “disgusted” by AI.

Apple also updated iOS with (some) more Apple Intelligence features. For us in Europe, there is now Visual Intelligence and the heavily critiqued Image Playground, among others.

iOS 18.4 is out now with Apple Intelligence-powered priority notifications
Apple’s AI will help you see what’s most important.

“These 22 lessons highlight the quiet revolution students are already leading, the sophisticated interplay between AI and human cognition, and the urgent need for educational institutions to adapt.”

“When learning is structured by systems rather than teachers, how do learners understand what—or who—is teaching them?”

22 Lessons from the GenAI Shadows
Here, beneath the surface, we’ll explore the unseen realities shaping education’s relationship with artificial intelligence.
Deep Teaching
Ghost Pedagogies and the Architecture of Learning

How do LLMs think? Anthropic opens the black box to help us understand.

Why do LLMs make stuff up? New research peers under the hood.
Claude’s faulty “known entity” neurons sometimes override its “don’t answer” circuitry.
Anthropic can now track the bizarre inner workings of a large language model
What the firm found challenges some basic assumptions about how this technology really works.

Generative AIs are not tools.

Just a metatool? Some thoughts why generative AIs are not tools
Many people brush generative AI aside as being just a tool. ChatGPT describes itself as such (I asked). I think it’s more complicated than that, and this post is going to be an attempt to explain w…

AI predictions as self-fulfilling prophecies

Reprogramming Humanity’s Primal Instincts & What We Learn From A Future “History of Tech” Class
In this edition we explore AI predictions becoming self-fulfilling prophecies among other implications of reprogramming and we consider lessons learned from the history of tech.

Robotic performances

The future of last mile robotics.

Serve Robotics CEO Ali Kashani on the future of last-mile logistics
Serve Robotics CEO and co-founder Ali Kashani discusses the growth of the sidewalk robotics business and the road ahead.

A nice overview of current humanoids by Matt, including the question of whether there will be specific fashion for our future fellow co-workers.

Filtered for the rise of the well-dressed robots
Posted on Friday 28 Mar 2025. 1,207 words, 13 links. By Matt Webb.

Is this about robotics or new intelligence, or immersive understanding?

3D printed robot receiver listens and cracks any coded messages broadcast from the world
meet cipherling, a 3D printed ‘robot’ receiver that listens and cracks any coded messages broadcast from all over the world.

Evolution of walking by robots.

Fancy humanoid robot no longer walks like it urgently needs a toilet
‘Years’ of training in simulation helped the Figure bipedal robot walk more like a real human.
China’s Tron 1 robot hurdles over obstacles like they’re nothing
Tron 1, a Chinese company’s two-legged robot, is versatile and can walk, roll and pivot, even on rough terrain. Tron 1 stands 33 inches tall and weighs 44 pounds.

Immersive connectedness

I almost forgot about that… What will drop first? Apple Car or Apple Intelligence?

2026 Porsches Still Won’t Have Next-Gen CarPlay, Which Was Announced in 2022
https://www.macrumors.com/2025/03/26/2026-porsches-still-wont-have-next-gen-carplay/

Home smart reorganisation

Google discontinues Nest Protect smoke alarm and Nest x Yale door lock
The Google Nest Protect is officially dead.

The Glass Slab is still under development

Corning’s new ceramic glass might save your next phone from disaster
Gorilla Glass Ceramic will appear on its first phone in the coming months.

Tech societies

Interesting observation: the impact of bots on the feasibility of open-access (academic) content. Not so much from a commercial perspective, but from a technical one.

AI bots are destroying Open Access
There’s a war going on on the Internet. AI companies with billions to burn are hard at work destroying the websites of libraries, archives,…

The future of the generalist.

Why Generalists Own the Future
In the age of AI, it’s better to know a little about a lot than a lot about a little

xAI + X = XAI

Elon Musk’s X has a new owner—Elon Musk’s xAI
xAI buys X; deal values social network at $33 billion, $11B less than Musk paid.
Elon Musk is building an AI giant — and Tesla will be central
Musk plays catch-up in the high-stakes AI race.

GenAI economics are a source of worry, and the future of AI is not GenAI, according to Gary Marcus. How does China's leftover hardware play out?

GenAI’s day of reckoning may have come
It’s not just the stock price
China built hundreds of AI data centers to catch the AI boom. Now many stand unused.
The country poured billions into AI infrastructure, but the data center gold rush is unraveling as speculative investments collide with weak demand and DeepSeek shifts AI trends.

The value of the hype.

AI and the strategic value of hype
The game theory guide to Sam Altman

Knowledge under siege, and other impactful societal happenings.

Knowledge Under Siege
Strategic Response in the Age of Institutional Redundancy
The shape of network society
“I’m a McLuhan absolutist now.”
Pluralistic: Private-sector Trumpism (31 Mar 2025) – Pluralistic: Daily links from Cory Doctorow
How Elon Musk’s SpaceX Secretly Allows Investment From China
As a U.S. military contractor, SpaceX sees allowing Chinese ownership as fraught. But it will allow the investment if it comes through secrecy hubs like the Cayman Islands, court records say. “It is certainly a policy of obfuscation,” an expert said.
Scarcity and Abundance in 2025
I mean, what stage of the S curve is this?