Beyond Tools: AI as Autonomous Learning Entity

Hi all!
For those who were waiting at 7 am for this newsletter, I am sorry for the slight delay. No, this is not an April Fools' strategy (who would believe me today) but a practical matter. Yesterday I needed to send in a proposal for a new Hoodbot project and was not able to complete the newsletter.
Thanks for landing here and reading my weekly newsletter. If you are new here, you can find a more extended bio on targetisnew.com. This newsletter is my personal weekly reflection on the news of the past week, with a lens of understanding the unpredictable futures of human-AI co-performances in a context of full immersive connectedness, and the impact on society, organizations, and design. Don't hesitate to reach out if you want to know more.
What happened last week?
Apart from more world-shaking news, both from nature and crazy politicians, I had a solid week, with a mix of speaking to nice people, visiting a fruitful conference on responsible AI, finishing the setup of Hoodbot with Lisa, and more.
What did I notice last week?
- The hottest news, or better said, the news with the most traction, was without doubt the new GPT-4o image generation. It was a blessing in disguise for Trump cs that Signalgate moved a bit to the background… That story is of course too big to drop off the radar completely. But regarding the Signal chat group, the issue here is tech-agnostic: the design for adding people to a group might be improved, but that is not the problem that needs solving. It's the people, stupid!
- But about that new image generation thingy: I was very pleased by how it turned a quick sketch for the proposal I was writing into a realistic image. With some tweaking I could even get the right atmosphere. I combined it with a prompt describing the project idea, and by going back and forth the prompt improved as well. I fed a more fleshed-out prompt to Midjourney and was not disappointed by the result: the feel of the interactive installation was even a bit better than GPT-4o's, but the details were not right. The combination of the three outputs (including my own iPad-made sketch) gave a good impression. Let me know if you are curious ;)
- I did not know Reve.art before; it delivered perhaps better results than GPT-4o based on the prompt, but worse results based on the sketch.
- Gemini also got an updated model (2.5) but received much less attention. It seems good; some say it is even on par with or above the other models.
- It seems clear that multimodal understanding is key for reasoning models. Upgrading visual understanding makes a lot of sense. Leveraging the natural context you have is also key. That is different for OpenAI, Gemini, and xAI. Will the models develop different types of intelligence there?
- xAI has now ‘bought’ X. That is of course not a surprise, as the data at X is a great asset for building knowledge graphs of human opinions. Not the most neutral opinions anymore nowadays… I am not hopeful, but there is of course an option to use the AI capabilities to bring more sense to radicalized people. More likely, the Tesla humanoids become the new workplace bullies…
- Apple Intelligence got some minor updates in iOS 18.4, but for Europeans there is now much more access to functions like visual intelligence.
- Anthropic is opening the black box of LLMs.
- Nice thoughts by Matt on the domestication of humanoids, and other evolutions.
What triggered my thoughts?
Nate B Jones has a great take on the web that is emerging now that chatbots have been added to the game. OpenAI's new visual capabilities and coding tools show this; AI cannot think on a conceptual level, not yet at least. AI systems are thinking about the next step.
This does not disqualify the value of AI in its current form at all. It reveals the real new playing field. We have not only humans and computers, but humans, chatbots, and computers—or, as Nate put it rightly, multiple interactions.
I was at a conference on responsible AI, and in a session on conversational AI and digital humans, I was triggered by this thought too. They spoke about conversational AI and the user, the human, who needs to be served with the best tool for understanding them. All was fine, but I asked the presenter whether they also thought about the other chatbot user: the insurance company employee (an insurance company was the use case). Is that not also a 'user' in the interaction with the chatbot? Isn't the AI here an entity of its own, learning things, shaping relations, and functioning as a learning engine for the employee just as much as it can solve problems for the client user?
I felt this needs more attention and more fleshing out. The framing of Nate is a nice one here.
Connect this to the artificial neuron model: infomorphic neurons that learn independently, similar to biological neurons.
Novel artificial neurons learn independently and are more strongly modeled on their biological counterparts. A team of researchers from the Göttingen Campus Institute for Dynamics of Biological Networks (CIDBN) at the University of Göttingen and the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) has programmed these infomorphic neurons and constructed artificial neural networks from them. The special feature is that the individual artificial neurons learn in a self-organized way and draw the necessary information from their immediate environment in the network. The results were published in PNAS.
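To make the idea of "learning from the immediate environment" concrete, here is a toy sketch of a neuron that updates its weights using only locally available information (its own inputs and output), with no global error signal. This is an illustrative Hebbian-style rule under my own assumptions, not the actual infomorphic-neuron algorithm from the PNAS paper.

```python
# Toy sketch of local, self-organized learning: a single neuron that
# adapts its weights using only its own inputs and output (a Hebbian
# rule with Oja-style decay). Illustrative only; NOT the infomorphic
# neuron model from the paper.
import math
import random

class LocalNeuron:
    def __init__(self, n_inputs, lr=0.01):
        self.w = [random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
        self.lr = lr

    def forward(self, x):
        # Weighted sum squashed to (0, 1): the neuron's activity.
        s = sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-s))

    def local_update(self, x):
        # Update uses only x (local input) and y (own output):
        # no backpropagated error from elsewhere in the network.
        y = self.forward(x)
        self.w = [wi + self.lr * y * (xi - y * wi)
                  for wi, xi in zip(self.w, x)]
        return y

random.seed(0)
neuron = LocalNeuron(n_inputs=3)
for _ in range(200):
    neuron.local_update([1.0, 0.5, 0.0])  # repeated correlated input

print(neuron.w)  # weights drift toward the dominant input direction
```

The point of the sketch is the locality: each weight change depends only on quantities the neuron itself can observe, which is the property the infomorphic approach shares with biological neurons.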
I like to connect that to another part of the session at the conference: the potential of digital humans in conversations with intelligent services. The angle presented is that digital humans better represent human emotions in a conversation and help understanding (or selling, of course). I am not sure about that. As it was presented, we see humanlike behavior very quickly in things with certain expressions and behavior. We don't need a human-like visual representation for that. I think we can better use digital humans inversely: use the level of resolution to indicate how much the intelligence understands human things, or how much it is hallucinating. And learn en passant to recognize when real humans are more fake than synthetic ones (like influencers faking real life).
To bring it home: we must acknowledge we are not using AI as a tool but instead live with AI as a partner that does its own thing, and we can capitalize on this if we understand what these systems are learning.
What inspiring paper to share?
This week, a paper that covers living labs, among others the Wijkbot as a living lab. Nice.
A living lab learning framework rooted in learning theories
Thus, this paper develops a framework that makes it possible to capture learning in a living lab co-creative environment. In response to widespread calls for an epistemological basis for living labs, the study grounds the framework in relevant learning theories.
The framework distinguishes content, capacity, and network as learning types; intentional or incidental as learning processes; and individual, team, and organization as learning levels.
Astha Bhatta, Heleen Vreugdenhil, Jill Slinger, A living lab learning framework rooted in learning theories, Environmental Impact Assessment Review, Volume 114, 2025, 107894, ISSN 0195-9255, https://doi.org/10.1016/j.eiar.2025.107894.
What are the plans for the coming week?
This week I need to prepare for the ThingsCon Salon on 14 April: shaping the workshop and planning the logistics, the usual production work that is part of organizing events.
Working on projects: the new phase of Civic Protocol Economies will start. Some funding schemes for the Hoodbot robot citizen-voice function are being explored.
No big plans for events. There is a huge webinar (I expect) by Scott Galloway that I am curious about. For those who are attending Milan Design Week, based on the guide by Designboom, there is a lot of good old design stuff, if that is your thing. Google is launching an immersive platform, though, that might be interesting to track or check. The group shows look potentially interesting: Salone del Mobile turning to new forms of embodiment in design, and Fuorisalone is possibly the most interesting: “This edition delves into how design fosters relationships—between physical and digital realms, diverse cultures, humans and the environment, and individuals within communities.”
Have a great week!
References with the notions
Human-AI partnerships
Google rolls out Gemini 2.5, which got less attention because of the introduction of GPT-4o image generation. Google also introduces more down-to-earth travel planning AI tools. The multimodal angle on reasoning is the most important one to track.

It was a meme driving the news on the new release. Adjusting existing images makes it so much more accessible and imaginable for people, as Ben Thompson rightly said.



Apple also updated iOS with (some) more Apple Intelligence features. For us in Europe, that now includes visual intelligence and the heavily critiqued Image Playground, among others.

“These 22 lessons highlight the quiet revolution students are already leading, the sophisticated interplay between AI and human cognition, and the urgent need for educational institutions to adapt.”
“When learning is structured by systems rather than teachers, how do learners understand what—or who—is teaching them?”


How do LLMs think? Anthropic opens the black box to help us understand.


Generative AIs are not tools.

AI predictions as self-fulfilling prophecies

Robotic performances
The future of last mile robotics.

Nice overview by Matt of current humanoids, including the question of whether there will be specific fashion for our future fellow co-workers.

Is this about robotics or new intelligence, or immersive understanding?

Evolution of walking by robots.


Immersive connectedness
I almost forgot about that… What will drop first? Apple Car or Apple Intelligence?

Smart home reorganisation

The Glass Slab is still on the road of development.

Tech societies
An interesting observation: the impact of bots on the feasibility of open-access (academic) content, not so much from a commercial perspective as from a technical one.

The future of the generalist.

xAI + X = XAI


GenAI economics are a source of worry, and the future of AI is not GenAI, according to Gary Marcus. How does China's leftover hardware play out?


The value of the hype.

Knowledge under siege, and other impactful societal happenings.
