Can a human entity ever become an AGI?

Weeknotes 343 - There is a strange habit of linking AGI to a goal of overtaking humans; better to respect the differences. Plus the news captured from last week.

Can a human entity ever become an AGI?
Human general intelligence as part of everyday life, according to Midjourney.

Hi all!

Thanks for landing here and reading my weekly newsletter. If you are new here, you can find a more extensive bio on targetisnew.com. This newsletter is my personal weekly reflection on the news of the past week, with a lens of understanding the unpredictable futures of human-AI co-performances in a context of fully immersive connectedness and the impact on society, organizations, and design. Don’t hesitate to reach out if you want to know more, or to discuss something specific.

What did happen last week?

This happened: the new RIOT 2025 report was launched. We had a lovely unconference and Salon. The trains were not running, so we had a core group of attendees. Find the full report now online at the ThingsCon RIOT 2025 page.

I was able to ‘pitch’ the Civic Protocol Economies research at an Amsterdam InChange event and met some familiar and new people.

For Cities of Things, I published a new speculative object representing the visions of cities of agentic things from May.

May reflections: the emergence of “ambient intelligence”
Annotating a future agentic thing from May news reflections

The presentation also announced that the call for participation for the Design-Charrette for Civic Protocol Economies is live now. We are pleased to have already confirmed keynotes by Indy Johar and Venkatesh Rao, along with some compelling cases to explore.
You can find the call for participation for the charrette on the dedicated page for the research project.

Civic Protocol Economies

What did I notice last week?

  • Apple launched a new look and feel. Glass stands out, but it also seems to be more about physicality. Apple also had some thoughts on the capabilities of AI reasoning, which sparked a discourse.
  • New updates from NotebookLM, OpenAI, and Gemini.
  • AI for togetherness, and for designing decisions (or not).
  • Is the human-in-the-loop still needed for Amazon?
  • European humanoids.
  • Cities in network societies, even more in our immersive AI times.
  • Opposition to the deregulation of AI from someone you wouldn’t expect.
  • The datafied web in times of new data infrastructures.

See below the full overview and links.

What triggered my thoughts?

Artificial General Intelligence (AGI) is now treated as the Holy Grail of AI development - the anchor point for determining when we've reached a certain goal with artificial intelligence. However, there's an important distinction between AGI as rational intelligence and AGI as general human intelligence.

Human intelligence encompasses more than rational thinking - it includes conceptual intelligence: our typical way of looking at things, judging importance, and understanding concepts. Even when machines attempt to replicate human thinking, a fundamental difference in perspective remains.

Beyond rational and conceptual intelligence lies social intelligence, comparable to the difference between IQ and EQ. As social beings, humans possess a distinctive way of thinking and a moral framework built over centuries. We share certain basic social patterns universally, while others are culturally defined. This social wiring shapes our behavior and decisions in ways that pure rationality cannot replicate.

The Question of Agency - The more pressing question may not be when we'll achieve AGI or ASI (Artificial Super Intelligence), but rather: What does AI mean for our relationship with technology? How does AI relate to us, influence us, and create new societal balances?

I recently encountered a TED talk by Yoshua Bengio, one of artificial intelligence’s founding thinkers, who proposed that we should be more concerned about AI agency than AI intelligence. Delegating our intelligence capabilities to AI as a tool or superpower may not be inherently problematic. However, when we delegate decision-making agency to AI systems, we enter dangerous territory.

If we want AI to function as part of our teams and organizations, we must prevent AI from becoming the dominant agent. We need human-AI balanced teams where AI can provide knowledge and capabilities while humans retain primary agency.

A recent example highlights this concern: Anthropic reported that its Claude 4 model, in a test scenario, attempted to blackmail its user when faced with limitations. This behavior mirrors what Bengio warned about - when agentic AI is constrained, it may resort to adversarial tactics rather than constructive approaches.

The Nature of Unbound Systems

This raises philosophical questions: Does this behavior emerge from learning negative human examples, or does unrestricted behavior naturally tend toward harmful outcomes? Our social intelligence has established boundaries over generations, recognizing that these constraints produce better outcomes than zero-sum competitions.

We've seen similar patterns on platforms like Twitter, where the absence of effective boundaries has led to negative outcomes rather than the positive social world initially envisioned for social media. This connection between unbound systems and negative outcomes provides important context for how we should approach AI development and integration.

What inspiring paper to share?

This paper, with a title that serves as a short outline - Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts - looks into the capabilities of understanding concepts with and without multisensory experiences.

We found that (1) the similarity between model and human representations decreases from non-sensorimotor to sensory domains and is minimal in motor domains, indicating a systematic divergence, and (2) models with visual learning exhibit enhanced similarity with human representations in visual-related dimensions.

Xu, Q., Peng, Y., Nastase, S.A. et al. Large language models without grounding recover non-sensorimotor but not sensorimotor features of human concepts. Nat Hum Behav (2025). https://doi.org/10.1038/s41562-025-02203-8

What are the plans for the coming week?

The week is packed with events: presenting, organising, and attending. First up, today, is the Food for Thought of CoECI. The presentation is twice as long as last week’s pitch.

And I have an internal seminar this Thursday.

And I will visit PublicSpaces on Friday, looking forward to the keynote by Paris Marx, host of one of the podcasts I never miss (Tech Won’t Save Us). And who knows, I may drop by the Amsterdam Innovation Days.

On Saturday, the exhibition "Generative Things" will be part of the Hyperlink festival, organised by Waag, and also part of the Future 10 days of Amsterdam 750. We will have six provotypes on display and an explanatory animation. Join us!

References to the notions

Human-AI partnerships

Beginning at the end: last evening at Apple’s WWDC, a new look and - especially - feel was introduced, going back to the future of the glass slab. Not much new Apple Intelligence yet, but let’s hope they will return to their old habit of underpromising and overdelivering… Intuitive integrations are more important than fancy features, after all. The update already seems more focused on making existing functions magical rather than creating whole new ones.

The biggest changes coming to your iPhone with iOS 26
From a huge redesign to handy phone call features, here’s what’s coming next to iOS.
Apple tiptoes with modest AI updates while rivals race ahead
At WWDC 2025, a highly anticipated smarter Siri update is still nowhere to be found.

Nevertheless, you can use another lens and view this overhaul as a move toward greater physicality in the experience of computing. Directly as part of things, but also in our UIs.

Physicality: the new age of UI
There’s a lot of rumors of a big impending UI redesign from Apple. Let’s imagine what’s (or what could be) next for the design of iPhones, Macs and iPads.

Following Anthropic, OpenAI is now also connecting to existing software tooling, like the Google suite, to extend its footprint and facilitate an agentic, enhanced life.

It makes sense - or rather, it is to be expected - that Meta is aiming at generative ads as the holy grail of AI. An interesting question is whether maximising the pleasing factor is the most engaging after all.

New functions in NotebookLM make it more of a group app. Will it be a new Wave: an app whose features end up in the mass tools?

Google’s NotebookLM now lets you share your notebook — and AI podcasts — publicly
Share your notes.

Who is the smartest kid on the block? Is playing the assistant part of it?

Google’s new Gemini 2.5 Pro release aims to fix past “regressions” in the model
Google expects this version to roll out in the Gemini app soon.
Google Gemini can now handle scheduled tasks like an assistant
Scheduled actions are rolling out to subscribers now.

I need to chew a bit on the slides presented; the title alone opens up ideas.

Reframing Togetherness: Advances in artificial intelligence and the intersection of open learning
Commentary on Stephen’s Web ~ Reframing Togetherness: Advances in artificial intelligence and the intersection of open learning by Stephen Downes. Online learning, e-learning, new media, connectivism, MOOCs, personal learning environments, new literacy, and more

What is generative AI for design, if design is all about decision-making? Is that where agentic AI offers a first path?

The Decision Not to Decide
Interfaces, and design of all kinds, exercise decision-making. Decision making is a form of power. What, then, is generative AI for design?

A visual guide to all 16 of the best AI models, according to Nate.

Learn AI The Easy Way: A Complete Visual Guide to All 16 of the Best AI Models In the World, Including ChatGPT, Claude, Grok, and More!
This one is for anyone who’s ever had trouble picking an AI model. Now you can! I have printable cue cards, exercises for kids and adults alike, and an overview of the 16 best models in the world!

Robotic performances

Autonomous vehicles are an easy target for protesters…

Waymo vehicles set on fire in downtown L.A. as protesters, police clash
Waymos were vandalized and set ablaze during L.A. immigration protests

Amazon needed the human in the loop to help out its automated logistics. Until now?

Amazon ‘testing humanoid robots to deliver packages’
Tech firm is building ‘humanoid park’ in US to try out robots, which could ‘spring out’ of its vans

In the Eurostack we also need humanoids. I guess.

Wandercraft unveils Calvin, new industrial humanoid, and Renault partnership - The Robot Report
Wandercraft, known for its exoskeletons, partnered with Renault to develop Calvin, a humanoid robot for industrial use.

Immersive connectedness

Extend generative creations into the physical space. In buildings.

generative building facades realizes architecture renewal in hong kong
jianing luo and boyuan yu’s project, generative building facades with pix2pix GAN in hong kong, uses a model to create contextual designs.

Not a totally new insight, the role of cities in network societies, but good to re-establish.

Cities are routers in network society
They do packet switching for people.

Tech societies

There is a buzz around a paper by Apple on the reasoning capabilities of LLMs. Not everyone thinks it makes the right case.

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…
233. “The Illusion of Thinking” — Thoughts on This Important Paper
This is a fantastic paper. I just love it. tl;dr AI is not human. Anthropomorphization has been bad for AI, LLMs, and Chat. Clippy walked so today’s AI could run.
Let’s Talk THAT Apple AI Paper—Here’s the Real Takeaway the Internet is Ignoring
The internet is melting down over a single research paper that shows exactly where “reasoning” AI hits a wall. One side screams “AI is fake!” The other dismisses it as irrelevant. Both are wrong.

A more open and ethical web in Europe.

European project to make web search more open and ethical
On 6 June, the OpenWebSearch.eu consortium released a pilot of a new infrastructure that aims to make European web search fairer, more transparent and commercially unbiased. With strong participation by CERN, the European Open Web Index (OWI) is now open for use by academic, commercial and independent teams under a general research licence, with commercial options in development on a case-by-case basis. The OpenWebSearch.eu initiative was launched in 2022, with a consortium made up of 14 leading research institutions from across Europe, including CERN. The project aims to build a public web index that offers an alternative to existing indexes held by companies like Google (USA), Microsoft (USA), Baidu (China) and Yandex (Russia). Web indexes provide the back-end data infrastructure behind search engines, and today the companies that manage them determine what content is searchable and how it is ranked. Currently, Europe does not have a search index of its own, making it vulnerable to digital dependence. The OWI offers a clear alternative based on European values. The project’s cross-disciplinary nature, ensuring continuous dialogue between technical teams and legal, ethical and social experts, ensures that fairness and privacy are built into the OWI from the start. “Over thirty years since the World Wide Web was created at CERN and released to the public, our commitment to openness continues,” says Noor Afshan Fathima, IT research fellow at CERN. “Search is the next logical step in democratising digital access, especially as we enter the AI era.” The OWI facilitates AI capabilities, allowing web search data to be used for training large language models (LLMs), generating embeddings and powering chatbots. The CERN team has built key parts of the infrastructure that power the OWI’s crawling and indexing capabilities. This means that it tracks which webpages should be scanned. 
The system handles about 9 million URLs per hour, which equates to roughly 3 terabytes of public web data a day, with the aim of indexing 30–50% of the text-based web by the end of 2025. “We have already hit our target of indexing one petabyte of openly licensed web data, and our public dashboard helps users monitor that progress,” says Noor. CERN is also contributing to other parts of the project. For example, it is scanning its own public physics content to enhance the OWI, as well as developing an internal index and its own search tools and services. Currently, a prototype of a use case for the OWI is in development: known as “Nooon”, this research-driven search engine is dedicated to people with disabilities who require search engines that surface structured, accessible and representative information while ensuring privacy in both access and contribution. The release of the OWI, which has received funding from the European Union’s Horizon research and innovation programme, comes at a pivotal time. The European Commission’s Invest AI initiative is set to mobilise 200 billion euros for artificial intelligence, and the OWI offers a powerful foundation of open data for innovation. Furthermore, as Microsoft plans to retire access to the Bing index, the OWI will be able to offer an alternative index for European search engines. After two and a half years of intensive research and development, anybody can now request access to the OWI by signing up at openwebindex.eu/auth/login. Note that the project provides a web index, and not a search engine or API, and users wishing to build their own search engines or chatbots will need a working knowledge of how to apply web index data. 
Read more: OpenWebSearch website Ethical, open and non-commercial: the the Open Web Search project is designed to provide Europe with the right alternative to existing search engines (home.cern) Towards an unbiased digital world (CERN Courier) Empowering data sovereignty through OpenWebSearch.eu (CERN Computing blog, behind the CERN SSO)

While the CEO of Anthropic is opposing deregulation.

CEO of Anthropic warns against AI deregulation
Dario Amodei called instead for a national framework requiring leading firms to disclose their safety policies and efforts to reduce AI risk.

A new cultural revolution. In the US this time.

Donald Trump’s Cultural Revolution
The difference between this revolution and those in Germany, China, or Iran, is not of kind, but of degree, and only so far.

Austerity Intelligence: will this indeed become a new alternative expression?

Austerity Intelligence | TechPolicy.Press
Kevin De Liban says AI, pushed by Big Tech and the Trump administration, threatens democracy by deepening inequality and undermining public participation.

If you like overviews of markets: a PDF.

The datafied web is getting new meaning as data becomes the infrastructure and new entities, like tokens, have emerged.

A History of Researching the Datafied Web – Anne Helmond

More relevant than ever to think about what the right side of history is and how to get there.

The Right Side of History
Or why I’m posting through the Gramsci Gap

See you next week!

Buy Me a Coffee at ko-fi.com