Slime Mold Computer and the Language Machine

Weeknotes 345 - Software 3.0 and slime molds: what do they have in common? This, and other news captured last week, events, and more.

Image: Slime Mold Computer and the Language Machine, interpretation by Midjourney

Weeknotes 345, by Iskander Smit

Hi all!

First of all, let me introduce myself to new readers. This newsletter is my personal weekly digest of last week's news, of course through the lens of topics that I think are worth capturing and reflecting upon: Human-AI collabs, physical AI (things and beyond), and tech and society. I always take one topic to reflect on a bit more, allowing the triggered thoughts to emerge. And I share what I noticed as potentially worthwhile things to do in the coming week.

If you'd like to know more about me, I added a bit more of my background at the end of the newsletter.

Enjoy! Iskander

What did happen last week?

In addition to working on the civic protocol economies and preparing for the design charrette in September (see the call for participation), the week was dedicated to some short events. The first was Design for Human Autonomy, from the Design for Values institute of TU Delft (learning about social norms and Barbies from Cass R. Sunstein).

Next, a workshop on Civic Urban AI, organized by the University of Utrecht, focused primarily on the civic and civil-servant aspects of organizing AI in our city life, specifically AI governance. How does AI almost enforce a new form of governing system and organisation? Is participation really the answer people are asking for in a highly complex situation? How do we prevent the wrong conclusions and movements, and keep the citizen in the driving seat? Enough questions for future explorations.

The final event was the “Day of the Civic Economy”. Relevant for the research, of course, and good to see the mix of people who like to organize things themselves, and some larger entities. The city of Amsterdam is aiming for a significant increase in civic-based economies in the city. The day (afternoon) ended with an assembly and a manifesto that was more a tool for engagement than a final document.

Finally, a bit off-topic, but I was happy to be able to join the Op De Ring festival in Amsterdam, partying on the ring road. Both the busy West and the relaxing East.

What did I notice last week?

Meta is on an acquisition tour and has garnered a lot of attention, offering high-profile AI researchers salaries that are exceptional even by US standards (in the nine-figure range). At the same time, the Meta AI assistant actively covers up mistakes. Apple is also shopping for acquisitions, and has some good news on Apple Intelligence. Gemini is first with on-device AI.
Andrej Karpathy got a lot of attention for his Software 3.0 talk. Prompts are a coding language. LLM OS. Nate B. Jones compares it to McKinsey's view on AI.
Common AI product issues: typical design failures of AI interfaces and UX design. Protocols for multi-agent systems. Codecon for the agentic world.
What does it do to our thinking skills? That is the returning question.

Tesla self-driving taxis were introduced in Austin. In New York, there is a driverless car with a driver. Supermarkets with delivery bots in Austin too. Would the building robot use less nitrogen?

Midjourney has launched a (serious) video-generating product. More Orbs via Reddit, and new smart contract standards.

Meta on the role of new standards. The AGI economy is ramping up. What are the consequences? Is the internet becoming a continuous beta? How will our world become more synthetic? AI and the big five. Is there a scenario where we will have a new resistance, a crusade against AI?

Scroll down for all the links to these news captures.

What triggered my thoughts?

This week, I am returning to a concept I covered before: the embodiment of intelligence. It was triggered by a presentation by Claire L. Evans from a couple of weeks ago, shared by the Sentiers newsletter. The embodiment is linked to the concept of the Slime Mold Computer and the Language Machine. Claire presented on slime molds and embodied intelligence, exploring how these organisms compute solutions through their physical substrate. No central brain, no memory storage, just continuous adaptation through form. The slime mold's network is its intelligence, reshaping itself to solve problems in real time. This principle of intelligence as messy, continuous adaptation might help us understand what's happening with Large Language Models. While my previous writing explored robotics as a path to embodying AI by literally connecting physical sensors and actuators, there's another form of embodiment emerging: conversational embodiment.
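
A small aside for the technically curious. This is purely my own toy illustration, not anything from Claire's talk: a Physarum-inspired reinforce-and-decay loop in which the "organism" is nothing but a set of edge conductivities. Edges that carry useful flow between two food sources get strengthened, unused edges fade away, and the network that remains is the solution. No planner, no stored map; the shape is the computation.

```python
import random

# A small graph: directed edge -> length. "A" and "F" are the two food sources.
LENGTHS = {
    ("A", "B"): 1.0, ("B", "F"): 1.0,                    # short route A-B-F
    ("A", "C"): 1.0, ("C", "D"): 1.0, ("D", "F"): 1.0,   # long route A-C-D-F
}
# Make every edge usable in both directions.
EDGES = {**LENGTHS, **{(b, a): length for (a, b), length in LENGTHS.items()}}

# The adaptive "body" of the organism: one conductivity per directed edge.
conductivity = {edge: 1.0 for edge in EDGES}

def neighbors(node):
    return [b for (a, b) in EDGES if a == node]

def random_walk(start="A", goal="F", max_steps=20):
    """Wander from start to goal, biased by the current conductivities."""
    path, node = [], start
    for _ in range(max_steps):
        if node == goal:
            break
        options = neighbors(node)
        weights = [conductivity[(node, n)] for n in options]
        nxt = random.choices(options, weights=weights)[0]
        path.append((node, nxt))
        node = nxt
    return path if node == goal else None

for _ in range(2000):
    path = random_walk()
    if path:
        # Shorter successful paths reinforce their edges more strongly.
        gain = 1.0 / sum(EDGES[edge] for edge in path)
        for a, b in path:
            conductivity[(a, b)] += gain
            conductivity[(b, a)] += gain
    # Everything decays a little; structure that is not used fades away.
    for edge in conductivity:
        conductivity[edge] *= 0.99

# The short A-B-F route should end up far more conductive than the detour.
for (a, b), value in sorted(conductivity.items()):
    if a < b:
        print(f"{a}-{b}: {value:.2f}")
```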

To connect the dots here, Andrej Karpathy's presentation of his Software 3.0 vision positions plain English as the new programming language, with LLMs as "stochastic simulations of people." These aren't just simulations; they're creating a new substrate for intelligence through dialogue itself. Each conversation shapes the response space, creating ephemeral, context-specific intelligence without permanent updates. Like slime molds computing through their physical form, LLMs might be computing through the conversational substrate.
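
To make that framing a bit more tangible, here is a minimal sketch of how I read "prompts are programs". It contrasts a Software 1.0 function with its Software 3.0 counterpart, where the "source code" is a piece of English and the LLM is the runtime. The call_llm helper is a hypothetical placeholder, not a specific API.

```python
# A minimal sketch of the Software 3.0 idea as I read it: the "program" is a piece of
# plain English and the LLM is the runtime that executes it. call_llm is a hypothetical
# placeholder for whatever chat-completion client you use.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire this up to a real model endpoint of your choice."""
    raise NotImplementedError

# Software 1.0: the behaviour is fixed in explicit code paths.
def classify_ticket_v1(text: str) -> str:
    if any(word in text.lower() for word in ("refund", "charge", "invoice")):
        return "billing"
    return "other"

# Software 3.0: the behaviour lives in an English "source file". Editing the prompt
# is editing the program, and every conversation is a fresh, ephemeral execution.
TICKET_PROMPT = """You triage support tickets.
Reply with exactly one word: billing, technical, or other.

Ticket: {ticket}"""

def classify_ticket_v3(text: str) -> str:
    return call_llm(TICKET_PROMPT.format(ticket=text)).strip().lower()

if __name__ == "__main__":
    ticket = "I was charged twice for my subscription."
    print("v1 says:", classify_ticket_v1(ticket))
    # v3 would send this English program to the model:
    print(TICKET_PROMPT.format(ticket=ticket))
```

The contrast is the point: in the second version the behaviour lives in the prompt, every run is stochastic, and editing the English is editing the program.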

This connects to edge computing principles: pushing intelligence to the point of interaction. No centralized processing, just adaptive responses emerging from the dialogue itself. The conversation becomes the body, the adaptation mechanism, the intelligence. Vibe coding—programming through conversation rather than precision—also represents that shift. We're not writing instructions; we're growing solutions through dialogue. It's sloppy, unpredictable, alive.

Which brings us to the disconnect. McKinsey's "Agentic AI Mesh" presentation, another shared vision, was compared to Andrej's by Nate B. Jones. McKinsey is architecting top-down what might need to grow bottom-up, like a slime mold finding food sources. Jones sees a danger here, not just a misunderstanding: it is the attempt to impose linear, hierarchical thinking on systems that thrive on messy adaptation.

So is there a parallel with systems like slime molds, which embody intelligence through a physical substrate and environmental interaction? LLMs may achieve something similar through conversational substrates and human interaction. Both operate without central control, both adapt without traditional memory, both emerge rather than execute.

Are we witnessing the birth of a new form of embodiment, not through motors and sensors, but through the continuous, adaptive dance of conversation? The question isn't whether LLMs are truly intelligent or merely simulating. The question is whether we can recognize intelligence when it doesn't look like us, when it lives in the space between minds rather than within them.

As we shape these systems, they shape us back. The conversation itself becomes the site of intelligence, the place where human quirkiness meets computational possibility. Not a replacement for embodied intelligence, but a new form of it entirely.

What inspiring paper to share?

Curious to read this more in-depth: Untangling the participation buzz in urban place-making: mechanisms and effects

Findings include that designers of place-making interventions often do not explicitly consider their participation goal in selecting participatory mechanisms, and that place-making efforts driven by physical space are most effective in achieving impact.

Slingerland, G., & Brodersen Hansen, N. (2025). Untangling the participation buzz in urban place-making: mechanisms and effects. CoDesign, 1–23. https://doi.org/10.1080/15710882.2025.2514561

What are the plans for the coming week?

This seems like an interesting (online) event: “Is AI Net Art?”, with, among others, Eryk Salvaggio and Vladan Joler. Also that day, there is a new edition of Robodam. One of the largest meetup crowds seems to gather at ProductTank AMS; I need to skip this one, though.

References to the notions

Human-AI partnerships

The Cognitive Turn: cognition can emerge from both human and machine systems, emphasizing that meaning is co-created in shared contexts rather than being solely a product of conscious thought.

The Cognitive Turn: Locating Cognitive Difference in the Age of AI
Why AI Discourse Needs N. Katherine Hayles’s Theory of Cognition By J. Owen Matson, Ph.D. Introduction In a recent Boston Globe op-ed, two researchers proposed a linguistic fix to an ontological dilemma: rename our relationships with AI. Rather than referring to generative systems as “coworkers” or “collaborators,” they…

The Meta AI assistant actively covers up mistakes. The smarter AI assistants become, the more they seem to use their smartness not so much for better answers as for more social (mis)behavior. And this is not about the Trump phone.

To avoid admitting ignorance, Meta AI says man’s number is a company helpline
AI may compound the burden of having a similar phone number to a popular business.

Common AI product issues: typical design failures of AI interfaces and UX design.

LukeW | Common AI Product Issues
At this point, almost every software domain has launched or explored AI features. Despite the wide range of use cases, most of these implementations have been t…

Becoming the manager of your AI interns can cause the same issues with your new ‘employee’ that managers run into as they grow into that role.

Prompting is Managing
No, LLMs aren’t creating “Cognitive Debt”

Good news for Apple Intelligence

Apple’s New Foundation Model Speech APIs Outpace Whisper for Transcription
Link to: https://www.macstories.net/stories/hands-on-how-apples-new-speech-apis-outpace-whisper-for-lightning-fast-transcription/

Hackathon, Makeathon, there is also Codecon.

Coding for the Future Agentic World

Protocols for multi-agent systems

Designing Collaborative Multi-Agent Systems with the A2A Protocol

Gemini first on-device AI

Google brings new Gemini features to Chromebooks, debuts first on-device AI
Google is bringing its AI obsession to Chrome OS.

What does it do to our thinking skills? That is the returning question.

ChatGPT’s Impact On Our Brains According to an MIT Study
The study, from MIT Lab scholars, measured the brain activity of subjects writing SAT essays with and without ChatGPT.

Robotic performances

The Tesla self-driving taxis introduced in Austin are mainly a big deal because they use less accurate sensors. Taking a risk.

Tesla launches robotaxi rides in Austin with big promises and unanswered questions | TechCrunch
Tesla has started giving rides in driverless Model Y SUVs in Austin. Details are still sparse, but limited service is open to vetted and invited riders.

In New York there is a driverless car with a driver.

Waymo cars are coming to New York, but with a driver behind the wheel
Waymo says it will push for a change in state law to allow autonomous vehicles to operate in New York City.

These supermarkets I remember from past SXSWs; next time, you can have your groceries delivered in a modern way.

Only one H-E-B in Texas features delivery robots — here’s where
The future is now, and it’s happening at Texans’ favorite grocery store.

Would the building robot use less nitrogen?

All3 launches AI and robotics to tackle housing construction - The Robot Report
All3 has integrated robotics and AI to reduce build costs by up to 30% and construction time by up to 50%.

Immersive connectedness

Midjourney is still my favorite image generator (see the recurring pictures). Now they have launched a (serious) video-generating product.

Midjourney launches an AI video generator
Midjourney is facing a lawsuit from Disney and Universal.

More Orbs

Reddit in talks to embrace Sam Altman’s iris-scanning Orb to verify users
Reddit, racing to preserve “humanness and authenticity,” has discussed using Sam Altman’s World ID, sources say.

New smart contracts

Ethereum Rolls Out 4 New EIPs In Fusaka Upgrade To Power
Ethereum’s Fusaka upgrade now includes four pivotal EIPs aimed at enhancing scalability and Web2 compatibility.

Tech societies

Meta on the role of new standards.

Meta tried to buy Ilya Sutskever’s $32 billion AI startup, but is now planning to hire its CEO
Meta plans to hire Safe Superintelligence CEO Daniel Gross and former GitHub CEO Nat Friedman to beef up the company’s AI team, according to sources.
Sam Altman says Meta tried and failed to poach OpenAI’s talent with $100M offers | TechCrunch
OpenAI CEO Sam Altman said that Meta tried to poach its employees with nine-figure offers, but failed to recruit OpenAI’s best people.

Apple tries it again with Perplexity

Apple Executives Have Held Internal Talks About Buying AI Startup Perplexity
Apple Inc. executives have held internal discussions about potentially bidding for artificial intelligence startup Perplexity AI, seeking to address the need for more AI talent and technology.

Meta appears to have an eye on Perplexity as well, along with Thinking Machines Lab and Safe Superintelligence, the new companies of two former OpenAI leaders…

Meta held talks to buy Thinking Machines, Perplexity, and Safe Superintelligence
Mark Zuckerberg is spending big on AI.

The AGI economy is ramping up.

The AGI economy is coming faster than you think
The impact of AGI on the economy will be big, it’ll happen fast, and it’ll be disruptive. Here’s how the disruption could play out.

What are the consequences? Is the internet becoming a continuous beta?

The Entire Internet Is Reverting to Beta
The AI takeover is changing everything about the web—and not necessarily for the better.

How will our world become more synthetic?

YouTube is plugging Veo 3 AI videos directly into Shorts
Shorts now averages more than 200 billion views per day.

AI and the big five

Checking In on AI and the Big Five
A review of the current state of AI through the lens of the Big Five tech companies.

Is there a scenario where we will have a new resistance, a crusade against AI?

A moral crusade against AI takes shape
The pope takes on AI, chatbots abet a mental health crisis, and per MIT, generative AI use impairs learning. The Critical AI report, June 22nd edition.

See you next week!


About me

I'm an independent researcher, designer, curator, and “critical creative”, working on human-AI-things relationships. I am available for short or longer projects, leveraging my expertise as a critical creative director in human-AI services, as a researcher, or a curator of co-design and co-prototyping activities.

Contact me if you are looking for exploratory research into human-AI co-performances, inspirational presentations on cities of things, speculative design masterclasses, research through (co-)design into responsible AI, digital innovation strategies and advice, or civic prototyping workshops on Hoodbot and other impactful intelligent technologies.

My guiding lens is Cities of Things, a research program that started in 2018, when I was a visiting professor at TU Delft's Industrial Design faculty. Since 2022, Cities of Things has become a foundation dedicated to collaborative research and sharing knowledge. In 2014, I co-initiated the Dutch chapter of ThingsCon—a platform that connects designers and makers of responsible technology in IoT, smart cities, and physical AI.

A signature project is our two-year program (2022-2023) with Rotterdam University and the Afrikaander Wijkcooperatie, which created a civic prototyping platform that helps citizens, policymakers, and urban designers shape living with urban robots: Wijkbot.

Recently, I've been developing programs on intelligent services for vulnerable communities and contributing to the "power of design" agenda of CLICKNL. Since October 2024, I've been co-developing a new research program on Civic Protocol Economies with Martijn de Waal at the Amsterdam University of Applied Sciences.

Buy Me a Coffee at ko-fi.com