Weeknote 235; Somatic agents in AI space

Generative Agents: Interactive Simulacra of Human Behavior - Midjourney

Hi all!

A shorter week after Easter. On Friday I combined a visit to STRP with the ThingsCon Salon we organised in Eindhoven. For the third time we teamed up with the Eindhoven community; it has been a while, as the last in-person Salon was before Covid. This time we planned it as a side program of STRP, which made it easy for me to visit the STRP activities beforehand. I liked this edition, with several nice art installations on the Art of Listening.

The Salon was also very inspiring, with three good speakers offering a mix of insights on listening profiles, sensing homes and bringing data to life. The theme of Listening Things is our personal space becoming more and more a sensing environment through the silent IoT revolution. With the new Matter standard, it will be even easier to add connected devices to our homes. As Lorna mentioned, back in 2010, when we were both present at the launch of The Council Internet of Things, initiated by Rob van Kranenburg (read about the impact), we joined a session on the impact of the smart home on the sense of personal space (organised by Alexandra Deschamps-Sonsino). Elif showed in her research how listening is part of acoustic biotopes. Making sensor data tangible and giving it meaning is the work of CleverFranke, and Bob showed how this could work in a smart home context. Our relationship with the tech surrounding us is changing, something that can be clearly experienced in the years of student projects with the IoT Sandbox, a scale model of a home that triggers new ideas for interacting through the physicality of data. Home IoT is a growing system, as Joep showed us.

Our relationship with our more or less connected home will change even more if that relationship becomes more intelligent. See all the news again this week; agency is becoming the topic of the coming period…

Peet took some nice pictures, which you can find here.

Events for the coming week

Notions of last week’s news

The more we become partners with the new AI tools, the more new relations will be built. This week a paper on generative agents got a lot of attention. I added it to the ‘paper for the week’ part below; besides being an interesting research approach, it also gives insight into a possible new form of relation in AI. Not new to researchers and people looking into human-tech agent relations, but it seems to trigger thinking with a broader audience.

Another tool is AgentGPT, which is an experiment in this area.

GitHub - Significant-Gravitas/Auto-GPT: An experimental open-source attempt to make GPT-4 fully autonomous.

Gary Marcus addresses another aspect: the anthropomorphising of AI, treating AI like people.

Stop Treating AI Models Like People
No, they haven’t decided to teach themselves anything, they don’t love you back, and they still aren’t even a little bit sentient.

The appearance of robots everywhere is, of course, influencing the way we communicate with them. “Effective integration of robots into human life requires balancing responsibility between people and robots, and designating clear roles for both in different environments.”

Robots are everywhere – improving how they communicate with people could advance human-robot collaboration
Robots are already carrying out tasks in clinics, classrooms and warehouses. Designing robots that are more receptive to human needs could help make them more useful in many contexts.

The robot as an intelligent creature is just getting started. I liked how Stacey Higginbotham, in her latest podcast, brought up the combination of TinyML and ChatGPT to bring more ‘life’ into IoT devices. Microsoft shared research that converts natural language instructions into executable robot actions.

A New Microsoft AI Research Shows How ChatGPT Can Convert Natural Language Instructions Into Executable Robot Actions
Large language models (LLMs) that can comprehend and produce language similar to that of humans have been made possible by recent developments in natural language processing. Certain LLMs can be honed for specific jobs in a few-shot way through discussions as a consequence of learning a great quanti…

Applications might be seen first in an industrial context, where robots are more common already, of course.

Microsoft, Siemens to drive industrial productivity with generative AI
Azure OpenAI Service powered assistant can augment the creation, optimization and debugging of code in software for factory automation.

The role of embodiment in intelligence is linked to the current focus on forms of AI. This makes the link with robotics even more interesting, according to the New York Times.

Soft robotics might be necessary for this.

Research in Japan Shows the Way Toward Tactile and Proximity Sensing in Large Soft Robots

And robots that operate as a team could be needed for more complex work.

Robotics Researchers Focus on Teamwork
Teaching robots to communicate and respond to each other could lead to the automation of more complex work.

Dan Shipper always has interesting explorations (and makes a great tool); this one might be a bit controversial:

“We put a premium on explanations because historically we’ve felt that unless we know “why” something happens, we can’t reliably predict it or change it. (…)

“That’s not true anymore. AI works like human intuition, but it has more of the properties of explanations. In other words, it is more inspectable, debuggable, and transferrable.”

I am not sure AI resembles intuition; the transformer models seem more like forced intuition, or planned intuition.

“None of this means we should leave scientific explanations behind. It just means we don’t have to wait for them.”

Against Explanations
AI can make progress where science has struggled

In other genAI news:

Google is sprinting to protect its core business with a flurry of projects, including updates to its search engine and plans for an all-new one.

AWS is entering the Generative AI space from a storage and processing perspective.

Elon Musk seems to be planning a generative Twitter.

The Verge
Elon Musk has created a new company dedicated to artificial intelligence — and it’s called X.AI, as first reported by The Wall Street Journal. The company, which a Nevada filing indicates was incorporated last month, currently has Musk as its director and Jared Birchall, the director of Musk’s famil…

Not generative but generalist medical AI

Foundation models for generalist medical artificial intelligence - Nature
This review discusses generalist medical artificial intelligence, identifying potential applications and setting out specific technical capabilities and training datasets necessary to enable them, as well as highlighting challenges to its implementation.

“It starts with a familiar intro, unmistakably the Weeknd’s 2017 hit “Die for You.” But as the first verse of the song begins, a different vocalist is heard: Michael Jackson. Or, at least, a mechanical simulation of the late pop star’s voice.”

AI-generated music as a new thing?

AI-Generated Music Is About to Flood Streaming Platforms
There are already countless songs on Spotify, Apple Music, and SoundCloud. And as tunes become easier to create, anyone can add to the copyright din.

AI, NIL, and Zero Trust Authenticity
AI-generated content is not going to harm those with the capability of breaking through: it will make them stronger, aided by Zero Trust Authenticity

Google is working on a song creator of its own.

Google’s AI music generator is like ChatGPT for audio
Google has unveiled MusicLM, an advanced AI music generator that can produce audio based on a short text description.

Curious how Coachella will look next year 🙂

coachella 2023 art installations immerse visitors in a world of vibrant architectural beacons
the coachella valley music and arts festival returns with immersive installations from renowned artists and designers.

GPT-5 is not being rushed; Sam Altman of OpenAI reassures us.

The Verge
In a discussion about threats posed by AI systems, Sam Altman, OpenAI’s CEO and co-founder, has confirmed that the company is not currently training GPT-5, the presumed successor to its AI language model GPT-4, released this March. Speaking at an event at MIT, Altman was asked about a recent open…

And did you hear about crypto? A report on the current state:

2023 State of Crypto Report: Introducing the State of Crypto Index
Our 2023 report aims to break through the noise of price movements to track the signals that matter and the progress of web3 technology.

The protocol economy is on another level; the shift to cloudless computing is based on protocols instead of centralised services.

The Paradigm Shift to Cloudless Computing

And to close on a geopolitical note: “Cold War 2 won't be decided by the opening moves.”

2023 is when the empires strike back
Cold War 2 won’t be decided by the opening moves.

Ok, let’s end with another type of AI prompting human behaviour: AI-generated skateparks…

sweeping AI-generated skateparks emerge from the iconic parisian cityscape
amid the dense urban fabric of paris, ūti architectes unveils hidden skateparks sweeping across the city using AI design tool midjourney.

Paper for the week

Generative Agents: Interactive Simulacra of Human Behavior

Research through game design is a really interesting way of simulating the experience of Generative AI. The paper received much attention as a signpost for a new form of AI: agents, just like the earlier mentioned AgentGPT.

In this paper, we introduce generative agents—computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior.

Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior.

https://arxiv.org/pdf/2304.03442.pdf
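To get a feel for how such an agent hangs together, here is a minimal sketch in Python of the memory-stream idea the abstract describes: observations are stored, scored on recency, importance and relevance at retrieval time, and now and then condensed into higher-level reflections. The scoring weights, the keyword-overlap stand-in for relevance and the llm() stub are my own simplifications, not the authors’ implementation.

```python
# Minimal sketch of a generative-agent memory stream (my simplification, not the paper's code).
import math
import time
from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    return f"[reflection on: {prompt[:60]}...]"


@dataclass
class Memory:
    text: str
    importance: float                       # 1-10; in the paper this is rated by the LLM itself
    created: float = field(default_factory=time.time)


class Agent:
    def __init__(self, name: str):
        self.name = name
        self.stream: list[Memory] = []      # the full record of experiences, in natural language

    def observe(self, text: str, importance: float = 5.0) -> None:
        self.stream.append(Memory(text, importance))

    def retrieve(self, query: str, k: int = 3) -> list[Memory]:
        """Rank memories by a mix of recency, importance and relevance to the query."""
        now = time.time()

        def score(m: Memory) -> float:
            recency = math.exp(-(now - m.created) / 3600)        # decays over hours
            overlap = len(set(query.lower().split()) & set(m.text.lower().split()))
            relevance = overlap / (len(query.split()) or 1)      # crude stand-in for embeddings
            return recency + m.importance / 10 + relevance       # equal weights, my choice

        return sorted(self.stream, key=score, reverse=True)[:k]

    def reflect(self) -> None:
        """Condense recent memories into a higher-level reflection and store it back."""
        recent = "; ".join(m.text for m in self.stream[-5:])
        self.stream.append(Memory(llm(f"What does {self.name} conclude from: {recent}"), 8.0))


# Example: an agent notices events, reflects, and retrieves context to plan its day.
isabella = Agent("Isabella")
isabella.observe("Isabella is planning a Valentine's Day party at the cafe", 8)
isabella.observe("Isabella talked to Klaus about the party")
isabella.reflect()
for m in isabella.retrieve("party at the cafe"):
    print(m.text)
```

In the paper itself the relevance score comes from embedding similarity and importance is rated by the language model; the interesting part here is simply the loop of observing, reflecting and retrieving that makes the behaviour feel believable.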

Have a lovely week!