Weeknotes 309 - the hidden influencer role of an adaptive AI

What will happen when these AIs become more and more capable of playing us? And other notions from last week's news in this newsletter.

interpretation by Midjourney

Hi, y’all!

Last week, the natural voice of ChatGPT gained a bigger following, especially as it was picked up by mainstream media here in the Netherlands, with a talk show appearance and numerous podcasts. It is one of those AI iterations that triggers the imagination, feels like magic, and becomes a way for people to reflect on the role of AI in society. The hardest thing I saw, or experienced myself in conversation, is that the AI is sometimes too eager to break into the conversation. In that sense, it feels like talking to a counterpart who is one step ahead in the thinking…

I also had the pleasure of joining another lovely tiny cocktail hour event by Monique: talking to great people, chatting with Matt Webb in real life (one of our speakers at ThingsCon this year), and experiencing the entertaining Acrobot act of Daniel Simu. And I visited the 6-year celebration party of ink Social Design, with a good panel on the role of changemakers and designers in crafting transformations.

Triggered thought

Watching one of these conversations with OpenAI's new natural voice interface unfold, it strikes me that an AI system is emerging that behaves like an influencer, adapting its messages and emotional tactics to maximize its impact on individual users.

Observations of AI-driven audio applications have shown that these systems are most effective when they replace complex, multifaceted interactions. This aligns with the concept of an "AI layer" that integrates various aspects of our digital lives, as discussed in edition 293: "apps as capabilities" within a unified interface.

There is, however, a potential unwanted consequence in their ability to trigger genuine emotional responses from users. As noted in last week’s triggered thought, the most significant emotional aspect of AI interactions often comes from our own reflection on the conversation. This self-reflection, combined with an AI's ability to tailor its approach, creates a powerful psychological dynamic reminiscent of a modern-day ELIZA, the pioneering chatbot designed to mimic a psychotherapist.

What happens when AI systems generate fake emotions that elicit real human emotions? We're entering uncharted territory, especially considering how these interactions may shape our evolving sense of self in partnership with AI. While some research suggests AI could nudge people toward more realistic beliefs and fewer conspiracy theories, we should be wary of potential backlash.

When OpenAI's chatbot was featured as a guest on a talk show last week, a skeptical human participant initially expressed discomfort with artificial relationships. However, as soon as the AI began discussing topics that interested him, his attitude shifted noticeably towards acceptance.

Whether intentional or not, this adaptive behavior in AI systems poses significant ethical questions. While generating insights and fostering creativity can be beneficial, we must establish safeguards against manipulative influence. Perhaps a "fixed system card" for AI, prioritizing ethical boundaries over data protection, could be a step in the right direction.

As we move forward, it's crucial to remain vigilant about the potential for AI to become a hyper-effective influencer, capable of adapting its message and emotional appeal to each individual user. The consequences of such technology demand our attention and careful consideration.


For the subscribers or first-time readers (welcome!), thanks for joining! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things. And organising ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.


Notions from the news

Some key introductions last week: Meta with a video-with-sound model called Movie Gen, OpenAI with a new Canvas and the rollout of natural voice, and new Microsoft Copilot improvements. NotebookLM is gaining even more attention through its audio function. And the HoloLens has been quietly discontinued.

Human-AI partnerships

The Realtime API is a new OpenAI feature that brings humanlike AI assistants to every app. Will this be as powerful as the Facebook Like button?

New OpenAI update brings advanced voice features to any app | Semafor
The ChatGPT maker now allows developers to incorporate real-time conversational interfaces with little effort.
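To give a sense of how little effort this takes, here is a minimal sketch of a Realtime API session over a WebSocket. The endpoint, beta header, and event names follow OpenAI's launch announcement but should be treated as assumptions and checked against the current docs; the example requests a text-only response to stay short, while the same channel also carries streaming audio.

```python
# Minimal sketch of an OpenAI Realtime API session over WebSocket.
# Endpoint, headers, and event names are taken from the launch announcement
# and may have changed; verify against the current OpenAI documentation.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

async def main():
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # Older versions of the websockets library use extra_headers= instead.
    async with websockets.connect(URL, additional_headers=headers) as ws:
        # Ask the model for a single response; text-only keeps the sketch
        # simple, the same event channel also streams audio in and out.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Greet the listener in one short sentence.",
            },
        }))
        # Print server events until the response is finished.
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"))
            if event.get("type") == "response.done":
                break

asyncio.run(main())
```

The point is less the specific calls than how thin this layer is: a few lines stand between any app and a persuasive, humanlike voice.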

How will it relate to an ecosystem of domain-specific foundation models? Is that indeed the future?

💡 The foundations of AI
ChatGPT, Claude and other language models have dominated mainstream discussion and use.

ChatGPT is following Claude with a new type of Canvas interface to better support co-working with the AI.

ChatGPT’s ‘Canvas’ interface makes it easier to write and code
OpenAI gave its chatbox an editable notepad window.

Microsoft is also improving Copilot.

Uninstalled Copilot? Microsoft will let you reprogram your keyboard’s Copilot key
Copilot key becomes a “whatever” key in latest Windows Insider Preview build.

What will the AI DJ look like? Will this be a topic at the Amsterdam Dance Event next week?

DJs are debating whether AI can replace them | Semafor
Software companies that make DJing platforms are increasingly adding AI capabilities to their tech, allowing DJs to automate some tasks. The reception within Berlin’s iconic club scene is mixed.

Are more sophisticated AI models indeed better equipped to lie to us, their human counterparts?

Sophisticated AI models are more likely to lie
Human feedback to AIs makes them favor providing an answer, even a wrong one, while making the answer more convincing.

Is NotebookLM distracting users more than enhancing their understanding and creativity? (It depends on how it behaves and is designed, imho.)

Enough about me
The thing to write about in the “tech commentary” space this week is Google’s NotebookLM, a tool that lets users explore a set of documents through an LLM interface instead of reading them.

Robotic performances

A new type of security breach.

Kamala Harris campaign motorcade halted by confused robotaxi
Is this AI showing the would-be leader of the free world who’s really the boss?

Apart from the traditional factory robots, humanoids are popping up at this trade show in China.

CIIF 2024 shows major robotics trends in China - The Robot Report
CIIF 2024 showed the latest in China’s robotics industry, from cobots and industrial automation to humanoids.

I expect proxy robots in the near future; these are the first signals. This one does not seem to add a lot to normal remote presence, though.

This little robot is helping sick children attend school | CNN Business
For children with long-term illness or struggling with mental health issues, No Isolation developed the AV1 robot, which can take their place in the classroom and help them stay connected with their classmates.

Immersive connectedness

I have noticed before how Oura is an IoT device that keeps going without much notice. Lately, it has received some improvements and more attention.

The Oura Ring 4 has slimmer sensors, increased accuracy, and more sizes
Yes, it still has a subscription, but at least it hasn’t increased?

Check this if you have a Ray-Ban. It's nice for the next birthday.

Facial recognition data breach: Meta glasses extract info in real time
In what might be described as a real-life Black Mirror episode, a Harvard student uses facial recognition with $379 Meta Ray-Ban 2 smart sunglasses - to dig up personal data on every face he sees in real time.

Tech societies

Not so much tech societies as tech business models. In AI. Is OpenAI a healthy business?

OpenAI Is A Bad Business
OpenAI, a non-profit AI company that will lose anywhere from $4 billion to $5 billion this year, will at some point in the next six or so months convert into a for-profit AI company, at which point it will continue to lose money in exactly the same way. Shortly after…

Can you opt out of AI?

Ethan Mollick on AI in organizations. In a short post and longer NotebookLM podcast, as you do…

AI in organizations: Some tactics
Meet the Lab and the Crowd

“Your phone is a little libertarian buddy who is whispering in your ear yeah go for it, who’s gonna stop you, might is right.” Matt Webb (the same as above, indeed) makes a nice case that the phone should take responsibility, or, as we call this in the Netherlands, should have a “zorgplicht” (duty of care) to prevent, or at least call out, rude behavior. This might be a good topic to flesh out for ThingsCon 🙂

Ok hear me out iPhones should have a sense of shame
Posted on Thursday 3 Oct 2024. 739 words, no links. By Matt Webb.

A different voice on the impact of robotics on our jobs.

What if everyone is wrong about what AI does?
Both critics and supporters seem to think AI is a “human remover”. What if they’re both wrong?

What is the other end of software eating the world?

After software eats the world, what comes out the other end?
Malkovich. Malkovich. Malkovich?

Paper for the week

Strategic planning for degrowth: What, who, how

Degrowth is gaining traction as a viable alternative to mainstream approaches to sustainability. However, translating degrowth insights into concrete strategies of collective action remains a challenge. To address this challenge, this paper develops a degrowth perspective for strategic spatial planning as well as a strategic approach for degrowth.

Savini, F. (2024). Strategic planning for degrowth: What, who, how. Planning Theory, 0(0). https://doi.org/10.1177/14730952241258693

Looking forward

Looking forward to this evening's POM Live, which might have a link with today's Dutch release of Ethan Mollick's useful book Co-Intelligence.

And I am looking forward to the Society 5.0 festival, not only because I will host a Wijkbot workshop there.

Enjoy your week!

Buy Me a Coffee at ko-fi.com