Weeknotes 300 - sensors impersonating cameras to recreate reality

Reflecting on talks about “corporate design thinking” with AI tools, and on sensors impersonating cameras to capture new realities. Plus notions from the news and a paper for the week.

(Header image: interpretation of thoughts by Lex & Midjourney)

Hi y’all!

If you are reading this now, you are probably not on holiday (yet); you might be enjoying a somewhat quiet environment and, finally, some pleasant summer weather (in the Netherlands). There are still some things to do, like last Wednesday, when I attended a meetup organized by SDN (Service Design Network), themed around the impact of AI (tooling) on the work of service designers. I saw three speakers (I had to skip the last one), all focused on the helpful role of AI in creating customer journeys and personas with the usual AI tooling. Let me share some impressions.

Deloitte Digital created a set of prompts and databases for this. The customer journey they showed was visually rich but also felt a bit synthetic. It made me wonder how much your view of the human actors might be distorted, especially if you widen your research population by interviewing synthetic respondents. AI as inspiration can work, sharpening your questions and even challenging your assumptions. It all depends on how you embed it in the total representation of the customer journeys.

The second speaker, from Arcadis Innovation, coined the term “AI-lchemy” (…). Maybe not entirely as intended, but I think there is an interesting take in the concept of alchemy: it was a kind of occult precursor to chemistry in early medieval times. Is that the same phase we are in now, using AI in design?

Tanishqa Bobde shared how she is looking for the right mix of human and AI elements. I like to attend these types of events because they give good insight into how others are approaching these concepts.

She is looking into a planet-positive future, which is what Arcadis strives for. This triggered me to think that now that Design Thinking has become a consultancy skill instead of a design skill, we can expect that co-design will be next.

Executing the human-AI mix is, for now, rather first-level tooling: context research, iterating on outcomes, and PESTLE analysis. We are still far from thinking about human-AI co-performances.

She is also realistic in noting that the outcomes are never as in-depth as with humans. Her takeaways are: use targeted prompts, see AI outputs as preliminary knowledge, and cast your critical human eye.

The last speaker (for me) was Serena Westra, a business designer at IKEA. She was not speaking about IKEA but about an initiative called AI-by-design, an approach that aims to embed AI in the double-diamond process. Inspired by CRISP-DM, she explained how she is building feedback loops for AI. She believes in the role of AI as a “bad intern,” as Kevin Kelly coined it (I'm not sure he sees it as a bad intern).

Listening to her presentation on the exchange between HCD (human-centered design) and AI, it felt, interestingly, like a kind of AI-centered design packaged as human-to-AI-centered design, as if we optimize the things we design to feed the AI best. It was probably not meant exactly like this, but it was what the talk signaled to me.

The conclusion from these talks was that AI is embraced in “corporate design thinking,” but it can result in design research performed less with humans. More positively framed, it can also be seen as a way to multiply human insights via synthetic enhancements. This can work, but you need to be very careful not to get carried away with the possibilities and force-fit humans into synthetic-shaped contexts…

Triggered thought

I did not intend to, but that little report on the meetup triggered more thoughts than “just” reporting. I was also thinking about some other things while listening to The Verge podcast on Friday.

The hosts discussed the state of AI photography and the need for watermarking: what is real in photography, as a photo is always a representation, a crafting of a better self. Samsung is a master at bending reality, and now Apple is also entering the field, as expected. This opens up possibilities for new, differentiating propositions. Should you fill in the background based on suggestions, or keep a blurry, unclear item in the background unclear?

I wondered if there is a difference in AI-enhancing and synthesizing reality between people and places, humans and objects. We are by now used to our phone “cameras” creating a synthesized version of ourselves and others in pictures. With places and things, we might expect, and like, more reality. However, we are increasingly able to distort and clean up the context. In that sense, is the thing we capture not the reality we want to save for later memories, but a staging of the scripted play we are part of? The play of our perceived life at that moment.

In the end, cameras are not used primarily to capture reality. Cameras are more like sensors that capture enough data to produce a believable and idealized representation of reality.

This is all amplified by the pressure of social peers and technical FOMO (fear of missing out; we fear not using the technical capabilities provided).

On the other hand, there is a counter-movement. Early generations of digital cameras seem to be becoming popular with Gen Z and younger. Force yourself to capture reality not as real as possible but as honest as possible. Know the limits of the technology. See the flash show up as a flashed image in your picture. And by using a non-connected device, you distance yourself from oversharing. It builds in a barrier, a more conscious selection you have to make when you look at the pictures later, importing them onto your computer.

It is an interesting development to use analog-feeling digital devices. I'm unsure if it is a temporary hype or a fork in the use of digital technology. To prepare, I dug up my old, tiny Canon Ixus 40.

I make one final connecting leap here. There was an interview with professor Gusz Eiben in de Volkskrant on the missing link in ChatGPT: a body. He thinks ChatGPT is too focused on the “brain” angle of intelligence. However, intelligence is also very embodied; we learn through physical encounters. This is part of the theme and questions at this year’s TH/NGS 2024 on Generative Things. I could not help but connect it to the notions above. Is that “old” digital now part of better understanding our relation with our feelings, the tangible reality?


For the subscribers or first-time readers (welcome!), thanks for joining! A short general intro: I am Iskander Smit, educated as an industrial design engineer, and I have worked in digital technology all my life, with a particular interest in digital-physical interactions and a focus on human-tech intelligence co-performance. I like to (critically) explore the near future in the context of cities of things, and I organise ThingsCon. I call Target_is_New my practice for making sense of unpredictable futures in human-AI partnerships. That is the lens I use to capture interesting news and share a paper every week.


Notions from the news

The EU AI Act is now active, just as there are signs of a new AI winter, and plenty of opinions.

AMD is following Nvidia in focusing on AI chips. Makes sense; better late than never. Others might be too late.

And there is a consolidation of AI companies and apps.

Human-AI partnerships

Okay, we have a new image-generating tool: Flux.1. It is free and promises good results, even with text. Check it out, or watch the Two Minute Papers video.

Black Forest Labs - Frontier AI Lab
Amazing AI models from the Black Forest.

I am not planning to use this for this newsletter - SEO is not my main driver. I’m curious how this would influence the landscape of blogposts; will it become all the same, or will Wix play divide and conquer between the blogposts it serves?

Wix’s AI will now write whole blog posts for you
You can even choose SEO keywords to litter throughout.

After a rather long delay, the new voice system of OpenAI seems to be beyond the uncanny valley for chatbots. See also this video of Dan Shipper testing the new voice.

ChatGPT Advanced Voice Mode impresses testers with sound effects, catching its breath
AVM allows uncanny real-time voice conversations with ChatGPT that you can interrupt.

Apple has been rolling out the first version of Apple Intelligence in dev mode. Slightly smarter.

A first look at Apple Intelligence and its (slightly) smarter Siri
Siri’s big glow-up is here.

AI as a copilot or as an agent: different strategies that can become part of existing routines but need to be valued as significantly different concepts.

On speaking to AI
Voice changes a lot of things

GPT-4o changed the use of AI

GPT-4o-mini changed ChatBotArena
And how to understand Llama 3.1’s results on the community’s favorite benchmark.

Can an AI companion that you wear as a device to talk to make you less lonely? Or might it create more distance?

Can an AI friend make you less lonely?
Meet Friend: a ‘Tamagotchi with a soul’, wearable AI companion that records your interactions and texts back

And some tech bro behavior updates. OpenAI.

The reverse centaur is someone harnessed to the machine. Need to chew on this one.

Pluralistic: The reverse-centaur apocalypse is upon us (02 Aug 2024) – Pluralistic: Daily links from Cory Doctorow

An interesting comparison between AI and Excel, in how we deal with learning them.

ChatGPT Is the New Excel
Plus: Make a website with just two words

Addictive intelligence, the new way of interacting?

We need to prepare for ‘addictive intelligence’
The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people.

Robotic performances

Teeth and intelligent machines are a theme. See below, too.

Perceptive says AI-driven robot is faster than a human dentist - The Robot Report
Perceptive has developed and demonstrated a robot that uses imaging and AI for dental procedures such as crown placement.

Robotic machines that can create new shapes.

‘robotic’ machine can knit solid furniture and 3D accessories using elastic cord or yarn
researchers at carnegie mellon university develop a machine for solid knitting, a technique that weaves objects using elastic cord or yarn.

For the drone enthusiasts.

The HoverAir X1 is the first drone I want to use all the time
A selfie drone that keeps it simple, stupid.

The economic impact of autonomous vehicles (in the US).

Coalition to shed light on economic impact of autonomous vehicles - The Robot Report
The U.S. AV Jobs Coalition will share resources, statistics, and initiatives that show the workforce impacts of AVs on the U.S. economy.

Can AI chatbots bring common sense to driverless cars?

Driverless cars still lack common sense. AI chatbot technology could be the answer
Chatbots are much better than traditional driverless car technology at dealing with complex, previously unknown scenarios.

Immersive connectedness

In the domain of route-planning apps vs. location finders vs. traffic mastering, subtle shifts are happening, now with Google and Waze trying to become more relevant in both directions. I wonder if we will get services built on the same core ingredients that spread out depending on context: in-car, on the move, planning at home, etc. It might all change when the intelligence engine becomes the new playmaker, mixing different services depending on those needs.

Google Maps is getting even more like Waze
Maps gets easier incident reporting, while Waze gets more camera alerts.

The AI unbundling, so to say…

The Great AI Unbundling
Why ChatGPT and Claude will spawn the next wave of startups

Facepalm. We will see a new category of AI that is the successor of Sneaky IoT: Sneaking AI. And silly implementations.

“AI toothbrushes” are coming for your teeth—and your data
App-connected toothbrushes bring new privacy concerns to the bathroom.

Is this about human-AI partnerships or new AI-driven ecommerce?

Chrome is going to use AI to help you compare products from across your tabs
Chrome will be able to make a handy comparison table.

Redefining Solar Punk fashion

EcoFlow’s Power Hat is a floppy, phone-charging solar panel for your noggin
It’s absolutely brimming with solar panels.

Tech societies

This is the future. Or no, the now: reality is defined by the levers of Meta and the like. At least the perceived reality. And more on Meta’s AI plans for the future, or its future planning for AI.

Meta addresses AI hallucination as chatbot says Trump shooting didn’t happen
Meta “programmed it to simply not answer questions,” but it did anyway.
Meta’s future is AI, AI, and more AI
Zuckerberg: ‘At this point, I’d rather risk building capacity before it is needed rather than too late.’

Are you expecting that LLMs develop into AGI (Artificial General Intelligence)? Not everyone does.

LLMs are a dead end to AGI, says François Chollet
AI researcher François Chollet thought we needed a better way to measure progress on the path to AGI — so he made one.

Paper for the week

To extend on the embodied intelligence mentioned earlier, this paper from the longlist.

PaLM-E: An Embodied Multimodal Language Model

Large language models excel at a wide range of complex tasks. However, enabling general inference in the real world, e.g., for robotics problems, raises the challenge of grounding. We propose embodied language models to directly incorporate real-world continuous sensor modalities into language models and thereby establish the link between words and percepts.

Driess, D., Xia, F., Sajjadi, M. S., Lynch, C., Chowdhery, A., Ichter, B., ... & Florence, P. (2023). PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378.
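For a more concrete picture of what “incorporating continuous sensor modalities into language models” can look like, here is a minimal, hypothetical sketch of the core idea (not the authors' code; the module name, dimensions, and use of PyTorch are my assumptions): sensor features are projected into the same embedding space as language tokens and fed to the LLM alongside the text, so percepts can be “read” as if they were words.

```python
# Minimal sketch of the PaLM-E-style idea: project continuous sensor
# observations into the language model's token embedding space and
# interleave them with text token embeddings. Names and sizes are illustrative.

import torch
import torch.nn as nn

class MultimodalPrefix(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, sensor_dim=2048):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)  # language tokens
        self.sensor_proj = nn.Linear(sensor_dim, d_model)     # maps e.g. vision features to "soft tokens"

    def forward(self, text_ids: torch.Tensor, sensor_feats: torch.Tensor) -> torch.Tensor:
        # text_ids: (batch, seq_len) token ids
        # sensor_feats: (batch, n_patches, sensor_dim) continuous observations
        text_emb = self.token_embed(text_ids)       # (batch, seq_len, d_model)
        sensor_emb = self.sensor_proj(sensor_feats) # (batch, n_patches, d_model)
        # Prepend the sensor "words" to the text; a full model would interleave
        # them at the positions where the prompt refers to the observation.
        return torch.cat([sensor_emb, text_emb], dim=1)  # fed into the LLM transformer

# Toy usage: 4 patches of sensor features plus a 6-token instruction.
model = MultimodalPrefix()
seq = model(torch.randint(0, 32000, (1, 6)), torch.randn(1, 4, 2048))
print(seq.shape)  # torch.Size([1, 10, 512])
```

The point of the sketch is only the grounding mechanism: words and percepts end up in one shared sequence, which is what lets an embodied model link language to what its sensors register.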

Looking forward

Thanks again for reading. Looking forward to meeting an old colleague, making some proposals more concrete, and browsing the proposals for next year’s SXSW (you might check out this one :-).

Enjoy your week!

Buy Me a Coffee at ko-fi.com