Weeknotes 245: captured in the AI bubble
Hi all, happy July. The holiday season is about to kick in here, I think, for all who are tied to the school cycle.
Only a short look back at last week, as I attended just one event to report on: the kick-off of the AI Labs for SMEs at Utrecht University last Friday.
The AI Labs are the place for knowledge institutions, companies, public sector organisations, students, lecturers, and researchers to cooperate on strategic, societally-relevant research in the fields of AI and Data Science.
We had a stand at the network market to offer students possible assignments in developing Structural's AI platform and copilot. Good to see how many students passed by and how the challenges resonated with them.
Two things I watched and listened to triggered some thoughts. First, I finished Silo, and without spoiling anything, there is a link to the immersive synthetic reality we will be part of… If you have seen the series, you might agree. Speaking of synthetic realities, it was nice to have the short reflections I wrote earlier on the Vision Pro's angle on this published in UXmag.
This Nerdwriter edition also nicely shows what synthetic vision means, or can mean.
And a second listen: the latest Near Future Laboratory podcast featured Juliana Schneider as a guest on more-than-human-centered design. Not a new concept for readers of this newsletter, as it is an important concept in the academic discourse (e.g., the DCode project, More than human design), but apparently it lives more in a bubble than expected; at least it seemed rather new to Julian Bleecker…
Events for the coming week
Here the holidays are becoming real…
- This afternoon: the Critical City Making symposium in Amsterdam
- A broader festival on creativity and more in A-Lab: https://www.a-lab.nl/events/a-lab-festival
- UXcamp Amsterdam, this Saturday: https://uxcampams.com/
- The Future is Funghi, 11 July, online from Arizona
- Save the date: we announced a new ThingsCon Salon on Doing Ethics in Smart City Tech, which we organise together with the research project Human Values for Smarter Cities, in Rotterdam on 6 September.
Notions from the news
AI potentially breaking the internet was an important story this week. It turns out “that AI-generated websites with soulless and repetitious text are taking over search engine results, driving out human-written content and ad revenues.” The same goes for synthetic tweets.
“Furthermore, according to two new studies, using synthetic data generated by other AI systems is causing models to collapse, raising real risks for the web.”
Ted Underwood argues that large language models are a triumph for cultural theory, “the thesis that language is not an inert medium used by individuals to express their thoughts but a system that actively determines the contours of the thinkable.”
https://critinq.wordpress.com/2023/06/29/the-empirical-triumph-of-theory/
Writing as an academic practice in the time of Generative AI.
Human-AI collaboration is still too early to trust completely.
Things now on speaking terms with the human
Spatial computing in the context of other technologies.
We need a new form of humanism: “In "Post-Anthropocene Humanism," philosopher Giorgio Agamben explores the relationship between humans and technology and how it affects our definition of being human.”
A conditional chain reaction as inspiration for combining AI tools, it seems.
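The idea of a conditional chain reaction of tools could be sketched roughly like this: each tool only fires if a condition on the previous output holds. This is a minimal illustrative sketch, not any specific product's API; the tool functions and predicates are hypothetical placeholders.

```python
# Hypothetical sketch: a conditional chain of AI tools, where each step
# runs only if a predicate on the current text passes. The "tools" here
# are stand-in functions, not real AI calls.

def summarize(text: str) -> str:
    # Placeholder for an AI summarization tool: keep the first 40 chars.
    return text[:40]

def translate(text: str) -> str:
    # Placeholder for an AI translation tool: uppercase as a stand-in.
    return text.upper()

def run_chain(text: str, steps) -> str:
    """Apply each (condition, tool) pair in order; stop at the first
    condition that fails, like a chain reaction that fizzles out."""
    for condition, tool in steps:
        if not condition(text):
            break
        text = tool(text)
    return text

steps = [
    (lambda t: len(t) > 20, summarize),  # only summarize long inputs
    (lambda t: t.isascii(), translate),  # only "translate" ASCII text
]

result = run_chain("a fairly long input text that needs summarizing", steps)
```

The point of the pattern is that the conditions, not a fixed pipeline, decide how far the chain propagates.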
A bit shorter (about half the length) than last week's conversation with Lex Fridman, Ben Thompson also did an interview with Marc Andreessen. As an antidote to the “AI doomer movement”: “Andreessen believes that AI has the potential to be the most important technological advance since fire if it is used for compounding human intelligence rather than replacing it.”
EU companies are unhappy with the proposed AI regulations and are (very hip nowadays) signing an open letter.
There is also discussion on AI rules in Japan. Making good regulation is not so easy.
Is Meta really becoming a proponent of responsible AI, or is it just a good fit with the market sentiment?
A lot has been said about the impact of the new AI summer on work. Benedict Evans wrote an analysis: “ChatGPT and generative AI will change how we work, but how different is this to all the other waves of automation of the last 200 years? What does it mean for employment? Disruption? Coal consumption?”
How to do digital identity.
Let AI help you explain your weird dreams.
A typical deep exploration by Venkatesh Rao: the magic mundane, deep protocolization.
Looking at nature for inspiration in designing tools and beyond is a proven strategy:
More on more-than-human. “Microbes aren't as charismatic as megafauna, but they outnumber humans—and their welfare deserves consideration”
A new robot on the road.
Is there indeed a positive vibe?
Paper for the week
I hope we are not doing what this week's paper describes: “Inducing anxiety in large language models increases exploration and bias”.
“We propose to turn the lens of computational psychiatry, a framework used to computationally describe and modify aberrant behavior, to the outputs produced by these models. We focus on the Generative Pre-Trained Transformer 3.5 and subject it to tasks commonly studied in psychiatry.”
Coda-Forno, J., Witte, K., Jagadish, A. K., Binz, M., Akata, Z., & Schulz, E. (2023). Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111.
See you next week!
This newsletter will not break for the summer… :-)