Weeknotes 231; may the 4th be with you
GPT-4 is here. And much more AI-related news and events in this week's edition.
Hi! Happy AI y'all.
Another week full of AI news, steered of course by that big happening in Austin mentioned last week. It was a moment to announce new model versions (GPT-4, Baidu, Claude), integrations of AI into tools (Google, Office, LinkedIn), and language models (PaLM from Google), while Microsoft drew extra attention by laying off its Responsible AI team. GPT-4 was surely the most anticipated and discussed. The launch demo was impressive; both the more conceptual thinking capabilities and the visual understanding stole the show. It is now passing the bar exam with high scores, and creating a functioning website from a sketch on paper was equally impressive.
I decided to take the Plus subscription to start some conversations myself. I will do more later; I am specifically curious how well you can train it on existing concepts and develop a proper discussion that delivers new insights, ideally letting you reframe your thinking. This is one of the core applications we would use it for. My first quick conversation delivered more a feeling of lip service than a built-up critique, but I am sure I need to tune the prompting. It feels like talking to a chatbot with media training.
By fostering open discussions about the potential risks and benefits of GPT-4 in human-tech relations, and actively working to mitigate potential negative consequences, we can ensure that the technology is used responsibly and contributes positively to the human experience.
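For the curious: a minimal sketch of how such a steered conversation could look through the API, assuming the openai Python package. The system prompt and the topic are my own invention, not a tested recipe.

```python
# Minimal sketch, assuming the openai package and an API key.
# The system prompt tries to steer GPT-4 away from media-trained
# lip service and towards an actual critique; wording is invented.
import openai

openai.api_key = "sk-..."  # your own key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a critical design researcher. Challenge my "
                    "framing, name weak assumptions explicitly, and end "
                    "every answer with one alternative framing. No flattery."},
        {"role": "user",
         "content": "Critique the claim that generative AI will improve "
                    "human-tech relations."},
    ],
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

Whether that beats the media-trained default remains to be seen; the point is that the role and the rules of the conversation are themselves promptable.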
Lex also integrated GPT-4, one of the nicer incremental integrations. Watch this introduction of GPT-4 in their writing tool.
A powerful concept that is part of Lex and other writing tools is "reimagine". It is now also an image tool from Stability AI. “Stable Diffusion Reimagine does not recreate images driven by original input. Instead, Stable Diffusion Reimagine creates new images inspired by originals.”
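Reimagine itself is a hosted tool, but the underlying idea, generating new images from an image embedding rather than reconstructing the input, can be sketched with the open diffusers library. A community image-variations checkpoint serves as a stand-in here; it is not the Reimagine model itself.

```python
# Hedged sketch of the "reimagine" idea: variations inspired by an
# image, not a reconstruction of it. Uses the diffusers library with
# the lambdalabs image-variations checkpoint as a stand-in.
import torch
from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

pipe = StableDiffusionImageVariationPipeline.from_pretrained(
    "lambdalabs/sd-image-variations-diffusers", revision="v2.0"
)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# The input image is only used for its CLIP embedding, which
# conditions the generation; no pixel-level copying takes place.
original = Image.open("original.jpg").convert("RGB")
out = pipe(original, guidance_scale=3.0, num_images_per_prompt=3)
for i, im in enumerate(out.images):
    im.save(f"reimagined_{i}.jpg")
```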
Events
Before diving into all the other (AI-)news of last week, some events in the coming week and beyond that might be interesting:
- Mozfest started Monday; I mentioned it last week. I hope to attend some sessions or watch some later. Trustworthy AI innovation in a worldly context is an important theme. All online.
- Design with AI, also the hot theme for Amsterdam UX, this time at Argo Design, 22 March in Amsterdam
- AI through an occult lens, the Hmm, Wednesday 22 March, Arnhem and online
- Kick-off City Net Zero, Amsterdam 21 March
- IoT London, online 21 March
- Demodag Amsterdam Smart Cities, 23 March
- Responsible AI & FinTech, 28 March
And we announced a third speaker for the ThingsCon Salon on Listening Things at STRP; Joep Frens will reflect in particular on the insights from student explorations in the IoT Sandbox.
News that prompted attention
On to the news of last week. Or rather, continuing it. Collecting my captured links shows how hectic it was. I will make a selection…
First, an overview of GPT-4
As always, there are numerous additions to the AI critique.
Gary Marcus wrote on GPT-4, its successes and failures. “How GPT-4 fits into the larger tapestry of the quest for artificial general intelligence”
Last week I reported on the talk by James Bridle on other intelligences. In The Guardian, he published a long read on the relationship between AI and culture. “This is a huge shift. AI is now engaging with the underlying experience of feeling, emotion and mood, and this will allow it to shape and influence the world at ever deeper and more persuasive levels.” “Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous.”
More Bridle in this interview with Claire Evans, in case you cannot get enough.
An advance publication of a paper by Rita Raley and Jennifer Rhee discusses Critical AI: A Field in Formation.
Asking questions and getting answers is not hard; validating the correctness is a different skill.
Kevin Roose got a lot of attention with his Bing/Sydney conversations just a couple of weeks ago. He thinks GPT-4 is exciting and scary at the same time, bringing risks we cannot anticipate.
Robin Sloan: “We are living and thinking together in an interesting time. My recommendation is to avoid chasing the ball of AI around the field, always a step behind. Instead, set your stance a little wider and form a question that actually matters to you.”
Nir Eisikovits: AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it
Jon Evans on the role of language and the chance of having these models: it seems very likely that language will be key, and that modern LLMs, though they'll seem almost comically crude in even five years, are a historically important technology. “Language is our latent space, and that's what gives it its unreasonable power.”
Nathan Baschez sees LLMs as the new CPUs.
Matt Webb on AI in a loop: it's not self-replication that we should be looking at, it's self-evolution.
Dan Shipper sees GPT-4 growing into a copilot for the mind.
The openness of OpenAI is a different matter with GPT-4.
And how about those other tools? Did they lose the AI race?
Some applications: Nabla is a digital health startup with a copilot. Be My Eyes is using it for the visually impaired. VALL-E is doing voice cloning. Midjourney is introducing a magazine.
And PwC is introducing legal business solutions, which seems to make sense.
Reid Hoffman had early access and used that advantage to be the first to co-author a book with GPT-4.
With all that AI happening, robots are less in the news, or hidden. Disney introduced its humanoid at SXSW, with some smart tricks.
The academic conference on HRI (human-robot interaction) is a rich source of new research. Check the work of the DEI4EAI project in this thread, and see the tweets of @mlucelupetti for some pointers.
A self-driving lab is using AI on another level: “Autonomous Discovery and Optimization of Multi-Step Chemistry using a Self-Driven Fluidic Lab Guided by Reinforcement Learning”
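The paper has the details, but the closed loop itself is easy to picture: propose conditions, run the experiment, score the outcome, update, repeat. The toy sketch below uses an epsilon-greedy bandit as a stand-in for the paper's reinforcement learning; the reaction and all numbers are invented.

```python
# Toy closed-loop "self-driving lab": an agent proposes reaction
# conditions, a simulated experiment returns a yield, and estimates
# are updated. Epsilon-greedy bandit, not the paper's actual method.
import random

temperatures = [40, 60, 80, 100]            # candidate conditions (invented)
estimates = {t: 0.0 for t in temperatures}  # running mean yield per condition
counts = {t: 0 for t in temperatures}

def run_experiment(temp):
    """Simulated experiment: peak yield around 80 C, plus noise."""
    return max(0.0, 1.0 - abs(temp - 80) / 100 + random.gauss(0, 0.05))

for step in range(200):
    if random.random() < 0.1:                    # explore a random condition
        temp = random.choice(temperatures)
    else:                                        # exploit the best estimate
        temp = max(estimates, key=estimates.get)
    reward = run_experiment(temp)
    counts[temp] += 1
    estimates[temp] += (reward - estimates[temp]) / counts[temp]

print("best condition:", max(estimates, key=estimates.get))
```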
And in other news, Zipline announced its newest delivery drone. Silent, precise… Read more here.
What will be our future post-automobile?
And what is the current state of Tesla’s full self-driving future?
And to close the captured news section, in other other news: climate change…
The IPCC came with alarming news today. The positive framing, though, of a report that makes for mostly very grim reading was a deliberate counterblast to the many voices that have said the world has little chance of limiting global heating to 1.5C above preindustrial levels, the threshold beyond which many of the impacts of the crisis will rapidly become irreversible.
And some specific consequences of changing climate in California.
Paper for this week
To stay on topic, Algorithmic Black Swans: From biased lending algorithms to chatbots that spew violent hate speech, AI systems already pose many risks to society. While policymakers have a responsibility to tackle pressing issues of algorithmic fairness, privacy, and accountability, they also have a responsibility to consider broader, longer-term risks from AI technologies.
Organizations building AI systems do not bear the costs of diffuse societal harms and have limited incentive to install adequate safeguards. Meanwhile, regulatory proposals such as the White House AI Bill of Rights and the European Union AI Act primarily target the immediate risks from AI, rather than broader, longer-term risks. To fill this governance gap, this Article offers a roadmap for “algorithmic preparedness” — a set of five forward-looking principles to guide the development of regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society.
Kolt, Noam, Algorithmic Black Swans (February 25, 2023). Washington University Law Review, Vol. 101, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4370566
Thanks for reading. I hope that next week will be a bit less packed...