Weeknotes 231; may the 4th be with you

GPT-4 is here. And much more AI-related news and events in these weeknotes.

a humanoid using a lightsaber to read a book on AI in a dark space - Midjourney

Hi! Happy AI y'all.

Another week full of AI news. Steered, of course, by that big happening in Austin mentioned last week, it was the moment to announce new versions (GPT-4, Baidu, Claude) and integrations of AI into tools (Google, Office, LinkedIn) and language models (PaLM from Google). Meanwhile, Microsoft is getting extra attention for laying off its Responsible AI team. GPT-4 was surely the most anticipated and discussed. OpenAI gave an impressive demo in which both the more conceptual reasoning capabilities and the visual understanding stole the show. It is now passing the bar exam with high scores, and creating a functioning website from a sketch on paper was impressive.

I decided to take the Plus subscription to start some conversations myself. I will do more later; I am specifically curious how well you can train it with existing concepts and develop a proper discussion that delivers new insights, ideally letting you reframe your thinking. This is one of the core applications we would use. My first quick conversation still delivered more a feeling of lip service than a built-up critique, but I am sure I need to tune the prompting. It feels like talking to a chatbot with media training.

By fostering open discussions about the potential risks and benefits of GPT-4 in human-tech relations, and actively working to mitigate potential negative consequences, we can ensure that the technology is used responsibly and contributes positively to the human experience.

Lex also integrated GPT-4, one of the nicer incremental integrations. Watch this introduction of GPT-4 in their writing tool.

A powerful concept that is part of Lex and other writing tools is "reimagine". It is now also an image tool from Stability AI: “Stable Diffusion Reimagine does not recreate images driven by original input. Instead, Stable Diffusion Reimagine creates new images inspired by originals.”

Stable Diffusion Reimagine — Stability AI
Stable Diffusion Reimagine is a new Clipdrop tool that allows users to generate multiple variations of a single image without limits. No need for complex prompts: Users can simply upload an image into the algorithm to create as many variations as they want.

Events

Before diving into all the other (AI-)news of last week, some events in the coming week and beyond that might be interesting:

And we announced a third speaker for ThingsCon Salon on Listening Things at STRP; Joep Frens will especially reflect on the insights from student explorations in the IOT Sandbox.

News that prompted attention

On to the news of last week. Or, better said, continuing it. Collecting my captured links shows how hectic it was. I will make a selection…

First, an overview of GPT-4

OpenAI’s GPT-4 exhibits “human-level performance” on professional benchmarks
Multimodal AI model can process images and text, pass bar exams.

As always, there are numerous additions to the AI critique.

Gary Marcus wrote on GPT-4, its successes and failures. “How GPT-4 fits into the larger tapestry of the quest for artificial general intelligence”

GPT-4’s successes, and GPT-4’s failures
How GPT-4 fits into the larger tapestry of the quest for artificial general intelligence

Last week I reported on the talk by James Bridle on other intelligences. In The Guardian, he published a long read on the relationship between AI and culture. “This is a huge shift. AI is now engaging with the underlying experience of feeling, emotion and mood, and this will allow it to shape and influence the world at ever deeper and more persuasive levels.” And: “Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous.”

The stupidity of AI
The long read: Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous

More Bridle in this interview with Claire Evans, in case you cannot get enough.

There’s Nothing Unnatural About a Computer
James Bridle’s Ways of Being wants us to take a fresh look at nature’s intelligence.

An advance publication of a paper by Rita Raley and Jennifer Rhee discusses Critical AI: A Field in Formation.

Critical AI: A Field in Formation | American Literature | Duke University Press

Asking questions and getting answers is not hard; validating the correctness is a different skill.

Getting the Right Answer from ChatGPT
How do you know that ChatGPT isn’t lying?

Kevin Roose got lots of attention with his Bing/Sydney conversations just a couple of weeks ago. He thinks GPT-4 is exciting and scary at the same time; it brings risks we cannot anticipate.

Robin Sloan: “We are living and thinking together in an interesting time. My recommendation is to avoid chasing the ball of AI around the field, always a step behind. Instead, set your stance a little wider and form a question that actually matters to you.”

Phase change
Protocols and plain language.

Nir Eisikovits: AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it

AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it
Our tendency to view machines as people and become attached to them points to real risks of psychological entanglement with AI technology.

Jon Evans on the role of language in these models: “it seems very likely that language will be key and that modern LLMs, though they'll seem almost comically crude in even five years, are a historically important technology. Language is our latent space, and that's what gives it its unreasonable power.”

Language is our latent space
And latent space is Plato’s Cave

Nathan Baschez sees LLMs as the new CPUs.

LLMs are the new CPUs
…but is OpenAI the new Intel?

Matt Webb on AI in a loop: “It’s not self-replication that we should be looking at. It’s self-evolution.”

The surprising ease and effectiveness of AI in a loop
Posted on Thursday 16 Mar 2023. 1,918 words, 21 links. By Matt Webb.

Dan Shipper sees GPT-4 growing into a copilot for the mind.

GPT-4: A Copilot for the Mind
Never forget what you read again

The openness of OpenAI is a different matter with GPT-4.

OpenAI co-founder on company’s past approach to openly sharing research: “We were wrong”
Should AI research be open or closed? Experts disagree.

And how about those other tools? Did they lose the AI race?

Some applications: Nabla is a digital health startup with a copilot. Be My Eyes assists the visually impaired. VALL-E is doing voice cloning. Midjourney is introducing a magazine.

7 creative ways people are already using GPT-4
Users are already demonstrating the wide array of uses the new AI tool may have.

And PwC is introducing legal business solutions, which seems to make sense.

PwC announces strategic alliance with Harvey, positioning PwC’s Legal Business Solutions at the forefront of legal generative AI
Today, PwC announced a global partnership with artificial intelligence (AI) startup Harvey, providing PwC’s Legal Business Solutions professionals exclusive access (among the Big 4) to the game-changing AI platform.

Reid Hoffman had early access and took that advantage to be the first to co-author a book with GPT-4.

Reid Hoffman on LinkedIn: I wrote a new book with OpenAI’s latest, most powerful large language model
I wrote a new book with OpenAI’s latest, most powerful large language model. It’s called Impromptu: Amplifying our Humanity through AI.

With all that AI happening, robots are less in the news, or hidden. Disney introduced their humanoid at SXSW, with some smart tricks.

The academic conference on HRI (human-robot interaction) is a rich source of new research. Check the work of the DEI4EAI project in this thread. And check the tweets of @mlucelupetti for some pointers.

A self-driving lab is using AI on another level: “Autonomous Discovery and Optimization of Multi-Step Chemistry using a Self-Driven Fluidic Lab Guided by Reinforcement Learning”

Self-Driven Laboratory, AlphaFlow, Speeds Chemical Discovery
The system has already found a more efficient (and previously unheard of) way to produce high-quality semiconductor nanocrystals.

And in other news, the newest Zipline drone delivery was announced. Silent, precise… Read more here.

What will be our future post-automobile?

The Free Street Manifesto Is a Guide for a Post-Automobile Future
Cars have dominated the streetscape for so long that it’s hard to imagine a future without them — but that’s exactly what The Free Street Manifesto does. With images, comics, historical information and scientific insights, the book tries to convince readers of the importance of so-called ’free stree…

And what is the current state of Tesla’s full self-driving future?

And to close the captured news section: in other other news, climate change…

The IPCC came with alarming news today. The positive framing of a report that otherwise makes for very grim reading was a deliberate counterblast to the many voices saying the world has little chance of limiting global heating to 1.5C above preindustrial levels, the threshold beyond which many of the impacts of the crisis will rapidly become irreversible.

World can still avoid worst of climate collapse with genuine change, IPCC says
Positive framing of otherwise grim report a counterblast to those who dismiss hopes of limiting global heating to 1.5C

And some specific consequences of changing climate in California.

Why rain on snow in the California mountains worries scientists
Another round of powerful atmospheric rivers is hitting California, following storms in January and February 2023 that dumped record amounts of snow. This time, the storms are warmer, and they are triggering flood warnings as they bring rain higher into the mountains—on top of the snowpack.

Paper for this week

To stay on topic, this week’s paper is Algorithmic Black Swans. From biased lending algorithms to chatbots that spew violent hate speech, AI systems already pose many risks to society. While policymakers have a responsibility to tackle pressing issues of algorithmic fairness, privacy, and accountability, they also have a responsibility to consider broader, longer-term risks from AI technologies.

Organizations building AI systems do not bear the costs of diffuse societal harms and have limited incentive to install adequate safeguards. Meanwhile, regulatory proposals such as the White House AI Bill of Rights and the European Union AI Act primarily target the immediate risks from AI, rather than broader, longer-term risks. To fill this governance gap, this Article offers a roadmap for “algorithmic preparedness” — a set of five forward-looking principles to guide the development of regulations that confront the prospect of algorithmic black swans and mitigate the harms they pose to society.

Kolt, Noam, Algorithmic Black Swans (February 25, 2023). Washington University Law Review, Vol. 101, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4370566


Thanks for reading. I hope that next week will be a bit less packed...

“If I Had More Time, I Would Have Written a Shorter Letter”
“If I Had More Time, I Would Have Written a Shorter Letter” is a famously misattributed quote that highlights the importance of brevity and editing in writing.

See you next week!