Weeknotes 232; the AGI gap
Weeknotes 232; the AGI gap. "it is more likely that human and machine intelligence will widen the gap than become similar." Found news, events and paper of the week.
Hi! The GPT-4 wave continues.
Still, GPT-4 is the talk of the town, and people are starting to play with it, finding out what its capabilities are. I used it for the first time as a coding tool; I never did with GPT-3, but it worked quite well, both as a prototyping tool and as a way to tinker and visualise thoughts. While designing the most complex interface of the platform we are building with Structural, it helped to demo some interfaces: with a paragraph of description and 15 minutes of thinking and iterating, I had something that fulfilled that need. Nothing of it will ever be used as-is, but as a think piece, it works very well.
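For illustration, that prototyping loop can be as simple as a single API call. Here is a minimal sketch with the OpenAI Python library as it was at the time of writing; the description, prompt, and file name are illustrative assumptions, not the actual ones I used:

```python
# Sketch: turning a paragraph of interface description into a throwaway
# HTML/JS prototype with GPT-4. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set in the environment; the prompt is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

description = (
    "A dashboard where community members see open agreements, their "
    "status, and a button to propose a new agreement."  # hypothetical example
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You generate small, self-contained HTML/JS prototypes."},
        {"role": "user",
         "content": f"Build a clickable prototype for: {description}"},
    ],
)

# Save the generated markup and open it in a browser; tweak the
# description and rerun to iterate, much like sketching on paper.
with open("prototype.html", "w") as f:
    f.write(response.choices[0].message.content)
```

The point is not the code itself but the loop: describe, generate, look, adjust the description, and generate again.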
There are numerous podcasts with Sam Altman of OpenAI on GPT-4 and beyond. On with Kara Swisher was rather good and compact, and I am still listening to the always lengthy podcast with Lex Fridman.
The one with Benedict Evans I still need to hear, but reading his column this week (in the paid newsletter) on the hype cycle of GenML triggered some thoughts:
That is how Moore’s Law has applied to computing - just hit it with more compute! - and it’s one way that machine learning has evolved - just hit it with more data and bigger models.
The question is whether this will lead to AGI (Artificial General Intelligence) and whether that intelligence is, after all, the same as human intelligence. Undoubtedly, we as humans will be outperformed on certain tasks. Still, without choosing a different strategy for increasing the capabilities of AI, it is more likely that human and machine intelligence will widen the gap than become similar. That is not a bad thing; let us stand on each other's shoulders rather than replace one with the other.
Below, many more articles and opinions shared last week, and a view of other topics too.
Events
I attended an evening meetup last week organised by Amsterdam UX at argodesign. Guus Baggermans, who also presented at the summer edition of ThingsCon last year, gave a solid introduction to the role of (visual) AI in the work of UX and brand designers. After the talk, we were invited to play with some tools and build collaborative pieces in Invoke.ai, based on the idea of creating a hidden zoomed-out version of images.
I also attended part of the demo day of Amsterdam Smart Cities, specifically the break-out session on Mobility as a Common. The presenters from the municipality made very clear how they work on shaping opportunities for new forms of mobility initiated by citizens and commons. On the other hand, they are still struggling with loosening control and placing real trust in the intentions of citizens beyond their own interests. It relates to the work on building trust in agreements through the promises architecture of the Structural language, as well as to the goals we have with the MUC AMS Cities of Things fieldlab Collect|Connect Community Hub. Useful to attend.
Events for the coming week
- Responsible AI meetup, 28 March, Utrecht
- Interaction 23 Redux, 29 March, Online
- Future Image Making, 5 April, Amsterdam
- ThingsCon Salon Listening Things, 14 April, Eindhoven
Found news from last week
As promised last week, I will try to be more selective to limit the number of articles. News on the Twitter source code leak is all over the internet already, and so is Bill Gates's opinion on AI.
Let’s start with a round of new AI upgrades: Adobe (Firefly), Levi’s (to increase diversity), Wolfram Alpha (the other way around, as a plugin in ChatGPT), and the internet (or vice versa).
And some tips on prompt-writing for GPT-4.
Google opens up Bard to the world. The Verge is not that impressed.
In a direct comparison of ChatGPT, Bing, and Bard later in the week, that impression does not change much. “And if you are shorting Google’s stock and want to reassure yourself you’ve made the right choice, try Bard.” In the meantime, ChatGPT is opening up to the web.
ChatGPT becomes even more valuable with the new plugins and the connection to the internet, becoming even more 'app store'-like.
The Two Minute Papers channel is, of course, also keeping track of the latest.
How does GenAI fit into the IoT? Something we will probably discuss at the ThingsCon Salon. Are we entering the third level (The Creat0r) of products as agents (see this paper)?
Is AI art protected?
And should you be able to be free from automation?
The critique of Gary Marcus, especially on the lack of openness of OpenAI with this release.
Cybersecurity and data poisoning.
And will Web3 be more about AI than community-driven governance?
Triggering conflicting emotions in tech.
Can Microsoft ruin it all?
In other news, regarding autonomous driving, China is taking steps:
Also interesting is how they found a way to speed up testing: creating simulated terrible drivers.
NVIDIA is catering to robots, making them easier to use.
When the Cities of Things research started at TU Delft back in 2017, we were looking at possibilities for cleaning polluted air via moving objects. That is now becoming more realistic.
Rumour has it that Apple's mixed reality headset has had an internal release.
How risky is the rapidly growing dependence on FinTech?
And don't forget the Climate Crisis…
Paper of the week
I was unsure whether I should also feature a paper on GPT-4 here, but OK...
Sparks of Artificial General Intelligence: Early experiments with GPT-4
“In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models.”
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., ... & Zhang, Y. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
https://doi.org/10.48550/arXiv.2303.12712