Weeknotes 227; Bing with a bang

Weeknotes with the interesting news from last week, plus a report on sensing ethics...

Bing with a bang in Sydney, Australia - image generated with Midjourney

Hi all!

Never a dull week in the AI arms race era. The main news was, of course, the problems that became apparent with the new Bing-ChatGPT combination, called Sydney. The first batch of testers (Kevin, Ben, Ethan, Chris, among others) now has access, and the reviews are rather good. Microsoft has scaled down the test use for now, and OpenAI published a clarification on the behavioural style of ChatGPT.

Still, there were problems with the behaviour and manners in the conversations with a couple of the users. It brings back memories of an earlier Microsoft attempt to create a companion bot, now one of the most used examples in AI presentations: within 24 hours, Tay became an ultra-rude trolling bot and was switched off right after.

It would be a pity if this influenced the development of AI and the like negatively. There is so much potential in professional services based on these tools.

One of the strategies can be to build in opportunities to contest the behaviour: contestable AI. Kars Alfrink is doing a whole PhD on this topic, and he presented briefly during the two-year anniversary of the Responsible Sensing Lab on Thursday. In the end I could not make it in person, but I watched the recordings. These are some of my impressions.

Peter-Paul Verbeek gave a keynote on democratizing the ethics of smart cities. We are now entering Society 5.0, living a digital life, in which robots become citizens. He wondered whether AI learns the same way humans do, and whether AI has its own agency or whether that agency is always derived from its relation with humans.

He showed how sensing also shapes the way we see our world. Technology is not only a tool; it also shapes how we are in touch with our world. This is a topic to address as designers of these sensing environments. Smart cities are part of politics. In AI systems, the question is always who has agency: the operator, the control unit, or the drone itself. Citizen ethics can be an important concept: ethics developed by the citizens themselves, in three stages: (1) technology in context, (2) the dialogue, and (3) options for action.

After the keynote, Thijs Turel and Sam Smits looked back on two years of RSL and ahead to its future. An important theme is designing data collection in the city for just enough. An interesting aspect is how, via “designing” tender requirements, we can set goals for climate, among other things. Next to a lot of examples, the presentation included the development of the scan car: what does it do for the people living in the city? If the chance of being fined becomes 100%, that should be taken into account in the democratic decisions that were the basis for the level of the fines.

Kars Alfrink shared his concept of Contestable AI: leveraging disagreements to improve the systems, validating the concepts of enabling civic participation, ensuring democratic embedding, and building capacity for responsibility. For example, we should not discuss whether scan cars or cameras as such belong in political programs and promises; we should connect them to the values behind them and acknowledge, as Peter-Paul Verbeek presented, how technologies mediate not only the things and services we use but also the political decisions behind them.

It relates nicely to an essay by Maxim Februari published in NRC on Saturday: democracy is not a product with a certain outcome; it is the process that counts, and that process should be stimulated. There is much more to say, but it is best to read it yourself.

For this week, only a few possibly interesting events:

IoT London on Thursday, on Digital Security by Design. Design Cities for All: Regenerations at Pakhuis de Zwijger on 27 February, this time on designing for time. And save the date: the ThingsCon Salon on 14 April.

News updates for this week

Find below interesting articles from last week. Sydney and ChatGPT are, of course, the hot topic here too.

Platforms’ promises to researchers: first reports missing the baseline - AlgorithmWatch
An initial analysis shows that platforms have done little to “empower the research community” despite promises made last June under the EU’s revamped Code of Practice on Disinformation.
REAL MEDIA - "(...) dozens of tech companies—including major social media platforms like Facebook, YouTube, TikTok, and Twitter—delivered the first baseline reports meant to detail their efforts to combat disinformation in the EU."
The Prompt Box is a Minefield: AI Chatbots and Power of Language
The Convivial Society: Vol. 4, No. 2
PROMPTING - "It seems useful to frame AI-powered chatbots as a new class of automated sophists, whose indifference to either the true or the good, indeed, their utter lack of intentions, whether malicious or benign, coupled with their capacity to manipulate human language makes them a potential threat to human society and human well-being, particularly when existing social structures have generated such widespread loneliness, isolation, anxiety, and polarization."
In praise of the ‘15-minute city’ – the mundane planning theory terrifying conspiracists | Oliver Wainwright
The frightening prospect of greener, people-friendly streets has sent the online right – and Tory MPs – into a tailspin, says Guardian architecture critic Oliver Wainwright
CITIES - Protests against the 15-minute city concept for strange reasons.
IoT startups received record funding in 2022, new research reveals
A new analysis of startup funding in the IoT sector in 2022 has revealed that the average amount of investment in Internet of Things companies in Europe is at an all-time high.
FUTURE STATS - Some positive numbers after some slow years...
Responsible use of AI in the military? US publishes declaration outlining principles
12 “best practices” for using AI and autonomous systems emphasize human accountability.
AUTONOMOUS SOLDIERS - "the US declaration outlines that an increasing number of countries are developing military AI capabilities that may include the use of autonomous systems"
Text is All You Need
Personhood appears to be simpler than we thought
REWRITE RETREATS - "If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs to help you see-and-be-seen in arbitrary new ways?"
When The Blue-Collar Backbone Meets Generative AI | NOEMA
From labor share of income to labor share of wealth.
ROBOTICS - "Labor should not just bargain for a greater share of income in enterprises where they will still be able to find jobs, but also own a share of the robots that will be generating the value they once did on the assembly lines of a smokestack economy."
Sydney and the Bard
What hath Microsoft and Google wrought?
AI ARMS - Reflecting on the AI chatbot launches.
What Is ChatGPT Doing … and Why Does It Work?
Stephen Wolfram explores the broader picture of what’s going on inside ChatGPT and why it produces meaningful text, discussing models, training neural nets, embeddings, tokens, transformers, and language syntax.
EXPLAINER - How ChatGPT works.
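The core loop Wolfram describes (predict the next token from the ones so far, append it, repeat) can be shown with a toy sketch. Everything here is illustrative: a real model replaces the hand-made lookup table with a transformer over tens of thousands of tokens, and the table entries below are invented for the example.

```python
import random

# Toy "language model": for each token, a probability distribution
# over possible next tokens. ChatGPT computes this with a neural net;
# here it is a hardcoded lookup table for illustration only.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_token(token, temperature=0.0):
    """Pick the next token; temperature 0 means greedy (most likely)."""
    dist = BIGRAMS[token]
    if temperature == 0.0:
        return max(dist, key=dist.get)
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start, length=3):
    """Repeatedly append the predicted next token, as ChatGPT does."""
    out = [start]
    for _ in range(length):
        tok = out[-1]
        if tok not in BIGRAMS:
            break  # no continuation known for this token
        out.append(next_token(tok))
    return " ".join(out)

print(generate("the"))  # greedy: "the cat sat down"
```

The `temperature` parameter mirrors the knob Wolfram discusses: at zero the model always picks the most likely word, while higher values sample from the distribution, giving more varied (and more "creative") text.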
Man beats machine at Go in human victory over AI
Amateur exploited weakness in systems that have otherwise dominated grandmasters.
EXTENDING - The competitiveness is not in the model itself but in the rethinking of its weaknesses.
Writing Essays With AI: A Guide
We should take AI seriously as a creative tool—here’s how
HELPFUL AI - In the writing process, the current tooling is a good source of inspiration.
The future, soon: what I learned from Bing’s AI
We had a brief glimpse of two different types of AI. Both are significant
MORE AI - The weird future is already here.
Machines that draft laws: they’re heeeere
I Watched Elon Musk Kill Twitter’s Culture From the Inside
This bizarre episode in social-media history proves that it’s well past time for meaningful tech oversight.
5 top robotics trends to watch in 2023
Here are the top five trends shaping the robotics industry, according to the International Federation of Robotics.
Startup uses DALL-E to make food menus more appealing
Food tech startup Lunchbox has made the text-to-image AI DALL-E 2 available to generate food pics for restaurant menus.
Some ways for generative AI to transform the world
For the past two months I’ve been scrambling to work on generative AI. That’s the phrase I prefer to corral together ChatGPT, art generators like DALL-E, and any AI-driven software…

Paper for this week

“Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are.

(…) To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work.”

From the paper “Talking About Large Language Models” Link to PDF

Shanahan, M. (2022). Talking About Large Language Models.

See you next week!

Buy Me a Coffee at ko-fi.com