Hi all! First, let me apologise for sending this newsletter one day later than usual. We decided to enter a grant proposal quite late, and the deadline was yesterday, so writing it ate into the hours I normally spend completing the newsletter (Monday evening and night).
Nevertheless, here we are. As an update on activities: that proposal is a follow-up to some good conversations on our AI roadmap, which is key to Structural’s services. Of course, the interlanguage we are developing specifically connects the collaborative work of human and machine analysis. Let me know if you want to hear more; happy to elaborate.
I don't know about your experiences, but I got delayed listening to all the tech podcasts. I need a strategy for choosing a tech-podcast diet. Hard Fork, Sharp Tech, Vergecast, Dithering, Pivot: the topics are often the same (Bluesky was last week's hot topic), but they have different angles. Maybe I should allow myself one episode of each per week, so I still have time for other things, like events.
In the series of AI tooling: Microsoft announced Bing and Edge extensions.
And Google announced its own copilot, showing off creative-writing tools and coding capabilities, as expected.
Coursera is going a step further, introducing Responsible AI principles.
Let’s see whether they add to this with a new constitutional approach at these schools.
It is a small step from chatbots to emotional connections. One of the podcasts mentioned above discussed how easily we mix up reality and chat interfaces due to our digital-nomad life…
Sometimes you get an image right away: the Future of Writing resembles how hip-hop created a new art form by remixing music. Driven, of course, by AI support tooling.
A long interview on the capacity of AI to expand human learning.
And how about AI as the new consulting?
Some updates on predictions by experts on AI
This is just a logical step in a longer strategy that might only be speeding up now.
A sandbox for complex systems; it would be nice to use.
An interesting take from Ethan Mollick: “Once you see AI as being more like a person in how they operate, it becomes much easier to understand how and when to use them.”
Clickbait or some serious downfall?
Years ago, we developed a conceptual application of robotic creatures in cities that measured air pollution (PACT); it was the start of the Cities of Things research project. It is now partly becoming reality.
Did you follow the hype around Humane? Especially the unrealistic product demo. Now online:
Saying that cars are driving computers is becoming cheesy; it has long been the case for micromobility, which is defined by the app experience even more than by the ride. In that sense, it is interesting to see how Qualcomm is extending this market by acquiring Autotalks.
Robots without chips
What makes you happier, possessions or experiences?
In other news…
And, similarly, in the series of unintended consequences…
Some people from Vai Kai introduced a new toy with an educational touch, via crowdfunding.
Paper for this week
A relevant paper deals with the political impact of AI.
“This chapter discusses the regulation of artificial intelligence (AI) from the vantage point of political economy, based on the following premises: (i) AI systems maximize a single, measurable objective. (ii) In society, different individuals have different objectives. AI systems generate winners and losers. (iii) Society-level assessments of AI require trading off individual gains and losses. (iv) AI requires democratic control of algorithms, data, and computational infrastructure, to align algorithm objectives and social welfare.”
Kasy, M. (2023, April 19). The Political Economy of AI: Towards Democratic Control of the Means of Prediction. https://doi.org/10.31235/osf.io/x7pcy
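The mismatch the paper's premises describe can be put in a small formal sketch (the notation below is my own illustration, not taken from Kasy's chapter): an AI system optimizes a single measurable objective, while a society of individuals with different utilities would prefer the action that maximizes an aggregate welfare function.

```latex
% Illustrative sketch; symbols m, u_i, w_i are my own, not Kasy's.
% The AI system chooses the action that maximizes its single objective m:
a^{\mathrm{AI}} = \arg\max_a \; m(a)
% A society of n individuals, with utilities u_i and welfare weights w_i,
% would instead prefer:
a^{\mathrm{SW}} = \arg\max_a \; \sum_{i=1}^{n} w_i \, u_i(a)
% Whenever m differs from the weighted sum of the u_i, the two actions can
% diverge, creating winners and losers (premise ii); democratic control of
% algorithms, data, and compute is the paper's proposed way to choose m so
% that it tracks social welfare (premise iv).
```

Reading the premises through this lens makes point (iii) concrete: any society-level assessment implicitly picks the weights w_i that trade individual gains against losses.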