The European Parliament, generated with Stable Diffusion XL

Some AI news, and EU regulation

This ended up being a pretty weird post, juxtaposing technical releases and legal developments. We are not sure we managed to give it a coherent shape, but this is very much the complex space in which the future of digital humanities, and indeed the future of everything we do, is being shaped.

A lot of powerful actors are working to shape it in their favor, and they are not being secretive about it at all.

AI stuff

We’re working on a couple of audio-guide projects, and in one of them it would be really convenient to automate some of the work with an AI. Our current attempts are technically presentable but, honestly, pretty boring. Think something like the Spot guide-dog from Boston Dynamics (what is up with their microphones?), but on your phone and without installation. We have built it and it works, but we do not think it’s good enough, just as automated translations aren’t really good enough compared to a good professional translator. Needless to say, we’re keeping our eyes peeled for the latest advancements. After all, AI is guaranteed to disrupt us.

Large Language Models are tools, and it’s up to us how we use them, from Giant Robots Smashing Into Other Giant Robots.

The EU Council and Parliament have reached an agreement on the new AI Act. Before the agreement, OpenFuture had written about copyright opt-outs, about friction and governance, and about self-regulation.

Local AI

Sticking to AI, it seems that Google is catching up to ChatGPT, confirming our “there is no moat” position. However, just like Threads and Horizon Worlds, it is not coming to the EU yet, which we consider a worrying trend.

We suspect it’s going to end up just like cloud computing: there is a lot of money to be made, but it’s going to be so capital-intensive that only a few big players will dominate the market, developing useful products that are hamstrung by the imperative to build in vendor lock-in. Luckily there are some really interesting developments on the local deployment side, and we think that the transparency and user control of locally run Open Source software are going to be extremely important.

Mozilla published a guide for new AI developers last month, and last week they also released Llamafile, which combines llama.cpp and the brilliant Cosmopolitan library to distribute Large Language Models as single executable files.
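To make the idea concrete, here is a minimal sketch, not taken from the Llamafile documentation, of how a locally running llamafile could be queried from Python. It assumes you have already downloaded a llamafile, made it executable and started it, and that its built-in server is listening on http://localhost:8080 with an OpenAI-compatible chat endpoint; the model name and prompt are placeholders of ours.

```python
# Sketch: query a locally running llamafile over HTTP.
# Assumption: the llamafile's built-in server is on localhost:8080
# and exposes an OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

payload = {
    "model": "local-model",  # placeholder: the server runs whatever model the llamafile bundles
    "messages": [
        {"role": "user", "content": "Write a two-sentence audio-guide blurb for a Roman mosaic."}
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    reply = json.load(response)

print(reply["choices"][0]["message"]["content"])
```

Because everything runs on the user’s own machine, a snippet like this keeps working offline, which is exactly the kind of transparency and control we were talking about above.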

In a similar vein, Noiselith allows us to run Stable Diffusion XL and generate images on local hardware, with a user-friendly setup.

Voicemod allows you to change your voice in real time, both to existing voices and to newly generated ones. We do not agree with The Verge’s disregard for the legal aspects of voice cloning: rights limiting the reproduction of one’s likeness are well established, and copyright is not the issue.

Apple released a Machine Learning framework for Apple Silicon, and they literally just pushed it to GitHub without announcing it. Of course, nothing that Apple does passes unnoticed, and the tech press was instantly on it.
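The framework in question, MLX, exposes a NumPy-like Python API with lazy evaluation on Apple Silicon’s unified memory. As a flavor of what that looks like, here is a minimal sketch, assuming `pip install mlx` on an Apple Silicon Mac; the details are our reading of the repository, not an official example.

```python
# Sketch: a tiny MLX example (assumes `pip install mlx` on an Apple Silicon Mac).
import mlx.core as mx

a = mx.array([[1.0, 2.0], [3.0, 4.0]])
b = mx.array([[0.5, 0.5], [0.5, 0.5]])

# MLX builds a lazy computation graph: `c` is only materialized when needed.
c = mx.matmul(a, b) + 1.0
mx.eval(c)  # force evaluation explicitly

print(c)
```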

The ex-Apple employees behind Shortcuts have a new desktop AI startup.

Meta, IBM, Intel, and around 50 other organizations launched an alliance for Open Source AI.

Mistral AI, the French company that releases Apache-licensed models (their benchmark scores are pretty amazing), has received a $2B valuation.

AI Trust

Meta has a new AI trust and safety initiative, but at a glance it seems like it’s mostly focused on spotting content that does not align with what the owners want, which is obviously super important.

The most important article on AI trust you’ll read this week: AI and Trust by Bruce Schneier.

Stuff at the intersection between law and culture

Getting back to EU legislation, Felix Reda (of Pirate Party fame) and Justus Dreyling wrote about the need for a Digital Knowledge Act, highlighting a goal that is very much in line with our mission: making it possible to pursue every research question online. It should never happen that a document or resource is held in a public archive, or is produced with public money (we’re thinking of scientific papers), and yet is not made available online and for free.

Cory Doctorow wrote on the evils of DRM, and that is always important to keep in mind when building digital archives and collections.

The EU Data Act is also moving forward, with some good stuff (harmonization and foreign transfers, in particular) and some really concerning bits. Offering legal protection to trade secrets favors some specific players, but it runs against the hard bargain that justifies patents in the first place. The current discourse seems to have lost sight of the fact that “Intellectual Property” is not property at all: in the abstract, all knowledge should belong to every human being, and we have set up a legal system that grants a temporary monopoly in exchange for the disclosure of technical information (in the case of patents) or as an incentive for the creation of more cultural works (in the case of copyright). Trade secrets should not be legally protected; if you want protection, you should use patents. The current Data Act agreement also restricts reverse engineering, which is terribly harmful for innovation and competition.

Work has also been progressing on the EU Cyber Resilience Act. It seemed to be going in a weird direction as far as the intersection of security and Open Source is concerned, but they seem to have already fixed the most glaring issues. This is a good chance to recommend following Bert Hubert, who always has insightful articles and up-to-date news. For example, that’s how we learned about this case in which Sony attempted to strong-arm a DNS provider.

