Round-up of the week: A report on antisemitism on Twitter after the Musk takeover - which found a more-than-twofold increase in post volumes and a surge in new accounts - that I worked on with Carl Miller, CASM, and ISD (Institute for Strategic Dialogue) has come out. It's been featured in The Times and The Washington Post, amongst other outlets.
Report: https://lnkd.in/eZTMfz_U | The Times: https://lnkd.in/eqbaKch5 | Washington Post: https://lnkd.in/ehy7ZTa4
I also wrote my own piece on the worrying barriers social media platforms are creating to data access: https://lnkd.in/ecy-pnzx
Finally, prompted by comments at the Alan Turing Institute's #AIUK 2023 conference, I wrote a short piece about Generative AI as a 'disinformation machine gun': https://lnkd.in/eDH5xfKX
Hopefully next week I can write less depressing things about the internet...
Dr. Oliver Marsh’s Post
-
More sensible thoughts about GenAI and elections - with actual evidence of scale (not much) - from Sam Jeffers. To be clear, given recent posts I've been making and sharing - I don't think we should dismiss risks of genAI creating misleading content. But we certainly shouldn't run away with overhyped and poorly-supported ideas of fake news swinging people's opinions and votes. Focussing on fake news and elections makes for good stories, but risks distracting from longer-term and more targeted issues: issues of people being abused out of politics, or recommender systems promoting divisive and radicalising views. Fact-checking networks are good, but I haven't seen as much support for initiatives against those latter problems (though that's maybe started to change in the last couple of years). My (personal) thoughts here: https://lnkd.in/eaNmtg7v
I did an interview about generative AI and elections in 2024 for the European AI and Society Fund's newsletter. I'm not sure campaigns are really ready to "go there" just yet. https://lnkd.in/eSajQ472
Interview with Sam Jeffers from Who Targets Me: is generative AI changing election campaigns online?
https://europeanaifund.org
-
Hello Crowd! Can you help us crowdsource evidence of online risks? Learn more and help us here 👉 https://lnkd.in/e8PK36R6
Super short summary: The Digital Services Act means online platforms & search engines should minimise "systemic risks" in the EU. But what are "systemic risks"? The Act has a list, including risks to fundamental rights, elections, etc. But the benchmark distinguishing "bad things" from "systemic risks" is still unclear. Debates like these are best addressed with real evidence. So we at AlgorithmWatch need 👉 your help 👈 in collecting real events observed on platforms / search engines which *might* be systemic risks. More info in the form, linked above & also here again: https://lnkd.in/e8PK36R6
We'll then distribute the observations to experts, who will give views on whether these are or aren't systemic risks. That will help us work with the European Commission to draw up proper guidelines for systemic risks. "Systemic risks" really is a key part of the Digital Services Act - and increasingly of other tech regulation, like the AI Act. So your help in gathering evidence could be very impactful.
I'm very happy to answer questions. But bear in mind, if something is unclear - that might be precisely the problem we're trying to solve. Shares from people with digital research networks would be wonderful, e.g. Brandi Geurkink Carl Miller Francesca Arcostanzo, PhD Philipp Lorenz-Spreen Danie Stockmann Joanna Bryson #onlineharms #socialmedia #techpolicy
Online Systemic Risks under the DSA: Crowdsourced Evidence
https://www.jotform.com
-
Gaia Marcus FRSA is not only a great force in the field of technology governance, but also a fantastic leader, always insightful and creative, a problem solver, and a wonderfully kind (and fun!) person. I was lucky to ask her to be a mentor back when we were both in DCMS, and have never looked back. Absolutely great appointment by the Ada Lovelace Institute.
We are delighted to announce that Gaia Marcus has been appointed by the Nuffield Foundation as the new Director of the Ada Lovelace Institute, following an open recruitment process. Gaia will take up her post as Director in June following Francine Bennett’s tenure as Interim Director. Francine will re-join the Ada Board, on which she has served as a member since 2019. Read the full announcement: https://lnkd.in/e3kUvC9B
Gaia Marcus appointed as new Director of the Ada Lovelace Institute
adalovelaceinstitute.org
-
Carl being wise on generative AI disinfo threats, worth a read 👇
Partner, CASM Technology, Research Director, CASM, Demos. Author, The Death of the Gods. Visiting Fellow, King's. International Speaker
I spoke in Parliament to the APPG on AI yesterday. The topic was the hottest of potatoes: disinformation and electoral interference in a world of generative AI. My main point was that to think about how AI might be used to undermine elections, we have to situate it within what we know about how covert influence already works:
1) AI will be used to create material that confirms our worldviews, not contradicts them. Confirmation bias is incredibly powerful, and material will be influential when it consolidates something audiences already suspected to be the case.
2) But to me, the possibly game-changing threat is the way that generative text models could semi-automate one-to-one conversations with people. Any behavioural scientist will tell you that influence flows down social ties, and there's now the potential for an attacker to draw tens of thousands of people into months-long conversations that will come to resemble friendships in the eyes of the people targeted. Think therapy-bots as an influence vector.
What neither of these looks like is the general spamming of fake images to large audiences that will cause people to change their minds. I think the sad reality is that our trust in any information provided to us by people we do not know is going to fall off a cliff very quickly.
-
AlgorithmWatch recently responded to the EU Commission consultation on protecting elections from online risks. You can find our full response here: https://lnkd.in/e2hmMmB4 Below are my thoughts on "best practices" in the context of online risks to elections.
First, a positive story: we proposed that AI chatbots, when asked about elections, should return standard search results rather than LLM summarisation. The day after we submitted, Google said that's what they will do with Gemini! (Though IMO simply serving the search results in the chatbot interface would involve a little less friction than directing users to search.)
Anyway, here's some broader thoughts. The background: data access requests have stalled while we wait for an update to the law, probably until October at least. We also still don't have the guidelines by which we should assess "systemic risks" under the DSA. So the DSA is substantially weaker, heading into elections, than it should be. These guidelines, while much weaker than a properly implemented law, could be useful stop-gap measures - even if they rely heavily on full participation from platforms, which recent CrowdTangle news (amongst a long line of other such news) suggests is risky. Nonetheless, we argue the guidelines could play an important role in providing clarity around "systemic risks" and opportunities for data access while we wait for the full DSA. They could even provide chances to test these provisions, and to learn for fuller implementation later.
But we obviously all want to go well above the floor of "comply with the DSA" and reach the heights of best practices. There's lots of great work, from within and outside platforms, on clever ways to detect and protect against online risks to elections. We need something more modern and dynamic than a guideline document listing "best practices" (especially as I am sceptical that some of the practices listed in the draft are actually "best", rather than simply "talked about a lot").
In the fast-moving and complex world of elections and digital platforms, general best practice is hard to capture - there is no "silver bullet" beyond obvious things like "have enough moderators and good processes". Our proposals therefore focus on this: how to help platforms, when planning for elections, draw on the best available expertise *at that time and for that context*. The answer is about people, collaborations, and transparency - and the guidelines can, and should, help put these in place. A good recent piece on the evidence around online counter-disinformation measures (and the lack of a clear "silver bullet") is this one from Jon Bateman and Dean Jackson at the Carnegie Endowment for International Peace: https://lnkd.in/eXpF6tWH #dsa #elections #disinformation
AlgorithmWatch proposals on mitigating election risks for online platforms
https://algorithmwatch.org/en