This year I have the pleasure of giving a workshop together with Hong Phuc Dang as part of the ARS Electronica Festival 2020. The title is "How to create your own AI device with SUSI.AI – An Open Source Platform for Conversational Web" and it's part of an overall event Waltraud Ernst and colleagues from the University of Linz have organized. The whole event deals with bias and discrimination in algorithmic systems: "How to become a high-tech anti-discrimination activist collective", with awesome keynotes by Lisa Nakamura and Safiya Noble; more info can be found on the ARS/ Uni Linz website. There you can also register if you're interested in participating! I'm already looking forward to this event!!!
4S/EASST conference, virtual Prague
Tomorrow the huge 4S/EASST conference starts, albeit virtually. It's the first Zoom conference I'm attending at this scale and I'm really curious how that will work out! I'm involved in three sessions:
- First, I'm contributing a paper on my alternative search engines research to the "grassroot innovation" session, which nicely fits its scope. My first analysis of the YaCy/ SUSI.AI research materials focuses on the visions and politics of open source developer communities, their practices, their relation to both the state and capital, and how cultural differences play into all that – comparing Europe, Asia and the US. Here's the link to the session.
- Second, I’m involved in the session “building digital public sector” with our research on the “AMS algorithm” (together with Doris Allhutter, Florian Cech, Fabian Fischer & Gabriel Grill). We’re presenting a sociotechnical analysis of the algorithm and its biases & implications for social practices. Here’s the link to the session & here’s the link to our article for further information.
- Finally, Christian Katzenbach and I have submitted a paper to the session "lost in the dreamscapes of modernity?" that covers the main arguments of our editorial for the special issue in NM&S on future imaginaries in the making and governing of digital technology (which is currently in press and will hopefully be published soon!). The editorial argues that sociotechnical imaginaries should not only be seen as monolithic, one-dimensional and policy-oriented, but also as multiple, contested and commodified, especially in the field of digital tech. Here's the link to the session.
If you're registered for the conference, you can find the links to the Zoom meetings right next to the sessions.
Digitale Weichenstellungen in der Krise
Here’s the link to my Standard blog post on COVID-19 & digitalization (in German); the teaser reads as follows:
E-learning platforms and video conferencing tools have nested themselves in our everyday lives. Hardly any household can still escape them. Nevertheless, fundamental techno-political questions must not be lost sight of.
algorithmic profiling
I'm very happy that our article "Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective" by Doris Allhutter, Florian Cech, Fabian Fischer, Gabriel Grill and me (from the ITA, TU Wien and the University of Michigan) has now been published!! It's part of the special issue "Critical Data and Algorithm Studies" in the open access journal Frontiers in Big Data, edited by Katja Mayer and Jürgen Pfeffer! Thanks Katja for a speedy review process!! Surprisingly, the article triggered quite some resonance in the academic, but also in the public sphere. Lots of journalists etc. got interested in this "first scientific study" of "the AMS algorithm". Since we're currently working on an additional study comprising a deeper analysis of our own materials (including our own data inquiry to the AMS), we're not able to say much about this paper in public at this specific moment. But new insights will follow by the end of May or mid-June at the latest, so stay posted!!! Here's the project description of the current study funded by the Arbeiterkammer OÖ.
media coverage
Last year, my work was covered by various media outlets and events. First, the Austrian Academy of Sciences (ÖAW) did a portrait/ interview with me on the way visions and values shape search engines as part of their series "Forschen für Europa". This piece included a fancy photo shoot, as you can see here. Second, I was invited to take part in the panel discussion of the ORF Public Value event "Occupy Internet. Der gute Algorithmus" (together with Tom Lohninger from epicenter.works, Matthias Kettemann from the Hans-Bredow-Institut and Franz Manola from the ORF Plattformmanagement). The live discussion took place at the "Radiokulturhaus" and was aired on ORF 3 afterwards. Here you can find the abstract, the press release and the video in case you want to watch the whole discussion. Finally, I was invited as a studio guest to the radio broadcast "Punkt 1" on Ö1, "Das eingefärbte Fenster zur Welt", where I spoke about alternative search engines and people could phone in or ask questions by email. Talk radio it is! – all in German.
50 internet myths
Today, the Internet Governance Forum started in Berlin. As part of this huge event, the edited volume "Busted! The Truth About the 50 Most Common Internet Myths" will be launched. This wonderful volume – edited by Matthias Kettemann & Stephan Dreyer – is a compilation of common Internet myths and their deconstructions. Here is the link to the whole book: https://internetmythen.de (English and German; including summaries in all five UN languages). Enjoy!!
I've contributed Myth #19, "Search engines provide objective results":
Myth: Search engines deliver objective search results. This is the founding myth of the leading search engine in the Western world: Google. 20 years later this founding myth still exists in Google's company philosophy. More importantly, however, it resonates in people's minds. Without knowing how the search engine actually works, many users say that the best websites can be found at the top.
Busted: In 1998, the founding year of Google, Sergey Brin and Larry Page described their search engine's central aim as follows: "The primary goal is to provide high quality search results over a rapidly growing World Wide Web." (Brin and Page 1998: 115). Accordingly, the notions "quality" and "search quality" feature over 30 times in their research paper. The authors depict the PageRank algorithm – originally using the number and quality of hyperlinks a website gets, anchor text and proximity to determine the quality of a website and rank it accordingly – as their main competitive advantage. They describe the algorithm as an "objective measure" corresponding well to "people's subjective idea of importance" (Brin and Page 1998: 109). Interestingly, this seems to be the case indeed. Having asked people why they use Google to find online health information in the context of my PhD project, most answered by saying that Google delivered the best search results, implicitly framing the search engine as a tool for quality assurance. Without knowing – or even thinking about – how the search engine actually works, Google's founding myth was reproduced in people's stories.
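To make the ranking idea described above a bit more concrete, here is a minimal, purely illustrative sketch of the original PageRank intuition: pages gain importance from the number and importance of the pages linking to them. The example graph, the damping factor and the function itself are made up for demonstration and are a drastic simplification; this is not Google's actual implementation.

```python
# Toy sketch of the PageRank intuition: importance flows along hyperlinks.
# All names and values here are illustrative, not Google's real algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}              # start with equal importance
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages             # dangling pages spread rank evenly
            share = damping * rank[page] / len(targets)
            for target in targets:                  # pass importance to linked pages
                new_rank[target] += share
        rank = new_rank
    return rank

# A page that many others link to ("hub") ends up with the highest score.
example_web = {"hub": ["a"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(pagerank(example_web))
```

Even in this toy form, the point of the chapter holds: which signals are fed into such a function, and how they are weighted, is a design decision rather than an objective given.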
But it is a myth. Search engines are not neutral, objective technologies, but rather tightly intertwined with societal norms, values and ideologies; most importantly, the capitalist ideology. Over the past decades, Google's "techno-fundamentalist" ideology of neutral ranking was aligned with and overshadowed by non-objective considerations. New media scholars started to deconstruct the myth of objectivity soon after the search engine's successful market entry. At first, they challenged the PageRank algorithm by arguing that it would threaten the democratic ideal of the web (#28) by systematically preferring big, well-connected, often commercial websites at the expense of smaller ones. Later they switched to questioning search engines' business models based on user-targeted advertising, the commercialization of search engine results, and the privacy issues these trigger. A major criticism in this body of work concerns the "consumer profiling" conducted by Google – and others like Bing – that enables search engines to adjust advertisements to users' individual interests. (#21; #22)
Due to the growing amount of user data these companies acquired, the search algorithm and the "organic" search results changed too. Besides hyperlinks, other factors were incorporated into the measurement of a website's quality – most notably user profiles and click behaviour, but also the structure of a website, timeliness, and the amount of keywords and content. Accordingly, new media researchers, but increasingly also journalists, criticized the intensified personalization of search engine results, search engine biases and discrimination. This illustrates that search algorithms are tightly intertwined with the business models their companies rely on. The capitalist ideology is embedded in search engines and "acts through algorithmic logics and computational systems" (Mager 2014: 32).
Truth: It is important to keep in mind that search engines and their algorithms are not neutral technologies, but rather incorporate societal values and ideologies; most importantly, the capitalist ideology. Only then may we come up with forward-looking governance models that respect local regulations and resonate with human rights (especially in Europe, where data protection is enshrined as a fundamental right).
Source: Sergey Brin and Lawrence Page, The anatomy of a large-scale hypertextual Web search engine, Computer Networks and ISDN Systems 30: 107–117 (1998); Astrid Mager, Defining Algorithmic Ideology: Using Ideology Critique to Scrutinize Corporate Search Engines, tripleC 12(1): 28–39 (2014).
intro course STS & digital tech
This is the abstract for my introductory course on Science and Technology Studies, using digital technology as an exemplary case (data, algorithms & prognosis more specifically). I'm already looking forward to heated discussions on social media, AI, self-driving cars, recommender systems and their sociopolitical dimensions and governance implications! (@ the Department of Science and Technology Studies, University of Vienna; in German).
Technik im Alltag am Beispiel von Daten, Algorithmen und Prognosen (Technology in everyday life, using data, algorithms and prognoses as examples)
Search engines, social networks and a multitude of apps on our phones have become impossible to imagine everyday life without. They have nested themselves in our daily practices, but at the same time they shape which information we find, how we communicate over distance, and how we perceive our bodies, if we think of health apps for example. They also raise a number of socio-political questions: What do we get to see in search engine results, news feeds and online recommendations, and what not? Which new forms of bias and discrimination emerge in the process? How can predictions about the future be made on the basis of collected data, and what consequences come with that? What does the increasing quantification of different spheres of life mean for individuals and society? How can we regulate globally operating technology companies and their business models (keyword 'data trading'), and what forms of societal participation are possible here?
We want to address these questions in our course on the basis of classic introductory texts from science and technology studies (STS) as well as current texts from critical new media studies. In each unit the course instructor will first present a classic STS concept – social construction of technology, politics of technology, actor-network theory, technology development and gender, participation, etc. – and prepare it for discussion (required reading). Building on this, we will discuss a text from the fields of data, algorithms and prognoses that applies the respective concept (presentation reading). This text will be prepared, presented and moderated for discussion by students working in groups. In addition, there will be two written assignments, which we will discuss in the seminar. Requirements for obtaining a grade are attendance, participation, an oral presentation (text discussion or a position in the citizen conference), the written assignments, and passing the written final exam. Since the course is largely based on English-language texts, basic English skills are required. The language of instruction is German.
More information can be found at the University of Vienna website.
body data – data body
Together with Katja Mayer, I wrote an article about the quantified self, big data and social justice in the health context. The title is "Body data – data body: Tracing ambiguous trajectories of data bodies between empowerment and social control in the context of health" and it has just been published by the wonderful open access journal Momentum Quarterly!! Here is the link to the full text (completely free of charge!)! Don't be put off by the German title and abstract, the article is in English, no worries!
Thanks to Leonhard Dobusch and Dennis Tamesberger!! I’m happy to be part of this great Momentum Quarterly editorial team!
die macht der algorithmen
A radio broadcast on the topic "Die Macht der Algorithmen" (the power of algorithms). With many thanks to Herbert Gnauer!! Radio Dispositiv can be heard on Orange 94.0. @HerbertGnauer
Where is SUSI in the AI?
Thanks for your interest and great response to the FOSSASIA 2019 workshop I advertised in my previous blog post! Are you a SUSI.AI developer/ contributor? Are you up for an experiment? Would you be willing to write a short piece of text on how the social appears in the technical development of SUSI.AI/ your daily work practices? This text should only be half a page to a page long and you shouldn't think about it too hard; rather, find a nice spot (like I did last spring in Berlin, where the picture above was taken) and quickly write down what comes to your mind when you hear the following question:
When and how did you encounter SUSI (standing for the social in terms of social biases, user imaginations, gender relations, your own desires and expectations, or something else that comes to your mind...) when developing/ contributing to SUSI.AI, and how did you handle SUSI back then?
Please send your memories to me (astrid.mager(at)oeaw.ac.at) so that we can discuss/ work with them during the workshop. Based on these texts we'll be able to draw out how to (better) handle SUSI in the future, but also how SUSI can be made productive in terms of creating more "open", "transparent" or "fair" (AI) technology more generally.
If you don’t find the time to write such a memory, don’t worry! I’d still be happy to see you at the workshop and learn about your ideas on the way SUSI figures in your work and how you usually deal with it!
Remember: the workshop "Where is SUSI in the AI?" will take place on Saturday, 16 March, 18:00–18:55, in Event Hall 2-1. I'm already looking forward to seeing you there!!! Please use this link to sign up for the workshop! Thank you!
If you're interested in learning more about working with memories in software design, I'd be happy to give you further insights into the method of "mind scripting" that I've been toying around with recently. It's a method developed by my colleague Doris Allhutter, who created it specifically to investigate (and potentially also to intervene in) software practices.