Friction with algorithmic systems: Exploring breakages, repair and renewal

Parallel session 5:
Thursday 8 June, 16:00-17:30 

Stort Møterom 50, Georg Sverdrups Hus

Chair: Minna Ruckenstein 

Perle Møhl & Dorthe B. Kristensen, University of Southern Denmark:
Matters of friction: humans, algorithms, deskilling and control 

Antti Rannisto, Aalto University:
Friction and value resonance in algorithmic systems 

Elisa Elhadj, KU Leuven - In Silico in the Making:
Exploring frictions and controversies between actors and values 

Tuukka Lehtiniemi, University of Helsinki:
Predictive AI tools in social work: Broken to begin with? 

Parallel session 6:
Friday 9 June, 09:00-11:00 

Stort Møterom 50, Georg Sverdrups Hus

Chair: Minna Ruckenstein 

Maria Eidenskog, Linköping University:
When the digital revolution never comes – frictions and care in implementing Building Information Modeling 

Yana Boeva & Cordula Kropp, University of Stuttgart:
Frictions in Computational Design as an Algorithmic System 

Ida Schrøder & Helene Friis Ratner, Aarhus University:
Ethical trials as de/valuation practices in algorithmic experiments 

Maiju Tanninen, KU Leuven & Gert Meyers, Tilburg University:
Care, control and trust in data-driven insurance 

Taina Bucher, University of Oslo:
Unbuilding algorithms: Towards an anarchitecture of the digital 

Parallel session 7:
Friday 9 June, 11:30-13:00 

Stort Møterom 50, Georg Sverdrups Hus

Chair: Tuukka Lehtiniemi 

David Moats & Sonja Trifuljesko, University of Helsinki:
“Meaningful” Transparency? Tinkering with Values in Algorithmic Cultures 

Robert Collins, Umeå Institute of Design:
The Fixer: New Roles for the Repair of Algorithmic Decision Systems 

Johan Irving Søltoft & Anders Kristian Munk, Aalborg University:
The Art of AI Acceptance: How to make creatively accepted AI in the cultural industry? 

Ignacio Garnham, Aarhus University:
The social life of algorithmic values: Examining how algorithmic values control the narratives behind the breakage and repair of algorithmic systems 

Abstracts

Frictions in Computational Design as an Algorithmic System

by Yana Boeva, University of Stuttgart; with Cordula Kropp

Algorithms and their sociotechnical environments have entered many aspects of life, including the production of architecture and the built environment. One such approach is computational design, an umbrella term for the combination of various digital and computational methods, software, and technologies, typically based on data and algorithms. As relational and hybrid arrangements, the algorithmic infrastructures of computational design are subject to a continuous process of infrastructuring, that is, of care and cure arising from social, political, and technological actions. A central problem of infrastructuring processes, however, is their transparency. Code and algorithms are embedded in design software as scripts, plugins, and visual programming, with limited options for manipulation by many professionals. Recent AI-based generative design tools go further, black-boxing code within algorithmic systems. 

Following critical research in algorithmic studies, software studies, and infrastructure studies, this paper argues that the ongoing infrastructuring of computational design as an algorithmic system reconfigures practices and decision-making with increasing inscrutability, and reorganizes not only design work but also how buildings are conceived. Computational design calls up such tensions particularly in moments of heterogeneous infrastructuring, through interactions with algorithmic systems and the wider sociotechnical assemblages. The paper spotlights how decisions are made in these hybrid assemblages of algorithms, data, software, technology, and standards, and across organizations, work practices, and human actors. Their sovereign application requires not only technical skills but also an understanding of how a technology-related reconfiguration involves sociocultural, regulatory, and economic aspects. The paper draws on an empirical study of computational design processes for architecture and construction.

Unbuilding algorithms: Towards an anarchitecture of the digital

by Taina Bucher, University of Oslo

This paper offers a conceptual intervention into the maintenance and repair literature in STS and media studies by arguing against the need to repair what is essentially a broken machine (Sharma, 2020). Drawing on the grammars of queer negativity and queer failure (e.g. Halberstam, 2011), and abolitionist projects (e.g. Harney & Moten, 2013), this paper grapples with a version of broken world thinking that rests on resisting the world as given. Rather than seeking remedies to broken algorithms in practices of maintenance and repair, I am interested in the potentials for unmaking and unbuilding worlds that hinge on fundamentally broken systems to begin with. Building on Halberstam’s (2018) work on Gordon Matta-Clark’s anarchitectural practices of unmaking, this paper asks what an anarchitecture of the digital would look like, especially as it pertains to algorithmic landscapes. 

Discussing a set of contemporary artists (Mimi Onuoha, Rachel Ora, Doris Salcedo) whose artworks engage with algorithmic failure and/or breakdown through tactics of unbuilding, I use Matta-Clark’s counter-architectural project of anarchitecture as a conceptual framework to reveal a world that refuses to be “plugged into patriarchy’s technological conditions of possibility” (Sharma, 2020: 176). That is, rather than repairing the gaps, splits and voids made by broken machines via an ethics of care, an anarchitecture of the digital insists on attending to the cracks and fissures of algorithms by choosing not to care, not to be moved, not to repair.

The Fixer: New Roles for the Repair of Algorithmic Decision Systems

by Robert Collins, Umeå Institute of Design

This paper embraces broken world thinking and Steven Jackson’s (2014) observation that “the world is always breaking; it's in its nature to break. That breaking is generative and productive”. Accepting the brokenness of our systems is not a failure but an opportunity to reimagine how we design these systems for a more agonistic and repairable future, and an antidote to solutionism and black-boxing.

We explore how Jackson’s suggested role of the Fixer can be a beneficial new character with “special insight and knowledge” who might “know and see different things - indeed, different worlds - than the better-known figures of ‘designer’ or ‘user’” in our entanglements with algorithmic decision systems, and a mediator for the frictions necessary to assume responsibility and response-ability in our socio-technical futures. Complementary to the maintainer, the Fixer’s duty is to question and challenge broken systems, to suggest alternatives in the ethical interest of all stakeholders, and to facilitate a more respectful relationship between designer, user and algorithmic system. 

Taking inspiration from the successes of the Right to Repair movement, the community ethos of Repair Cafés and the author's own experiences with design, repair and use, the Fixer can be seen as the direct descendant of the hardware repair technician: a character who has been with us since the dawn of the artificial and who can move with us into the more abstracted and entangled technological environment in which we find ourselves.

Exploring resistance towards Building Information Modeling – a story of care, knowledge and pride

by Maria Eidenskog, Linköping University

The new technologies gathered under the name of Building Information Modeling (BIM) can be used to facilitate collaboration and have been predicted to be central to the digital revolution of the construction industry. Although BIM has been available for almost a decade, however, the industry still relies heavily on paper documents and the profession-specific software BIM was supposed to replace. Resistance towards BIM has mostly been attributed to organizational and collaboration problems, but in this paper I argue that, more importantly, BIM fails to change how knowledge is produced, handled and negotiated. The paper builds on a case study of a consultancy company in Sweden and analyses professional practices inspired by concepts from the knowledge infrastructure framework (Edwards et al., 2013; Bowker, 2016). I explore how BIM sometimes successfully breaches established methods and challenges existing knowledge hierarchies, while at other times it fails to become embedded in everyday practices. BIM comes with embedded values and shifts power relations due to different levels of IT knowledge. Feelings of pride in, and care for, craftsmanship are shown to be crucial to whether digital technology succeeds or fails in changing how data is reconfigured into knowledge, as well as how knowledge is negotiated and ordered. With this paper I invite further discussion on how we can understand the failure of digital technologies in relation to feelings of care, and on how such failure changes knowledge infrastructures.

In Silico in the Making: Exploring frictions and controversies between actors and values

by Elisa Elhadj, Life Sciences and Society Lab (KU Leuven)

In silico medicine is a field that uses computer simulations and modeling to improve the diagnosis, treatment, and prevention of diseases. Techno-optimist visions tend to describe in silico medicine as having the potential to transform healthcare by personalizing treatments, accelerating the drug discovery process, and making various health services more efficient. However, in silico medicine is a complex field, full of dynamics and power relations that require closer scrutiny. Its adoption raises socio-political concerns related to, for example, equity, responsibility, privacy, de-skilling of the workforce, and environmental considerations. 

Building on insights from science and technology studies (STS), this paper follows the actors involved in in silico medicine with the aim of understanding how they are shaping and discussing the field. Furthermore, it explores the frictions and controversies between actors and central values, guided by the following questions: What do these frictions and controversies expose, and what is at stake?

The paper is based on an in-depth discourse analysis of three focus groups, a Delphi survey and a workshop held within the frame of the H2020 In Silico World Project, as well as of white papers, policy initiatives, and other sources. Understanding these frictions and contradictions can help policymakers, researchers, and the public anticipate and address potential negative consequences of in silico medicine, and shed light on some of its social implications. This is particularly important as an increasing number of in silico models are in the process of obtaining regulatory approval.

The social life of algorithmic values: Examining how algorithmic values control the narratives behind the breakage and repair of algorithmic systems

by Ignacio Garnham, DDINF, Aarhus University

In the context of HCI, researchers studying bottom-up understandings of algorithms – understandings constructed through everyday encounters with algorithmic systems rather than from knowledge of their salient qualities – treat the interactions between humans and algorithm-driven technologies as the sites where the notion of an algorithm, what algorithms are and what they can do, becomes appropriated. In contrast, scarce attention has been paid to the human-human interactions in which algorithmic values – the moral values embedded in code to guide the design of principled algorithms, such as efficiency, fairness, and transparency – are discussed. To address this gap, this paper focuses on the tensions that arise when human and algorithmic values are negotiated and asks: how do the passing on and adoption of algorithmic values control the narrative through which top-down characterisations of algorithms become broken apart and renewed? I address this question building on findings from field research documenting the social life of algorithmic values in the Salvadoran town of El Zonte as the local community faces the top-down transition into Bitcoin Beach, and discuss how shifting attention to the social life of algorithmic values helps to:

  • Move past the boundaries of human-computer interactions as the sites where new understandings of algorithms emerge.
  • Scrutinise the role that moral values play in shaping perceptions of algorithms that, for better or worse, don’t align with their technical counterparts.
  • Position algorithmic values as central to engaging with everyday situations where discussions about the need for maintenance, repair and renewal of algorithmic systems occur.

Matters of Friction: Humans, algorithms, deskilling and control  

by Perle Møhl and Dorthe Kristensen, University of Southern Denmark 

The introduction of AI in surveillance and border control is warranted by a need to alleviate workloads and create seamless movement, but also to improve the identification of potential threats. In this paper, we first describe two such instances of introducing AI technologies in surveillance and threat detection: facial recognition and object scans. We identify a series of frictions between the professed goals of the AIs and the effects they have on the agents using them. These frictions concern both the skills and potential deskilling of the users’ sensory and interpretive competencies, and the inbuilt systems of user surveillance.

On the basis of these findings, we then explore the implementation of AI in breast cancer screening (radiology), where similar logics are at play, concerning both the professed goals of using AI - workload reduction, efficiency and precision of threat detection (cancer) - and the inbuilt potential for deskilling and surveilling the professional users. We discuss how the use of AI potentially affects workflows and workloads, changes the radiologists’ reading and interpretation of x-rays, possibly deskills the radiologists, and provides an inbuilt control of the radiologists’ results and proficiency.

Predictive AI tools in social work: Broken to begin with?

by Tuukka Lehtiniemi, University of Helsinki

In pilot trials, Finnish caseworkers in children’s services used an AI tool that turned social and healthcare data into predictions of child custody. These predictions, the thinking goes, allow caseworkers to intervene early and act pre-emptively. In this presentation, I examine the case through the lens of breakages and repair, aiming to discuss how the inevitably proliferating AI tools could matter more in social work. An obvious breakage occurs when AI tools do not predict risks fairly or accurately, and instead generate biased predictions, make errors, or treat people unequally. These are extremely relevant concerns in a field where mistakes can cause serious damage. Related repair work is constantly underway: models are improved, datasets are expanded, and modes of repairing bias and inequality are suggested by ethical AI guidelines and principles. The case, however, reveals a different and more fundamental breakage, one stemming from a mismatch between what predictive AI tools do and what casework needs. Caseworkers aim to form a situated understanding of a client’s problems, based on intimate interactions with them. Risk predictions, however, distill a client’s whole situation into a single risk score, jumping to the conclusion instead of helping caseworkers form it. Predictive AI tools are in this sense broken to begin with. Rather than repairs to them, this suggests a need for renewed thinking about AI in social work: thinking that could start from assessing contextual needs, carefully considering existing professional practices, and figuring out what AI tools could meaningfully slot into those practices.

“Meaningful” Transparency? Tinkering with Values in Algorithmic Cultures

by David Moats, University of Helsinki; with Sonja Trifuljesko

The field of what is sometimes called Artificial Intelligence (AI) Ethics emerged in the second half of the 2010s as a collective attempt at “repair”, prompted by the harms generated by data-driven algorithmic systems. Various public, private and civil society organizations have produced numerous guidelines of ethical principles, foregrounding different ethical values (fairness, accountability, transparency, etc.). Critical data and algorithm scholars have mostly dismissed these endeavours as misguided (or disingenuous), while others have recognised the need to put these lofty “principles into practice” (Floridi 2019). But how does one go about turning abstract ethical values into concrete actions and procedures to improve algorithmic systems? 

In this paper we closely study one such attempt, which led to the creation of an “AI Register” for the cities of Helsinki and Amsterdam. As a digital tool enabling documentation of the decisions and assumptions pertaining to algorithmic systems, the Register was envisioned as a manifestation of “AI transparency”, which is posited to lie at the heart of “trustworthy AI”. Drawing on ethnographic material, we pay attention to what happens to values like transparency and trust when they become shaped by cultural patterns constituted by algorithmic imaginaries and materialities. We argue that in order for algorithmic systems to be understood as ethical, it is not only the algorithmic systems but the values themselves that are “repaired” and “tinkered with”: discursively framed, translated, or made to fit various actions. In particular, we focus on how values are made “meaningful” for various groups through ongoing negotiations. 

Friction and value resonance in algorithmic systems

by Antti Rannisto, Aalto University

The algorithmization of society seems to spark little interest among citizens in Finland. In the Netherlands, things are very different: the Dutch childcare benefits scandal from 2018 made the dangers of automated societies obvious, eventually leading to the resignation of Prime Minister Rutte’s cabinet. To understand differences in how the application of algorithmic technologies in society might or might not spark public interest and political awareness, I propose the concept of value resonance. I start by complementing Steven J. Jackson’s (2014) suggestion of focusing on cracks and decay with a Deweyan conception of action and values (Joas 1996; 2001). Here value resonance – and, through it, value articulation – can be seen as happening in moments of friction, crisis, and breakage of habitual conduct. These frictions are also moments that spark reflexive and creative agency (to repair habit). I then propose combining complementary elements from Jonathan Haidt’s (2012) theory of moral intuitions with Luc Boltanski and Laurent Thévenot’s theory of pragmatic regimes of engagement and (cultural) forms of the common good (Boltanski & Thévenot 2006; Thévenot 2001), to formulate a socially stratified approach to value resonance on individual and cultural levels. I will outline my PhD project’s approach to ethnographically following concrete social processes and praxis of value-coupling and resonance in the algorithmization of social services at Kela. I will also include related observations from qualitative research conducted with users of the COVID-19 tracing application Koronavilkku during its launch in the autumn of 2020. 

Ethical trials as de/valuation practices in algorithmic experiments

by Ida Schrøder, Aarhus University, DPU; with Helene Ratner

In the ‘post-digitalization’ of Danish public administration, predictive algorithms are cast as both objects of curiosity (Cochoy, 2016) and tools for problem-solving (Regeringen, 2022). While algorithms seduce public agencies into envisioning new futures of precision and scope in their decision-making processes, ongoing algorithmic experiments reveal that algorithms also fail to deliver the promised solutions to pressing societal problems, and many are cancelled before they reach casework. In this situation, the ‘valuations’ of why, how, and when it is right to invest in new algorithmic technologies are charged with uncertainty and ethical concerns about possible consequences.

In this paper, we propose that the valuations of predictive algorithms unfold as ethical trials of contesting and explicating risks, possibilities, and imaginations of future scenarios. In such “trials of explicitness” (Muniesa & Linhardt, 2011), the actors involved continuously adjust and repair not only their style of justifying their algorithmic experiment but also the data scope, algorithmic models, software, etc. As such, ethical valuations of predictive algorithms for public administration interweave with designers’ decisions about how algorithms are improved to be more ethical or possibly abolished as unethical. To examine how ethical trials unfold as de/valuation practices, we report on a longitudinal case study of a cancelled algorithmic experiment: RISK, a Danish research project developing a predictive algorithm to support decision-making about how and when to intervene in families where children have special needs.

The Art of AI Acceptance: 
How to make creatively accepted AI in the cultural industry?

by Johan Irving Søltoft, Aalborg University - TANTlab; with Anders Kristian Munk

How can AI be creatively accepted within the European cultural industry? This presentation draws on empirical data from workshops with culture creators across Europe, testing various machine learning approaches and visualizations, and aims to contribute to the debate on the socially responsible development of AI. Much of the literature focuses on bias (Cheng et al., 2021) and privacy (Stahl & Wright, 2018), but some argue that accountability is primarily about making the technology meaningful to the people affected by it. Floridi et al. (2018) thus point out that the potentials of AI turn into risks when algorithms no longer support people's room for action and competences, but instead remove self-determination and control from those who should make the decisions. How, then, should AI be developed if it is to make sense for a group of professionals who require room for manoeuvre and control over their own creative process?

The presentation takes a socio-technical perspective (Akrich, 1992; Callon, 2004) on the development of socially responsible AI and uses Participatory Data Design (Jensen et al., 2021) to involve the content producers of the cultural industry in decisions about the datafication, training and parameterization of machine learning. The technology thus becomes an elicitation device in an anthropological fieldwork that must provide concrete knowledge about particular groups of people's expectations of AI as a usable and acceptable part of their work practice (Pedersen, 2021; Munk et al., 2022; Albris et al., 2021).

Care, control and trust in data-driven insurance

by Maiju Tanninen, KU Leuven; with Gert Meyers

In this paper, we analyse (1) how care and control are at stake in emerging data-driven insurance technologies and (2) how ‘digital trust’ becomes key for insurers in solving frictions between careful control and controlling care. Data-driven technologies are supposed to disrupt insurance by enabling new ways to calculate, price and manage risks. The implementation of these technologies is controversial, raising fears of increased surveillance, control and exclusion from coverage, and they are not yet established as common practice. Beyond technological development, the realization of data-driven insurance depends on relational aspects, such as trust.

Trust has long been a key concern in the insurance business, and lately insurers have paid heightened attention to it in the digital context. Algorithmic technologies based on increased monitoring of digital traces can help insurers control and trust their customers; however, they might not be useful and caring enough to engender trust among policyholders. In this paper, we show the frictions between insurers’ will to collect new data and their need to ensure trust among customers. The emergence of data-driven insurance technologies will potentially reconfigure these trust relations, with insurers employing the notion of ‘digital trust’ to solve the frictions. The development of Insurtech thus serves as a lens for analysing how algorithmic technologies affect values in established institutions like insurance. The paper is based on empirical fieldwork conducted among insurance providers in Finland, Belgium and the Netherlands, together with an analysis of reinsurance companies’ reports on digitalization and ‘digital trust’. 

Organizers

Tuukka Lehtiniemi, Minna Ruckenstein 
