Chapter 6 Uncovering the long-term development of identification infrastructures: A multi-temporal perspective

Abstract

Systems and infrastructures for identifying and registering mobile populations have many facets and long development histories, and researchers’ partial perspectives shape their understanding of the technologies and practices involved. To overcome methodological partiality, researchers frequently study infrastructures at multiple sites or include the human and non-human actors that shape identification encounters. As a further option, this chapter suggests that researchers can use multi-temporal sampling methods to understand the long-term development of identification systems and infrastructures. The chapter proposes two heuristics for selecting contingent moments in the lifecycle of identification technologies. The first heuristic employs the Social Construction of Technology’s concept of “interpretative flexibility” to pick out moments when social groups challenge, change, or close down the meanings of identification practices and technologies. The second heuristic employs infrastructure studies’ concept of “gateway moments” to pick out moments when heterogeneous identification software systems and infrastructures intersect. These two heuristics were tested through the analysis of data gathered at an IT vendor of software for matching people’s identity data. This chapter makes two contributions to the research agenda of long-term perspectives on identification software development. The first contribution demonstrates how the contingent interpretation of a data matching system corresponds to diverse problematizations of identification and its securitization. The second contribution demonstrates how “gateway moments” make it possible to see the compromises necessary when building identification infrastructures and adapting globally honed technologies to new settings. Together, these findings shed light on the activities of under-the-radar actors, such as software vendors, whose distribution and reuse of data matching systems have long-term implications for identification practices and infrastructures, not only in the security field.


Contribution to research objectives
To investigate the long-term development of identification systems and the building of transnational data infrastructures by identifying crucial moments in their lifecycle to explore how data matching expertise travels and circulates.
This chapter contributes to the research objective by employing two guiding heuristics to select and analyze contingent moments: the interpretative flexibility of data matching systems and the gateway moments associated with their integration into broader infrastructures. The first heuristic focuses on the system’s changing interpretative flexibility, which allows us to see actors’ varying problematizations of data matching and identification. The second heuristic uses “gateway moments” when systems and infrastructures intersect, which makes it possible to see the compromises necessary when integrating identification systems into broader infrastructures. Consequently, the chapter highlights the myriad factors influencing such systems’ development, adaptation, and dissemination. Of particular significance is the exploration of how data matching expertise traverses and circulates, exemplified by the development of name matching expertise that finds application elsewhere and by instances where this circulation faces hindrances due to infrastructural challenges, such as backward compatibility issues. The chapter’s analysis offers a nuanced perspective on the intricate and contingent interplay between technological advancements in data matching and their enduring impacts on transnational data infrastructures.
Infrastructural inversion strategy
Third inversion strategy — sociotechnical change
The strategy is operationalized using a “multi-temporal sampling” approach, offering an alternative to conventional longitudinal studies for exploring the complex processes shaped by and shaping identification systems and infrastructures. In this context, the analysis of the data matching system assumes a dual role, serving as both the subject of investigation and a resource. This dual role enables a comprehensive exploration of the intricate and contingent processes underlying the development of WCC’s ELISE data matching system within broader transnational data infrastructures. The ELISE system was repeatedly adapted to the evolving demands and challenges within the field of data matching, exhibiting phases of openness and closure in its design. As a resource, the analysis is used to address the chapter’s research question concerning the circulation of data matching knowledge and technology across organizations, thereby providing insight into how such knowledge and technology travel within the realm of identification systems and infrastructures.
Contribution to research questions
RQ3: How do knowledge and technology for matching identity data circulate and travel across organizations?
By examining the interpretive flexibility of data matching software, the chapter sheds light on the roles of often-overlooked actors in this knowledge and technology circulation. The analysis highlights instances where international professional networks and government agencies have influenced the trajectory of data matching technology, emphasizing how such factors drove adaptations and innovations within organizations like WCC. Furthermore, the chapter explores gateway moments, illustrating that the circulation of data matching knowledge and technology is not deterministic but contingent on specific contexts. The integration of ELISE into both the EU-VIS and IND systems serves as a compelling example, showcasing how different implementation approaches influence the circulation of data matching technology and expertise across organizations.

Figure 6.1: The axes pertaining to the methodological framework in relation to chapter 6.

Contribution to main research question
How are practices and technologies for matching identity data in migration management and border control shaping and shaped by transnational commercialized security infrastructures?
Through an analysis of interpretative flexibility moments, this chapter traces the evolution of identity data matching technology, conceived initially for versatile applications but later refocused on the identity and security sector due to geopolitical shifts and heightened security concerns. This shift led to an expansion of data matching capabilities, including biometric data matching and name matching tailored to security agendas. Additionally, the chapter emphasizes the commercial nature of these technologies, detailing WCC’s extensive network of partners and collaborators. Furthermore, examining gateway moments illustrates how transnational infrastructures influence data matching practices. The integration of ELISE into EU-VIS showcases member states’ control over matching criteria and the challenges of such integration. At the same time, incorporating data matching into legacy systems highlights opportunities for standardization. Overall, the chapter provides a comprehensive understanding of how identity data matching practices and technologies are shaping and being shaped by transnational commercialized security infrastructures.

6.1 Introduction

What we know about technologies for identifying people is inextricably linked to what we know about the devices and practices involved (Garfinkel 1964; Latour 1986; Marres 2017; Pollock and Williams 2009). Every year, authorities identify millions of people crossing the EU’s external border, including migrants, tourists and asylum seekers. In order to identify people, authorities frequently cross-reference and link personal information from different databases. Examples include checking visa applicants’ identities, preventing identity theft by linking identities across worldwide law enforcement databases, and screening passengers for potential security risks by comparing their personal information to government watch lists. By facilitating or hindering particular forms of mobility and excluding others, these processes contribute to the fragmentation of mobilities (Olivieri 2023; Sparke 2006). However, what do we know about the histories of identification systems, and how are the technologies known to us? Most importantly, through which methods?

One significant challenge in researching technologies, often stemming from their development over extended periods and involving multiple components, lies in the inherent limitations of researchers’ partial perspectives. For example, identification systems can be extensive, involving numerous interconnected components, databases, and networks, making it difficult for any researcher to grasp every aspect fully. Furthermore, the evolution of technological systems over time adds another layer of complexity. Many technologies have a long developmental history, undergoing continuous changes, updates, and adaptations. Investigations might focus on a specific snapshot in time, but this limited perspective might not capture the system’s history or its future trajectories. To address this challenge, multi-site sampling methods are often employed to study technologies in diverse contexts. While these methods offer valuable insights into the varied practices and actors shaping identification technologies across sites, they have limitations in tracking how these practices and technologies evolve over time. Recognizing the significance of temporal change makes incorporating a multi-temporal perspective crucial. Tracking changes over time allows researchers to unravel the dynamic nature of identification practices and technologies, unveiling the influences, adaptations, and challenges that occur throughout their lifecycle. By adopting a multi-temporal approach, researchers can uncover the intricate interplay between social, technological, and contextual factors that shape identification systems and infrastructures.

As employed in this chapter, the term “multi-temporal” does not imply the coexistence of multiple linear temporal dimensions running in parallel. Rather, it pertains to the practice of sampling distinct moments in time. Adopting such an approach can unveil a diverse range of moments within the lifecycle of identification technologies, each offering a unique perspective on these technologies and their associated practices. For instance, tracing the emergence of identification systems often involves following key actors and scrutinizing the records they generate. This can encompass examining processes such as the design, construction, and maintenance of systems and policies, as seen in initiatives like the EU’s “smart borders” project (Bigo et al. 2012; Jeandesboz 2016) or the EU’s Visa Information System (Glouftsios 2019). Of course, chronicling a system’s entire lifecycle may be impractical and not always necessary. Researchers should instead make well-informed choices based on their familiarity with the subject matter and research objectives (Pollock and Williams 2010). These choices might involve focusing on one or more specific moments in the lifecycle of a sociotechnical identification system (e.g., planning, analysis, design, procurement, implementation, operation, and maintenance), recognizing that omitting certain stages will inevitably result in partial descriptions (Pollock and Williams 2010). Sampling multiple moments allows researchers to explore various stages of the development and operation of sociotechnical identification systems, depending on their research goals and objectives.

Within STS, there is an ongoing recognition of the contingency involved in the development and practical use of technological artefacts (e.g., Akrich 1992; Bijker 1993; Suchman 2007). This position questions deterministic perspectives on technological development and recognizes the role played by multiple factors, actors, and structural constraints in shaping uncertain sociotechnical outcomes. In a similar vein, at the intersection of STS and security studies, it is acknowledged that security practices and encounters are contingent (Amoore 2013; Kloppenburg and van der Ploeg 2020; Pelizza and Aradau 2024). This approach emphasizes the divergence of security apparatuses from a perceived coherent structure, as various social, political, and technological forces introduce contingency. By emphasizing contingency, I specifically aim to spotlight moments where outcomes in the development of data matching software are not predetermined but are instead influenced prominently by the circumstances, factors, and decisions made by the individuals and entities involved.

By concentrating on how sociotechnologies of identification move and circulate across organizations, influencing identification problems and solutions, this chapter addresses the dissertation’s goal of investigating the long-term development of identification systems and the building of transnational data infrastructures by identifying contingent moments in their lifecycle to explore how data matching expertise travels and circulates. It does so by proposing two heuristics for identifying contingent moments in the evolution of identification-related sociotechnologies, drawing on concepts and theories from long-term and genealogical analyses of data infrastructures and from social constructivist accounts of technology. Overall, the chapter aims to offer a methodological solution, based on the sampling of moments, to address research question 3:

How do knowledge and technology for matching identity data circulate and travel across organizations?

To address this research question, the chapter first considers methods suitable for tracking the spread of identification practices and technologies, and then provides heuristics for identifying moments where these circulations of knowledge and technology are prominent. The next section begins by classifying the various research strands that have investigated sociotechnologies of identification according to the types of sampling methods used. First, much ethnographic research has been influenced by debates about expanding the number of research sites and connecting observations to gain insight into more encompassing phenomena, such as the construction of a border identification regime that criminalizes migration. Second, scholars have begun considering the complex web of human and non-human actors that make up border identification encounters. However, neither of these approaches can show how practices and technologies have changed over time. Thus, there is a need to provide detailed accounts that include the multiplicity of sites and actors and the multiplicity of moments in time (Hyysalo, Pollock, and Williams 2019). Most often, longitudinal research is used to answer this need. This chapter introduces an alternative approach to longitudinal research by identifying analytically valuable moments in the evolution of sociotechnologies of identification.

This chapter suggests utilizing two heuristics drawn from the literature to investigate the evolution of sociotechnologies related to identification, focusing on moments of interpretative flexibility and gateway moments. Long-term and genealogical studies of information systems and infrastructures have convincingly demonstrated that temporal approaches make it possible to avoid a teleological view of the design of technologies (Edwards et al. 2009; Karasti, Baker, and Millerand 2010; Ribes and Finholt 2009; Williams and Pollock 2012). Instead, temporal approaches allow the inclusion of often overlooked actors and moments in the development of identification sociotechnologies, such as the numerous interactions between government actors and technology consultants as they work together to develop the problems and solutions for identification. As an alternative to longitudinal studies, this chapter proposes “multi-temporal sampling” as another approach for examining the evolution of identification technologies in border and migration control. Before going over the two heuristics for multi-temporal sampling, we need to comprehend how researchers typically study identification, which occurs across multiple sites and involves both human and non-human actors.

6.2 Sampling methods for dealing with the scale of sociotechnologies of identification

Ongoing scholarly debates in Science and Technology Studies (STS) about the intertwining of methods and outcomes of investigations may provide inspiration for research on identification technologies. In general, the discussion has drawn attention to an incompatibility between research methods that adopt a limited perspective and the understanding that technologies are shaped over time, in various contexts, and by various actors (Hyysalo, Pollock, and Williams 2019; Silvast and Virtanen 2023). This incompatibility has led to investigations that fail to account for the multiple and contingent lives of technologies, due to research designs that limit the scope of technology analysis. Such assessments are significant, especially when combined with the realization that methods do not merely describe the world, but also have the power to shape specific realities (Garfinkel 1964; Latour 1986; Law and Urry 2004). Therefore, it is necessary to acknowledge the limitations of methods used to study identification technologies, as they may inadvertently shape the specific realities they seek to describe.

Methods, in this view, can be thought of as devices that bring bits and pieces of the world together to enact certain realities (Latour 1986; Law and Urry 2004). As a result, researchers must answer the question of what kind of “ontological politics” (sic.) they participate in and what kinds of realities they contribute to. There are compelling reasons, for example, to use migrants’ practices and experiences as a starting point for understanding the forms of discrimination and unpredictability embedded in border technologies. However, this focus can cause one to miss other phenomena, such as the rise to power of a small oligopoly of technology companies and IT consultancy firms in the development of technologies for identifying people (Lemberg-Pedersen, Rübner Hansen, and Halpern 2020; Jones, Valdivia, and Kilpatrick 2022).

What we know about sociotechnologies of identification is primarily based on empirical studies that investigate encounters between people and technologies or on desk research and document analysis. This section suggests how methodological decisions in studying identification sociotechnologies can shape the research outcomes. Particular attention will be paid to the growing body of literature exploring the information systems that regulate the entry of people into EU territory. There are two main reasons why these systems were selected. First, there is much research on these EU systems, which are among the largest identification systems in the world. Second, the software I looked into, and will expand on later, is directly connected to one of these systems (the Visa Information System). In reviewing the literature on sociotechnologies of identification, it is possible to highlight thematic and methodological similarities and differences that affect the reviewed research findings.

This section first examines how research has dealt with the large-scale nature of identification systems. In other words, how research has dealt with the many people, places, and things involved in making, deploying, and using these systems. As we will see, it is possible to discern three different sampling methods to address these scale issues. First, researchers can multiply the number of research sites to account for the dispersed nature of the studied phenomenon. Second, researchers can increase the number of actors in their analysis. Third, and most importantly for this chapter, different moments in a technology’s lifecycle can be compared and analyzed. Much of the current literature on identification sociotechnologies focuses on the first and second approaches for dealing with scale issues. Section 6.3 of this chapter will cover the theoretical foundation and empirical value of using a multi-temporal sampling approach to broaden the analysis of IT for border and migration control.

6.2.1 Transverse sampling, or situating and tracing connections across sites

It is common to conceptualize information systems and database technologies that store data about mobile populations as part of larger structures, regimes, systems, infrastructures, and assemblages that bring together border and migration-related practices, rules, and meanings. Researchers are then confronted with the methodological conundrum of localizing and investigating these more comprehensive phenomena. One approach is to multiply the research sites to provide multiple accounts of the connections between sites and unravel distributed phenomena. Such approaches are inspired by multi-sited ethnography (Marcus 1995). This theoretical framework improved upon the limitations of earlier ethnographic methods, which relied on researchers physically “being there,” or observing and interacting with a specific group of people or community in a bounded field site for an extended period. While these accounts may be rich in empirical data, focusing on particular fieldwork locations was deemed insufficient for comprehending globally overlapping phenomena. When ethnographers trace links between sites, they do more than just put one site in a broader global context. For instance, an ethnographer can decide to focus on consumption processes within a capitalist political economy by following connections between sites.

For instance, the “Digital Networks, Migration and Gender” project (Tsianos and Karakayali 2010) used a multi-sited ethnographic approach to investigate European border policies and practices by bringing together data from sites and contexts across Europe. Triangulating observations from various actors (migrants, policy experts) and locations (Greece, Germany, and Italy) allowed the team of researchers to show how, for instance, classifications of asylum seekers do not simply follow a coherent system of governance. In principle, the Eurodac system categorizes asylum seekers into one of three categories based on whether they (1) regularly apply for international protection, (2) are discovered illegally crossing the border, or (3) are discovered illegally present within a Member State. However, based on interviews with officials conducted in 2011, the researchers discovered that there are differences in how these categories are applied and understood at the national and institutional levels. A German contact for the project, for example, asked the researcher why Greece primarily employed the second rather than the third category. The German practitioner’s apparent ignorance runs counter to the Eurodac system’s purported role in the development of a uniform EU asylum policy. By emphasizing the inconsistency and unpredictability of border and migration control, this case also highlights how approaches that sample and aggregate observations from multiple sites can provide counter-evidence to the view of coherent migration and border control practices and policies.

Multi-sited research can cut across preconceived groupings such as local sites and larger phenomena (Marcus 1995). However, caution should be taken not to postulate the existence of actors and phenomena beforehand. The problem may arise when researchers begin with a theoretical construct — e.g., the existence of an EU border regime that criminalizes migration — and investigate it by triangulating results from different sites and established actors. One might instead wonder whether a sense of global phenomena can emerge from tracing paths between heterogeneous actors. Actor-network theory, in particular, has cast doubt on a priori methods’ capacity to demonstrate how scale is accomplished in practice, and argued, instead, that “scale is the actor’s own achievement” (Latour 2005, 185) and that, hence, theoretical divisions between micro and macro should be dropped (see also, Latour 1999; Law 2006). In this way, one valuable methodological insight is to “localiz[e] the global” (Latour 2005, 173) by following the “connections leading from one local interaction to the other places, times, and agencies through which a local site is made to do something” (p. 173). In other words, researchers can allow any ordering of actors and locations to emerge from following connections rather than assuming specific orderings.42

Following the multiple circulating standards and categories is one effective way of tracking down these links (Latour 2005). For example, Pelizza (2021) has documented how regulations for adopting FBI biometric identification standards in equipment used at the Greek border “create trans-national associations with the EU Commission, corporate contractors and the US security regime” (p. 16). Another example is provided by Donko, Doevenspeck, and Beisel (2022), who have demonstrated how European migration management technology for identification stretches beyond the external EU borders. The authors describe how border checkpoints between Burkina Faso and Niger are linked to EU agencies such as the European Border and Coast Guard Agency (Frontex) via IOM border management information systems that record people’s biographic and biometric data. Furthermore, the EU-funded West Africa Police Information System connects these border checkpoints to all INTERPOL member nations via the global police communication system. In order to see the formation of such novel and distinct “flat ‘networky’ topographies” (Latour 2005, 242) of interactions between actors, it is crucial to avoid postulating them a priori.

So, if we imagine this sampling strategy as a line that crosses and extends across research sites, we can refer to it as transverse sampling for studying large-scale data infrastructures. Far-reaching relationships might appear by exploring the lines between local encounters and other places. Tracing these connections highlights the different interconnected actors involved in large-scale data infrastructures.

6.2.2 Perpendicular sampling, or incorporating ecologies of interlinked actors

The need for solutions to scale up the number of research locations to examine large-scale infrastructures runs parallel to the challenge of deciding who should be included in research designs. Scholars have debated the field’s proclivity to investigate socially, politically, and geographically marginalized individuals. Contrary to the abundance of research on marginalized individuals, less research has been conducted to “study up” on the wealthy and powerful (Nader 1972; Gusterson 1997). Similarly, many published studies on sociotechnologies of identification focus on the experiences of marginalized persons at the border and the related power imbalances. For instance, a growing body of literature (e.g., Kloppenburg and van der Ploeg 2020; Kuster and Tsianos 2016; Olwig et al. 2019) has investigated the contentious processes of collecting migrants’ biometric data in border zones. Following the studying up/down analogy, researchers interested in ethnography have tended to concentrate on the perspectives of people who are identified and controlled “down” at the border and less on those responsible for creating and maintaining these large-scale identification infrastructures.

Alternatively, scholars can arrive at more balanced portrayals of the numerous human and non-human actors involved by concentrating on moments and places where diverse actors interlink and impact one another. Examples in the literature show that researchers can thus include a more diverse set of actors: from interviews with local administrators and officers of international organizations at migration centers (Pollozek and Passoth 2019) to interviews with officials and experts from European, national, and international institutions (Glouftsios 2018; Pelizza and Loschi 2023; Trauttmansdorff and Felt 2023), and professionals in the security domain at industrial fairs (Baird 2017). Studies influenced by STS and ANT highlight the significance of non-human actors alongside these human ones (e.g., Pelizza 2021; Pollozek and Passoth 2019). If we visualize these approaches as highlighting the multitude of actors whose paths cross at sites, we can refer to this strategy of increasing the number of actors in studies as perpendicular sampling.43

The decision of whom to include in the study reflects the questions and politics of the researcher. The “Autonomy of Migration” (AoM) approach, for example, begins with migrants’ practices of subverting and appropriating mobility regimes and contrasts them with the Fortress Europe discourse (De Genova 2017; Scheel 2019). It has been argued that depicting migration and border controls as fortifications fosters a narrative that ignores the diversity of experiences on borders and migration (Mezzadra and Neilson 2013), thus instilling a paternalistic view of migrants as helpless victims in need of protection. Authors in this body of work seek to destabilize such tropes by centering on the experience of migrants and illustrating how migrants can circumvent and subvert restrictive migration and border control mechanisms. One may wonder, however, if the emphasis on migrants’ practices and tactics to demonstrate that migration politics are “a site of struggle” (Strange, Squire, and Lundberg 2017, 243) does not also contribute to a negative image of migrants as subversive of a rules-based international order. In addition, restrictive definitions of migrants run the risk of excluding other “privileged migrants” (Benson and O’Reilly 2009), like professionals living abroad, recipients of golden visas, and retirees who migrate. Other approaches provide methodologies that similarly begin from migrants’ perspectives, such as focusing on their acts and claims of citizenship (e.g., Isin 2013), or that adopt a more interventionist stance (e.g., Olivieri 2023).

As Laura Nader stated in her 1972 article on studying up, “we aren’t dealing with an either/or proposition; we need simply to realize when it is useful or crucial in terms of the problem to extend the domain of study up, down, or sideways” (Nader 1972, 8). Since then, the sites and domains of ethnographically inspired research have expanded in many directions. Ethnographers have expanded their work to include tracing the work of scientists in laboratories (for example, Latour and Woolgar 1986; Gusterson 1996) and policymakers in governance processes (Shore, Wright, and Però 2011). Furthermore, these developments coincided with researchers’ recognition of other forms of non-human agency. For example, Glouftsios (2021) theorized that the agency of EU information systems, as well as the labor of IT and security professionals to maintain those systems, shapes mobility governance throughout the Schengen area. More researchers are now looking into how devices used in security practices affect political agency due to the realization that technological properties shape practices and are intertwined with effects that shape the world (for example, Amicelle, Aradau, and Jeandesboz 2015).

Despite these achievements, the diversity of places and actors involved and the specialized and closed nature of border and security work can create barriers for researchers. A report by the “Advancing Alternative Migration Governance” project, for example, describes how the development of EU information systems “has been engineered in specialized and closed forums, such as expert workshops, task forces, technical studies, pilots, or advisory groups and technological platforms steering not just policies, but also the formulation of research and development priorities of funding programmes” (Jeandesboz 2020, 10). Moreover, according to the report’s authors, the influence of less visible actors, such as global actors who build, fund, and thus profit from border infrastructure construction, needs to be studied more. To support this effort, in what follows, I propose two heuristics that can serve as valuable analytical tools. The first heuristic, based on “interpretative flexibility,” allows us to identify significant moments when social groups challenge or transform identification technologies and practices. The second heuristic, employing “gateway moments,” helps us understand the compromises and adaptations involved in building identification infrastructures and deploying technologies in diverse contexts. Together, these heuristics provide a robust framework to analyze the development and impact of identification systems, considering the role of under-the-radar actors and their long-term implications for identification practices and infrastructures.

6.3 Multi-temporal sampling, or tracing the genealogies of data infrastructures

The reach of sociotechnologies of identification lies not only in their ability to operate across numerous sites and bring together diverse actors but also in how the technologies develop over time and integrate into infrastructures to have long-lasting effects (see also Ribes and Finholt 2009; Karasti, Baker, and Millerand 2010). While researchers tend to see rational processes of designing and implementing systems by system builders with well-defined goals, they often overlook the contingency in those processes over time. A closer examination of, for example, the well-known Second Generation Schengen Information System (SISII) reveals how the system’s creation was nearly derailed by “delays, an escalating budget, political crises, and criticisms of the new system’s potential impact on fundamental rights” (Parkin 2011, 1; see also Figure 6.2). In building the SISII, the “instability of system requirements,” which included the European Commission’s ignorance of how Member States’ end-users use the system, was frequently cited as a cause for delays, according to a report by the European Court of Auditors (ECA 2014, Special report No 03/2014:13). Consequently, research on the sociotechnologies of identification should correct the misconception that designing and building systems are purely rational processes. Instead, researchers must recognize that such systems result from negotiations, adaptations over multiple timescales, and interactions between actors from different organizations (Gasson 2006; Pollock and Williams 2009).

Figure 6.2: Chronology of the SISII (ECA 2014).

Tracing how knowledge and technologies for matching identities emerged and circulated thus necessitates uncovering choices and contingencies in the design of information systems over time. Longitudinal studies, employed to understand the complexity of technological development over time, can adequately account for contingencies, though their feasibility in practice can be hindered by the expansive scope and time commitment they demand. Social constructivists have long emphasized that the growth and spread of new technologies do not adhere to any simple linear models (e.g., Pinch and Bijker 1984; Hughes 1983b). Instead, various factors influence the trajectory, resulting in distinct sociotechnologies for identification. There are likely many forking paths that could have produced different technological outcomes, and avoiding teleological accounts is paramount. For example, Cowan’s (1985) research on refrigerator development showed that the choice of a compressor with its humming sound was not predetermined, as other cooling technologies like gas-based systems were also feasible alternatives. Another example can be found in the research conducted by Pollock and Williams (2009), which examined the integration of enterprise software systems. The study demonstrated how these integrations are complex and subject to change, not adhering to predictable patterns but evolving within intricate dynamics between customers and suppliers.

As noted, the practical implementation of such approaches is hindered by the extensive scope and time commitment they demand. Yet, by using methodological criteria, it becomes possible to overcome these challenges and identify pivotal moments in a software lifecycle. The literature offers only limited methodological criteria for recognizing such moments. For instance, Hyysalo, Pollock, and Williams (2019) suggest identifying “moments and sites in which the various focal actors in the ecology interlink and affect each other and the evolving technology.” In general, STS has historically used technological controversies and breakdowns as points in time for understanding how technology functions and the meanings ascribed by actors who are present or who claim to speak for others (Callon 1984; Marres 2007).

How can we meaningfully – if not systematically – collect data from various stages in the sociotechnical development of identification technologies? This chapter proposes two heuristics for detecting contingent moments in their developmental trajectory. First, tracing the moments when the meanings of technologies change can help to explain the emergence and establishment of standardized software. Second, tracing the moments when technology connects to other systems can help us understand the unfolding of large-scale identification infrastructures.

6.3.1 Interpretative flexibility and the making of a standard software

Social constructivist accounts of technological innovation provide the theoretical foundation for the first analytical heuristic to identify contingent moments in developing standardized identification systems. Constructivist approaches, such as the Social Construction of Technology (SCOT) and Social Shaping of Technology (SST), have criticized linear and deterministic models of innovation and technological development. Instead, these strands of scholarship have shown how technological development is a long-term and open-ended process in which change can be disorderly and protracted (Bijker and Law 1992; Bijker, Hughes, and Pinch 2012; MacKenzie and Wajcman 1999).

A fundamental premise of constructivist approaches is that technologies address problems that admit multiple possible solutions because of competing demands and requirements. Thus, the adaptability of technological designs shapes and is shaped by the interpretations and interests of specific (groups of) actors. To demonstrate this “interpretative flexibility” of artefacts, the basic template for conducting a SCOT analysis calls for first identifying “relevant social groups” (Pinch and Bijker 1984). Social groups are empirically introduced when a group of actors assigns a particular interpretation or meaning to a technological artefact. As such, a SCOT analysis is interested in how the different interpretations of social groups yield different problems to be solved by technology. The second step is to analyze how interpretative flexibility decreases through the process of “closure,” in which the number of alternative solutions narrows and “stabilizes” into one or more artefacts. It is important to note that the original SCOT approach has been criticized for its ambivalence toward structural contexts and toward the power imbalances that can render some actors invisible (Klein and Kleinman 2002). However, analyzing changes and closures in the interpretative flexibility of artefacts can be a valuable heuristic for identifying contingent moments of change in the lifecycle of identification sociotechnologies.

Information systems are typically (re)designed to be “generic” and “travel” across organizational contexts (Pollock and Williams 2009). Such genericity can be seen as corresponding to SCOT’s interpretative flexibility. Multiple meanings can be attributed to software, diverse uses are contemplated, and heterogeneous problems can be solved with its mediation. Over time, however, interpretative flexibility leaves space for stabilization. Similarly, using the metaphor of a biography, Pollock and Williams (2009) demonstrated how software suppliers must balance requirements as the software matures and accumulates functionalities through its history. For instance, they found that software may be more closely tailored to particular user requirements early in the development process. Later, when vendors want to transfer their software to new customers, they must identify overlaps between the sites’ needs. This moment of closure is labeled as “process alignment work” by the authors (p. 174). Power disparities between the supplier and large/small customers may skew the requirements in particular directions, eventually giving shape to best practices and standards.

Interpretative flexibility also characterizes security information systems. Scholars have noted that systems storing personal identity data can easily be used for new and derived purposes aside from their original objectives (Monahan and Palmer 2009). This type of “function creep” has been described in relation to the Eurodac biometric identification system (Ajana 2013). The system’s original purpose was to assist the Dublin system in preventing people from requesting asylum in multiple Member States. However, and as an attestation of the connection between migration and crime control (also known as “crimmigration”; Stumpf 2006), this scope has gradually expanded by allowing police authorities to query the database (Broeders 2011; Amelung 2021). As a result, it is essential to consider how diverse security organizations and their systems are interconnected and how this results in the emergence of new and contingent meanings.

The attention to such contingent moments can enable us to discover the “biography” of data matching systems and their standardization by considering tensions between “local” and “global” aspects of software, as well as the links between technical and organizational changes (Pollock and Williams 2009). For example, one of the recommendations related to the SISII system problems mentioned above was that the Commission “ensure that there is effective global coordination when a project requires the development of different but dependent systems by different stakeholders” (ECA 2014, Special report No 03/2014:07).44 Therefore, the report’s recommendation for how to deal with the issue of end-users’ differing perspectives was to establish a new organizational structure that could align the various actors, from Member States to international contracting firms.

Even a single European security artefact can have different meanings and be used differently by diverse states. Soysüren and Nedelcu (2022), for example, compared the deployment of the Dublin III regulation (for the Eurodac system) between a founding EU member (France) and an associated country (Switzerland). The researchers found that France took a more skeptical and decentralized approach to using the Dublin system for deportation, while Switzerland eagerly adopted the Dublin system and implemented it in a highly centralized manner. Interpretative flexibility thus characterizes even a single, shared European security instrument. The spatial and temporal reach of sociotechnologies of identification does not imply that these technologies have necessarily stabilized; instead, the systems may still be implemented in various ways, depending on the context.

Utilizing the SCOT and Biography of Artifacts and Practices (BOAP) approaches to analyze identification software provides several methodological advantages. Firstly, these approaches enable the examination of how the design of identification technology can reach points of closure, wherein customers perceive their problems to be resolved. Secondly, these approaches shed light on the local and global tensions inherent in the development of identification software. They highlight how software systems are designed to be generic and adaptable, capable of traveling across various domains, including security. By studying these tensions, researchers gain insights into the compromises and negotiations involved in adapting identification technologies to different contexts while maintaining their functionality and interoperability. Overall, the SCOT and BOAP approaches offer valuable methodological tools for comprehensively analyzing identification software, uncovering its design processes, and understanding its broader implications in diverse settings.

6.3.2 Gateways to infrastructures of identification

Infrastructure scholars have argued that (data) infrastructures can only “grow” and build up from pre-existing systems, practices, and communities rather than being purposefully constructed (for example, Edwards et al. 2007; Karasti et al. 2016; Monteiro et al. 2013; Star and Ruhleder 1996). Hence, studies like those on creating the infrastructure to connect scientific communities (for example, Star and Ruhleder 1996; Edwards et al. 2007) have demonstrated how challenging it is to build large-scale data infrastructures deliberately. In connecting various IT systems, data infrastructures assemble “a combination of standard and custom technology components from different suppliers, selected and adapted to the user’s context and purposes” (Pollock and Williams 2009, 286). At the same time, it is debatable when the assemblage of various elements qualifies as infrastructure. In light of this, Star and Ruhleder (1996) posed the question, “When is an Infrastructure?” According to their argument, the concept of infrastructure is fundamentally relational, as it only becomes infrastructure through its relationship with organized practices.

When several disconnected systems coexist, it is frequently uncertain which one will succeed or which technological and social compromises are necessary to allow the systems to work together. Moments where several systems compete are pivotal for infrastructure development, as previously incompatible systems may become able to work and communicate with one another. Edwards et al. (2007) have referred to the phenomenon of making contending systems compatible as a “gateway problem” (p. 34). We can find a paradigmatic example of a gateway technology to solve such problems in the historical development of electricity infrastructure: the innovation of the rotary converter, which made it possible to have co-existing forms of electric power (David and Bunn 1988; Edwards et al. 2007). The converter qualifies as a gateway technology because it enables compatibility between competing delivery systems, such as alternating current (AC) and direct current (DC). Modern-day equivalents of such a gateway technology are the international travel plug adapters which enable us to charge our electronic devices in different parts of the world without worrying about the various voltages and plug types used.

More generally, Edwards et al. (2009) define a “gateway phase” as a period “in which technical, political, legal, and/or social innovations link previously separate, heterogeneous systems to form more powerful and far-reaching networks” (Edwards et al. 2009, 369). According to a definition provided by David and Bunn (1988), what gateway technologies, in effect, do is to “make it technically feasible to utilize two or more components/subsystems as compatible complements or compatible substitutes in an integrated system of production.” Such sociotechnical arrangements, they say, would “permit multiple systems to be used as if they were a single integrated system” (p. 367). Information and communication technologies, for instance, heavily rely on gateway technologies, such as protocol converters that link telecommunications networks with various network protocols. Similarly, Hanseth (2001) uses the term gateway to refer to “elements linking together different networks which are running different protocols” (p. 72). Additionally, Hanseth argues that gateways can be just as critical to the success of large-scale network and infrastructure projects as better-known data standards.

In contrast to standards, gateways have gained less scholarly attention. Moreover, gateways are sometimes considered “as a consequence of a failed design effort as they are imperfect translators between the networks they are linking” (Hanseth 2001, 71). However, as Hanseth rightly notes, gateways can be crucial building blocks for connecting heterogeneous networks into larger-scale infrastructures. He gives the example of health care data, which may be standardized within countries. However, standardizing such data for cross-border data exchange has proven unattainable. A different strategy would be to develop “gateways to enable the (limited, but slowly increasing) transnational information exchange” (Hanseth 2001, 88). Building gateways to facilitate communication between heterogeneous systems is often more manageable than settling on a single standard.
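To make the gateway idea more concrete, the following minimal sketch illustrates the pattern in software terms: rather than forcing two registries onto a single shared standard, a gateway component translates records between their native formats. The schemas, field names, and the a_to_b function are hypothetical illustrations, not the data model of any system discussed in this chapter.

```python
# Minimal sketch (hypothetical schemas, not any real system's API): a "gateway"
# translates between two record formats so that two registries can exchange
# person records without adopting a single shared standard.

from dataclasses import dataclass


@dataclass
class SystemARecord:          # hypothetical national registry format
    surname: str
    given_names: str
    birth_date: str           # "DD-MM-YYYY"


@dataclass
class SystemBRecord:          # hypothetical partner-system format
    full_name: str
    dob_iso: str              # "YYYY-MM-DD"


def a_to_b(rec: SystemARecord) -> SystemBRecord:
    """Translate a System A record into System B's native format."""
    day, month, year = rec.birth_date.split("-")
    return SystemBRecord(
        full_name=f"{rec.given_names} {rec.surname}",
        dob_iso=f"{year}-{month}-{day}",
    )


if __name__ == "__main__":
    record = SystemARecord(surname="Jansen", given_names="Anna Maria",
                           birth_date="03-07-1985")
    print(a_to_b(record))     # SystemBRecord(full_name='Anna Maria Jansen', dob_iso='1985-07-03')
```

Even in this toy case the translation is imperfect: System B cannot recover which part of full_name is the surname, echoing Hanseth’s point that gateways are often imperfect translators between the networks they link.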

The case of the EU Digital COVID Certificate Gateway

As an example, consider how the European Union (EU) responded to the Covid-19 pandemic by creating the “EU Digital COVID Certificate Gateway” to authenticate digital COVID certificate signatures across EU member states (European Commission 2021a, 2022). The European Commission established this gateway in 2021 as a means “through which all certificate signatures can be verified across the EU” (European Commission 2021b). The EU’s member states would have had difficulty agreeing on establishing a central health certificate database during the urgency of a pandemic. Since no personal data would be exchanged via the EU gateway, the system did “not require the setting up and maintenance of a database of health certificates at EU level” (European Commission 2021b). This choice was also significant for Member States because it allowed them to “retain flexibility in how they link the issuing and verification of their certificates to their national systems so long as they meet [the] common standards” (sic.). In those moments of urgency, “most Member States [had] decided to launch a contact tracing” (European Commission 2020b). Nevertheless, through “decentralised systems,” those 20 or so apps could be made “interoperable through the gateway service” (sic.). As a result, a sophisticated contact-tracing infrastructure quickly developed as EU member states (and others) were able to link their national applications while maintaining their national back-ends and data standards. At the same time, this EU contact tracing system revealed variations in how the Member States applied the rules in their domestic context, such as how much time to consider a vaccine’s viability before expiration (Calder 2022). The EU gateway exemplifies gateway technologies’ critical but underappreciated role in establishing and maintaining networks within larger-scale infrastructures. It also shows how gateways can be either short-lived or long-lived: the EU Gateway was already offline at the time of writing.
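The verification pattern described in the box can be sketched in a few lines of code. The sketch below is an illustrative simplification under stated assumptions: the actual EU Digital COVID Certificate system relies on COSE-signed tokens and X.509 document-signer certificates, whereas here hypothetical Ed25519 keys and a plain Python dictionary stand in for the gateway-distributed trust list (the example requires the third-party cryptography package).

```python
# Illustrative sketch only, not the real DCC protocol: each (hypothetical) member
# state signs certificates with its own national key, and the gateway circulates
# only the corresponding public keys (the trust list), never certificate data.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

national_keys = {"NL": Ed25519PrivateKey.generate(), "FR": Ed25519PrivateKey.generate()}

# The trust list distributed through the gateway: country code -> public key.
trust_list = {country: key.public_key() for country, key in national_keys.items()}


def verify(issuer: str, payload: bytes, signature: bytes) -> bool:
    """Check a certificate's signature against the issuer's key in the trust list."""
    try:
        trust_list[issuer].verify(signature, payload)
        return True
    except (KeyError, InvalidSignature):
        return False


if __name__ == "__main__":
    payload = b'{"name": "Anna Jansen", "vaccinated": true}'
    signature = national_keys["NL"].sign(payload)
    print(verify("NL", payload, signature))   # True: any connected state can verify it
    print(verify("FR", payload, signature))   # False: wrong issuer key
```

The point of the pattern is that only public verification keys circulate through the gateway; the certificates and the underlying health data remain with the issuing state and the traveller.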

According to Egyedi (2001), there are different kinds of gateways with varying degrees of standardization and, thus, varying degrees of flexibility. Gateways, as per her typology, can be dedicated, generic, or meta-generic. In her view, a dedicated gateway is designed to link only predetermined subsystems and is not or only minimally standardized. She regards the AC/DC rotary converter, for instance, as a dedicated gateway for converting between those two types of current. On the other hand, generic gateways are standardized and thus can connect an undetermined number of subsystems. We can think of the EU Digital COVID Certificate Gateway (see box) as an example of a generic gateway because it established a common standard that any EU or non-EU country could adopt (European External Action Service 2021). For example, South Korea, a non-EU country, established a connection with the EU gateway in July 2022 to allow “certificates of vaccination issued in South Korea to be valid in EU countries, and vice versa” (Kim 2022). Lastly, the best way to understand meta-generic gateways is through examples such as the OSI reference model (ISO/IEC 1994), which specifies a foundation for computing system communications. These reference models serve as frameworks for developing specific generic standards, rather than defining them. This typology of gateway technologies will help us understand how standardized and adaptable the gateways are in linking heterogeneous identification systems into more extensive networks.

This chapter proposes operationalizing the gateway technology concept as a second heuristic for identifying moments where systems and infrastructures intersect. The term “gateway moment” will be used broadly to refer to instances in which different systems and communities of practice are linked together into larger infrastructures using gateway-like technologies. Such gateway moments are thought to reveal structural constraints that must be reconciled to connect new components in the emergence of identification infrastructures.45

The following section uses these theoretical concepts as heuristics to identify points in the lifecycle of a system for matching people’s identity data that can provide insight into the evolution of practices and technologies for identifying people in the context of migration, borders, and security.

6.4 Methodology

This section draws on the fieldwork data collected at a software vendor for matching people’s identity data in the context of border security and migration management. This section builds on Chapter 5’s findings on the specific deployment of the software at The Netherlands’ government immigration agency. In addition, the focus on other software deployments in EU and Member State identification systems sheds light on different stages in developing and using the ELISE software. As a result, the study illustrates the diverse set of actors involved in practices of identifying and circulating data about people on the move at the European border (Pelizza 2019). As detailed in Chapter 3, I joined the company “WCC Group” (WCC) to investigate the design, use, and evolution of a software product dealing with data matching and deduplication. Since I was a temporary member of the ID team, I could visit the company’s headquarters in Utrecht (The Netherlands), review all necessary paperwork, conduct one-on-one interviews with relevant company personnel, and sit in on some of the team’s group meetings.

In the course of the research, seven interviews were conducted with individuals from the company WCC, spanning from July 2020 to July 2021. The interviews aimed to illuminate events in the history of their identity-matching software system. I asked people with different profiles about their connections with current and potential customers in the security and identity market. Based on their profiles, we can divide these participants into two clusters. The first cluster comprises WCC’s “ID Team” members who hold consultant, pre-sales, and solutions manager positions. The second group consisted of the more technically minded; among them were a senior software developer and a user experience designer. Participants described their knowledge of building and deploying the company’s software in six semi-structured interviews, each lasting approximately an hour. In addition, I conducted observations of several extensive meetings among WCC staff as part of my fieldwork. These meetings served as briefings on various aspects of WCC’s solutions and provided valuable insights into topics similar to those explored in the interviews. These observations were documented through detailed field notes.

The interview protocol (included in the Annex) comprised a series of initial questions to understand the interviewee’s role at WCC and their insights regarding the challenges and solutions in identity matching. The interviews commenced by inquiring about the interviewee’s position and function within the organization, with questions adapted in a semi-structured manner based on the individual’s profile and experiences. Participants who had prior involvement with the EU-VIS or MITRE Challenge projects (see below) were presented with tailored questions, as these projects were perceived as potentially pivotal moments in the software’s development, the dissemination of data matching expertise, and its securitization. Interestingly, the findings from these inquiries challenged my initial hypothesis, revealing instances where name matching expertise did not consistently circulate as anticipated. The interviews delved into the complexities of matching identity data across diverse organizations and geographical locales, including EU Member States and national or international institutions, and addressed the crucial role of achieving interoperability in identity data. These inquiries, which focused on the software’s integration into broader sociotechnical networks, were instrumental in pinpointing and examining gateway moments in the evolution of the data matching software.

In particular, the protocol also encompassed questions designed to extract insights from the interviewee’s extensive experience, such as the significance of various data categories in identity matching and the extent to which customers clearly understand their data matching needs. This line of questioning helped reveal how the company adapted to its customers’ specific contexts and requirements, potentially influencing the design of the software to meet these specific needs. Moreover, it shed light on how customers embraced and integrated the proposed data matching software. Consequently, these questions played a pivotal role in uncovering moments of interpretative flexibility within the software, particularly in how customers configured data matching rules — whether they adhered to defaults, followed suggested configurations, or required extensive customization. In turn, this line of questioning explored the role of the software’s defaults and configurability in disseminating data matching expertise. This understanding was instrumental in analyzing the deployments of EU-VIS and IND and discerning their differences in terms of configurations.

Furthermore, the interview protocol probed into the generification of software for identity matching, exploring its adaptability across different domains, including employment and security. These questions were designed to uncover instances of interpretative flexibility within the data matching software as it navigated various domains. More specifically, the questions aimed to gain insight into the software’s evolution as it extended its reach into security contexts, notably those within law enforcement and migration management. Furthermore, the interview protocol delved into the evaluations surrounding WCC solutions, examining the concept of “vendor-neutrality” and exploring potential challenges arising from proprietary formats or algorithms. It sought to understand how software solutions like ELISE accommodated diverse data formats. This facet was considered pivotal in comprehending shifts in interpretative flexibility, notably because it involved the incorporation of biometric formats, offering valuable insights into the software’s securitization.

The data analysis aimed to establish links between various data fragments that document the software’s historical development, pinpoint moments of contingency, and construct a narrative that, although fragmented, remains meaningful. Throughout this endeavor, I pursued threads that connected the diverse actors and entities that played roles in the software’s development. The company’s partnership with Accenture, for example, was identified through a notable thread of jointly undertaken projects. To create a structured narrative, I initially organized the data fragments around two overarching themes. First, I traced the software’s evolution as it ventured into the identity and security domain, aiming to unveil moments of interpretative flexibility and the contingencies surrounding this transition. Second, the investigation extended into the software’s assimilation within the EU-VIS and IND systems. By scrutinizing the configurations tailored for these two distinct systems, I identified these instances as gateway moments capable of shedding light on more expansive transformations within the realm of identification.

6.5 Tracing fields of identification through the evolution of software for matching data: Interpretative flexibility moments

This section delves into the biography of WCC’s ELISE data matching system, exploring the evolutionary trajectories of data matching within transnational and commercialized security infrastructures. The exploration is divided into two parts, each based on one of the two heuristics, which serve as methodological alternatives to conventional longitudinal research. The first part uses the “interpretative flexibility” heuristic to pinpoint moments where social groups have challenged, reshaped, or restricted the meanings associated with the ELISE data matching system. This first heuristic will focus our attention on the company’s foundational roots while highlighting the complexities of adapting and generalizing the data matching software for diverse contexts and users. This adaptability ultimately paves the way for ELISE to assume a significant role within international identification systems. The second part uses the “gateway moments” heuristic to direct attention towards instances when ELISE was integrated to connect heterogeneous identification systems. This second heuristic will focus our attention on the role of data matching in bridging disparate identification systems and contributing to forming more extensive infrastructural networks.

6.5.1 Pioneering data matching in the dot-com era

The WCC company’s early days reveal a surprising amount of interpretative flexibility regarding what the data matching software should accomplish and for whom it should be helpful. When the company was first conceived in 1996, its founders saw the software primarily as a generic database technology for matching various “things.” In conversations with multiple media sources (such as Betlem 2011), and as shared with me by company personnel, one of the co-founders and former CEO, Peter Went, recalls the genesis of the product idea as follows. Mr. Went recounts that this concept initially emerged from his experience of encountering unsatisfactory outcomes while searching for a house. Furthermore, he was informed of similar challenges faced by friends during their online job hunts. The primary issue Mr. Went identified was that search results drastically declined when highly specific search criteria were employed. Recognizing this limitation, he saw the need for more advanced search engines capable of meeting users’ expectations.

The company’s website from 1998, preserved in the Wayback Machine,46 provides insight into WCC’s perspective on their “flexible approach to searching” and the rationale behind why they considered it superior to “traditional searching techniques.” This archived webpage offers a valuable glimpse into the company’s mindset during this era and its approach to data matching and search technologies. The following excerpt explains why WCC viewed this approach as a superior alternative (emphasis in original):

There are a number of drawbacks to using traditional searching techniques. Firstly, traditional searching techniques simply go through a database — in a more or less intelligent manner — looking for a combination of keywords. This combination of keywords is either found or is not found. This type of searching is known as binary or hard searching. In practice, often a more “soft” and flexible approach to searching is desired. The fuzzy searching concept offers more flexibility than traditional (hard) searching. With fuzzy searching techniques, it is possible to define ranges in which each search criterion can lie, rather than specifying exact values for each criterion.

Another disadvantage of traditional searching methods is that they perform what is called one-sided searching. This means that a search is performed from the viewpoint of one side only. In practice, often a two-sided search is wanted that considers the preferences of the supplying side as well as the demanding side.

This excerpt illustrates WCC’s alternative interpretation of search technology, setting it apart from what it calls “traditional searching techniques” that necessitate strict adherence to all specified criteria and follow a “one-sided searching” model. In contrast, WCC proposed an alternative approach characterized by fuzzy matching, which advocates the utilization of permissible ranges for each search criterion. Moreover, it introduces a “two-sided search” concept that considers matches between a “supplying side” and a “demanding side.” To illustrate, in the context of housing searches, the supplying side may comprise available houses listed on a website, while the demanding side represents individuals seeking a house. Fuzzy matching facilitates the identification of matches that align with specific criteria, such as proximity or budget, without rigidly adhering to precise criteria. This interpretation is embedded in the product’s design features, as described on the company’s 1998 website:

ELISE is especially designed for matching purposes. After relevant information has been entered into the system, ELISE is used to find possible matches. ELISE finds matches by calculating match scores to quantify the degree of mutual interest between both parties. By definition, a match score will lie between 0% and 100%, with higher match scores indicating greater mutual interest between the parties involved. The one hundred best scoring matches are shown to the user for further (manual or electronic) processing.

Due to the highly specialised nature of ELISE, it finds matches very quickly and very accurately at the same time. This is in contrast with non-dedicated systems that are usually inflexible and slow compared to ELISE. ELISE was designed with optimal speed and flexibility in mind and uses a proprietary database to meet the high-speed requirements set by its match engine(s). It is impossible to obtain these high speeds (thousands of transactions per second) with standard relational database products.

Traditional search methods, as referred to by WCC, primarily rely on what are known as Boolean expressions, such as whether a house’s price is less than or equal to the desired price specified in a search, resulting in binary outcomes: either a match or no match. In contrast, WCC’s software introduces a probabilistic data matching approach. Under this approach, search results are ranked based on the likelihood of a match, reflecting a notion of “mutual interest between the parties involved.”
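To make the contrast concrete, the sketch below juxtaposes a hard, Boolean criterion with a soft, score-based one, loosely following the housing example. The prices, tolerance, and scoring rule are illustrative assumptions and do not reflect ELISE’s actual algorithms.

```python
# Illustrative sketch only: a simplified contrast between a "hard" Boolean criterion
# and a "soft", score-based one, loosely following the housing example above.
# Prices, tolerance, and scoring rule are assumptions, not ELISE's actual algorithm.

def boolean_match(house_price, max_price):
    # Hard search: a listing either satisfies the criterion or it does not.
    return house_price <= max_price

def fuzzy_score(house_price, ideal_price, tolerance=50_000):
    # Soft search: listings near the ideal price still score, just lower.
    distance = abs(house_price - ideal_price)
    return max(0.0, 1.0 - distance / tolerance)

listings = {"A": 240_000, "B": 265_000, "C": 410_000}
print({k: boolean_match(price, 250_000) for k, price in listings.items()})
# {'A': True, 'B': False, 'C': False} -- a binary cut-off
print({k: round(fuzzy_score(price, 250_000), 2) for k, price in listings.items()})
# {'A': 0.8, 'B': 0.7, 'C': 0.0} -- ranked results instead of a yes/no answer
```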

But who are these parties? Their definition is characterized by interpretive flexibility. According to the 1998 webpage, WCC envisions that “ELISE can basically be used in any markets where products are being offered and demanded.” The page went on to specify that WCC was actively addressing the following markets with ELISE: the employment market, real estate, cars, and dating. Moreover, other search systems are depicted as sluggish and lacking in flexibility. This highlights the issue WCC aimed to address: the limitations of conventional relational databases when it comes to efficient data retrieval. These relational databases excel in data storage but tend to struggle with quick data retrieval due to their technical architecture.47 In response to this challenge, WCC’s product employed alternative database technology, which allowed for faster data retrieval and enhanced search performance.

WCC’s alternative interpretation of search technology presents a multifaceted approach to the problem. Firstly, it identified the need for a more flexible search and matching engine capable of handling multiple criteria with varying degrees of importance. This approach viewed search as a two-sided matching problem rather than a one-sided search. Secondly, it identified the inefficiencies of search methods based on relational database management systems, deemed too sluggish for the rapidly growing internet-driven economy. Thirdly, it positioned its solution as versatile and universally applicable, designed to streamline search processes across diverse industries where products were both offered and sought after. This perspective is further elucidated in a 2007 blog post by Mr. Went on the company’s website, where he touches upon another dimension of the evolving internet landscape. Specifically, he emphasizes the transformation in reliance on search experts, underscoring the shifting dynamics in how people conduct searches and access information in the digital realm48:

[M]arket-focused search engines can sometimes lower the number of total hits to zero. This is because their database technology is too rigid. For example, a user searching cars.com for their dream car, with the exact options they want, under a certain number of miles, within a certain distance from their home, and at a particular price is probably going to receive a “Sorry, but no results were found using the search criteria you entered” message. At that point, the user is forced to adjust the different criteria to see what the limiting factors may be. […]

Matching technology is rapidly developing a devoted following among staffing agencies, dating services, travel industry, real estate industry, automotive sales, etc. These industries are all built around complex searches that require a complicated database and an industry expert to perform the search. They are quickly finding that our matching technology lessens the dependence on the human expert and provides much more accurate, meaningful search results.

The quoted passage emphasizes the limitations inherent in conventional search methods tailored to specific markets. It suggests that these methods often yield unsatisfactory results when stringent criteria are applied, necessitating users to possess the expertise to fine-tune their searches. In contrast, WCC’s proposed data matching solution is presented as a versatile alternative that can be applied across many industries. This approach aims to reduce reliance on human expertise, marking a departure from “market-focused search engines” where search accuracy hinges on the involvement of human experts, such as intermediaries in the travel or real estate sectors. Instead, WCC’s approach is portrayed as market-independent and consumer-empowering. This shift reflects a broader transformation coinciding with the rise of the Internet and emerging IT technologies, characterized by removing intermediaries and empowering consumers. During the dot-com and new economy era, technology was expected to reshape industries and consumer behavior by leveraging the value of information.49 For example, technological innovations were anticipated to revolutionize how people searched for new homes, job opportunities, or planned vacations (Benjamin and Wigand 1995). Consequently, pioneering technologies like advanced search engines were seen as catalysts for redefining or replacing the roles previously held by essential intermediaries, such as travel agents and real estate agencies (Wigand 2020).

When examining WCC’s founders as the initial social group, it becomes apparent that their interpretation of the challenges faced by organizations at that time can be summarized as follows. In order for goods and services to be effectively discovered by customers within the context of an emerging Internet-based economy, it was crucial for organizations to implement flexible search mechanisms. Technically, this problem definition consequently led to the development of the fuzzy data match system as a generic, high-performance solution that could be universally applied across various domains. By conceptualizing the problem in this way, the founders aimed to address the overarching need for efficient discovery and enable organizations to adapt to the evolving digital landscape. This approach sought to provide a versatile and adaptable solution that could facilitate the connection between supply and demand across diverse sectors of the economy. The question is whether this interpretation was successfully put into practice or rather challenged by other social groups.

6.5.2 Narrowing markets, narrowing design flexibility

Although customers were quite diverse at the time, we could consider them another social group. As such, it would seem that customers accepted WCC’s problem definition and the technical solution. For example, based on interview data and the customers mentioned in old WCC marketing materials, we can understand that WCC had customers in diverse domains. At the time, WCC customers used the software solution to match house seekers with suitable houses, job seekers with relevant jobs, wine lovers with wines that suit their tastes, and tourists with their ideal holiday booking. However, despite the variety of industries and sectors served by WCC, not all amounted to a sizable market. Therefore, as the following interviewee recalls, WCC gradually reduced the software solution’s interpretative flexibility and redefined the problem by focusing on fewer, more commercially successful domains, such as public employment:

So, yes, we [WCC] were very broad. The first customer, a major customer, was the Dutch employment agency UVV. And that made us think. Because all those other customers were small amounts. And the UVV was a significant customer, and that convinced management at the time that it was a great match. And the main reason for that was—and we are still uniquely ourselves in that regard even compared to the open source competition you see now—the bidirectional matching. So ELISE can not only include your own search criteria but also what the other party wants. […] So what the job needs and what the employee is looking for does matter. And that is then matched with each other and that is what ELISE can do very well. So, that’s the reason we entered the labor market. And that has now been completely expanded into much more than just matching wishes with supply, and we are now also solving all kinds of preconditions. (Interview with WCC senior manager, May 31, 2021)

WCC shifted its focus towards a more concentrated set of specific markets while retaining the fundamental features of the original ELISE data matching software, including its distinctive bidirectional and fuzzy matching capabilities. This strategic shift was grounded in market size and the applicability of the interpretation and technological design to these markets. As the interviewee underscores, the bidirectional search design, based on the concept of supply and demand, found particular resonance in certain domains and emerged as a distinctive selling point for this technological feature. Bidirectional matching worked particularly well for public and private employment services that link jobseekers to suitable job opportunities and vice versa. In this context, it leverages fuzzy matching algorithms to gauge compatibility between two sets of data records: job descriptions and job applicants’ preferences.
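The sketch below illustrates the idea of two-sided matching in the employment setting described here: both the job’s requirements and the candidate’s preferences feed one mutual score. The fields, weights, and scoring rule are invented for illustration and are not WCC’s implementation.

```python
# Illustrative sketch of "two-sided" (bidirectional) matching in an employment setting:
# both the job's requirements and the candidate's preferences feed one mutual score.
# Field names, weights, and the scoring rule are hypothetical, not WCC's implementation.

def overlap(required, offered):
    required, offered = set(required), set(offered)
    return len(required & offered) / len(required) if required else 1.0

def mutual_score(job, candidate):
    # Demand side: how well does the candidate satisfy the job's requirements?
    demand = overlap(job["required_skills"], candidate["skills"])
    # Supply side: how well does the job satisfy the candidate's wishes?
    supply = 1.0 if job["location"] in candidate["preferred_locations"] else 0.5
    # Express the combined result as a percentage, echoing ELISE's 0-100% match scores.
    return round(100 * (demand + supply) / 2)

job = {"required_skills": ["welding", "forklift"], "location": "Utrecht"}
candidate = {"skills": ["welding"], "preferred_locations": ["Utrecht", "Amersfoort"]}
print(mutual_score(job, candidate))  # 75
```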

From a technical perspective, employing these functionalities requires the organization adopting the data matching system to perform a data mapping exercise. This entails taking the organization’s data, often residing in a Relational Database Management System (RDBMS), and aligning it with ELISE’s Object Model. The object model transforms data from databases, which are typically organized in the relational model of an RDBMS as a collection of tables, each consisting of rows and columns, into objects with associated properties. In doing so, the data can be precisely characterized in accordance with the supply and demand attributes integral to WCC’s data matching model, facilitated through a programming interface. Subsequently, the data originating from the organization is synchronized with the ELISE database using a tool known as the ELISE Data Replicator. This synchronization process ensures that the data is up-to-date and consistent between the organization’s systems, where data is stored, and the ELISE database, which facilitates the searching and matching. Together, these steps make the organization’s data available for the ELISE data matching system.50
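As a rough illustration of this mapping and replication step, the sketch below reads rows from a relational table and re-expresses them as objects with properties before handing them to a matching engine. The table, class, and field names are hypothetical and do not correspond to ELISE’s Object Model or the Data Replicator’s actual interface.

```python
# Hypothetical sketch of the mapping and replication step described above: rows from a
# relational table are re-expressed as objects with properties before being handed to a
# matching engine. Table, class, and field names are invented; this is not ELISE's API.

import sqlite3
from dataclasses import dataclass

@dataclass
class CandidateObject:
    # Stand-in for an entry in an "object model" used by the matching engine.
    candidate_id: int
    skills: list
    location: str

def replicate(conn):
    # Read rows from the source RDBMS and yield object-model records,
    # roughly analogous to what a data replicator would keep in sync.
    for cid, skills, location in conn.execute("SELECT id, skills, location FROM candidates"):
        yield CandidateObject(cid, skills.split(","), location)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidates (id INTEGER, skills TEXT, location TEXT)")
conn.execute("INSERT INTO candidates VALUES (1, 'welding,forklift', 'Utrecht')")
print(list(replicate(conn)))
# [CandidateObject(candidate_id=1, skills=['welding', 'forklift'], location='Utrecht')]
```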

Customers, as a social group, expressed their requests in confidential vendor-client interactions, but evidence of these requests can be found in other documents and reports. For instance, the 2003 annual report introduces the ELISE Data Replicator as “a powerful tool to automatically synchronize ELISE with the most complex database designs.” This description implies that customers needed a means to map their unique database structures to the ELISE data matching system, likely for scenarios that WCC did not always anticipate. Consequently, the Data Replicator can be interpreted as a response to accommodate the diversity in database designs sought by customers. Similarly, the introduction of the ELISE Data Model can be attributed to customers’ demands for an effective mapping solution to the data matching engine. The stability of these two components suggests that social groups have come to perceive those issues as resolved.

At this juncture, the software design had reached notable points of closure. The ELISE object model and replicator remained core components of the ELISE solution, even though, as we will soon discover, markets were undergoing significant shifts. Despite the fact that the range of applications for the searching and matching tool had narrowed, moving away from encompassing diverse contexts like e-commerce and increasingly focusing on employment and identity domains, the core data matching system itself retained a notably “generic” and context-independent nature. Rather than diversifying the core technology, the company embarked on the creation of more dedicated, context-specific platforms that were built upon this “generic” ELISE system. Despite the varying degrees of success experienced by these context-specific applications, the foundational principles of the data matching technology, which revolved around mapping and replicating data within the ELISE object model for matching purposes, remained rather stable.

6.5.3 Expanding data matching horizons in the post-9/11 landscape

As the interpretative flexibility of the ELISE system began to diminish, the 2002 Annual Report highlights a concurrent expansion into the domain of law enforcement. This report delineates WCC’s target markets as Employment, Crime Fighting, Travel, and Other (WCC 2002). Notably, previously listed markets such as “dating, real estate, used cars, pharmacy” were reclassified under the broader category of “Other” and were no longer actively pursued. The addition of “Crime Fighting,” later renamed “Law Enforcement” in 2003 (WCC 2003), to the company’s markets is described in the report as follows:

WCC has entered this new market [Law Enforcement] in 2002 inspired by the focus worldwide on crime fighting and anti-terrorism after the 9-11 attacks in 2001. WCC has developed, in collaboration with one of Europe’s leading Forensic Science institute, an application for matching (crime scene) DNA strings with existing DNA profiles. The mutual expectations with respect to this market are high, because in our opinion DNA and DNA matching significantly enhance the results of forensic research and crime prevention. (p. 9)

The 2002 annual report notes how this feature arose out of a pilot project for a student’s thesis:

WCC also met serious interest in a relatively new area, DNA-matching. A request from a student for an internship led to building a prototype for DNA matching as subject of his thesis. The prototype that was build shows impressive results and has attracted a, sought after, potential launching customer. Early 2003 the final prototype version should enable us to show the powerful solution ELISE has to offer to the crime fighting industry. (p. 19)

From the excerpt we can deduce that, amid the renewed focus on crime fighting and anti-terrorism efforts following the tragic events of September 11, 2001, new social groups such as the student and the European “leading Forensic Science Institute” started to re-interpret the problems to be solved by data matching: matching DNA profiles found at crime scenes with DNA profiles stored within a database. This reintroduction of interpretative flexibility required WCC to adapt ELISE’s data matching capabilities.

These new meanings can be linked to re-interpretations of two aspects of the original ELISE system’s design. Firstly, DNA matching deviates from the bidirectional supply and demand model that characterized the software’s original design. Forensic investigations involve a unidirectional process, focusing on matching crime scene DNA strings to existing DNA profiles rather than the reverse. From a technical standpoint, ELISE could be readily adapted to maintain its bidirectional search model while only executing the matching in one direction. Secondly, DNA samples can be represented as data strings through a process that translates the DNA code into sequences of characters, much like textual or numerical data strings. This representation aligns with the principles of data matching, which involve comparing one data string with another to identify patterns, similarities, or matches. From a technical standpoint, this compatibility likely facilitated the seamless integration of DNA matching into ELISE.
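The following toy example illustrates the second point: once DNA is encoded as character strings, it can be compared with generic string-similarity logic. The k-mer overlap measure and the sequences are invented for illustration; they are not a forensic method and not how ELISE scored DNA profiles.

```python
# Toy illustration of the point above: once DNA is encoded as character strings, it can
# be compared with generic string-similarity logic. The k-mer overlap measure and the
# sequences are invented; this is not a forensic method and not how ELISE scored DNA.

def kmers(sequence, k=3):
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

def similarity(query, reference, k=3):
    q, r = kmers(query, k), kmers(reference, k)
    return len(q & r) / len(q | r) if q | r else 0.0

crime_scene = "GATTACAGGT"
profiles = {"P-001": "GATTACAGGA", "P-002": "CCCGGGTTTA"}
print({pid: round(similarity(crime_scene, ref), 2) for pid, ref in profiles.items()})
# {'P-001': 0.78, 'P-002': 0.14} -- candidates are ranked rather than matched exactly
```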

The integration of DNA matching into ELISE’s data matching capabilities can be viewed as part of a larger reintroduction of design flexibility within the data matching system. The adaptations reflect a response to reinterpretations in matching various types of data for identification purposes. This shift is evident in a 2005 webpage where ELISE’s matching capabilities were reimagined to encompass the “recognition of objects” (WCC 2005). The website describes this feature as follows:

ELISE enables recognition (matching) of objects in massive amounts of data in a sub second response time. In fact it does not matter what type of data is used; finger prints, pictures of faces, DNA structures, ELISE is able to find the best matching profiles based on data containing millions of profiles and thousands of characteristics per profile.

The transformation in the design of ELISE is notable because it extended matching capabilities to new forms of data and reverted to a uni-directional search. First, these new forms of data are handled as what the web page refers to as Binary Large Objects (blobs), signifying a departure from solely matching textual data. Now, ELISE could match not only text-based information but also images of faces, fingerprints, or diverse biometric profiles. Data from this array of sources could be effectively modeled and integrated into the data matching system. This expansion was presented as a development that opened the door to practical, cross-disciplinary solutions. The company’s website highlighted a myriad of potential applications, including crime matching, DNA analysis, fingerprint matching, disaster victim identification, stolen art recovery, car theft tracking, missing children locating, financial misdemeanors prevention (such as credit card fraud), anti-corruption efforts, and combating child pornography.

Second, when dealing with such identity data, data matching tends to revert to a unidirectional process, as the concept of supply and demand mapping no longer applies. A person’s identity record typically has no requirements to match, unlike a job announcement in a database that specifies who should apply. Still, the company could solve both problems using the same data matching engine, translating and reconciling data matching across contexts despite the different problem definitions held by the customers, as relevant social groups, in the employment services domain and the identity and security domain. Because unidirectional search is just a tweaked version of bidirectional matching, there is no technical incompatibility between the two designs. As the software can function without the user being aware of this difference, the software design has reached a point of closure as customers perceive their problems to be resolved.

In the post-9/11 era, with a heightened global emphasis on crime fighting and counter-terrorism efforts, data matching underwent a significant reinterpretation within a context that demanded interpretative flexibility. Rather than merely applying data matching to markets where products were offered and demanded, it was re-conceived as a potent tool for addressing critical challenges in policing and security. This shift in meaning necessitated the system’s adaptation to handle various forms of data beyond the typical text-based matching found in HR and staffing or e-commerce. Specifically, it required the capacity to work with binary data, notably biometric images and profiles. These binary data forms were made compatible with the ELISE data matching, for example, by converting them into text data, which the system could process for matching purposes. For instance, a fingerprint scan can be digitally processed to generate a biometric template, a collection of extracted characteristics that could be stored and employed for matching.
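As a purely illustrative sketch of this idea, the snippet below shows a biometric “template” as a short feature string that can be compared like any other data string, yielding a score rather than an exact match. Real fingerprint templates and matching algorithms are far more complex and typically proprietary; nothing here reflects ELISE’s internals.

```python
# Purely illustrative: a biometric "template" shown as a short feature string so that it
# can be compared like any other data string, yielding a score rather than an exact hit.
# Real fingerprint templates and matchers are far more complex and usually proprietary.

def template_similarity(template_a, template_b):
    matches = sum(a == b for a, b in zip(template_a, template_b))
    return matches / max(len(template_a), len(template_b))

enrolled = "1011001110101100"   # hypothetical template stored in the system
probe    = "1011001010101101"   # hypothetical template derived from a new scan
print(round(template_similarity(enrolled, probe), 2))  # 0.88 -- scored, not exact-matched
```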

6.5.4 Cultivating identity matching and international professional networks

The further expansion of WCC ELISE into identity matching was closely intertwined with collaborative efforts involving another social group: external partners. These partnerships encompassed various entities, including system integrators and software partners responsible for crafting tailored solutions for identity-related markets and technology partners providing crucial identity matching components, such as biometric matching capabilities. Furthermore, partnerships in data matching solutions in other markets facilitated WCC’s expansion into security and identity markets. One illustrative example is the company’s venture into the security and identity sectors, closely linked with collaborative efforts in data matching solutions for employment services. In the early 2000s, WCC joined forces with Accenture, a global IT services and consulting powerhouse, to introduce a data matching system for a web platform for the German public employment service. Peter Went, who served as WCC’s CEO at the time, underscored the achievements of this collaboration in a 2006 interview published in “Database Magazine.” He specifically highlighted how this collaborative effort propelled the company into the field of identity matching:

WCC entered that world [identity matching] through a successful trajectory with Accenture at the Employment Service in Germany. The response was so positive that Accenture decided to hire WCC for a huge project that the consultancy won in 2004 with US Visit, the US border security company. “They search there, as every traveler to America knows, by face and fingerprint. Ideally suited for our ranking technology, because there are no perfect Boolean-true matches with biometric data.” [Peter Went] (Rippen 2006, 37, translated from Dutch)

The 2003 WCC annual report described that an Accenture-led consortium secured a contract to develop and maintain a “Virtual Job Market” portal for the German Department of Labor, leading to a substantial ELISE license agreement with WCC. Remarkably, this virtual job portal was launched in under a year and garnered praise for its robustness and high-performance capabilities, efficiently handling significant loads of data. Recognizing the success of this partnership, the 2003 report notes that WCC and Accenture formalized their collaboration by entering into a “global alliance agreement,” described as follows:

Because of this success in Germany and the added value for the clients of such a combination, WCC and Accenture formalized the cooperation into a global alliance agreement. This alliance agreement emphasizes WCC’s credibility for large projects all over the world. (p. 18)

If we consider Accenture as a social group, the partnership suggests that ELISE emerged as a trusted tool for effectively handling large-scale data matching projects globally, positioning ELISE as a dependable data matching solution for endeavors worldwide. Following the fruitful collaboration on the German employment service project and establishing the global alliance agreement, WCC and Accenture embarked on several substantial projects, notably also in identity matching, encompassing biometric and non-biometric data. In 2004, Accenture invited WCC to participate in a contract awarded to the consulting firm for the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) system (Rippen 2006). Due to confidentiality constraints, I lack access to specific details, and it remains uncertain whether the ELISE system was indeed employed in this context. Nonetheless, it is worth noting that, as reported in a newspaper article, Accenture played a pivotal role in introducing WCC to the field of biometric analysis during this time.

In 2012, the European Commission selected a consortium of companies, including Accenture, Morpho, and HP, to maintain the EU Visa Information and Biometric Matching Systems, with WCC serving as a subcontractor tasked with providing the search engine for alphanumeric data (Accenture 2012). This collaboration continued, with WCC remaining a subcontractor to Accenture, furnishing the search and match solution for biographical and biometric data within the UNHCR’s Identity Management System in 2015 (Accenture 2015). These collaborative initiatives highlight how, for Accenture as a social group, WCC became a dependable vendor of data matching solutions, contributing significantly to the system’s evolution through its recurrent deployment across various international projects. However, this expanded partnership also necessitated addressing the challenges associated with increased scale of data to match, as exemplified by the release of ELISE version 5 during the German public employment service project (WCC 2003). This release highlights enhanced performance, scalability, and fault tolerance, reflecting the growing requirements stemming from the increased size and complexity of data to match in these projects.

In the post-9/11 world, characterized by a renewed emphasis on anti-terrorism and border security, there was a rapidly expanding market for biometric technologies tailored for data matching in these contexts.51 Simultaneously, it is essential to recognize the prevailing uncertainty before and immediately after the 9/11 attacks. A glance at the 2001 Market Review of Biometric Technology Today reveals the industry’s tumultuous state (Biometric Technology Today 2002). The challenging U.S. economy significantly influenced numerous biometric companies even before the attacks. However, the aftermath of 9/11 brought about a remarkable shift in the biometrics industry. While share prices in other sectors plummeted due to the attacks, some biometric companies experienced meteoric rises as investors anticipated heightened demand for high-security products (see also, Amoore 2006; Lyon 2003). This period marked a significant turning point in the industry, with increased interest in biometrics despite economic challenges. Examining WCC’s evolution and its product ELISE provides a lens through which we can observe the evolving alliances and transnational networks that emerged in this burgeoning market for data matching technology, especially in the realm of security, where novel interpretations of data matching and design solutions rapidly co-evolved.

6.5.5 Embracing multi-modal matching and pursuing interoperability

During the transition towards data matching for security purposes, there was a contingent moment marked by interpretative flexibility and a redefinition of data matching. This redefinition expanded the scope to encompass data matching from various sources and biometric modalities. In this context, “biometric modality” refers to distinct categories of biometric data used in biometric systems, including fingerprints, facial features, iris patterns, voice, and DNA. The concept of “multi-modal matching” emerged, signifying the simultaneous utilization of multiple biometric modalities for identification purposes, often complemented by matching with biographic data. This shift in the design and functionality of data matching systems was closely tied to establishing partnerships with external collaborators because extracting features from raw biometric data often relied on proprietary technologies provided by third-party sources. Furthermore, this transformation aligned data matching with the critical task of supporting counter-terrorism efforts and crime prevention by enabling the matching of information from diverse sources, such as from different government agencies.

These transformations are exemplified in a 2009 WCC position paper titled “Homeland Security Presidential Directive 24 (HSPD-24): A layered approach to accurate real time identification.” This paper describes how the ELISE software could be used to comply with HSPD-24, a framework for interagency cooperation and interoperability of biographic and biometric data as part of US counterterrorism efforts and screening processes against terrorism watchlists. Here is the purpose, as defined in the directive titled “NSPD-59 / HSPD-24 on biometrics for identification and screening to enhance national security” (Bush 2008):

This directive establishes a framework to ensure that Federal executive departments and agencies (agencies) use mutually compatible methods and procedures in the collection, storage, use, analysis, and sharing of biometric and associated biographic and contextual information of individuals in a lawful and appropriate manner, while respecting their information privacy and other legal rights under United States law.

The excerpts below from WCC’s position paper demonstrate the company’s response to the evolving redefinition of data matching, as the paper “explores the ramifications of HSPD-24 and explores its implications for the matching software that supports these processes, with a close look at how WCC’s ELISE ID supports the layered approach” (WCC 2009a):

HSPD-24 recognizes that technological progress and real-world implementations have substantially advanced in recent years, but also that a lack of biometrics standardization and the existence of conflicting mission security rules limit data-sharing among federal agencies. It further acknowledges that biometrics is only one of several layers of identifying data, and that a layered approach instead of a single mechanism — is needed to improve the executive branch’s ability to identify and screen for persons who may pose a national security threat. (WCC 2009b, 1)

While HSPD-24 does not provide any definition of a layered approach, it is understood that it refers to successively applying any or all available biographic, biometric, and contextual identifying data in order to arrive at an informed and accurate decision about a person’s identity. This process is commonly known as identity matching, but until now identity matching solutions were primarily single-layered or siloed approaches that used a single biometric modality or a single factor such as a name to perform the identification. (WCC 2009b, 3)

The provided excerpts shed light on several facets of the directive and the evolving definitions of data matching. Firstly, there is a redefinition of data matching that involves pooling data from diverse government agencies to leverage existing information to identify known and suspected terrorists. This approach hinges on interagency cooperation and interoperability to enhance the efficiency of terrorist screening processes, yet it faces challenges associated with data-sharing constraints between these government entities. Secondly, there is a redefinition of data matching that emphasizes adopting a multidimensional data approach for identifying security threats. This leads to the proposal of a layered identification strategy that combines various forms of data, including biometric and biographic data, alongside other factors. The challenge in this context lies in the usability of these data forms for matching purposes, particularly given that biometrics often rely on proprietary technologies.

The second excerpt from the position paper highlights an intriguing observation: “HSPD-24 does not provide any definition of a layered approach” (WCC 2009b, 3). This statement underscores the inherent need for entities like WCC to partake in the interpretation of government directives, including their objectives, challenges, and intended outcomes. In response to this interpretative flexibility, WCC introduces a technical design referred to as a multi-modal fusion solution, which boasts the ability to achieve “high accuracy with a large number of criteria by fusing individual match scores” (p. 8). In short, this design seeks to amalgamate or “fuse” match scores obtained from distinct sources, including various biographic data and biometric modalities, each assessed using different algorithms. The intention behind this approach is to address existing challenges and enhance accuracy when compared to relying solely on a single biometric modality. For example, when an individual’s data aligns with fingerprint and iris readings, the likelihood of accurately identifying that person increases, especially compared to depending solely on fingerprint matches or even in cases where biographic data may differ considerably.
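To give a stylized sense of score-level fusion, the sketch below combines hypothetical match scores from different modalities and biographic fields into one weighted result. The weights, fields, and averaging rule are assumptions for illustration and are not WCC’s actual fusion model.

```python
# A stylized sketch of score-level fusion: individual match scores from different
# biometric modalities and biographic fields are combined into one weighted result.
# The weights, fields, and averaging rule are assumptions, not WCC's fusion model.

def fuse(scores, weights):
    # Weighted average over whichever sources are available for this comparison.
    available = {name: score for name, score in scores.items() if score is not None}
    total_weight = sum(weights[name] for name in available)
    return sum(weights[name] * score for name, score in available.items()) / total_weight

weights = {"fingerprint": 0.4, "iris": 0.4, "name": 0.1, "date_of_birth": 0.1}
candidate = {"fingerprint": 0.92, "iris": 0.88, "name": 0.35, "date_of_birth": None}
print(round(fuse(candidate, weights), 2))
# 0.84 -- strong agreement across two biometric modalities outweighs a weak name match
```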

The directive highlights another pressing challenge: the need for standardized and interoperable biometric technologies. Many available biometric technologies utilize proprietary algorithms to generate profiles from raw biometric data. This diversity of formats and proprietary nature complicated the integration of different biometric modalities, hindering the development of a cohesive and interoperable identification system. In response to these emerging challenges, WCC introduced a new “vendor-neutral and future-proof” software architecture designed to allow seamless integration of new biometric standards “as soon as they are ratified and deployed” (WCC 2009b, 7). This architecture was engineered to address the challenge by enabling the plug-in of other vendors’ biometrics through Software Development Kits (SDKs), facilitating the computation and fusion of match scores. This evolution in the design and functionality of data matching systems closely ties to establishing partnerships with other technology companies,52 highlighting the interplay between technology providers in this dynamic field.

The possibility of new social groups forming and reintroducing interpretive flexibility means the closure and reduction of design flexibility are temporary. Accordingly, the issues and solutions of identification were once again open to interpretive flexibility in the post-9/11 era. The United States government, as a social group, re-problematized policy problems of identifying people in the context of security as problems to be solved with new technical solutions. Hence, the significance of biometrics grew, along with the need for data interoperability between various agencies, as technological solutions to identify potential threats. The development of biometric and multimodal technical solutions evolved alongside the changing demands of identification practices in exchanges between government and business actors. In response to new problematizations, WCC proactively incorporated new features into the ELISE software solution to accommodate both biographic and biometric data.

6.5.7 Evolving landscapes of data matching

This exploration of ELISE’s interpretative flexibility provided a window into the ever-evolving landscape of problem definitions and design solutions for matching identity data. Initially conceived as a versatile technology applicable to a wide array of markets on the internet, ELISE found its niche within diverse social groups with unique challenges, notably in job matching. As ELISE’s core design solidified, its interpretative and design flexibility diminished. Over time, WCC shifted its focus to two primary data matching markets: public and private employment services on one front and the identity and security industry on the other. The latter pivot was catalyzed by geopolitical shifts and heightened global security concerns, where data matching technology was increasingly viewed as a valuable asset for law enforcement and border security. This transition brought about newfound design flexibility as the data matching system expanded to encompass biometric data matching and addressed specific identity matching intricacies, exemplified by name matching. By examining the various instances of interpretative flexibility within the ELISE system’s evolution, it becomes evident that its original intent was never centered on identity matching. Instead, this transformation occurred gradually and contingently, influenced by many parties and factors over time. Of particular significance is the shift away from supporting a wide array of social groups utilizing ELISE for various purposes like housing and car searches, with the predominant focus shifting towards public and private employment, as well as identity and security contexts.

The exploration of interpretative flexibility has made it possible to highlight points at which governmental and private entities collaborated, co-producing and collectively shaping the challenges and solutions within the realm of identity data matching. On the one hand, this analysis has illuminated the formation of international professional networks, which secured contracts for developing identification systems in various national and international contexts. Consequently, WCC’s product found itself repeatedly deployed across diverse international landscapes. On the other hand, the focus on interpretative flexibility underscored the development of data matching methodologies and technologies as a contingent and dynamic process. For instance, the HSPD directive and the MITRE challenge exemplified redefinitions of identity data matching issues, reintroducing interpretative flexibility. WCC’s responses to these new actors, linked to the U.S. government and its security agenda, included proposing a plug-in architecture for biometric standards and devising innovative approaches to name matching. These moments highlight the reintroductions of interpretative flexibility and system adaptations, driven by evolving demands and problematizations within the domain of security and identity data matching.

WCC’s recent strides in the identity and security sector have primarily revolved around the utilization of the ELISE system in two core domains: passenger screening and civil registrations. However, this narrative did not incorporate these recent developments due to their distinctive nature. Although built upon the ELISE system, they are effectively separate software applications, warranting individual analysis. In contrast to the past, when WCC primarily offered the ELISE data matching system as a back-end solution, the company now crafts comprehensive application packages encompassing back-end and front-end components. The design and functionality of the ELISE data matching system, as outlined in this section, have reached a point of design closure as these new applications are constructed upon the foundation of the current ELISE system.

Methodologically, this multi-temporal sampling approach, employing interpretative flexibility as a heuristic, has provided a valuable lens through which to understand how the ELISE system’s design was contingent upon the specific circumstances and actors involved in its development. Instead of viewing the data matching system as a predetermined outcome within the realm of identity data matching, particularly in contexts such as migration and border control, we have unveiled how its securitization was profoundly influenced by specific choices made by actors and the evolving sociotechnical landscape. As an alternative to longitudinal research, this sampling method draws attention to analytically relevant moments that reveal the dynamic construction of technology and the intricate interplay between social groups, technological artefacts, and evolving problems within the field of identity data matching.

6.6 Gateway moments

Examining moments of interpretative flexibility has allowed us to comprehend the dynamic evolution of the ELISE system, tracking its shifts in design flexibility as it ventured into novel markets and encountered fresh challenges, particularly in the context of identity data matching. However, this analytical lens is less suited to investigating system integration processes, wherein data matching is used for interconnecting system components with more extensive infrastructures. Our second heuristic, centered on gateway moments, offers a complementary approach to identifying contingent moments in the long-term evolution of the data matching software. This perspective will illuminate the nuanced interplay between technology and the broader networks it becomes embedded within, which is not visible by only looking at shifts in interpretative flexibility.

6.6.1 The VIS evolutions project and the problems of backwards compatibility

The previous section described how WCC’s ELISE system was integrated into the EU Visa Information System (VIS), underscoring the company’s growing prominence within international professional networks. When the European Commission selected a consortium comprising Accenture, Morpho, and HP to maintain the EU’s visa information and biometric matching systems, WCC’s ELISE system was chosen to power the VIS’s searching and matching capabilities. The significance of this selection becomes apparent when considering the scale of the VIS. As detailed in the “Report on the technical function of the Visa Information System (VIS)” (eu-LISA 2020), the central VIS system handled 17 million visa applications in 2019 alone, registering extensive personal data from non-EU citizens. According to the report, this data was subjected to approximately 25 million alphanumeric searches, illustrating its crucial role in identification within the VIS. On the one hand, WCC’s inclusion in the consortium exemplified its reputation as a dependable partner for delivering efficient and effective data matching solutions that could meet the demands of matching data at this scale.

On the other hand, examining ELISE’s role within the EU Visa Information System (VIS) through the lens of a gateway moment reveals a different perspective, particularly the challenges associated with interconnecting diverse systems. Notably, the consortium responsible for maintaining the EU Visa Information System was awarded a comprehensive contract that encompassed supporting “the exchange of visa data across border management authorities by ensuring the processing capacity of the system and the availability of high levels of search and matching capabilities required for visa applications.” (Accenture 2012). However, as systems evolve and interfaces adapt, interoperability and backward compatibility become critical concerns, especially in transnational data sharing and complex infrastructures.

While the Visa Information System (VIS) is now widely available, its initial iterations were rolled out to different regions over time. The increased system usage necessitated launching a project, called “VIS Evolutions,” to expand the system’s capabilities. The then newly established European Agency for the Operational Management of Large Scale IT Systems in the Area of Freedom, Security, and Justice (eu-LISA) oversaw the development of a new VIS system through a consortium of companies. The goal was to obtain a “completely new VIS system in terms of infrastructure, software versions, and search engine” (eu-LISA 2016, 8). Interestingly, the project’s objectives included changing “the search engine to improve its performance” (eu-LISA 2013, 8). In this way, WCC was part of the consortium as a “subcontractor for a maintenance contract” (Field notes 24-07-2020) for building this upgraded IT infrastructure. The software WCC supplied would provide an improved technical component to search and match based on alphanumeric data. It is common to gloss over these aspects of the VIS’s history. However, these evolutions of the VIS reflect how the identification infrastructure expands and grows from its existing technological basis. Following this growth and deployments in countries worldwide can highlight how system builders grapple with technical difficulties, develop solutions, and reach compromises. These processes, in turn, can reveal key moments in the long-term developments in data matching and identification.

The VIS Evolutions project upgrade was built upon existing systems, necessitating careful consideration to ensure seamless operation for Member States already integrated into the central VIS system. Technical specifications, established initially for the VIS system’s search functionalities, had to be meticulously followed to maintain compatibility with the established integrations between the central EU VIS system and the VIS systems of the Member States (often referred to as “backwards compatibility”). During a meeting, I had the opportunity to discuss the integration of ELISE into the EU-VIS system with a senior developer who was actively involved in the project. They provided a description of the process, which I have reconstructed and paraphrased below (Field notes, July 24, 2020):

A lot of customers usually already have a different system, so they want to stick to the existing matching rules of that system. Even though that may not be the optimal way to match. And that is kind of how EU-VIS works. […] The EU-VIS system is not actually using our name matching solution. Because in that EU-VIS deal, we are only subcontractors on a maintenance deal. There was an existing VIS system of which the hardware and software were updated, so, basically, the pre-existing functionalities needed to be upheld. For example, for certain data fields, they wanted to be able to manually determine to use this kind of typo correction, or that kind of phonetization.

They provided the EU-VIS project as a recurring example where the ELISE matching engine is integrated with existing systems, emphasizing the need to avoid extensive changes and maintain existing matching rules. This approach was particularly critical in projects like the EU-VIS, where preserving pre-existing functionalities was essential. Consequently, novel and specialized features, such as advanced name matching algorithms, could not be readily introduced. They elaborated on the process, explaining that WCC had to ensure that match results aligned with the expectations set in the tests, making significant deviations from these specifications challenging.

So, on the one hand, there were new things that were added to the EU-VIS, from functional wishes of the EU member states. But on the other hand, there was also a given test set, stating how this query should yield those results, etc. And if we diverged from those expected outputs, then we really had to defend why we wanted to deviate from it or had to deviate from it. […] So some deviations just had really technical reasons, because ELISE couldn’t do certain things in the same way. And other deviations were because we simply said that it should not be solved in the way that was specified. For example, one of those use cases was to match Moscow with Moskva using typo correction. While we said that you should not match that way, matching place name variants via edit distance metrics. […] But so, eventually there was actually nothing from the name databases or things like that added to the EU-VIS.

They highlighted that certain data matching practices were being utilized due to these constraints that might not align with WCC’s preferred best practices. An example cited was matching place names using solely algorithmic methods, like calculating the edit distance between two strings. They emphasized that the company would recommend utilizing lexical databases for more accurate results. However, they acknowledged the complexities of implementing such changes within the EU-VIS system, given its widespread use across all member states, making extensive alterations a significant challenge.
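The sketch below illustrates the trade-off the developer points to: a purely algorithmic edit-distance rule must be set quite loosely to link “Moscow” with “Moskva,” and at that looseness it scores an unrelated place name just as highly, whereas a lexicon of known variants links the two directly. The similarity measure and the variant list are illustrative assumptions, not WCC’s name matching technology.

```python
# Illustrative sketch of the trade-off described above: an edit-distance rule must be
# set quite loosely to link "Moscow" with "Moskva", and at that looseness it scores an
# unrelated place name just as highly, whereas a lexicon of known variants links the
# two directly. The similarity measure and variant list are assumptions, not WCC's.

from difflib import SequenceMatcher

def edit_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

PLACE_VARIANTS = {"moscow": {"moskva", "moskau", "moscou"}}   # hypothetical lexicon

def lexical_match(a, b):
    a, b = a.lower(), b.lower()
    return b in PLACE_VARIANTS.get(a, set()) or a in PLACE_VARIANTS.get(b, set())

print(round(edit_similarity("Moscow", "Moskva"), 2))  # 0.5 -- needs a loose threshold
print(round(edit_similarity("Moscow", "Mostar"), 2))  # 0.5 -- same score, different city
print(lexical_match("Moscow", "Moskva"))              # True, via the variant list
```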

The matching is configured in the EU-VIS API. So a member state can query the EU-VIS database where they can indicate in the request if they, for example, want to do a typo match or not, a fuzzy match or not, an exact match on the first name, but not by last name, etc. […] In effect, it is the member state application that then determines how the match is performed at EU-VIS. […] And yes, they might miss a lot of functionalities of ELISE and they could get much better results than they’re probably getting right now. […] Of course, the problem is you would actually have to deal with all the 20+ member states when the EU-VIS system would change, because they all would have to adjust their system accordingly. Even if the API might remain the same, but with slightly different results, they would have to justify such a functional change to all the other parties and get approvals to implement it.

Examining the integration of ELISE into the EU-VIS system as a gateway moment reveals a complex interplay between various systems, shedding light on aspects such as path dependencies and contingency, and unveiling insights that might otherwise remain undetectable. The EU-VIS architecture allows member states to configure their systems based on their preferred matching criteria, drawing on the capabilities provided by the EU-VIS central system API. While one might expect the EU-VIS to encompass all the specialized name matching expertise developed previously, as seen in the MITRE Challenge discussed earlier, the reality is more intricate. The integration process was subject to constraints such as backwards compatibility and strictly pre-defined use cases. These constraints limited the implementation and utilization of ELISE’s advanced matching functionalities, as the upgrade needed to align with the legacy system of the previous VIS iteration. The VIS Evolution project is thus a compelling illustration of how path dependencies within the development of upgraded systems do not always facilitate the integration of advanced identity data matching features.

6.6.2 INDiGO and traveling data matching knowledge

In contrast to the EU-VIS case, the integration of ELISE into the systems of the Immigration and Naturalization Service of the Netherlands (IND) shows how tailored data matching knowledge can circulate across organizations. This was particularly evident in discussions and interviews with WCC personnel involved in the project, which made clear that the IND project allowed for a much more organizationally specific configuration, enabling the implementation of advanced matching functionalities tailored to the specific requirements of the IND’s context. By delving into historical meeting minutes and technical documentation, I gained insights into how WCC and IND collaborated to fine-tune various search criteria, including configuring the weighting of factors like last name matching in calculating match scores. The flexibility in configuring data matching capabilities in the IND systems contrasts with the more rigid use cases and testing procedures encountered in EU-VIS, a rigidity that stems from the many member states connecting to it. Consequently, the IND project created an avenue to integrate advanced name matching functionalities that were not available within the constraints of the EU-VIS framework.

During fieldwork, I had the opportunity to analyze a collection of documents, among which was a noteworthy 2013 presentation titled “ELISE: New Features.” This presentation aimed to showcase the enhanced capabilities of the upcoming version of ELISE, which the IND could leverage for its search operations. The presentation began with an overview of recent company updates, highlighting projects like the EU-VIS and the company’s ranking in the MITRE multi-cultural name matching challenge. Subsequent slides, aptly titled “new name matching features,” delved into various new and upgraded name matching functionalities. Furthermore, the slides noted newly integrated algorithms for different biometric modalities and the supported vendors.

For the new name matching features slides, the features related to the “transcription and transliteration of Arabic and Asian names” are of particular interest. They encompass name variations in original and Roman scripts. One slide specified “licensing” related to a “name matching module” and “name databases,” which can be understood as relating to the need to purchase an additional license to use, for instance, the CJK name databases (described in the previous section) in ELISE. Additionally, a table comparing matching features across different ELISE versions was presented, including the ability to match based on also-known-as information provided by third parties, with an illustrative example demonstrating how “Ahmed the Tall” should match with “Sheikh Ahmed Salim Swedan” and vice versa. New name matching features like these can be attributed, among other things, to WCC’s active participation in the MITRE Challenge. The subsequent integration of these features into the ELISE system through system upgrades, as highlighted in the presentation, demonstrates how name matching knowledge and technologies developed in one context were adopted by and circulated to an organization like the IND.
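The also-known-as matching mentioned on the slides can be illustrated with a small sketch; the alias table and matching function below are hypothetical, mirroring only the slide’s example rather than ELISE’s implementation.

```python
# Hypothetical sketch of matching on "also-known-as" information supplied by a
# third party; the alias table is invented around the slide's single example.

ALIASES = {
    "sheikh ahmed salim swedan": {"ahmed the tall"},  # illustrative entry only
}

def aka_match(query: str, record_name: str) -> bool:
    """A query matches a record if it equals the record's primary name or any
    of its registered aliases, in either direction."""
    q, r = query.lower().strip(), record_name.lower().strip()
    return (q == r
            or q in ALIASES.get(r, set())
            or r in ALIASES.get(q, set()))

print(aka_match("Ahmed the Tall", "Sheikh Ahmed Salim Swedan"))  # True
print(aka_match("Sheikh Ahmed Salim Swedan", "Ahmed the Tall"))  # True
```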

The presentation also highlighted an “EU-VIS specific feature” integrated into ELISE, referred to as “partial,” with an example demonstrating that “Wij should match Wijfels.” Notably, this feature, specific to the EU-VIS context, is not to be confused with another, similar ELISE feature labeled “partial fuzzy,” illustrated by the example “Wij should match Wilders.” This distinction underscores the need for a more stringent partial matching mechanism tailored explicitly to EU-VIS: the difference between the two features lies in the level of precision they apply when matching names. The “partial” feature is strict and requires an exact portion of the name to match, while “partial fuzzy” is more flexible and allows for a broader similarity between names. The presentation further included a comprehensive table comparing various matching features across different ELISE versions, providing insights into the evolving software’s capabilities and its adaptability to diverse organizational requirements.
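The distinction can be rendered in a minimal sketch, assuming that “partial” demands an exact prefix while “partial fuzzy” tolerates a small number of character differences in that prefix; the actual ELISE logic is not documented here.

```python
# Illustrative sketch of the difference between the presentation's "partial"
# and "partial fuzzy" features; it only mirrors the slide examples
# ("Wij"/"Wijfels" versus "Wij"/"Wilders"), not ELISE's real implementation.

def partial_match(query: str, name: str) -> bool:
    """Strict partial match: the query must be an exact prefix of the name."""
    return name.lower().startswith(query.lower())

def partial_fuzzy_match(query: str, name: str, max_edits: int = 1) -> bool:
    """Looser partial match: the compared prefix may differ by a few characters
    (a simple character-by-character comparison stands in for fuzzier logic)."""
    prefix = name[: len(query)].lower()
    edits = sum(a != b for a, b in zip(query.lower(), prefix)) + abs(len(query) - len(prefix))
    return edits <= max_edits

print(partial_match("Wij", "Wijfels"))        # True  -- exact prefix
print(partial_match("Wij", "Wilders"))        # False -- "Wil" is not "Wij"
print(partial_fuzzy_match("Wij", "Wilders"))  # True  -- one character differs
```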

The integration of ELISE into the IND systems as a gateway moment thus demonstrates how identity matching capabilities can transcend organizational boundaries. The software updates demonstrate the relation between the generification of the data matching system and the development of new customer-specific data matching functionalities (compare with Pollock, Williams, and D’Adderio 2016). Over time, the ELISE “core,” as it is known at WCC, has accumulated a wide variety of matching features developed for different contexts. This core therefore also possesses the potential for configurations that accommodate domain-specific solutions, including those tailored for identity and security or public employment services. In this way, we can conceptualize ELISE as a reusable system that “transports” identification expertise across organizations through its identity and name matching features, as evidenced by the name matching features employed at the IND. When WCC releases a new ELISE version, customers have the option to upgrade their existing software to access new features. This process is not entirely deterministic either, as evidenced by the EU-VIS project, where specific name matching features were deliberately omitted to address concerns related to backward compatibility.

6.6.3 Data matching and standardizing data models

In exploring our final gateway moment, we delve into the role of data matching systems as triggers for standardizing data and data models. As elucidated earlier, WCC’s ELISE system often integrates with existing systems that provide the necessary data. In this process, the incoming data is mapped to conform to ELISE’s data model and replicated to facilitate efficient and effective data matching. During an interview, a senior WCC staff member shared insights on bridging the gap between old and new systems, or making diverse systems interoperable. Drawing on their experiences, they noted that this commonly involves addressing disparities in the values and categories of data models across systems:

An example that I always give is: when personal data was stored, the hair color field contained free text. So sometimes brown, sometimes light brown, sometimes “brn,” sometimes as abbreviated or light. […] So that was free text. To go from there to a pick list, a drop-down list — what they wanted in the new system — that’s quite complicated. And that is something you have to do when you connect systems with each other. Now in this case it had to because the [organization] went from a legacy to a new system. But sometimes when you talk about interoperability, the winged words of the EU as well, then you have to be able to compare these kinds of data with each other. For example, a VIS system that contains something and a nationally different system that contains exactly the same data, but the field names are different. Or the notation is just a little different. Even then, it should be possible to match those data. So, there will need to be standard models. The EU is trying to achieve this with UMF, the Universal Message Format. America has a number of standards that are also included in our product; NIEM is one of them. And in addition to having standards, you also need to be able to match smartly, and that won’t be easy. That’s quite complex. (Interview with WCC senior manager, May 31, 2021)
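Before turning to the analysis, the first operation the informant describes, collapsing free-text values into a controlled pick list, can be concretized with a minimal sketch; the mapping table below is invented for illustration and is not drawn from any actual system.

```python
# Minimal sketch of normalizing legacy free-text values to a pick list;
# the pick list and mapping rules are illustrative assumptions.

from typing import Optional

HAIR_COLOR_PICKLIST = {"brown", "light brown", "blond", "black", "grey"}

FREE_TEXT_TO_CATEGORY = {
    "brn": "brown",
    "light": "light brown",
    "lt brown": "light brown",
}

def normalize_hair_color(raw: str) -> Optional[str]:
    """Map a legacy free-text value onto the new pick list; None flags a value
    that no rule covers and that would need manual review."""
    value = raw.strip().lower()
    if value in HAIR_COLOR_PICKLIST:
        return value
    return FREE_TEXT_TO_CATEGORY.get(value)

for legacy in ["Brown", "brn", "light", "reddish"]:
    print(legacy, "->", normalize_hair_color(legacy))
```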

In this interview excerpt, the informant alludes to two approaches to dealing with the heterogeneity of data infrastructures. One approach is to establish uniformity in the values used. The informant’s first example shows how the values of a free-form field for hair color in a law enforcement database were standardized into categorical values. Another approach is to adopt common standards that reduce the variety of formats used by different systems and organizations. The informant’s second example concerns the interoperability framework for EU information systems in the area of justice and home affairs. The introduction of a new data standard, the Universal Message Format (UMF), which allows for consistent data exchange, is a crucial component of achieving data interoperability, as the relevant EU legislation also specifies:

The universal message format (UMF) should serve as a standard for structured, cross-border information exchange between information systems, authorities or organisations in the field of Justice and Home Affairs. The UMF should define a common vocabulary and logical structures for commonly exchanged information with the objective to facilitate interoperability by enabling the creation and reading of the contents of exchanges in a consistent and semantically equivalent manner. (European Union 2019a, 22)

During the fieldwork, it became clear that WCC was familiar with this UMF data standard due to its previous work with the Finnish national police (WCC 2020). The Finnish national police participated in the pilot project in which EU member states could automatically consult the Europol watch lists from their own national systems via an interface called QUEST (Querying Europol System), which provides query results in the UMF format (European Commission 2020a; Kangas 2019). Hence, as the supplier of the data matching system for the Finnish police, WCC was able to gain “valuable expertise in using UMF” (WCC 2020). As a gateway moment, this illustrates how data matching in the Finnish police context included linking with Europol databases and the utilization of the UMF data model, which, in turn, demanded an additional mapping process between the UMF and the ELISE object model.

UMF, as elucidated in a 2014 information sheet from the European Union Agency for Law Enforcement Cooperation (Europol 2014), is characterized as a layer for facilitating cross-border data exchange: “It must be emphasised that UMF is not the internal structure of systems/databases (you are not required to change your national systems, legislation or processes!) but rather an XML-based data format acting as a layer between them to be used whenever structured messages cross national borders.” The UMF is conceived as a versatile multi-plug adapter connecting the concepts within different agencies’ internal data models to those within the UMF’s “reference model” (see also Figure 6.3). The first page of the Europol UMF brochure similarly describes the problem of law enforcement databases holding similar data in different formats, using analogies such as plug-and-socket standardization. As such, UMF can be used to connect systems, for example between EU member states and Europol, while keeping those systems’ internal database structures intact.

Figure 6.3: This diagram from Europol (2014) shows how concepts from national databases are mapped to UMF.
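The “multi-plug adapter” logic captured in the figure can be sketched in simplified form as follows; the national field names and XML elements are invented for illustration and do not reproduce the actual UMF schema.

```python
# Illustrative sketch of mapping a national record onto a shared exchange
# vocabulary before it crosses the border; element names are hypothetical.

import xml.etree.ElementTree as ET

national_record = {"achternaam": "Jansen", "voornaam": "Piet", "geboortedatum": "1980-01-01"}

# Mapping from national field names to (hypothetical) shared exchange concepts.
FIELD_MAP = {"achternaam": "FamilyName", "voornaam": "GivenName", "geboortedatum": "DateOfBirth"}

def to_exchange_message(record: dict) -> str:
    """Wrap a national record in a shared XML vocabulary without touching the
    structure of the national database itself."""
    person = ET.Element("Person")
    for national_field, shared_concept in FIELD_MAP.items():
        ET.SubElement(person, shared_concept).text = record[national_field]
    return ET.tostring(person, encoding="unicode")

print(to_exchange_message(national_record))
# <Person><FamilyName>Jansen</FamilyName><GivenName>Piet</GivenName>...</Person>
```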

Considering UMF’s intended use as a common standard in the broader EU field of Justice and Home Affairs, questions arise about the mapping process between the internal data models of various agencies and this data model originally designed for law enforcement purposes. According to the brochure’s definition of UMF version 1, it is “a standard or agreement on what the structure of the most important law enforcement concepts when they are exchanged across borders should be” (Europol 2014, 3). Subsequent updates to the model, such as the UMF version 3 project, continue to emphasize the goal of “enhancing information exchange among law enforcement authorities” (European Commission 2020a). Mapping data from other systems to the UMF data model entails aligning with the ontologies commonly used in law enforcement. A senior WCC solutions manager, well-versed in UMF, described the general UMF data model and its connections to law enforcement as follows:

So POLE: persons, objects, locations, and events. That model was used by police before computers. So, that’s the model that any police forces, anywhere [in the world], use to categorize and classify crimes. So, if there is a crime, there are persons: the victim and the suspect. There are objects like weapon, like, now it’s getting even different because of technology. Now, the objects are getting to be more a means of communication, any means of communication: the Internet, a mobile cellphone. Those are all objects. If there is a car, that’s an object. Licence plate, vehicle, aeroplane, boat, so that’s an object. […] The location is not only the location of the event. The location can also include the addresses of the people involved in the crime. Or addresses where they used to go to. Any means of address, or regions, or even a journey. So, a route between two points. For example, there may be a crime where they used a car and they escaped from the crime scene at point A and then they hid in point B, from point A to B. And the event is the offence. What’s the offensive action. That’s the event. […] So that model is what they are now trying to use in the systems, as a data model. And what the European Union did is to follow this model — but they didn’t announce this anywhere. But following this model, they developed a standard for the format and the exchange of the data related to law enforcement. And this standard format is named UMF, Universal Message Format. (Interview with WCC solutions manager, July 30, 2020)

Examining the mappings between diverse systems’ data models as gateway moments reveals the emergence of new sociotechnical connections. The fundamental concept behind this process is mapping various data types from heterogeneous systems to make them suitable for searching and matching operations. However, in the case of EU member states’ systems and their interaction with EU agency systems through the UMF, it is clear that these connections carry specific historical contexts and origins. As the interviewee underscores, the UMF is rooted in the well-established POLE model historically used by law enforcement for crime categorization and classification. The POLE model encompasses the individuals involved in criminal activities, the objects related to the crimes, the locations connected to the incidents, and the criminal offences as events.
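The POLE categories the interviewee describes can be rendered, in highly simplified form, as a small set of data classes; the attribute names below are illustrative and are not drawn from UMF documentation.

```python
# Minimal sketch of the POLE categories (persons, objects, locations, events)
# as data classes; attribute names are assumptions made for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Person:
    name: str
    role: str            # e.g. "victim", "suspect"

@dataclass
class Object:
    description: str     # e.g. "vehicle", "mobile phone"

@dataclass
class Location:
    description: str     # an address, a region, or a route between two points

@dataclass
class Event:
    offence: str
    persons: List[Person] = field(default_factory=list)
    objects: List[Object] = field(default_factory=list)
    locations: List[Location] = field(default_factory=list)

# A record structured along these categories can be mapped, concept by concept,
# onto an exchange format rooted in the same law enforcement ontology.
incident = Event(
    offence="theft",
    persons=[Person("A. Example", "suspect")],
    objects=[Object("vehicle")],
    locations=[Location("route from point A to point B")],
)
print(incident)
```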

The contingency in this context remains unresolved, as the adoption of the UMF as a standard format for system interoperability in the Justice and Home Affairs field is still an ongoing process. Significantly, this adoption extends beyond law enforcement to encompass systems related to migration and asylum. The UMF will serve “to describe and label the identity, travel document and biometric data” within the interoperability components, as laid down in the implementing decision (European Commission 2023). These applications encompass specifying search queries and responses (as seen in the European Search Portal), acting as a common format for connecting identity data across JHA databases (the Common Identity Repository), and facilitating the detection and linking of identity data between these databases (the Multiple-Identity Detector). When viewed through the perspective of gateway moments, it becomes evident that a contingency exists in this alignment with ontologies originating from law enforcement. This alignment, while facilitating data matching, also carries the potential to contribute to the securitization of cross-border mobility.

The analysis of these gateway moments underscores their significance in recovering less visible facets of data matching systems. One such facet pertains to the capacity of identity matching expertise to circulate between organizations, and to the moments when such circulation does not occur. The integration of the ELISE software into the IND systems, including the subsequent software updates, illustrated how specific features, notably in name matching, can traverse organizational boundaries. The incorporation of specific name matching functionalities within ELISE provides an example of the dissemination of data matching expertise across networks of organizations and entities. The name databases, employed by various organizations for diverse purposes, exemplify the extensive network through which data matching knowledge is exchanged and shared. Conversely, the case of the VIS Evolutions project highlights that this transfer of knowledge does not always materialize, for example, due to factors such as backward compatibility constraints. Therefore, the multi-temporal sampling method of gateway moments enables us to identify the contingency inherent in these moments.

Methodologically, this multi-temporal sampling approach, employing gateway moments as a heuristic, has provided a lens to understand how the ELISE system’s integration was contingent upon the specific circumstances and actors involved in its development. The difference between the IND and EU-VIS cases highlights that the transfer of data matching expertise is contingent upon various factors. The gateway moments of the UMF and data model mapping demonstrate how data matching does not require full data interoperability. Utilizing gateways facilitates a more open and adaptable approach to data matching across heterogeneous systems while introducing the potential for contingent standardization. This standardization is not assured, but rather dependent on specific organizational choices. In practice, organizations may merely map their data, retaining it as is in their databases, or choose to adapt it more extensively to align with the requirements of the integrated systems. As Hanseth (2001) suggests, gateways, while sometimes seen as a consequence of failed design and standardization due to their role as imperfect translators between linked systems’ data models, can be practical tools in linking heterogeneous systems.

6.7 Conclusions on tracing the evolution of a data matching system: Insights into shifting landscapes of data matching and identification

This chapter has proposed a methodological approach known as “multi-temporal sampling” as an alternative to traditional longitudinal research, offering a means to examine various contingent moments in the evolution of a data matching system. By employing this method, we were able to construct a biography of WCC’s ELISE as a technological artifact. This approach aligns with the methodological strategies discussed in Chapter 3, where data matching serves not only as the object of investigation but also as a valuable methodological resource. This dual role allowed us to shed light on the intricate and contingent process behind the development of WCC’s ELISE system, which continuously adapted to the evolving demands and challenges within the field of data matching, undergoing phases of openness and closure in its design. Moreover, our exploration served as a resource to address the research question of this chapter: “How do knowledge and technology for matching identity data circulate and traverse various organizations?” (RQ3).

By tracing the interpretive flexibility of identity data matching software, this chapter has highlighted the work of otherwise rarely featured actors in the circulation of knowledge and technologies for matching identity data. The analysis illuminated, for example, how the formation of international professional networks, exemplified by the partnership with Accenture, spurred WCC to explore biometric data matching technology in greater depth. Additionally, the chapter highlighted the influence of a US government directive, which introduced new challenges for dealing with the heterogeneity of biometric technologies and the complexities of inter-organizational data matching. These emerging issues served as catalysts for re-opening design flexibility, prompting WCC to develop techniques for fusing biometric and biographic data matching and a plug-and-play architecture capable of accommodating diverse data formats and proprietary algorithms. Moreover, the MITRE challenge presented yet another novel problem in the form of multicultural name matching, driving international actors to engage in a competitive quest to devise innovative name matching solutions.

The examination of gateway moments has illuminated a critical aspect of the circulation of data matching knowledge and technology across organizations, one that does not unfold deterministically but is profoundly contingent upon specific contexts. The integration of ELISE into the EU-VIS system exemplified a stringent implementation approach, given the intricate web of diverse member state systems interconnected within the framework of the EU-VIS data infrastructure. This setup revealed the constraints imposed on utilizing specialized name matching technology, as its incorporation would have engendered backward compatibility issues with pre-existing member states’ systems. Conversely, the integration of ELISE into the IND systems displayed a higher degree of adaptability in data matching. This flexibility was epitomized by software updates, illustrating how newly developed name matching functionalities, forged in disparate contexts like the MITRE challenge and EU-VIS, could seamlessly become part of the ELISE core, which, in turn, facilitates their capacity to circulate to other organizations such as the IND. The instance of UMF and data model mapping demonstrates how data matching among heterogeneous systems can introduce the prospect of contingent standardization. Here, organizations can decide whether to simply map their existing data, keeping it in its original form within their databases, or to opt for a more extensive adaptation that aligns with the integrated system’s specific requirements for matching identity data.

Moreover, our exploration served as a resource to address the dissertation’s main research question: “How are practices and technologies for matching identity data in migration management and border control shaping and shaped by transnational commercialized security infrastructures?”

The moments of interpretative flexibility afforded insight into the ever-evolving realm of problem definitions and design solutions for identity data matching. Originally conceived as a versatile technology with applications across diverse internet markets, ELISE underwent a significant transformation, redirecting its focus towards the identity and security sector. This strategic shift was primarily driven by changing geopolitical dynamics and escalating security concerns, in which data matching technology emerged as a crucial asset for law enforcement and border security agencies. This evolution ushered in newfound design flexibility, with the data matching system expanding to encompass biometric data matching and to address the specific name matching challenges associated with identity verification in the context of security agendas. Moreover, these moments underscore the commercial nature of these technologies, not only in terms of WCC’s product offerings but also in its extensive network of implementation partners, value-added resellers, and technology collaborators, including biometric vendors and external name matching databases accessed through licensing agreements.

The gateway moments, on the other hand, highlighted opportunities and limitations surrounding data matching within transnational, commercialized security infrastructures. An illustrative case was the integration of ELISE into EU-VIS, revealing a data infrastructure in which member states have greater control over defining their matching criteria through the API offered by the central system. However, from WCC’s standpoint, this approach led to suboptimal data matching outcomes. Another noteworthy observation was the integration of data matching into pre-existing and legacy systems, representing a potential standardization moment. Data models such as the ELISE data model can serve as the targets of mapping exercises between diverse systems, enabling data matching. This concept was also exemplified by the Universal Message Format, a dedicated gateway technology providing a standard with which countries and agencies can align their data model concepts, thereby introducing the prospect of contingent standardization. These gateway moments thus offer valuable insights into the contingent influences on data matching within the broader landscape of infrastructural networks.

As highlighted, the interpretative flexibility framework in the Social Construction of Technology (SCOT) has some limitations, particularly its tendency to overlook structural contexts and the potential for power imbalances that can render certain actors invisible (Klein and Kleinman 2002). Similarly, the gateway moments approach primarily emphasizes technological components and system builders, potentially privileging specific actors while making those subject to data matching technology less visible. However, this focus on specific actors presented an opportunity in this case. Paradoxically, previous research on identification technologies, which often emphasized various sampling methods, did not delve deeply into the work of less conspicuous actors within the international network that develops, finances, and profits from identification in border security and migration management.

As Hughes (1983a) observed in his analysis of the construction of large technical systems, the substantial investments made by individuals, businesses, and other system builders wield considerable influence over the trajectory of technology. This influence engenders a phenomenon he called “technological momentum,” which propels the development of technology along specific pathways as a consequence of their concerted efforts. Nevertheless, the methods employed in this chapter also underscore the significance of contingent moments with specific effects, aiming to illustrate that the realm of identification technology and securitization is neither inherently deterministic nor linear but rather shaped by contingent choices, thereby emphasizing that alternative courses of action have been and continue to be possible.

References

Accenture. 2012. “European Commission Selects Consortium of Accenture, Morpho and HP to Maintain EU Visa Information and Biometric Matching Systems.” Press Release. https://web.archive.org/web/20201206154800/https://newsroom.accenture.com/subjects/client-winsnew-contracts/european-commission-chooses-consortium-of-accenture-morpho-and-hp-to-maintain-eu-visa-information-and-biometric-matching-systems.htm.

Accenture. 2015. “United Nations High Commissioner for Refugees and Accenture Deliver Global Biometric Identity Management System to Aid Displaced Persons.” Press Release. https://web.archive.org/web/20221203234022/https://newsroom.accenture.com/news/united-nations-high-commissioner-for-refugees-and-accenture-deliver-global-biometric-identity-management-system-to-aid-displaced-persons.htm.

Ajana, Btihaj. 2013. “Asylum, Identity Management and Biometric Control.” Journal of Refugee Studies 26 (4): 576–95. https://doi.org/10.1093/jrs/fet030.

Akrich, Madeleine. 1992. “The de-Scription of Technical Objects.” In Shaping Technology/Building Society: Studies in Sociotechnical Change, edited by Wiebe E. Bijker and John Law, 205–24. Inside Technology. Cambridge, Mass.: The MIT Press.

Amelung, Nina. 2021. “‘Crimmigration Control’ Across Borders: The Convergence of Migration and Crime Control Through Transnational Biometric Databases.” Historical Social Research 46 (3): 151–77. https://doi.org/10.12759/HSR.46.2021.3.151-177.

Amicelle, Anthony, Claudia Aradau, and Julien Jeandesboz. 2015. “Questioning Security Devices: Performativity, Resistance, Politics.” Security Dialogue 46 (4): 293–306. https://doi.org/10.1177/0967010615586964.

Amoore, Louise. 2006. “Biometric Borders: Governing Mobilities in the War on Terror.” Political Geography 25 (3): 336–51. https://doi.org/10.1016/j.polgeo.2006.02.001.

Amoore, Louise. 2013. The Politics of Possibility: Risk and Security Beyond Probability. Duke University Press.

Baird, Theodore. 2017. “Knowledge of Practice: A Multi-Sited Event Ethnography of Border Security Fairs in Europe and North America.” Security Dialogue 48 (3): 187–205. https://doi.org/10.1177/0967010617691656.

Benjamin, Robert, and Rolf Wigand. 1995. “Electronic Markets and Virtual Value Chains on the Information Superhighway.” MIT Sloan Management Review, January. https://sloanreview.mit.edu/article/electronic-markets-and-virtual-value-chains-on-the-information-superhighway/.

Benson, Michaela, and Karen O’Reilly. 2009. “Migration and the Search for a Better Way of Life: A Critical Exploration of Lifestyle Migration.” The Sociological Review 57 (4): 608–25. https://doi.org/10.1111/j.1467-954X.2009.01864.x.

Betlem, Rutger. 2011. “Utrechtse datatechnologie moet terroristen buiten de VS houden.” Het Financieele Dagblad, December.

Bigo, Didier, Sergio Carrera, Ben Hayes, Nicholas Hernanz, and Julien Jeandesboz. 2012. Justice and Home Affairs Databases and a Smart Borders System at EU External Borders: An Evaluation of Current and Forthcoming Proposals. Brussels: Centre for European Policy Studies. https://www.ceps.eu/ceps-publications/justice-and-home-affairs-databases-and-smart-borders-system-eu-external-borders/.

Bijker, Wiebe E. 1993. “Do Not Despair: There Is Life After Constructivism.” Science, Technology, & Human Values 18 (1): 113–38. https://doi.org/10.1177/016224399301800107.

Bijker, Wiebe E., Thomas Parke Hughes, and Trevor Pinch, eds. 2012. The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Anniversary ed. Cambridge, Mass: MIT Press.

Bijker, Wiebe E., and John Law, eds. 1992. Shaping Technology/Building Society: Studies in Sociotechnical Change. Inside Technology. Cambridge, Mass: MIT Press.

Biometric Technology Today. 2002. “2001 Market Review: Uncertain Times.” Biometric Technology Today 10 (1): 9–11. https://doi.org/10.1016/S0969-4765(02)00118-2.

Broeders, Dennis. 2011. “A European ‘Border’ Surveillance System Under Construction.” In Migration and the New Technological Borders of Europe, edited by Huub Dijstelbloem and Albert Meijer, 40–67. Migration, Minorities and Citizenship. London: Palgrave Macmillan. https://doi.org/10.1057/9780230299382_3.

Bush, George W. 2008. “NSPD-59 / HSPD-24 on Biometrics for Identification and Screening to Enhance National Security.” http://web.archive.org/web/20221006170237/https://irp.fas.org/offdocs/nspd/nspd-59.html.

Calder, Simon. 2022. “EU Brings in Vaccine Expiration Date of 270 Days for Travellers.” The Independent. https://web.archive.org/web/20220307103943/https://www.independent.co.uk/travel/news-and-advice/eu-vaccine-expiration-date-travel-270-b2004777.html.

Callon, Michel. 1984. “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St. Brieuc Bay.” The Sociological Review 32 (1_suppl): 196–233. https://doi.org/10.1111/j.1467-954X.1984.tb0011.

Cowan, Ruth Schwartz. 1985. “How the Refrigerator Got Its Hum.” In The Social Shaping of Technology, 202–18. Philadelphia: Open University Press.

David, Paul A., and Julie Ann Bunn. 1988. “The Economics of Gateway Technologies and Network Evolution: Lessons from Electricity Supply History.” Information Economics and Policy 3 (2): 165–202. https://doi.org/10.1016/0167-6245(88)90024-8.

De Genova, Nicholas, ed. 2017. The Borders of “Europe”: Autonomy of Migration, Tactics of Bordering. Durham, NC: Duke University Press.

Donko, Kamal, Martin Doevenspeck, and Uli Beisel. 2022. “Migration Control, the Local Economy and Violence in the Burkina Faso and Niger Borderland.” Journal of Borderlands Studies 37 (2): 235–51. https://doi.org/10.1080/08865655.2021.1997629.

Dourish, Paul. 2014. “No SQL: The Shifting Materialities of Database Technology.” Computational Culture, no. 4. http://web.archive.org/web/20230529102119/http://computationalculture.net/no-sql-the-shifting-materialities-of-database-technology/.

ECA, European Court of Auditors. 2014. Lessons from the European Commission’s Development of the Second Generation Schengen Information System (SIS II). Vol. Special report No 03/2014. Luxembourg: Publications Office of the European Union. https://data.europa.eu/doi/10.2865/8113.

Edwards, Paul N., Geoffrey C. Bowker, Steven J. Jackson, and Robin Williams. 2009. “Introduction: An Agenda for Infrastructure Studies.” Journal of the Association for Information Systems 10 (5): 364–74. https://doi.org/10.17705/1jais.00200.

Edwards, Paul N., Steven J. Jackson, Geoffrey C. Bowker, and Cory Philip Knobel. 2007. “Understanding Infrastructure: Dynamics, Tensions, and Design.” Working Paper Final report of the workshop, "History and Theory of Infrastructure: Lessons for New Scientific Cyberinfrastructures". http://deepblue.lib.umich.edu/handle/2027.42/49353.

Egyedi, Tineke. 2001. “Infrastructure Flexibility Created by Standardized Gateways: The Cases of XML and the ISO Container.” Knowledge, Technology & Policy 14 (3): 41–54. https://doi.org/10.1007/s12130-001-1015-4.

eu-LISA. 2013. “Report on the Technical Functioning of VIS, Including the Security Thereof, Pursuant to Article 50(3) of the VIS Regulation.”

eu-LISA. 2016. VIS Report Pursuant to Article 50(3) of Regulation (EC) No 767/2008: VIS Report Pursuant to Article 17(3) of Council Decision 2008/633/JHA. July 2016. Luxembourg: Publications Office of the European Union. https://data.europa.eu/doi/10.2857/022699.

eu-LISA. 2020. Report on the Technical Function of the Visa Information System (VIS). Luxembourg: Publications Office of the European Union. https://data.europa.eu/doi/10.2857/66661.

European Commission. 2020b. “Coronavirus: EU Interoperability Gateway.” Press Release. https://web.archive.org/web/20220831164020/https://ec.europa.eu/commission/presscorner/detail/en/ip_20_1904.

European Commission. 2021a. “EU Digital COVID Certificate: EU Gateway Goes Live.” Press Release. https://web.archive.org/web/20221028104309/https://ec.europa.eu/commission/presscorner/detail/en/ip_21_2721.

European Commission. 2021b. “Questions and Answers – EU Digital COVID Certificate.” Press Release. http://web.archive.org/web/20221016124101/https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_2781.

European Commission. 2023. “COMMISSION IMPLEMENTING DECISION (EU) 2023/220 of 1 February 2023 Laying down and Developing the Universal Message Format (UMF) Standard Pursuant to Regulation (EU) 2019/817 of the European Parliament and of the Council.” https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32023D0220.

European External Action Service. 2021. “Non-EU Countries Welcome to Join the EU Digital COVID Certificate System.” http://web.archive.org/web/20220816070933/https://www.eeas.europa.eu/eeas/non-eu-countries-welcome-join-eu-digital-covid-certificate-system_en.

European Union. 2019a. “Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on Establishing a Framework for Interoperability Between EU Information Systems in the Field of Borders and Visa and Amending Regulations (EC) No 767/2008, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1726 and (EU) 2018/1861 of the European Parliament and of the Council and Council Decisions 2004/512/EC and 2008/633/JHA.” http://data.europa.eu/eli/reg/2019/817/oj/eng.

Europol. 2014. Universal Message Format: Faster, Cheaper, Better. Luxembourg: Publications Office of the European Union. https://data.europa.eu/doi/10.2813/15318.

Garfinkel, Harold. 1964. “Studies of the Routine Grounds of Everyday Activities.” Social Problems 11 (3): 225–50. https://doi.org/10.2307/798722.

Gasson, Susan. 2006. “A Genealogical Study of Boundary-Spanning IS Design.” European Journal of Information Systems 15 (1): 26–41. https://doi.org/10.1057/palgrave.ejis.3000594.

Glouftsios, Georgios. 2021. “Governing Border Security Infrastructures: Maintaining Large-Scale Information Systems.” Security Dialogue 52 (5): 452–70. https://doi.org/10.1177/0967010620957230.

Glouftsios, Georgios. 2018. “Governing Circulation Through Technology Within EU Border Security Practice-Networks.” Mobilities 13 (2): 185–99. https://doi.org/10.1080/17450101.2017.1403774.

Glouftsios, Georgios. 2019. “Designing Digital Borders: The Visa Information System (VIS).” In Technology and Agency in International Relations, edited by Marijn Hoijtink and Matthias Leese, 164–87. London; New York: Routledge.

Gusterson, Hugh. 1996. Nuclear Rites. A Weapons Laboratory at the End of the Cold War. Berkeley; Los Angeles, California: University of California Press.

Gusterson, Hugh. 1997. “Studying up Revisited.” PoLAR: Political and Legal Anthropology Review 20: 114–19. https://heinonline.org/HOL/Page?handle=hein.journals/polar20&id=122&div=&collection=.

Hanseth, Ole. 2001. “Gateways — Just as Important as Standards: How the Internet Won the ‘Religious War’ over Standards in Scandinavia.” Knowledge, Technology & Policy 14 (3): 71–89. https://doi.org/10.1007/s12130-001-1017-2.

Hughes, Thomas Parke. 1983a. “Chapter 4: Reverse Salients and Critical Problems.” In Networks of Power: Electrification in Western Society, 1880-1930, 79–105. Baltimore: The Johns Hopkins University Press.

Hughes, Thomas Parke. 1983b. Networks of Power: Electrification in Western Society, 1880-1930. Baltimore: The Johns Hopkins University Press.

Hyysalo, Sampsa, Neil Pollock, and Robin A. Williams. 2019. “Method Matters in the Social Study of Technology: Investigating the Biographies of Artifacts and Practices.” Science & Technology Studies 32 (3): 2–25. https://doi.org/10.23987/sts.65532.

Isin, Engin F. 2013. “Claiming European Citizenship.” In Enacting European Citizenship, edited by Engin F. Isin and Michael Saward, 19–46. Cambridge: Cambridge University Press.

ISO/IEC. 1994. “7498-1:1994 Open Systems Interconnection — Basic Reference Model: The Basic Model.” https://www.iso.org/standard/20269.html.

Jeandesboz, Julien. 2020. “Final Report on Entry.” AdMiGov Deliverable D.1.4. Brussels: Université libre de Bruxelles. http://web.archive.org/web/20221202051249/https://admigov.eu/upload/Deliverable_D14_Jeandesboz_Final_Report_on_Entry.pdf.

Jeandesboz, Julien. 2016. “Smartening Border Security in the European Union: An Associational Inquiry.” Security Dialogue 47 (4): 292–309. https://doi.org/10.1177/0967010616650226.

Jones, Chris, Ana Valdivia, and Jane Kilpatrick. 2022. “Funds for Fortress Europe: Spending by Frontex and EU-LISA.” Statewatch. http://web.archive.org/web/20220812111011/https://www.statewatch.org/analyses/2022/funds-for-fortress-europe-spending-by-frontex-and-eu-lisa/.

Kangas, Anssi. 2019. “UMF3+ Technological Solution for Better Access to MS Data Held at Europol.” EU2019.Fi. European Parliament, Brussels, Belgium. http://web.archive.org/web/20230920101049/https://www.europarl.europa.eu/committees/it/fifth-meeting-of-the-joint-parliamentary/product-details/20190911EOT03961.

Karasti, Helena, Karen S. Baker, and Florence Millerand. 2010. “Infrastructure Time: Long-Term Matters in Collaborative Development.” Computer Supported Cooperative Work (CSCW) 19 (3): 377–415. https://doi.org/10.1007/s10606-010-9113-z.

Karasti, Helena, Florence Millerand, Christine M. Hine, and Geoffrey C. Bowker. 2016. “Knowledge Infrastructures: Part I.” Science & Technology Studies 29 (1): 2–12. https://doi.org/10.23987/sts.55406.

Kim, Arin. 2022. “South Korea Joins EU’s Digital COVID-19 Certificate System.” The Korea Herald. http://web.archive.org/web/20220817052820/http://www.koreaherald.com/view.php?ud=20220701000611.

Klein, Hans K., and Daniel Lee Kleinman. 2002. “The Social Construction of Technology: Structural Considerations.” Science, Technology, & Human Values 27 (1): 28–52. https://doi.org/10.1177/016224390202700102.

Kloppenburg, Sanneke, and Irma van der Ploeg. 2020. “Securing Identities: Biometric Technologies and the Enactment of Human Bodily Differences.” Science as Culture 29 (1): 57–76. https://doi.org/10.1080/09505431.2018.1519534.

Kuster, Brigitta, and Vassilis S. Tsianos. 2016. “How to Liquefy a Body on the Move: Eurodac and the Making of the European Digital Border.” In EU Borders and Shifting Internal Security: Technology, Externalization and Accountability, edited by Raphael Bossong and Helena Carrapico, 45–63. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-17560-7_3.

Latour, Bruno. 1986. “Visualisation and Cognition: Drawing Things Together.” Edited by H. Kuklick. Knowledge and Society Studies in the Sociology of Culture Past and Present 6: 1–40. http://web.archive.org/web/20230325193816/http://www.bruno-latour.fr/node/293.

Latour, Bruno. 2002. “Gabriel Tarde and the End of the Social.” In The Social in Question: New Bearings in History and the Social Sciences, edited by Patrick Joyce, 117–32. London: Routledge.

Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Clarendon Lectures in Management Studies. Oxford & New York: Oxford University Press.

Latour, Bruno. 1999. “On Recalling ANT.” The Sociological Review 47 (1_suppl): 15–25. https://doi.org/10.1111/j.1467-954X.1999.tb03480.x.

Latour, Bruno, and Steve Woolgar. 1986. Laboratory Life: The Construction of Scientific Facts. Second. Princeton, N.J.: Princeton University Press.

Law, John. 2006. “Traduction / Trahison: Notes on ANT.” Convergencia Revista de Ciencias Sociales, no. 42 (December): 32–57. https://convergencia.uaemex.mx/article/view/1394.

Law, John, and John Urry. 2004. “Enacting the Social.” Economy and Society 33 (3): 390–410. https://doi.org/10.1080/0308514042000225716.

Lemberg-Pedersen, Martin, Johanne Rübner Hansen, and Oliver Joel Halpern. 2020. “The Political Economy of Entry Governance.” Advancing Alternative Migration (ADMIGOV) Deliverable 1.3. Copenhagen: Aalborg University. http://web.archive.org/web/20230705132811/https://admigov.eu/upload/Deliverable_D13_Lemberg-Pedersen_The_Political_Economy_of_Entry_Governance.pdf.

Lyon, David. 2003. Surveillance After September 11. Malden, Mass.: Polity.

MacKenzie, Donald, and Judy Wajcman, eds. 1999. The Social Shaping of Technology. Philadelphia: Open University Press.

Marcus, George E. 1995. “Ethnography in/of the World System: The Emergence of Multi-Sited Ethnography.” Annual Review of Anthropology 24 (1): 95–117. https://doi.org/10.1146/annurev.an.24.100195.000523.

Marres, Noortje. 2017. Digital Sociology the Reinvention of Social Research. Malden, MA: Polity.

Marres, Noortje. 2007. “The Issues Deserve More Credit: Pragmatist Contributions to the Study of Public Involvement in Controversy.” Social Studies of Science 37 (5): 759–80. https://doi.org/10.1177/0306312706077367.

Mezzadra, Sandro, and Brett Neilson. 2013. Border as Method, or, the Multiplication of Labor. Durham: Duke University Press.

Miller, Keith J., Elizabeth Schroeder Richerson, Sarah McLeod, James Finley, and Aaron Schein. n.d. “International Multicultural Name Matching Competition: Design, Execution, Results, and Lessons Learned.” In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), 3111–7. Istanbul, Turkey: European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2012/index.html.

Monahan, Torin, and Neal A. Palmer. 2009. “The Emerging Politics of DHS Fusion Centers.” Security Dialogue 40 (6): 617–36. https://doi.org/10.1177/0967010609350314.

Monteiro, Eric, Neil Pollock, Ole Hanseth, and Robin Williams. 2013. “From Artefacts to Infrastructures.” Computer Supported Cooperative Work (CSCW) 22 (4): 575–607. https://doi.org/10.1007/s10606-012-9167-1.

Nader, Laura. 1972. “Up the Anthropologist: Perspectives Gained from Studying up.” In Reinventing Anthropology, edited by Dell Hymes, 284–311. New York: Pantheon Books. https://eric.ed.gov/?id=ED065375.

Nader, Laura. 1980. “The Vertical Slice: Hierarchies and Children.” In Hierarchy & Society: Anthropological Perspectives on Bureaucracy, edited by Gerald Mark Britan and Ronald Cohen, 31–44. Philadelphia: Institute for the Study of Human Issues. https://archive.org/details/hierarchysociety00brit_0/mode/2up.

Olivieri, Lorenzo. 2023. Temporalities of Migration. Time, Data Infrastructures and Intervention. Padova: Padova University Press.

Olwig, Karen Fog, Kristina Grünenberg, Perle Møhl, and Anja Simonsen. 2019. The Biometric Border World: Technologies, Bodies and Identities on the Move. London: Routledge. https://doi.org/10.4324/9780367808464.

Parkin, Joanna. 2011. The Difficult Road to the Schengen Information System II: The Legacy of ’Laboratories’ and the Cost for Fundamental Rights and the Rule of Law. Brussels, Belgium: Centre for European Policy Studies (CEPS). https://www.ceps.eu/system/files/book/2011/06/INEX_PB_No_13_Parkin%20on%20SIS.pdf.

Pelizza, Annalisa. 2019. “Processing Alterity, Enacting Europe: Migrant Registration and Identification as Co-Construction of Individuals and Polities.” Science, Technology, & Human Values 45 (2): 262–88. https://doi.org/10.1177/0162243919827927.

Pelizza, Annalisa. 2021. “Identification as Translation: The Art of Choosing the Right Spokespersons at the Securitized Border.” Social Studies of Science 51 (4): 487–511. https://doi.org/10.1177/0306312720983932.

Pelizza, Annalisa, and Claudia Aradau. 2024. “Scripts of Security: Between Contingency and Obduracy.” Science, Technology, & Human Values 0 (0). https://doi.org/10.1177/01622439241258822.

Pelizza, Annalisa, and Rob Hoppe. 2018. “Birth of a Failure: Consequences of Framing ICT Projects for the Centralization of Inter-Departmental Relations.” Administration & Society 50 (1): 101–30. https://doi.org/10.1177/0095399715598343.

Pelizza, Annalisa, and Chiara Loschi. 2023. “Telling ‘More Complex Stories’ of European Integration: How a Sociotechnical Perspective Can Help Explain Administrative Continuity in the Common European Asylum System.” Journal of European Public Policy, April, 1–22. https://doi.org/10.1080/13501763.2023.2197945.

Pinch, Trevor, and Wiebe E. Bijker. 1984. “The Social Construction of Facts and Artefacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other.” Social Studies of Science 14 (3): 399–441. https://doi.org/10.1177/030631284014003004.

Pollock, Neil, and Robin Williams. 2009. Software and Organisations: The Biography of the Enterprise-Wide System or How SAP Conquered the World. Routledge Studies in Technology, Work and Organisations 5. London; New York: Routledge.

Pollock, Neil, and Robin Williams. 2010. “E-Infrastructures: How Do We Know and Understand Them? Strategic Ethnography and the Biography of Artefacts.” Computer Supported Cooperative Work (CSCW) 19 (6): 521–56. https://doi.org/10.1007/s10606-010-9129-4.

Pollock, Neil, Robin Williams, and Luciana D’Adderio. 2016. “Generification as a Strategy How Software Producers Configure Products, Manage User Communities and Segment Markets.” In The New Production of Users: Changing Innovation Collectives and Involvement Strategies, edited by Sampsa Hyysalo, Torben Elgaard Jensen, and Nelly Oudshoorn, 160–89. Routledge Studies in Innovation, Organization and Technology 42. New York: Routledge.

Pollozek, Silvan, and Jan Hendrik Passoth. 2019. “Infrastructuring European Migration and Border Control: The Logistics of Registration and Identification at Moria Hotspot.” Environment and Planning D: Society and Space 37 (4): 606–24. https://doi.org/10.1177/0263775819835819.

PRNewswire. 2011. “WCC Wins Top Tier Vendor Position at MITRE Multi-Cultural Name Matching Challenge.” PR Newswire, October. https://web.archive.org/web/20111017050549/http://www.prnewswire.com/news-releases/wcc-wins-top-tier-vendor-position-at-mitre-multi-cultural-name-matching-challenge-131213309.html.

Ribes, David, and Thomas A. Finholt. 2009. “The Long Now of Infrastructure: Articulating Tensions in Development.” Journal of the Association for Information Systems 10 (5): 375–98. https://doi.org/10.17705/1jais.00199.

Rippen, René. 2006. “Sterke positie in HR en identity matching.” Database Magazine 8 (December): 36–38. http://web.archive.org/web/20230906103716/https://biplatform.nl/magazines/Aveq/111773.pdf.

Scheel, Stephan. 2019. Autonomy of Migration? Appropriating Mobility Within Biometric Border Regimes. Abingdon, Oxon; New York, NY: Routledge.

Shore, Cris, Susan Wright, and Davide Però, eds. 2011. Policy Worlds: Anthropology and the Analysis of Contemporary Power. EASA Series. New York: Berghahn Books.

Silvast, Antti, and Mikko J. Virtanen. 2023. “On Theory-Methods Packages in Science and Technology Studies.” Science, Technology, & Human Values 48 (1): 167–89. https://doi.org/10.1177/01622439211040241.

Soysüren, Ibrahim, and Mihaela Nedelcu. 2022. “European Instruments for the Deportation of Foreigners and Their Uses by France and Switzerland: The Application of the Dublin III Regulation and Eurodac.” Journal of Ethnic and Migration Studies 48 (8): 1927–43. https://doi.org/10.1080/1369183X.2020.1796278.

Sparke, Matthew B. 2006. “A Neoliberal Nexus: Economy, Security and the Biopolitics of Citizenship on the Border.” Political Geography 25 (2): 151–80. https://doi.org/10.1016/j.polgeo.2005.10.002.

Star, Susan Leigh, and Karen Ruhleder. 1996. “Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces.” Information Systems Research 7 (1): 111–34. https://doi.org/10.1287/isre.7.1.111.

Strange, Michael, Vicki Squire, and Anna Lundberg. 2017. “Irregular Migration Struggles and Active Subjects of Trans-Border Politics: New Research Strategies for Interrogating the Agency of the Marginalised.” Politics 37 (3): 243–53. https://doi.org/10.1177/0263395717715856.

Stumpf, Juliet. 2006. “The Crimmigration Crisis: Immigrants, Crime, and Sovereign Power.” American University Law Review 56 (2): 367–419.

Suchman, Lucy. 2007. Human-Machine Reconfigurations: Plans and Situated Actions. Second Edition. Learning in Doing: Social, Cognitive and Computational Perspectives. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511808418.

Trauttmansdorff, Paul, and Ulrike Felt. 2023. “Between Infrastructural Experimentation and Collective Imagination: The Digital Transformation of the EU Border Regime.” Science, Technology, & Human Values 48 (3): 635–62. https://doi.org/10.1177/01622439211057523.

Tsianos, Vassilis, and Serhat Karakayali. 2010. “Transnational Migration and the Emergence of the European Border Regime: An Ethnographic Analysis.” European Journal of Social Theory 13 (3): 373–87. https://doi.org/10.1177/1368431010371761.

WCC. 2002. “Annual Report 2002.” Amstelveen, The Netherlands: Went Computing Consultancy Group B.V. https://web.archive.org/web/20050121083505if_/http://www.wcc.nl/doc/WCCGROUP_2002.pdf.

WCC. 2003. “Annual Report 2003.” Amstelveen, The Netherlands: Went Computing Consultancy Group B.V. https://web.archive.org/web/20050121093827if_/http://www.wcc.nl/doc/WCCGROUP_2003.pdf.

WCC. 2009a. “HSPD-24 White Paper Now Available from WCC Smart Search & Match.” Security Info Watch. http://web.archive.org/web/20220806035706/https://www.securityinfowatch.com/home/news/10492664/hspd24-white-paper-now-available-from-wcc-smart-search-match.

WCC. 2009b. “Meeting the Challenges of HSDP-24: A Layered Approach to Accurate Real Time Identification.” White Paper. WCC Smart Search & Match.

Wigand, Rolf T. 2020. “Whatever Happened to Disintermediation?” Electronic Markets 30 (1): 39–47. https://doi.org/10.1007/s12525-019-00389-0.

Williams, Robin, and Neil Pollock. 2012. “Moving Beyond the Single Site Implementation Study: How (and Why) We Should Study the Biography of Packaged Enterprise Solutions.” Information Systems Research 23 (1): 1–22. https://doi.org/10.1287/isre.1110.0352.


  1. Ontological stances similar to these have their roots in earlier forms of social theory, such as Gabriel Tarde’s “monadology” (see, for example, Latour 2002).↩︎

  2. Nader (1980) proposed the concept of “vertical slice” to, for instance, map the various actors, government agencies, policies, corporations, and associations to understand how power and governance of problems are organized (see also Shore, Wright, and Però 2011). In using the term perpendicular sampling, I aimed to avoid implying that some vertical hierarchical organization exists.↩︎

  3. The report further notes that for the development of SISII a “Global Project Management Board” was established at a late stage in the project “to draw more fully on the experience of end-users in member countries” (ECA 2014, Special report No 03/2014:37).↩︎

  4. Of course, these junctions between new components and existing infrastructures are also prone to failure (Edwards et al. 2009). Such failures have, for instance, been well documented in e-government and information systems literature more broadly, where failures have been a long-standing concern due to their high stakes and use of public money (Pelizza and Hoppe 2018).↩︎

  5. https://web.archive.org/web/19981212033959/http://www.wcc.nl:80/s↩︎

  6. To achieve this, WCC ELISE employed in-memory databases instead of the conventional disk-based databases. Additionally, for an accessible overview of the shifts in database technologies, see Dourish (2014).↩︎

  7. https://web.archive.org/web/20070301063433/http://www.wcc-group.com/page.aspx?page=pagecontent&id=4171069↩︎

  8. In a newspaper article, Mr. Went was quoted as follows: “During the internet bubble, almost every job site [in the Netherlands] was a customer [of WCC]. It was an opportunistic world.”↩︎

  9. According to the 2003 Annual Report, the Object Model and Replicator were introduced in that year (WCC 2003).↩︎

  10. A newspaper article notes that “WCC founder Peter Went (49) admits honestly that the attacks on the WTC have created new opportunities for his company.” (Betlem 2011).↩︎

  11. A webpage from 2019 on WCC’s website featured a roster of technology partners specializing in biometrics, including: SecuGen, NEC, Toshiba, Iris ID, Cognitec, Warwick Warp, and Genkey (WCC 2019).↩︎

  12. A more encompassing term for these databases is “lexical databases,” a concept commonly used in natural language processing. Lexical databases comprise collections of interconnected words and their relationships within natural language.↩︎

  13. https://www.cjk.org/data/arabic/proper/database-arabic-names/↩︎

  14. According to the website, the DAN database, for example, “plays an important role in helping software developers, especially of security applications related to anti-money laundering and terror watchlists, as well as natural language processing tools, enhance their technology by enabling named entity recognition and extraction, machine translation, variant normalization, and information retrieval of Arabic names” (CJK Dictionary Institute 2018).↩︎

  15. A 2010 version of the WCC homepage lists “Name & address matching” as one capability of ELISE. In 2012, this capability was still listed, but there was a note about the MITRE challenge and a download related to “Multi-Cultural Name Matching” https://web.archive.org/web/20120322012217/http://www.wcc-group.com/. In 2016, “Built-in Name Matching” is stated as a specific feature, described as follows: “Many years of research & development have lead to advanced algorithms that make the matching process much more reliable as it includes multi-cultural name conventions and much more.” https://web.archive.org/web/20160531183140/https://www.wcc-group.com/identity-matching/border-management↩︎

  16. https://www.netowl.com/name-matching-software↩︎

  17. Another company, Rosette, also offers a dedicated name matching product, which includes “cross-script and cross-lingual matching” https://www.rosette.com/name-matching-algorithms/.↩︎