Security through the Eyes of the Algorithm: A Short Reflection on Risk and Suspicion in the Fight against Terrorism Financing

By Tasniem Anwar

Assistant Professor of Criminology and of Empirical and Normative Studies, Faculty of Law, Vrije Universiteit Amsterdam

How many times a week do you make a financial transaction? Now imagine that all of these transactions are monitored, analyzed, and categorized for the purpose of detecting money laundering and terrorism financing. While we might not realize it, our financial behaviour is increasingly observed by both humans and algorithms in the name of security. Banks and other financial institutions have a legal responsibility to conduct client research, to monitor financial transactions for ‘suspicious’ behaviour, and to report potentially criminal transactions to the authorities. This is part of a preventive approach to combating terrorism: clients and transactions are classified and monitored based on the potential risk that they might be involved in terrorist or criminal activities. As such, financial institutions suddenly play a key role in the security of our financial system, and of our societies more broadly.

This means that banks need to detect criminal money flows amid millions of daily transactions. It is not surprising that this search is often described as ‘finding a needle in a haystack’. Yet what does the needle even look like when trying to detect suspicious transactions that might indicate terrorism financing? In other words, which indicators does a bank use for ‘suspicion’ or ‘risk’? And what forms of insecurity are produced by these very security measures? In what follows, I will provide some glimpses into the practices of financial security and their impact on human security.

As a result of these legal responsibilities, banks investigate whether their clients run a high risk of being, or being abused by, terrorism financiers, money launderers, or other financial criminals. Clients with a high-risk profile are subjected to increased monitoring and are expected to mitigate their risks. If they are unable or unwilling to do so, or if the bank does not want to accept the risk they pose, they can face delayed or frozen transactions, or even be refused as clients altogether. As banks handle millions of transactions by numerous clients across different jurisdictions, this practice is increasingly automated through the deployment of Financial Technology (FinTech). Practical examples include facial recognition software to verify a client’s identity, algorithms that detect anomalies in financial behaviour, and automated checks against international sanctions lists. FinTechs carry the promise of reducing the time and cost of compliance, flagging risks more accurately, and even detecting new patterns and indicators of financial crime.
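To make this concrete, here is a minimal, purely illustrative sketch (in Python, not any bank’s actual system) of two such automated checks: fuzzy matching a client name against a sanctions list, and flagging a transaction that deviates sharply from a client’s own history. All list entries, thresholds, and amounts are hypothetical.

```python
# A minimal, purely illustrative sketch of two automated compliance checks:
# (1) screening a client name against a sanctions list, and (2) flagging a
# transaction that deviates sharply from the client's own history.
# All list entries, thresholds, and amounts below are hypothetical.
from difflib import SequenceMatcher
from statistics import mean, stdev

SANCTIONS_LIST = ["Acme Front Co", "J. Doe Holdings"]  # hypothetical entries


def sanctions_hit(client_name: str, threshold: float = 0.8) -> bool:
    """Flag a client whose name closely resembles a sanctioned entity."""
    return any(
        SequenceMatcher(None, client_name.lower(), entry.lower()).ratio() >= threshold
        for entry in SANCTIONS_LIST
    )


def is_anomalous(amount: float, history: list[float], z_cutoff: float = 3.0) -> bool:
    """Flag a transaction amount far outside the client's usual behaviour."""
    if len(history) < 2:
        return False  # too little data to establish what is 'normal'
    z = abs(amount - mean(history)) / (stdev(history) or 1.0)
    return z >= z_cutoff


print(sanctions_hit("ACME Front Company"))             # True: a near-match
print(is_anomalous(9500.0, [40.0, 55.0, 60.0, 35.0]))  # True: far above history
```

Real deployments are far more elaborate, but even this sketch shows where judgment enters: someone has to choose the match threshold and decide how much deviation counts as ‘anomalous’.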

The use of FinTech is encouraged by international organizations such as the United Nations and the Financial Action Task Force (FATF), on the condition that it follows a risk-based approach. A risk-based approach to terrorism financing, for example, requires countries to map current and potential terrorist threats, assess the vulnerability of particular sectors, and gather intelligence on terrorist sympathizers. Such data can reveal patterns or scenarios of terrorism financing that help banks determine the possible risks of their clients and transactions. Risk-based approaches therefore need to be fluid, adaptable to changing circumstances and threat scenarios. Unlike a rule-based approach, in which an entire sector or geography is designated high-risk, a risk-based approach allows for more nuance and context. Banks do not have to flag every transaction to a particular region of the world; instead, they can combine risk factors such as sector, region, frequency, and amount to calculate risk more accurately. As such, a risk-based approach, especially when aided by algorithmic decision-making, is considered a valuable asset in countering terrorism financing.
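To make the contrast concrete, below is a toy sketch of how such a combination of factors might be scored. The weights and values are my own hypothetical assumptions, purely for illustration, not FATF methodology.

```python
# A toy illustration of a risk-based (rather than rule-based) assessment:
# several factors are weighted and combined into one score, so a single
# high-risk factor need not condemn a client on its own. The factors,
# weights, and values are hypothetical assumptions, not FATF methodology.

RISK_WEIGHTS = {"sector": 0.3, "region": 0.3, "frequency": 0.2, "amount": 0.2}


def risk_score(factors: dict[str, float]) -> float:
    """Combine per-factor risk levels (each in [0, 1]) into a weighted score."""
    return sum(RISK_WEIGHTS[name] * level for name, level in factors.items())


# A client sending small, infrequent transfers to a high-risk region:
client = {"sector": 0.7, "region": 0.9, "frequency": 0.2, "amount": 0.1}
print(f"risk score: {risk_score(client):.2f}")  # 0.54
# A rule-based approach would have flagged the high-risk region outright;
# here the low frequency and amount pull the combined score down.
```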

In this light, the use of technology and the risk-based approach seem neutral: just a more efficient way to tackle a complex security problem. Yet this begs the question: what exactly are the indicators of ‘suspicion’ or ‘risk’? After all, what is deemed ‘risky’ is not static, but deeply political. It is not merely a technical decision about available data, but a normative decision about what does not fit normal behaviour, who is not an ordinary client, and what is not a mundane financial transaction. A risk-based approach is therefore inherently a judgment on what or who is suspicious, a possible (or known) threat, out of place, and illegitimate. Yet, despite the far-reaching consequences and political nature of a risk-based approach, practical definitions are largely absent. The FATF, for example, writes in its report on terrorism financing that “The FATF Standards do not prescribe a particular risk assessment methodology, and there is no one-size fits all approach. Ideally, a risk methodology should be flexible, practical and take into consideration specific features and characteristics of the jurisdiction” (p. 19). Not very concrete. In practice, it is left to an unusual assemblage of individuals and institutions to define risk: bank employees, international organizations, intelligence services, law enforcement, but also journalists and political actors. In the aftermath of 9/11, for example, the common imaginary of a terrorism financier among many security actors was that of a wealthy, conservative Arab Muslim. As a result, Muslim communities, and financial systems associated with Muslim or migrant communities, became heavily surveilled and targeted. Such stereotypes illustrate that risk is not a neutral category, but one entangled with political views on suspicion, belonging, and threat. In more recent debates, one sector that has suffered particularly from the fight against terrorism financing is that of Non-Profit Organizations (NPOs). Even though there is minimal evidence to classify NPOs as high-risk customers, many NPOs have faced barriers in their daily work as a direct result of counter-terrorism financing regulations. Sometimes NPOs are unfairly flagged because they work in war zones where terrorist organizations are active; at other times they are criminalized for political reasons in order to shrink the space in which they operate.

Without specific attention to how our risk categories are constructed, automated decisions might not live up to the promise that FinTechs offer a more efficient and precise detection of anomalies. In recent research commissioned by the European Center for Not-for-Profit Law (ECNL), we found that companies rarely use explicit benchmarks or frameworks to measure whether FinTechs actually work more efficiently or accurately. In other words, we assume they do, but we do not know for sure. The same goes for the assumption that FinTechs can reduce bias and further financial inclusion. While most software companies strive for financial inclusion, fairness, and equality, in practice such ambitions cannot be realized without debating the underlying categorization of risk and suspicion. Furthermore, eliminating bias and discrimination against NPOs or marginalized communities requires an explicit strategy to include these groups in the design, development, and deployment of the technology. In practice, that rarely happens. That is a pity, as these groups hold the expertise and knowledge needed to identify the actual risks and threats within their sector or community.
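To show what such a benchmark could look like in principle, here is a small hypothetical sketch; the counts are invented, and nothing suggests any company uses exactly these metrics.

```python
# The kind of explicit benchmark that, according to our research, is rarely
# used in practice: comparing a system's flags against confirmed outcomes.
# All counts below are invented for illustration.

def benchmark(true_pos: int, false_pos: int, false_neg: int, true_neg: int):
    """Standard detection metrics for a transaction-flagging system."""
    precision = true_pos / (true_pos + false_pos)  # share of flags that were real hits
    recall = true_pos / (true_pos + false_neg)     # share of real cases that were caught
    fpr = false_pos / (false_pos + true_neg)       # share of legitimate activity flagged
    return precision, recall, fpr


# e.g. 8 confirmed cases caught, 992 legitimate transactions wrongly flagged,
# 2 confirmed cases missed, 99,000 legitimate transactions passed through:
p, r, fpr = benchmark(8, 992, 2, 99_000)
print(f"precision={p:.1%}  recall={r:.1%}  false-positive rate={fpr:.2%}")
# precision=0.8%  recall=80.0%  false-positive rate=0.99%
```

Even invented numbers make the ‘needle in a haystack’ problem visible: a system that catches most real cases can still flag a thousand legitimate transactions for every handful of genuine hits.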

What we observe here is a fine line between security and insecurity. Banks and financial institutions need to comply with international regulations to counter terrorism financing and other financial crimes. This is a security objective, and these actors are considered important for the enhancement of financial security. At the same time, these security objectives produce great insecurity for NPOs and other ‘risky’ clients, who are at risk of losing access to financial services. More importantly, the work that NPOs carry out, especially in conflict areas, can actively contribute to more security. Programmes that promote gender equality, reconciliation, education, and refugee assistance, among many other projects, are clear examples of enhancing human security. Yet such programmes depend on access to the financial system and on reliable money transfers to keep the projects running. Delays caused by flagging these transfers as ‘suspicious transactions’ can therefore directly impact human security on the ground. The paradox becomes clear: the objective of financial security is producing human and financial insecurity at the same time.

Being a risk or at risk is not always clear-cut or self-evident. Such assessments depend on specific contexts, political discourses, and normative frameworks. They are complex and have great consequences for security. Nevertheless, we seem to be turning towards the automation of such risk assessments, not only in financial security decisions, but also at the border and in health practices. This makes the decision to flag someone as ‘risky’ even more difficult to understand, and to challenge in cases of injustice. After all, considering the amount of data and the number of factors that an algorithm can analyze, it can be challenging to grasp how a categorization of ‘risky’ comes into being. To minimize the harm of such assessments, we need to ensure that we do not rely on a narrow understanding of security focused on pre-emptive counter-terrorism interventions based on suspicion. Rather, NPOs and human rights experts should be included in the process of defining security. They can point to human rights, a vibrant and active civic space, and legal accountability as vital pillars of a secure society. Involving this expertise in the design, development, and deployment of FinTechs is essential to prevent harm in financial security practices. Technological developments will inevitably shape our experience and understanding of security. Yet they should not obfuscate the deeper socio-political structures through which security issues take shape. And they should not obfuscate the possibility of ensuring that security is not a zero-sum game, coming at the expense of the security of marginalized communities and civic space.