Experiments with new technologies in migration control are increasing. Although technological advancement assists policymakers in countering potential threats and providing information to immigrants, it has also led to the infringement of immigrants’ rights. Many people move worldwide every year due to various issues, including political instability, poverty and natural disasters (Molnar, 319). Therefore, many countries have turned to new technologies to find solutions for population control, border enforcement and data collection. Although governments justify these innovations as necessary for maintaining security, their negative impacts on immigrants cannot be ignored. The use of new technologies in immigration raises various issues, including systemic discrimination, the erosion of free and informed consent and biased decision-making.
The primary technologies used in migration management include artificial intelligence (AI), machine learning, predictive analytics and automated decision-making. These technologies replace human decision-makers: they process information using a specific algorithm to generate the required output, and they need large amounts of data to learn. For example, UN projects require large data sets to forecast population movements. Although data collection is essential in providing aid to immigrants and marginalized communities, the lack of regulated oversight and accountability has led to privacy breaches and infringement of human rights. For instance, a vast amount of data was collected after the 9/11 attacks in the US concerning individuals under suspicion (Hollifield, 886). Because data collection is poorly regulated, migration data is often inaccurate, yet it still informs anti-immigration policies. Once policymakers collect such vast data, they can distort it and develop policies that limit immigrants’ ability to seek help in a foreign country.
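To make the point about data dependence concrete, the sketch below is a hypothetical illustration only: it fits a simple linear trend to invented yearly arrival figures and extrapolates one year ahead, the kind of basic predictive analytics this paragraph describes. The figures, function names and forecast are all assumptions made for illustration, not actual UN data or methods.

```python
# A minimal, purely illustrative sketch of "predictive analytics" on migration
# data: fit a linear trend to hypothetical arrival counts and extrapolate.
# All numbers are invented; real forecasting models are far more complex.

def fit_linear_trend(years, arrivals):
    """Ordinary least-squares fit of arrivals = a + b * year."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(arrivals) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, arrivals)) / \
        sum((x - mean_x) ** 2 for x in years)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical historical data (not real figures).
years = [2018, 2019, 2020, 2021, 2022]
arrivals = [12_000, 14_500, 13_800, 16_200, 17_900]

a, b = fit_linear_trend(years, arrivals)
print(f"Forecast for 2023: {a + b * 2023:,.0f} arrivals")
# The essay's point: the output depends entirely on the data fed in, so biased
# or inaccurate data yields biased or inaccurate forecasts and policies.
```

Even this toy example shows why oversight matters: nothing in the model questions where the numbers came from, so whatever distortions enter the data flow straight into the forecast and into any policy built on it.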
Secondly, the new technologies raise the issue of free will and informed consent, especially with the increasing reliance on biometric data. When governments seek to collect data about immigrants, they use coercive methods that leave little room to refuse. For instance, refugees in Jordan must scan their irises to receive their weekly food rations (Staton, 258). The question here is whether the refugees can opt out of having their data collected. Research in one of the camps revealed that most refugees were uncomfortable with these technological experiments but could not refuse because they needed food (Staton, 263). When consent is not freely given, the human right to privacy is infringed.
Furthermore, the growing involvement of the private sector in data collection, use and storage is an increasing concern. For instance, the World Food Programme has signed a contract with Palantir Technologies, a company whose software has been used in the US to track and separate families and to enforce deportations and detentions of people escaping violence in Central and Latin America (Molnar, 321). This kind of arrangement infringes on immigrants’ right to the privacy of their data, which is shared with private companies without their consent. The US government has not clarified whether the affected immigrants can refuse to have their data transferred or whether there will be any form of accountability and transparency. The new technologies thus harm immigrants by separating them from their families and deporting them without substantial evidence.
Another challenge of new technologies is the automation of immigration decision-making. Automated border control systems allow computerised passage by authenticating an electronic machine-readable travel document and verifying that the passenger is its rightful holder. The system then automatically determines eligibility to cross the border according to pre-determined rules and regulations. Although this enables governments to control immigration, instances of gender and racial bias have been widely recorded. For example, countries such as Greece and Hungary have introduced AI-powered lie detectors at border checkpoints, where passengers’ faces are monitored through a series of questions to detect signs of lying (Ozkul, 410). If the system becomes sceptical of an individual’s answers, it refers them to a human officer for further screening. Such machines cannot account for trauma and its effects on an individual’s memory, which can lead them to make the wrong decision.
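As a purely hypothetical sketch of how such pre-determined rules might be encoded, the example below hard-codes a handful of checks and returns "admit", "refer_to_officer" or "refuse". The fields, rules and function names are assumptions for illustration; real automated border control systems are far more complex and largely opaque.

```python
# Illustrative sketch only: a simplified, rule-based eligibility decision of
# the kind an automated border gate might apply. Fields and rules are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class TravelDocument:
    document_authentic: bool    # e-document chip signature verified
    holder_matches_face: bool   # gate's face comparison against the chip photo
    expiry: date
    visa_valid: bool
    flagged: bool               # e.g. watch-list hit or a "sceptical" AI score

def automated_decision(doc: TravelDocument, today: date) -> str:
    """Apply pre-determined rules and return the gate's decision."""
    if not doc.document_authentic or doc.expiry < today:
        return "refuse"
    if doc.flagged or not doc.holder_matches_face:
        # Ambiguous cases are escalated to a human officer, as the essay notes
        # the Greek and Hungarian pilots do.
        return "refer_to_officer"
    return "admit" if doc.visa_valid else "refuse"

print(automated_decision(
    TravelDocument(True, True, date(2030, 1, 1), True, False), date.today()))
```

The sketch makes the essay’s concern visible: whatever the rules omit, such as the effects of trauma on a traveller’s answers, simply cannot influence the outcome.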
Moreover, facial recognition technologies still struggle to analyse the faces of men and women with darker skin tones, meaning the systems are prone to bias regarding colour and other factors. The question is, what happens when such algorithms make a mistake? Many immigrants have been deported following an analysis made by an AI system. Unfortunately, such decisions cannot be challenged in a court of law because the court does not know whom to hold accountable (Castles, 319). The individuals involved in developing immigration management technologies include the designer, the coder and the operator, so determining whether to hold the developers or the algorithm itself accountable when human rights have been infringed is challenging. The technologies therefore weaken procedural safeguards and administrative law principles.
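The disparity described here can be expressed as a difference in error rates between groups. The short sketch below computes per-group error rates from a tiny, entirely invented set of match trials; the group labels, figures and resulting rates are assumptions for illustration, not real benchmark results.

```python
# Illustrative sketch only: measuring unequal facial recognition error rates
# across demographic groups. All records below are invented.
from collections import defaultdict

# Each record: (group label, true match?, system's decision)
trials = [
    ("lighter-skinned", True, True), ("lighter-skinned", True, True),
    ("lighter-skinned", False, False), ("lighter-skinned", True, True),
    ("darker-skinned", True, False), ("darker-skinned", True, False),
    ("darker-skinned", True, True), ("darker-skinned", False, True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, predicted in trials:
    counts[group][0] += truth != predicted
    counts[group][1] += 1

for group, (wrong, total) in counts.items():
    print(f"{group}: error rate {wrong / total:.0%}")
# Unequal rates like these are what the essay calls bias: the same system is
# simply less reliable for some travellers than for others.
```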
Thirdly, these technologies have the power to shape democracy and influence elections, which can reinforce the politics of exclusion. Through technological apps, activists organize protests and promote changes in policies regarding democracy. New technologies also reinforce power asymmetries, influencing which countries are seen as able to push for innovation (Hollifield, 889). As this competition for power continues, new technologies turn refugee camps and conflict zones into experimentation sites. Because of their vulnerability, people on the move are targeted for exploitation, and their data is used to produce inaccurate and deceptive information that threatens their privacy.
The fourth challenge surrounding new immigration management technology is governments’ lack of technical capacity to develop policies that regulate these technologies. This lack of technical ability leads governments and the public sector to over-rely on the private sector. Adopting complex technologies and experimental tools without the knowledge to understand and evaluate them is dangerous (Castles, 315). Although the private sector is responsible for ensuring that the technologies it develops do not infringe on human rights, these developments occur in black boxes that prevent the public from fully understanding how they operate. Therefore, instead of speeding up decision-making and policy development, they create new barriers to accessing justice.
Another issue is that the mobile technologies believed to facilitate immigration are not determinative. Although they can assist migrants throughout the journey, they generate digital traces that leave migrants more vulnerable to surveillance and privacy breaches. Satellites and other border control technologies can detect movement before individuals reach national borders. State officials and smugglers can also exploit the GPS apps that migrants use for directions to track them, which can lead to arrest or abduction (Molnar, 330). Moreover, the same online materials from which immigrants obtain information about their destination country can contain misinformation and falsehoods. This technology therefore provides an opportunity for vulnerable refugees to be exploited.
Lastly, most migration management technologies are not developed to benefit immigrants. Technologies such as biometric registration and lie-detecting AI are designed to serve, first and foremost, state authorities. Since they are designed to support migration controls, migrants’ interests and needs are not included in their design and implementation (Hollifield, 912). This lack of regard for immigrants opens the door for the technologies to be manipulated by government officials and the private sector to serve their own interests. For instance, the refugees in Jordan cannot decide whether or not to share their personal information because they are forced to exchange it for food. Therefore, these technologies promote violations of human rights.
Technology is nonetheless helpful to immigrants as it can connect them to information and resources. Biometric identifiers and digital identity can improve the distribution of humanitarian services such as food rations and assist immigrants in finding jobs (Sadik and Kaya, 150). The development of joint databases is also valuable, as governments can use them to counter threats stemming from a lack of information, especially about immigrants who threaten public security. Technology has bridged this gap and helped many countries identify potentially dangerous persons. Through such databases, federal police and other law enforcers can verify whether a foreigner has committed actions that would infringe the rights of other citizens.
The new technology has thus enabled law enforcers to identify lawbreakers and ensure the safety of other citizens, and the collection of biometric data has aided in the distribution of food to refugees in need. However, the collection of such sensitive data exposes immigrants to the risks of impersonation and exploitation. If ill-intentioned government officials or intelligence services steal the information, the individuals concerned are at risk of exploitation. Private companies can also manipulate this data to effect separation, detention and deportation, and smugglers can use the same technology to exploit the hopes and aspirations of refugees for profit. Therefore, it has negative impacts on immigrants and their rights.
In conclusion, new technologies harm immigration policies and migrant rights. Although these technologies are meant to control immigration, most of them involve the collection of immigrants’ data without free and informed consent. Others are prone to human manipulation and to mistakes that distort the policies governments develop. Although essential to policymakers, they carry risks of bias and discrimination that pose a significant threat to migrants and asylum seekers. Therefore, more research must be done to determine how these technologies can promote transparency and accountability.
Works Cited
Castles, Stephen. “Why Migration Policies Fail.” Celebrating 40 Years of Ethnic and Racial Studies. Routledge, 2019. 300-320.
Hollifield, James F. “The Emerging Migration State.” International Migration Review 38.3 (2004): 885-912.
Molnar, Petra. “New Technologies in Migration: Human Rights Impacts.” Forced Migration Review 61 (2019): 318-332.
Ozkul, Derya. “Automating Immigration and Asylum: The Uses of New Technologies in Migration and Asylum Governance in Europe.” (2023): 410-415.
Sadik, Giray, and Ceren Kaya. “The Role of Surveillance Technologies in the Securitization of EU Migration Policies and Border Management.” Uluslararası İlişkiler Dergisi 17.68 (2020): 145-160.
Staton, Bethan. “Eye Spy: Biometric Aid System Trials in Jordan.” The New Humanitarian (2016): 254-270.