Risks of A.I. for Human Rights

As with the potential opportunities inherent to A.I., the following list of risks is by no means exhaustive. However, failing to take a human rights-based approach to the development, deployment and use of A.I. can pose serious threats to the enjoyment of a wide range of human rights, including:

- the right to non-discrimination, through bias in hiring practices, in law enforcement, in criminal justice and in access to services;

- the right to own property, through misuse of personal data and inadequate land-ownership registration;

- the freedom from interference with privacy and the rights to peaceful assembly and association, through mass surveillance; through indiscriminate and targeted surveillance, including facial recognition technologies that may be weaponized, in ways that infringe human rights and human dignity, threaten democratic principles and violate international law; and through misuse of, or unauthorized access to, user data;

- the freedom of expression and information, through algorithms used by social media platforms and search engines that may substantially shape the information and viewpoints to which users are exposed; through targeted disinformation and deep fakes; and through censorship that restricts access to diverse sources of information;

- the right to participate in government and in free elections, through hacking and election meddling;

- the right to desirable work and the right to an adequate standard of living, through automation and job displacement, as the increasing automation of tasks by A.I. and new technologies can displace workers and lead to job loss and economic insecurity, and through biased job descriptions;

- the right to life itself, through the unregulated and unsupervised use of Lethal Autonomous Weapon Systems (LAWS);

- the right to a fair trial, through inequality of arms in judicial adjudication between the State and accused persons;

- unethical use of A.I., through the absence of the transparency, accountability and oversight mechanisms needed to address ethical concerns related to A.I.;

- discrimination in access to education, digital skills and STEM development: as A.I. and new technologies reshape labour markets and the future of work, individuals must have access to meaningful advanced digital literacy, STEM education and training programmes. Developing such competencies requires adequate learning infrastructure that equips learners to thrive in the digital economy, as both consumers and producers, and to contribute value and participate meaningfully in global digital market ecosystems and value chains. This is essential for upholding the right to education and the right to work with the readiness and competencies required by the future of work.
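To make the first risk above (bias in automated hiring) concrete, the following minimal sketch shows one conventional way such bias is surfaced: comparing selection rates across demographic groups and checking the ratio against the "four-fifths rule" used in US employment-discrimination guidance. The data, group labels and helper functions are illustrative assumptions, not part of this document.

```python
# Hypothetical sketch: measuring disparate impact in automated hiring
# decisions by comparing per-group selection rates. All data below is
# invented for illustration only.

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs -> hire rate per group."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; values below
    0.8 are a conventional red flag for adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes from an automated hiring tool:
# group A: 40 hired of 100; group B: 20 hired of 100.
decisions = ([("A", True)] * 40 + [("A", False)] * 60
             + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(decisions)
print(rates)                               # {'A': 0.4, 'B': 0.2}
print(disparate_impact_ratio(rates))       # 0.5 -> below the 0.8 threshold
```

A ratio this far below 0.8 would prompt scrutiny of the tool's training data and features; the same rate-comparison logic generalizes to law enforcement and access-to-services contexts mentioned above.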