Computer science and legal methods for enforcing the personal rights of non-discrimination and privacy in ICT systems

A FIRB (Italian Fund for Basic Research) Project - Call Futuro in Ricerca 2008. Project ID: RBFR081L58.

Project duration: 1st December 2010 - 1st March 2014



Project Objectives

The growing presence and pervasiveness of ICT in everyday life raises new concerns about personal rights such as non-discrimination and individual privacy. Since these concerns limit the practical applicability and the broad acceptance of advanced technologies, such as decision support systems (DSS) and location-based services (LBS), the diffusion of these applications relies on the existence of technical tools to enforce personal rights. Developing these tools requires constant reference to what is legal and what is not, in measurable and formal terms. This calls for an interdisciplinary approach bridging the legal and the computer science research areas.

This is the approach we intend to follow in this project: starting from the requirements derived from current regulation and the legal debate, we intend to develop formal models and technical tools to enforce the intertwined personal rights of non-discrimination and individual privacy. The state of the art of both the legal and the computer science literature concentrates on the concept of data anonymization as a possible strategy to guarantee personal rights in the automated processing of personal data. Unfortunately, the issue is more complex than one would expect. According to existing regulations, data are anonymous if it is “reasonably impossible” for a malicious adversary to re-associate the data with the identity of their respondent. However, it is hard, both in legal and in computer science terms, to formally define this “reasonably impossible” condition. Another problem is that anonymity does not necessarily prevent the identification of a group of individuals, e.g., a minority or a group protected by law, and hence an unfair or discriminatory treatment of members of that group.
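The group-identification pitfall can be made concrete with a toy sketch: a table may satisfy k-anonymity with respect to its quasi-identifiers, yet an entire equivalence class may consist of members of a protected group, so a decision applied to that class still singles the group out. All records, attribute names and values below are invented for illustration; this is not project code.

```python
from collections import Counter

# Toy records: quasi-identifiers (age band, generalized ZIP) plus a
# sensitive group attribute. All values are synthetic.
records = [
    ("30-40", "561**", "minority"),
    ("30-40", "561**", "minority"),
    ("30-40", "561**", "minority"),
    ("50-60", "562**", "majority"),
    ("50-60", "562**", "majority"),
    ("50-60", "562**", "minority"),
]

def k_anonymity(rows):
    """Smallest equivalence-class size over the quasi-identifiers."""
    groups = Counter((age, zip_) for age, zip_, _ in rows)
    return min(groups.values())

print(k_anonymity(records))  # 3: every record is hidden among >= 3 people
# Yet all records in the ("30-40", "561**") class belong to the minority
# group: anonymity of individuals does not prevent group discrimination.
```

The example shows why the project treats anonymity and non-discrimination as intertwined rather than the second following from the first.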

Our research aims to address the protection of the intertwined personal rights of non-discrimination and privacy preservation from both a legal and a computer science perspective. From the legal perspective, our objective is a systematic and critical review of existing laws, regulations, codes of conduct and case law, together with the study and design of quantitative measures of the notions of anonymity, privacy and discrimination that are adequate for enforcing those personal rights in ICT systems. From the computer science perspective, our objective is to design legally grounded technical solutions for discovering and preventing discrimination in DSS and for preserving and enforcing privacy in LBS. We believe that the techniques applicable to the two problems share common issues and solutions.

Units and People

  • [UNIBO] Dipartimento di Scienze Giuridiche “Antonio Cicu”, Università di Bologna: Paco D'Onofrio (unit coordinator), Annarita Ricci (associate researcher), Carlo Bottari (professor), Giusella Finocchiaro (professor)
  • [UNIPI] Dipartimento di Informatica, Università di Pisa: Salvatore Ruggieri (project coordinator), Andrea Romei (young researcher), Dino Pedreschi (professor), Anna Monreale (post-doc), Thanh Binh Luong (PhD student)
  • [UNIMI] Dipartimento di Informatica e Comunicazione, Università di Milano: Sergio Mascetti (unit coordinator), Claudio Bettini (professor), Andrea Gerino (associate researcher)

Work Packages

The project is organized in three work packages (WP).

WP 1 Analysis and formalization of (non-)discrimination and anonymity

A highly relevant legal issue is to determine in which cases data should be considered linkable to an individual, and whether such a link depends on technological or other grounds and is absolute or relative in nature. In the words of European Directive 95/46/EC: “to determine whether a person is identifiable, account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person”. In this perspective, this work package aims at developing a systematic analysis of anonymity as an instrument of privacy and (non-)discrimination, carried out through:

  • the analysis of legislation and legal literature referring to anonymity and to (non-)discrimination;
  • the study of the legal definition of anonymity, of a right to anonymity and of (non-)discrimination;
  • the specification of the criterion of reasonableness as the very core of the notion of anonymity;
  • the analysis of the relationship between anonymity and (non-)discrimination.

WP 2 Discrimination discovery in DSS

Due to the complexity of the internals of a DSS, and to the massive amount of accumulated decisions, the statistical approaches based on hypothesis testing adopted in the literature on the “economics of discrimination” are not sufficient. We adopt a novel approach based on data mining for extracting patterns of discriminatory decisions. Since data mining models themselves are increasingly present behind the scenes of a DSS, we also consider the issue of building data mining models that prevent a DSS from taking discriminatory decisions. The privacy issues in processing data for discrimination analysis will be covered as well, since control authorities must give data owners adequate confidence about privacy protection. In addition, we investigate techniques for unveiling indirect discrimination through models of attack strategies, in analogy with similar attack models proposed in the study of privacy in LBS. The approach will be experimentally evaluated on real and synthetically generated data.
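As a hedged illustration of what “extracting patterns of discriminatory decisions” can mean, the sketch below computes a simple rule-based measure, the risk difference: the gap between the negative-decision rate for a protected group and for the rest of the population, within a given context (e.g., applicants from a certain city). The data and the choice of measure are illustrative assumptions, not the project's actual method or datasets.

```python
# Synthetic decision records:
# (protected_group_member, context_matches, benefit_denied)
decisions = [
    (True,  True,  True), (True,  True,  True), (True,  True,  False),
    (False, True,  True), (False, True,  False), (False, True,  False),
]

def risk_difference(rows):
    """Denial rate of the protected group minus that of the others,
    restricted to records matching the context."""
    denied_prot = [d for p, c, d in rows if p and c]
    denied_unprot = [d for p, c, d in rows if not p and c]
    return (sum(denied_prot) / len(denied_prot)
            - sum(denied_unprot) / len(denied_unprot))

print(round(risk_difference(decisions), 2))  # 0.33
# Protected applicants are denied in 2/3 of cases vs 1/3 for the rest:
# a positive gap flags the context as potentially discriminatory.
```

In a data mining setting, such a measure would be computed over every context (classification rule) extracted from the decision records, flagging the rules whose gap exceeds a legally grounded threshold.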

WP 3 Privacy in LBS

The state-of-the-art literature in the field of LBS shows that users' privacy can be protected either by enforcing users' anonymity or by obfuscating the sensitive information transmitted in LBS requests. We intend to investigate both techniques. Concerning the “anonymity based” techniques, there is a need to study “reasonable” adversary models: the models proposed so far in the literature have been shown to be either not conservative enough, exposing users' privacy to possible violations, or too conservative, making it impossible to provide efficient defences. Once a formal model is defined, new techniques to enforce anonymity can be designed and proved to guarantee protection in that model. Concerning the “obfuscation based” techniques, existing works in the literature show that different types of LBS require different obfuscation techniques. Hence, there is a need to classify LBSs according to the type of obfuscation techniques that can be applied, to identify, through an appropriate metric yet to be defined, the best technique for each class, and possibly to define new techniques. Finally, both the obfuscation and the anonymity based techniques will be experimentally evaluated; this will require an additional effort to develop a simulator of user movements to generate the artificial data needed for the experimental evaluation.
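One well-known “anonymity based” defence from the LBS literature is spatial cloaking: the issuer's reported position is enlarged to a region containing at least k users, so that the service provider cannot tell which of them issued the request. The sketch below, with entirely synthetic positions, is only a minimal illustration of the idea, not one of the techniques the project will design.

```python
def cloak(issuer, others, k):
    """Return a bounding box (min_x, min_y, max_x, max_y) covering the
    issuer and the k-1 other users nearest to it (spatial cloaking)."""
    by_dist = sorted(
        others,
        key=lambda p: (p[0] - issuer[0]) ** 2 + (p[1] - issuer[1]) ** 2,
    )
    cloud = [issuer] + by_dist[:k - 1]
    xs = [p[0] for p in cloud]
    ys = [p[1] for p in cloud]
    return (min(xs), min(ys), max(xs), max(ys))

# Synthetic user positions; the issuer is cloaked among k = 3 users.
users = [(1.0, 1.0), (2.0, 1.5), (9.0, 9.0), (1.5, 0.5)]
print(cloak((1.2, 1.1), users, k=3))  # (1.0, 0.5, 1.5, 1.1)
```

Note the trade-off the text describes: a larger k (more conservative adversary model) yields a larger, safer box but a less precise, less efficient service.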



Scientific papers by year.


Members Web Site

The working web site for project members (restricted access) is available here.


enforce/project.txt · Last modified: 2018/03/14 16:58 by enforce