Academic Lecture by Prof. Domingo-Ferrer of Universitat Rovira i Virgili (URV), Spain: New Directions in Anonymization
Posted: 2015-10-15 03:01:04 | Contributed by: Department of Computer Science
Speaker: Prof. Josep Domingo-Ferrer, Universitat Rovira i Virgili (URV), Spain
Time: October 22, 2015, 14:00
Venue: Lecture Hall, 4th Floor, Information Building
Lecture Details

Time: Thursday, October 22, 2:00-3:30 p.m.

Venue: Lecture Hall, 4th Floor, Information Building

Speaker: Prof. Josep Domingo-Ferrer, Universitat Rovira i Virgili (URV), Spain (IEEE Fellow, science and technology advisor to the Premier, recipient of the Narcís Monturiol Medal)

About the Speaker
Josep Domingo-Ferrer is a Distinguished Professor of Computer Science and an ICREA-Acadèmia Researcher at Universitat Rovira i Virgili, Tarragona, Catalonia, where he holds the UNESCO Chair in Data Privacy. His research interests are in data privacy, data security, statistical disclosure control and cryptographic protocols, with a focus on the conciliation of privacy, security and functionality.

Co-author of 5 patents and over 350 publications (h-index 44 as of July 27, 2015). Google Faculty Research Award (2014). Twice winner of the ICREA Acadèmia Prize (2008 and 2013), Govt. of Catalonia. "Narcís Monturiol" Medal for merit in science and technology (2012), Govt. of Catalonia. Elected Member, Academia Europaea (2012). Elected Member, International Statistical Institute (2012). Fellow, IEEE (2012).

Abstract: Current approaches to anonymization of microdata sets are either utility-first (use an anonymization method with suitable utility features, then evaluate the disclosure risk and, if needed, reduce the risk by possibly sacrificing some utility) or privacy-first (enforce a target privacy level via a privacy model, e.g. k-anonymity or ε-differential privacy, without regard to utility). The second approach is the only one that offers formal privacy guarantees, but it is seldom used in practice because it produces data releases with no utility guarantees. We address this conflict between utility and privacy by showing how to get privacy guarantees without destroying more utility than necessary. Furthermore, we tackle the following unresolved issues: how to make anonymization verifiable by the data subject (so that she can verify how safe the record she has contributed is), how to get rid of background knowledge assumptions when defining the intruder, and what transparency of anonymization to the user means. We present a permutation paradigm of anonymization, whereby any microdata anonymization method is functionally equivalent to permutation plus a residual amount of noise addition. Thus, the privacy offered by a method is the amount of permutation it achieves, and this amount can be verified not only by the data protector, but also by the subject contributing each record (subject-verifiability). Furthermore, we define an intruder model that makes no assumption on background knowledge, and we show how to determine the right amount of permutation to withstand such an intruder without losing more utility than necessary. Finally, we show that an anonymization method safe against such an intruder can also be safely transparent to any user, which increases the analytical utility of anonymized data.
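As a toy illustration of the permutation paradigm described above (a sketch, not code from the talk): using plain additive noise as a stand-in for an arbitrary anonymization method, an anonymized attribute can be re-expressed as a rank-matching permutation of the original values plus a residual noise term. All data, parameters and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy original attribute and a hypothetical anonymized version of it;
# noise addition stands in for an arbitrary anonymization method.
x = rng.normal(50.0, 10.0, size=20)
y = x + rng.normal(0.0, 5.0, size=20)

# Reverse-map: give each record the original value whose rank matches
# the rank of its anonymized value.
ranks_y = y.argsort().argsort()      # rank of each anonymized value
x_permuted = np.sort(x)[ranks_y]     # a permutation of the original values

# Residual noise: whatever remains of the anonymization once the
# permutation has been accounted for. By construction, adding it back
# cannot change any value's rank, so y = permutation of x + residual.
residual = y - x_permuted
```

Under this decomposition, the rank shifts between `x` and `y` measure how much permutation the method achieved, which is the quantity the talk proposes as the privacy measure that each contributing subject could verify for her own record.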
