%0 Conference Paper %B Computer Security Foundations Symposium (CSF), 2011 IEEE 24th %D 2011 %T Dynamic Enforcement of Knowledge-Based Security Policies %A Mardziel,P. %A Magill,S. %A Hicks, Michael W. %A Srivatsa,M. %K abstract interpretation %K belief networks %K belief tracking %K Data models %K dynamic enforcement %K Facebook %K information flow %K knowledge based systems %K knowledge-based security %K knowledge-based security policy %K privacy %K probabilistic computation %K probabilistic logic %K probabilistic polyhedral domain %K probabilistic polyhedron %K probability %K query analysis %K Security %K security of data %K semantics %K Waste materials %X This paper explores the idea of knowledge-based security policies, which are used to decide whether to answer queries over secret data based on an estimation of the querier's (possibly increased) knowledge given the results. Limiting knowledge is the goal of existing information release policies that employ mechanisms such as noising, anonymization, and redaction. Knowledge-based policies are more general: they increase flexibility by not fixing the means to restrict information flow. We enforce a knowledge-based policy by explicitly tracking a model of a querier's belief about secret data, represented as a probability distribution, and denying any query that could increase knowledge above a given threshold. We implement query analysis and belief tracking via abstract interpretation using a novel probabilistic polyhedral domain, whose design permits trading off precision with performance while ensuring estimates of a querier's knowledge are sound. Experiments with our implementation show that several useful queries can be handled efficiently, and performance scales far better than would more standard implementations of probabilistic computation based on sampling. %B Computer Security Foundations Symposium (CSF), 2011 IEEE 24th %I IEEE %P 114 - 128 %8 2011/06/27/29 %@ 978-1-61284-644-6 %G eng %R 10.1109/CSF.2011.15 %0 Journal Article %J ACM Trans. Algorithms %D 2010 %T Achieving anonymity via clustering %A Aggarwal,Gagan %A Panigrahy,Rina %A Feder,Tomás %A Thomas,Dilys %A Kenthapadi,Krishnaram %A Khuller, Samir %A Zhu,An %K anonymity %K Approximation algorithms %K clustering %K privacy %X Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of deidentifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the U.S. population can be identified using nonkey attributes (called quasi-identifiers) such as date of birth, gender, and zip code. The k-anonymity model protects privacy via requiring that nonkey attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a prespecified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-anonymity. In many cases, it lets us release a lot more information without compromising privacy. 
We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, that is, deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well. %B ACM Trans. Algorithms %V 6 %P 49:1–49:19 %8 2010/07// %@ 1549-6325 %G eng %U http://doi.acm.org/10.1145/1798596.1798602 %N 3 %R 10.1145/1798596.1798602 %0 Conference Paper %B 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) %D 2010 %T Sectored Random Projections for Cancelable Iris Biometrics %A Pillai,J.K. %A Patel, Vishal M. %A Chellappa, Rama %A Ratha,N. K. %K biometric pattern %K Biometrics %K Cancelable Biometrics %K cancelable iris biometrics %K data mining %K data privacy %K Degradation %K Eyelashes %K Eyelids %K Iris %K iris recognition %K pattern recognition %K privacy %K random processes %K Random Projections %K Robustness %K sectored random projection %K Secure Biometrics %K Security %K security of data %X Privacy and security are essential requirements in practical biometric systems. In order to prevent the theft of biometric patterns, it is desired to modify them through revocable and non-invertible transformations called Cancelable Biometrics. In this paper, we propose an efficient algorithm for generating a Cancelable Iris Biometric based on Sectored Random Projections. Our algorithm can generate a new pattern if the existing one is stolen, retain the original recognition performance, and prevent extraction of useful information from the transformed patterns. Our method also addresses some of the drawbacks of existing techniques and is robust to degradations due to eyelids and eyelashes. %B 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) %I IEEE %P 1838 - 1841 %8 2010/03// %@ 978-1-4244-4295-9 %G eng %R 10.1109/ICASSP.2010.5495383 %0 Conference Paper %D 2009 %T Controlling data in the cloud: outsourcing computation without outsourcing control %A Chow, Richard %A Golle, Philippe %A Jakobsson, Markus %A Shi, Elaine %A Staddon, Jessica %A Masuoka, Ryusuke %A Molina,Jesus %K Cloud computing %K privacy %K Security %X Cloud computing is clearly one of today's most enticing technology areas due, at least in part, to its cost-efficiency and flexibility. However, despite the surge in activity and interest, there are significant, persistent concerns about cloud computing that are impeding momentum and will eventually compromise the vision of cloud computing as a new IT procurement model. In this paper, we characterize the problems and their impact on adoption. In addition, and equally importantly, we describe how the combination of existing research thrusts has the potential to alleviate many of the concerns impeding adoption. 
In particular, we argue that with continued research advances in trusted computing and computation-supporting encryption, life in the cloud can be advantageous from a business intelligence standpoint over the isolated alternative that is more common today. %S CCSW '09 %I ACM %P 85 - 90 %8 2009 %@ 978-1-60558-784-4 %G eng %U http://doi.acm.org/10.1145/1655008.1655020 %0 Conference Paper %B Proceedings of the ACM SIGCOMM 2009 conference on Data communication %D 2009 %T Persona: an online social network with user-defined privacy %A Baden,Randy %A Bender,Adam %A Spring, Neil %A Bhattacharjee, Bobby %A Starin,Daniel %K ABE %K Facebook %K OSN %K persona %K privacy %K social networks %X Online social networks (OSNs) are immensely popular, with some claiming over 200 million users. Users share private content, such as personal information or photographs, using OSN applications. Users must trust the OSN service to protect personal information even as the OSN provider benefits from examining and sharing that information. We present Persona, an OSN where users dictate who may access their information. Persona hides user data with attribute-based encryption (ABE), allowing users to apply fine-grained policies over who may view their data. Persona provides an effective means of creating applications in which users, not the OSN, define policy over access to private data. We demonstrate new cryptographic mechanisms that enhance the general applicability of ABE. We show how Persona provides the functionality of existing online social networks with additional privacy benefits. We describe an implementation of Persona that replicates Facebook applications and show that Persona provides acceptable performance when browsing privacy-enhanced web pages, even on mobile devices. %B Proceedings of the ACM SIGCOMM 2009 conference on Data communication %S SIGCOMM '09 %I ACM %C New York, NY, USA %P 135 - 146 %8 2009/// %@ 978-1-60558-594-9 %G eng %U http://doi.acm.org/10.1145/1592568.1592585 %R 10.1145/1592568.1592585 %0 Conference Paper %B Proceedings of the 18th international conference on World wide web %D 2009 %T To join or not to join: the illusion of privacy in social networks with mixed public and private user profiles %A Zheleva,Elena %A Getoor, Lise %K attribute inference %K groups %K privacy %K social networks %X In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit an online social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose practical models that use friendship and group membership information (which is often not hidden) to infer sensitive attributes. The key novel idea is that in addition to friendship links, groups can be carriers of significant information. We show that on several well-known social media sites, we can easily and accurately recover the information of private-profile users. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks with mixed public and private user profiles. 
%B Proceedings of the 18th international conference on World wide web %S WWW '09 %I ACM %C New York, NY, USA %P 531 - 540 %8 2009/// %@ 978-1-60558-487-4 %G eng %U http://doi.acm.org/10.1145/1526709.1526781 %R 10.1145/1526709.1526781 %0 Conference Paper %B Proceedings of the 1st ACM SIGKDD international conference on Privacy, security, and trust in KDD %D 2008 %T Preserving the privacy of sensitive relationships in graph data %A Zheleva,Elena %A Getoor, Lise %K anonymization %K graph data %K identification %K link mining %K noisy-or %K privacy %K social network analysis %X In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link re-identification. We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data. %B Proceedings of the 1st ACM SIGKDD international conference on Privacy, security, and trust in KDD %S PinKDD'07 %I Springer-Verlag %C Berlin, Heidelberg %P 153 - 171 %8 2008/// %@ 3-540-78477-2, 978-3-540-78477-7 %G eng %U http://dl.acm.org/citation.cfm?id=1793474.1793485 %0 Conference Paper %B Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems %D 2006 %T Achieving anonymity via clustering %A Aggarwal,Gagan %A Feder,Tomás %A Kenthapadi,Krishnaram %A Khuller, Samir %A Panigrahy,Rina %A Thomas,Dilys %A Zhu,An %K anonymity %K Approximation algorithms %K clustering %K privacy %X Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of de-identifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, i.e., deleted from the anonymized publication. 
Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well. %B Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems %S PODS '06 %I ACM %C New York, NY, USA %P 153 - 162 %8 2006/// %@ 1-59593-318-2 %G eng %U http://doi.acm.org/10.1145/1142351.1142374 %R 10.1145/1142351.1142374 %0 Journal Article %J ACM Trans. Comput.-Hum. Interact. %D 2006 %T Shared family calendars: Promoting symmetry and accessibility %A Plaisant, Catherine %A Clamage,Aaron %A Hutchinson,Hilary Browne %A Bederson, Benjamin B. %A Druin, Allison %K calendar %K digital paper %K elderly %K family technology %K Home %K layered interface %K privacy %K universal usability %X We describe the design and use of a system facilitating the sharing of calendar information between remotely located, multi-generational family members. Most previous work in this area involves software enabling younger family members to monitor their parents. We have found, however, that older adults are equally, if not more, interested in the activities of younger family members. The major obstacle preventing them from participating in information sharing is the technology itself. Therefore, we developed a multi-layered interface approach that offers simple interaction to older users. In our system, users can choose to enter information into a computerized calendar or write it by hand on digital paper calendars. All of the information is automatically shared among everyone in the distributed family. By making the interface more accessible to older users, we promote symmetrical sharing of information among both older and younger family members. We present our participatory design process, describe the user interface, and report on an exploratory field study in three households of an extended family. %B ACM Trans. Comput.-Hum. Interact. %V 13 %P 313 - 346 %8 2006/09// %@ 1073-0516 %G eng %U http://doi.acm.org/10.1145/1183456.1183458 %N 3 %R 10.1145/1183456.1183458 %0 Journal Article %J IEEE Security & Privacy %D 2003 %T The dangers of mitigating security design flaws: a wireless case study %A Petroni,N. L. %A Arbaugh, William A. %K Communication system security %K computer security %K cryptography %K design flaw mitigation %K Dictionaries %K legacy equipment %K privacy %K Protection %K Protocols %K security design flaws %K security of data %K synchronous active attack %K telecommunication security %K Telecommunication traffic %K wired equivalent privacy protocol %K Wireless LAN %K wireless local area networks %K Wireless networks %X Mitigating design flaws often provides the only means to protect legacy equipment, particularly in wireless local area networks. A synchronous active attack against the wired equivalent privacy protocol demonstrates how mitigating one flaw or attack can facilitate another. %B IEEE Security & Privacy %V 1 %P 28 - 36 %8 2003/02// %@ 1540-7993 %G eng %N 1 %R 10.1109/MSECP.2003.1176993 %0 Conference Paper %B 2002 International Symposium on Technology and Society, 2002. (ISTAS'02) %D 2002 %T Improving Web-based civic information access: a case study of the 50 US states %A Ceaparu,I. 
%A Shneiderman, Ben %K Computer aided software engineering %K Computer science %K contact information %K Educational institutions %K government data processing %K Guidelines %K home page design features %K information resources %K Laboratories %K Modems %K Navigation %K online help %K privacy %K privacy policies %K search boxes %K Tagging %K Uniform resource locators %K US states %K USA %K User interfaces %K Web sites %K Web-based civic information access %X An analysis of the home pages of all fifty US states reveals great variety in key design features that influence efficacy. Some states had excessively large byte counts that would slow users connected by commonly used 56K modems. Many web sites had few or poorly organized links that would make it hard for citizens to find what they were interested in. Features such as search boxes, privacy policies, online help, or contact information need to be added by several states. Our analysis concludes with ten recommendations and finds many further opportunities for individual states to improve their web sites. However, still greater benefits will come through collaboration among the states that would lead to consistency, appropriate tagging, and common tools. %B 2002 International Symposium on Technology and Society, 2002. (ISTAS'02) %I IEEE %P 275 - 282 %8 2002/// %@ 0-7803-7284-0 %G eng %R 10.1109/ISTAS.2002.1013826 %0 Conference Paper %B CHI '02 extended abstracts on Human factors in computing systems %D 2002 %T Interacting with identification technology: can it make us more secure? %A Scholtz,Jean %A Johnson,Jeff %A Shneiderman, Ben %A Hope-Tindall,Peter %A Gosling,Marcus %A Phillips,Jonathon %A Wexelblat,Alan %K Biometrics %K civil liberties %K face recognition %K national id card %K privacy %K Security %B CHI '02 extended abstracts on Human factors in computing systems %S CHI EA '02 %I ACM %C New York, NY, USA %P 564 - 565 %8 2002/// %@ 1-58113-454-1 %G eng %U http://doi.acm.org/10.1145/506443.506484 %R 10.1145/506443.506484 %0 Conference Paper %B 2002 IEEE Symposium on Security and Privacy, 2002. Proceedings %D 2002 %T P5: a protocol for scalable anonymous communication %A Sherwood,R. %A Bhattacharjee, Bobby %A Srinivasan, Aravind %K Broadcasting %K communication efficiency %K Computer science %K cryptography %K data privacy %K Educational institutions %K Internet %K large anonymous groups %K P5 protocol %K packet-level simulations %K Particle measurements %K Peer to peer computing %K peer-to-peer personal privacy protocol %K privacy %K Protocols %K receiver anonymity %K scalable anonymous communication %K security of data %K sender anonymity %K sender-receiver anonymity %K Size measurement %K telecommunication security %X We present a protocol for anonymous communication over the Internet. Our protocol, called P5 (peer-to-peer personal privacy protocol), provides sender-, receiver-, and sender-receiver anonymity. P5 is designed to be implemented over current Internet protocols, and does not require any special infrastructure support. A novel feature of P5 is that it allows individual participants to trade off degree of anonymity for communication efficiency, and hence can be used to scalably implement large anonymous groups. We present a description of P5, an analysis of its anonymity and communication efficiency, and evaluate its performance using detailed packet-level simulations. %B 2002 IEEE Symposium on Security and Privacy, 2002. 
Proceedings %I IEEE %P 58 - 70 %8 2002/// %@ 0-7695-1543-6 %G eng %R 10.1109/SECPRI.2002.1004362 %0 Conference Paper %B CHI '99 extended abstracts on Human factors in computing systems %D 1999 %T Trust me, I'm accountable: trust and accountability online %A Friedman,Batya %A Thomas,John C. %A Grudin,Jonathan %A Nass,Clifford %A Nissenbaum,Helen %A Schlager,Mark %A Shneiderman, Ben %K accountability %K anonymity %K Communication %K computers and society %K ethics %K Internet %K media effects %K privacy %K Reciprocity %K repute %K social actors %K social capital %K social impacts %K trust %K value-sensitive design %K wired world %K WWW %X We live in an increasingly wired world. According to Robert Putnam, people are spending less time in persistent personal face-to-face interactions and more time in pursuits such as watching TV and using the Internet. At the same time, independently measured "social capital" -- the extent to which we trust and work for a common good -- is declining. In this panel, we explore: the impacts of electronic media on trust and accountability; whether and how electronic media can be designed and used to increase deserved trust and accountability; the relationship between protecting privacy and increasing the efficacy of communication; and how people's tendency to treat computers as social actors impacts these issues. In brief, how can modern technology enhance humanity's humanity? %B CHI '99 extended abstracts on Human factors in computing systems %S CHI EA '99 %I ACM %C New York, NY, USA %P 79 - 80 %8 1999/// %@ 1-58113-158-5 %G eng %U http://doi.acm.org/10.1145/632716.632766 %R 10.1145/632716.632766