David M. Raab
DM Review
May, 2005
My New Year’s Resolution was to research enterprise privacy technologies. Although intellectually rewarding, it didn’t seem like a very smart business move: after all, few American businesses, consumers or regulators pay much attention to privacy. The few exceptions are a handful of specific issues such as credit or health data. From a systems viewpoint, those can be handled, effectively if crudely, on a patchwork basis. So interest in comprehensive privacy infrastructure, let alone fundamental privacy technology theory, has been pretty much nil.
But as I write this in early March, the headlines include ChoicePoint’s release of 145,000 consumer records to fraudulent buyers, Bank of America’s loss of backup tapes containing credit card data on 1.2 million federal employees including U.S. senators, and the tragic hacking of Paris Hilton’s cell phone. With Paris as its poster girl, we can truly say that privacy is hot.
So perhaps my efforts have not been wasted. The seemingly random jumble of privacy problems makes more sense when placed in standard categories: ChoicePoint’s fraudulent customers are a problem with authentication; Bank of America’s tape loss is a matter of secure transport; Paris Hilton personifies excessive exposure–meaning, in this case, that her data should have been encrypted.
As these examples suggest, privacy and security are closely related. Indeed, many privacy technology experts come from a security background and tend to discuss privacy in a security-like framework. The most prominent example is PETTEP (Privacy Enhancing Technology Testing and Evaluation Project; see www.ipc.on.ca), an originally-Canadian, now-international venture to build a certification infrastructure for privacy systems. PETTEP’s approach is modeled on the Common Criteria for security evaluation, a rigorous standard developed by security agencies of the U.S. and its Cold War allies. Like the Common Criteria, PETTEP defines “profiles”, which are testable requirements that achieve specific goals in specific environments. PETTEP is working on profiles for data security, privacy-protecting data management, and accountability.
The goal of PETTEP is to provide assurance that systems making privacy-related claims actually perform as specified. Giving priority to assurance–as opposed to designing the actual functional requirements–may seem peculiar. It’s tempting to blame it on the dominance of security experts in the privacy technology community. But assurance genuinely does play a central role in privacy systems.
Understanding why requires stepping back from specific issues to contemplate a comprehensive privacy technology framework. There are two major framework development efforts under way today. The International Security, Trust & Privacy Alliance (ISTPA; see www.istpa.org) is a U.S.-based, industry-funded project to define a generic set of components that can be configured to implement different privacy policies. Privacy and Identity Management for Europe (PRIME; see www.prime-project.eu.org) is a European Commission project to develop an end-user tool to manage identity-related interactions. Since the tool can only function within the context of a complete system, PRIME also has defined a broader privacy technology framework.
The difference in perspective between ISTPA and PRIME is intriguing. ISTPA views privacy in terms of data management, and describes controlling the acquisition, use and disposal of personal information through its life cycle. PRIME is based on the notion of individuals having multiple, partial identities. Its assumed goals include revealing the least possible personal information in each situation and preventing different identities from being linked unnecessarily. The differences are narrowing, however, since ISTPA is now working with international standards bodies that have asked it to incorporate European-style data minimization.
Even in their current forms, ISTPA and PRIME are based on similar visions of a fundamental privacy transaction: people contact an organization, provide personal information with permission for limited use, and monitor compliance. Both projects elaborate on the capabilities needed to do this effectively.
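To make that fundamental transaction concrete, it can be sketched as a simple data structure: personal information tagged with the uses the individual has permitted, plus an access log the individual can inspect to monitor compliance. This is purely an illustrative sketch of the idea, not anything specified by ISTPA or PRIME; every name in it is invented.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Personal data disclosed with permission for limited use."""
    subject: str                               # who the data is about
    data: dict                                 # the personal information itself
    permitted_uses: set = field(default_factory=set)   # purposes the subject agreed to
    access_log: list = field(default_factory=list)     # trail the subject can review

    def use(self, purpose: str) -> dict:
        """Release the data only for a permitted purpose, logging every attempt."""
        allowed = purpose in self.permitted_uses
        self.access_log.append((purpose, allowed))     # record access for compliance monitoring
        if not allowed:
            raise PermissionError(f"use for {purpose!r} not permitted")
        return self.data

# A subject grants data for billing only; a marketing request is refused and logged.
record = ConsentRecord("alice", {"email": "alice@example.com"}, {"billing"})
record.use("billing")          # permitted: returns the data
```

A real system would, of course, enforce this at every point where the data is stored or shared, which is exactly why both projects elaborate so many supporting capabilities.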
Assurance is a central issue at every step. People must be assured they are dealing with the organization they think they are dealing with–and not, say, with a fake Web site in a phishing attack. They must be assured that they understand the usage rules proposed by the organization and that those rules are actually implemented in the organization’s systems. They must be assured that audit trails to monitor compliance are tamper-proof and that mechanisms to flag violations will actually work. Organizations need assurances that people are who they say they are, that the information they provide is accurate, that only the rightful owners can view and edit stored data, and that other organizations comply with restrictions on any data they share. Organizations even need assurances about themselves: that the encryption mechanisms they are using are reliable; that their communications are safe from interception; and, that their security procedures are adequate to prevent insider or external abuse.
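One of those assurances, the tamper-proof audit trail, has a well-known technical basis worth illustrating: hash chaining, in which each audit record incorporates a hash of its predecessor, so that altering any earlier record breaks every hash that follows. The sketch below is a minimal, assumed implementation of that general technique; neither PETTEP, ISTPA nor PRIME mandates these particular functions.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append_entry(log: list, entry: dict) -> list:
    """Append an entry whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)        # canonical form for hashing
    record_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev_hash, "hash": record_hash})
    return log

def verify_log(log: list) -> bool:
    """Recompute the whole chain; any altered record makes this return False."""
    prev_hash = GENESIS
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

# Build a two-entry trail; it verifies until someone edits an old record.
log = []
append_entry(log, {"who": "alice", "action": "read"})
append_entry(log, {"who": "bob", "action": "write"})
```

Note that the chain only makes tampering detectable, not impossible–which is precisely why independent third parties are needed to hold or countersign the hashes.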
In fact, assurance is so important that both ISTPA and PRIME architectures–which are not intended to get into details–explicitly include third-party certification and authentication services. This may seem intrusive to businesses that are used to building self-contained systems for their enterprise. Those concerned about government surveillance may feel better if the assurance services themselves are private businesses; others, more worried about private abuse, may actually prefer governmental solutions. Either approach could work. The point here is that, even though many people don’t yet recognize it, serious privacy protection is impossible without having such services available.
Use of third-party services itself implies a certain amount of standardization, just to allow communication. But the real standardization requirement is much deeper. Third-party assurance services are only effective if they interact with systems that use them correctly. This is why PETTEP’s focus on testing and evaluation makes sense. Privacy, like security, can only be maintained if implemented in a comprehensive, systematic fashion. And we can only learn from today’s–and tomorrow’s–privacy breaches if we evaluate them in the context of a coherent privacy framework. Projects like ISTPA, PRIME and PETTEP are important steps in that direction.
* * *
David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.