Designating privacy as a human right: Values and elements to guide a U.S. law

 

Privacy expressed as a human right should be the underpinning of U.S. federal law, expert says

By Michelle De Mooy

The author is Principal of De Mooy Consulting, where she helps organizations innovate and build revenue through effective data management strategies. She was formerly the director of privacy and data at the Center for Democracy & Technology in Washington, D.C.


Introduction: Why privacy bills in the U.S. Congress will fail

Many of the privacy bills introduced in the U.S. Congress attempt to regulate by first defining the digital ecosystem, breaking it down into good and bad data uses, good and bad actors, and instances in which an individual’s consent is required for the collection, use, and sharing of personal data. This approach to the moving target of technology and data is almost certain to fail. No legislation that uses this approach will be able to accurately draw contours around such an amorphous and dynamic world.

To avoid this problem, federal law in the U.S. should establish privacy as a fundamental human right. Doing so would create a legal foundation that can address a wide variety of circumstances and harms related to the collection and processing of data. It would place the responsibility for protecting privacy onto entities rather than users, leveling today’s vast online power asymmetries.

I. Privacy as a human right

While privacy from government intrusion in the U.S. arguably has a constitutional foundation in the Fourth Amendment, the Constitution does not designate privacy as a human right.

A law that defines privacy as a human right could be framed by key democratic values enshrined in the U.S. Bill of Rights. This would give lawmakers a blueprint for legislation that is both grounded in law and nimble enough to address evolving technology and data use. It would also establish moral guarantees that would both reflect current social norms around privacy (such as the public’s overwhelming willingness to donate data for public-health reasons) and shape those norms as they evolve (such as excluding location data from data donated for public-health reasons).

Note: This brief does not spend time on the differences between privacy and data protection, choosing instead to bound these concepts through a common principle – an individual’s inherent interest in managing their privacy. This interest is recognized variously in the UN’s Universal Declaration of Human Rights (Article 12), the European Convention on Human Rights (Article 8), the European Charter of Fundamental Rights (Article 7), the International Covenant on Civil and Political Rights (Article 17) and the Council of Europe’s “Convention for the Protection of Individuals with Regard to the Automatic Processing of Personal Data.”

II. Democratic values and privacy rights

Privacy is essential to realizing democratic values. As previously mentioned, a federal privacy law should be responsive to key democratic values enshrined in the U.S. Bill of Rights and map them to data-protection concerns. These values should include: autonomy and self-determination; freedom of speech and thought; and freedom of movement and association. This correspondence is explored in general terms below, with the detailed policy elements associated with each value described in Section III.

Autonomy and self-determination

Currently, individuals have little to no control over the bulk of what is collected, used, and shared by entities online, and no real transparency into these practices. A federal privacy law must balance the public’s desire for privacy and trustworthy services against the practices of the commercial data-driven world by placing the burden for data stewardship onto entities. The law should require the protection of an individual’s data by default and allow individuals to decide if, how, when, for how long, and with which entities the collection, processing, disclosure, and retention of information derived from or about them may occur, including whether or not their data may be included in large-scale automated systems that produce decisions, or make inferences, about them. These permissions should not use a ‘notice and choice’ paradigm.

Instead, the protection of autonomy and self-determination requires the law to go further by recognizing that, just as a person’s value as a human being is intrinsic, so too is the value of the data about or derived from them. The law should recognize personal data as akin to a person’s physical body, with the same explicit rights to control and use it as established by U.S. case law. Though the Constitution does not explicitly state that Americans have a right to the privacy of their bodies, the U.S. Supreme Court has interpreted it to be so, such as in decisions regarding medical treatment and procreation, creating clear precedents.

Managing identity

The law should also address digital identity management in order to properly allow for autonomy and self-determination. Identity management is integral to the protection of privacy because it is similarly concerned with what a person is comfortable sharing in different contexts (we do this offline by choosing how we dress or by determining with whom we will share information like our names and phone numbers). Designed correctly, identity management online gives individuals a way to express data preferences without compromising control or revealing their identity.

Freedom of speech and thought

The freedom of speech and thought can be exercised only when information, online or offline, is allowed to circulate freely and when individuals can maintain anonymity if they choose. Entities that post information online should continue to have protections under Section 230 of the U.S. Communications Decency Act. In harmony with constitutional protections, privacy law should also not override the collection, use, sharing, or archiving of data that reasonably serves free speech and thought, such as information associated with journalism, literature and art, scientific or historical research, and statistical analysis. The Privacy Shield Framework’s Data Integrity and Purpose Limitation principle (#5b) lists some types of exempt information and uses.

Because of the manipulation of information, and the resulting influence, made possible by digital technologies, the law should recognize an individual’s right to truth. Though this right is typically expressed with regard to international laws around atrocities, it can be expressed in the context of freedom of speech and thought as a right to accurate and full information, conferring an obligation on government and private entities to provide fair and accurate information and to inform users of the origin and essential trustworthiness of that information.

Freedom of movement and association

A person or group’s movement, online and offline, and their location must be protected at the highest level in a baseline law. Deriving location usually necessitates tracking of some kind. To curtail online and offline tracking, the definition of location data should be broad enough to include any online identifier (IP address, cookies, device fingerprints, behavioral information, etc.) as well as physical and geo-location information. Location data should be designated as extremely sensitive, carrying with it increased protections, even during times of national crisis.

Internet access

Internet access should be another essential, if somewhat counterintuitive, ingredient in a privacy law. Freedom of speech, thought, movement, and association cannot be truly realized without the ability to participate in the public “square” (which today, of course, happens online) and to do so in a way that allows for obscurity and privacy. Autonomy and self-determination are also implicated by internet access: in today’s digital world, the denial of access, through action or inaction, amounts to a negation of these freedoms. The same harms arise when communities are underrepresented in datasets used to make decisions that impact them, because of the possibility of biased and/or discriminatory effects.

Finally, the freedom of movement should recognize the importance of individual choice in the marketplace and should prohibit devices or accounts from being subject to “lock-in” provisions.

III. Elements of an effective privacy law

A wide variety of groups, entities, and scholars have publicly released policy frameworks for a national privacy law. Below I’ve identified 10 elements of effective privacy protection that should be covered by a federal law, inspired in part by these frameworks and also by laws, projects, blogs, essays, reports, and papers that I’ve reviewed, authored, or read over the years. While necessary, these elements and policy details do not constitute an all-inclusive account of a law that designates privacy as a human right. The 10 elements are:

  1. Transparency and Responsibility
  2. Auditing and Reporting
  3. Sensitive Information
  4. Access and Amend
  5. Collection, Sharing and Processing Limitations
  6. Portability
  7. Risk Limitation
  8. Data Quality and Integrity
  9. Ethical Design
  10. Balanced Enforcement

1. Transparency and Responsibility

Entities must be transparent with users, the public, and regulators about the ways they collect, use, share, and store data. Users and regulators should have this information, as well as the purposes for which data is used, the entities with which it is shared (and the reasons for sharing), any commercial transactions involving the data, the use of any automated technologies, such as AI tools like bots, and the presence of automated profiling or decision-making.

Also, raw information (defined as any ‘paid’ content posted on an online platform, whether from a user or the platform itself, such as advertising, and including but not limited to images and audio/video recordings) should identify its source for users; it should be categorized by its purpose (e.g., political, advertising); and the integrity of the data should be listed along with a corresponding ‘trust’ index (similar to the Weather Channel’s Accuracy Index). In addition, all personal digital communications, such as email communications stored in the cloud, should be protected by end-to-end encryption whenever possible and should be protected from access by third parties for a time period determined by a user or user designee.
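
To make the end-to-end requirement concrete: with end-to-end encryption, only the communicating parties hold the keys that can read a message, so the platform that stores or relays it never sees plaintext. A minimal sketch using the PyNaCl library (the library choice and message contents are illustrative, not a prescription):

```python
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
# A platform storing or relaying this ciphertext cannot decrypt it.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"confidential note")

# Only Bob, holding his private key, can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"confidential note"
```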

2. Auditing and Reporting 

Transparency must be tied to accountability to be truly meaningful. Entities that can access, use, and disclose data from or about individuals should be required to conduct regular audits of data holdings, uses, and disclosures. These audits would ideally be performed by neutral nonprofit third parties, but they could also be performed by organizations themselves if they abide by pre-set standards, perhaps set by the Federal Trade Commission (FTC). Entities should also be required to publicly post or publish a data protection report, certified by company officers, that chronicles the specific purposes for which user personal information is collected and retained, impact and risk assessments performed, material changes in policy or practice, and security incidents or usage violations and the company’s response.

3. Sensitive Information  

Location information and biometric information should be designated as extremely sensitive, carrying with them increased protections around security, prohibitions on the sharing and use of this data, and increased penalties for violations.

Deriving the location of a person or group usually necessitates tracking of some kind, while biometrics may be collected either openly or covertly. To curtail online and offline tracking, the definition of location data should be broadened to include any online identifier (IP address, cookies, device fingerprints, behavioral information, etc.) as well as physical and geo-location information. To curtail the collection, use, and disclosure of a person’s biometric information, the definition of biometric data should include physiological biometrics (e.g., DNA or genetic markers; hand, face, and fingerprints; retina or eye shape), derived biometric information (e.g., voice, gestures, eye or gaze tracking, gait, typing rhythm), and any information based on an individual’s biometric identifier, regardless of how it is captured, converted, stored, or shared. Baseline privacy legislation should require the Department of Health and Human Services (HHS), in conjunction with the FTC, to make recommendations and, eventually, rules for the regulation of health and health-related data outside of the Health Insurance Portability and Accountability Act (HIPAA).

Additionally, freedom of movement in the marketplace should encompass the freedom to move between products and services, both by enabling data portability methods and by prohibiting devices or accounts from being subject to “lock-in” provisions.

4. Access and Amend 

Entities should provide individuals with a simple and timely means to access, obtain, and amend any personal information directly from, derived from, and/or about them in a readable format. Like all communications with users, the information should be made available in a variety of audio and visual formats, accessible to those with physical disabilities or cognitive/neurological differences, and in multiple languages.

5. Collection, Sharing and Processing Limitations 

Information about or derived from an individual should only be collected, used, and/or disclosed for specified, explicit, and legitimate purposes, and entities should be required to get a person’s explicit permission for each specified purpose. Permission cannot be obtained using pre-checked boxes on consent notices, through the mere presence of a privacy policy or a requirement to read it, or via misleading design, as further detailed in the Ethical Design section of this note. These restrictions should also cover the collection, sharing, and processing of personal data by non-profit entities and internet service providers.

Because of its relationship with individual integrity, dignity, and opportunity, a person’s data should be recognized as akin to a person’s physical body. U.S. case law has established a right to protect the integrity of the body from unwanted intrusion. To paraphrase Martha Nussbaum in her 1999 Oxford University Press book “Sex and Social Justice”: an individual must have the ability to move freely in digital environments with the expectation that their privacy and security are protected from intrusion by unwanted private or government entities, with true repercussions in the case of a privacy or security violation or assault.

Importantly, rights should be rigorously protected even during times of national crisis, such as the COVID-19 public health emergency. For example, when it is necessary for the government, or entities acting on its behalf, to track individuals and their locations to ensure the health and safety of citizens, privacy enables public trust and cooperation. Public health surveillance should be performed with complete transparency, using privacy-protective technologies to obscure individual identity, such as applying differential privacy to datasets. To reduce privacy risks, legislation should create rules around handling data during a public health crisis that are enforced by HHS’s Office for Civil Rights. These rules should include:

(a) Data collected should be limited to what’s necessary for health and risk assessment purposes; 

(b) Data collected should only be used for health and risk assessment purposes; 

(c) Data collected should only be shared with public health officials (defined as federal, state or local officials); 

(d) Data should not be used or stored internally by a private entity and should only be held by public health officials for as long as the crisis makes it necessary; 

(e) Privacy-enhancing technologies and methodologies, such as differential privacy and aggregation, should be applied to collected data (a brief sketch of differential privacy follows this list); 

(f) Data collected should be encrypted in transit and at rest with the key given only to individuals and public health officials. 
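
To make item (e) concrete, the most common way to apply differential privacy to an aggregate statistic is the Laplace mechanism: add calibrated random noise so that published figures show trends without confirming any one person’s presence in the data. A minimal sketch in Python (the count and the epsilon value here are illustrative, not recommendations):

```python
import numpy as np

def dp_count(true_count, epsilon, rng=None):
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (one person joining or leaving
    the data changes the count by at most 1), so adding noise drawn
    from Laplace(scale = 1/epsilon) satisfies the guarantee.
    """
    if rng is None:
        rng = np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Publish a regional case count (138 is an invented example figure)
# without exposing whether any single individual is in the dataset.
noisy_cases = dp_count(true_count=138, epsilon=0.5)
```

A smaller epsilon adds more noise and yields a stronger privacy guarantee; choosing it is as much a policy decision as a technical one.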

6. Portability 

Data portability gives users the flexibility they need to assert control over their data, an important component in the rights to autonomy and self-determination and the freedoms of movement and association. The law should require entities to make it easy and safe for users to move, copy, or transfer their data from one place to another. It should direct the National Telecommunications and Information Administration to develop standards for aspects of data portability within a year after passage (perhaps drawing from the work done by the Data Transfer Project) and fund technical implementation help for small and medium-sized entities. Violations should be considered by both the FTC and the U.S. Department of Justice Antitrust Division.
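
In practice, portability comes down to a documented, machine-readable export that a competing service can import. The sketch below shows what such an export might look like; the field names and schema are hypothetical stand-ins for whatever standard the NTIA might develop:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UserExport:
    """A self-describing export record (hypothetical schema)."""
    user_id: str
    exported_at: str
    profile: dict
    activity: list

def export_user_data(user_id, profile, activity):
    """Serialize one user's data to JSON for transfer to another service."""
    record = UserExport(
        user_id=user_id,
        exported_at=datetime.now(timezone.utc).isoformat(),
        profile=profile,
        activity=activity,
    )
    return json.dumps(asdict(record), indent=2)

# Example: a user takes their data with them when switching providers.
print(export_user_data("u-123", {"name": "Jane"}, [{"event": "login"}]))
```

The point of standardization is that the receiving service can parse this document without a bespoke integration, which is what makes switching providers realistic.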

7. Risk Limitation

The responsibility for identifying, limiting and mitigating risk(s) to an individual arising from harmful uses or exposure should be placed onto entities. These responsibilities should include making entities “information fiduciaries” and thus binding them to three “duties,” as presented by Jack Balkin and Jonathan Zittrain in their 2016 article in The Atlantic:

a) A “duty of care,” which requires entities to use the highest possible security measures for data when possible (and in all cases for data that is designated extremely or highly sensitive). Measures, one of which (encryption at rest) is sketched after this list, might include protecting data cryptographically at rest and in transit (such as through encryption) and through authentication measures (such as tokenization); rigorous oversight and management of data procured, used, accessed, processed, or stored (such as ensuring contracts are sufficiently protective and performing periodic audits); reducing the amount of data collected, retained, or stored (such as through data management processes like purpose specification); and providing users with security options in some circumstances (such as the ability to choose local storage over cloud storage); 

b) A “duty of loyalty,” requiring entities to protect against unwanted or harmful uses of a person’s data (such as those that would be unexpected, out of context, and/or highly offensive to a reasonable user, and/or those that may result in reasonably foreseeable material physical or financial harm to a user, including automated and machine-learning profiling processes that may cause negatively biased or discriminatory effects). For language along these lines, see the Data Care Act of 2018, legislation introduced in December 2018 by Senator Schatz (D-HI) and 15 other Democratic senators. “Duty of loyalty” should also include adopting the concept of “beneficence,” requiring entities to limit risks to individuals by weighing the merit of a data use against its value (or negative impact) to the person, similar to a privacy risk assessment but with requirements for entities to document positive impact on individuals; and, 

c) A “duty of confidentiality,” a concept already used in the healthcare system, which would require entities to keep data that is created, stored, maintained, or transmitted secure at all times, including requirements around securing devices and data stores and enacting administrative measures, such as training employees to recognize threats like phishing emails. 
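
As one concrete illustration of the “duty of care,” the sketch below encrypts a record at rest using the Fernet recipe from Python’s cryptography package. The record contents are invented for illustration, and a real deployment would also need key rotation and a proper secrets store:

```python
from cryptography.fernet import Fernet

# The key must live in a secrets manager or hardware module,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a record before it is written to disk or a database.
ciphertext = fernet.encrypt(b"member record: jane@example.com")

# Decrypt only at the point of authorized use; Fernet also
# authenticates the ciphertext, so tampering raises an error.
plaintext = fernet.decrypt(ciphertext)
assert plaintext == b"member record: jane@example.com"
```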

8. Data Quality and Integrity 

Data quality and integrity are together an integral part of protecting privacy, mostly because they allow for improved data minimization (by increasing the reusability of data) and improved data security (by increasing the traceability, recoverability, and auditability of data), and because they enable more accurate enforcement. Entities should take reasonable steps to ensure the accuracy, completeness, reliability, and provenance of the data they collect, use, share, and store throughout its lifecycle and to communicate these attributes to individuals when requested.

9. Ethical Design 

Technology is not value neutral — it is designed by humans, at some point, and imbued with their experiences and values, consciously and unconsciously. When design is delivered at scale and leveraged as a tool to deliberately deceive, obfuscate, or misinform, it violates human autonomy and dignity. U.S. privacy law grounded in privacy as a human right should:

  1. Recognize the power imbalance created by abusive design by restricting interfaces that deceive or mislead users. Such design could include an interface that subverts privacy by making data sharing the default, or consent notices that offer no real choices for users;
  2. Limit technology that denies individual obscurity, such as facial-recognition technology that is deployed without clear purpose or context; and
  3. Ban entities from enabling sensitive information to be made public via search, and ban un-permissioned behavioral or psychological research or studies performed on users based on their activity or with the goal of promoting engagement or product conversion.

To create accountability around design, the law might adopt standards similar to those found in product safety and consumer-protection laws to address deceptive, abusive, and dangerous design. Many of the concepts in this section are found in Woodrow Hartzog’s excellent 2018 book, “Privacy’s Blueprint: The Battle to Control the Design of New Technologies.” An example of legislation that includes design provisions is the Deceptive Experiences to Online Users Reduction (DETOUR) Act, introduced in April 2019 by Sens. Warner (D-VA) and Fischer (R-NE).

10. Balanced Enforcement 

A U.S. privacy law should strike a balance between the competing concerns of businesses and individuals when it comes to enforcement: while rights require protection and accountability for that protection, the norms of digital commerce in the U.S. were conceived in an era of emerging “surveillance capitalism.” It will take time for organizations and individuals to evolve toward a model of privacy as a human right. 

Oversight and penalties

The law should allow for civil, administrative, and criminal penalties that are flexible when possible and proportional to the violation. They should reflect guidance from the FTC on which penalties are appropriate in which circumstances. Relatedly, the FTC currently lacks the resources, rulemaking authority, and effective enforcement powers it needs to provide adequate oversight. It should either be granted the ability to issue civil penalties more broadly (including for first-time offenses) and true rulemaking authority beyond the constraints of the Magnuson-Moss Warranty Act, and/or it should be tasked with the creation of a U.S. Data Protection Authority that does have these authorities. In the latter case, the FTC should make recommendations to Congress on the range of abilities and tasks, as well as the budget and staffing needs, of such an agency.

Time-limited pre-emption of state laws and private right of action

U.S. federal privacy law should take a cue from computing systems, where preemption (a context switch) is a momentary interruption of a task that will be resumed later, by including a time-limited preemption of state laws. The preemption should either fully sunset and/or enter a review period (conducted by the FTC) three years after the law is enacted.

Including a private right of action (PROA) in a privacy law is a controversial subject, to say the least, but it is possible to create a scenario that is balanced. The law should allow for an immediate PROA around data security and the uses and disclosures of sensitive information, similar to the one found in the California Consumer Privacy Act (CCPA), which allows individuals to sue companies for negligent security practices if their data is involved in a breach. It should then enact a PROA for other provisions of a baseline privacy law after three years, such as data collection, processing, and sharing limitations, with the FTC empowered to provide rulemaking, guidance, and/or certification for lawsuits. That approach is similar to the one found in Illinois’ Biometric Information Privacy Act (BIPA), which allows individuals to sue companies that mishandle or improperly collect covered information.

Conclusion: A path forward

Consider for a moment that the United Nations’ Universal Declaration of Human Rights was adopted in 1948. That was 72 years ago. It is a reminder that privacy did not begin with the Internet and it should not end with it. It is also a rebuke to a country that has held itself up as a shining beacon of freedom and human rights for most of its 244-year history but has failed to codify the essential freedom of privacy as a human right.

Codifying privacy as a human right is crucial to a functioning and fair democracy, and the U.S. Congress should craft legislation to make it happen. Privacy in the U.S., and the freedoms associated with it, has primarily been interpreted through case law and protected through outdated laws based on industry sector and data type, both of which have created gaping holes in protection for individuals, particularly those from marginalized communities, who consistently face higher levels of corporate and government surveillance and pay a higher premium for privacy when it is made available. These laws have also done little to stop the monopolistic growth of private companies that are continually fueled by large amounts of individual data and are therefore incentivized to keep the weak status quo. Adopting privacy as a human right can redress the imbalance of protection in this country, righting the vast inequalities produced by spotty case law and an outdated legal regime, and offering a guidepost for new legislation that can keep pace with ever-evolving technology and public opinion.