The privacy box: A software proposal

Abstract
The contradiction of social network sites is that revealing personal, private information can have harmful consequences, yet users continue to disclose such information at an alarming rate. Ironically, the advent of social network sites opens the possibility of a relatively safe place to disclose private information. This article proposes a “privacy box” application to be used within social network sites that would require users to accept a pre–written promise of confidentiality before gaining access to personal information. Although it would not serve as a universal remedy for privacy harms on social network sites, it could carve out a space for relatively safe self–disclosure online.

Contents

Introduction
The dual boundary concept
Privacy box functionality
Potential impact of the privacy box
Conclusion

Introduction

“The information contained within this box is considered private. In order to view it, the user has requested a promise of confidentiality from you. Would you like to continue?”

“Conversation has a kind of charm about it, an insinuating and insidious something that elicits secrets from us just like love or liquor.” [1] The need for individuals to share private information is innate [2]. However, with the emergence of online communication, sharing such information has never been more dangerous. Information intended for a small, select group of people is often exposed to millions when posted online. Social network sites like Facebook, MySpace, and Ning open the possibility of controlled disclosure online. What if social network sites allowed individuals to condition the disclosure of information upon a promise of confidentiality? Would this feature make social network sites a safe place to reveal private information?

Some social network sites (most notably Facebook) allow third party developers to create applications within a user’s profile [3]. This article advocates the creation of software, which I refer to as the “privacy box,” whereby a user could enter information she wishes to share with other connected “friends,” but request a promise of confidentiality in return. When a potential promisor views the promisee’s profile within the social network site, she would be presented with only a text box containing the message: “The information contained within this box is considered private. In order to view it, the user has requested a promise of confidentiality from you.” The viewer, upon clicking the “proceed” link, would then be required to accept a pre–written promise of confidentiality, which could be e–mailed to both the promisor and promisee for archival. This software would allow the user to divide what she wishes to be freely available from what she wishes to protect with explicit promises of confidentiality. The user could even specify which people in her network would be given the option of promising confidentiality. Because of the “networked friend” nature of social network sites, users would have ample opportunity to screen potential recipients of information by viewing their profiles and engaging in pre–disclosure communication. In addition to serving the normative function of reinforcing confidentiality obligations, it is possible that the privacy box could form binding confidentiality agreements or serve as the basis for the legal remedy known as promissory estoppel, an equitable doctrine aimed at enforcing promises.

This proposal begins with a review of the dual boundary concept, the psychology theory upon which the software is based. It then details the proposed functionality of the software. Finally, it explores how the software could serve to effectuate various legal remedies.

 

++++++++++

The dual boundary concept

The privacy box would be a digital manifestation of a psychology theory known as the “dual boundary concept” (DBC). First proposed by psychologists Valerian J. Derlega and Alan L. Chaikin, the dual boundary concept is explicitly based on psychologist Irwin Altman’s theory of privacy regulation.

Altman’s privacy regulation theory

Psychologist Irwin Altman is credited with one of the most prominent and widely accepted theories regarding psychological privacy. In his book The environment and social behavior [4], Altman posits that privacy is “the selective control of access to the self” [5]. Derlega and Chaikin explicitly adopted Altman’s definition of privacy [6], which provides:

“Privacy is conceived of as an interpersonal boundary process by which a person or group regulates interaction with others. By altering the degree of openness of the self to others, a hypothetical personal boundary is more or less receptive to social interaction with others. Privacy is, therefore, a dynamic process involving selective control over a self–boundary, either by an individual or by a group.” [7]

Altman theorizes that privacy has five properties:

  1. Privacy involves a mental process whereby we change how open or closed we are in response to changes in our internal states and external conditions.

  2. There is a difference between actual and desired levels of privacy.

  3. Privacy is a non–monotonic function, with an optimal level of privacy and the possibility of both too much and too little privacy.

  4. Privacy is bi–directional, involving inputs from others (e.g., noise) and outputs to others (e.g., oral communications).

  5. Privacy applies at the individual and group levels of analysis [8].

Altman’s theories involve a multitude of privacy regulation methods (e.g., verbal content, territorial behavior, cultural norms), all of which, he theorizes, operate interchangeably or complementarily to each other. Altman’s privacy regulation theory is a companion to his social penetration theory, which focuses on self–disclosure reciprocity, and it is broad enough to serve as a general theory of the regulation of social interaction [9].

The fundamental elements of the dual boundary concept

Much like Altman’s theory, the dual boundary concept focuses on privacy and interpersonal boundary regulation. Although the dual boundary concept focuses exclusively on self–disclosure, it does so within the context of psychological privacy. Regarding violation of that privacy, the authors posit: “[i]f one can choose how much or how little to divulge about oneself to another voluntarily, privacy is maintained. If another person can influence how much information we divulge about ourselves or how much information input we let in about others, a lower level of privacy exists.” [10] Moreover, “[a]djustment in self–disclosure outputs and inputs is an example of boundary regulation and the extent of control we maintain over this exchange of information contributes to the amount of privacy we have in a social relationship.” [11]

Fundamentally, DBC proposes that protection of psychological privacy occurs through the regulation of self–disclosure via a dual boundary system. First, “people function within a dyadic boundary perceived by a person as a safe zone within which to disclose to an invited other and across which disclosure does not pass, either at the time of disclosure or subsequently. The other boundary is the self or personal boundary that separates the discloser from the would–be audience.” [12] DBC then posits that “when the personal boundary is closed, private information is withheld. When it is opened, the individual discloses to others so long as the dyadic boundary is perceived as closed.” [13] Figure 1 (below) serves as a visualization of how the DBC operates.

 

Figure 1: Self–disclosure as a function of self– and dyadic–boundary adjustments. The hyphenated lines represent an open boundary and the solid lines represent a closed boundary. The figure shows that if the dyadic boundary is perceived as open, the self–boundary will remain closed to protect privacy. However, once the dyadic boundary is perceived as closed, the self–boundary will open and self–disclosure will occur [14].

 

The dyadic boundary “insures the discloser’s safety from leakage of information to uninvited third parties; this boundary establishes the precondition for self–disclosure, but is not self–disclosure. It is — as constructed by an individual — the boundary within which it is safe to disclose to the invited recipient and across which the self–disclosure will not pass.” [15] In other words, “the disclosure is safe with the recipient, as perceived by the discloser.” [16]

The self–boundary is opened when we disclose information. Of critical importance is “that adjustments in self–boundary regulation may fluctuate, involving different levels of desired self–disclosure outputs and inputs, depending on the content area of disclosure, social relationship, and personality boundaries.” [17] For example, individuals might maintain “relatively rigid self–boundaries in content areas that relate to personal problems, while maintaining open contact (i.e., inputs and outputs) in areas that do not provoke anxiety or stress … The self–boundary may be open or closed depending on such interpersonal factors as one’s perceived trust of a disclosure target.” [18]

Numerous factors influence whether an individual perceives the dyadic boundary as closed, that is, whether the information is safe from further unwanted disclosure by the recipient. For example, the level of friendship or acquaintance, legal or normative obligations of confidentiality, or even the amount of reciprocal self–disclosure by the recipient all affect the degree to which the dyadic boundary is perceived as open or closed [19]. Another important factor is anonymity, which, when present, helps ensure the dyadic boundary will not be violated. The privacy box functions as a digital manifestation of the theory: the promise of confidentiality serves to close the dyadic boundary, and the release of the information within serves to open the self–boundary. Thus, it could successfully encourage self–disclosure.
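
The boundary logic above reduces to a simple conditional: disclosure occurs only when the discloser wishes to open the self–boundary and perceives the dyadic boundary as closed. The short Python sketch below is merely illustrative; the names and structure are a simplification made for this purpose, not anything proposed by Derlega and Chaikin.

from dataclasses import dataclass

@dataclass
class DisclosureContext:
    # Factors a discloser weighs when judging the dyadic boundary [19].
    recipient_promised_confidentiality: bool  # e.g., via the privacy box
    perceived_trust: bool                     # friendship, reciprocity, norms
    wants_to_disclose: bool                   # the discloser's internal state

def dyadic_boundary_closed(ctx: DisclosureContext) -> bool:
    # The dyadic boundary is perceived as closed when the discloser believes
    # the information will not pass to uninvited third parties.
    return ctx.recipient_promised_confidentiality or ctx.perceived_trust

def self_disclosure_occurs(ctx: DisclosureContext) -> bool:
    # DBC: the self-boundary opens (disclosure occurs) only while the
    # dyadic boundary is perceived as closed (see Figure 1).
    return ctx.wants_to_disclose and dyadic_boundary_closed(ctx)

# A promise of confidentiality closes the dyadic boundary,
# so the self-boundary can safely open:
assert self_disclosure_occurs(DisclosureContext(True, False, True))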

 

++++++++++

Privacy box functionality

The unique attributes of social network sites, such as the ability to compartmentalize information and an ascertainable number of potential recipients of information, make them an ideal environment for the privacy box.

Function and operation of social network sites

Many definitions and conceptions exist for social network sites, so it is important to specify what is meant by the term in this article. At its core, an online social network is a Web site designed to enable communication through the interconnection of its users. The specific definitions, features and uses of these sites vary [20], but scholars posit that a social network site is an online community that serves as a “platform for self–identification, communication and [the] unique ability to mimic human intimacy.” [21] danah boyd and Nicole Ellison define social network sites as “Web–based services that allow individuals to (1) construct a public or semi–public profile within a bounded system, (2) articulate a list of other users with whom they share a connection, and (3) view and traverse their list of connections and those made by others within the system. The nature and nomenclature of these connections may vary from site to site.” [22]

Profiles on social network sites can be given various levels of visibility by users, who can restrict access in several different ways. “Public profiles are searchable and visible to anyone in cyberspace, while accessibility to private profiles is by invitation only.” [23]

Profiles are then linked to other social network site users by an invitation process in order to develop an individualized network. Invitations to become “friends,” “followers,” or “buddies” (depending on the site) are extended to real–life contacts, other users within a real–life contact’s network, and total strangers. “Through these networks of associated profiles, social network site participants can post or exchange photographs and video, send messages to friends instantaneously, join interest groups dedicated to virtually any topic, and leave notes on their friends’ profiles that are visible by anyone with access to the profile.” [24]

Disclosure of information on social networks

Given that disclosure of information is the central function of social network sites, it is no surprise that users often post extremely sensitive information within them. Such disclosure is further supported by findings that revelation of private information is positively linked with strength of friendship [25]. This concept of “openness as a friendship development tactic” is one of the primary forces driving self–disclosure online.

Beyond the relatively benign gossip that might lead to trivial embarrassment or hurt feelings, disclosure of private information online can lead to very tangible harm. Many employers routinely screen applicants’ social network site profiles [26]. The disclosure of sexual preference, health information (including mental states, STDs and other infections), political and social affiliation, and even misinterpreted comments on social network sites have resulted in significant emotional damage, professional and social harm, and other dramatic “off–line” consequences [27].

These scenarios bring to light the disclosure paradox of social network sites — if the posting of such sensitive information can have such dramatic consequences, then why do users, regardless of whether they are cognizant of the potential harm, continue to post private information about themselves on social network sites? In short, social network sites function as a tool to help fulfill the basic human need for openness as a means to maintain significant personal relationships. “We cannot be close to someone without revealing some personal, and often private, information about ourselves. Friendship means sharing, and sharing means relinquishing some privacy.” [28]

Although other communication technologies, such as the telephone, might be used for relationship–building purposes, often the superior benefits of online communication (including the convenience of asynchronous communication and ability to better regulate the flow of information) compel the need for safe disclosure of information online [29].

Although e–mail, bulletin boards, video conferencing and chat could all suffice for self–disclosure online, none has the unique traits that make social network sites the ideal “safe place” for online disclosure — namely, the potential to increase the amount of control users have over information, the potential for third–party applications to operate within these sites, the potential to identify and screen recipients of information, and the fact that all users of a social network site agree to the same terms–of–use agreement. These traits provide social network sites with a context conducive to legal remedies based on confidence, trust and reliance.

How the privacy box would operate

Upon initiating use of the privacy box, users could enter information they wish to share with other connected “friends,” but request a promise of confidentiality before the information is divulged. When a potential promisor views the promisee’s profile within the social network site, she would be presented with a box that said only, “The information contained within this box is considered private. In order to view it, the user has requested a promise of confidentiality from you.” The viewer, upon clicking the “proceed” link, would then be required to accept a pre–written promise of confidentiality, which could be e–mailed to both the promisor and promisee for archival. This solution would allow the user to divide what she wishes to be freely available from what she wishes to protect with explicit promises of confidentiality. The user could even specify which people in her network would be given the option of promising confidentiality. Because of the “networked friend” nature of social network sites, users would have ample opportunity to screen potential recipients of information by viewing their profiles and engaging in pre–disclosure communication.

The language of the promise would read: “By clicking agree, user promises to keep the private information contained within confidential. [Privacy Box User name] is relying on user’s promise of confidentiality as the condition for disclosure.” More specific language might (though not necessarily) be needed to form a contract, but such formalities might not be necessary for other potential remedies.
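
To make this flow concrete, the sketch below models the gate a viewer would encounter, the pre–written promise, and the archival e–mail step. It is a minimal, hypothetical implementation: every name in it (PrivacyBox, accept_promise, send_archival_email, and so on) is an assumption made for illustration, and the e–mail step is stubbed out because no particular platform API is specified in this proposal.

from dataclasses import dataclass, field
from datetime import datetime, timezone

NOTICE = ("The information contained within this box is considered private. "
          "In order to view it, the user has requested a promise of "
          "confidentiality from you.")

def promise_text(owner_name: str) -> str:
    # The pre-written promise presented after the viewer clicks "proceed".
    return (f"By clicking agree, user promises to keep the private "
            f"information contained within confidential. {owner_name} is "
            f"relying on user's promise of confidentiality as the condition "
            f"for disclosure.")

def send_archival_email(recipient: str, body: str, when: datetime) -> None:
    # Stub: a real deployment would dispatch mail here (e.g., via SMTP).
    print(f"[archival copy to {recipient} at {when.isoformat()}]\n{body}")

@dataclass
class PrivacyBox:
    owner: str
    private_content: str
    eligible_viewers: set[str]  # people offered the option to promise
    promises: dict[str, datetime] = field(default_factory=dict)

    def view(self, viewer: str) -> str:
        # Viewers who have promised see the content; eligible viewers see
        # the notice; everyone else sees nothing at all.
        if viewer in self.promises:
            return self.private_content
        if viewer in self.eligible_viewers:
            return NOTICE
        return ""

    def accept_promise(self, viewer: str) -> None:
        # Called when the viewer clicks "agree" on the pre-written promise.
        if viewer not in self.eligible_viewers:
            raise PermissionError("Viewer was not offered the promise.")
        timestamp = datetime.now(timezone.utc)
        self.promises[viewer] = timestamp
        # Copies go to both parties so each holds proof of the agreement.
        for party in (self.owner, viewer):
            send_archival_email(party, promise_text(self.owner), timestamp)

# Example: a viewer promises confidentiality, then gains access.
box = PrivacyBox(owner="Alice", private_content="(private details)",
                 eligible_viewers={"Bob"})
assert box.view("Bob") == NOTICE
box.accept_promise("Bob")
assert box.view("Bob") == "(private details)"

The design choice to e–mail the archival copy to both parties, rather than store it only on the site, reflects the evidentiary goal discussed below: each party independently holds proof of the promise.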

 

++++++++++

Potential impact of the privacy box

Benefits

The privacy box could give rise to several legal remedies designed to enforce confidentiality. Some could be based in tort (a civil wrong), such as a suit for public disclosure of private facts or perhaps the developing tort of breach of confidentiality [30]. The privacy box could also give rise to legally binding contracts. However, these remedies are difficult to apply and often provide little recourse. The public disclosure tort is difficult to apply when the information revealed was self–disclosed [31]. Contracts can be problematic because contracts entered into by minors are generally voidable, and without specific terms spelled out, contracts are difficult to enforce [32].

However, the privacy box seems custom designed for the application of an equitable doctrine known as promissory estoppel. The importance of promissory estoppel is magnified in light of the failure of traditional legal remedies to offer effective protection for the safe disclosure of true, private information on a social network site [33].

At its core, promissory estoppel is an equitable doctrine that “operates to enforce a promise even though the formal requisites of contract are absent.” [34] The Restatement (Second) of Contracts gives clarity and weight to the doctrine of promissory estoppel, providing that “a promise which the promisor should reasonably expect to induce action or forbearance on the part of the promisee or a third person and which does induce such action or forbearance is binding if injustice can be avoided only by enforcement of the promise. The remedy granted for breach may be limited as justice requires.” [35] It is a doctrine arising out of equity law [36], designed to prevent harm resulting from reasonable and detrimental reliance [37].

Generally, for a successful claim of promissory estoppel, an injured party must prove that 1) a clear, definite or unambiguous promise was made, 2) that the person making the promise intended to induce reliance on the part of the promisee (the person to whom the promise was made) or should have reasonably expected and foreseen that it would be relied upon by the promisee, 3) that there was actual substantial, reasonable or justifiable reliance on the promise by the promisee to his or her detriment, and 4) that the promise must be enforced to prevent injustice [38].

Promises on social network sites

Use of the privacy box would enable the user who disclosed information to prove the existence of a clear, definite or unambiguous promise [39]. “A promise is a manifestation of intent by the promisor to be bound, and is to be judged by an objective standard. It need not be express, but may be implied from conduct and words.” [40]

By conditioning the disclosure of information upon an explicit promise of confidentiality, viewers could not credibly deny the existence of a clear and unambiguous promise. This is particularly true if, as a function of the privacy box, copies of the agreement were e–mailed automatically to both parties. Unlike e–mail, chat and other methods of online communication, information on a social network site is typically highly compartmentalized. Users enter information in many different fields, where it appears as fragmented displays on a user’s screen. Most importantly, some social network sites (most notably Facebook) allow third party developers to create applications within a user’s profile [41]. By compartmentalizing only private information, users could make it very clear what is considered private. Courts generally support this theory. In Cohen v. Cowles Media, the Minnesota Supreme Court found that an explicit promise of confidentiality supported a successful claim of promissory estoppel [42].

Intended and detrimental reliance on social network sites

This requirement contains obligations for both the user disclosing information and the viewer promising confidentiality. First, the viewer promising confidentiality must have intended to induce reliance on the part of the user disclosing information or should have reasonably expected and foreseen that the disclosing user would rely on the viewer’s promise of confidentiality. Because the very function of a social network site is to disseminate information, a very strong presumption against an expectation of confidentiality could be inferred. As a result, any communication of personal information must carry with it an explicit and overriding context of confidentiality.

By conditioning the disclosure of information upon the affirmative promise of confidentiality within a social network site, promisees would likely be able to defeat most claims that the viewer promising confidentiality did not foresee reliance on the promise. Indeed, it is difficult to conceive of a context more agreeable to this requirement than where the user withholds personal information until the potential viewer reads language emphasizing the confidential nature of the information and explicitly agrees to keep the information confidential.

The second part of this requirement mandates that the disclosing user reasonably took action in reliance on the promise. Following the logic of Cohen, the refusal to disclose personal information until after a promise of confidentiality has been made would seem to serve as nearly indisputable reliance on that promise [43].

Avoiding injustice by enforcing promises on social network sites

The equitable nature of promissory estoppel makes it flexible enough to apply to a broad spectrum of situations as justice dictates, but also renders its application difficult to predict. Even if promise and reliance can be proven, the promise will only be enforced “if injustice can be avoided only by enforcement of the promise.” [44]

Under the logic of Cohen, courts would likely agree that it would be unjust for the law to approve the breaking of a promise. There, the court reasoned that, in view of the newspapers’ own admission of the “importance of honoring promises of confidentiality, and absent the showing of any compelling need in this case to break that promise, we conclude that the resultant harm to Cohen requires a remedy here to avoid injustice.” [45] However, “plaintiffs seeking to rely on promissory estoppel always face uncertainty as to whether a court would perceive they are entitled to recover as a matter of policy.” [46]

Norms

While legal remedies could serve to help those egregiously harmed by violations of confidence, the remedies proposed in this paper are difficult to enforce except in the more extreme instances, which is precisely the benefit of the proposal. Few users would wish to participate in a highly litigious online environment. Instead, the privacy box’s main benefit could simply be to remind users of their obligation of confidentiality, which could result in more instances of confidences being kept. Much like reading speed limit signs on the highway, the act of explicitly promising confidentiality could serve as a reminder which, over time, could help develop norms of confidentiality.

Weaknesses

The privacy box could not serve as a panacea for harms resulting from confidential disclosure online. It is ill–suited to protect both trivial and gravely sensitive personal information. The law does not deal in trivialities; enforcing obligations of confidentiality should not be done lightly, and should be requested sparingly. Additionally, the difficulty of recovering damages for privacy violations renders the privacy box somewhat undesirable for information that could damage an individual beyond tangible harm. For example, while promissory estoppel can help an individual recover expectation losses (lost jobs, clients, etc.), it does not provide for recovery for emotional harm.

The design of the privacy box requires “extra clicks,” which might seem burdensome to casual users. Finally, the fiscal viability of the privacy box is uncertain. Ostensibly, business models exist that could support the effort and technology required to maintain it, but the potential liability assumed by the administrator might be too great to warrant extensive involvement.

 

++++++++++

Conclusion

The privacy box is based on psychology theory and could give rise to several legal remedies designed to protect confidential disclosure. By making a promise of confidentiality explicit, the user could receive more normative protection, as society expects those who make promises of confidentiality to keep them. Because third party developers can create software for some social network sites, it would not be necessary to convince social network sites themselves to create this software. Rather, with the social network site’s permission, the privacy box would simply be installed by those who wished to use it.

At its core, the privacy box requires users to accept a pre–written promise of confidentiality. The e–mail message that would be sent to the promisor and promisee for archival could serve as accessible proof of an agreement. The software could also help the promisor internalize her duty of confidentiality. It would allow the user to parse what information she wishes to be freely available and what information she wishes to protect with explicit promises of confidentiality. The user could even specify which people in her network would be given access to the software. Users would have ample opportunity to screen potential recipients of information by viewing their profiles and engaging in pre–disclosure communication. Although the privacy box would not serve as a panacea for privacy harms on social network sites, it could serve to carve out a small space for relatively safe self–disclosure online.

 

About the author

Woodrow Hartzog is Roy H. Park Fellow and a Ph.D. student in the School of Journalism and Mass Communication at the University of North Carolina at Chapel Hill. He earned an LL.M. at The George Washington University Law School and a J.D. at Samford University.

 

Acknowledgements

The author would like to thank Fred Stutzman for his excellent comments on this paper.

 

Notes

1. Seneca, Roman Philosopher 1 A.D., Encyclopedia Britannica Online, at www.britannica.com.

2. See generally Aaron Ben–Ze’ev, 2003, “Privacy, emotional closeness, and openness in cyberspace,” Computers in Human Behavior, volume 19, number 4 (July), pp. 451–467.

3. See Facebook Developers, at http://developers.facebook.com/?ref=pf.

4. Irwin Altman, 1975. The environment and social behavior: Privacy, personal space, territory, crowding. Monterey, Calif.: Brooks/Cole.

5. Ibid.

6. Valerian Derlega and Alan Chaikin, 1977. “Privacy and self–disclosure in social relationships,” Journal of Social Issues, volume 33, number 3, pp. 102–115.

7. Altman, 1975, p. 6.

8. Altman, 1975; see also Stephen Margulis, 2003, “On the status and contribution of Westin’s and Altman’s theories of privacy,” Journal of Social Issues, volume 59, number 2, pp. 411–429.

9. Margulis, 2003.

10. Derlega and Chaikin, 1977, p. 103.

11. Ibid.

12. Margulis, 2003, p. 422.

13. Ibid.

14. Derlega and Chaikin, 1977, p. 105.

15. Ibid. at p. 104.

16. Ibid.

17. Ibid. at p. 105.

18. Ibid.

19. Ibid.

20. See danah boyd and Nicole Ellison, 2007. “Social network sites: Definition, history, and scholarship,” Journal of Computer–Mediated Communication, volume 13, number 1, pp. 210–230, and at http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.html.

21. Patricia Sanchez Abril, 2007, “A (My)Space of one’s own: On privacy and online social networks,” Northwestern Journal of Technology and Intellectual Property, volume 6, p. 75; see also Ian Byrnside, 2008, “Six clicks of separation: The legal ramifications of employers using social networking sites to research applicants,” Vanderbilt Journal of Entertainment and Technology Law, volume 10, pp. 445–477.

22. danah boyd and Nicole Ellison, 2007. “Social network sites: Definition, history, and scholarship,” Journal of Computer–Mediated Communication, volume 13, number 1, pp. 210–230, and at http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.html.

23. Ibid.

24. Ibid.

25. Irwin Altman, Anne Vinsel, and Barbara Brown, 1981, “Dialectic conceptions in social psychology: An application to social penetration and privacy regulation,” Advances in Experimental Social Psychology, volume 14, pp. 107–157. Altman’s “social penetration theory” posits, among other things, that the development of intimate relationships is dependent on the amount and degree of reciprocal self–disclosure.

26. See generally Byrnside, 2008; Carly Brandenburg, 2008, “The newest way to screen job applicants: A social networker’s nightmare,” Federal Communications Law Journal, volume 60, pp. 597–626.

27. Amina Sonnie, 2007. “Social networking sites: Enter at your own risk,” IEEE–USA Today’s Engineer Online, at http://www.todaysengineer.org/2007/Jan-Feb/networking.asp; AP, 2007. “Would–be teacher denied degree over ‘drunken pirate’ MySpace photo sues university,” FOXNews.com, at http://www.foxnews.com/story/0,2933,269079,00.html, accessed 29 July 2009.

28. Ben–Ze’ev, 2003, p. 458; see Altman, et al., 1981. Under social penetration theory, such disclosures could serve to build relationships, see also Charles Fried, 1970, An anatomy of values: Problems of personal and social choice. Cambridge, Mass.: Harvard University Press, p. 142.

“To be friends or lovers persons must be intimate to some degree with each other. Intimacy is the sharing of information about one’s actions, beliefs or emotions which one does not share with all, and which one has the right not to share with anyone. By conferring this right, privacy creates the moral capital which we spend in friendship and love.”

29. Ibid., p. 457 (“[I]n online relationships, people typically share personal information they do not share with their intimate offline partners.” Given the importance of emotional self–disclosure, “online relationships often have a higher degree of intimacy than offline relationships.”).

30. See Neil M. Richards and Daniel J. Solove, 2007, “Privacy’s other path: Recovering the law of confidentiality,” Georgetown Law Journal, volume 96, pp. 156–158.

31. Lior Strahilevitz, 2005, “A social networks theory of privacy,” University of Chicago Law Review, volume 72, pp. 919–988.

32. Susan Gilles, 1995, “Promises betrayed: Breach of confidence as a remedy for invasions of privacy,” Buffalo Law Review, volume 42, pp. 1–84.

33. Patricia Sanchez Abril, 2007, “Recasting privacy torts in a spaceless world,” Harvard Journal of Law and Technology, volume 21, pp. 1–47; Eugene Volokh, 2000, “Freedom of speech and information privacy: The troubling implications of a right to stop people from speaking about you,” Stanford Law Review, volume 52, pp. 1,049–1,124; and, Andrew J. McClurg, 2006, “Kiss and tell: Protecting intimate relationship privacy through implied contracts of confidentiality,” University of Cincinnati Law Review, volume 74, pp. 887–939. For a more in–depth legal analysis of the application of promissory estoppel within online communities, see Woodrow Hartzog, 2009. “Promises and privacy: Promissory estoppel and confidential disclosure in online communities,” Temple Law Review, volume 82, number 4 (forthcoming).

34. Susan Gilles, 1995, “Promises betrayed: Breach of confidence as a remedy for invasions of privacy,” Buffalo Law Review, volume 42, pp. 1–84.

35. Restatement (Second) of Contracts, § 90(1) (1981).

36. White v. Roche Biomedical Lab., Inc. 807 F. Supp. 1212 (D.S.C. 1992), order aff’d, 998 F.2d 1011 (4th Cir. 1993); Jarvis v. Ensminger, 134 P.3d 353 (Alaska 2006); Bicknese v. Sutula, 2003 WI 31 (2003).

37. Crouse v. Cylops Indus., 560 Pa. 394 (2000).

38. 31 C.J.S. Estoppel and Waiver § 117 (citing Simpson v. Murkowski, 129 P.3d 435 (Alaska 2006); Cherokee Metro. Dist. v. Simpson, 148 P.3d 142 (Colo. 2006); Zollinger v. Carrol, 137 Idaho 397, 49 P.3d 402 (2002); Heidbreder v. Carton, 645 N.W.2d 355 (Minn. 2002); Citiroof Corp. v. Tech Contracting Co., Inc., 159 Md. App. 578, 860 A.2d 425 (2004); Clevenger v. Oliver Ins. Agency, Inc., 237 S.W.3d 588 (Mo. 2007); Filippi v. Filippi, 818 A.2d 608 (R.I. 2003); Davis v. Greenwood School Dist. 50, 365 S.C. 629 (2005); Brit v. Wells Fargo Home Mortg., Inc., 75 P.3d 640 (Wyo. 2003); Bicknese, 2003 WI at 31).

39. 31 C.J.S. Estoppel and Waiver § 117 (citing Citiroof Corp. v. Tech Contracting Co., Inc., 159 Md. App. 578, 860 A.2d 425 (2004); Clevenger v. Oliver Ins. Agency, Inc., 237 S.W.3d 588 (Mo. 2007); Filippi v. Filippi, 818 A.2d 608 (R.I. 2003); Davis v. Greenwood School Dist. 50, 365 S.C. 629 (2005)).

40. Ibid. (citing Major Mat Co. v. Monsanto Co. 969 F.2d 579 (7th Cir. 1992); Coca–Cola Co. Foods Div. v. Olmarc Packaging Co., 620 F. Supp. 966 (N.D. Ill. 1985); First Nat. Bank of Cicero v. Sylvester, 196 Ill. App. 3d 902, 554 N.E.2d 1063 (1st Dist. 1990); Martin v. Scott Paper Co., 511 A.2d 1048 (Me. 1986)).

41. See Facebook Developers, at http://developers.facebook.com/?ref=pf.

42. Cohen v. Cowles Media Co., 479 N.W.2d 387 (Minn. 1992), on remand from 501 U.S. 663 (1991) (“[W]e have, without dispute, the reporters’ unambiguous promise to treat Cohen as an anonymous source.”).

43. Gilles, 1995, p. 35 (citing Cohen, 479 N.W.2d at 391; Cohen v. Cowles Media Co., 445 N.W.2d 248, 254 (Minn. Ct. App. 1989), aff’d in part and rev’d in part, 457 N.W.2d 199 (Minn. 1990), rev’d, 501 U.S. 663 (1991)).

44. Ibid. at 36 (citing RESTATEMENT (SECOND) OF CONTRACTS § 90 cmt. b (1981)).

45. Ibid. at p. 392.

46. Gilles, 1995, p. 37.

 

References

Patricia Sanchez Abril, 2007. “A (My)Space of one’s own: On privacy and online social networks,” Northwestern Journal of Technology and Intellectual Property, volume 6, pp. 74–88.

Patricia Sanchez Abril, 2007. “Recasting privacy torts in a spaceless world,” Harvard Journal of Law and Technology, volume 21, pp. 1–47.

Irwin Altman, 1975. The environment and social behavior: Privacy, personal space, territory, crowding. Monterey, Calif.: Brooks/Cole.

Irwin Altman, Anne Vinsel, and Barbara Brown, 1981. “Dialectic conceptions in social psychology: An application to social penetration and privacy regulation,” Advances in Experimental Social Psychology, volume 14, pp. 107–157.
http://dx.doi.org/10.1016/S0065-2601(08)60371-8

Aaron Ben–Ze’ev, 2003. “Privacy, emotional closeness, and openness in cyberspace,” Computers in Human Behavior, volume 19, number 4 (July), pp. 451–467.
http://dx.doi.org/10.1016/S0747-5632(02)00078-X

danah boyd and Nicole Ellison, 2007. “Social network sites: Definition, history, and scholarship,” Journal of Computer–Mediated Communication, volume 13, number 1, pp. 210–230, and at http://jcmc.indiana.edu/vol13/issue1/boyd.ellison.html, accessed 30 October 2009.

Carly Brandenburg, 2008. “The newest way to screen job applicants: A social networker’s nightmare,” Federal Communications Law Journal, volume 60, pp. 597–626.

Ian Byrnside, 2008. “Six clicks of separation: The legal ramifications of employers using social networking sites to research applicants,” Vanderbilt Journal of Entertainment and Technology Law, volume 10, pp. 445–477.

Valerian Derlega and Alan Chaikin, 1977. “Privacy and self–disclosure in social relationships,” Journal of Social Issues, volume 33, number 3, pp. 102–115.
http://dx.doi.org/10.1111/j.1540-4560.1977.tb01885.x

Charles Fried, 1970. An anatomy of values: Problems of personal and social choice. Cambridge, Mass.: Harvard University Press.

Susan Gilles, 1995. “Promises betrayed: Breach of confidence as a remedy for invasions of privacy,” Buffalo Law Review, volume 42, pp. 1–84.

Stephen Margulis, 2003. “On the status and contribution of Westin’s and Altman’s theories of privacy,” Journal of Social Issues, volume 59, number 2, pp. 411–429.
http://dx.doi.org/10.1111/1540-4560.00071

Andrew J. McClurg, 2006. “Kiss and tell: Protecting intimate relationship privacy through implied contracts of confidentiality,” University of Cincinnati Law Review, volume 74, pp. 887–939.

Neil M. Richards and Daniel J. Solove, 2007. “Privacy’s other path: Recovering the law of confidentiality,” Georgetown Law Journal, volume 96, pp. 156–158.

Amina Sonnie, 2007. “Social networking sites: Enter at your own risk,” IEEE–USA Today’s Engineer Online, at http://www.todaysengineer.org/2007/Jan-Feb/networking.asp, accessed 30 October 2009.

Lior Strahilevitz, 2005. “A social networks theory of privacy,” University of Chicago Law Review, volume 72, pp. 919–988.

Eugene Volokh, 2000. “Freedom of speech and information privacy: The troubling implications of a right to stop people from speaking about you,” Stanford Law Review, volume 52, pp. 1,049–1,124.

 


Editorial history

Paper received 18 September 2009, accepted 19 October 2009.


“The privacy box: A software proposal” by Woodrow Hartzog is licensed under a Creative Commons Attribution–Noncommercial–Share Alike 3.0 United States License.

The privacy box: A software proposal
by Woodrow Hartzog.
First Monday, Volume 14, Number 11 - 2 November 2009
https://journals.uic.edu/ojs/index.php/fm/article/view/2682/2361




