There is a major controversy rocking the UK over the far-reaching press gag orders known as “super-injunctions,” especially because they’ve been brought to the fore by a sex scandal between famous footballer Ryan Giggs and reality TV star Imogen Thomas. (This blog post is now officially illegal in the UK.) In my latest TIME.com Techland post, I explain the controversy and argue that while the injunctions are legally enforceable (Facebook has a London office with over 50 employees, and today comes word that Twitter is starting up its UK operation), they are not practically enforceable because, once out, the information cannot be controlled. I wrote:
Controlling information is possible, but only at the margin and at great cost. As information technology advances, that margin at which information can be controlled gets thinner and thinner, and the costs of doing so become greater and greater. So given the apparent futility of keeping facts secret, you’d think officials would look to find better ways of confronting the new reality. That’s unfortunately not the case.
“Why are we assuming that the world of communication, developing as rapidly as it is, can never be brought under control by other technological developments?” asked the head of the U.K.’s judiciary yesterday. “I am not giving up on the possibility that people who in effect peddle lies about others through modern technology may one day be brought under control.”
And we should not forget to look in the mirror. While the U.S. has some of the world’s most extensive free speech and press liberties, it seems every week there is a new proposal to control what information can be published online.
Watch the Mercatus Center Panel on the FCC’s Wireless Competition Report
Every year since 1995, the Federal Communications Commission has released a report on the state of competition in the wireless market. Last year’s report was the first not to find the market “effectively competitive.” As a result, expectations are high for the new annual report. How it determines the state of competition in the wireless market could affect regulatory policy and how the Commission looks at proposed mergers.
Tune in here to watch this afternoon’s panel discussion on these issues, brought to you by the Mercatus Center at George Mason University’s Technology Policy Program.
The panel features:
Thomas W. Hazlett, Professor of Law & Economics, George Mason University School of Law
Joshua D. Wright, Associate Professor of Law, George Mason University School of Law
Robert M. Frieden, Professor of Telecommunications & Law, Penn State University
In a post at Techland yesterday I noted that the FCC and FEMA’s new “PLAN” text-based emergency alert system might do little good, since new media almost always beats government in getting critical information out:
If history is any guide, however, you may not get any messages from 1600 Pennsylvania. Since the national emergency alert system was created in 1963 (first as the Emergency Broadcast System, now the Emergency Alert System), it’s never been activated at the national level, despite hurricanes, earthquakes, tornadoes, the Cuban Missile Crisis, the Oklahoma City bombing, and 9/11. Why?
The chairman of the FCC during the 9/11 attacks, Michael Powell, says that “The explosion of 24-hour-a-day, 7-day-a-week media networks in some ways has proven to supplant those original conceptions of a senior leader’s need to talk to the people.”
Given that it was Twitter, and not the President’s address, that recently broke the killing of Osama Bin Laden, you have to wonder whether the new service will be just as swiftly supplanted.
Another thing occurred to me while talking to a colleague today. The PLAN system relies on cell carriers’ ability to track your geographic location so that targeted warning messages can be sent to your phone based on where you are at the moment. Also, as far as I can tell from the FCC’s fact sheet, you’re automatically signed up for the system when you buy a phone, and you cannot opt out of presidential messages. I wonder if we’ll see a congressional hearing on the use of geo data without consumer consent.
On this blog, Adam Thierer has often written about the implicit quid pro quo between tracking and free online services. It seems to me that many folks find this an abstract concept. Here is David Brin writing in the late ’90s about the possibility of an explicit quid pro quo:
An Economy of Micropayments? I cannot predict whether such an experiment would succeed, though using a “carrot”—or what chaos theorists call an “attractor state”—offers better prospects than the [IP owner’s] coalition’s present strategy of saber rattling and making hollow legal threats. In fact, the same approach might be used to deal with other aspects of “information ownership,” even down to the change of address you file with the post office. Perhaps someday advertisers and mail-order corporations will pay fair market value for each small use, either directly to each person listed or through royalty pools that assess users each time they access data on a given person. Or we might apply the concept of “trading-out”: getting free time at some favorite per-use site in exchange for letting the owners act as agents for our database records. It could be beneficial to have database companies competing with each other, bidding for the right to handle our credit dossiers, perhaps by offering us a little cash, or else by letting us trade our data for a little fun. Proponents of such a “micropayment economy” contend that the process will eventually become so automatic and computerized that it effectively fades into the background. People would hardly notice the dribble of royalties slipping into their accounts when others use “their” facts—any more than they would note the outflowing stream of cents they pay while skimming on the Web.
That is essentially what happened, except without all the transaction costs. It seems to me that all Do Not Track will do is introduce the transaction costs that we have so far avoided, to the benefit of innovation. Who will this change benefit? The few people who are not willing to make the trade, and who today already have options to opt out. This leaves the majority of us who are willing to make the bargain in a very un-Coasean world.
“There’s No Data Sheriff on the Wild Web,” is an article by Nick Bilton in the New York Times this weekend, pointing out that no federal law punishes the massive breaches of personal information like the recent Epsilon and Sony cases.
"There needs to be new legislation and new laws need to be adopted" to protect the public, said Senator Richard Blumenthal, Democrat of Connecticut, who has been pressing Sony to answer questions about its data breach and what the company did to avoid it. "Companies need to be held accountable and need to pay significantly when private and confidential information is imperiled."
But how? Privacy experts say that Congress should pass legislation regulating companies if they collect certain types of information. If such laws existed today, they say, Sony could be held responsible for failing to properly protect the data by employing up-to-date security on its systems.
Or at the very least, companies would be forced to update their security systems. In underground online forums last week, hackers said Sony’s servers were severely outdated and infiltrating them was relatively easy.
While there may be no law requiring site operators to keep their networks updated and secure, it’s not as if they currently have no incentive to do so, and it’s not as if they are completely unaccountable. Witness the (at least) two lawsuits already filed against Sony: one in Canada for $1 billion, and one in the U.S. seeking class action status. Not to mention that the PlayStation Network is still down and losing money, to say nothing of the damage to Sony’s reputation. Are you now more or less likely to buy a PlayStation as your next console?
To the extent we do need legislation, it’s not to tell firms to keep their Apache servers up to date. There are plenty of terrible things that happen to a firm if it doesn’t take the security of its customers’ data seriously. Sony is living proof of that. Adding a criminal fine to the pile likely won’t improve private incentives. What prescriptive legislation might do, however, is put federal bureaucrats in charge of security standards, which is not a good thing in my book.
The missing incentive here might be the incentive to disclose that a breach has occurred. Rep. Mary Bono Mack has suggested that she might introduce legislation to require such disclosures. Such legislation may well be responding to a real and harmful information asymmetry. If a firm could preserve such an asymmetry, then the usual incentives wouldn’t work.
Rather than trying to legislatively predict and preempt security breaches, when it comes to the security of personal information it might be better to seek a policy of transparency and resiliency. As I explain in my latest TIME Techland piece, we may now be in a world where it’s next to impossible to ensure that our private personal information, once digitized and connected to the net, won’t be at least partly compromised. To attempt to put that genie back in the bottle might be not only futile, but counterproductive. Instead, we may be better served by being informed when our data is compromised, seeking civil redress, and learning to cope with the new reality. As I write in the piece:
On net, the fact that we now live in a hyper-connected world where information can’t be controlled is a good thing. The cultural, social, economic and political benefits of such a transparent system will likely outweigh the price we pay in privacy and security. And that’s especially the case if we learn to live with that reality.
Human beings are incredibly resilient, and faced with a new environment, we adapt. When major changes take place, from natural disasters to the Industrial Revolution, we learn to live in the new context, but only if we acknowledge the new reality. We need to get used to this new world in which information can’t be controlled.
Maybe a new social norm will develop that accepts that everyone will have embarrassing facts about them online, and that it’s OK because we’re human. Maybe if we assumed that data breaches are inevitable, we wouldn’t give up on securing networks, but we might do more to cope. For example, the technology exists to make all credit card numbers single-use to a particular vendor, so they’re of little value to hackers.
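The single-use card idea mentioned above can be sketched in a few lines. This is a hypothetical toy model, not how real issuers implement it (payment networks use standardized tokenization schemes): the issuer generates a number bound to one merchant, and any token that is stolen, replayed, or presented by a different merchant is simply worthless.

```python
import secrets


class VirtualCardIssuer:
    """Toy sketch of single-use card numbers locked to one merchant.

    All names here are illustrative assumptions; a real issuer would
    use network-level payment tokenization, not this scheme.
    """

    def __init__(self):
        # token -> {merchant, used}; a real issuer would persist this securely
        self._tokens = {}

    def issue(self, merchant_id: str) -> str:
        # Random hex string stands in for a 16-digit virtual card number
        token = secrets.token_hex(8)
        self._tokens[token] = {"merchant": merchant_id, "used": False}
        return token

    def authorize(self, token: str, merchant_id: str) -> bool:
        entry = self._tokens.get(token)
        if entry is None or entry["used"] or entry["merchant"] != merchant_id:
            return False  # unknown, replayed, or wrong-merchant: worthless to a thief
        entry["used"] = True  # single use: burn the token on first charge
        return True
```

A number issued for one store fails at any other store, and fails a second time even at the right store, which is exactly why a database full of such numbers would be of little value to hackers.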
Welcome to the new world. Information wants to be free. The Net interprets information control as damage and routes around it. Get used to it.
Here’s a doozy for the cyber-hype files. After it was announced that CIA Director Leon Panetta would take over at the Department of Defense, Rep. Jim Langevin, co-chair of the CSIS cybersecurity commission and author of comprehensive cybersecurity legislation, put out a statement that read in part:
“I am particularly pleased to know that Director Panetta will have a full appreciation for the increasing sense of urgency with which we must approach cybersecurity issues. Earlier this year, Panetta warned that ‘the next Pearl Harbor could very well be a cyberattack.’”
That’s from a statement made by Panetta to a House intelligence panel in February, and it’s an example of the unfortunate rhetoric that Tate Watkins and I cite in our new paper. Pearl Harbor left over two thousand people dead and pushed the United States into a world war. There is no evidence that a cyber-attack of comparable effect is possible.
What’s especially unfortunate about that kind of alarmist rhetoric, apart from the fact that it unduly scares citizens, is that it is often made in support of comprehensive cybersecurity legislation, like that introduced by Rep. Langevin. That bill gives DHS the authority to issue standards for, and audit compliance by, private owners of critical infrastructure.
What qualifies as critical infrastructure? The bill has an expansive definition, so let’s hope that the “computer experts” cited in this National Journal story on the Sony PlayStation breach are not the ones doing the interpreting:
While gaming and music networks may not be considered “critical infrastructure,” the data that perpetrators accessed could be used to infiltrate other systems that are critical to people’s financial security, according to some computer experts. Stolen passwords or profile information, especially codes that customers have used to register on other websites, can provide hackers with the tools needed to crack into corporate servers or open bank accounts.
It’s not hard to imagine a logic that leads everything to be considered “critical infrastructure” because, you know, everything’s connected on the network. We need to be very careful about legislating great power stemming from vague definitions and doing so on little evidence and lots of fear.