Sprint’s Chairman, Masayoshi Son, is coming to Washington to explain how wireless competition in the US would be improved if only there were less of it.
After buying Sprint last year for $21.6 billion, he has floated plans to buy T-Mobile. When antitrust officials voiced their concerns about the proposed plan’s potential impact on wireless competition, Son decided to respond with an unusual strategy that goes something like this: The US wireless market isn’t competitive enough, so policymakers need to approve the merger of the third and fourth largest wireless companies in order to improve competition, because going from four nationwide wireless companies to three will make things even more competitive. Got it? Me neither.
An argument like that takes nerve, especially now. When AT&T attempted to buy T-Mobile a few years ago, Sprint led the charge against it, arguing vociferously that permitting the market to consolidate from four to only three nationwide wireless companies would harm innovation and wireless competition. After the Administration blocked the merger, T-Mobile rebounded in the marketplace, which immediately made it the poster child for the Administration’s antitrust policies.
It also makes Son’s plan a non-starter. Allowing Sprint to buy T-Mobile three years after telling AT&T it could not would take incredible regulatory nerve. It would be hard to convince anyone that such an immediate about-face in favor of the company that fought the previous merger the hardest isn’t motivated by a desire to pick winners and losers in the marketplace, or even by outright cronyism. That would be true in almost any circumstance, but it is doubly true now that T-Mobile is flourishing. It’s hard to swallow the idea that it would harm competition if a nationwide wireless company were to buy T-Mobile — unless the purchaser is Sprint.
The special irony here is that Son has built his reputation on a knack for relentless innovation. When he bought Sprint, he expressed confidence that Sprint would become the number 1 company in the world. But, a year later, it is T-Mobile that is rebounding in the marketplace, even though T-Mobile has fewer customers than Sprint and less spectrum than Sprint. Buying into T-Mobile’s success now wouldn’t improve Son’s reputation for innovation, but it would double down on his confidence. I expect US regulators will want to see how he does with Sprint before betting the wireless competition farm on a prodigal Son.
Yesterday, an administrative judge ruled in Huerta v. Pirker that the FAA’s “rules” banning commercial drones don’t have the force of law because the agency never followed the procedures required to enact them as an official regulation. The ruling means that any aircraft that qualifies as a “model aircraft” plausibly operates under laissez-faire. Entrepreneurs are free for now to develop real-life TacoCopters, and Amazon can launch its Prime Air same-day delivery service.
Laissez-faire might not last. The FAA could appeal the ruling, try to issue an emergency regulation, or simply wait 18 months or so until its current regulatory proceedings culminate in regulations for commercial drones. If they opt for the last of these, then the drone community has an interesting opportunity to show that regulations for small commercial drones do not pass a cost-benefit test. So start new drone businesses, but as Matt Waite says, “Don’t do anything stupid. Bad actors make bad policy.”
Kudos to Brendan Schulman, the attorney for Pirker, who has been a tireless advocate for the freedom to innovate using drone technology. He is on Twitter at @dronelaws, and if you’re at all interested in this issue, he is a great person to follow.
The House Subcommittee on Communications and Technology will soon consider whether to reauthorize the Satellite Television Extension and Localism Act (STELA), which is set to expire at the end of the year. A hearing scheduled for this week has been postponed on account of weather.
Congress ought to scrap the current compulsory license in STELA that governs the importation of distant broadcast signals by Direct Broadcast Satellite providers. STELA is redundant and outdated. The 25-year-old statute invites rent-seeking every time it comes up for reauthorization.
At the same time, Congress should also resist calls to use the STELA reauthorization process to consider retransmission consent reforms. The retransmission consent framework is designed to function like the free market and is not the problem.
Those advocating retransmission consent changes exaggerate the significance of rising retransmission consent fees and of the blackouts that occasionally occur when content producers and pay-tv providers fail to reach agreement. They also attempt to shift the blame. DIRECTV dropped the Weather Channel in January, for example, rather than agree to pay “about a penny a subscriber” more than it had in the past.
A DIRECTV executive complained at a hearing in June that “between 2010 and 2015, DIRECTV’s retransmission consent costs will increase 600% per subscriber.” As I and others have noted in the past, retransmission consent fees account for an extremely small share of pay-tv revenue. Multichannel News has estimated that only two cents of the average dollar of cable revenue goes to retransmission consent.
According to SNL Kagan, retransmission-consent fees were expected to be about 1.2% of total video revenue in 2010, rising to 2% by 2014. At that rate, retrans currently makes up about 3% of total video expenses.
Among other things, DIRECTV recommended that Congress use the STELA reauthorization process to outlaw blackouts or permit pay-tv providers to deliver replacement distant broadcast signals during local blackouts. In effect, DIRECTV wants to eliminate the bargaining power of content producers, and force them to offer their channels for retransmission at whatever price DIRECTV is willing to pay.
There is a need for regulatory reform in the video marketplace. Unfortunately, proposals such as these do not advance that goal. The government intervention DIRECTV is seeking would simply add to the problem by forcing local broadcasters to subsidize pay-tv providers instead of being allowed to recover the fair market value of their programming. Broadcaster Marci Burdick was correct when she observed that regulation which unfairly siphons local broadcast revenue could have the unintended effect of reducing the “quality and diversity of broadcast programming, including local news, public affairs, severe weather, and emergency alerts, available both via [pay-tv providers] and free, over-the-air to all Americans.”
Broad regulatory reform of the video marketplace can and should be considered as part of the process House Energy and Commerce Committee Chairman Fred Upton (R-MI) and Communications and Technology Subcommittee Chairman Greg Walden (R-OR) recently announced by which the committee will examine and update the Communications Act.
It seems to me that a lot of the angst about the Comcast-Netflix paid transit deal results from a general discomfort with two-sided markets rather than any specific harm caused by the deal. But is there any reason to be suspicious of two-sided markets per se?
Consider a (straight) singles bar. Men and women come to the singles bar to meet each other. On some nights, it’s ladies’ night, and women get in free and get a free drink. On other nights, it’s not ladies’ night, and both men and women have to pay to get in and buy drinks.
There is no a priori reason to believe that ladies’ night is more just or efficient than other nights. The owner of the bar will benefit if the bar is a good place for social congress, and she will price accordingly. If men in the area are particularly shy, she may have to institute a “men’s night” to get them to come out. If women start demanding too many free drinks, she may have to put an end to ladies’ night (even if some men benefit from the presence of tipsy women, they may not be as willing as the women to pay the full cost of all of the drinks). Whether a market should be two-sided or one-sided is an empirical question, and the answer can change over time depending on circumstances.
Some commentators seem to be arguing that two-sided markets are fine as long as the market is competitive. Well, OK: suppose the singles bar is the only one within a 100-mile radius. How does that change the analysis above? Not at all, I say.
Analysis of two-sided markets can get very complex, but we shouldn’t let that complexity turn into reflexive opposition.
Google’s announcement this week of plans to expand to dozens more cities got me thinking about the broadband market and some parallels to transportation markets. Taxi cab and broadband companies are seeing their business plans undermined by the emergence of nimble Silicon Valley firms–Uber and Google Fiber, respectively.
The incumbent operators in both cases were subject to costly regulatory obligations in the past but in return they were given some protection from competitors. The taxi medallion system and local cable franchise requirements made new entry difficult. Uber and Google have managed to break into the market through popular innovations, the persistence to work with local regulators, and motivated supporters. Now, in both industries, localities are considering forbearing from regulations and welcoming a competitor that poses an economic threat to the existing operators.
Notably, Google Fiber will not be subject to the extensive build-out requirements imposed on cable companies who typically built their networks according to local franchise agreements in the 1970s and 1980s. Google, in contrast, generally does substantial market research to see if there is an adequate uptake rate among households in particular areas. Neighborhoods that have sufficient interest in Google Fiber become Fiberhoods.
Similarly, companies like Uber and Lyft are exempted from many of the regulations governing taxis. Taxi rates are regulated and drivers have little discretion in deciding who to transport, for instance. Uber and Lyft drivers, in contrast, are not price-regulated and can allow rates to rise and fall with demand. Further, Uber and Lyft have a two-way rating system: drivers rate passengers and passengers rate drivers via smartphone apps. This innovation lowers costs and improves safety: the rider who throws up in cars after bar-hopping, who verbally or physically abuses drivers (one Chicago cab driver told me he was held up at gunpoint several times per year), or who is constantly late will eventually have a hard time hailing an Uber or Lyft. The ratings system naturally forces out expensive riders (and ill-tempered drivers).
Interestingly, support for and opposition to Uber and Google Fiber cut across partisan lines (and across households–my wife, after hearing my argument, is not as sanguine about these upstarts). Because these companies upset long-held expectations, express or implied, strong opposition remains. Nevertheless, states and localities should welcome the rapid expansion of both Uber and Google Fiber.
The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions. Of course, there is a decades-long debate about when deregulation turns into subsidies, and this conversation applies to Uber and Google Fiber.
That debate is important, but regulators and policymakers should take every chance to roll back the rules of the past–not layer on more mandates in an ill-conceived attempt to “level the playing field.” Transportation and broadband markets are changing for the better with more competition and localities should generally stand aside.
The volatility of Bitcoin prices is one of the strongest headwinds the currency faces. Unfortunately, until my quantitative analysis last month, most of the discussion surrounding Bitcoin volatility had been anecdotal. I want to make it easier for people to move beyond anecdotes, so I have created a Bitcoin volatility index at btcvol.info, which I’m hoping can become or inspire a standard metric that people can agree on.
The volatility index at btcvol.info is based on daily closing prices for Bitcoin as reported by CoinDesk. I calculate the difference in daily log prices for each day in the dataset, and then calculate the sample standard deviation of those daily returns for the preceding 30 days. The result is an estimate of how spread out daily price fluctuations are—volatility.
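For those who want to replicate the metric, the calculation described above can be sketched in a few lines of Python. This is my own illustration of the method, not btcvol.info’s actual code:

```python
import math

def btc_volatility(prices, window=30):
    """Estimate volatility as the sample standard deviation of daily
    log returns over the trailing `window` days.

    `prices` is a list of daily closing prices, oldest first.
    """
    # Daily log returns: r_t = ln(p_t / p_{t-1})
    returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    if len(returns) < window:
        raise ValueError("need at least window + 1 prices")
    recent = returns[-window:]
    mean = sum(recent) / window
    # Sample standard deviation (n - 1 in the denominator)
    variance = sum((r - mean) ** 2 for r in recent) / (window - 1)
    return math.sqrt(variance)
```

A series of 31 identical closing prices yields a volatility of exactly zero, while any day-to-day variation produces a positive value; annualizing (multiplying by the square root of 365) is a common further step that the sketch leaves out.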
The site also includes a basic API, so feel free to integrate this volatility measure into your site or use it for data analysis.
I of course hope that Bitcoin volatility becomes much lower over time. I expect both the maturing of the ecosystem as well as the introduction of a Bitcoin derivatives market will cause volatility to decrease. Having one or more volatility metrics will help us determine whether these or other factors make a difference.
You can support btcvol.info by spreading the word or of course by donating via Bitcoin to the address at the bottom of the site.
Net Neutrality Opinion Indicates Internet Service Providers Are Entitled to First Amendment Protection
Verizon v. FCC, the court decision overturning the Federal Communications Commission’s (FCC) net neutrality rules, didn’t rule directly on the First Amendment issues. It did, however, reject the reasoning of net neutrality advocates who claim Internet service providers (ISPs) are not entitled to freedom of speech.
The court recognized that, in terms of the functionality that it offers consumers and the economic relationships among industry participants, the Internet is as similar to analog cable networks as it is to analog telephone networks. As a result, the court considered most of the issues in the net neutrality case to be “indistinguishable” from those addressed in Midwest Video II, a seminal case addressing the FCC’s authority over cable systems. The court’s emphasis on the substantive similarities between analog cable services, which are clearly entitled to First Amendment protection, indicates that ISPs are likewise entitled to protection.
Net neutrality advocates argued that ISPs are not First Amendment “speakers” because ISPs do not exercise editorial discretion over Internet content. In essence, these advocates argued that ISPs forfeited their First Amendment rights as a result of their “actual conduct” in the marketplace.
Though the court didn’t address the First Amendment issues directly, the court’s reasoning regarding common carrier issues indicates that the “actual conduct” of ISPs is legally irrelevant to their status as First Amendment speakers.
In Verizon v. FCC, the FCC argued that its net neutrality rules couldn’t be considered common carrier obligations with respect to edge providers because ISPs did not have direct commercial relationships with edge providers. But the court concluded that the nature of preexisting commercial relationships between ISPs and edge providers was irrelevant to the legal status of ISPs:
[T]he Commission appears to misunderstand the nature of the inquiry in which we must engage. The question is not whether, absent the [net neutrality rules], broadband providers would or did act as common carriers with respect to edge providers; rather, the question is whether, given the rules imposed by the [FCC], broadband providers are now obligated to act as common carriers.
Verizon v. FCC, No. 11-1355 at 52 (2014) (emphasis in original).
A court must engage in a similar inquiry when determining whether ISPs are “speakers” entitled to First Amendment protection. The question is not whether ISPs would or actually have exercised editorial discretion in the past. There is no Constitutional requirement that ISPs (or anyone else) must speak at the earliest opportunity in order to preserve their right to speak in the future. The question is whether ISPs have the legal option of speaking — i.e., exercising editorial discretion.
Of course, everyone knows ISPs have the ability to exercise such discretion. The court noted there was little dispute regarding the FCC’s finding that ISPs have the technological ability to distinguish among different types of Internet traffic. Indeed, ISPs’ ability to exercise editorial discretion is the very reason the FCC adopted its net neutrality rules. It is also for this reason that, for First Amendment purposes, ISPs are substantially similar to television broadcasters and analog cable operators, for whom First Amendment protections have already been applied.
Some net neutrality advocates attempt to skirt this fact by arguing that ISPs don’t “need” to exercise editorial discretion because today’s ISPs are less capacity constrained than broadcasters and analog cable operators. The essence of this argument is that the First Amendment permits the government to abridge a potential speaker’s freedom of speech if, in the government’s subjective view, the speaker would be able to get along just fine without speaking.
In their zeal to defend net neutrality, these advocates appear to have forgotten that, no matter how comfortable or familiar it may be, a muzzle is still a muzzle. The courts have not.
In Verizon v. FCC, the court recognized that the relationships among ISPs, their subscribers, and edge providers are “indistinguishable” from those present in the analog cable market addressed by the Supreme Court in Midwest Video II:
The Midwest Video II cable operators’ primary “customers” were their subscribers, who paid to have programming delivered to them in their homes. There, as here, the Commission’s regulations required the regulated entities to carry the content of third parties to these customers—content the entities otherwise could have blocked at their discretion. Moreover, much like the rules at issue here, the Midwest Video II regulations compelled the operators to hold open certain channels for use at no cost—thus permitting specified programmers to “hire” the cable operators’ services for free.
Verizon v. FCC, No. 11-1355 at 54 (2014).
The court rejected the FCC’s arguments attempting to distinguish the Internet from cable — arguments that are substantially the same as those advanced by net neutrality advocates in the First Amendment context.
First, the court was unmoved by the argument that Internet content is delivered to end users only when an end user “requests” it, i.e., by clicking on a link. The court noted that cable customers could not actually receive content on a particular cable channel either unless they affirmatively chose to watch those channels, i.e., by changing the channel. (See id.) The court recognized that, “The access requested by [cable video] programmers in Midwest Video II, like the access requested by edge providers here, is the ability to have their communications transmitted to end-user subscribers if those subscribers so desire.” (Id.)
Second, the court considered the capacity differences between the analog cable systems at issue in Midwest Video II and the broadband Internet to be irrelevant to common carriage analysis:
Whether an entity qualifies as a carrier does not turn on how much content it is able to carry or the extent to which other content might be crowded out. A short train is no more a carrier than a long train, or even a train long enough to serve every possible customer.
Verizon v. FCC, No. 11-1355 at 55 (2014). The capacity issue is irrelevant to the applicability of the First Amendment for the same reason. A speaker has the right to refrain from speaking even if speaking would be undemanding.
Finally, the court concluded that the FCC could not distinguish its net neutrality rules from the rules at issue in Midwest Video II using another variation on the “actual conduct” argument. In Midwest Video II, the Supreme Court emphasized that the FCC cable regulations in question “transferred control of the content of access cable channels from cable operators to members of the public.” Midwest Video II, 440 U.S. at 700. In Verizon v. FCC, the FCC argued that its net neutrality rules had not “transferred control” over the Internet content transmitted by ISPs because, “unlike cable systems, Internet access providers traditionally have not decided what sites their end users visit.” (FCC Brief at 65) The court did not consider the “actual conduct” of ISPs a relevant distinction:
The [net neutrality] regulations here accomplish the very same sort of transfer of control: whereas previously broadband providers could have blocked or discriminated against the content of certain edge providers, they must now carry the content those edge providers desire to transmit.
Verizon v. FCC, No. 11-1355 at 56 (2014).
Based on the court’s repeated emphasis on the substantive similarities between analog cable services, which the Supreme Court has held are “speakers”, and Internet services, it should now be obvious that ISPs are also “speakers” entitled to First Amendment protection. The use of Internet protocol rather than analog cable technology to deliver video services changes neither the economic nor the First Amendment considerations applicable to network operators, edge providers, and end users.
To be clear, application of the First Amendment to ISPs does not automatically mean that net neutrality rules would be unconstitutional. Whether a particular regulation is violative of the First Amendment depends on the applicable level of judicial scrutiny, the importance of the government interest at stake, and the degree of relatedness between the law and its purpose. Whether net neutrality rules would survive First Amendment scrutiny would thus depend in part on their own terms and the government’s rationale for adopting them.
That is why the applicability of the First Amendment to ISPs is so important. When Constitutional rights are at stake, the government has stronger incentives to adopt regulations that are well-reasoned and likely to achieve their intended goals than it does when it makes rules in the ordinary administrative context.
- – -
The doctrine of constitutional avoidance counsels against deciding a constitutional question when a case can be resolved on some other basis. Once the court concluded that the FCC exceeded its authority in adopting the anti-blocking and anti-discrimination rules, the court had no need to address their constitutionality.
Even if the “actual conduct” argument were valid, it would not control application of the First Amendment to ISPs. The fact that ISPs don’t exercise editorial discretion was motivated in part by FCC policies that chilled or prohibited the exercise of such discretion.
- In the dial-up era, telephone companies were subject to common carrier regulations prohibiting their exercise of editorial discretion over Internet content transmitted by third-party companies (e.g., America Online, who exercised editorial discretion over Internet content) while reducing economic incentives for telephone companies to provide their own Internet services;
- Though the FCC exempted cable broadband services from common carrier regulation relatively early in the broadband era, the FCC simultaneously asked whether and to what extent it should impose editorial restrictions on such services;
- In conjunction with its subsequent order extending the cable broadband exemption to telephone companies, the FCC issued a Broadband Policy Statement announcing that it would take action if it observed ISPs exercising editorial discretion; and
- After the DC Circuit ruled that the Broadband Policy Statement was unenforceable, the FCC adopted the net neutrality rules that the court struck down in Verizon v. FCC.
This history indicates that the “actual conduct” of ISPs evidences nothing more than their intent to comply with FCC rules and policies. It would be absurd to conclude that ISPs forfeited their right to First Amendment protection by virtue of their regulatory compliance.
It appears that Federal Communications Commission (FCC) Chairman Tom Wheeler is returning to a competition-based approach to communications regulation. Chairman Wheeler’s emphasis on “competition, competition, competition” indicates his intent to intervene in communications markets only when it is necessary to correct a market failure.
I expect most on both sides of the political spectrum would welcome a return to rigorous market analysis at the FCC, but you can’t please all of the people all of the time. The American Television Alliance (ATVA), whose FCC petition wouldn’t withstand even a cursory market power analysis, is sure to be among the displeased.
The ATVA petition asks the FCC to regulate prices for retransmission consent (the prices video service providers (VSPs) pay for the rights to provide broadcast television programming to pay-TV subscribers) because retransmission fees and competition among VSPs are increasing. Though true, this data doesn’t indicate that TV stations or broadcast television networks have market power — it indicates that legislative and policy efforts to increase competition among VSPs are working.
The increase in retransmission consent fees is the natural consequence of the increase in competition among VSPs. When incumbent cable companies were the dominant VSPs, they could use the threat of a blackout to force broadcasters to grant retransmission consent at extremely low prices (or even for free). If a TV station balked, it risked losing substantial advertising revenue because there was no other VSP to retransmit the station’s signal.
As a result of increasing competition among VSPs, broadcasters are finally in a position to negotiate fairer prices for their content. When a VSP threatens a blackout today, a broadcaster has the option of calling the VSP’s bluff, as Wall Street observed when Time Warner yanked CBS off the air during a dispute about wireless distribution rights last fall. Now that there are competitive VSPs in most markets, cable operators have something to lose from a blackout too — their subscribers.
VSPs have responded to increasing market competition by asking the government for special treatment. ATVA has cloaked its rent-seeking request in the language of market power, but hasn’t provided any analysis supporting its contention that retransmission consent fees are “too high.” It appears to be hoping that, if it cries wolf loudly enough, it can avoid paying a fairer price for television programming.
If retransmission fees were really “too high,” one would expect them to be significantly higher than the fees VSPs charge for their own content. According to the data, however, VSPs charge significantly more for their affiliated content than broadcasters charge for retransmission consent. In 2012, VSPs paid an average of $1.50 per subscriber for the top ten channels affiliated with cable networks. In comparison, VSPs paid an average of $0.58 per subscriber in 2012 for the right to retransmit the channels of the top ten TV station companies (e.g., Sinclair) — sixty-one percent (61%) less than VSPs were willing to pay for their affiliated content. (Sources: Kagan and SNL)
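The arithmetic behind that comparison is easy to check using only the per-subscriber averages quoted above:

```python
# 2012 averages quoted above (Kagan/SNL), in dollars per subscriber.
cable_affiliated = 1.50   # top ten channels affiliated with cable networks
broadcast_retrans = 0.58  # top ten TV station companies (retrans consent)

# Broadcast retransmission sells at a steep discount to affiliated content.
discount = (cable_affiliated - broadcast_retrans) / cable_affiliated
print(f"{discount:.0%}")  # → 61%
```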
Were the significantly higher prices cable networks charged for their programming in 2012 driven by consumer ratings? No. Kagan data indicates that, in 2012, VSPs paid approximately the same amount — $0.57 per subscriber — for CNN (CNN en Español sold for $0.58) as the average for the top ten TV stations. Despite its similar price, however, CNN averaged only about 600,000 daily viewers during primetime, whereas each of the national broadcast network news programs averaged over 8 million evening viewers daily. This viewership data, albeit limited, indicates that broadcasters are charging roughly ten times less for their programming, per viewer, than VSPs charge for similar programming.
The premium VSPs pay for their own content reflects the economics of the video programming market. Though competition among VSPs has increased, there is still significantly greater concentration and market power in the video distribution market than in the video programming market. According to the FCC’s most recent video competition report, only about one-third (35%) of homes had access to at least four VSPs in 2011. (See Fifteenth Report at Table 2) The FCC found that, even in areas with four VSPs, the Herfindahl-Hirschman Index (HHI), a common measure of horizontal market concentration, was over 2,500 (a highly concentrated marketplace). (See id. at ¶ 37) In comparison, there were more than twenty national video programming networks. (See id. at App. B)
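For readers unfamiliar with the metric, the HHI is simply the sum of the squared market shares of every firm in the market, with shares expressed as percentages (so a pure monopoly scores 10,000). A minimal illustration in Python (the function and the hypothetical shares are my own, not figures from the FCC report):

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with each firm's share expressed as a percentage (0-100)."""
    return sum(s ** 2 for s in shares_pct)

# Even four equally sized VSPs (25% each) produce an HHI of 2,500,
# the conventional threshold for a "highly concentrated" market.
print(hhi([25, 25, 25, 25]))  # → 2500
```

This is why the FCC’s finding of an HHI above 2,500 even in areas with four VSPs is unsurprising: with only four firms, the index cannot fall below 2,500 unless shares are unequal in no firm’s favor, which is impossible; equal division is the floor.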
Even a cursory review of the data indicates that recent increases in retransmission consent fees are a sign of market success, not a failure. It should be no surprise that, as competition among VSPs has increased, the price of retransmission consent has increased with it. It is the predictable result of cable’s decreasing monopsony power.
Ladar Levison, founder of encrypted email service Lavabit, discusses recent government action that led him to shut down his firm. When it was suspected that NSA whistleblower Edward Snowden used Lavabit’s email service, the FBI issued a National Security Letter ordering Levison to hand over SSL keys, jeopardizing the privacy of Lavabit’s 410,000 users. Levison discusses his inspiration for founding Lavabit and why he chose to suspend the service; how Lavabit was different from email services like Gmail; developments in his case and how the Fourth Amendment has come into play; and his involvement with the recently-formed Dark Mail Technical Alliance.
- Lavabit, Levison
- Dark Mail, Zimmerman, Callas, Janke, Levison
- Lawyers raise civil liberty concerns in Lavabit case, Salon
- Lawyers for Lavabit founder: judges may dismiss civil liberties concerns, The Guardian
On Saturday, C-SPAN aired a segment of The Communicators featuring me and Free Press’ Chance Williams. In the 30-minute segment, Chance and I discussed the future of net neutrality now that the FCC’s Open Internet rules are vacated. You can see the taping here or below.
I am speaking on a panel tomorrow at the Dirksen Senate Office Building at an R Street Institute event on patent reform. Here’s R Street’s description:
The patent reform debate has been painted as one of inventors vs. patent troll victims. Yet these two don’t have to be enemies. We can protect intellectual property, and stomp out patent trolls.
If you’re just tuning in, patent trolls are entities that hoard overly broad patents, but do not use them to make goods or services, or advance a useful secondary market. While there’s a place for patent enforcement, these guys take it way too far.
These entities maliciously threaten small businesses, inventors, and consumers, causing tens of billions in economic damage each year. Since litigation costs millions of dollars, businesses are forced to settle even when the claim against them is spurious.
Fortunately, with growing awareness and support, the patent trolls’ lucrative racket is in jeopardy. With Obama’s patent troll task force, the passage of the Innovation Act in the House, state legislation tackling demand letters, and further action in the courts, we appear to be closer than ever to achieving real reform.
Please join us for a lunch and panel discussion of the nature of the patent troll problem, the industries it affects, and the policy solutions being considered.
Zach Graves, Director of Digital Marketing & Policy Analyst, R Street Institute (Moderator)
Eli Dourado, Research Fellow, Mercatus Center
Whitaker L. Askew, Vice President, American Gaming Association
Robin Cook, Assistant General Counsel for Special Projects, Credit Union National Association
Julie Hopkins, Partner, Tydings & Rosenberg LLP
The festivities begin at noon. The event is open to the public, and you can register here.
In December, Reps. Upton and Walden announced that they intend to update the Communications Act, which saw its last major revision in 1996. Today marks the deadline to submit initial comments regarding updating the Act. Below is my submission, which includes reference to a Mercatus paper by Raymond Gifford analyzing the Digital Age Communications (DACA) reports. These bipartisan reports would largely replace and reform our deficient communications laws.
Dear Chairman Upton,
As you and Rep. Walden recently acknowledged, U.S. communications law needs updating to remove accumulated regulatory excess and to strengthen market forces. When the 1934 Communications Act was passed, there was a national monopoly telephone provider and Congress’s understanding of radio spectrum physics was rudimentary. Chief among the Communications Act’s many flaws was giving the Federal Communications Commission authority to regulate wired and wireless communications according to “public interest, convenience, and necessity,” an amorphous standard that has been frequently abused. If delegating this expansive grant of discretion to the FCC was ever sensible, it clearly no longer is. Today, eight decades later, with competition between video, telephone, and Internet providers taking place over wired and wireless networks, the public interest standard simply invites costly rent-seeking and stifles new technologies and business opportunities.
Like an old cottage expanded over the decades with massive additions by a succession of clumsy architects, communications law is a disorganized and dilapidated structure that should be razed and reconstituted. As new technologies emerged after the 1930s—broadcast television, cable, satellite, mobile phones, the Internet—and upended existing regulated businesses, the FCC and Congress layered on new rules attempting to mitigate the distortions.
Congressional attempts at reforming communications laws have appeared regularly ever since the 1996 amendments. During the last such attempt, in 2011, the Mercatus Center released a study discussing and summarizing a model for communications law reform known as the Digital Age Communications Act (DACA). That model legislation—consisting of five reports released in 2005 and 2006—came from the bipartisan DACA Working Group. The reports addressed five areas:
1. Regulatory framework;
2. Universal service;
3. Spectrum reform;
4. Federal-state jurisdiction; and
5. Institutional reform.
The DACA reports represent a flexible, market-oriented agenda from dozens of experts that, if implemented, would spur innovation, encourage competition, and benefit consumers. The regulatory framework report is the centerpiece recommendation and adopts a proposal largely based on the Federal Trade Commission Act, which provides a reformed FCC with nearly a century of common law for guidance. Significantly, the reports replace the FCC’s misused “public interest” standard with the general “unfair competition standard” from the FTC Act.
Despite the passage of time, those reports have held up remarkably well. The 2011 Mercatus paper describing the DACA reports is attached for submission in the record. The scholars at Mercatus are happy to discuss this paper and the cited materials below—including the DACA reports—further with Energy & Commerce Committee staff as they draft white papers and reform proposals.
Thank you for initiating discussion about updating the Communications Act. Reform can give America’s innovative technology and telecommunications sector a predictable and technology-neutral legal framework. When Congress replaces command-and-control rules with market forces, consumers will be the primary beneficiaries.
Research Fellow, Technology Policy Program
Mercatus Center at George Mason University
JEFFREY A. EISENACH ET AL., THE TELECOM REVOLUTION: AN AMERICAN OPPORTUNITY (1995).
Raymond L. Gifford, The Continuing Case for Serious Communications Law Reform, Mercatus Center Working Paper No. 11-44 (2011).
PETER HUBER, LAW AND DISORDER IN CYBERSPACE: ABOLISH THE FCC AND LET COMMON LAW RULE THE TELECOSM (1997).
The war among the states to see who can lavish the film industry with more generous tax credits in their attempt to become “the next Hollywood” continues, and it is quickly descending into a classic race to the bottom. A front-page article in today’s Wall Street Journal notes that the tax incentive bidding war has gotten so intense that it is hollowing out the old Hollywood labor pool and sending it on a road trip across America in search of tax-induced job activity:
As film and TV production scatters around the country, more workers… are packing up from California and moving to where the jobs are. Driving this exodus of lower-wage workers — stunt doubles, makeup artists, production assistants and others who keep movie sets humming — are successful efforts by a host of states to use tax incentives to poach production business from California. [...]
Only two movies with production budgets higher than $100 million filmed in Los Angeles in 2013, according to Film L.A. Inc., the city’s movie office. In 1997, the year “Titanic” was released, every big-budget film but one filmed at least partially in the city. The number of feature-film production days in Los Angeles peaked in 1996 and fell by 50% through last year, according to Film L.A. Projects such as reality television and student films have picked up some of the slack. But overall entertainment-industry employment has slid. About 120,000 Californians worked in the industry in 2012, down from 136,000 in 2004, according to the U.S. Bureau of Labor Statistics.
The labor migration has arisen in part because California hasn’t competed aggressively on the tax-break front, officials and executives say, while states like Georgia have made efforts to grab a sizable chunk of the industry. More than 40 states and 30 foreign countries are offering increasingly generous and creative tax incentives to lure entertainment producers.
On one hand, hooray for labor mobility! But seriously, this stinks, because the labor shift is taking place in a wholly unnatural way, with a complex and growing web of tax inducements creating massive distortions in this marketplace. While proponents will insist these programs are job creators for the communities that win, in reality they are just job reshufflers, and they net precious few jobs at that. Meanwhile, the cost to taxpayers grows as more and more state and local governments jump into this game. It’s classic “smokestack chasing,” except that in this case the firms probably didn’t create many jobs while they were there, and you don’t even have a factory left when they leave town!
If things continue like this, it probably won’t be long before some “innovative” state or local government leader gets the idea of actually just paying some film producers cold hard cash to come set up shop in their area. Hey, at least that way the programs would be on-budget and nominally more accountable!
Anyway, I’ve chronicled the cost of this ruinous race to the bottom in my essay, “State Film Industry Incentives: A Growing Cronyism Fiasco,” which documents the economic evidence about just how inefficient these programs are in practice. I later expanded that essay and included it in my massive paper with Brent Skorup, “A History of Cronyism and Capture in the Information Technology Sector.” Warning: It makes for miserable reading if you care about fiscal accountability and good government. Maybe somebody will make a movie about this racket someday! (But don’t hold your breath.)
P.S. For more on the corrupting influence of cronyism on American capitalism, please visit this Mercatus Center page for a comprehensive set of studies on the issue. Also, check out this outstanding paper by my colleague Matt Mitchell (“The Pathology of Privilege: The Economic Consequences of Government Favoritism“) and this excellent recent book on cronyism by Randall G. Holcombe and Andrea Castillo. And here’s a little slide show I put together on the costs of cronyism.
The Internet is abuzz with news that Federal Communications Commission Chairman Tom Wheeler favors a case-by-case approach to addressing Internet competition issues. It is the wisest course, and perhaps the most courageous. Some on the right will say he is going too far, and some on the left will say he isn’t going far enough. That is one reason Wheeler’s approach should be commended. Staunch disagreements about net neutrality and other Internet governance issues reflect the uncertainty inherent in a dynamic market.
Chairman Wheeler’s comments this week echoed Socrates (“I’m not smart enough to know what comes next [in innovation]”) and, to my surprise, Virginia Postrel (the Chairman favors addressing Internet issues “in a dynamic rather than a static way”). He recognizes that, in a two-sided market, there is no reason to assume that ISPs will necessarily have the ability to charge content providers rather than the other way around. The potential for strategic behavior on the Internet today is radically different than in the dial-up Internet era, and the Chairman appears prepared to consider those differences in his approach to communications regulation.
The Chairman also noted that section 706 gives the FCC authority over the entire Internet. Though my friends at TechFreedom have expressed alarm that the Chairman thinks this is positive, an approach that recognizes the potential for strategic behavior by so-called edge providers is preferable to the one-sided approach embodied in net neutrality. The FCC’s decision to impose strict limitations on only one side of the two-sided Internet marketplace was bound to create market distortions and always smacked of cronyism. A broader approach, fairly applied, is more likely to discourage strategic behavior and protect consumers than the FCC’s previous net neutrality rules, which were designed to protect the commercial interests of edge providers.
To be clear, I remain unconvinced that intervention is necessary. But that is the virtue of the common law approach. If anticompetitive behavior occurs, the FCC would have the ability to take action. If not, the market would have the freedom to experiment with new business models and service arrangements. In comparison, a per se rule “will almost always favor one group over another.”
There is another reason the Chairman should be commended for not rushing to reinstate the invalidated net neutrality rules – respect for the role of Congress. As Commissioner Pai noted in his statement on the DC Circuit’s decision striking down the rules, it was “the second time in four years” that the court had ruled that the agency exceeded its authority in attempting to regulate the Internet. In the meantime, Congress has begun a #CommActUpdate process to modernize the statute for the Internet era. In these circumstances, comity counsels that the FCC defer to Congress on Internet rules. A case-by-case approach would give the FCC flexibility to address any serious anti-competitive or consumer issues that might arise while avoiding the issuance of comprehensive rules in the face of a Congressional rewrite. That is indeed wise.
Last week, it was my great pleasure to be invited on NPR’s “On Point with Tom Ashbrook” to debate Jeffrey Rosen, a leading privacy scholar and the president and chief executive of the National Constitution Center. In an editorial in the previous Sunday’s New York Times (“Madison’s Privacy Blind Spot”), Rosen proposed a “constitutional amendment to prohibit unreasonable searches and seizures of our persons and electronic effects, whether by the government or by private corporations like Google and AT&T.” He said his proposed amendment would limit “outrageous and unreasonable” collection practices and would disallow consumers from sharing their personal information with private actors even if they saw an advantage in doing so.
I responded to Rosen’s proposal in an essay posted on the IAPP Privacy Perspectives blog, “Do We Need A Constitutional Amendment Restricting Private-Sector Data Collection?” In my essay, I argued that there are several legal, economic, and practical problems with Rosen’s proposal. You can head over to the IAPP blog to read my entire response, but the gist of it is that “a constitutional amendment [governing private data collection] would be too sweeping in effect and that better alternatives exist to deal with the privacy concerns he identifies.” There are very good reasons we treat public and private actors differently under the law, and there are “far more practical and less-restrictive steps that can be taken without resorting to the sort of constitutional sledgehammer that Jeff Rosen favors. We can protect privacy without rewriting the Constitution or upending the information economy,” I concluded.
But I wanted to elaborate on one thing I found particularly interesting about Rosen’s comments when we were on NPR together. During the show, Rosen kept stressing that we need to adopt a more European construction of privacy as “dignity rights,” and he even said his proposed privacy amendment would disallow individuals from surrendering their private data or their privacy because he views these rights as “unalienable.” In other words, from Rosen’s perspective, privacy pretty much trumps everything, even if you want to trade it off against other values.
I’ve been seeing more and more privacy advocates and scholars adopt this attitude, including Anita Allen, Julie Cohen, Siva Vaidhyanathan, and others. Allen, for example, says that privacy is such a “foundational” human right that in some cases the law should override individual choice when consumers act against their own privacy interests. Cohen and Vaidhyanathan make similar arguments in their recent books. Vaidhyanathan claims that consumers are being tricked by the “smokescreen” of “free” online services and “freedom of choice.” Although he admits that no one is forced to use online services and that consumers are able to opt out of most services or data collection practices, he argues that “such choices mean very little” because “the design of the system rigs it in favor of the interests of the company and against the interests of users.” “Celebrating freedom and user autonomy is one of the great rhetorical ploys of the global information economy,” he says. “We are conditioned to believe that having more choices–empty though they may be–is the very essence of human freedom. But meaningful freedom implies real control over the conditions of one’s life.” These are the sorts of arguments I increasingly hear privacy scholars make when claiming that consumers simply can’t be left free to make choices for themselves in this regard.
In an interesting recent article in the Harvard Law Review, privacy scholar Daniel Solove notes that what binds these thinkers and their work together is, in essence, a sort of privacy paternalism. The point of most modern privacy advocacy has been to better empower consumers to make privacy decisions for themselves. But, Solove notes, “the implication [of these privacy scholars’ work] is that the law must override individual consent in certain instances.” Yet if that choice is taken away from us by law, then privacy regulation “risks becoming too paternalistic. Regulation that sidesteps consent denies people the freedom to make choices,” Solove argues.
Jeff Rosen now appears to be adopting the sort of approach Solove identifies by claiming that privacy is an “unalienable right” that cannot be traded away for other things. By making that choice for us, Rosen’s proposed amendment would therefore suffer from the same privacy paternalism Solove identifies. In a forthcoming article in the Maine Law Review, I identify some of the problems associated with privacy paternalism. Most obviously, these scholars should keep in mind that not everyone shares their privacy values and that many of us will voluntarily trade some of our data for the innovative information services and devices we desire. If imposed in the form of legal sanctions, privacy paternalism would open the door to almost boundless controls on the activities of both producers and consumers of digital services, potentially limiting future innovations in this space.
For example, when we were on NPR together, Rosen mentioned wireless geolocation technology as a potential source of serious privacy harm, although he did not make it clear whether he wanted it stopped entirely or what. If used improperly, wireless geolocation technology certainly can raise serious privacy concerns. But wireless geolocation technology is also what powers the mapping and traffic services that most of us now take for granted. Many of us expect — no, we demand — that our digital devices be able to give us real-time mapping and traffic notification capabilities. And most of us are willing to make the minor privacy trade-off associated with sharing our location constantly in exchange for the right to receive these services, which are also provided to us free of charge.
So, what would Rosen’s proposed amendment have to say about this trade-off? Would these wireless geolocation technologies be banned altogether, even if consumers desire them? It isn’t really clear at this point because he hasn’t offered many details about his proposal. But to the extent it would preempt these technological capabilities on the grounds that our locational privacy is somehow an unalienable right, that seems like a fairly paternalistic approach to policy, and it would seem to confirm Thomas Lenard and Paul Rubin’s claim that “many of the privacy advocates and writers on the subject do not trust the consumers for whom they purport to advocate.”
Such paternalism is particularly problematic in this case since privacy is such a highly subjective value and one that evolves over time. As Solove notes, “the correct choices regarding privacy and data use are not always clear. For example, although extensive self-exposure can have disastrous consequences, many people use social media successfully and productively.” Privacy norms and ethics are changing faster than ever today. One day’s “creepy” tool or service is often the next day’s “killer app.”
Balancing Values; Considering Costs
As I will discuss in my forthcoming Maine Law Review article, and as I also discussed in my recent George Mason University Law Review article, at least here in the United States, consumer protection standards have traditionally depended on a clear showing of actual, not prospective or hypothetical, harm. In some cases, when the potential harm associated with a particular practice or technology is extreme in character and poses a direct threat to physical well-being, law has preempted the general presumption that ongoing experimentation and innovation should be allowed by default. But these are extremely rare scenarios, at least as it pertains to privacy concerns under American law, and they mostly involve health and safety measures aimed at preemptively avoiding catastrophic harm to individual or environmental well-being. In the vast majority of other cases, our culture has not accepted the paternalistic idea that law must “save us from ourselves” (i.e., from our own irrationality or mistakes). As Solove notes in his recent essay, “People make decisions all the time that are not in their best interests. People relinquish rights and take bad risks, and the law often does not stop them.”
Privacy advocates also sometimes ignore the costs of preemptive policy action, failing to conduct a serious review of what their regulatory proposals would cost. As a result, preemptive action is almost always their preferred remedy to any alleged harm. “By limiting or conditioning the collection of information, regulators can limit market manipulation at the activity level,” Ryan Calo argues in a recent paper. “We could imagine the government fashioning a rule — perhaps inadvisable for other reasons―that limits the collection of information about consumers in order to reduce asymmetries of information.” [*Clarification: In a comment down below and a subsequent Twitter exchange, Ryan clarifies that he ultimately does not come down in favor of such a rule, preferring instead to find various other incentives to solve these problems. I thank him for this clarification -- and definitely welcome it! -- although I found his position somewhat murky after debating him personally on these issues recently. Nonetheless, I apologize if I mischaracterized his position in any way here.]
Unfortunately, Professor Calo does not fully consider the corresponding cost of such regulatory proposals in calling for the enactment of such a rule. If preemptive regulation slowed or ended certain information practices, it could stifle the provision of new and better services that consumers demand, as I have noted elsewhere. It might also trump other choices or values that consumers care about. While privacy is obviously an incredibly important value, we cannot assume that it is the only value, or the most important value, at stake here. Consumers also care about having access to a constantly growing array of innovative goods and services, and they also care about getting those goods and services at a reasonable price.
Moving from “Rights Talk” to Practical Privacy Solutions
This is the point in the essay where some readers are probably getting frustrated with me and thinking I am some sort of nihilist who doesn’t give a damn about privacy. I assure you that nothing could be further from the truth; I care very deeply about privacy.
But if you really care about expanding the horizons of privacy protection in our modern world, at some point you have to accept that all the “rights talk” and top-down enforcement efforts in the world are not necessarily going to help as much as you wish they would. The same thing is true for online safety, digital security, and IP protection efforts: No matter how much you might wish the opposite was true, information control is just really, really hard. Legal and regulatory approaches to bottling up information flows will inevitably be several steps behind cutting-edge technological developments. (I’ve discussed these issues in several essays here, including: “Privacy as an Information Control Regime: The Challenges Ahead,” “Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges,” and “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed.”)
That doesn’t mean we should surrender in our efforts to identify more concrete privacy harms, but we should recognize that it will always be a hugely contentious matter and that a great many people will gladly trade away their privacy in a way that others will consider outrageous. In a free society, we must allow them to do so if they derive greater utility from other things. A paternalistic approach based on a sort of privacy fundamentalism will deny them the right to make that choice for themselves. And, practically speaking, no matter how much some might think that privacy values are “unalienable,” the reality is that there will be no way to stop many others from making different choices and relinquishing their privacy all the time.
Educating and empowering citizens is the better way to address this issue. We can try to teach them to make better privacy choices and treat their information, and information about others, with far greater care. We should also work to provide citizens more tools to help accomplish those goals. And if the problem is “information asymmetry” or some general lack of awareness about certain data collection and use practices, then let’s work even harder to make sure consumers are aware of those practices and what they can do about them.
It’s all part of the media literacy and digital citizenship agenda that we need to be investing much more time and resources in. I outlined that approach in much more detail in this law review article. We need diverse tools and strategies for a diverse citizenry. We need to be talking to both consumers and developers about smarter data hygiene and sensible digital ethics. We need more transparency. We need more privacy professionals working inside organizations to craft sensible data collection and use policies. And so on. Only by working to change attitudes about privacy, online “Netiquette,” and ethical data use can we really start to make a dent in this problem.
If nothing else, we must understand the limitations of information control in such highly context-specific harm scenarios. Prof. Rosen might want to ask himself how long it would take to get his proposed constitutional amendment in place and what the chances are that such a movement would even be successful. But, again, and far more importantly, Prof. Rosen and advocates of similar regulatory approaches should remember that their values are not shared by everyone and that, in a free society, a value as inherently subjective as privacy is likely to remain a hugely contentious, ever-changing matter, especially when elevated to the level of constitutional rights talk. We need practical solutions to our privacy problems, not pie-in-the-sky Hail Mary schemes that are unlikely to go anywhere and that, even if they did, would end up being too heavy-handed and would override individual autonomy in the process.
Last night, I appeared on a short segment on the PBS News Hour discussing, “What’s the future of privacy in a big data world?” I was also joined by Jules Polonetsky, executive director of the Future of Privacy Forum. If you’re interested, here’s the video. Transcript is here. Finally, down below the fold, I’ve listed a few law review articles and other essays of mine on this same subject.
- “The Pursuit of Privacy in a World Where Information Control Is Failing,” Harvard Journal of Law & Public Policy, 36 (2013): 409–55.
- “A Framework for Benefit-Cost Analysis in Digital Privacy Debates,” George Mason University Law Review, 20, no. 4 (Summer 2013): 1055–105.
- “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology, 14 (2013): 309–86.
Testimony / Filings:
- Senate Testimony on Privacy, Data Collection & Do Not Track, April 24, 2013.
- Comments of the Mercatus Center to the FTC in Privacy & Security Implications of the Internet of Things, May 31, 2013.
Blog posts & opeds:
- “Edith Ramirez’s ‘Big Data’ Speech: Privacy Concerns Prompt Precautionary Principle Thinking,” Technology Liberation Front, August 29, 2013.
- “Relax and Learn to Love Big Data,” US News & World Report, September 13, 2013.
- “Who Really Believes in ‘Permissionless Innovation’?” Technology Liberation Front, March 4, 2013.
- “What Does It Mean to ‘Have a Conversation’ about a New Technology?” Technology Liberation Front, May 23, 2013.
- “Planning for Hypothetical Horribles in Tech Policy Debates,” Technology Liberation Front, August 6, 2013.
- “On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.
- “Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
Jack Schinasi discusses his recent working paper, Practicing Privacy Online: Examining Data Protection Regulations Through Google’s Global Expansion, published in the Columbia Journal of Transnational Law. Schinasi takes an in-depth look at how online privacy laws differ across the world’s biggest Internet markets — specifically the United States, the European Union, and China. Schinasi discusses how we exchange data for services and whether users are aware they’re making this exchange. And, if not, should intermediaries like Google be required to make their data tracking more apparent? Or should we better educate Internet users about data sharing and privacy? Schinasi also covers whether the privacy laws currently in place in the US and EU are effective, what types of privacy concerns necessitate regulation in these markets, and whether we’ll see China take online privacy more seriously in the future.
On its face, Verizon won a resounding victory in Verizon v. FCC since the controversial net neutrality regulations were vacated by all three DC Circuit judges. This marks the second time in four years the FCC had its net neutrality enforcement struck down.
Look at published reactions, though, and you’ll see that both sides feel they suffered a damaging loss in yesterday’s decision.
Prominent net neutrality advocates say “the court loss was even more emphatic and disastrous than anyone expected” and a “FEMA-level fail.”
Conversely, critics of net neutrality say that it was a “big win for FCC” and that “the court has given the FCC near limitless power to regulate not just broadband, but the Internet itself.”
Most analysis of the case will point out that it’s a mixed bag for both sides. What is clear is that the net neutrality movement suffered an almost complete loss in the short term. The FCC’s regulations from the Open Internet Order preventing ISPs from “unreasonable discrimination” and “blocking” of Internet traffic were struck down. The court said those prohibitions are equivalent to common carrier obligations. Since ISPs are not common carriers–per previous FCC rulings–most of the Open Internet Order was vacated.
The long term is more uncertain and net neutrality critics have ample reason to be concerned. The court yesterday said the FCC has broad authority to regulate ISPs’ treatment of traffic under Section 706 of the 1996 Telecommunications Act. This somewhat unanticipated conclusion–given its breadth–leaves the FCC with several options if it wants to enact net neutrality or “net neutrality-lite” regulations.
Putting aside the possibility that the FCC or Verizon will appeal the decision, these are the developments to watch:
1. Title II reclassification.
The FCC could always reclassify ISPs as common carriers and subject them to common carrier obligations. I think this is unlikely for several reasons.
First, reclassification would absolutely poison relationships with Congressional Republicans, some important Democrats, and the broadband industry. This is a large reason why then-FCC Chairman Genachowski did not seriously pursue reclassification in 2010. If anything, the political climate is worse for reclassification. Republicans and ISPs simply oppose reclassification more than Democrats and advocates support it.
Second, the content companies–like Google, Hulu, and Netflix–who would ostensibly benefit from net neutrality seem to have cooled to the idea. Part of content companies’ waning interest in net neutrality, I suspect, is exhaustion. This fight has gone on for a decade with little to show for it. They may also realize that ISPs are not likely to engage in truly abusive behaviors. Broadband speeds and capacity have advanced substantially in a decade and concerns about being squeezed out have lessened. There are also powerful norms that ISPs are not likely to violate. Consumers don’t like unseemly behavior by ISPs–like throttling a competing VoIP or video provider. If only because of the PR risk, ISPs have significant incentives to maintain the level of service they have historically provided.
Third, reclassification is a time-consuming and legally fraught process. Even the most principled net neutrality proponents don’t want ISPs subjected to every applicable Title II obligation. But “forbearance” of Title II regulations means several regulatory proceedings, each one potentially subject to litigation.
Finally, Chairman Tom Wheeler, fortunately, does not appear to be an ideologue willing to spend most of his tenure as chairman re-fighting this bitter fight. His comments last month were telling:
I think we’re also going to see a two-sided market where Netflix might say, ‘well, I’ll pay in order to make sure that . . . my subscriber receives, the best possible transmission of this movie.’ I think we want to let those kinds of things evolve.
This statement struck dread in the hearts of many net neutrality proponents. I’ve always believed he was talking about specialized services when he made this statement, since pay-for-priority deals were essentially banned by the Open Internet Order. Regardless, his apparent comfort with changing pricing dynamics in two-sided markets indicates he is not a net neutrality partisan. I suspect Chairman Wheeler wants to go down as the chairman who guided America to a mobile future. His priority seems to be getting spectrum auctions right, not rehashing old battles.
2. Pay-for-priority deals.
The legal uncertainties need to be settled before ISPs begin looking at prioritization deals, but they’ll probably pursue some. For example, gaming services might want to pay ISPs to make sure gamers receive low latency connections and large enterprise customers might want prioritized traffic for services like virtual desktops for, say, on-the-road employees. No one knows how common these deals will be. In any case, these deals will probably be closely monitored by the FCC for perceived abuses of market power, as explained next.
3. Increased FCC scrutiny using Section 706.
Substantial and costly scrutiny of ISPs’ traffic management from the FCC is the long-term fear. It now appears that the FCC has many tools to regulate how ISPs treat traffic under Section 706. I call this net neutrality-lite, but Section 706 authority has the potential to be a more powerful weapon than the Open Internet Order. Not only can the FCC use 706 to regulate ISPs through adjudications, but the mere threat of using 706 against ISPs may induce compliance. If there is a bright side to the court’s recognition of the FCC’s 706 authority, it’s that it makes Title II reclassification of ISPs less likely.
Verizon v. FCC was mostly a win for those of us who viewed the Open Internet Order as a regulatory overreach. Risks remain since net neutrality as a policy goal will not die, but reclassification is a long shot, fortunately. Policy watchers will be analyzing Wheeler’s actions, in particular, to see whether the FCC pursues its Section 706 authority to regulate ISPs. Hopefully the court’s decision is accepted as final and marks the end of the most heated battles over net neutrality. The FCC could then turn its attention to important issues like spectrum auctions, the IP transition, and the rapidly changing television market.
When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head-off some sort of pending privacy or security apocalypse.
Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?
This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.
Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come:
With an estimated 50 billion connected objects coming online by 2020, some see good reason to put policies in place that regulate the new categories of data they will collect about the people who use those products. “The basic problem with the Internet of Things, unless privacy safeguards are established up front, is that users will lose control over the data they generate,” Marc Rotenberg, the president of the Electronic Privacy Information Center, told Fast Company in an email. Others see the emerging category as a perfect reason to block omnibus attempts to regulate user data. “If we spend all of our time living in fear of hypothetical worst-case scenarios, then the best-case scenarios will never come about,” says Adam Thierer, a Senior Research Fellow at George Mason University’s Mercatus Center. “That’s the nature of how innovation works. You have to allow for risks and experimentation, and even accidents and failures, if you want to get progress.”
Last week, I wrote about this conflict of visions in my dispatch from the CES show, and this topic is also the focus of my forthcoming eBook, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” To reiterate what I already said, my book will describe the future of the Internet of Things and all technology policy as a grand battle between the “precautionary principle” and “permissionless innovation.” The “precautionary principle” refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other worldview, “permissionless innovation,” refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.
While those adhering to the precautionary principle mindset tend to favor “top-down” legalistic approaches to solving those potential problems that might creep up, those of us who favor the permissionless innovation approach favor “bottom-up” solutions that evolve over time but do not interrupt the ongoing experimentation and innovation that consumers demand. What does a “bottom-up” approach mean in practice? Education and empowerment, social pressure, societal norms, voluntary self-regulation, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” (i.e., permissioned) nature.
We really should not underestimate the power of norms and public pressure to “regulate” in this regard, perhaps even better than law, which tends to be too slow-moving to make much of a difference. In my book I spend a great deal of time talking about how other technological innovations have been shaped by social norms, public pressure, and press attention. The same will be true for the Internet of Things and the various new technologies I discuss in my book. People will gradually adapt to the new technological realities and integrate these new devices and services into their lives over time.
Perhaps, then, it will be the case that if Google does something particularly bone-headed with Nest, a public backlash will ensue. Or maybe some consumers will just reject Nest and look for other options, which is apparently what Rotenberg is doing according to the Fast Company article. Of course, as I noted in concluding the interview, others may act quite differently and accept Nest and other new Internet of Things technologies, even if there are some privacy or security downsides. While I was visiting the Consumer Electronics Show last week, I heard it was freezing back here in DC. Had I had a Nest in my house, perhaps Google Now could have alerted me to the dangerously low temps and suggested that I raise the temperature remotely before my pipes froze. As I noted to Kessler:
“Would that have been creepy?” he says. “To me it would have been helpful. So for everything that people regard as a negative, I can usually find a positive. And if there’s that balance there, then it should be left to individuals to decide for themselves how to decide that balance.”
Finally, since I often get accused of being some sort of nihilist in these debates, I want to make it clear that ethics should influence all these discussions, but I prefer that we not impose ethics in a heavy-handed, inflexible way through preemptive, proscriptive regulatory controls. It makes more sense to wait and see how things play out before regulating to address harms, once we figure out which ones are real. (See the second and third essays listed below for more on ethics and technological innovation.) But we absolutely need to be engaging in robust societal discussions about digital ethics, digital citizenship, privacy and security by design, and sensible online etiquette. I’ve spent a lifetime writing about the power of that approach in the context of online child safety and I think it is equally applicable for privacy and security-related matters. In particular, we need to talk to our kids and our future technologists and innovators about smarter digital habits that respect the safety, security, and privacy of others. Those conversations can help us chart a more sensible path forward without sacrificing the many benefits that accompany the ongoing technological revolution we are blessed to be experiencing today.
- “Who Really Believes in ‘Permissionless Innovation’?” Technology Liberation Front, March 4, 2013.
- “What Does It Mean to ‘Have a Conversation’ about a New Technology?” Technology Liberation Front, May 23, 2013.
- “On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.
- “Planning for Hypothetical Horribles in Tech Policy Debates,” Technology Liberation Front, August 6, 2013.
- “Edith Ramirez’s ‘Big Data’ Speech: Privacy Concerns Prompt Precautionary Principle Thinking,” Technology Liberation Front, August 29, 2013.
- “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed,” Technology Liberation Front, April 29, 2011.
- “Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges,” Technology Liberation Front, April 10, 2012.
- “Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
- “Why Do We Always Sell the Next Generation Short?” Forbes, January 8, 2012.
- “The Six Things That Drive ‘Technopanics,’” Forbes, March 4, 2012.
With each booth I pass and presentation I listen to at the 2014 International Consumer Electronics Show (CES), it becomes increasingly evident that the “Internet of Things” era has arrived. In just a few short years, the Internet of Things (IoT) has gone from industry buzzword to marketplace reality. Countless new IoT devices are on display throughout the halls of the Las Vegas Convention Center this week, including various wearable technologies, smart appliances, remote monitoring services, autonomous vehicles, and much more.
This isn’t vaporware; these are devices or services that are already on the market or will launch shortly. Some will fail, of course, just as many other earlier technologies on display at past CES shows didn’t pan out. But many of these IoT technologies will succeed, driven by growing consumer demand for highly personalized, ubiquitous, and instantaneous services.
But will policymakers let the Internet of Things revolution continue or will they stop it dead in its tracks? Interestingly, not too many people out here in Vegas at the CES seem all that worried about the latter outcome. Indeed, what I find most striking about the conversation out here at CES this week versus the one about IoT that has been taking place in Washington over the past year is that there is a large and growing disconnect between consumers and policymakers about what the Internet of Things means for the future.
When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers. And that’s what has them so excited and ready to embrace these new technologies. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.
But at least so far, most consumers don’t seem to share the same worries. Instead, they are too busy shouting “More, More, More!” IoT technologies have generated enormous interest, and every projection I’ve seen so far shows that explosive growth can be expected across all classes of devices. ABI Research estimates that there are more than ten billion wirelessly connected devices in the market today, with more than thirty billion expected by 2020. Last year Cisco projected that by 2020 thirty-seven billion intelligent things would be connected and communicating, but it has now apparently revised that estimate upward to 40 or 50 billion. Thus, we are well on the way to a world where “everyone and everything will be connected to the network.”
Yet, it remains unclear what the IoT public policy landscape will look like in coming years and what disposition lawmakers and regulators will adopt toward these amazing new technologies. Two distinct policy dispositions are clashing over what approach should govern the future of innovation in this space.
I discussed this tension during a CES panel this morning on “The Internet of Things and the Home of the Future.” It featured outstanding opening remarks by FTC Commissioner Maureen K. Ohlhausen, who made the case for regulatory humility and focusing on how these new technologies can empower individuals in important new ways. “The Internet has evolved in one generation from a network of electronically interlinked research facilities in the United States to one of the most dynamic forces in the global economy, in the process reshaping entire industries and even changing the way we interact on a personal level,” she noted. “And the Internet of Things offers the promise of even greater progress ahead for consumers and competition.” I strongly encourage you to read Commissioner Ohlhausen’s entire speech. It is terrific and sets exactly the right tone for these discussions.
After Commissioner Ohlhausen spoke, we had a panel discussion that was expertly moderated by tech policy guru Larry Downes and which included remarks from Robert M. McDowell (Hudson Institute), Jeff Hagins (SmartThings), Robert Pepper (Cisco), Marc Rogers (Lookout), and me.
When I spoke, I described the future of the Internet of Things as a grand battle of two alternative worldviews: the “precautionary principle” and “permissionless innovation.” The “precautionary principle” refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other worldview, “permissionless innovation,” refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.
I’ll soon be releasing a new eBook about this conflict of visions. The book will be called, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom” and it should be out in the next few weeks. In it, I will explain how precautionary principle thinking is increasingly creeping into modern information technology policy discussions, explain how that is dangerous and must be rejected, and argue that policymakers should instead unapologetically embrace and defend the permissionless innovation vision — not just for the Internet but also for all new classes of networked technologies and platforms.
This intellectual tension is already evident in debates over the Internet of Things. While we are still very early in this debate, we can expect rising calls for preemptive regulatory controls on IoT technologies based on various safety, security, and especially privacy rationales. If the precautionary principle mentality wins out and trumps the permissionless innovation ethos that has already powered the first wave of the digital revolution, it will have profound ramifications.
As I’ll note in my forthcoming eBook, preserving and extending the permissionless innovation ethos to the Internet of Things is not about “protecting corporate profits” or assisting any particular technology, industry sector, or set of innovators. Rather, preserving an environment in which permissionless innovation can flourish is about ensuring that individuals as both citizens and consumers continue to enjoy the myriad benefits that accompany an open, innovative information ecosystem. More profoundly, this general freedom to innovate is essential for powering the next great wave of industrial innovation and rejuvenating our dynamic, high-growth economy. Even more profoundly, this is about preserving social and economic freedom more generally while rejecting the central-planning mentality and methods that throughout history have stifled human progress and prosperity.
Safety, security, and privacy problems will continue to persist, of course, and we should work to find practical, “bottom-up” solutions to them. As I detail in my eBook, education and empowerment, social pressure, societal norms, voluntary self-regulation, transparency efforts, and targeted enforcement of existing legal norms (especially through the common law) are almost always superior to “top-down,” command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I” (i.e., permissioned) nature. Preemptive technological controls of that sort would limit new innovation in this space and sacrifice the many benefits that will flow to consumers from continued experimentation.
Those who advocate precautionary regulatory approaches to the Internet of Things should think through the consequences of preemptively prohibiting technological innovation and realize that not everyone shares their values, especially pertaining to privacy, which is a highly subjective concept that is often difficult to legislate around. We should instead find ways to work together to seek out those practical, bottom-up solutions that will help individuals, institutions, and society learn how to better cope with technological change over time. Using this approach, we can embrace our dynamic future together without doing permanent damage to our innovative minds and economy.