Technology Liberation Front
Keeping politicians' hands off the Net & everything else related to technology

NETmundial is about to begin

Wed, 04/23/2014 - 08:55

As I blogged last week, I am in São Paulo to attend NETmundial, the meeting on the future of Internet governance hosted by the Brazilian government. The opening ceremony is about to begin. A few more observations:

  • The Brazilian Senate passed the landmark Marco Civil bill last night, and Dilma Rousseff, the Brazilian president, may use her appearance here today to sign it into law. The bill subjects data stored about Brazilians anywhere in the world to Brazilian jurisdiction and imposes net neutrality domestically. It also provides a safe harbor for ISPs and creates a notice-and-takedown system for offensive content.
  • Some participants are framing aspects of the meeting, particularly the condemnation of mass surveillance in the draft outcome document, as civil society v. the US government. There is a lot of concern that the US will somehow water down the surveillance language so that it doesn’t apply to the NSA’s surveillance. WikiLeaks has stoked some of this concern with breathless tweets. I don’t see events playing out this way. I am as opposed to mass US surveillance as anyone, but I haven’t seen much resistance from the US government participants in this regard. Most of the comments by the US on the draft have been benign. For example, WikiLeaks claimed that the US “stripped” language referring to the UN Human Rights Council; in fact, the US hasn’t stripped anything because it is not in charge (it can only make suggestions), and eliminating the reference to the HRC is actually a good idea because the HRC is a multilateral, not a multistakeholder, body. I expect a strong anti-surveillance statement to be included in the final outcome document. If it is not, it will probably be other governments, not the US, that block it.
  • In my view, the privacy section of the draft still needs work, however. In particular, it is important to cabin the paragraph so that it addresses governmental surveillance and does not interfere with voluntary, private arrangements in which users disclose information to receive free services.
  • I expect discussions over net neutrality to be somewhat contentious. Civil society participants are generally for it, with some governments, businesses, parts of the technical community, and yours truly opposed.
  • Although surveillance and net neutrality have received a lot of attention, they are not the most important issues at NETmundial. Instead, look for the language that will affect “the future of Internet governance,” which is after all what the meeting is about. For example, will the language on stakeholders’ “respective roles and responsibilities” be stricken? This language is held over from the Tunis Agenda, and it carries a lot of meaning. Do stakeholders participate as equals or do they, especially governments, have separate roles? There is also a paragraph on “enhanced cooperation,” which is a codeword for governments running the show. Look to see if it is still there in the final draft.
  • Speaking of the final draft, here is how it will be produced: During the meeting, participants will have opportunities to make 2-minute interventions on specific topics. The drafting group will make note of the comments and then retreat to a drafting room to make final edits to the draft. This is, of course, not really the open governance process that many of us want for the Internet; instead, select, unaccountable participants will have the final say. Yet two days is not long enough to hold a truly open, free-wheeling drafting conference. I think the structure of the conference, driven by the perceived need to produce an outcome document with certainty, is unfortunate and somewhat detracts from the legitimacy of whatever will be produced, even though I expect the final document to be OK on substance.
Categories: Tech Polis

Will the FCC Force Television Online Even If Aereo Loses in Court?

Tue, 04/22/2014 - 11:44

The Supreme Court hears oral arguments today in a case that will decide whether Aereo, an over-the-top video distributor, can retransmit broadcast television signals online without obtaining a copyright license. If the court rules in Aereo’s favor, national programming networks might stop distributing their programming for free over the air, and without prime time programming, local TV stations might go out of business across the country. It’s a make or break case for Aereo, but for broadcasters, it represents only one piece of a broader regulatory puzzle regarding the future of over-the-air television.

If the court rules in favor of the broadcasters, they could still lose at the Federal Communications Commission (FCC). At a National Association of Broadcasters (NAB) event earlier this month, FCC Chairman Tom Wheeler focused on “the opportunity for broadcast licensees in the 21st century . . . to provide over-the-top services.” According to Chairman Wheeler, TV stations shouldn’t limit themselves to being in the “television” business, because their “business horizons are greater than [their] current product.” Wheeler wants TV stations to become over-the-top “information providers”, and he sees the FCC’s role as helping them redefine themselves as a “growing source of competition” in that market segment.

If TV stations share Chairman Wheeler’s vision for their future, the FCC’s “help” in redefining the role of broadcast licensees in the digital era could represent a potential win rather than a loss. If Wheeler truly seeks to enable TV stations to deliver a competitive, fixed and mobile cable-like service, it could signal a positive shift in the FCC’s traditionally stagnant approach to broadcast regulation.

As with all regulatory pronouncements, the devil is in the details — notwithstanding the existing and legitimate skepticism among TV stations as to whether the FCC can and will treat them fairly in the future. For better or worse, many will judge the “success” of the broadcast incentive auction by the amount of revenue it raises. This reality gives the FCC unique incentives to “encourage” TV stations to give up their spectrum licenses. In Washington, “encouragement” can range from polite entreaty to regulatory pain.

After the FCC imposed new ownership limits on TV stations last month, some fear the agency will choose pain as its persuader. Last month’s action prompts them to ask: if Wheeler is sincere in his desire to help broadcasters pivot to a broader business model, why impose new ownership limits that could hinder their ability to compete with cable and over-the-top companies?

Chairman Wheeler attempted to address this question in his NAB speech, but his answer was oddly inconsistent with his broader vision. He said the FCC’s new ownership limits are rooted in the traditional goals of competition, diversity, and localism among TV stations. That only makes sense, however, if you believe TV stations should compete only with other TV stations. Imposing new ownership limits on TV stations won’t help them pivot to a future in which they compete in a broader “information provider” market — it will hinder them.

I expect TV station owners are wondering: If we accept Chairman Wheeler’s invitation to look beyond our current product, will he meet us on the horizon? Or will we find ourselves standing there alone? It’s hard to predict the future, because the future is always just over the horizon.

Categories: Tech Polis

Patrick Byrne on online retailers accepting Bitcoin

Tue, 04/22/2014 - 06:00


Patrick Byrne, CEO of Overstock.com, discusses how Overstock.com became one of the first online retail stores to accept Bitcoin. Byrne provides insight into how Bitcoin lowers transaction costs, making it beneficial to both retailers and consumers, and how governments are attempting to limit access to Bitcoin. Byrne also discusses his project DeepCapture.com, which raises awareness of market manipulation and naked short selling, as well as his philanthropic work and support for education reform.

Categories: Tech Polis

Pre-NETmundial Notes

Fri, 04/18/2014 - 10:29

Next week I’ll be in São Paulo for the NETmundial meeting, which will discuss “the future of Internet governance.” I’ll blog more while I’m there, but for now I just wanted to make a few quick notes.

  • This is the first meeting of its kind, so it’s difficult to know what to expect, in part because it’s not clear what others’ expectations are. There is a draft outcome document, but no one knows how significant it will be or what weight it will carry in other fora.
  • The draft outcome document is available here. The web-based tool for commenting on individual paragraphs is quite nice. Anyone in the world can submit comments on a paragraph-by-paragraph basis. I think this is a good way to lower the barriers to participation and get a lot of feedback.
  • I worry that we won’t have enough time to give due consideration to the feedback being gathered. The meeting is only two days long. If you’ve ever participated in a drafting conference, you know that this is not a lot of time. What this means, unfortunately, is that the draft document may be something of a fait accompli. Undoubtedly it will change a little, but the scope of the changes that can be contemplated will be limited by sheer time constraints.
  • Time will be even more constrained by the absurd amount of time allocated to opening ceremonies and welcome remarks. The opening ceremony begins at 9:30 am and the welcome remarks are not scheduled to conclude until 1 pm on the first day. This is followed by a lunch break, and then a short panel on setting goals for NETmundial, so that the first drafting session doesn’t begin until 2:30 pm. This seems like a mistake.
  • Speaking of the agenda, it was not released until yesterday. While NETmundial has indeed been open to participation by all, it has not been very transparent. An earlier draft outcome document had to be leaked by WikiLeaks on April 8. Not releasing an agenda until a few days before the event is also not very transparent. In addition, the processes by which decisions have been made have not been transparent to outsiders.

See you all next week.

Categories: Tech Polis

New Paper on the Cybersecurity Framework

Thu, 04/17/2014 - 10:46

Andrea Castillo and I have a new paper out from the Mercatus Center entitled “Why the Cybersecurity Framework Will Make Us Less Secure.” We contrast emergent, decentralized, dynamic provision of security with centralized, technocratic cybersecurity plans. Money quote:

The Cybersecurity Framework attempts to promote the outcomes of dynamic cybersecurity provision without the critical incentives, experimentation, and processes that undergird dynamism. The framework would replace this creative process with one rigid incentive toward compliance with recommended federal standards. The Cybersecurity Framework primarily seeks to establish defined roles through the Framework Profiles and assign them to specific groups. This is the wrong approach. Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts. What’s more, an assessment of DHS critical infrastructure categorizations by the Government Accountability Office (GAO) finds that the DHS itself has failed to adequately communicate its internal categories with other government bodies. Adding to the confusion is the proliferating amalgam of committees, agencies, and councils that are necessarily invited to the table as the number of “critical” infrastructures increases. By blindly beating the drums of cyber war and allowing unfocused anxieties to clumsily force a rigid structure onto a complex system, policymakers lose sight of the “far broader range of potentially dangerous occurrences involving cyber-means and targets, including failure due to human error, technical problems, and market failure apart from malicious attacks.” When most infrastructures are considered “critical,” then none of them really are.

We argue that instead of adopting a technocratic approach, the government should take steps to improve the existing emergent security apparatus. This means declassifying information about potential vulnerabilities and kickstarting the cybersecurity insurance market by buying insurance for federal agencies, which experienced 22,000 breaches in 2012. Read the whole thing, as they say.

Categories: Tech Polis

Renters and Rent-Seeking in San Francisco

Tue, 04/15/2014 - 08:53

[The following essay is a guest post from Dan Rothschild, director of state projects and a senior fellow with the R Street Institute.]

As anyone who’s lived in a major coastal American city knows, apartment renting is about as far from an unregulated free market as you can get. Legal and regulatory stipulations govern rents and rent increases, what can and cannot be included in a lease, even what constitutes a bedroom. And while the costs and benefits of most housing policies can be debated and deliberated, it’s generally well known that housing rentals are subject to extensive regulation.

But some San Francisco tenants have recently learned that, in addition to their civil responsibilities under the law, their failure to live up to some parts of the city’s housing code may trigger harsh criminal penalties as well. To wit: tenants who have been renting out part or all of their apartments on a short-term basis, usually through web sites like Airbnb, are being given 72 hours to vacate their (often rent-controlled) homes.

San Francisco’s housing stock is one of the most highly regulated in the country. The city uses a number of tools to preserve affordable housing and control rents, while at the same time largely prohibiting taller buildings that would bring more units online, increasing supply and lowering prices. California’s Ellis Act provides virtually the only legal and effective means of getting tenants (especially those benefiting from rent control) out of their units — but it has the perverse effect of encouraging landlords to demolish otherwise usable housing stock.

Again, the efficiency and equity ramifications of these policies can be discussed; the fact that demand curves slope downward, however, is really not up for debate.

Under San Francisco’s municipal code, it may be a crime punishable by jail time to rent out an apartment on a short-term basis. More importantly, this gives landlords the excuse they need to evict tenants they otherwise couldn’t under the city’s and state’s rigorous tenant protection laws. After all, they’re criminals!

Here’s the relevant section of the code:

Any owner who rents an apartment unit for tourist or transient use as defined in this Chapter shall be guilty of a misdemeanor. Any person convicted of a misdemeanor hereunder shall be punishable by a fine of not more than $1,000 or by imprisonment in the County Jail for a period of not more than six months, or by both. Each apartment unit rented for tourist or transient use shall constitute a separate offense.

Herein lies the rub. There are certainly legitimate reasons to prohibit the short-term rental of a unit in an apartment or condo building — some people want to know who their neighbors are, and a rotating cast of people coming and going could potentially be a nuisance.

But that’s a matter for contracts and condo by-laws to sort out. If people value living in units that they can list on Airbnb or sublet to tourists when they’re on vacation, that’s a feature like a gas stove or walk-in closet that can come part and parcel with the rental through contractual stipulation. Similarly, if people want to live in a building where overnight guests are verboten, that’s something landlords or condo boards can adjudicate. The Coase Theorem can be a powerful tool, if the law will allow it.

Since, so far as I can tell, there’s no prohibition under San Francisco’s code on having friends or family stay a night — or even a week — it seems that the underlying issue isn’t a legitimate concern about other tenants’ rights but an aversion to commerce. From the perspective of my neighbor, there’s no difference between letting my friend from college crash in my spare bedroom for a week and allowing someone I’ve never laid eyes on before to do the same in exchange for cash.

The peer production economy is still in its infancy, and there’s a lot that needs to be worked out. Laws like San Francisco’s that circumvent the discovery process of markets prevent landlords, tenants, condos, homeowners, and regulators from learning from experience and experimentation — and lock in a mediocre system that threatens to put people in jail for renting out a room.

Categories: Tech Polis

Our new draft paper on Bitcoin financial regulation: securities, derivatives, prediction markets, & gambling

Thu, 04/10/2014 - 14:23

I’m thrilled to make available today a discussion draft of a new paper I’ve written with Houman Shadab and Andrea Castillo looking at what will likely be the next wave of Bitcoin regulation, which we think will be aimed at financial instruments, including securities and derivatives, as well as prediction markets and even gambling. You can grab the draft paper from SSRN, and we very much hope you will give us your feedback and help us correct any errors. This is a complicated issue area and we welcome all the help we can get.

While there are many easily regulated intermediaries when it comes to traditional securities and derivatives, emerging bitcoin-denominated instruments rely much less on traditional intermediaries. Additionally, the block chain technology that Bitcoin introduced makes completely decentralized markets and exchanges possible, thus eliminating the need for intermediaries in complex financial transactions. In the article we survey the types of financial instruments and transactions that will most likely be of interest to regulators, including traditional securities and derivatives, new bitcoin-denominated instruments, and completely decentralized markets and exchanges.

We find that bitcoin derivatives would likely not be subject to the full scope of regulation under the Commodity Exchange Act because such derivatives would likely involve physical delivery (as opposed to cash settlement) and would not be capable of being centrally cleared. We also find that some laws, including those aimed at online gambling, do not contemplate a payment method like Bitcoin, thus placing many transactions in a legal gray area.

Following the approach to Bitcoin taken by FinCEN, we conclude that other financial regulators should consider exempting or excluding certain financial transactions denominated in Bitcoin from the full scope of the regulations, much as private securities offerings and forward contracts are treated. We also suggest that to the extent the costs of regulation and enforcement come to exceed their benefits, policymakers should consider and pursue strategies consistent with that new reality, such as efforts to encourage resilience and adaptation.

I look forward to your comments!

Categories: Tech Polis

New Books in Technology podcast about my new book

Mon, 04/07/2014 - 10:33

It was my great pleasure to join Jasmine McNealy last week on the “New Books in Technology” podcast to discuss my new book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. (A description of my book can be found here.)

My conversation with Jasmine was wide-ranging and lasted 47 minutes. The entire show can be heard here if you’re interested.

By the way, if you don’t follow Jasmine, you should begin doing so immediately. She’s on Twitter and here’s her page at the University of Kentucky School of Library and Information Science.  She’s doing some terrifically interesting work. For example, check out her excellent essay on “Online Privacy & The Right To Be Forgotten,” which I commented on here.

Categories: Tech Polis

Can NSA Force Telecom Companies To Collect More Data?

Sun, 04/06/2014 - 21:44

Recent reports highlight that the telephone meta-data collection efforts of the National Security Agency are being undermined by the proliferation of flat-rate, unlimited voice calling plans.  The agency is collecting data for less than a third of domestic voice traffic, according to one estimate.

It’s been clear for the past couple months that officials want to fix this, and President Obama’s plan for leaving meta-data in the hands of telecom companies—for NSA to access with a court order—might provide a back door opportunity to expand collection to include all calling data.  There was a potential new twist last week, when Reuters seemed to imply that carriers could be forced to collect data for all voice traffic pursuant to a reinterpretation of the current rule.

While the Federal Communications Commission requires phone companies to retain for 18 months records on “toll” or long-distance calls, the rule’s application is vague (emphasis added) for subscribers of unlimited phone plans because they do not get billed for individual calls.

The current FCC rule (47 C.F.R. § 42.6) requires carriers to retain billing information for “toll telephone service,” but the FCC doesn’t define this familiar term.  There is a statutory definition, but you have to go to the Internal Revenue Code to find it.  According to 26 U.S.C. § 4252(b),

the term “toll telephone service” means—

(1) a telephonic quality communication for which

(A) there is a toll charge which varies in amount with the distance and elapsed transmission time of each individual communication…

This Congressional definition describes the dynamics of long-distance pricing in 1965; it pre-dates the FCC rule (1986), yet it is still on the books.

By the 1990s, improving technology had made distance virtually irrelevant as a cost factor, and long-distance prices became based on minutes of use only (although clashing federal and state regulatory regimes frequently did result in higher rates for many short-haul intrastate calls as compared to long-haul interstate calls).  Incidentally, it was estimated at the time that telephone companies spent between 30 and 40 percent of their revenues on their billing systems.

In any event, with the elimination of distance-sensitive pricing, the Internal Revenue Service’s efforts to collect the Telephone Excise Tax—first enacted during the Spanish American War—were stymied.  In 2006, the IRS announced it would no longer litigate whether a toll charge that varies with elapsed transmission time but not distance (time-only service) is taxable “toll telephone service.”

I don’t see why telecom companies are required to collect and store for 18 months any telephone data, since it’s hard to imagine they are providing any services these days that actually qualify as “toll telephone service,” as that term is currently defined in the United States Code.

Categories: Tech Polis

A Short Response to Michael Sacasas on Advice for Tech Writers

Thu, 04/03/2014 - 10:41

What follows is a response to Michael Sacasas, who recently posted an interesting short essay on his blog The Frailest Thing, entitled, “10 Points of Unsolicited Advice for Tech Writers.” As with everything Michael writes, it is very much worth reading and offers a great deal of useful advice about how to be a more thoughtful tech writer. Even though I occasionally find myself disagreeing with Michael’s perspectives, I always learn a great deal from his writing and appreciate the tone and approach he uses in all his work. Anyway, you’ll need to bounce over to his site and read his essay first before my response will make sense.

______________________________

Michael:

Lots of good advice here. I think tech scholars and pundits of all dispositions would be wise to follow your recommendations. But let me offer some friendly pushback on points #2 & #10, because I spend much of my time thinking and writing about those very things.

In those two recommendations you say that those who write about technology “[should] not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.” And you also warn “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.”

I think these two recommendations are born of a certain frustration with the tenor of much modern technology writing; the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about technological disruptions and usually ends with the insulting phrase, “just get over it.” Such writing and punditry is rarely helpful, and you and others have rightly pointed out the deficiencies in that approach.

That being said, I believe it would be highly unfortunate to dismiss any inquiry into the nature of individual and societal acclimation to technological change. Because adaptation obviously does happen! Certainly there must be much we can learn from it. In particular, what I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” well-established personal, social, cultural, and legal norms.

To be clear, I entirely agree with your admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” But, again, we can at least agree that such acclimation has happened regularly throughout human history, right?  What were the mechanics of that process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?

I know you agree that these questions are worthy of exploration, but I suppose where we might part ways is over the question of the metrics by which we judge whether “the changes were inconsequential or benign.” Because I believe that while technological change often brings sweeping and quite consequential disruption, there is a value in the very act of living through it.

In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because doing so provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Even if you don’t agree with all of that, again, I would think you would find great value in studying the process by which such adaptation happens. And then we could argue about whether it was all really worth it! Alas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”

With all this in mind, let me suggest this friendly reformulation of your second recommendation: Tech writers should not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today. But how people and institutions learned to cope with those concerns is worthy of serious investigation. And what we learned from living through that process may be valuable in its own right.

I have been trying to sketch out an essay on all this entitled, “Muddling Through: Toward a Theory of Societal Adaptation to Disruptive Technologies.” I am borrowing that phrase (“muddling through”) from Joel Garreau, who used it in his book “Radical Evolution” when describing a third way of viewing humanity’s response to technological change. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.” That pretty much sums up my own perspective on things, but much study remains to be done on how that very messy process of “muddling through” works and whether we are left better off as a result. I remain optimistic that we are!

As always, I look forward to our continuing dialog over these interesting issues and I wish you all the best.

Cheers,

Adam Thierer

Categories: Tech Polis

“Big Data” Inquiry Should Study Economics & Free Speech: TechFreedom urges reform of blanket surveillance and FTC processes

Wed, 04/02/2014 - 20:12

On Monday, TechFreedom submitted comments urging the White House to apply economic thinking to its inquiry into “Big Data,” also pointing out that the worst abuses of data come not from the private sector but from government. The comments were in response to a request by the Office of Science and Technology Policy.

“On the benefits of Big Data, we urge OSTP to keep in mind two cautions. First, Big Data is merely another trend in an ongoing process of disruptive innovation that has characterized the Digital Revolution. Second, cost-benefit analyses generally, and especially in advance of evolving technologies, tend to operate in aggregates which can be useful for providing directional indications of future trade-offs, but should not be mistaken for anything more than that,” writes TF President Berin Szoka.

The comments also highlight the often-overlooked reality that data, big or small, is speech. Therefore, OSTP’s inquiry must include a First Amendment analysis. Historically, policymakers have ignored the First Amendment in regulating new technologies, from film to blogs to video games, but in 2011 the Supreme Court made clear in Sorrell v. IMS Health that data is a form of speech. Any regulation of Big Data should carefully define the government’s interest, narrowly tailor regulations to real problems, and look for less restrictive alternatives to regulation, such as user empowerment, transparency, and education. Ultimately, academic debates over how to regulate Big Data are less important than how the Federal Trade Commission currently enforces existing consumer protection laws, a subject that is the focus of the ongoing FTC: Technology & Reform Project led by TechFreedom and the International Center for Law & Economics.

More important than the private sector’s use of Big Data is the government’s abuse of it, the group says, referring to the NSA’s mass surveillance programs and the Administration’s opposition to requiring warrants for searches of Americans’ emails and cloud data. Last December, TechFreedom and its allies garnered over 100,000 signatures on a WhiteHouse.gov petition for ECPA reform. While the Administration has found time to reply to frivolous petitions, such as asking for the construction of a Death Star, it has ignored this serious issue for over three months. Worse, the administration has done nothing to help promote ECPA reform and, instead, appears to be actively orchestrating opposition to it from theoretically independent regulatory agencies, which has stalled reform in the Senate.

“This stubborn opposition to sensible, bi-partisan privacy reform is outrageous and shameful, a hypocrisy outweighed only by the Administration’s defense of its blanket surveillance of ordinary Americans,” said Szoka. “It’s time for the Administration to stop dodging responsibility or trying to divert attention from the government-created problems by pointing its finger at the private sector, by demonizing private companies’ collection and use of data while the government continues to flout the Fourth Amendment.”

Szoka is available for comment at media@techfreedom.org. Read the full comments and see TechFreedom’s other work on ECPA reform.

Categories: Tech Polis

How to Privatize the Internet

Wed, 04/02/2014 - 11:52

Today on Capitol Hill, the House Energy and Commerce Committee is holding a hearing on the NTIA’s recent announcement that it will relinquish its small but important administrative role in the Internet’s domain name system. The announcement has alarmed some policymakers with a well-placed concern for the future of Internet freedom; hence the hearing. Tomorrow, I will be on a panel at ITIF discussing the IANA oversight transition, which promises to be a great discussion.

My general view is that if well executed, the transition of the DNS from government oversight to purely private control could actually help secure a measure of Internet freedom for another generation—but the transition is not without its potential pitfalls.

The NTIA’s technical administration of the DNS’ “root zone” is an artifact of the Internet’s origins as a U.S. military experiment. In 1989, the government began the process of privatizing the Internet by opening it up to general and commercial use. In 1998, the Commerce Department created ICANN to oversee the DNS on a day-to-day basis. The NTIA’s announcement is arguably the culmination of this single decades-long process of privatization.

The announcement also undercuts the primary justification used by authoritarian regimes to agitate for control of the Internet. Other governments have long cited the United States’ unilateral control of the root zone, arguing that they, too, should have roles in governing the Internet. By relinquishing its oversight of the DNS, the United States significantly undermines that argument and bolsters the case for private administration of the Internet.

The United States’ stewardship of the root zone is largely apolitical. This apolitical approach to DNS administration is precisely what is at stake in the transition; hence the three pitfalls the Obama administration must avoid in order to preserve it.

The first pitfall is the most serious but also the least likely to materialize. Despite the NTIA’s excellent track record, authoritarian regimes like Russia, China, and Iran have long lobbied for the ITU, a clumsy and heavily politicized U.N. technical agency, to take over the NTIA’s duties. In its announcement, the NTIA said it would not accept a proposal from an intergovernmental organization, a clear rebuke to the ITU.

Nevertheless, liberal governments would be wise to send the organization a clear message in the form of much-needed reform. The ITU should adopt the transparency we expect of communications standards bodies, and it should focus on its core competency—international coordination of radio spectrum—instead of on Internet governance. If the ITU resists these reforms at its Plenipotentiary Conference this fall, the United States and other countries should slash funding or quit the Union.

ICANN’s Governmental Advisory Committee (GAC) presents a second pitfall. Indeed, the GAC is already the source of much mischief. For example, France and Luxembourg objected to the creation of the .vin top-level domain on the grounds that “vin” (wine) is a regulated term in those countries. Brazil and Peru have held up Amazon.com’s application for .amazon despite the fact that they previously agreed to the list of reserved place names, and rivers and states were not on it. Last July, the U.S. government, reeling from the Edward Snowden revelations, threw Amazon and the rule of law under the bus at the GAC as a conciliatory measure.

ICANN created the GAC to appease other governments in light of the United States’ outsized role. Since the United States is giving up its special role, the case for the GAC is much diminished. In practice, the limits on the GAC’s power are gradually eroding. ICANN’s board seems increasingly hesitant to overrule it out of fear that governments will go back to the ITU and complain that the GAC “isn’t working.” As part of the transition of the root zone to ICANN, therefore, new limits need to be placed on the GAC’s power. Ideally, ICANN would dissolve the GAC altogether.

The third pitfall comes from ICANN itself. The organization is awash in cash from domain registration fees and new top-level domain name applications—which cost $185,000 each—and when the root zone transition is completed, it will face no external accountability. Long-time ICANN insiders speak of “mission creep,” noting that the supposedly purely technical organization increasingly deals with trademark policy and has aided police investigations in the past, a dangerous precedent.

How can we prevent an unaccountable, cash-rich technical organization from imposing its own internal politics on what is supposed to be an apolitical administrative role? In the long run, we may never be able to stop ICANN from becoming a government-like entity, which is why it is important to support research and experimentation in peer-to-peer, decentralized domain name systems. This matter is under discussion, among other places, at the Internet Engineering Task Force, which may ultimately serve as something of a counterweight to an independent ICANN.

Despite these potential pitfalls, it is time for an Internet that is fully in private hands. The Obama administration deserves credit for proposing to complete the privatization of the Internet, but we must also carefully monitor the process to intercept any blunders that might result in politicization of the root zone.

Categories: Tech Polis

America in the golden age of broadband

Wed, 04/02/2014 - 11:20

This blog post was written in cooperation with Michael James Horney, a George Mason University master’s student, and is based upon our upcoming paper on broadband innovation, investment, and competition.

Ezra Klein’s interview with Susan Crawford paints a glowing picture of publicly provided broadband, particularly fiber to the home (FTTH), but the interview misses a number of important points.

The international broadband comparisons provided were selective and unstandardized. The US is much bigger and more expensive to cover than many small, densely populated countries. South Korea is the size of Minnesota but has 9 times the population. Essentially the same amount of network can be deployed and used by 9 times as many people, which makes the business case for fiber far more cost effective. However, South Korea has limited economic growth to show for its fiber investment. A recent Korean government report complained of “jobless growth,” and the country still earns the bulk of its revenue from industries that predate broadband.
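
To make the density point concrete, here is a rough back-of-the-envelope sketch in Python. The route length, per-kilometer cost, and household counts are hypothetical placeholders chosen only for illustration, not actual South Korean or US deployment figures: if roughly the same length of fiber plant passes nine times as many households, the build cost per household falls by roughly a factor of nine.

```python
# Rough illustration of how population density changes the fiber business case.
# All figures are hypothetical placeholders, not actual deployment costs.

def cost_per_household(route_km, cost_per_km, households_passed):
    """Capital cost of the fiber build divided across the households it passes."""
    return route_km * cost_per_km / households_passed

ROUTE_KM = 10_000     # same physical network length in both scenarios
COST_PER_KM = 30_000  # assumed construction cost per km (USD)

sparse = cost_per_household(ROUTE_KM, COST_PER_KM, households_passed=200_000)
dense = cost_per_household(ROUTE_KM, COST_PER_KM, households_passed=9 * 200_000)

print(f"Cost per household at baseline density: ${sparse:,.0f}")  # $1,500
print(f"Cost per household at 9x density:       ${dense:,.0f}")   # $167
```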

It is more realistic and correct to compare the US to the European Union, which has a comparable population and geographic area. Data from America’s National Broadband Map and the EU Digital Agenda Scoreboard show that the US exceeds the EU on many important broadband measures, including the deployment of fiber to the home (FTTH), which is available at twice the rate of the EU. Even where fiber networks are available in the EU, the overall adoption rate is just 2%. The EU itself, as part of its Digital Single Market initiative, has recognized that its approach to broadband has not worked and is now looking to the American model.

The assertion that Americans are “stuck” with cable as the only provider of broadband is false. It is more correct to say that Europeans are “stuck” with DSL, as 74% of all EU broadband connections are delivered over copper networks. Indeed, DSL and cable together account for 70% of America’s broadband connections, with the remaining, growing 30% comprising FTTH, wireless, and other broadband solutions. In fact, the US buys and lays more fiber than all of the EU combined.

The reality is that Europeans are “stuck” with a tortured regulatory approach to broadband, which disincentivizes investment in next generation networks. As data from Infonetics show, a decade ago the EU accounted for one-third of the world’s investment in broadband; that share has plummeted to less than one-fifth today. Meanwhile, American broadband providers invest at twice the rate of their European counterparts and account for a quarter of the world’s outlay in communication networks. Americans are just 4% of the world’s population, but they enjoy one quarter of its broadband investment.

The following chart illustrates the intermodal competition between different types of broadband networks (cable, fiber, DSL, mobile, satellite, wifi) in the US and EU.

  • Availability of broadband with a download speed of 100 Mbps or higher: US 57%*, EU 30%
  • Availability of cable broadband: US 88%, EU 42%
  • Availability of LTE: US 94%**, EU 26%
  • Availability of FTTH: US 25%, EU 12%
  • Percent of population that subscribes to broadband by DSL: US 34%, EU 74%
  • Percent of households that subscribe to broadband by cable: US 36%***, EU 17%

The interview offered some cherry-picked examples, particularly Stockholm as the FTTH utopia. The story behind this city is more complex and costly than presented. Some $800 million has been invested in FTTH in Stockholm to date, with an additional $38 million each year. Subscribers pay for the fiber broadband through a combination of monthly access fees and increases to the municipal fees assessed on homes and apartments. Acreo, a state-owned consulting company charged with assessing Sweden’s fiber project, concludes that the FTTH project shows at best a “weak but statistically significant correlation between fiber and employment” and that “it is difficult to estimate the value of FTTH for end users in dollars and some of the effects may show up later.”

Next door, Denmark took a different approach. In 2005, 14 utility companies in Denmark invested $2 billion in FTTH. With advanced cable and fiber networks, 70% of Denmark’s households and businesses have access to ultra-fast broadband, but less than 1 percent subscribe to the 100 Mbps service. The utility companies have just 250,000 broadband customers combined, and most customers subscribe to the tiers below 100 Mbps because they satisfy their needs and budget. Indeed, 80% of the broadband subscriptions in Denmark are below 30 Mbps. About 20 percent of homes and businesses subscribe to 30 Mbps, but more than two-thirds subscribe to 10 Mbps.

Meanwhile, LTE mobile networks have been rolled out, and already 7 percent (350,000) of Danes use 3G/4G as their primary broadband connection, surpassing FTTH customers by 100,000. This is particularly important because in many sectors of the Danish economy, including banking, health, and government, users can access services only digitally. These services are fully functional on mobile devices at their associated speeds. The interview claims that wireless will never be a substitute for fiber, but millions of people around the world are proving that wrong every day.

The price comparisons provided between the US and selected European countries also leave out compulsory media license fees (to cover state broadcasting) and taxes that can add some $80 per month to the cost of every broadband subscription. When these fees are added up, broadband is not so cheap in Sweden and other European countries. Indeed, the US frequently comes out less expensive.

The US broadband approach has a number of advantages. Private providers bear the risks, not taxpayers. Consumers dictate the broadband they want, not the government. Prices are also scalable and transparent; the price reflects the real cost. Furthermore, as the OECD and the ITU have recognized, the entry-level costs for broadband in the US are some of the lowest in the world. The ITU recommends that people pay no more than 5% of their income for broadband; most developed countries, including the US, fall within 2-3% for the highest tier of broadband. It is only fair to pay more for better quality. If your needs are just email and web browsing, then basic broadband will do. But if you want high-definition Netflix, you should pay more. There is no reason why your neighbor should subsidize your entertainment choices.
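
As a rough illustration of how such an affordability comparison works, the ITU-style metric is simply the monthly broadband bill, including any compulsory license fees and taxes, divided by monthly income. The prices, fees, and incomes below are hypothetical placeholders, not actual market data:

```python
# Hypothetical affordability comparison in the spirit of the ITU's 5%-of-income guideline.
# Prices, fees, and incomes are illustrative placeholders only.

def affordability(advertised_price, compulsory_fees, monthly_income):
    """Broadband cost as a share of income, counting mandatory fees and taxes."""
    return (advertised_price + compulsory_fees) / monthly_income

us = affordability(advertised_price=45.0, compulsory_fees=5.0, monthly_income=4_200.0)
eu = affordability(advertised_price=30.0, compulsory_fees=80.0, monthly_income=3_200.0)

print(f"US: {us:.1%} of monthly income")  # ~1.2%
print(f"EU: {eu:.1%} of monthly income")  # ~3.4%
```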

The interview asserted that government investment in FTTH is needed to increase competitiveness, but no evidence was given. It’s not just a broadband network that creates economic growth; broadband is just one input in a complex economic equation. To put things into perspective, consider that the US has transformed its economy through broadband in the last two decades. The Internet portion of America’s economy alone is larger than the entire GDP of Sweden.

The assertion that the US is #26 in broadband speed is simply wrong. This is an outdated statistic from 2009 used in Crawford’s book. The Akamai report she references is released quarterly, so there was no reason not to include a more recent figure in time for the book’s publication in December 2012. Today the US ranks #8 in the world on the same measure. Clearly the US is not falling behind if its ranking on average measured speed has steadily climbed from #26 to #8. In any case, according to Akamai, many US cities and states have some of the fastest download speeds in the world and would rank in the global top ten.

There is no doubt that fiber is an important technology and the foundation of all modern broadband networks, but the economic question is to what extent fiber should be brought to every household, given the cost of deployment (many thousands of dollars per household), the low level of adoption (it is difficult to get a critical mass of a community to subscribe, given diverse needs), and the fact that other broadband technologies continue to improve in speed and price.
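
One simple way to see how deployment cost and adoption interact is to compute a payback period under stated assumptions. The build cost, take rate, and monthly margin below are hypothetical figures chosen for illustration, not actual FTTH economics:

```python
# Hypothetical payback calculation for a fiber-to-the-home build.
# Cost, margin, and take-rate figures are illustrative placeholders only.

def payback_years(cost_per_home_passed, take_rate, monthly_margin_per_subscriber):
    """Years to recover the build cost per home passed, given adoption and margin."""
    monthly_margin_per_home = take_rate * monthly_margin_per_subscriber
    return cost_per_home_passed / (monthly_margin_per_home * 12)

# Same $3,000-per-home build under two adoption scenarios:
low = payback_years(3_000, take_rate=0.20, monthly_margin_per_subscriber=40)
high = payback_years(3_000, take_rate=0.60, monthly_margin_per_subscriber=40)

print(f"Payback at 20% adoption: {low:.1f} years")   # ~31 years
print(f"Payback at 60% adoption: {high:.1f} years")  # ~10 years
```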

The interview didn’t mention the many failed federal and municipal broadband projects. Chattanooga is just one example of a federally funded fiber project costing hundreds of millions of dollars with too few users. Municipal projects that have failed to meet expectations include Chicago; Burlington, VT; Monticello, MN; Oregon’s MINET; and Utah’s UTOPIA.

Before deploying costly FTTH networks, the feasibility of improving existing DSL and cable networks, as well as of deploying wireless broadband, should be considered. A case in point is Canada. The OECD reports that Canada and South Korea have essentially the same advertised speeds, 68.33 and 66.83 Mbps respectively. Canada’s fixed broadband subscriptions are shared almost equally between DSL and cable, with very little FTTH. This shows that fast speeds are possible on different kinds of networks.

The future demands a multitude of broadband technologies. There is no one technology that is right for everyone. Consumers should have the ability to choose based upon their needs and budget, not be saddled with yet more taxes from misguided politicians and policymakers.

Consider that mobile broadband is growing at four times the rate of fixed broadband, according to the OECD, and there are some 300 million mobile broadband subscriptions in the US, three times as many as fixed broadband subscriptions. In Africa, mobile broadband is growing at 50 times the rate of fixed broadband. Many Americans have selected mobile as their only broadband connection and love its speed and flexibility. Vectoring on copper wires enables speeds of 100 Mbps. Cable DOCSIS 3.0 enables speeds of 300 Mbps, and cable companies are deploying neighborhood wifi solutions. With all this innovation and competition, it is mindless to create a new government monopoly. We should let the golden age of broadband flourish.

Source for US and EU Broadband Comparisons: US data from National Broadband Map, “Access to Broadband Technology by Speed,” Broadband Statistics Report, July 2013, http://www.broadbandmap.gov/download/Technology%20by%20Speed.pdf and http://www.broadbandmap.gov/summarize/nationwide. EU data from European Commission, “Chapter 2: Broadband Markets,” Digital Agenda Scoreboard 2013 (working document, December 6, 2013), http://ec.europa.eu/digital-agenda/sites/digital-agenda/files/DAE%20SCOREBOARD%202013%20-%202-BROADBAND%20MARKETS%20_0.pdf.

*The National Cable & Telecommunications Association suggests speeds of 100 Mbps are available to 85% of Americans. See “America’s Internet Leadership,” 2013, www.ncta.com/positions/americas-internet-leadership.

**Verizon’s most recent report notes that it reaches 97 percent of America’s population with 4G/LTE networks. See Verizon, News Center: LTE Information Center, “Overview,” www.verizonwireless.com/news/LTE/Overview.html.

***This figure is based on 49,310,131 cable subscribers at the end of 2013, noted by Leichtman Research http://www.leichtmanresearch.com/press/031714release.html compared to 138,505,691 households noted by the National Broadband Map.

Categories: Tech Polis

Bitcoin hearing in the House today, fun event tonight

Wed, 04/02/2014 - 10:15

Later today I’ll be testifying at a hearing before the House Small Business Committee titled “Bitcoin: Examining the Benefits and Risks for Small Business.” It will be live streamed starting at 1 p.m. My testimony will be available on the Mercatus website at that time, but below is some of my work on Bitcoin in case you’re new to the issue.

Also, tonight I’ll be speaking at a great event hosted by the DC FinTech meetup on “Bitcoin & the Internet of Money.” I’ll be joined by Bitcoin core developer Jeff Garzik and we’ll be interviewed on stage by Joe Weisenthal of Business Insider. It’s open to the public, but you have to RSVP.

Finally, stay tuned because in the next couple of days my colleagues Houman Shadab, Andrea Castillo, and I will be posting a draft of our new law review article looking at Bitcoin derivatives, prediction markets, and gambling. Bitcoin is the most fascinating issue I’ve ever worked on.

Here’s Some Bitcoin Reading…

And here’s my interview with Reihan Salam discussing Bitcoin…

Categories: Tech Polis

Video – DisCo Policy Forum Panel on Privacy & Innovation in the 21st Century

Wed, 04/02/2014 - 09:32

Last December, it was my pleasure to take part in a great event, “The Disruptive Competition Policy Forum,” sponsored by Project DisCo (or The Disruptive Competition Project). It featured several excellent panels and keynotes, and the video of the panel I was on has just been posted here; I have embedded it below. In my remarks, I discussed:

  • benefit-cost analysis in digital privacy debates (building on this law review article);
  • the contrast between Europe and America’s approach to data & privacy issues (referencing this testimony of mine);
  • the problem of “technopanics” in information policy debates (building on this law review article);
  • the difficulty of information control efforts in various tech policy debates (which I wrote about in this law review article and these two blog posts: 1, 2);
  • the possibility of less-restrictive approaches to privacy & security concerns (which I have written about here as well as in those other law review articles);
  • the rise of the Internet of Things and the unique challenges it creates (see this and this as well as my new book); and,
  • the possibility of a splintering of the Internet or the rise of “federated Internets.”

The panel was expertly moderated by Ross Schulman, Public Policy & Regulatory Counsel for CCIA, and also included remarks from John Boswell, SVP & Chief Legal Officer at SAS, and Josh Galper, Chief Policy Officer and General Counsel of Personal, Inc. (By the way, you should check out some of the cool things Personal is doing in this space to help consumers. Very innovative stuff.) The video lasts one hour. Here it is:

Categories: Tech Polis

Congress Should Lead FCC by Example, Adopt Clean STELA Reauthorization

Tue, 04/01/2014 - 11:31

After yesterday’s FCC meeting, it appears that Chairman Wheeler has a finely tuned microscope trained on broadcasters and a proportionately large blind spot for the cable television industry.

Yesterday’s FCC meeting was unabashedly pro-cable and anti-broadcaster. The agency decided to prohibit television broadcasters from engaging in the same industry behavior as cable, satellite, and telco television distributors and programmers. The resulting disparity in regulatory treatment highlights the inherent dangers in addressing regulatory reform piecemeal rather than comprehensively as contemplated by the #CommActUpdate. Congress should lead the FCC by example and adopt a “clean” approach to STELA reauthorization that avoids the agency’s regulatory mistakes.

The FCC meeting offered a study in the way policymakers pick winners and losers in the marketplace without acknowledging unfair regulatory treatment. It’s a three-step process.

  • First, the policymaker obfuscates similarities among issues by referring to substantively similar economic activity across multiple industry segments using different terminology.
  • Second, it artificially narrows the issues by limiting any regulatory inquiry to the disfavored industry segment only.
  • Third, it adopts disparate regulations applicable to the disfavored industry segment only while claiming the unfair regulatory treatment benefits consumers.

The broadcast items adopted by the FCC yesterday hit all three points.

“Broadcast JSAs”

Using the three-step process described above, the FCC adopted an order prohibiting two broadcast television stations from agreeing to jointly sell more than 15% of their advertising time.

  • First, the FCC referred to these agreements as “JSAs” or “joint sales agreements”.
  • Second, the FCC prohibited these agreements only among broadcast television stations even though the largest cable, satellite, and telco video distributors sell their advertising time through a single entity.
  • Third, FCC Chairman Tom Wheeler claimed all the agency was “doing [yesterday was] leveling the negotiating table” for negotiations involving the largely unrelated issue of “retransmission consent”.

If the FCC had acknowledged that cable, satellite, and telco distributors jointly sell their advertising, and had it included them in its inquiry as well, Chairman Wheeler could not have kept a straight face while asserting that all the agency was doing was leveling the playing field. Hence the power of obfuscatory terminology and artificially narrowed issues.

“Broadcast Exclusivity Agreements”

The FCC also issued a further notice yesterday seeking comment on broadcast “non-duplication exclusivity agreements” and “syndicated exclusivity agreements.” These agreements, which are collectively referred to as “broadcast exclusivity agreements”, are a form of territorial exclusivity: They provide a local television station with the exclusive right to transmit broadcast network or syndicated programming in the station’s local market only.

Unlike cable, satellite, and telco television distributors, broadcast television stations are prohibited by law from entering into exclusive programming agreements with other television distributors in the same market: The Satellite Television Extension and Localism Act (STELA) prohibits television stations from entering into exclusive retransmission consent agreements — i.e., a television station must make its programming available to all other television distributors in the same market. Cable, satellite, and telco distributors are legally permitted to enter into exclusive programming agreements on a nationwide basis — e.g., DIRECTV’s NFL Sunday Ticket.

If the FCC is concerned by the limited form of territorial exclusivity permitted for broadcasters, it should be even more concerned about the broader exclusivity agreements that have always been permitted for cable, satellite, and telco television distributors. But the FCC nevertheless used the three-step process for picking winners and losers to limit its consideration of exclusive programming agreements to broadcasters only.

  • First, the FCC uses unique terminology to refer to “broadcast” exclusivity agreements (i.e., “non-duplication” and “syndicated exclusivity”), which obfuscates the fact that these agreements are a limited form of exclusive programming agreements.
  • Second, the FCC is seeking comment on exclusive programming agreements between broadcast television stations and programmers only even though satellite and other video programming distributors have entered into exclusive programming agreements.
  • Third, it appears the pretext for limiting the scope of the FCC’s inquiry to broadcasters will again be “leveling the playing field” between broadcasters and other television distributors — to benefit consumers, of course.

“Joint Retransmission Consent Negotiations”

Finally, the FCC prohibited a television broadcast station ranked among the top four stations (as measured by audience share) from negotiating “retransmission consent” jointly with another top four station in the same market if the stations are not commonly owned. The FCC reasoned that “the threat of losing programming of two [or] more top four stations at the same time gives the stations undue bargaining leverage in negotiations with [cable, satellite, and telco television distributors].”

As an economic matter, “retransmission consent” is essentially a substitute for the free market copyright negotiations that could occur absent the “compulsory copyright license” in the 1976 Copyright Act and an earlier Supreme Court decision interpreting the term “public performance”. In the absence of retransmission consent, compensation for the use of programming provided by broadcast television stations and programming networks would be limited to the artificially low amounts provided by the compulsory copyright license.

To the extent retransmission consent is merely another form of program licensing, it is indistinguishable from negotiations between cable, satellite, and telco distributors and cable programming networks — which typically involve the sale of bundled channels. If bundling two television channels together “gives the stations undue bargaining leverage” in retransmission consent negotiations, why doesn’t a cable network’s bundling of multiple channels together for sale to a cable, satellite, or telco provider give the cable network “undue bargaining leverage” in its licensing negotiations? The FCC avoided this difficulty using the old one, two, three approach.

  • First, the FCC used the unique term “retransmission consent” to refer to the sale of programming rights by broadcasters.
  • Second, the FCC instituted a proceeding seeking comment only on “retransmission consent” rather than all programming negotiations.
  • Third, the FCC found that lowering retransmission consent costs could lower the prices consumers pay to cable, satellite, and telco television distributors — to remind us that it’s all about consumers, not competitors.

If it were really about lowering prices for consumers, the FCC would also have considered whether prohibiting channel bundling by cable programming networks would lower consumer prices too. For reasons left unexplained, cable programmers remain free to bundle as many channels as they wish in their licensing negotiations.

“Clean STELA”

After yesterday’s FCC meeting, it appears that Chairman Wheeler has a finely tuned microscope trained on broadcasters and a proportionately large blind spot for the cable television industry. To be sure, the disparate results of yesterday’s FCC meeting could be unintentional. But, even so, they highlight the inherent dangers in any piecemeal approach to industry regulation. That’s why Congress should adopt a “clean” approach to STELA reauthorization and reject the demands of special interests for additional piecemeal legislative changes. Consumers would be better served by a more comprehensive effort to update video regulations.

Categories: Tech Polis

The Beneficial Uses of Private Drones [Video]

Fri, 03/28/2014 - 12:10

Give us our drone-delivered beer!

That’s how the conversation got started between John Stossel and me on his show this week. I appeared on Stossel’s Fox Business TV show to discuss the many beneficial uses of private drones. The trouble is that drones — which are more appropriately called unmanned aircraft systems — have an image problem. When we think about drones today, they often conjure up images of nefarious military machines dealing death and destruction from above in a far-off land. And certainly plenty of that happens today (far, far too much in my personal opinion, but that’s a rant best left for another day!).

But any technology can be put to both good and bad uses, and drones are merely the latest in a long list of “dual-use technologies,” which have both military uses and peaceful private uses. Other examples of dual-use technologies include: automobiles, airplanes, ships, rockets and propulsion systems, chemicals, computers and electronic systems, lasers, sensors, and so on. Put simply, almost any technology that can be used to wage war can also be used to wage peace and commerce. And that’s equally true for drones, which come in many sizes and have many peaceful, non-military uses. Thus, it would be wrong to judge them based upon their early military history or how they are currently perceived. (After all, let’s not forget that the Internet’s early origins were militaristic in character, too!)

Some of the other beneficial uses and applications of unmanned aircraft systems include: agricultural (crop inspection & management, surveying); environmental (geological, forest management, tornado & hurricane research); industrial (site & service inspection, surveying); infrastructure management (traffic and accident monitoring); public safety (search & rescue, post-natural disaster services, other law enforcement); and delivery services (goods & parcels, food & beverages, flowers, medicines, etc.), just to name a few.


(The video segment is available at video.foxbusiness.com.)

This is why it is troubling that the Federal Aviation Administration (FAA) continues to threaten private drone operators with cease-and-desist letters and to discourage the many beneficial uses of these technologies, even as other countries rush ahead and green-light private drone services. As I noted on the Stossel show, while the FAA is well-intentioned in its efforts to keep the nation’s skies safe, the agency is allowing hypothetical worst-case scenarios to get in the way of beneficial innovation. A lot of this fear is driven by privacy concerns, too. But as Brookings Institution senior fellow John Villasenor has explained, we need to be careful about rushing to preemptively control new technologies based on hypothetical privacy fears:

If, in 1995, comprehensive legislation to protect Internet privacy had been enacted, it would have utterly failed to anticipate the complexities that arose after the turn of the century with the growth of social networking and location-based wireless services. The Internet has proven useful and valuable in ways that were difficult to imagine over a decade and a half ago, and it has created privacy challenges that were equally difficult to imagine. Legislative initiatives in the mid-1990s to heavily regulate the Internet in the name of privacy would likely have impeded its growth while also failing to address the more complex privacy issues that arose years later.

This is a key theme discussed throughout my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” The central lesson of the booklet is that living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. We shouldn’t let our initial (and often irrational) fears of new technologies dictate the future course of innovation. We can and will find constructive solutions to the hard problems posed by new technologies because we are creative and resilient creatures. And, yes, some regulation will be necessary. But how and when we regulate matters profoundly. Preemptive, precautionary-based proposals are almost never the best way to start.

Finally, as I also noted during the interview with Stossel, it’s always important to consider trade-offs and opportunity costs when discussing the disruptive impact of new technologies. For example, while some fear the safety implications of private drones, we should not forget that over 30,000 people die in automobile-related accidents every year in the United States. While the number of vehicle-related deaths has been declining in recent years, that remains an astonishing number of deaths. What if a new technology existed that could help prevent a significant number of these fatalities? Certainly, “smart car” technology and fully autonomous “driverless cars” should help bring down that number significantly. But how might drones help?

Consider some of the mundane tasks that automobiles are used for today. Cars are used to go grab dinner or have someone else deliver it, to pick up medicine at a local pharmacy, to have newspapers or flowers delivered, and so on. Every time a human gets behind the wheel of an automobile to do these things, the chance for injury or even death exists, even close to home. In fact, a large percentage of all accidents happen within just a few miles of the car owner’s home. A significant number of those accidents could be avoided if we were able to rely on drone delivery for the things we use cars and trucks for today.

These are just some of the things to consider as the debate over unmanned aircraft systems continues. Drones have gotten a very bad name thus far, but we should remain open-minded about their many beneficial, peaceful, and pro-consumer uses.

(For more on this issue, read this April 2013 filing to the FAA I wrote along with my Mercatus colleagues Eli Dourado and Jerry Brito.)




Categories: Tech Polis

The End of Net Neutrality and the Future of TV

Wed, 03/26/2014 - 11:03

Some recent tech news provides insight into the trajectory of broadband and television markets. These stories also indicate a poor prognosis for net neutrality. Setting aside the substantial political and ISP opposition to new rules, even net neutrality proponents point out that “neutrality” is difficult to define and even harder to implement. Now that the line between “Internet video” and “television” delivered via Internet Protocol (IP) is increasingly blurring, net neutrality goals are suffering from mission creep.

First, there was the announcement that Netflix, like many large content companies, was entering into a paid peering agreement with Comcast, prompting a complaint from Netflix CEO Reed Hastings, who argued that ISPs have too much leverage in negotiating these interconnection deals.

Second, Comcast and Apple discussed a possible partnership whereby Comcast customers would receive prioritized access to Apple’s new video service. Apple’s TV offering would be a “managed service” exempt from net neutrality obligations.

Interconnection and managed services are generally not considered net neutrality issues. They are not “loopholes.” They were expressly exempted from the FCC’s 2010 (now-defunct) rules. However, net neutrality proponents are attempting to bring interconnection and managed services to the FCC’s attention as the FCC crafts new net neutrality rules. Net neutrality proponents have an uphill battle already, and the following trends won’t help.

1. Interconnection becomes less about traffic burden and more about leverage.

The ostensible reason that content companies like Netflix (or third parties like Cogent) pay ISPs for interconnection is that video content unloads a substantial amount of traffic onto ISPs’ last-mile networks.

Someone has to pay for network upgrades to handle the traffic. Typically, the parties seem to abide by the equity principle that whoever is sending the traffic — in this case, Netflix — should bear the costs via paid peering. That way, the increased expense is incurred by Netflix, which can spread costs across its subscribers. If ISPs incurred the expense of upgrades, they would have to spread costs over their entire subscriber bases, even though many of their subscribers are not Netflix users.
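To make that cost-allocation point concrete, here is a minimal back-of-the-envelope sketch. All of the figures are hypothetical (they are not actual Netflix or ISP numbers); the point is only to show why recovering upgrade costs from the party sending the traffic concentrates the expense on the users who generate it.

    # Hypothetical illustration of the cost-spreading argument above.
    # None of these figures are real; they are placeholders chosen for clarity.

    upgrade_cost = 10_000_000          # last-mile upgrade cost driven by video traffic
    isp_subscribers = 5_000_000        # all of a hypothetical ISP's customers
    netflix_users_on_isp = 1_500_000   # the subset who actually stream Netflix

    # If the ISP recovers the cost, every subscriber pays, streamer or not.
    per_isp_subscriber = upgrade_cost / isp_subscribers

    # If Netflix pays via peering and recovers the cost from its own users,
    # only the customers generating the traffic pay.
    per_netflix_user = upgrade_cost / netflix_users_on_isp

    print(f"Spread across all ISP subscribers: ${per_isp_subscriber:.2f} each")
    print(f"Spread across Netflix users only:  ${per_netflix_user:.2f} each")

Under these made-up numbers, every ISP subscriber would pay $2.00 regardless of whether they stream, while paid peering puts $6.67 on each Netflix user. That is the equity principle described above: the costs follow the traffic.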

That principle doesn’t seem to hold for WatchESPN, which is owned by Disney. WatchESPN is an online service that provides live streams of ESPN television programming, like ESPN2 and ESPNU, to personal computers and also includes ESPN3, an online-only livestream of non-marquee sports. If a company has leverage in other markets, like Disney does in TV programming markets, I suspect ISPs can’t or won’t charge for interconnection. These interconnection deals are non-public but Disney probably doesn’t pay ISPs for transmitting WatchESPN traffic onto ISPs’ last-mile networks. The existence of a list of ESPN’s “Participating Providers” indicates that ISPs actually have to pay ESPN for the privilege of carrying WatchESPN content.

Netflix is different from WatchESPN in significant ways (it has substantially more traffic, for one). However, it is a popular service and seems to be flexing its leverage muscle with its Open Connect program, which provided higher-quality videos to participating ISPs. It’s plausible that someday video sources like Netflix will gain leverage, especially as broadband competition increases, and ISPs will have to pay content companies for traffic, rather than the reverse. When competitive leverage is the issue, antitrust agencies, not the FCC, have the appropriate tools to police business practices.

2. The rise of managed services in video.

Managed services include services ISPs provide to customers like VoIP and video-on-demand (VOD). They travel on data streams that receive priority so quality can be guaranteed, since customers won’t tolerate a jittery phone call or movie stream. Crucially, managed services are carried on the same physical broadband network but ride separate data streams that don’t interfere with a customer’s Internet service.
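To illustrate that separation conceptually, here is a toy sketch with made-up capacities. It is not how any ISP actually provisions its network; it only shows the idea that a reserved lane for a managed service leaves ordinary Internet traffic competing for the remaining best-effort capacity, while the managed stream itself is unaffected by Internet load.

    # Toy model of one physical broadband line carrying both a managed
    # service (reserved capacity) and best-effort Internet traffic.
    # All capacities are hypothetical.

    LINK_CAPACITY_MBPS = 100       # the single physical line into the home
    MANAGED_RESERVED_MBPS = 25     # reserved for VoIP / managed video

    def allocate(internet_demand_mbps: float) -> dict:
        best_effort_capacity = LINK_CAPACITY_MBPS - MANAGED_RESERVED_MBPS
        return {
            "managed_service_mbps": MANAGED_RESERVED_MBPS,  # unaffected by Internet load
            "internet_mbps": min(internet_demand_mbps, best_effort_capacity),
        }

    print(allocate(40))    # light load: Internet traffic gets everything it asks for
    print(allocate(120))   # heavy load: only the best-effort lane is squeezed

The design point, as described above, is that the prioritized stream rides the same pipe but is provisioned separately from the customer’s Internet service.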

The Apple-Comcast deal, if it comes to fruition, would be the first major video offering provided as a managed service. (Comcast has experimented with managed services affiliated with Xbox and TiVo.) Verizon is also a potentially influential player since it just bought Intel’s streaming TV service. Future plans are uncertain, but Verizon might launch a TV product that it could sell outside of the FiOS footprint with a bundle of cable channels, live television, and live sports.

Net neutrality proponents decry managed services as exploiting a loophole in the net neutrality rules but it’s hardly a loophole. The FCC views managed services as a social good that ISPs should invest in. The FCC’s net neutrality advisory committee last August released a report and concluded that managed services provide “considerable” benefits to consumers. The report went on to articulate principles that resemble a safe harbor for ISPs contemplating managed services. Given this consensus view, I see no reason why the FCC would threaten managed services with new rules.

3. Uncertainty about what is “the Internet” and what is “television.”

Managed services and other developments are blurring the line between the Internet and television, which makes “neutrality” on the Internet harder to define and implement. We see similar tensions in phone service. Residential voice service is already largely carried via IP. According to FCC data, 2014 will likely be the year that more people subscribe to VoIP service than to plain-old-telephone service. The IP Transition reveals the legal and practical tensions that arise when technology advances make the FCC’s “phone” and “Internet” regulatory silos anachronistic.

Those same technology changes and legal ambiguity are carrying over into television. TV is also increasingly carried via IP and it’s unclear where “TV” ends and “Internet video” begins. This distinction matters because television is regulated heavily while Internet video is barely regulated at all. On one end of the spectrum you have video-on-demand from a cable operator. VOD is carried over a cable operator’s broadband lines but fits under the FCC’s cable service rules. On the other end of the spectrum you have Netflix and YouTube. Netflix and YouTube are online-only video services delivered via broadband but are definitely outside of cable rules.

In the gray zone between “TV” and “Internet video” lie several services and physical networks that are not entirely in either category. These services include WatchESPN and ESPN3, which are owned by a cable network and are included in traditional television negotiations but delivered via a broadband connection.

IPTV, too, is neither entirely TV nor entirely Internet video. AT&T’s U-verse, Verizon’s FiOS, and Google Fiber’s television product are pure or hybrid IPTV networks that “look” like cable or satellite TV to consumers but are not. AT&T, Verizon, and Google voluntarily assent to many, but not all, cable regulations even though their services occupy a legally ambiguous area.

Finally, on the horizon are managed video and gaming services and “virtual MSOs” like Apple’s or Verizon’s video products. These are probably outside of traditional cable rules, such as program access rules and broadcast carriage mandates, but there is still regulatory uncertainty.

Broadband and video markets are in a unique state of flux. New business models are slowly emerging and firms are attempting to figure out each other’s leverage. However, as phone and video move out of their traditional regulatory categories and converge with broadband services, companies face substantial regulatory compliance risks. In such an environment, more than ever, the FCC should proceed cautiously and give certainty to firms. In any case, I’m optimistic that experts’ predictions will be borne out: ex ante net neutrality rules are looking increasingly rigid and inappropriate for this ever-changing market environment.

Related Posts

1. Yes, Net Neutrality is a Dead Man Walking. We Already Have a Fast Lane.
2. Who Won the Net Neutrality Case?
3. If You’re Reliant on the Internet, You Loathe Net Neutrality.

Categories: Tech Polis

Video Double Standard: Pay-TV Is Winning the War to Rig FCC Competition Rules

Tue, 03/25/2014 - 13:44

Most conservatives and many prominent thinkers on the left agree that the Communications Act should be updated based on the insight provided by the wireless and Internet protocol revolutions. The fundamental problem with the current legislation is its disparate treatment of competitive communications services. A comprehensive legislative update offers an opportunity to adopt a technologically neutral, consumer-focused approach to communications regulation that would maximize competition, investment, and innovation.

Though the Federal Communications Commission (FCC) must continue implementing the existing Act while Congress deliberates legislative changes, the agency should avoid creating new regulatory disparities on its own. Yet that is where the agency appears to be heading at its meeting next Monday.

A recent ex parte filing indicates that the FCC is proposing to deem joint retransmission consent negotiations by two of the top four Free-TV stations in a market a per se violation of the FCC’s good-faith negotiation standard and to adopt a rebuttable presumption that joint negotiations by non-top-four station combinations constitute a failure to negotiate in good faith. The intent of this proposal is to prohibit broadcasters from using a single negotiator during retransmission consent negotiations with Pay-TV distributors.

This prohibition would apply in all TV markets, no matter how small, including markets that lack effective competition in the Pay-TV segment. In small markets without effective competition, this rule would result in the absurd requirement that marginal TV stations with no economies of scale negotiate alone with a cable operator who possesses market power.

In contrast, cable operators in these markets would remain free to engage in joint negotiations to purchase their programming. The Department of Justice has issued a press release “clear[ing] the way for cable television joint purchasing” of national cable network programming through a single entity. The Department of Justice (DOJ) concluded that allowing nearly 1,000 cable operators to jointly negotiate programming prices would not facilitate retail price collusion because cable operators typically do not compete with each other in the sale of programming to consumers.

Joint retransmission consent negotiations don’t facilitate retail price collusion either. Free-TV distributors don’t compete with each other for the sale of their programming to consumers — they provide their broadcast signals to consumers for free over the air. Pay-TV operators complain that joint agreements among TV stations are nevertheless responsible for retail price increases in the Pay-TV segment, but have not presented evidence supporting that assertion. Pay-TV’s retail prices have increased at a steady clip for years irrespective of retransmission consent prices.

To the extent Pay-TV distributors complain that joint agreements increase TV station leverage in retransmission consent negotiations, there is no evidence of harm to competition. The retransmission consent rules prohibit TV stations from entering into exclusive retransmission consent agreements with any Pay-TV distributor — even though Pay-TV distributors are allowed to enter into such agreements for cable programming — and the FCC has determined that Pay- and Free-TV distributors do not compete directly for viewers. The absence of any potential for competitive harm is especially compelling in markets that lack effective competition in the Pay-TV segment, because the monopoly cable operator in such markets is the de facto single negotiator for Pay-TV distributors.

It is even more surprising that the FCC is proposing to prohibit joint sales agreements among Free-TV distributors. This recent development apparently stems from a DOJ Filing in the FCC’s incomplete media ownership proceeding.

A fundamental flaw exists in the DOJ Filing’s analysis: It failed to consider whether the relevant product market for video advertising includes other forms of video distribution, e.g., cable and online video programming distribution. Instead, the DOJ relied on precedent that considers the sale of advertising in non-video media only.

Similarly, the Department has repeatedly concluded that the purchase of broadcast television spot advertising constitutes a relevant antitrust product market because advertisers view spot advertising on broadcast television stations as sufficiently distinct from advertising on other media (such as radio and newspaper). (DOJ Filing at p.8)

The DOJ’s conclusions regarding joint sales agreements are clearly based on its incomplete analysis of the relevant product market.

Therefore, vigorous rivalry between multiple independently controlled broadcast stations in each local radio and television market ensures that businesses, charities, and advocacy groups can reach their desired audiences at competitive rates. (Id. at pp. 8-9, emphasis added)

The DOJ’s failure to consider the availability of advertising opportunities provided by cable and online video programming renders its analysis unreliable.

Moreover, the FCC’s proposed rules would result in another video market double standard. Cable, satellite, and telco video programming distributors, including DIRECTV, AT&T U-verse, and Verizon FiOS, have entered into a joint agreement to sell advertising through a single entity, NCC Media (owned by Comcast, Time Warner Cable, and Cox Media). NCC Media’s Essential Guide to planning and buying video advertising says that cable programming has surpassed 70% of all viewing to ad-supported television homes in Prime and Total Day, and 80% of Weekend daytime viewing. According to NCC, “This viewer migration to cable [programming] is one of the best reasons to shift your brand’s media allocation from local broadcast to Spot Cable,” especially with the advent of NCC’s new consolidated advertising platform. (Essential Guide at p. 8) The Essential Guide also states:

  • “It’s harder than ever to buy the GRP’s [gross rating points] you need in local broadcast in prime and local news.” (Id. at p. 16)
  • “[There is] declining viewership on broadcast with limited inventory creating a shortage of rating points in prime, local news and other dayparts.” (Id. at p. 17)
  • “The erosion of local broadcast news is accelerating.” (Id. at p. 18)
  • “Thus, actual local broadcast TV reach is at or below the cume figures for wired cable in most markets.” (Id. at p. 19)

This Essential Guide clearly indicates that cable programming is part of the relevant video advertising product market and that there is intense competition between Pay- and Free-TV distributors for advertising dollars. So why is the FCC proposing to restrict joint marketing agreements among Free-TV distributors in local markets when virtually the entire Pay-TV industry is jointly marketing all of its advertising spots nationwide?

The FCC should refrain from adopting new restrictions on local broadcasters until it can answer questions like this one. Though it is appropriate for the FCC to prevent anticompetitive practices, adopting disparate regulatory obligations that distort competition in the same product market is not good for competition or consumers. Consumer interests would be better served if the FCC decided to address video competition issues more broadly — or there might not be any Free-TV competition to worry about.

Categories: Tech Polis

New Book Release: “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”

Tue, 03/25/2014 - 11:06

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.

I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head off these concerns, the consequences will be profoundly deleterious.

The central lesson of the booklet is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.

Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:

  • education and empowerment efforts (including media literacy and digital citizenship efforts);
  • social pressure from activists, academics, the press, and the public more generally;
  • voluntary self-regulation and adoption of best practices (including privacy and security “by design” efforts); and
  • increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.

Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I?” (i.e., permissioned) nature. The problem with “top-down” traditional regulatory systems is that they often tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future and to head off hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.

To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to play a backstop role here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In recent years, the FTC has brought and settled many cases under its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.

In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.

If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available. I’ll be doing more blogging about the book in coming weeks and months. The debate between the “permissionless innovation” and “precautionary principle” worldviews is just getting started and it promises to touch every tech policy debate going forward.



Categories: Tech Polis