A common question among smart Bitcoin skeptics is, “Why would one use Bitcoin when you can use dollars or euros, which are more common and more widely accepted?” It’s a fair question, and one I’ve tried to answer by pointing out that if Bitcoin were just a currency (albeit a new and untested one), then yes, there would be little reason to prefer it to dollars. The fact, however, is that Bitcoin is more than money, as I recently explained in Reason. Bitcoin is better thought of as a payments system, or as a distributed ledger, that (for technical reasons) happens to use a new currency called the bitcoin as its unit of account. As Tim Lee has pointed out, Bitcoin is therefore a platform for innovation, and it is this potential that makes it so valuable.
Eric Posner is one of these smart skeptics. Writing in Slate in April, he rejected Bitcoin as a “fantasy” because he felt it didn’t make sense as a currency. Since then, it’s been pointed out to him that Bitcoin is more than a currency, and today at the New Republic he asks the question, “Why would you use Bitcoin when you can use PayPal or Visa, which are more common and widely accepted?”
He answers his own question, in part, by acknowledging that Bitcoin is censorship-resistant. As he puts it, “If you live in a country with capital controls, you can avoid those[.]” So right there, it seems to me, is one good reason why one might want to use Bitcoin instead of PayPal or Visa. Another smart skeptic, Tyler Cowen, acknowledges this as well, even if only to suggest that the price of bitcoins will fall “if/when China fully liberalizes capital flows[.]”
Another reason why one would use Bitcoin instead of PayPal or Visa is that it’s cheaper. Posner disputes this, arguing that Bitcoin’s historic volatility makes it risky to hold bitcoins, necessitating hedging, and therefore making it no less costly than traditional payments systems. (Cowen was one of the first to make this argument.) But this is not true.
First of all, I would argue that there’s nothing inherent in Bitcoin that makes it necessarily as volatile as it has been; its volatility to date comes largely from the fact that it’s thinly traded. If its adoption continues apace, and its infrastructure continues to be developed, there’s no reason to think it will forever be as volatile as it has been to date. But that’s conjecture. More to the point, the proof is in the pudding: tens of thousands of merchants accept bitcoins for payment today (and the number is growing), and the volume of transactions processed by those merchants has been exploding as well, setting a record on Black Friday. Can it be that even with the necessary hedging, Bitcoin is cheaper?
At least for some types of transactions I think the answer is unquestionably yes. Take international remittances, a $500 billion industry. Sending money to Kenya using Western Union, MoneyGram, or some other traditional money transmitter costs around five to ten percent of the amount being sent, and the deposit can take days to arrive. A new startup, BitPesa, is looking to charge only three percent and to carry out transfers virtually instantaneously. So hedging costs would have to exceed the two-to-seven-percentage-point savings to make this not worthwhile. It’s an empirical question, but the fact that so many are jumping in gives us a hint as to the answer. Perhaps we can look to BitPay’s 1% fee as a market estimate of the cost of hedging.
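To make the arithmetic concrete, here is a toy comparison; the remittance amount and the 1% hedging figure (borrowed from the BitPay estimate above) are illustrative assumptions on my part, not actual BitPesa or Western Union pricing:

```python
# Back-of-the-envelope remittance comparison with assumed, illustrative rates.
def remittance_cost(amount, fee_rate, hedge_rate=0.0):
    """Total cost of sending `amount`: transfer fees plus any hedging cost."""
    return amount * (fee_rate + hedge_rate)

amount = 200.0  # a hypothetical remittance, in dollars
traditional = remittance_cost(amount, fee_rate=0.075)            # midpoint of 5-10%
bitcoin_route = remittance_cost(amount, 0.03, hedge_rate=0.01)   # 3% fee + 1% hedge

print(f"Traditional: ${traditional:.2f}, Bitcoin route: ${bitcoin_route:.2f}")
# Traditional: $15.00, Bitcoin route: $8.00 -- hedging would have to eat the
# entire two-to-seven-point spread before the Bitcoin route lost its edge.
```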
Well then, so far I count two things that Bitcoin can do that traditional payments systems cannot: it is censorship-resistant and it is cheaper. Oh, wait. I actually mentioned another one: it’s faster. Traditional wire transfers can take days or even weeks to clear, while Bitcoin takes minutes. And yet there’s more.
As Eli Dourado pointed out in a previous post, built into Bitcoin is a facility for decentralized arbitration. Essentially, Bitcoin allows for transactions that require two out of three parties’ signatures before the funds can move, thus allowing payer and payee to turn to an arbitrator if there is a dispute about whether the payment should go through. PayPal and credit card companies essentially provide this service today, but as Eli points out, decentralized arbitration would likely be cheaper and would certainly enjoy much more competition. That’s four things Bitcoin can do that traditional payments networks cannot, but let me quickly add a fifth. There’s no reason that the arbitrator must be a human; using Bitcoin’s scripting language, the arbitrator can be a trusted automated source of information that regularly broadcasts facts such as the price of gold, the price of stocks, or sports scores. Make that data stream your arbitrator and, voila, you have a decentralized prediction market. (Ed Felten at Princeton is working on executing the concept.)
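For the technically curious, the mechanism here is Bitcoin’s standard 2-of-3 multisignature script. Here is a minimal sketch of its shape (the bracketed keys are placeholders; real scripts use serialized public keys):

```python
# The canonical 2-of-3 redeem script, written out as a list of opcodes.
redeem_script = [
    "OP_2",                    # require at least 2 signatures...
    "<pubkey_buyer>",
    "<pubkey_seller>",
    "<pubkey_arbitrator>",
    "OP_3",                    # ...from among these 3 public keys
    "OP_CHECKMULTISIG",
]
# Buyer and seller sign when all goes well; buyer or seller plus the
# arbitrator sign when there is a dispute. No single party can move funds.
```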
One more before I sign off and go drink with the rest of the Tech Liberation gang at our 15th Alcohol Liberation Front this evening, to which you’re all invited. Bitcoin allows for microtransactions in a way that’s never before been possible. First of all, because Bitcoin transactions can be cheap, you can send incredibly small amounts (say five cents, or half a cent) that would be cost-prohibitive using traditional payments systems. There’s a start-up called BitWall that allows publishers to easily charge tiny amounts for their content. Now, believe me, I know all the arguments for and against micropayments for content. My only point is that Bitcoin has the potential to further reduce the friction of such payments. But that’s not the exciting part. More interesting are really, really small microtransactions.
Bitcoin transactions are cheap, but you wouldn’t think they’re cheap enough that you could conduct hundreds per second. But the thing is, you can, using the micropayment channels feature of the Bitcoin protocol. It hasn’t yet been widely exploited, but it’s there in the spec waiting to be. I won’t go into the technical details in this post, but essentially you transmit one large transaction to the network (you can think of this like a deposit, say of $10), then you conduct any number of tiny transactions between payer and payee that are not broadcast to the network (and are therefore ‘free’), and finally you broadcast how much of the initial amount remains with each party. What this means is that you can now offer metered services based on microtransactions.
One good example of how this would be useful is Wi-Fi access, which Mike Hearn explains in this video. Today we are surrounded by Wi-Fi hotspots, but we can’t use them because they are password-protected, in part because there’s no good way to charge for their use. When you can pay to use a Wi-Fi hotspot, it usually entails creating an account with the provider and then purchasing a block of time, perhaps more than you need. Now imagine if you could connect to any open hotspot, without first creating any kind of account, and pay your way by the second or by the kilobyte. That’s possible today with Bitcoin; it’s just going to take some time to be implemented. And think of all the other as-yet unimagined ways that this ability to meter could be put to use!
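To illustrate, here is a deliberately simplified model of a micropayment channel with a hypothetical per-megabyte Wi-Fi meter built on top of it. It captures only the accounting logic (the real protocol uses a multisignature funding transaction and refund transactions, which I gloss over), and the one-cent-per-megabyte price is an invented number:

```python
# Toy model of a micropayment channel: one on-network "deposit," many free
# off-network updates, and one final on-network settlement.
class MicropaymentChannel:
    def __init__(self, deposit_cents):
        self.deposit = deposit_cents   # broadcast once when the channel opens
        self.paid = 0                  # running total, never broadcast

    def pay(self, amount_cents):
        if self.paid + amount_cents > self.deposit:
            raise ValueError("channel exhausted; open a new one")
        self.paid += amount_cents      # updated off-network, hence 'free'

    def close(self):
        # Broadcast once: the final split of the deposit between the parties.
        return {"payee": self.paid, "payer_refund": self.deposit - self.paid}

PRICE_CENTS_PER_MB = 1  # an assumed price, purely for illustration

def meter_wifi(channel, megabytes_used):
    """Charge the channel one micro-payment per megabyte of traffic."""
    for _ in range(megabytes_used):
        channel.pay(PRICE_CENTS_PER_MB)

session = MicropaymentChannel(deposit_cents=1000)  # a $10 deposit
meter_wifi(session, megabytes_used=250)            # no account setup required
print(session.close())  # {'payee': 250, 'payer_refund': 750}
```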
That’s six ways to answer the question, “Why would you use Bitcoin when you can use PayPal or Visa?” There are more. Hearn discusses a bunch in the video. These are all very real in the sense that they are all technically possible today, but certainly speculative in that there remain regulatory and market hurdles ahead. I can certainly understand why some would be skeptical of Bitcoin’s long-term success (I for one am not certain of it), but I really hope we can get to the point where that skepticism is based on more than misunderstandings about what Bitcoin is or what it can and cannot do.
There is bipartisan agreement that the 1996 Telecom Act was antiquated only shortly after President Clinton’s signature had dried on the legislation. There is also consensus that spectrum policy, still largely grounded in the 1934 communications statute, badly distorts today’s wireless markets. And there is frequent criticism from thought leaders, right and left, that the FCC has been, for decades, too accommodating to the firms it regulates and too beholden to the status quo (economist Thomas Hazlett quips the agency’s initials stand for “Forever Captured by Corporations”).
For these reasons, members of Congress every few years announce their intention to reform the 1934 and 1996 communications laws and modernize the FCC. Yesterday, some powerful House members unexpectedly reignited hopes that Congress would overhaul our telecom, broadband, and video laws. In a Google Hangout (!), Reps. Fred Upton and Greg Walden said they wanted to take on the ambitious task of passing a new law in 2015.
Much depends on next year’s elections and the composition of Congress, but hopefully the announcement spurs a major re-write that eliminates regulatory distortions in communications, much as airlines and transportation were deregulated in the 1970s–an effort led by reformist Democrats.
About ten years ago, more than fifty scholars and technologists crafted a set of reports, known as the Digital Age Communications Act (or DACA), that proposed a largely deregulatory framework (a majority of the group had served in Democratic administrations, interestingly enough). In 2005, then-Sen. Jim DeMint proposed a bill similar to the working group’s proposals. The working group’s recommendations have aged very well over eight years–which you can’t say about the 1996 Act–and represent a great starting point for future legislation.
As Adam has said, the DACA reports have five primary reform objectives:
- Replacing the amorphous “public interest” standard with a consumer welfare standard, which is better established in the field of antitrust law
- Eliminating regulatory silos and leveling the playing field through deregulation
- Comprehensively reforming spectrum policy, not just through more auctioning but through clear property rights
- Reforming universal service by either voucherizing it or devolving it to the states and letting them run their own telecom welfare programs; and
- Significantly reforming and downsizing the scope of the FCC’s power over the modern information economy
DACA redefines the FCC as a specialized competition agency for the communications sector. The FCC largely sees itself as a competition agency today, but the current statutes don’t reflect that gradual change in purpose. The FCC is slow and arbitrary, Balkanizes industries artificially, and attempts to regulate in areas it isn’t equipped to regulate–the agency has a notoriously bad record in federal courts. These characteristics create a poor environment for substantial investments in technology and communications infrastructure. The DACA proposals aren’t perfect, but they offer a resilient framework that minimizes the effect of special interests in communications and encourages investments that improve consumers’ lives.
One of the criticisms leveled at Bitcoin by those people determined to hate it is that Bitcoin transactions are irreversible. If I buy goods from an anonymous counterparty online, what’s to stop them from taking my bitcoins and simply not sending me the goods? When I buy goods online using Visa or American Express, if the goods never arrive, or if they aren’t what was advertised, I can complain to the credit card company. The company will do a cursory investigation, and if they find that I was indeed likely ripped off, they will refund me my money. Credit card transactions are reversible; Bitcoin transactions are not. For this service (among others), credit card companies charge merchants a few percentage points on the transaction.
The problem with this account is that it’s not true: baked into the Bitcoin protocol is support for what are known as “m-of-n” or “multisignature” transactions, which require some number m out of a larger number n of parties to sign off.
The simplest variant is a 2-of-3 transaction. Let’s say that I want to buy goods online from an anonymous counterparty. I transfer money to an address jointly controlled by me, the counterparty, and a third-party arbitrator (maybe even Amex). If I get the goods, they are acceptable, and I am honest, I sign the money away to the seller. The seller also signs, and since 2 out of 3 of us have signed, he receives his money. If there is a problem with the goods or if I am dishonest, I sign the bitcoins back to myself and appeal to the arbitrator. The arbitrator, like a credit card company, will do an investigation, make a ruling, and either agree to transfer the funds back to me or to the merchant; again, 2 of 3 parties must agree to transfer the funds.
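A toy sketch of the signature-counting logic may help; this is not real Bitcoin code, just the 2-of-3 rule expressed in Python:

```python
# Funds in the shared address move only when 2 of the 3 parties sign.
PARTIES = {"buyer", "seller", "arbitrator"}
REQUIRED = 2

def can_spend(signers):
    return len(set(signers) & PARTIES) >= REQUIRED

assert can_spend({"buyer", "seller"})        # happy path: no dispute
assert can_spend({"seller", "arbitrator"})   # arbitrator rules for the seller
assert can_spend({"buyer", "arbitrator"})    # arbitrator rules for the buyer
assert not can_spend({"arbitrator"})         # no party can act alone
```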
This is not an escrow service; at no point can the arbitrator abscond with the funds. The arbitrator is paid a market rate in advance for his services, which are offered according to terms agreed upon by all three parties. This is better than the equivalent service using credit cards, because credit cards rely on huge network effects and consequently there are only a handful of suppliers of such transaction arbitration. Using Bitcoin, anyone can be an arbitrator, including the traditional credit card companies (although they might have to lower their fees). Competition in both terms and fees is likely to result in better discovery of efficient rules for dispute resolution.
While multisignature transactions are not well understood, they are right there in the Bitcoin protocol, as valid as any other Bitcoin transaction. So some Bitcoin transactions are irreversible; others are exactly as reversible as credit card transactions.
Bitrated.com is a new site (announced yesterday on Hacker News) that facilitates setting up multisignature transactions. Bitcoin client support for multisignature transactions is limited, so the site helps create addresses that conform to the m-of-n specifications. At no point does the site have access to the funds in the multisignature address.
In addition, Bitrated provides a marketplace where people can advertise their arbitration services. Users are able to set up transactions using arbitrators both from the site or from anywhere else. The entire project is open source, so if you want to set up a competing directory, go for it.
What excites me most about the decentralized arbitration afforded by multisignature transactions is that it could be the beginnings of a Common Law for the Internet. The plain, ordinary Common Law developed as the result of competing courts that issued opinions basically as advertisements of how fair and impartial they were. We could see something similar with Bitcoin arbitration. If arbitrators sign their transactions with links to and a cryptographic hash of a PDF that explains why they ruled as they did, we could see real competition in the articulation of rules. Over time, some of these articulations could come to be widely accepted and form a body of Bitcoin precedent. I look forward to reading the subsequent Restatements.
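Publishing a verifiable ruling is straightforward. As a minimal sketch (the filename is a placeholder), an arbitrator could hash the PDF of an opinion and publish that digest alongside the signed transaction, letting anyone later confirm the document is unchanged:

```python
import hashlib

def ruling_digest(pdf_path):
    """Return the SHA-256 hash of a ruling document, suitable for publishing."""
    with open(pdf_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical usage, with an invented filename:
# print(ruling_digest("smith-v-jones-opinion.pdf"))
```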
Multisignature transactions are just one of the many innovations buried deep in the Bitcoin protocol that have yet to be widely utilized. As the community matures and makes full use of the protocol, it will become more clear that Bitcoin is not just a currency but a platform for financial innovation.
Originally posted at elidourado.com.
Alice Marwick, assistant professor of communication and media studies at Fordham University, discusses her newly-released book, Status Update: Celebrity, Publicity, and Branding in the Social Media Age. Marwick reflects on her interviews with Silicon Valley entrepreneurs, technology journalists, and venture capitalists to show how social media affects social dynamics and digital culture. Marwick answers questions such as: Does “status conscious” take on a new meaning in the age of social media? Is the public using social media the way the platforms’ creators intended? How do you quantify the value of online social interactions? Are social media users becoming more self-censoring or more transparent about what they share? What’s the difference between self-branding and becoming a micro-celebrity? She also shares her advice for how to make Twitter, Tumblr, Instagram and other platforms more beneficial for you.
- Status Update: Celebrity, Publicity, and Branding in the Social Media Age, Marwick
- Engineered Performances: Alice E. Marwick’s ‘Status Update’, The New York Times
- Biography, Marwick
Just a quick reminder to join us this Wednesday night (Dec. 4) for the next “Alcohol Liberation Front” happy hour featuring many Tech Liberation Front contributors and friends. The happy hour will be held at Churchkey (1337 14th St., NW) at 6 p.m. Churchkey is one of the very best beer bars not just in D.C. but in all of America. If you’ve never been there before, you are in for a real treat.
In addition to mixing and mingling with the witty and wacky TLF crew, we have a special surprise for attendees: Our guests will be given an early preview of our prototype TLF drone! Our Advanced Robotics Division here at the TLF has been hard at work on the “FreedomCopter” and we look forward to showing guests how we plan to use it in the coming years to spread the good word of tech liberty! We plan on doing special fly-bys during the evening and buzzing past EPIC and CDT headquarters to have our autonomous agent inquire about our general freedom to tinker, innovate, and gather information freely. We look forward to their response.
No word yet if our Advanced Robotics Division will have the new driverless “TLF-Mobiles” ready in time to give inebriated guests a free ride home, but we will do our best.
Hope to see you on Wednesday night.
Yesterday at Forbes, William Pentland had an interesting piece on possible disintermediation in the electricity market.
In New York and New England, the price of electricity is a function of the cost of natural gas plus the cost of the poles and wires that carry electrons from remotely-sited power plants to end users. It is not unusual for customers to spend two dollars on poles and wires for every dollar they spend on electrons.
The poles and wires that once reduced the price of electricity for end users are now doing the opposite. To make matters worse, electricity supplied through the power grid is frequently less reliable than electricity generated onsite. In other words, rather than adding value in the form of enhanced reliability, the poles and wires diminish the reliability of electricity.
If two-thirds of the cost of electricity is the distribution mechanism, then, as Pentland notes, there is a palpable opportunity to switch to at-home electricity generation. Some combination of solar power, batteries, and natural gas-fired backup generators could displace the grid entirely for some customers. And if I understand my electricity economics correctly, if a significant fraction of customers go off-grid, the fixed cost of maintaining the grid will be split over fewer remaining customers, making centrally generated electricity even more expensive. The market for such electricity could quickly unravel.
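A stylized calculation shows the unraveling dynamic; the cost figures below are invented purely for illustration:

```python
# Fixed wire costs spread over fewer customers raise the bill for those who
# remain, which pushes still more of them off-grid.
FIXED_GRID_COST = 1_000_000.0        # assumed annual cost of poles and wires
ENERGY_COST_PER_CUSTOMER = 500.0     # assumed annual cost of the electrons

def annual_bill(customers):
    return ENERGY_COST_PER_CUSTOMER + FIXED_GRID_COST / customers

for n in (10_000, 7_500, 5_000, 2_500):
    print(f"{n:>6} customers -> ${annual_bill(n):,.2f} per year")
# 10,000 customers each pay $600 a year; 2,500 each pay $900. Every departure
# makes staying on the grid less attractive for everyone else.
```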
While it remains to be seen whether electricity generation will indeed become decentralized, such disintermediation would be the continuation of a decades-long social trend. It all began (plausibly) in 1984. The Macintosh was released, and desktop computing became a thing. Desktop printers disintermediated printing departments, Kinko’s, and the steno pool. The Internet has disintermediated telephone companies, music labels, television networks, newspapers, and much more. Online education is unbundling university courses.
What’s even more exciting is the next generation of disintermediating technologies. Bitcoin could displace some financial institutions—to varying degrees, banks, the Federal Reserve, Western Union, and credit card companies. Mesh networks could solve the last-mile problem of Internet service delivery, which tends to be monopolized or at least concentrated. 3D printers could disintermediate supply chains. 3D chemical printers could disintermediate drug companies and the FDA.
Delivery drones like Amazon Prime Air’s arguably disrupt package delivery services, though not entirely, because FedEx and UPS will still run drone-utilizing distribution networks. More importantly, delivery drones disintermediate the real estate market for small businesses. It will no longer be important, if you run a local business, to have a storefront in a prime location. Your customers can order online and items can be delivered to them in half an hour straight from the factory or artisanal workshop. It could be the Etsyfication of the economy.
If information, electricity, money, and production all get disintermediated, what is left? If these trends continue, the future will be one in which human interaction is unmediated, and to a surprising degree, unregulable. It will be difficult to stop a willing buyer and seller from transacting. Information about the proposed transaction might not be censorable. Payment via Bitcoin or other cryptocurrencies can’t be stopped. Production and delivery of the item may be difficult or impossible to detect and intercept.
Intermediaries are often used by governments as points of control. As we shed intermediaries, it may become possible to live one’s entire life without any particular authority even knowing that one exists. I doubt that we’ll ever get that far in the process, because using non-abusive intermediaries often makes economic sense. But for the next few decades, at least, I expect the trend to continue and the world to get a lot more interesting.
Originally posted at elidourado.com.
Both parties in Congress have been increasingly critical of federal agencies’ inefficient use of spectrum in the past few years, and it seems the agencies are getting the message. The NTIA, the official manager of federal agency spectrum, released a letter yesterday announcing that the Department of Defense would be relocating some of its systems. Defense had reached an agreement with broadcasters under which Defense systems will share spectrum in the Broadcast Auxiliary Service (BAS) band.
The soon-to-be vacated band held by Defense will eventually be auctioned off–hopefully in 2014–for billions of dollars and likely used for mobile broadband provided by wireless carriers like AT&T, Verizon, Sprint, and T-Mobile. These carriers face serious congestion problems because of government-created scarcity of spectrum.
The carriers had actually targeted some of the BAS spectrum because they weren’t convinced Defense would be willing to move its systems. The broadcaster deal reached with Defense means everyone’s apparently happy–the broadcasters keep their BAS spectrum, the feds get new equipment and Congress off their back (temporarily), and the carriers get new spectrum for auction.
The deal is welcome news because the spectrum will be put to a higher-valued use once auctioned. The federal government pays almost nothing for its own spectrum and is a poor steward of the resource. Transferring spectrum from agencies to carriers means lower phone bills and more mobile broadband coverage. Government agencies are notoriously resistant to moving their systems or sharing with others, so entering into a sharing pact with the broadcasters indicates some of the resistance is thawing.
It’s not unequivocal good news, though.
The government is clearing out of a 25 MHz band of spectrum and occupying the larger, 85 MHz BAS band, which it will share with broadcasters. The military will need a larger band because sharing imposes capacity constraints, necessitating new, agile systems that search the airwaves to make sure they don’t interfere with existing broadcast users. Dynamic sharing like this adds cost and complexity and may imperil next year’s planned auction.
Further, the BAS band is unavailable for auction only because of the antiquated command-and-control regime the FCC uses to award spectrum licenses. BAS is mostly used for electronic news gathering, which relays local and national newscasts from reporters on the scene to broadcast studios. Broadcasters have used BAS spectrum since the 1960s when it was allocated to them for free.
In a market, broadcasters likely would not hold as much BAS spectrum as they currently do. In fact, because of technology changes and squeezed newsroom budgets, broadcasters are finding cheaper alternatives. Increasingly, journalists are transmitting their breaking newscasts over the carriers’ LTE networks, which cost a fraction of the news vans and equipment needed for BAS transmissions. That is to say, there are alternative business models in the absence of Soviet-style allocations.
So despite these industry changes, BAS spectrum cannot be auctioned for its highest-valued use (probably mobile broadband) under current FCC rules. Further, it will be even more difficult to bring the benefits of auctions to the airwaves if federal users are intermingling with existing users, broadcasters in this case. It’s a trend to be wary of. Let’s just hope that next year’s planned auctions occur on time so that more consumers can benefit from mobile broadband.
It’s been way too long since the Tech Liberation Front hosted an IRL meetup, more than a year in fact, so we’re looking to make amends next week. You’re invited to the 15th Alcohol Liberation Front happy hour, which we’ll hold at Churchkey on 14th Street at 6 p.m. on Wednesday, December 4th.
Lots of us from the TLF gang will be there, including quite a few of our out-of-town contributors. So please come by and have a beer with us, and bring a friend!
In my Reason column this week I took inspiration from the fact that I will soon be sporting a Narrative Clip life-logging camera, and I wrote about our coming sousveillance future, when everyone will be recording everyone else with wearable cameras. Lo and behold, it looks like our good friend Fred Smith of CEI lived that future last night.
That’s a video posted by a biker who apparently wears a camera on his helmet and records his rides. He was calling the police to report a car blocking the bike lane when Fred and his wife Fran asked him not to. One thing I find fascinating is that, upon being recorded, their instinct was to record back with the cameras on their phones.
As wearables become mainstream we’re going to begin to see many more videos like this, and I leave it to the reader to decide whether that’s a good thing. Sousveillance, whether we like it or not, will be a giant accountability machine. Obviously, recording the behavior of police and other government agents will help keep them accountable, but we’ll also be recording each other. Indeed, this biker wears a camera in part, I’m sure, to hold others accountable should anything happen to him on the road. What’s interesting is that what we will be held accountable for will be not just traffic accidents, but also sidewalk interactions that until now would have remained private and anonymous. Do check out my column in which I go into much more detail about the coming mainstreaming of sousveillance.
I’m pleased to announce that Alex Tabarrok and I have a new working paper out from the Mercatus Center today, “Public Choice and Bloomington School Perspectives on Intellectual Property.” The paper will appear in Public Choice in 2014.
Here’s the abstract:
We mine two underexplored traditions for insights into intellectual property: the public choice or Virginia school, centered on James Buchanan and Gordon Tullock, and the Bloomington or Institutional Analysis and Development school, centered on Elinor Ostrom and Vincent Ostrom. We apply the perspectives of each school to issues of intellectual property and develop new insights, questions, and focuses of attention. We also explore tensions and synergies between the two schools on issues of intellectual property.
The gist of the paper is that the standard case for intellectual property—that a temporary monopoly is needed in order to recoup the sunk costs of innovation or creation—ignores issues raised by the two schools we investigate.
From a public choice perspective, a temporary monopoly provides enormous opportunities for rent seeking. Copyright and patent owners are constantly manipulating the political environment to expand either the duration of the monopoly or the scope of what can be monopolized. We document the evolution of intellectual property in the United States from its modest origins to its current strong and expansive state.
From a Bloomington perspective, the standard case for IP wrongly treats the commons as a kind of wasteland. In fact, numerous innovations and sprawling creative works occur without monopolization—just look at Wikipedia. Innovation occurs when the right institutional structures are in place, and intellectual property that is too severe can hamper the smooth operation of these institutions. Too much IP can harm as much as too little.
Read the whole thing, cite it copiously, etc.
“Selfie” was selected today as the word of the year by the Oxford English Dictionary’s editors, beating both “twerking” and “bitcoin.” Bitcoin’s company in that word list makes me appreciate the fact that others may be as sick of hearing about Bitcoin as I am about twerking. Nevertheless, it’s a pretty important week for Bitcoin, and I wanted to highlight some of the work I’ve been doing.
Yesterday the Senate Homeland Security and Governmental Affairs Committee held a hearing on the promises and challenges that virtual currencies hold for consumers and law enforcement respectively. I testified at that hearing and video of my testimony is below. You can also check out the written testimony, which is an updated version of the Bitcoin primer for policymakers I wrote with Andrea Castillo earlier this year. And ahead of the hearing I published an op-ed in The Guardian arguing that if the U.S. doesn’t foster a sane regulatory environment for Bitcoin, entrepreneurs will go to other jurisdictions that do.
All in all the hearing was hearteningly positive. The federal regulators and law enforcement representatives all agreed that Bitcoin is a lawful and legitimate payments system and that it holds great promise. They also agreed that plain old cash and centralized virtual currencies (contra Bitcoin’s decentralized design) are much greater magnets for money laundering, and that they needed no new laws or authority to deal with illegal uses of Bitcoin. I discuss the hearing and its implications on today’s Cato Daily Podcast with Caleb Brown.
Finally, I think there are lots of folks, especially in the wonkosphere, who think they know what Bitcoin is, but really don’t, and so the opinions they offer about its viability or significance are based on misunderstanding. For example, Neil Irwin at Wonkblog today wrote a 700-word post to suggest that what Bitcoin needs is a central bank. Now, if he’s trolling, kudos to him. But I really think he’s innocently ignorant of the fact that Bitcoin’s seminal design feature is that it is a decentralized payments system, and that the moment you add a central banker (which would in any case be impossible) you would no longer have Bitcoin, but Facebook Credits or Microsoft Points or airline miles.
So, if you think you have an inkling about what Bitcoin is, but you’re not too sure, or you don’t know why it’s so significant, please check out my cover story in the December issue of Reason, which was just made available online. Apart from explaining the basics, I go into detail about the little understood fact that Bitcoin is much more than just money. Value transmission is just the most obvious use case for Bitcoin, and thus the one that’s being built out first, but the Bitcoin platform is essentially a decentralized ledger, so it is also able to support property registrations, decentralized futures markets, and much more.
And truly finally, if you want to keep up with all the happenings in Bitcoin, including the Senate Banking Committee hearing later today, check out MostlyBitcoin.com, a site I built for myself, but one I hope is useful to others, that tracks Bitcoin stories in the mainstream media.
The Hill is reporting that Rep. Goodlatte, under pressure from “companies like Microsoft, IBM and Apple,” is planning to drop the provision in his patent reform bill that expands the Covered Business Method (CBM) program. Mike Masnick also has commentary.
Julie Samuels explains CBM review:
The “Covered Business Method Review” (CBM) was first introduced in 2011’s America Invents Act. It created, for a limited time, an additional avenue of patent review at the Patent Office. Unfortunately, as drafted, it really was only intended to apply to patents that deal with financial institutions.
CBM is a good program. First, we have long favored the use of Patent Office procedure to challenge patents; it is much cheaper and much quicker than going to court. Second, it allows for more ways to challenge patents than other types of Patent Office review—making it a more robust procedure that promises to knock out more improvidently granted patents. Third, it automatically puts concurrent patent litigation between the parties on hold.
Putting ongoing litigation on hold is no small thing. Patent litigation often costs each side well into the millions of dollars, while CBMs cost just a fraction of that. This means that more people will be in a position to challenge bad patents and fight back against the trolls who wield those patents.
The original Goodlatte bill would have expanded CBM review to patents beyond the financial sector.
From a public choice perspective, it is unsurprising that finance would have better patent law than the rest of the economy: finance is a concentrated industry that can go toe-to-toe politically with, and offset, another concentrated industry, the patent bar. But non-finance covered business method patents are asserted against all kinds of companies, for practices as banal as retrieving data from a database (not joking: “A method of retrieving information from a database record having plural fields“) or selling things online (“An apparatus to market and/or sell goods and/or services over an electronic network“). The fact that the victims of these patent assertions are dispersed throughout the economy means that they are not organized enough to effectively oppose the patent interests that are lobbying against the CBM program expansion.
Still, it is very disappointing that Rep. Goodlatte is caving to such lobbying. I already thought that his bill did not go far enough; now it goes even less far.
Tomorrow, the Federal Trade Commission (FTC) will host an all-day workshop entitled, “Internet of Things: Privacy and Security in a Connected World.” [Detailed agenda here.] According to the FTC: “The workshop will focus on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”
Where is the FTC heading on this front? This Politico story by Erin Mershon from last week offers some possible ideas. Yet, it still remains unclear whether this is just another inquiry into an exciting set of new technologies or if it is, as I worried in my recent comments to the FTC on this matter, “the beginning of a regulatory regime for a new set of information technologies that are still in their infancy.”
First, for those not familiar with the “Internet of Things,” this short new report from Daniel Castro & Jordan Misra of the Center for Data Innovation offers a good definition:
The “Internet of Things” refers to the concept that the Internet is no longer just a global network for people to communicate with one another using computers, but is also a platform for devices to communicate electronically with the world around them. The result is a world that is alive with information as data flows from one device to another and is shared and reused for a multitude of purposes. Harnessing the potential of all of this data for economic and social good will be one of the primary challenges and opportunities of the coming decades.
The report continues on to offer a wide range of examples of new products and services that could fulfill this promise.
What I find somewhat worrying about the FTC’s sudden interest in the Internet of Things is that it opens the door for some regulatory-minded critics to encourage preemptive controls on this exciting new wave of digital age innovation, based almost entirely on hypothetical worst-case scenarios they have conjured up. And plenty of those boogeyman scenarios are floating around already, because the Internet of Things has created a potential perfect storm of four major information policy concerns: online safety, privacy, security, and even intellectual property. You can find concerned critics from each of those quarters already wringing their hands about what the Internet of Things means for their pet issues.
This is why, in both my filing to the agency and in an upcoming eBook, I discuss the danger of letting “precautionary principle” reasoning trump the alternative paradigm of “permissionless innovation.” As I’ve explained here before, as well as in this longer law review article, the precautionary principle generally holds that, because a given new technology could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.
The problem with letting such precautionary thinking guide policy is that it poses a serious threat to technological progress, economic entrepreneurialism, and human prosperity. Under an information policy regime guided at every turn by a precautionary principle, technological innovation would be impossible because of fear of the unknown; hypothetical worst-case scenarios would trump all other considerations. Social learning and economic opportunities become far less likely, perhaps even impossible, under such a regime. In practical terms, it means fewer services, lower quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.
For these reasons, to the maximum extent possible, the default position toward new forms of technological innovation should be innovation allowed. This policy norm is better captured in the well-known Internet ideal of “permissionless innovation,” or the general freedom to experiment and learn through trial-and-error experimentation.
Which leads back to the FTC workshop tomorrow. Which path will the agency head down? If the recent comments of FTC Chairwoman Edith Ramirez are any indication, there is certainly a healthy appetite for precautionary principle policymaking, at least as it pertains to “big data.” As I noted here in a critique of one of her recent speeches, Chairwoman Ramirez has offered “a rather succinct articulation of precautionary principle thinking as applied to modern data collection practices.”
She worried that “‘big data’ leads to the indiscriminate collection of personal information,” and that “the indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices,” she continued. She went on to argue that “Information that is not collected in the first place can’t be misused,” and then suggested a parade of horribles that would befall us if such data collection were allowed at all. So, it would not be surprising to see her extend that sort of precautionary reasoning to the Internet of Things, since all those fears would apply equally to it.
A better approach can be found in some remarks delivered by Ramirez’s fellow FTC Commissioner Maureen K. Ohlhausen. In an important speech last month entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.” This reflects Ohlhausen’s general embrace of permissionless innovation reasoning and a rejection of the precautionary principle mindset articulated by FTC Chairwoman Ramirez.
More importantly, in her speech, Commissioner Ohlhausen went on to highlight another crucial point about why the precautionary mindset is dangerous when enshrined into laws or regulations. Put simply, many elites and regulatory advocates ignore regulator irrationality and regulatory ignorance. That is, they spend so much time focused on the supposed irrationality of consumers and their openness to persuasion or “manipulation” that they ignore the more concerning problem of the irrationality or ignorance of those who (incorrectly) believe they are always in the best position to solve every complex problem. Regulators simply do not possess the requisite knowledge to plan perfectly for every conceivable outcome. This is particularly true for information technology markets, which generally evolve much more rapidly than other sectors, and especially more rapidly than the law itself.
That insight leads Ohlhausen to issue a wise word of caution to her fellow regulators:
It is vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.
That is absolutely right, and it again makes clear how Commissioner Ohlhausen’s approach to technological innovation is consistent with the permissionless innovation approach, while Chairwoman Ramirez’s is based on precautionary principle thinking. This conflict of visions dominates almost all policy debates over new technology today, even if it is not always on such vivid display as it is in this case.
This also makes it abundantly clear just what is at stake as the FTC embarks on its exploration of the Internet of Things. Will we continue to embrace and defend the philosophy that made America’s digital economy the envy of the world (i.e., “permissionless innovation”), or will we be paralyzed by fear of the unknown and hypothetical worst-case scenarios? As I have said here many times before, living in constant fear of such worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about.
So, stay tuned. The fight over the Internet of Things promises to be one of the most important public policy battles in the technology policy arena for many years to come.
This issue will be the focus of my forthcoming eBook, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” but until that is released, here are a few other recommended readings on the topic:
- “Who Really Believes in ‘Permissionless Innovation’?” Technology Liberation Front, March 4, 2013.
- “What Does It Mean to ‘Have a Conversation’ about a New Technology?” Technology Liberation Front, May 23, 2013.
- “Planning for Hypothetical Horribles in Tech Policy Debates,” Technology Liberation Front, August 6, 2013.
- “On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.
- “Edith Ramirez’s ‘Big Data’ Speech: Privacy Concerns Prompt Precautionary Principle Thinking,” Technology Liberation Front, August 29, 2013.
- “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed,” Technology Liberation Front, April 29, 2011.
- “Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges,” Technology Liberation Front, April 10, 2012.
- “Can We Adapt to the Internet of Things?” IAPP Privacy Perspectives, June 19, 2013.
Testimony / Filings:
- Senate Testimony on Privacy, Data Collection & Do Not Track, April 24, 2013.
- Comments of the Mercatus Center to the FTC in Privacy & Security Implications of the Internet of Things, May 31, 2013.
- Comments of the Mercatus Center to the FAA on commercial domestic drones (with Jerry Brito and Eli Dourado), April 23, 2013.
Journal articles & book chapters:
- “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology, Vol. 14, (2013): 309-386.
- “The Pursuit of Privacy in a World Where Information Control Is Failing,” Harvard Journal of Law & Public Policy, Vol. 36, (2013): 409-455.
- “A Framework for Benefit-Cost Analysis in Digital Privacy Debates,” George Mason University Law Review, Vol. 20, No. 4 (Summer 2013): 1055-1105.
From the time Tom Wheeler was nominated to become the next FCC Chairman, many have wondered, “What would Wheeler do?” Though it is still early in his chairmanship, the only ruling issued in Chairman Wheeler’s first meeting signals a pro-investment approach to communications regulation.
The declaratory ruling clarified that the FCC would evaluate foreign investment in broadcast licensees that exceeds the 25 percent statutory benchmark using its existing analytical framework. It had previously been unclear whether broadcasters were subject to the same standard as other segments of the communications industry. The ruling recognized that providing broadcasters with regulatory certainty in this respect would promote investment and that greater investment yields greater innovation.
The FCC’s decision to apply the same standards for reviewing foreign ownership of broadcasters as it applies to other segments of the communications industry is very encouraging. It affirms the watershed policy decisions in the USF/ICC Transformation Order, in which the FCC concluded that “leveling the playing field” promotes competition whereas implied subsidies deter investment and are “unfair for consumers.”
Chairman Wheeler’s separate statement is also very encouraging. Its first sentence declares that, “Promoting a regulatory framework that does not inhibit the flow of capital to the US communications sector is an important goal of Commission policy.” This Chairman understands that, in a global economy, U.S. companies must compete with innovators around the world to obtain the necessary investment to develop new information technologies and deploy new communications infrastructure. His separate statement indicates the Chairman’s intent to renew the FCC’s commitment to encouraging private investment.
Regrettably, the Chairman’s separate statement is potentially troubling as well. After noting that the broadcast incentive auction is intended to allow the market to assure that the spectrum is put to its highest and best use, Chairman Wheeler says he will “assess foreign ownership petitions and applications by looking at, among other factors, whether they will help to fulfill these goals, including efficient spectrum usage.”
It is not entirely clear what the Chairman meant by this non sequitur (would the FCC impose channel sharing conditions on stations seeking approval for foreign investment exceeding the benchmark?). But it indicates a willingness to use the FCC’s authority over mergers and acquisitions to promote unrelated policy goals through the imposition of unrelated conditions. As I’ve noted previously, using the FCC’s transaction authority in this way silences public debate over critical policy issues and shields the resulting decision from judicial review – due process protections that are essential to ensure that the FCC acts in the public interest. Ironically, the prospect of unpredictable, case-by-case conditions on foreign investment would appear to be at odds with the Chairman’s goal of promoting a regulatory framework that doesn’t inhibit the flow of private capital to the U.S. communications industry.
It is also possible that the Chairman was merely attempting to deter speculative investments in broadcast spectrum that could sabotage the incentive auction. The success of the incentive auction is critical to the future of our mobile broadband ecosystem, and it is appropriate that the FCC be mindful of sudden, significant foreign investments in broadcast spectrum in these circumstances.
It is still early in Wheeler’s chairmanship, and the future is bright in the spring. If the Chairman maintains his focus on pro-investment policies during his term, the future could be brighter in every season.
I recently prepared a paper for the Expanding Opportunities for Broadcasters Coalition and Consumer Electronics Association that provides empirical data regarding the costs of restricting the eligibility of large firms to participate in FCC spectrum auctions (available in PDF here). The paper demonstrates that there is no significant likelihood that an open incentive auction would substantially harm the competitive positions of Sprint and T-Mobile. It also demonstrates that Sprint and T-Mobile have incentives to constrain the ability of Verizon and AT&T to expand their network capacity, and that Sprint and T-Mobile could consider FCC restraints on their primary rivals a “win” even if Sprint and T-Mobile don’t place a single bid in the incentive auction. (Winning regulatory battles is a lot cheaper than winning spectrum in a competitive auction.)
Some might think it is implausible that Sprint or T-Mobile would decide to forgo participation in the incentive auction. However, the recent announcement by Sprint that it won’t compete in the H Block auction highlights the difficulty of predicting accurately whether any particular company will participate in a particular auction. Sprint’s announcement stunned market analysts, who had considered Sprint a key contender for the H Block spectrum. Until recently, Sprint had given every indication it was keen to acquire this spectrum, which is located directly adjacent to the nationwide G Block that Sprint already owns. It participated heavily in the FCC’s service rules proceeding for the H Block (WT Docket No. 12-357) and even conducted its own testing to assist the FCC in assessing the technical issues. But, by the time the H Block auction was actually announced, Sprint decided its business would be better served by focusing its efforts on the deployment of its trove of spectrum in the 2.5 GHz band.
Such reversals are not unusual during the FCC auction process. Frontline Wireless, a company that no longer exists, successfully persuaded the FCC that it would build a nationwide, interoperable public safety network in the 700 MHz band if the FCC imposed a public/private partnership condition on the D Block. But, shortly before the auction was scheduled to start, Frontline announced that it had been unable to obtain sufficient financing, and as a result, the D Block was never sold.
To be clear, I’m not suggesting that Sprint or Frontline acted deceitfully in seeking spectrum rules they considered favorable to their interests without actually participating in the resulting auction. My point is that there is a critical distinction between regulatory efforts and business decisions. Companies often participate in regulatory proceedings to optimize their potential business options, but the results they seek are just that – options – until a business decision must be made.
This distinction leads to another important point: It is impossible for the FCC to predict accurately the ultimate business decisions of multiple independent companies whose particular business plans and the circumstances determining them are unknown to the FCC or anybody else. A particular company often cannot accurately predict its own decisions in rapidly changing circumstances (e.g., when Frontline was lobbying the FCC, it could not know with certainty that it would obtain the financing it required to buy the D Block). This inherent uncertainty is why the discredited licensing methodology of comparative hearings failed. It required the FCC to make reliable predictive judgments about the needs and efficiency of potential spectrum users, which proved to be an impossible task.
Ironically, the bidding restrictions proposed for the incentive auction are a form of “comparative hearing lite”. The DOJ’s recommendation – that the FCC “ensure” that Sprint and T-Mobile win spectrum in the incentive auction – is based on its own predictive judgments regarding the relative spectrum needs of all four nationwide mobile providers and their willingness to use future spectrum resources efficiently. Of course, there is no reason to believe that the DOJ is capable of judging such matters more reliably than the FCC did during the era of comparative hearings. As the H and D Block auctions demonstrate, it is impossible for the DOJ to know whether Sprint and T-Mobile will even show up to participate in the incentive auction.
“Net neutrality is a dead man walking,” Marvin Ammori stated in Wired last week, citing the probable demise of the FCC’s Open Internet rules in court. I’d agree for a different reason. Net neutrality has been dead ever since the FCC released its net neutrality order in December 2010. (This is not to say the damaging rules should be upheld by the DC Circuit. For many reasons, the Order should be struck down.) I agree with Ammori because we already have the Internet “fast lane” many net neutrality proponents wanted to prevent. Since that goal is precluded, all the rules do is hang Damocles’ Sword over ISPs regarding traffic management.
The 2010 rules managed to make both sides unhappy. The ISPs face severe penalties if three FCC commissioners believe ISP network management practices “unreasonably discriminate” against certain traffic. Public interest groups, on the other hand, were dissatisfied because they wanted ISPs reclassified as common carriers to prevent deep-pocketed content creators from allying with ISPs to create an Internet “fast lane” for some companies, relegating most other websites to the so-called “winding dirt road” of the public Internet.
Proponents emphasize different goals of net neutrality (to the point–many argue–that it’s hard to discern what the term means). But if preventing the creation of a fast lane is the main goal of net neutrality, it’s dead already. Consider two popularly cited net neutrality “violations” that do not violate the Open Internet Order: Netflix’s Open Connect program, and Comcast not counting its Xfinity video-on-demand (VOD) service against customers’ data limits.
Both cases involve the creation of a fast lane for certain content, and activists rail against them. Both cases also involve network practices expressly exempted from net neutrality regulations. The FCC exempted these sorts of services because they are important, benefit the public, and should be encouraged. With Open Connect, Netflix scatters its many servers across the country, closer to households, which allows its content to stream at a higher quality than most other video sites’. Comcast gives its Xfinity VOD fast-lane treatment as well, which is completely legal, since VOD from a cable company is a “specialized service” exempt from the rules.
“Specialized service” needs some explanation since it’s a novel concept from the FCC order. The net neutrality rules distinguish between “broadband Internet access service” (BIAS)–to which the regulations apply–and specialized (or managed) services–to which they don’t apply. The exemption of specialized services opens up a dangerous loophole in the view of proponents.
BIAS is what most consider “the Internet.” It’s the everyday websites we access on our computers and smartphones. What are specialized services? In the sleepy month of August, the FCC’s Open Internet Advisory Committee released its report on the criteria a specialized service needs to meet to be exempt from net neutrality scrutiny (these criteria are influential and advisory, but not binding):
1. The service doesn’t reach large parts of the Internet, and
2. The service is an “application level” service.
The Advisory Committee also thought that “capacity isolation” is a good indicator that a service should be exempt. With capacity isolation, the ISP has one broadband connection going to the home but is separating the service’s data stream from the conventional Internet stream consumers use to visit Facebook, YouTube, and the like. This is how Comcast’s streaming of Xfinity to Xboxes is exempt–it is a proprietary network going into the home. As long as carriers don’t divert BIAS capacity for the application, the FCC will likely turn a blind eye.
What are some examples? A specialized service is marked by higher-quality streams that typically don’t suffer from jitter and latency. If you have “digital voice” from Comcast, for example, you are receiving a specialized service: proprietary VoIP. Specialized services can also include data streams like VOD, e-reader downloads, heart-monitor data, and gaming services. The FCC exempted these because some are important enough that they shouldn’t have to compete with BIAS traffic for capacity. It would be obviously damaging to have digital phone service or health monitors disrupted because others are checking up on their fantasy football teams. The FCC also wanted to spur investment in specialized services, and video companies like Netflix are considering pairing up with ISPs to deliver a better experience to customers.
That is to say, the net neutrality effort has failed even more thoroughly than most realize. The FCC essentially prohibited innovative business models in BIAS, freezing that service into common-carrier-like status. Further, we already have an Internet fast lane (which I consider a significant public benefit, though net neutrality proponents often do not). As business models evolve and the costs of server networks fall, our two-tier system will only become more apparent.
The following is a guest post by James C. Cooper of George Mason University School of Law.
What are the limits to the FTC’s Section 5 antitrust authority? The short answer is: who knows. The FTC has been on a 100-year quest to find the maleficence that it alone was meant to combat. Early in its history, the Supreme Court appeared to give the FTC license to challenge a wide range of conduct that had little to do with competition. A series of appellate setbacks in the 1980s – relating largely to claims that Section 5 could reach tacit collusion and oligopolistic interdependence – led the Commission to retrench. Since then, the FTC has avoided litigating a Section 5 case, focusing primarily on invitations to collude (ITCs) and breaches of agreements to disclose or to license standard-essential patents. Of course, since all of these cases have settled, no court has had the opportunity to weigh in on whether Congress meant Section 5 to cover this type of conduct.
In my new Mercatus Center working paper, The Perils of Excessive Discretion: The Elusive Meaning of Unfairness in Section 5 of the FTC Act, I argue that the undefined nature of Section 5 leaves the FTC with broad discretion to investigate and extract settlements from companies. Although the appellate rebukes of the 1980s provide some clear boundaries, given firms’ understandable aversion to litigation – especially when only injunctive relief is on the table, and when the risk of follow-on private suits is much lower than it would be under a Sherman Act settlement – there is still a relatively large zone in which the FTC can develop this quasi-common law of Section 5 with little fear of triggering litigation, which would lead to appellate review. (A similar problem exists with respect to the FTC’s use of its Section 5 authority to become the de facto national privacy and data-security regulator, but that’s another post.)
Some commissioners saw the Google case as a perfect vehicle for the elusive “stand-alone Section 5 case.” But rather than clarifying things, the Commission left a muddle. Although the Commission eventually decided to close its investigation, the multiple statements accompanying that decision suggest several directions in which some commissioners were willing to take Section 5, without offering any coherent framework or limits. They reveal the truly confused nature of Section 5 and the concomitant wide discretion that the Commission enjoys to determine what it covers.
So what are the costs of so much discretion in the hands of the FTC? Uncertainty and rent seeking. Businesses uncertain about where the line between illegality and legality rests may be tempted to pull their competitive punches to limit the risk of an FTC investigation. Further, because defining what constitutes an “unfair method of competition” is so subjective an exercise, firms rationally devote resources to currying favor with those who reside at 600 Pennsylvania Avenue. One need only look at the well-documented lobbying fest – by both Google and its opponents – that accompanied the Google investigation. This diversion of resources from productive to redistributive uses may be a boon for private lawyers and economists, but it’s bad for consumers.
What are the answers? Probably the best course would be for the FTC – or Congress – to tether Section 5 permanently to the Sherman Act. Section 5 may have had a role to play very early in its history, to the extent that the Sherman Act was then too narrowly construed, but we don’t have that problem anymore. Even after cases like Trinko, Twombly, and Credit Suisse, the Sherman Act is capacious, fully capable of reaching conduct that threatens competition. True, this path would leave breaches of FRAND commitments and ITCs involving small firms beyond the FTC’s bailiwick. But the costs of ignoring such conduct are likely to be low. Breaches of FRAND commitments are at base contract disputes between sophisticated parties, and it’s not as if there is a lack of institutions to deal with the problem: courts have shown themselves able to wade into these complex issues. ITCs can be harmful, but only when the invitation blossoms into an agreement, in which case it can be challenged under the Sherman Act.
Another course would be for the Commission to issue guidelines. This path – the unfairness, deception, and ad-substantiation statements – did wonders for the legitimacy of the FTC’s consumer protection program in the 1980s. What should Section 5 guidelines look like? They should prescribe a narrow domain, covering only conduct that (1) is clearly harmful (or poses a significant threat of substantial harm) to consumers through its effect on competition, (2) is unlikely to generate any cognizable efficiencies, and (3) but for the application of Section 5, would remain unremedied. In practice, this would mean retaining only ITCs and certain information sharing by non-dominant firms that is likely to facilitate collusion.
The economic sophistication of antitrust jurisprudence has progressed light years since the last Supreme Court case involving a Section 5 claim, during the era of Schwinn and Utah Pie. Maybe the fact that the Commission and antitrust commentators have searched so hard and so long for the elusive conduct that Section 5 alone was designed to tackle is a signal that such conduct does not exist. Perhaps Section 5 should go the way of the Robinson-Patman Act, another antitrust statute of a similar vintage that has been overtaken by economics to the point that neither the FTC nor the Antitrust Division enforces it.
Sen. Edward J. Markey (D-Mass.) and Rep. Joe Barton (R-Texas) have reintroduced their “Do Not Track Kids Act,” which, according to this press release, “amends the historic Children’s Online Privacy Protection Act of 1998 (COPPA), [and] will extend, enhance and update the provisions relating to the collection, use and disclosure of children’s personal information and establishes new protections for personal information of children and teens.” I quickly scanned the new bill, and it looks very similar to the previous bill of the same name, which they introduced in 2011 and which I wrote about here and then critiqued at much greater length in a subsequent Mercatus Center working paper (“Kids, Privacy, Free Speech & the Internet: Finding The Right Balance”).
Since not much appears to have changed, I would just encourage you to check out my old working paper for a discussion of why this legislation raises a variety of technical and constitutional issues. But I remain perplexed by how supporters of this bill think they can devise age-stratified online privacy protections without requiring full-blown age verification for all Internet users. And once you go down that path, you open up a huge Pandora’s box of problems that we have already grappled with for many years now. As I noted in my paper, the real irony here is that the “problem with these efforts is that expanding COPPA would require the collection of more personal information about kids and parents. For age verification to be effective at the scale of the Internet, the collection of massive amounts of additional data is necessary.”
But that’s hardly the only problem. How about the free speech rights of teens? They do have some, after all, but this bill could create new limitations on their ability to freely surf the Internet, gather information, and communicate with others.
In the end, I don’t expect this bill to pass; it’s mostly just political grandstanding “for the children.” But it’s a real shame that smart people waste their time with counter-productive and constitutionally suspect measures such as these instead of focusing their energy on more constructive educational efforts and awareness-building approaches to online safety and privacy concerns. Again, read my paper for more details on that alternative approach to these issues.
My friend and frequent co-blogger Larry Downes has shown how lawmaking in the information age is inexorably governed by “The Law of Disruption”: the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.” This law is “a simple but unavoidable principle of modern life,” he says, and it will have profound implications for the way businesses, government, and culture evolve going forward. “As the gap between the old world and the new gets wider,” he argues, “conflicts between social, economic, political, and legal systems” will intensify, and “nothing can stop the chaos that will follow.” All of this has major ramifications for high-tech policymaking, or at least it should.
A powerful illustration of the Law of Disruption in action comes from a cautionary tale told by telecom attorney Jonathan Askin in his new essay, “A Remedy to Clueless Tech Lawyers.” In the early 2000s, Askin served as legal counsel to Free World Dialup (FWD), “a startup that had the potential to dramatically disrupt the telecom sector” with its peer-to-peer IP network that could provide free global voice communications. Askin notes that “FWD paved the way for another startup—Skype. But FWD was Skype before Skype was Skype. The difference was that FWD had U.S. attorneys who put the reins on FWD to seek FCC approvals to launch free of regulatory constraints.” Here’s what happened to FWD, according to Askin:
In lightning regulatory speed (18 months), the FCC acknowledged that FWD was not a telecom provider subject to onerous telecom regulations. Sounds like a victory, right? Think again. During the time it took the FCC to greenlight FWD, the foreign founders of Skype proceeded apace with no regard for U.S. regulatory approvals. The result is that Skype had a two-year head start and a growing embedded user base, making it difficult for FWD, constrained by its U.S.-trained attorneys, to compete.
FWD would eventually shut down while Skype still thrives.
This shows how, no matter how well-intentioned any particular law or regulation may be, it will be largely ineffective, and possibly quite counter-productive, when stacked against the realities of the fundamental “law of disruption,” because it simply cannot keep up with the pace of technological change. “Emerging technologies change at the speed of Moore’s Law,” Downes notes, “leaving statutes that try to define them by their technical features quickly out of date.”
With information markets evolving at the speed of Moore’s Law, I have argued here before that we should demand that public policy do so as well. We can accomplish that by applying Moore’s Law to all current and future technology policy laws and regulations through two simple principles:
- Principle #1 – Every new technology proposal should include a provision sunsetting the law or regulation 18 months to two years after enactment. Policymakers can always reenact the rule if they believe it is still sensible.
- Principle #2 – Reopen all existing technology laws and regulations and reassess their worth. If no compelling reason for their continued existence can be identified and substantiated, those laws or rules should be repealed within 18 months to two years. If a rationale for continuing existing laws and regs can be identified, the rule can be re-implemented and Principle #1 applied to it.
If critics protest that some laws and regulations are “essential” and can make the case for new or continued action, nothing stops Congress from legislating to continue those efforts. But when it does, it should always include a two-year sunset provision to ensure that those rules and regulations get a frequent fresh look.
Better yet, we should just be doing a lot less legislating and regulating in this arena. The only way to ensure that more technologies and entrepreneurs don’t end up like FWD is to make sure they don’t have to deal with mountains of regulatory red tape to begin with.
I think I owe Tom Brokaw an apology. When I first started reading his most recent Wall Street Journal column, “Imagine the Tweets During the Cuban Missile Crisis,” I assumed that I was in for one of those hyper-nostalgic essays about how the “good ol’ days” of mass media had passed us by and why the new media era is an unmitigated disaster. Instead, I was pleased to read his very balanced and sensible view of the old versus new media environments. Reflecting on the evolution of the media marketplace over the 50 years since JFK’s assassination, Brokaw notes that:
The media climate has changed dramatically. The New Frontier, as Kennedy liked to call his administration, received a great deal of attention, but 50 years ago the major national information sources consisted of a handful of big-city daily newspapers, a few weekly news periodicals and two dominant TV network evening newscasts. Now the political news comes at us 24/7 on cable, through the air, the digital universe, on radio and print. And it comes to us more and more as opinion rather than a recitation of the facts as best they can be determined. News is a hit-and-run game, for the most part, with too little accountability for error.
This leads Brokaw to wonder if the amazing media metamorphosis has been, on net, positive or negative. “The virtual town square has been wired and expanded,” he notes, “but the question remains whether more voices make for a healthier political climate. With a keystroke we can easily move from an online credible source of information to a website larded with opinion or deliberately malicious erroneous claims. Have we simply enlarged the megaphone, cranked up the decibel level, and rallied the like-minded without regard to facts or consequences?”
While he’s obviously concerned about what we might label the “quality control” issues associated with some new media outlets, Brokaw’s answer to the question he posed generally gets it right:
Still, as a child of an earlier media era, I much prefer the contemporary news and information culture—even when I am occasionally singled out by one side or the other for something I’ve said. I like the range of choices, the new voices, the ease of cross-checking and getting the most obscure information with a minimum of effort. This empowers us as no technological advancement has before. And while it may be easier to stay within one’s ideological comfort zone, left or right, it is a good deal more stimulating to wander beyond the boundaries to find what else is out there.
Good for Tom Brokaw. That largely reflects my own thinking on the issue, which can be found in the essays listed below. Generally speaking, we’re better off with today’s world of information abundance than the old world of information scarcity, limited outlets, constrained choices, and homogenous fare. That’s not to say everything is perfect in the new media ecosystem. In particular, Brokaw is right to point to the quality control issues that accompany a world where every voice can be heard. But we’re still figuring out ways to grapple with that problem, largely by encouraging still more voices to join the endless conversation and check the assertions made by others. As Brokaw correctly notes, “This empowers us as no technological advancement has before.” And it leads to more truth and wisdom in the long run.
- Thoughts on Andrew Keen, Part 1: Why an Age of Abundance Really is Better than an Age of Scarcity
- We Are Living in the Golden Age of Children’s Programming
- Book Review: Eli Pariser’s “Filter Bubble”
- Television: From Vast Wasteland to Vast Wonders
- Testimony at the FCC’s Hearing on “Serving the Public Interest in the Digital Era”
- Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society