If Boeing is allowed to certify that a crash-prone aircraft is safe, and Facebook can violate users’ privacy expectations, should companies and industries ever be allowed to police themselves? The debate is heating up particularly in the U.S. tech sector with growing calls to regulate – or even break up – the likes of Google, Apple and Amazon.
It turns out to be possible, at least sometimes, for companies and industries to govern themselves, while still protecting the public interest. Groundbreaking work by Nobel Prize-winning political economist Elinor Ostrom and her husband Vincent found a solution to a classic economic quandary, in which people – and businesses – self-interestedly enrich themselves as quickly as possible with certain resources, including personal data, thinking little about the secondary costs they might be inflicting on others.
As the director of the Ostrom Workshop Program on Cybersecurity and Internet Governance, I have been involved in numerous projects studying how to solve these sorts of problems when they arise, both online and offline. Most recently, my work has looked at how to manage the massively interconnected world of sensors, computers and smart devices – what I and others call the “internet of everything.”
I’ve found that there are ways companies can become leaders by experimenting with business opportunities and collaborating with peers, while still working with regulators to protect the public, including both in the air and in cyberspace.
In a classic economic problem, called “the tragedy of the commons,” a parcel of grassland is made available for a community to graze its livestock. Everyone tries to get the most benefit from it – and as a result, the land is overgrazed. What started as a resource for everyone becomes of little use to anyone.
For many years, economists thought there were only two possible solutions. One was for the government to step in and limit how many people could graze their animals. The other was to split the land up among private owners who had exclusive use of it, and could sustainably manage it for their individual benefit.
The Ostroms, however, found a third way. In some cases, they revealed, self-organization can work well, especially when the various people and groups involved can communicate effectively. They called it “polycentric governance,” because it allows regulation to come from more than just one central authority. Their work can help determine if and when companies can effectively regulate themselves – or whether it’s best for the government to step in.
A polycentric primer
The concept can seem complicated, but in practice it is increasingly popular, in federal programs and even as a goal for governing the internet.
Scholars such as Elinor Ostrom produced a broad swath of research over decades, looking at public schools and police department performance in Midwestern U.S. cities, coastal overfishing, forest management in nations like Nepal, and even traffic jams in New York City. They identified commonalities among all these studies, including whether the group’s members can help set the rules by which their shared resources are governed, how much control they have over who gets to share it, how disputes are resolved, and how everyone’s use is monitored.
All of these factors can help predict whether individuals or groups will successfully self-regulate, whether the challenge they’re facing is climate change, cybersecurity, or anything else. Trust is key, as Elinor Ostrom observed, and an excellent way to build trust is to let smaller groups make their own decisions.
Polycentric governance’s embrace of self-regulation involves relying on human ingenuity and collaboration skills to solve difficult problems – while focusing on practical measures to address specific challenges.
Self-regulation does have its limits, though – as has been clear in the revelations about how the Federal Aviation Administration allowed Boeing to certify the safety of its own software. Facebook has also been heavily criticized for failing to block an anonymous horde of users across the globe from manipulating people’s political views.
Polycentric regulation is a departure from the idea of “keep it simple, stupid” – rather, it is a call for engagement by numerous groups to grapple with the complexities of the real world.
Both Facebook and Boeing now need to convince their employees, investors, policymakers, users and customers that they can be trusted. Ostrom’s ideas suggest they could begin to do this by engaging with peers and industry groups to set rules and ensure they are enforced.
Governing the ‘internet of everything’
Another industry in serious need of better regulations is the smart-device business, with tens of billions of connected devices around the world, and little to no concern for user security or privacy.
Customers often buy the cheapest smart-home camera or digital sensor, without looking at competitors’ security and privacy protections. The results are predictable – hackers have hijacked thousands of internet-connected devices and used them to attack the physical network of the internet, take control of industrial equipment, and spy on private citizens through their smartphones and baby monitors.
Some governments are starting to get involved. The state of California and the European Union are exploring laws that promote “reasonable” security requirements, at least as a baseline. The EU is encouraging companies to band together to establish industry-wide codes of conduct.
Getting governance right
Effective self-governance may seem impossible in the “internet of everything” because of the scale and variety of groups and industries involved, but polycentric governance does provide a useful lens through which to view these problems. Ostrom argued that this approach may be the most flexible and adaptable way to manage rapidly changing industries. It may also help avoid conflicting government regulations that risk stifling innovation in the name of protecting consumers, without serving either goal.
But success is not certain. It requires active engagement by all parties, who must share a sense of responsibility to the customers and mutual trust in one another. That’s not easy to build in any community, let alone the dynamic tech industry.
Government involvement can help build bridges and solidify trust across the private sector, as happened with cybersecurity efforts from the National Institute of Standards and Technology. Some states, like Ohio, are even rewarding firms for using appropriate self-regulation in their cybersecurity decision-making.
Polycentric governance can be flexible, adapting to new technologies more appropriately – and often more quickly – than pure governmental regulation. It also can be more efficient and cost-effective, though it’s not a cure for all regulatory ills. And it’s important to note that regulation can spur innovation as well as protect consumers, especially when the rules are simple and outcome focused.
Consider the North American Electric Reliability Council. That organization was originally created as a group of companies that came together voluntarily in an effort to protect against blackouts. NERC standards, however, were eventually made legally enforceable in the aftermath of the Northeast blackout of 2003. They are an example of an organic code of conduct that was voluntarily adopted and subsequently reinforced by government, consistent with professor Ostrom’s ideas. Ideally, it should not require such a crisis to spur this process forward.
Ultimately, what’s needed – and what professor Ostrom and her colleagues and successors have called for – is more experimentation and less theorizing. As the 10-year anniversary of Ostrom’s Nobel Prize approaches, I believe it is time to put her insights to work, offering industries the opportunity to self-regulate where appropriate while leaving the door open for the possibility of government action, including antitrust enforcement, to protect the public and promote cyber peace.
Scott Shackelford does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Authors: Scott Shackelford, Associate Professor of Business Law and Ethics; Director, Ostrom Workshop Program on Cybersecurity and Internet Governance; Cybersecurity Program Chair, IU-Bloomington, Indiana University