I have been silent for months, thinking about how to launch something better, something more worthy of the extraordinary opportunities and risks that surround us. Whether we deserve it or not, changes in information technology are offering us lawyers and managers of risk, compliance, ethics, security and privacy the biggest opportunity in a long time to change what we do, and perhaps the most challenging cry for help we will ever hear. This post makes 2015 predictions that are more likely than most to come true, because it predicts the themes on which this blog will focus first as it works to shake up our world. More importantly to you, right now, I have tried to lace it with explosive ideas with, at or from which your mind can run; I look forward to hearing from you whichever of those three directions you choose.
1. Wrestling with Tech Angels for What Law Should Do
The rule of law has been a foundation of growth in technology, but the profession of law may be one of the last to grow with technology. Technology did once invent a law, but because technology invented it, it could be neither natural law, judge-made law nor statute. Building on Alan Kay’s maxim that “The best way to predict the future is to invent it,” the law that technology invented, Moore’s Law, became a law because it turned a surprising observation into a business plan; and because it is a business plan rather than a law of nature, it remains very easy to repeal or replace.
If you want to invent the future, you might do well to ask what people or organizations will need. The legal profession’s little sparks toward inventing the future can be so isolated that their embers glow for decades. In 2008, Richard Susskind devoted just a few pages at the end of The End of Lawyers? to what lawyers could actually do in the future, and we keep seeing those embers in posts on alternative legal careers right up to this moment. Such little embers will not start a bonfire; they are sustaining innovations rather than disruptive innovations. They help the ancient guilds of the barristers and solicitors that still dominate the profession of law become leaner and more techie.
Meanwhile, the world on which those guilds perform their ancient rites has become so different, so new, that the question is no longer whether law can keep up with it so much as whether law can even see it at all. More than half of the assets of even old-economy companies are now knowledge assets, and companies are even losing their insurance coverage for those assets. Only very rarely can organizations use patents to protect the dynamic databases that are central to these knowledge assets, particularly after Alice, and only petrified compilations tend to be eligible for copyright protection, even in Europe with its Database Directive. Law will deal with the immediate threats posed by technology that are impossible not to see, like the threat of drones flying directly into the paths of commercial airliners, which the FAA will probably address soon. Bigger and more inchoate threats call for new liability rules of the road for this industrial revolution, but to get there we need to log many more nights wrestling with the angels of technology.
2. Protection as the End, and Governance as a Means
In the absence of protection by law or by insurance, organizations must protect knowledge assets themselves, and the more organizations understand their knowledge assets, the more they view protecting them as critical. There are two principal means to that critical end: contractual and operational protections. First, data sharing is fundamental to realizing the value of knowledge assets, so data licensing agreements and other contracts governing the creation, ownership, maintenance, use, disclosure, aggregation, return and destruction of data are the foundation of the knowledge economy.
The other critical area of knowledge asset protection is operational or programmatic. The dominant term for it now may be “information governance” or “big data governance,” and many of its advocates and practitioners mistakenly see it as an end rather than a means to the critical end of protecting knowledge assets. It is not just that governance is a nice-to-have and protection a need-to-have. It is more than that: in an era of ubiquitous, instantaneous, powerful search, governance has lost its value for retrieval and use, and may have no value at all except insofar as it provides needed protection.
If information governance is a means, not an end, we can abandon many of the old methods of information governance either when they lose their value or when more effective means to achieve the goal of protection are devised. If we try to destroy or radically change means that define professions, however, we may have to do battle with professional associations that rise up to defend their turf, whether older professions built on classification systems no longer necessary in this era of search — e.g., archivists, records managers, librarians — or, sometimes, professions as new as privacy (see section 4, below).
3. Getting Cybersecurity to Stop Chasing Its Tail
One profession from which we certainly will not get any push-back against changing methods is cybersecurity, which leads the rest of information governance in incorporating high-velocity search, a good thing not only for the security of personal information but also for the secrecy of trade secrets and the protection of databases and intellectual property. Cybersecurity is always adapting, not only as a field but in response to each particular new threat or vulnerability, in contrast to traditional organizational policies and programs, whose highest value was often consistency of application. That resilience and superhuman response speed make cybersecurity the big data hero of information governance. But they are not enough.
The best thinkers in cybersecurity do more than constantly adapt to the barrage of attacks hitting their organizations every day; they explore the root causes of those attacks being so numerous and so easy. For example, in early 2014 Bruce Schneier nailed the wild insecurity and unpatchability of the internet of things (IoT), which will probably be the root cause of some of the major security incidents of 2015. Schneier noted that none of the players in the IoT supply chain — not the chip maker, nor the original device manufacturer, nor the brand-name company that adds the user interface — has the incentives, expertise, “or even ability” to patch the software once it’s shipped.
Understanding this issue, one can (in theory) go up the chain — a friendly amendment to Moore’s Law — to the chip maker, and perhaps, as Schneier suggests, the ISP is the entity properly incented to do the patching. Certainly new IoT regulatory requirements, like the FDA’s cybersecurity guidance for medical devices and HIPAA’s coverage of medical devices that now capture stores of patient data, create incentives at the device level, too. And we can hope that the murder of a Vice President on Homeland through the hacking of his heart monitor framed the issue well enough, given that Vice President Cheney had the wireless connection of his own heart device turned off.
In fact, however, even as the payment card industry begins to make the big retailer card breaches a thing of the past through its new standard, the hacking of homes, hearts and other important objects newly connected to the Internet is likely to become the next big area of breaches. Responders will want, and in some cases need, the same high-velocity visibility and data loss prevention capabilities across the widely dispersed networks of the IoT that they now have within organizations. Even if they could get them, though, the cost would dwarf the cost of a secure supply chain, and the results would not be as good (in security, let alone privacy).
This example shows how the two means of knowledge asset protection discussed in section 2 — contractual and operational — converge in the networked world, and why lawyers, procurement strategists and compliance strategists are sometimes as important to cybersecurity as the chief information security officer.
4. Privacy Advocates and Regulators Need to Start Playing with a Full Deck
If cybersecurity is sometimes trapped in an endless cycle of transforming its responses to constantly changing threats almost quickly enough, privacy has in some respects been having the same conversation for more than 40 years. This point was made very powerfully this year by Professor Chris Hoofnagle, a great historian and advocate, who summarized and made available the discussions of the committee whose 1973 report created the Fair Information Practices (FIPs, or sometimes later FIPPs). Anyone who doubts the enduring value of the FIPs even now, on the second half of Ray Kurzweil’s chessboard, should read the concise yet living “basic history” of them by privacy expert Robert Gellman. The fact that not only the FIPs but also the conversations about the FIPs have changed little in more than 40 years, however, makes one ask what has been growing outside the shadow of that oak.
Moreover, there was a moment in early 2014 that cried out for someone not too scared to ask questions about the designer clothes the emperor appears to be wearing. That moment came when the inventor of the incredibly attractive phrase “Privacy by Design,” then-Ontario Information and Privacy Commissioner Ann Cavoukian, Ph.D., published with other distinguished authors The Unintended Consequences of Privacy Paternalism. The paper was not unique in channeling the fury of privacy regulators, advocates and many scholars at what they see — with some reason, even though Fred Cate had been publishing many of the same ideas since at least 2006 — as tech companies’ efforts to undermine the basic principles of privacy; the hardest-hitting (and even more alliterative) entry may have been Hoofnagle’s The Potemkinism of Privacy Pragmatism. The historical irony of Cavoukian’s paper is that her adoption of the term “paternalism” may presage her own Privacy by Design remaining just another Potemkin village; indeed, without (libertarian or soft) paternalism, privacy itself may become a Potemkin village.
Let me start with Privacy by Design. If you believe that you are getting that job done by building free and simple privacy choices into consumers’ experience of products and services, and into employees’ and citizens’ experience of the choices offered to them, I would tell you that you are playing with a FIPs deck, but not a full deck. A FIPs deck is designed — to use the terms of the behavioral economics of Kahneman and Tversky, Thaler and Sunstein, in a deliberately irritating way — for “econs”; the full deck is designed for “humans.” What the full deck has that the FIPs deck does not is “choice architecture.” Choice architecture might be described as the paternalistic art of changing behavior by anticipating Kahneman’s faster, more instinctive and emotional “System 1,” or it might simply be described as what advertising is, who Steve Jobs was, or the air we breathe.
Such paternalism certainly would not “qualify” under the FIPs (Gellman: “While transparency is a classic FIPs principle, neither Privacy by Design nor Simplified Choice qualifies.”), and it might be viewed by many privacy advocates and regulators as a violation of Openness or Transparency. As a result, when it comes to influencing security and privacy behavior, the people working against stronger security and privacy are generally playing with a much fuller deck than the people advocating for it. To the extent that Privacy by Design is about improving human behavior in ways that the slower, more deliberative and logical System 2 would entirely favor, it must adopt some degree of paternalism with System 1. I will be blogging a lot more about what I mean by that, and of course welcome your responses; for today, I just want to bring you one little acorn from outside the shadow of the FIPs oak. Perhaps you will want to toss it into your New Year’s bonfire.
Happy New Year!