This conversation took place in July 2017 with Steve Wilson, Digital Identity Innovator & Analyst at Constellation Research. www.constellationr.com
Transcript
Information in the Digital Economy
Steve Wilson: “Maps, search engines, everything that we do moving around the physical world now is infused with information. The next step is going to be: What happens when what we want to do is purely digital? What happens with our activities when they’re purely online? These are not going to be simple digitizations of what we normally do. Cyberspace itself is going to be a new space. It’s going to be constructed digitally. We’re going to interact with it digitally. Everything we did, everything we do, is infused with information. The products that we’re going to exchange in the new digital economy are brand-new information products. They’re going to be constructed digitally, they’re going to be imagined digitally. The laws and the regulations, and almost the intuitions, that we apply to these things, to start thinking about them, that’s all new.”
Catching Up With Innovation
“Do we have a lot of catching up to do with technology?”
Steve Wilson: “We do need to catch up. A lot of the innovation that we’re about to see is innovation in regulations. I don’t think that’s an oxymoron. I don’t think that we should be stilted in our understanding of regulations and jurisprudence. I think that we need to be quite positive. Jurisprudence is really like a contest of ideas. It’s like an arena in which human rights ultimately play out. I like to draw the analogy to the early black gold rush of the 1850s. Western civilization discovered this raw material, crude oil, and we started to burn it, and we know where that led. The more important thing about crude oil is that it is the raw material that gets broken down and reconstructed, and it forms almost everything that we do in the established economy now. Information is going to be the same. What happened with the black gold rush was that a whole lot of new laws and regulations had to be formed in order to reset the balance of human rights, commercial rights, and free enterprise. In the days of the black gold rush it was not uncommon for pioneers and entrepreneurs to barge their way past landholders and invade people’s properties, helping themselves to these riches underneath the ground. We’re seeing the same thing playing out now. We’re seeing an incredible gold rush for information in the digital economy. There is data everywhere, and there is an intuition, I think a false intuition, on the part of entrepreneurs and pioneers, that this data out there in the public domain is theirs for the taking. Just like the old gold rush, where people used to invade properties and help themselves. We’re going to see a contest of ideas. We’re going to see human rights playing out in the public arena, where people are going to quite rightly assert their rights and their interests around data that’s being exploited almost behind their backs. I think that in the next 10-15 years we’re going to see new laws, new regulations, and new ways of thinking about how to balance commercial rights and commercial gain against the rights of the individuals affected by these digital innovations.”
“People Have No Idea What’s Going On”
“Do people today even understand that that is happening?”
Steve Wilson: “People today have almost no idea. The most powerful people in the world want to tell us that in the digital age we’re all quite sophisticated. We know that there’s no such thing as a free lunch. Now don’t get me wrong; I love the free economy, I love the incredible tools and resources that are being made [available] ostensibly for free, and I’ve got a fair sense that there’s a bargain going on. There’s a bargain for my raw data: my personal information is being traded off for these riches and free dividends. I don’t think the majority of people understand this. I think that the typical user has a rough idea, but I don’t think they have any idea in detail about how information is flowing behind their backs. The cleverest people in the world are being employed by digital information companies, not to cure cancer, but to figure out ways of data mining and data refining. Today, they’re basically selling us ads. The cleverest people in the world are coming up with algorithms to mine our data behind our backs. There’s no way that the typical person has any real understanding of what’s going on there.”
Market Failure
Steve Wilson: “[I] love free enterprise, but if we leave it to business and market forces to sort out matters of public safety, we’re going to see market failures. We’re already seeing market failures in social media, social networking, and the big digital companies. It’s clear that the richest people in the world today have made their fortunes on the back of nothing other than digital data, pure data. It’s amazing, and I respect that. I think that we should acknowledge the innovation and the power and the good, but we have to predict that public safety is not going to be an automatic outcome of market forces in the digital economy.”
Public Safety in the Digital Economy
“What would be the answer to that?”
Steve Wilson: “It’s going to be a measure of regulation. In this place, in Silicon Valley, regulation is a dirty word, but it’s fascinating to me. We’re already seeing some of the digital leaders… Elon Musk has just recently declared that he thinks a measure of regulation is going to be required around artificial intelligence. I agree with Musk absolutely. We’re also looking at things like a social minimum income. That’s quite an amazing concept. It speaks of socialism, and yet we have the barons of Silicon Valley actually thinking broadly, thinking about the humanitarian impact of what we’re doing. That’s a radical thought, and again, it’s an example of how digital transformation is changing our thoughts about regulations, jurisprudence, and the way that we’re going to run this society. It’s a fascinating time.”
Fascinating, Slightly Broken Times
“It’s fascinating but also a little broken at the same time?”
Steve Wilson: “It’s broken a little bit. I think technology is always challenging [how] we make our way in the world. It’s very easy and very lazy to say that technology is outstripping the law. Technology is creating new ways to break the law. We’ve had laws that are hundreds of years old around personal rights. We’ve got laws that are decades old around privacy and the reasonable constraints that exist around the use of personal information. Certainly technology, artificial intelligence, is all about getting to know you better than you know yourself. AI is all about insights. Big data is all about determining things about people without asking them questions. These technologies are pushing the social norms, but the norms are snapping back. People find it incredibly creepy that Twitter knows what they might have had for breakfast. Is it the case that Twitter is marching ahead of the law, marching ahead of social norms? No, these innovations are exposing how precious those social norms are. We know instinctively, and in fact [there is] a law, that people are not allowed to just arbitrarily know everything they possibly can about us. There are legal limits to what you’re allowed to know about people, and there are legal limits to what you’re allowed to do with information about them. Big data and algorithms are making it easier for people to break the law. It’s not so much that the law snaps back; I think social norms and expectations snap back. We find that centuries-old legal principles still apply. We still have a right to be let alone. Brandeis and Warren [writing in 1890; Brandeis later became a US Supreme Court Justice] looked at telegraphs and photographs. They looked at these new technologies, the information technologies of the day, and they found that they challenged people’s right to be let alone. Warren and Brandeis reasserted the legal right. They clarified the jurisprudence of the right to privacy in the US. For decades we’ve basically enjoyed the same principles; we’ve enjoyed the right to be let alone. The Internet, the digital economy, makes it harder to be let alone, but it has in fact exposed how important it still is. The law is not outpaced by technology. The law is shown in bright contrast to be important. We will see new regulations and new legal innovations that reassert the strength of those principles.”
The Death of Privacy
“Do you think the days of privacy are over?”
Steve Wilson: “It’s easy to be fatalistic about that. My first response to people who say privacy is dead is, ‘Look, we ain’t seen nothing yet.’ With smart devices, the Internet of Things, with everything being digital, with everything emitting information, we ain’t seen nothing yet. There is plenty of scope for people to reassert control over their lives and to discover what’s really going on. It’s all about restraint. I mean, privacy to me is about restraint. Privacy is about holding back, not knowing everything about people, and certainly not doing everything. I think restraint could be that missing human quality that is going to get reasserted as the digital economy evolves. Look at the biggest companies in the world now: they’ve got market valuations of a hundred billion dollars, and all they’re doing is mining and refining information. I don’t want to hobble these companies. I think they’re good, but I don’t think they should be hundred-billion-dollar companies. I think we might all be better off if they were ten-billion-dollar companies and if we started to exercise a little bit of restraint. It’s fascinating to me that the captains of Silicon Valley are in fact calling for some regulation, and therefore they’re calling for some restraint. I think it’s fascinating.”
A New Social Contract
Steve Wilson: “The idea of a new social contract: I don’t think this is going to be a radical new contract. I think it’s a matter of tweaking and reasserting the expectations that we have of business. When we talk about digital transformation, the fascinating thing about the law and about social contracts is that they are actually based on magic numbers. There are magic numbers in the law. It’s called Blackstone’s formulation: the idea that it’s better that ten guilty people go free than to have one innocent person jailed. It’s an ancient idea, and it underpins things like the principle of being innocent until proven guilty. That’s Blackstone’s formulation: we want to bias the law towards the innocent. We know that the legal process, the state, is an enormous apparatus that can put people at a disadvantage. But think about the number ten. Where does the number ten in Blackstone’s formulation come from? Is it possible that the law, that social norms themselves, become digitized? Maybe there’s a dashboard somewhere, a judicial dashboard, where we could dial that number up to twenty or thirty, or dial it back to three. What are these magic numbers? Are we ready to have a sort of digital discussion about how we want to put society together? I think when you really look deeper at those numbers, they’re not going to change much. I think the new social contract is going to be an adjusted social contract. If we look at weighing up the rights of business and the common citizen, I think what we’ve seen in the last twenty years is a tilt towards business. The social contract itself is going to be more a matter of rebalancing than doing anything radically different.”
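Blackstone’s “magic number” can be made concrete. As a purely illustrative sketch (an editorial aside, not something proposed in the interview): if wrongly convicting an innocent person is treated as r times worse than wrongly acquitting a guilty one, minimizing expected cost means convicting only when the probability of guilt exceeds r/(r+1). Wilson’s hypothetical judicial dashboard is then just the dial on r:

```python
def conviction_threshold(r: float) -> float:
    """Minimum probability of guilt needed to convict, if wrongly
    convicting an innocent person is r times worse than wrongly
    acquitting a guilty one (Blackstone's r = 10).

    Expected cost of convicting:  (1 - p) * r
    Expected cost of acquitting:  p * 1
    Convict only when (1 - p) * r < p, i.e. when p > r / (r + 1).
    """
    return r / (r + 1)

# "Dialing the number" on the hypothetical judicial dashboard:
for r in (3, 10, 20, 30):
    print(f"r = {r:>2}: convict only if P(guilt) > {conviction_threshold(r):.3f}")
# r = 10 gives ~0.909; even dialing r up to 30 only moves the threshold
# to ~0.968, which is one way to see why the numbers "won't change much".
```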
“Silicon Valley Has Forgotten Computer Science 101”
Steve Wilson: “Data mining and machine learning, machine decision-making, augmented decision-making: we’ve had some of this stuff for twenty years in health care. There’s been the idea of decision support systems, and now they’re emerging. Let’s call it AI. When you fire up your map application, and it knows where you’re probably going from the time of day, and without prompting it gives you a little bit of advice about which traffic jam to avoid, that’s AI. We easily slide from there into an optimistic view that cars are going to become self-driving. If you go back to the very first days of personal computing, people thought that the personal computer was going to transform the way we live overnight. Again, we underestimate what could be done in ten years; we get carried away in the first year. I think it’s really important that we fundamentally revisit the claims behind things like self-driving cars and big AI, the idea that you could have a human-like awareness in computers. I don’t know what they teach in schools these days, but when I did first-year computer science in the 1970s, we were taught about the fundamental limits of algorithms. There are some things that algorithms can’t do. Mathematicians will tell you there are some things that algorithms will never do. I think that we’ve forgotten this collectively. I think Silicon Valley has forgotten Computer Science 101. It’s haring down this path of assuming that self-driving cars are just around the corner. There have been some horrible missteps in machine vision and object classification, [for example] the famous racist algorithms. Now this is not just a matter of the programmers’, the developers’, bias infecting their work. That’s inevitable, and some good analysis is coming out now, some good ways of moderating and helping people be more responsible in the design of algorithms. It’s beyond that; it’s about an optimism that’s infected AI itself, the idea that the self-driving car is going to be human-like. You know, a computer can’t even solve the traveling salesperson problem; a computer cannot tell you the fastest way to get from A to B via every town in a country. What chance is there, really, that a car is going to be fully functional and be delegated full responsibility to make life-and-death decisions? People are talking about life-and-death decisions already, you know, the famous trolley problem. I’ve seen engineers, clearly for the very first time, rush off to Wikipedia to look up the trolley problem (https://en.wikipedia.org/wiki/Trolley_problem). The thing about the trolley problem is that it has no resolution. Ethicists and philosophers will tell you that the point, the moral of the story, is that there is no answer to the trolley problem. It can’t be programmed. We cannot have a program running a self-driving car that’s going to come up with the right answer to the trolley problem every single time. The engineers are going along as if the trolley problem is just an algorithm problem.”
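The traveling-salesperson point is about intractability rather than impossibility: exact algorithms exist, but the number of possible tours grows factorially, so brute force collapses long before “every town in a country”. A minimal sketch (illustrative only, not from the interview):

```python
import itertools
import math

def shortest_tour(dist):
    """Exact TSP by brute force: try every ordering of cities 1..n-1,
    starting and ending at city 0. dist is an n x n distance matrix."""
    n = len(dist)
    best_len, best_tour = math.inf, None
    for perm in itertools.permutations(range(1, n)):
        tour = (0, *perm, 0)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Fine for a handful of towns...
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(shortest_tour(dist))  # (21, (0, 2, 3, 1, 0))

# ...but there are (n-1)! tours to check. For "every town in a country":
print(math.factorial(49))   # 50 towns -> roughly 6 * 10^62 orderings
```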
“AI Is Much Harder Than It Looks”
“Isn’t Silicon Valley a little too optimistic on the promises of AI?”
Steve Wilson: “There is an over-optimism, and I think that will snap back. We are training people to think about AI in an optimistic and very one-dimensional way. There are already automobile executives who are saying, believe it or not, that we might have a configuration in the setup of the self-driving car that will allow the driver to nominate whether the car should preserve the life of the driver or the life of the pedestrian. If we think that that is the sort of decision that a program can reliably make, then that is unethical. The idea that we should even be contemplating it, that we should be having a dialogue framed in that way, I think is fundamentally wrong. I think a more sophisticated way of framing AI is to understand that there are some things that computers can’t do. An algorithm is always going to reach its limit, where it’s either going to tip into unpredictable behavior or just grind to a halt because it doesn’t know what to do. The real trick in AI, the real trick in human intelligence, is that we know when to call for help. We know when something is beyond our ability, and we go to another doctor, we call our trusted advisor, or we put it to a vote. I don’t know how that’s done. There’s some very deep mystery in human cognition that allows us to know when we’ve reached our personal limit and when to call for help. Just think about it: suppose there’s an algorithm for detecting failure, so the self-driving car can realize, ‘Here’s a circumstance that I’ve never seen before. There’s some bright sunlight, there’s a truck going by, and I’ve never seen this before; I need some help.’ If there’s an algorithm for calling for help, that algorithm itself is going to fail sometimes. The algorithm for detecting failure will itself fail. I’m not saying that there’s anything mystical going on in the human brain, but there are some deep cognitive problems that we haven’t worked out yet. I think that that’s the ethical problem: the captains of industry shouldn’t be pretending that self-driving cars are almost a solved problem. It’s much harder than it looks.”
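Wilson’s “the algorithm for detecting failure will itself fail” echoes a classic undecidability argument. A hypothetical sketch of the diagonalization (an editorial illustration; `will_fail` and `contrarian` are invented names): assume a perfect failure oracle existed, then a program built to contradict the oracle’s verdict about itself makes the oracle wrong either way.

```python
def will_fail(program, data) -> bool:
    """Hypothetical perfect failure oracle: returns True exactly when
    program(data) would fail. The sketch below shows why no such
    total, always-correct function can exist."""
    ...

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about
    # running this program on its own source.
    if will_fail(program, program):
        return "runs fine"                        # oracle said "fails" -> we succeed
    raise RuntimeError("deliberate failure")      # oracle said "fine"  -> we fail

# Now consider contrarian(contrarian):
# - If will_fail(contrarian, contrarian) returns True, contrarian
#   returns normally, so the oracle was wrong.
# - If it returns False, contrarian raises, so the oracle was wrong.
# Either answer is incorrect, so a perfect will_fail cannot exist.
# A self-driving car's "do I need help?" monitor is exactly such a
# self-check, which is why it can only ever be approximate.
```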
Self-Driving Cars, The Biggest Business Ever
“Where does that over-optimism come from?”
Steve Wilson: “They would do that because the riches to be made from self-driving cars are obvious. There are claims, I think justifiable claims, that the money to be saved by cutting the road toll is huge, and therefore the money to be made from solving those problems is huge. AI is fascinating, and the self-driving car is the marriage of historically the biggest businesses in the world: gas, petroleum, and vehicles themselves. Think of the biggest companies of the 50s and 60s, Ford and General Motors. The car companies are historically the biggest businesses we’ve ever seen. Marry that to the information economy now, and the size of these things is obvious. They should be big, powerful companies, but if they’re claiming that they’re going to save our lives, if they’re claiming that they’re going to alter the fabric of society by cutting the road toll, then there’s a really deep social responsibility, a business responsibility, that goes with that. It is to make sure people have a realistic understanding of what the prospects of this thing are.”
Catering To Human Nature
“Maybe they’re just telling us the stories we want to hear?”
Steve Wilson: “We have a thirst for solving problems. Of course, human beings have a justified optimism and a justifiable belief in their own ability to solve these problems. What we’ve seen in the last hundred years is extraordinary: antibiotics, and surgery, and sewage, and public health, and public transportation. I’m not saying that trend is going to reverse anytime soon, but there’s nothing inevitable about the march of technology. It is never linear, it’s never completely predictable, and there’s always a bell curve of expectations and sophistication. Out on the edge of that bell curve right now, we have some hubris. We have some very optimistic, far-fetched ideas about how good this is going to be. We will see some of those ideas come back to earth. We’ll see experimentation, we’ll see engineering, we will see the laws of physics being respected, and people will start to understand how these things really need to be put together. Then I think the forward march of AI is going to be two steps forward and two steps back. – So then it’s not moving forward at all? – One and a half steps back… The march of AI is going to be unpredictable. It’s not going to be as linear, and it can’t be as optimistic, as what people think so far.”
The Limits of Artificial Intelligence
Steve Wilson: “Look, there was another point I wanted to make about the limits of AI. One of the most commercially realistic technologies these days is conversation technology, the chatbots. The idea of helping people through the complexity of their interactions, their business interactions, through chat is a powerful idea. There have been some recent experiences with chatbots that have gone feral, chatbots that have gone wild and adopted racist guises. Calling a bot racist is a problematic idea, but let’s just call it what it is. The Microsoft Tay bot [was] released into Twitter to learn the mores and foibles of human language, to adopt those lessons of conversation and then start automating conversation. [It] proved to be a disaster within hours. The salutary lesson of the Tay experience was that this artificial intelligence was fundamentally not human-like. It’s so far short of human that we need to understand this. Think about a toddler. If a toddler learned some foul language in the playground and brought that home, and she started speaking inappropriately, then you take the toddler, you sit her down, and you explain to her. You have an attitude adjustment, as the euphemism goes. We’re reprogramming the toddler. What they did with Tay was they turned her off. They switched her off; it was like the digital death penalty for becoming unacceptable. The really deep problem here is that there is no teachable moment for an artificial intelligence. You cannot take the artificial intelligence and sit it down and explain where it went wrong. We are so far short of having a teachable moment for artificial intelligence, so far short of having self-awareness, that we need to be really careful with the implicit assumption that self-driving cars will make ethical decisions, or that conversational robots will have human-like properties. We don’t even know how self-awareness works in ourselves, let alone how we’re going to program it.”
Artificial Friction
“It seems technology is already outpacing humans. What do you think about that?”
Steve Wilson: “Oh gosh… I’d like to see a bit more friction, which is a tremendously analog idea. You know, friction is important; it is what allows us to walk upright. Friction in the digital economy… maybe we need a bit of artificial friction. Things are moving so fast that we’re creating problems before we’re even aware of them, and we don’t know what the answer to that is. Maybe some sort of artificial friction, a little bit of time off grid, a little bit of thinking time. When I was a software manager many years ago, I became concerned that software automation was creating some hastiness on the part of my team. The innovation that I brought about was to get people to turn their computers off for one day a week and to work with pencil and paper. We re-injected some friction. When a software engineer was given a task for the first time, we actually asked them to present it to the team the very next day, around a whiteboard with pen and paper. We kicked these ideas around before it was too late. The thing about the digital tools that we’ve all grown up with… I talk about design review and using whiteboards, and there are people [working today] who weren’t even born then. Today they are responsible for the most fabulous complexity [with] the software that they write. In one day, a software engineer can develop something that’s more complicated than an entire airport that normally takes a thousand person-years to build. That ability to create incredible digital complexity without any friction is taking us into areas of disaster. I’m not saying that the whole thing is catastrophic, but clearly there are areas where we move so fast, and some sort of digital friction might be the way to save ourselves from some of this misadventure.”
Education and Digital Safety
“What do you think are the skills or approaches humans should take to play a role in the digital future?”
Steve Wilson: “Well, needless to say, education is key to all of this. We have a bit of an easy supposition that education is the key to people being safer and more self-determining, to having greater agency in the digital economy. I think we need to be careful not to set people up to fail in this area. The digital world is so counterintuitive and so complex that, with the best will in the world and the best curiosity in the world, I don’t know that the average person in the street is able to educate their way into digital safety. I think a lot of the digital safety issues are frankly political. What we need to look for is some political outcomes, some policy outcomes. It’s not just about technology, and it’s certainly not just about self-awareness and education. Just look at car safety. Knowledge and training are of course important, but the driving test is only one tiny element of car safety. There is a complex ensemble of technologies, standards, enforceable laws and regulations, as well as infrastructure. Think of traffic lights, and the decisions that were made over hundreds of years about standardizing worldwide what side of the road you drive on, and traffic signs. That social infrastructure is just as important. As we go into the digital economy, education can only be one part of it. I think the political side of this is [that] we need a Ralph Nader of data. We need people advocating in civil society for the interests of consumers, for the interests of citizens, and representing those interests in the evolving arena of laws and rules that are emerging. I don’t think that people on their own are going to be able to simply take full responsibility for their own safety. They need to have advocates; there needs to be some social movement that protects people’s interests en masse as we go forward.”