Contributed by Clive Dilnot


So one of the big takeaways for me just over the morning is the importance of connectivity. I even love the bus where there’s the two faces and they connect between the … That’s kind of obvious, I think, for people that are in design and I like to think that it should become more and more obvious for [inaudible 00:53:58]. The choice [00:54:00] of our name as a company, 5D, is all about that. The fifth D is supposed to be connection to the human element.

So for us, and from the time we founded the company, that was really the emphasis, but trying to figure out how to do that is the challenge. That is the challenge, because fundamentally it seems to me that all technology is pushing us in one of two directions, where direction number one is [00:54:30] greater isolation and individualism, and direction number two is greater interconnectivity. And I think we’re at a crossroads.

For me, the way I verbalize the crossroads in my work, from a technology perspective, is a crossroads where right now we have a lot of emphasis, especially from Silicon Valley and from Washington, DC, towards increased centralization. [00:55:00] I don’t just mean that in a political sense, I mean literally the structure of our technological world, how those connections are literally being emplaced into the physical world right now. Or the opposite, which is to dismantle the system of centralized control that has been emplaced and has literally encased our planet, hemming us in, controlling us [00:55:30] and watching us, and to move towards a much, much more swarm-based form of intelligence which is all about nearest-neighbor interactions, and truly a sense of community, which is really what Rich was talking about today.
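To make the nearest-neighbor idea concrete, here is a minimal, illustrative sketch in Python of swarm-style coordination (a boids-like alignment-and-cohesion rule). This is not 5D's actual system; the agent representation and gains are assumptions for illustration. The key property is that each agent reads only what it can sense locally; there is no central controller and no global position anywhere in the loop.

```python
# Illustrative sketch: swarm coordination from nearest-neighbor interactions
# only -- no central controller, no global coordinates shared between agents.
# Each agent nudges its heading toward the average heading (alignment) and
# center of mass (cohesion) of the few neighbors it can sense locally.

import math

def step(agents, sense_radius=5.0, align_gain=0.1, cohere_gain=0.05):
    """Advance every agent one tick using only local neighbor information.

    agents: list of dicts with 'x', 'y', 'heading' (radians).
    Returns a new list; no agent reads any global or shared state.
    """
    new_agents = []
    for a in agents:
        neighbors = [b for b in agents
                     if b is not a
                     and math.hypot(b['x'] - a['x'], b['y'] - a['y']) <= sense_radius]
        heading = a['heading']
        if neighbors:
            # Alignment: steer toward the local average heading.
            avg_heading = math.atan2(
                sum(math.sin(n['heading']) for n in neighbors) / len(neighbors),
                sum(math.cos(n['heading']) for n in neighbors) / len(neighbors))
            heading += align_gain * math.atan2(math.sin(avg_heading - heading),
                                              math.cos(avg_heading - heading))
            # Cohesion: steer toward the local center of mass.
            cx = sum(n['x'] for n in neighbors) / len(neighbors)
            cy = sum(n['y'] for n in neighbors) / len(neighbors)
            to_center = math.atan2(cy - a['y'], cx - a['x'])
            heading += cohere_gain * math.atan2(math.sin(to_center - heading),
                                               math.cos(to_center - heading))
        new_agents.append({'x': a['x'] + math.cos(heading),
                          'y': a['y'] + math.sin(heading),
                          'heading': heading})
    return new_agents
```

Run repeatedly, nearby agents' headings converge and the group moves together, which is the "swarm intelligence from nearest-neighbor interactions" point: coherent global behavior with no drone control ship to shoot down.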

So I’m thinking about all these things not just in a traditional community sense, but literally I’m like, “How do we structure the technology of our world?” And one way I [00:56:00] put this: I was asked to come give a talk to the leadership of the Department of Transportation a while back, because I [inaudible 00:56:04] through this whole process for a decade with the Pentagon in terms of how the Pentagon was going to structure its world. They made some terrible choices, cost the country billions of dollars and ended up with nothing. So I pleaded with the Department of Transportation, “Do not make the same mistakes. Do not in your search for optimality so disregard [00:56:30] the need for robust peer-to-peer interaction that we end up with a disaster.”

They do not understand, in this conversation, what I am trying to tell them. They have difficulty imagining how their perfect algorithms could fail. And by the way, universities, including the University of Michigan, had done these wonderful simulations that [inaudible 00:56:52] if all the cars have GPS positioning we can just create this wonderful interconnected highway [00:57:00] system. We can control all the cars and everything will be great. And my simulation proves that, so you can’t even argue with me. The DOT is like, “So all these universities, they’ve all done the research, it’s very clear. Like, I don’t understand what the problem would be. Like, why would you fight this? Aren’t you a technologist?”

I’m trying to figure out how to explain to them the problem of a centralized control approach, which by the way is responsible, in my [00:57:30] mind, and this is hyperbole, for all the ills we’ve talked about today. I’m trying to help them imagine what that future would look like, so I show them movie clips. I show them a movie clip of Die Hard 4 where Bruce Willis is trying to combat an ecosystem that has been taken over, so it’s like a smart city ecosystem that’s been taken over by terrorists who can basically control all the lights. So they’re wreaking havoc on everything, because [00:58:00] everyone’s dependent on this smart grid to tell them what to do, the terrorists get to basically play havoc and control everything.

So that’s one obvious security risk. Most people now understand that; that’s why there’s so much money in cybersecurity. But what I say to them is, “Don’t misunderstand. There is no safe network.” The Snowden issue should now have made that abundantly clear to the federal government if they didn’t understand that before. There is no safe network.

[00:58:30] And they go, “Well, what do you mean? What should we do?” I’m like, “Don’t centralize any of the control.” “Well, how would that happen? And what do you even mean?” And so I show them a clip of the Star Wars movie where the drone control ship gets shot down out of the sky and then all the drones fall over dead. I’m like, “Literally, that is your plan. You don’t know it, but you are literally trying to create a drone control ship, put it in Washington, DC, and then let us shoot [00:59:00] it.” You know what I mean?

And I will personally go do that to show you, if you build it. Because a world where we’re all dependent on a centralized control system, even if the intentions of that control ship are good, which they almost never are, is just a disaster waiting to happen. You can argue how long it will take for the drone control ship to get shot down, but it will. And in many cases it will just shoot itself down.

So you understand [00:59:30] the problem is if people can’t imagine the future they’re trying to create, if they can’t extrapolate forward, they’re screwed. And in that sense we’re screwed. And we’ve done it over and over again, and we’re just going to keep doing it. If we don’t understand this fundamental nature of control, this fundamental nature of what happens when you let control become centralized, then we’re just doomed to see it happen over, and over, and over again. And that’s [01:00:00] what happened with Detroit roadways; it’s the same problem. Of course they’ll be optimal, how could it not be optimal? All the roads are going to be so fast. All the cars are just going to stream through. No one will ever have to wait, because your car will just carry you at 70 miles an hour. That’s not what happens. What happens is [inaudible 01:00:21] traffic, where people spend hours a day waiting. How did that happen? It’s because people failed to imagine, [01:00:30] at the beginning of their design process. That’s one thing.

Of course then, the question is, “I don’t understand, what are we supposed to do? Are you just saying don’t do technology?” I’m a technologist. I finally believe, to put it on the table, that robots are the answer to making a lot of our problems better. Even, by the way, I think robots may be necessary for us to understand what it means to be human. [01:01:00] We’re not doing a good job of that, in general.

For me, robots can be our mirror. They can be an opportunity to make better connections, and more connections. I’m not against technology. For me, I want to displace command and control into your neighborhood, literally and figuratively. I want nearest neighbor interactions to mediate how we move, how we communicate, how we interact. We can do that, [01:01:30] we actually can do that.

It is a fundamental switch from centralized positioning to peer-to-peer positioning. Literally, although most people, again, have problems understanding this: vehicle-to-vehicle communication, especially with accurate positioning, covers a multitude of sins. You don’t actually need to know your global position; humans drive all the time without knowing their global position. [01:02:00] You don’t even need to know your longitudinal position on the road; that’s a misconception.
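The "you don't need global position" point can be sketched as follows. This is an assumed toy controller, not any production V2V stack: a car regulates its speed purely from a relative range and range-rate to the vehicle ahead (the kind of measurement V2V radio or radar gives you). Note that no GPS coordinate appears anywhere, just as a human driver keeps a following distance without knowing their longitude.

```python
# Illustrative sketch of peer-to-peer positioning: speed regulation from a
# relative vehicle-to-vehicle range measurement alone. No global (GPS)
# coordinate is used -- only the gap to the nearest neighbor ahead.

def adjust_speed(own_speed, range_ahead_m, closing_speed_mps,
                 desired_gap_m=30.0, gain=0.5):
    """Return a new speed from relative measurements only.

    range_ahead_m:     measured distance to the vehicle ahead (V2V or radar).
    closing_speed_mps: rate at which that gap is shrinking (positive = closing).
    """
    gap_error = range_ahead_m - desired_gap_m          # positive = too far back
    correction = gain * gap_error - closing_speed_mps  # simple proportional rule
    return max(0.0, own_speed + 0.1 * correction)      # never command reverse
```

For example, at the desired gap with no closing speed the speed is unchanged; too close, it slows; too far back, it speeds up. Everything the controller needs comes from its nearest neighbor, not from a server or a satellite.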

We do have an opportunity to rethink how we’re structuring the world. It will happen; it should happen. Another example of this is that right now there is a war being waged in billions and billions of dollars; many billions of dollars are being spent trying to guarantee that we have a map-based future. [01:02:30] Otto just got bought, Cruise just got bought for a billion. There’s another company called Quanergy that just got valued at 1.6 billion. A great paradox of this: I want my company to be valued at several billion dollars, but I’m fighting very hard, and in many cases against some of my constituents. I don’t want to be a company that forces [01:03:00] us all into a planning-centric, model-based world.

What do I mean by that? Google is literally trying to ensure that you can’t drive in the cities of tomorrow without a connection to their server in Mountain View. From a control perspective, again, that seems like not the right way to do things. If your car has to actually mediate all of its interactions with the world through a data connection to [01:03:30] Mountain View and a preordained map that they have to update on a regular basis: A) that is not good robotics; B) are you seriously not worried about what that would mean, for them to control all that data? If you’re worried about the government controlling this stuff, you should be much more concerned about a few companies controlling it.

I do not mean that Google is evil, I just mean we need to carefully [01:04:00] think about the design of this problem. I would say, even on a more philosophical level, map-based systems always suck. You can’t plan your way through anything interesting; everything interesting in your life will not be on a map. We are fundamentally, as intelligent, motivated beings, [01:04:30] involved in this dynamic peloton that is speeding down the road very fast, and the movements of those cyclists are not preordained; they can’t be. If you tried to do that, you would suck as a cyclist. You cannot path-plan through a peloton, you cannot plan the race.

If Messi were trying to do what computer scientists try to do with robots, he would be a terrible, terrible soccer player. A soccer [01:05:00] player forms nearest-neighbor connections in real time. They recognize patterns, and they respond, and they seize opportunities that they could not possibly have thought about beforehand. That’s true of investment, it’s true of your lesson plans.

I was a teacher for a year, all my worst classes were when I tried to stick to my lesson plan. I just frigging gave up. I love thinking on my feet, if I responded to the kids, it was [01:05:30] a million times better than if I turned into, “Why can’t I get through my lesson plan?” It’s because my plan sucked. It wasn’t like I could make a better plan, it just doesn’t work.

We, when we design our robots, have to think about this. Literally, when we craft robot behavior, which is what I do, or at least what I used to do, we have to build faith into our robotic control systems. We have to make them reactive and responsive; planning, although lord [01:06:00] knows computer scientists love to do it, doesn’t actually work. You can’t path-plan through a busy Boston convention center if there are too many people in the way. The Google car cannot make its way through a crowded parade, because, “Who put all these damn people into my map? I now cannot see any of the features that I needed to see in my map.”
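The reactive-versus-planned contrast can be sketched in a few lines. This is an assumed toy, not the speaker's actual control code: the controller decides from the current sensor sweep alone, every tick, instead of executing a path precomputed against a map that may already be stale.

```python
# Illustrative sketch of a reactive controller: steer toward the most open
# direction in the current range scan. No map, no precomputed plan -- if
# the crowd shifts, the very next tick reacts to the new scan.

def reactive_heading(ranges):
    """Pick a heading from (angle_deg, distance_m) lidar/sonar readings.

    Returns the angle of the most open direction, or None (stop) if
    nothing is open enough to move into safely.
    """
    angle, dist = max(ranges, key=lambda r: r[1])  # most open direction now
    return angle if dist > 1.0 else None           # 1 m clearance threshold
```

With readings of 0.5 m at -30°, 2 m at 0°, and 4 m at +30°, it steers +30°; if every direction closes below a meter, it simply stops and waits, which is how it degrades gracefully when a parade fills the street, instead of complaining that the people aren't in the map.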
This is why a map-based world will not work. It always works great in simulation, by the way. Simulation is [01:06:30] doomed to succeed, but the world is not a simulation. A big part of the push, honestly, is to make the world more and more like a simulation. People want to control the universe; they do want to make the world like, “That art exhibit shouldn’t exist. That’s ridiculous, that Hallberg project. That’s not in the map. What is that? We can’t monetize that, that’s not part [01:07:00] of our plan. Why is that there? Get it out.”

You get my point about that, right? Autonomous cars are the answer; bullshit. If I take an autonomous car and I put it in Delhi, that car would be stuck just like you would be. There is no amount of cogitation or map-based planning that will fix that problem; it will be just as stuck as you [01:07:30] are. In fact, I would go as far as to say … By the way, I’m not trying to pick on Google; it’s not a Google problem, it’s a fundamental problem. The response times of these so-called autonomous cars are actually way worse than an attentive human’s, and humans degrade much more gracefully.

When people say, “No, no, no. If we have autonomy on all the cars, then all the traffic problems go away.” No. At best, [01:08:00] we can fit a few more cars. That’s at best, and that’s not actually the present situation. The present situation is that those cars are phenomenally cautious. They would actually cause complete gridlock in most big cities, far worse than your opportunistic taxi drivers, who are models of smooth efficient driving actually, when it really comes down to it.

You understand my point; which is that cars are the problem, not the solution. [01:08:30] When we think about design, as we think about designing this ecosystem of the future that we want; let’s be honest. What we’re being told is a lie; it is a lie being propagated by folks who want to buy themselves seven more years, to slowly change their business model, period. That is a fact, as much as anything is a fact. It is a consciously [01:09:00] derived business strategy, to tell people that the problem that they really should care about, which is why they spend an hour and a half to two hours in traffic every day, that those good hearted car companies are trying to fix that problem.

That is not the truth. Again, you would say, “What is the alternative?” There are really good alternatives, like a shared mobility system based on peer-to-peer [01:09:30] positioning, where off-highway pods can move around and pick you up. It’s wonderful, and by the way, I would argue it creates community. Cars, to use Rich’s phrase, promote individualism. They hem you in; you are inside of your sheath. A pod, where a pod here is designed as a personal transportation system that you don’t own, that comes and meets you, picks you up, and carries you in a seamless [01:10:00] ecosystem. Think about it like we’re doing this as a segue right now with Polaris and a few others. That is an idea where the community then is all interacting and intersecting constantly.

Do you see the difference? One of them is based on peer-to-peer connectivity and literally causes neighbor-to-neighbor connectivity, and the other one is being controlled by satellites up in space, where everything you do is being tracked and monetized. We have these choices and we have to think [01:10:30] about these impacts, right? Fundamentally though, humans always want to have their cake and eat it too. I just want to be clear. The choice that I face all the time is a fundamental technology choice between optimality and something else, and optimality always sounds great in boardroom conversations. It is impossible to beat optimality. Except that it never works. But in spreadsheets, [01:11:00] it’s always fantastic.

When we make this choice to humanize and to do what I consider good robotics, which I’ll describe in a little bit more detail, it’s really a choice, surprisingly, to [inaudible 01:11:17] optimality, which is really the human condition, by the way. If I tell you to vacuum the floor, you don’t go build up a map of the room and then give it to your computer scientist friend to run an algorithm to figure out the optimal path [01:11:30] and then build an indoor positioning system to make sure that you don’t overlap at all as you move the vacuum back and forth. No, you just freaking go do it. You do it in a sub-optimal way and it’s just fine. Do you see what I’m saying? So I would argue that this increased push towards an always-elusive efficiency goal is actually a great way to make sure everything stays [01:12:00] super crappy.
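The vacuuming point has a nice concrete form: a random, reactive coverage policy (roughly what early robot vacuums actually did) covers the floor with no map, no positioning system, and no optimal path. The grid model below is an assumed toy for illustration, not any product's algorithm.

```python
# Illustrative sketch of sub-optimal coverage: random-walk a "vacuum" over a
# grid with no map and no positioning -- it just bounces around and still
# ends up covering the floor. Sub-optimal, and just fine.

import random

def random_coverage(width, height, steps, seed=0):
    """Random-walk over a width x height grid; return the fraction covered."""
    rng = random.Random(seed)  # seeded for a reproducible run
    x, y = 0, 0
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), width - 1)   # clamp at walls: "bump and stay"
        y = min(max(y + dy, 0), height - 1)
        visited.add((x, y))
    return len(visited) / (width * height)
```

Give it enough steps and the coverage fraction climbs toward 1.0 with zero planning infrastructure, which is the trade being described: a little wasted motion in exchange for no map, no indoor positioning system, and no optimizer.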

Burn your GPS. I mean, be aware that there is a choice where, if you want perfect optimality, you will be tracked, you will be controlled, and everything you do will be monetized. It’s a bit of a conundrum we just have to be aware of, but fundamentally, I would [inaudible 01:12:26] you … We are becoming [01:12:30] robot-like far faster than robots are becoming human-like. When I founded the Human-Robot Interaction conference, together with Alan Schultz and Mike Goodrich, at that time we really thought that we were just going to keep making robots more and more human-centric, and that is not what’s happening. There are some goofy [01:13:00] robots that look like furry animals and stuff, but that’s not fundamentally what’s pervading our world, right?

What’s pervading our world is people essentially being literally told what to do by their phone. Who to have sex with, what books to read, what shows to watch, and in the case of Uber and [inaudible 01:13:19], I mean, to be clear, that is a robotic system. That is a server in Mountain View actually telling humans to go left or right. Do you understand? That is a control [01:13:30] algorithm that has rendered humans into robots. That is what it’s doing. I’m not saying it’s bad, I’m not saying it’s evil. I use it all the time. I’m just saying, be aware that those humans have a choice, right? They can turn off their cell phone, but then they don’t get paid. Do you see what I’m saying? They’re literally getting paid by an AI system that is telling them where to go. It is what it is, [01:14:00] right?

When I then move the conversation on to what it means to be human, because I love this phrase that I was given, I’ll never forget it. It was like, HEC. Humanize, empower and connect. That, sir, is not mine. But what does it mean to be human? Like it is the most fundamental question that has been brought up 10 times this morning. I don’t know. I literally don’t know. I’ve been doing studies on [01:14:30] really, to me, very fascinating issues. Like how many humans can actually imagine? Sounds like a strange question. Apparently, it’s not, from a medical perspective. There are many people who literally cannot imagine. They cannot make pictures in their head and they cannot see forward into the future pictographically. That is an actual cognitive problem. There are robots that literally can. Do you know what I mean? They can [01:15:00] see images, learn from them and project forward into the future so they are imagining. Mixed reality type stuff, right? This is all being done.

If a robot can literally create a future environment for you to then move around in, that’s what imagination is. The robot is actually imagining a future playground for you that is interactive. You get my point, right? Which is that the lines between what we would traditionally think of as human [01:15:30] and robot are seriously becoming blurred. I think it’s kind of cool. To be clear, I’m not saying that’s a bad thing. I’m just saying we really actually do have to think hard about what we mean when we say humanize. Because the reality is, and I’m sorry, most of you are so intelligent I feel kind of bad even saying this, but it is just one of the simple, true things that I think, if you’ll oblige me and let me say [01:16:00] it: humans do a tremendous amount of things which we do not want to capture in the design of robots, right?

My friend Ron Arkin, when he was paid by the Pentagon to do a several-year study on the ethics of robotics, came back with some really kind of obvious but tough assertions that the military scratched their head on for quite a while, which was: well, are we [01:16:30] modeling human ethics in our robot behavior? Because much of what humans do in war is not ethical. What do we mean? To put it simply, do we want to model human ethics? When we talk about the ethics of design, maybe we should specifically say, “Let’s model dog ethics.” Maybe robots should be more like dogs. My dog has better [01:17:00] ethical behavior than I do. You know what I mean? My dog is so wonderful. I have not seen any unethical behavior from my dog. Lots of unethical behavior from me.

I’m just trying to be really bluntly honest, which is: humans are unjust, they oppress, they enslave, they kill, they take advantage of people unfairly. That’s, unfortunately, human ethics. [01:17:30] When we’re modeling that, I mean, you could make the argument it is inhumane to put human ethics into a robot. It might even be inhumane to let humans govern humans, if historic evidence is any guide, right? Forgive me, I’m trying to be provocative. But I’m just saying, what does it freaking mean to have human ethics, and do we want human ethics? Maybe we want superhuman ethics. Maybe we want [01:18:00] post-human ethics. I’m not trying to be overly clever with semantics, but you understand my point. Because when it comes to programming robot behavior, to be clear, you have to have a goal.

You are literally trying to create a pattern of behavior and you have to actually answer really hard questions. Like, if this connected vehicle that is being controlled has to make a choice between killing you, the driver, [01:18:30] or a pedestrian, I mean … There’s an actual survey on that topic, which some of you probably read, right? And it’s split. Again, which is the wonderful thing about humans: you can find a human to say just about anything, right? Many humans are like, “Yeah, you should totally, totally take me out. Especially if you can save two pedestrians. Write that algorithm, take me out. I’m totally cool with that.” Then lots of people are like, “It’s my effing car. [01:19:00] Is this even a question you’re seriously asking me? I want my car to protect me and my family.” But that’s a fundamental ethical question, and [inaudible 01:19:11] we don’t even know what we think is ethical for the most part. That’s a hotly debated topic.