September 28, 2018

Joe Toscano: the ethics of big data

Joe Toscano

Design Good Founder

Joe Toscano is an award-winning designer and former Google consultant who left his role in 2017 due to ethical concerns within the industry. He has since founded a social innovation organization, Design Good, and written a book, Automating Humanity, to educate people about the issues surrounding Silicon Valley and help define solutions to the problems.

Eli: Joe Toscano, founder of Design Good, author, and speaker; welcome to Conversations on DesignBetter.Co!

Joe: Hey, thanks for having me. Glad to be here.

Eli: Absolutely. Before we dive into the meatier content, I wonder if you could just help define something, which I’ve happened on in some of the articles you’ve written: that’s this term “ambient computing.” I think folks are often familiar with artificial intelligence or machine learning, but ambient might be new to them. It was certainly a new thing for me. Maybe you could talk about that a little bit, why it’s important.

Joe: Absolutely. Actually, this goes back a couple years for me. We were doing some stuff with InVision where they were asking designers, “What do you want to do in the future? What are your hopes for this year?” Things like that. I had said, “I want to start focusing on building systems of interactive pieces and technologies that work together and create something more meaningful.”

You think of ambient in terms of environments. You have the ambience of a place, right? It’s something that’s just in the background. We don’t really notice it but it’s there. It works. That’s what these technologies are headed toward.

When you hear about Google focusing on Home or Amazon automating different services within your ordering system, these systems are just working in the background; you don’t have to go play with the screen or get glued to it, clicking around. It’s just technology that works for you.


Eli: Maybe you could tell us the story of how that leads into the work that you’re doing with Design Good and the founding story behind Design Good.

Joe: Design Good is a 501(c)(3) non-profit I started last year when I left Google. I left my job consulting for Google because of ethical concerns, not necessarily just within Google but within the Valley in general. I saw things that were incredibly exciting. Talking about the ambient era of computing, I saw a lot of future tech. I also saw a lot of things that I thought were concerning, and, having traveled around the world talking to people about this future-tech world, what I heard more than anything was people talking about how scared they were of it.

I think that has to do a lot with the fact that if we have this ambient technology world where things are working for us, in a sense, we’re losing control or we are giving it away to the machines. That is hard sometimes, especially when you don’t understand what’s happening behind the scenes. You’re just giving trust for no reason.

These concerns are a big reason why I left. I want to address them head on with the public. I want to help people understand what’s happening. I want to help them feel like they do have some control in this.

Ultimately, my work with Design Good is to help the public regain trust and to provide technologists with the tools and knowledge to create ethical technologies that consumers can trust while still driving the bottom line. That’s my bigger mission with Design Good. Ultimately it’s a trust-based economy, but I think we, as technologists, have a lot of cleanup work to do.

Eli: On your site, I see that you mention the parallels with the Better Business Bureau— almost being a Better Business Bureau for digital products—but you’re in early days still. Maybe you could talk a little about how you see the vision for that being implemented.

Joe: I am actively working with several people who are working on regulation and doing the work to form future ideas for regulation throughout Europe and the United States. I believe what we’re going to need moving forward are auditors, not only for algorithms but also for design systems.

When I say “working on the Better Business Bureau,” that makes it something tangible and easy for people to understand. Essentially, imagine Design Good becoming an auditor, an independent governing body that will assess these industry giants, or anybody who wants the stamp of approval: make sure there are no dark patterns lying throughout, make sure there’s no manipulation of the users, make sure that, in general, the system is safe from a design perspective. Then, moving forward, very similar to the Better Business Bureau, maybe they get to put the Design Good logo on their site, so when consumers come they can say, “Oh, this has been checked out as a safe platform to use. I can trust it.” Again, ultimately this is about gaining trust.

I don’t plan to be the organization that tries to shut down any of these industry giants or anyone. That’s not the goal. I think there’s a lot of room for us to improve our systems in a way that is best for consumers and may force us to slow down in terms of driving bottom line for a little bit. Going in that direction, I also talk through how maybe we need to have tax incentives for businesses to employ engineers and designers to focus on privacy and security, just like businesses get tax breaks for going green, making their operations more eco-friendly. I think it’s a matter of public safety that we work through these issues, and I want to help be the intermediary who has been inside the machine but also has a connection to the people as well.

Eli: Just given the scale of the internet and the number of sites that are continuously being deployed and killed and redeployed, etc., do you see this as an open source effort along the lines of a Wikimedia type approach? What are the ways that you see it addressing the giant scale of this challenge?

Joe: I don’t actually see it necessarily as something that is open source. I actually think we need a lot of different people doing this. If we go to the auditor’s role, I think it shouldn’t just be me. I think there should be lots of different independent auditing systems or organizations, and companies should be required to get a check maybe once a year or something like that. It should be random. I think if you want true transparency, Google, Amazon, Facebook, any of them shouldn’t be able to go to the same auditor each time. Otherwise you build relationships, you get closer to the people, and there’s more of a chance of some kind of insider dealing happening, as opposed to random assignments and random checks.

I don’t think I want to have this reach a large scale. That’s not the goal. I’m not trying to consume this whole industry. Also, I think there’s a certain level where we don’t need to have auditors checking things out. Like right now, the GDPR has a rule where most of it kicks in after you have about 250 employees on staff, which is a weird arbitrary number, right? In the tech world, it doesn’t really matter how many employees you have. You can make millions and billions of dollars. Instagram was acquired for a billion dollars with 15 employees. WhatsApp was in the billions … I want to say $19 billion … and they had about 50 employees at the time. You can process millions and millions of people’s data with very few employees. I think in terms of this regulation, we need to find a balance where it has to do with meaningful measures. Maybe that comes down to, like I said, the amount of data that you’re processing and controlling. Maybe that’s one of the measures. Maybe it has to do with your level of income. Things like that.

Again, going back to the question of scale, I don’t think this is something where I want to be nitpicking every single site. For the ones that have reached a level where it is a matter of public safety that they be checked in on, I think we need a lot of auditors to take on that job because, as you mentioned, there are a lot of those sites on the web.

Eli: Historically, the internet’s been this free and open space where people can come and create a product and have really very little regulation for the most part. I guess that’s starting to change. What do you say to folks who push back on the regulatory aspect of this project?

Joe: As for pushback, I’d say I hear it mostly from startups. I totally understand what they’re saying. That is something that I hope to avoid as well. Regulation that stifles innovation—I think that is dangerous. Not only from an economic perspective, but also, if we stifle innovation within the United States, and in Europe potentially too, that’s a military risk as well. We don’t really think of it like that as designers and engineers in the commercial industry, but our commercial industry is beginning to outweigh our government and military capabilities. We will need some of these larger companies especially to participate in this. We don’t want to stifle innovation in either of those senses.

I hear it from the startups and I fully understand their opinion. I think they should be protected to a certain level. Then you get into bigger organizations. Any pushback you hear is mostly like, “We don’t want to slow down.” That’s all fine and dandy, but we’re not at a point in history where that is the responsible way to respond to this.

I don’t think it comes to shutting them down or breaking them up in a way that is catastrophic to the global economy, which … my fear is we’re on pace for that right now, given the way Congress understands these companies and the way people are pulling the emergency switch without really understanding the insides. That is a huge fear of mine. I think it’s a matter of balancing both worlds: allowing the companies to flourish, but also allowing citizens to feel safe in their own homes, to teach their kids what they need to go to school for, and to move forward in general through what I believe is going to be a generational gap, a transition in technology, and a revolution in the industry.

Eli: You wrote a blog post for InVision about the legal and social responsibilities of being a designer. At a high level, could you talk about what it means to be a digital designer, and what are the responsibilities we have?

Joe: In the article, I make note that not everyone is operating on systems as large as Google or Amazon or Facebook, but we are all creating systems that impact people’s lives at this point. If you think about especially the larger companies in the world, even the people who are processing several hundred thousand people’s data, we, as designers and engineers, have become the governors of this modern government. I believe we have a responsibility to our constituents or users to take care of them, to act according to their needs, to fight for them in the war rooms and in our briefings, things like that. That’s been my goal the whole time.

I actually won an award about a year and a half ago. I was the fourth youngest early achiever from the University of Nebraska. They asked me onstage, out of nowhere, no prep or anything, “What do you love the most about your job?” I told them a very honest answer, which was, “I get to fight for the rights of the people who are using this. That’s what I do every single day. At the scale of the world,” which was an incredible thing to say, you know? It’s something I am very passionate about. That’s why I stepped up, that’s why I’m doing this. I believe we all have responsibilities to do that. It’s not just the people working at the big companies. It is all of us.

Eli: Tell us a little bit about your upcoming book Automating Humanity.

Joe: Automating Humanity will be a foundational offering for Design Good. Basically what I hope to do with Design Good long-term is to create a series of books, tutorials, cutting edge research, resources in general for technologists that focus on ethical design within the ambient era of computing. Automating Humanity will be the foundation of that. It is going to establish where we are right now in the current moment. What has been happening? What’s happening on the back end of these companies that maybe people aren’t thinking about? Trying to give them an industry insider’s perspective on this. Not just, “Here’s what the news media is saying.”

There are a lot of things that people don’t think about when it comes to managing corporations that are reaching around the globe, managing billions of dollars. Yes, they can appear evil at times. Sure, they are maybe not on their best behavior. There are also a lot of challenges I want people to be aware of. I bring that to life.

Then, part two is a short history of the internet, information, and technology: how we reached the point where we are today, why we’re moving so fast, and why we’re probably not going to slow down unless there’s some catastrophic failure in humanity, which, fingers crossed, is not happening.

Then we’ll talk about the future where it could go. In my opinion, right now there’s so much fear mongering going on. Not that people shouldn’t be concerned, because people should. People have a right to be concerned. People also need to know what’s happening and how it could go well. There’s a lot of really good things that could come out of this.

Then, what I finish off with is how we make that future, how we move forward. This includes talking about regulation in terms of how we may consider shaking up the monopolies. Maybe not breaking them up, but shaking things up so that the landscape is more fluid and more open for the world. It also talks through social media and how we add friction to the system. Instead of trying to censor people, maybe we make it harder for people to actually go viral. Not any specific individual, but all people. With traditional media, we had to buy a printing press or a broadcast station or whatever it was. You can find this in Larry Page and Sergey Brin’s Ph.D. thesis: they said the biggest danger to the internet is that there’s no friction anymore.

I walk through those and how we can maybe add some friction to the system in order to make it a little safer and incentivize good stuff to go forward and take away the incentives from the bad actors, right? I go through data and privacy standards, how we make algorithms more transparent to people, how we make it safer in general. A lot of different things get covered there.

I know it’s going to be thoughts that people haven’t considered before. I’ve taken it to technical experts. I’ve taken it to legal experts. Both sides have been like, “Wow. Never considered it this way.” I know there will be a lot of people who read it and just think in a different way after they’ve read it.

Eli: That’s what we all hope for when we write, and hope to change the world in some small way. It seems like you’ve got a lot of ambitious challenges you’re working on. One quick last question before we part. Are there any books or other resources that have been helpful to you along the way?

Joe: There are tons of them. Last year, I think I read 44 books, so I have a lot of them; if you want the full list, you can reach out. Off the top of my head, for the more technical people, Superintelligence by Nick Bostrom is really great. It’s about the theory of artificial intelligence. I’ll be honest: no matter how technical you are, you’re probably going to have to read it a couple of times, just because it goes that deep.

Then, something simpler, more accessible to everyone: I love Weapons of Math Destruction by Cathy O’Neil. I think she brings some really, really technical issues to light in a very simple way. That’s what we need right now: things brought to light in a way the public can understand, but that also allows technical experts to see something they haven’t really thought of before. Those are two that I’ll give you for now.

Eli: You’re actually the second guest to recommend Weapons of Math Destruction. That means I’ve got to go read it. It’s definitely on my list. Well, thanks so much, Joe. Good luck in all of the interesting stuff that you’re working on.

Joe: Thanks for having me.


designbetter conversations