Uncovering the World of AI with Michael Kanaan
Untold Stories of Innovation
“Numbers and language and storytelling are inextricably linked. They're very similar topics— kind of a chicken and the egg problem, which one came first? But I think they come together.” Michael Kanaan, author
From today’s episode you’ll learn:
Michael Kanaan, author of T-Minus AI, blends data with storytelling, old techniques with new ideas, and intended consequences with unintended ones. He shares how a truly compelling narrative is rooted in and inspired by real facts and experiences. Especially when telling an innovation or brand story, numbers and language go hand in hand: they both inform and shape the story, to the point where they ought to come together. That way, the narrative can expose people to new information that is both interesting and applicable to their lives. Tune in to hear how AI, ethics, and human experience all inform each other and what the future may hold, from how AI can improve rather than replace people's jobs and connect rather than detract from social interaction, to how it can move politics forward instead of making agendas easy to disrupt.
Michael Kanaan is the author of T-Minus AI: Humanity's Countdown to Artificial Intelligence and the New Pursuit of Global Power. He was the first chairperson of artificial intelligence for the U.S. Air Force, Headquarters Pentagon. In that role, he authored and guided the research, development, and implementation strategies for AI technology and machine learning activities across its global operations. He is a Forbes 30 Under 30 honoree and is currently the Director of Operations for the U.S. Air Force / MIT Artificial Intelligence Accelerator. His first book, about the global impact of the ever-changing world of artificial intelligence, came out this summer; in it he explains the realities of AI from a human-oriented perspective that's easy to comprehend.
This episode is powered by data storytelling training from Untold Content and Data+Science. Transform your data into powerful visual stories by learning best practices in data visualization and technical storytelling. Whether you’re a PowerBI or a Tableau person—or just want to better communicate your data—this workshop will inspire you to see the stories that lie in the data. Learn more at https://untoldcontent.com/datastorytellingtraining/.
Katie Trauth Taylor: [00:00:04] Welcome to Untold Stories of Innovation, where we amplify untold stories of insight, impact and innovation powered by Untold Content. I’m your host, Katie Trauth Taylor.
Katie Trauth Taylor: [00:00:19] Our guest today is Michael Kanaan. He is the author of the new book T-Minus AI. He’s also director of operations at the U.S. Air Force, MIT Artificial Intelligence Accelerator. Michael, I’m so grateful to have you on the podcast to discuss artificial intelligence and innovation storytelling.
Michael Kanaan: [00:00:36] I’m grateful to be here with you, Katie.
Katie Trauth Taylor: [00:00:38] I mentioned this when we first hopped on our call just now, but I have had my nose in your book for the last several days. I am devouring it. I'm about 10 pages from being finished, so you'll have to help clue me in on how it ends. But it is such a powerful read, T-Minus AI. It's just out, and it really covers everything we need to know about artificial intelligence. Even for those of us in the innovation community who may not work really intimately or directly with A.I., it leaves us with a very clear understanding of its implications, of what constitutes AI, what counts and what doesn't, and what we ought to be paying attention to.
Michael Kanaan: [00:01:17] Yeah, and that was a goal for the book. I mean, we're talking about a technology that is ubiquitous to every interaction of our lives and that will only grow over the years to come. So what I wanted to do in the book was bring that in a very human, anecdotal format. If you want to talk about artificial intelligence, you kind of have to know a little bit about evolution or biology. The skills of numbers, both big and small, how they impact us, some basics of how a computer works, of course. Language, our brains, learning. And then let's get to: what is A.I.? How does it work for us? Because without the context, the conversation is too often lacking in depth, and lacking a common foundation and a clear understanding of what it is. And then what we talk about every day: AI in competition, AI in business, AI in international relations. So I wanted to break the book up into three parts that were told in that way, so that regardless of who you are, there's something individually meaningful to you.
Katie Trauth Taylor: [00:02:27] Yes, well, you accomplished it. And I strongly recommend anyone listening to this conversation to read the book. One of the things that I love, and perhaps this is my flaw as a former English professor, is that you get into the human psyche, into human history and talk about how machines, machines and artificial intelligence have to reflect human intelligence and how we think and how we talk. And one of my favorite parts was how you sort of compare the origins of math to the origins of storytelling.
Michael Kanaan: [00:02:58] Yeah, numbers and language and storytelling are inextricably linked. They’re very similar topics and they’re ones that, you know, kind of a chicken and the egg problem, which one came first? But I think they come together. And what you want to do is expose people through storytelling. Humans learn best through storytelling. And that was, you know what I hoped to get across. And I’m so happy that you received that.
Katie Trauth Taylor: [00:03:24] Yeah. And of course, then the book dives really deeply into why we should be paying attention to A.I. on a global scale. Who owns it? What are the risks, threats and opportunities? And so I would love to… I'd love, though, before we kind of dive into all of that, to hear your personal story of innovation. What got you into the world of A.I.?

Michael Kanaan: Well, maybe it goes all the way back to when I wasn't alive, in 1956, because AI has been a topic that has long been discussed and debated. And in 1956, at Dartmouth, a group of really brilliant people got together and they could see what was possible in the future with the rise of machines, data, the way that we can memorialize everything around us. They set a definition of artificial intelligence. They said: "a computer performing a task that was once deemed in the human domain." When you think about that definition, though, you can understand that since 1956 we've anthropomorphized so much of what A.I. is and keep kicking the can down the road, because based off that definition, then surely a calculator was artificial intelligence.
Katie Trauth Taylor: [00:04:43] Right.
Michael Kanaan: [00:04:43] Then the TI-84 Pluses that we had were an even better one. Then Excel. Nowadays, Tableau.
Katie Trauth Taylor: [00:04:50] Right.
Michael Kanaan: [00:04:51] And we just keep kicking the can down the road. So for myself, as we kept kicking that can down the road, I came into working at the National Air and Space Intelligence Center when AI came out of what I think is its last winter, in about 2011. In 2011, once again, we said A.I. isn't real. But because of the rise in our ability to collect data, cloud and everything else, some advancements in computing architectures, some new math and software. Right? Math expressed in software. Machine learning came into being. And it worked. And at that point in time, it was the ImageNet competition. Now, the goal with this ImageNet competition is to grab a bunch of pictures from the Internet and put them in a database somewhere. Generally it's like cats and things, right? Because everything on the Internet is cats. And then run the computer against the human. And 2011 was the first time that the computer could outperform the human in these discrete tasks. Voila. Here we are, the machine learning age. Now, at the same point in time, in 2011, as I mentioned, I was at the National Air and Space Intelligence Center and I was responsible for a mission called Aces High. And it was a hyperspectral imager. So this is going to get nerdy, but I'll try to make it common-speak.
Katie Trauth Taylor: [00:06:22] I love it. Yeah, let’s dive in.
Michael Kanaan: [00:06:22] OK, so it's a hyperspectral imager. You and I see in three color bands. A mantis shrimp sees in something like eight or nine. There's a philosophical conversation we could have: what does the mantis shrimp see that I do not? Right.
Katie Trauth Taylor: [00:06:35] Right.
Michael Kanaan: [00:06:36] But this hyperspectral imager, it could see in hundreds of color bands. So we're running operations from the National Air and Space Intelligence Center in the Middle East and Afghanistan. And we put this on one of our unmanned aircraft. And our goal was, based off of the image you're collecting and the reflectance of sunlight hitting the ground, because there are so many color bands it's collecting in, if something is spectrally significant in certain color bands, then you could deduce or identify what that material is. Think of homemade explosives and the like.
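The material-identification idea Michael describes, matching a pixel's reflectance across many color bands against a known spectral signature, can be sketched with a toy example. To be clear, this is not the actual Aces High pipeline: the 200-band signatures below are synthetic, and the spectral-angle measure is just one common way to compare spectra.

```python
import numpy as np

# Each pixel in a hyperspectral image holds reflectance values across
# hundreds of bands; a material is flagged when a pixel's spectrum is
# close to a known reference signature. Real systems use measured
# spectral libraries; these signatures are randomly generated.

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectra; a small angle means a close match."""
    cos_sim = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos_sim, -1.0, 1.0))

rng = np.random.default_rng(0)
reference = rng.random(200)                   # known material signature
match = reference + rng.normal(0, 0.01, 200)  # pixel containing the material
other = rng.random(200)                       # unrelated ground pixel

# The pixel containing the material sits at a much smaller angle
# to the reference than an arbitrary pixel does.
print(spectral_angle(match, reference) < spectral_angle(other, reference))
```

Scanning every pixel this way, and thresholding the angle, is the rough shape of "if something is spectrally significant in certain color bands, you can deduce what that material is."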
Katie Trauth Taylor: [00:07:18] Yup.
Michael Kanaan: [00:07:18] So our goal was to run this mission and it is solely to save American lives, essentially say, “wait a second, don’t turn down that street because something is there.”
Katie Trauth Taylor: [00:07:30] Yeah.
Michael Kanaan: [00:07:30] And we were wildly successful. The team was an incredible group of individuals who took this newfangled machine, and we had to make it work for us. Right? About 30 or 40 people, brilliant minds, and really successful. But at the same time, this ImageNet thing is going on. And while it maybe took us a certain amount of time to be able to alert people to what was happening on the ground, I said to myself, wait. What about this artificial intelligence thing? This could surely help us do that faster or more precisely. And in 2011, as is still the case, most people said, I don't know about that AI thing. It's not real AI, right? So my love of artificial intelligence, and why I moved down this path, truly came from a place of need. To do something for someone else, in the name of customer service, in the name of service in general. And from that point in time, it's been a nine-year-long journey to the point we're at now, where I think the world is opening its eyes to its seriousness, its applicability to their everyday life and how it influences them. But when it comes to a story of innovation, that's a story of artificial intelligence. That's just my personal story. But when it comes to innovation, we have an innovator's dilemma that I often think about. The dilemma is: I want things to change. I am unhappy with the current state of being, or the current state of being could be better. But I'm reminded of a quote: "the limits of my language mean the limits of my world." So as innovators, we still have to be able to communicate. I think of the idea of taking the ideas of the new and blending them with the techniques of the old, because you can't just do it alone. And by the way, nobody appreciates just malware in the system, right? Without a goal. So sometimes for innovators, what I think is important is to help yourself help yourself. Right? To speak that language to, you know, the overused term, the #OK-boomer crowd.

But we have to be able to communicate with them, because otherwise it's just noise in the system. So one common foundation or common denominator to everything was always someone who is a champion, someone alongside of you. And I think it's important for innovators to remember that you're going to stress yourself out unless you're speaking the language of the people that you want to change. The best way to hack a bureaucracy is to understand a bureaucracy.
Katie Trauth Taylor: [00:10:29] Yeah, absolutely. So thank you so much. It's exciting to hear what led you to the work that you do today and some of the successful missions that you've accomplished. And then generally, just to hear your perspectives on storytelling, the role that it plays in helping people get buy-in and traction and, as you say, speak the same language. I really appreciate that. And I think you shared a couple of examples already, but on pages 128 and 129 of your book, you have this nice table where you outline the many different sectors and the ways in which artificial intelligence holds the potential to make an impact and create good. Everything from pharmaceutical research and development to retail inventory and pricing, DNA sequencing and classification, aerospace research, climate analysis. And the list goes on and on. But I'd love to hear some more of your favorite innovation stories around AI that get you excited about the future. And then we'll talk about the "dark side," too.
Michael Kanaan: [00:11:28] Oh, sure. I’m glad we’re going to the “light side” to begin with, right?
Katie Trauth Taylor: [00:11:33] Yes. Yes.
Michael Kanaan: [00:11:34] In a very meta way, A.I. is all about innovation. Our goal when we're doing an artificial intelligence project or bringing it into our organizations is simply this: ask new questions. We don't always think about that, because these words "automation" and "AI" are used too interchangeably all the time, and AI gets a bad rap. We think that it's going to replace the bottom of our workforce. That's incorrect. Totally incorrect. You won't have a successful A.I. project that way. In fact, what you should do is move to the top of your workforce, your subject-matter experts, the best people you have. And what you want to do is start looking at the world through the lens of A.I. So I'll give you an example. In your life, you want to think about something you do all the time, right, that you are highly accurate on. You have to be accurate with that prediction, with that task, with that due-out, with balancing the budget, the books, whatever it is in your personal life. Everyone has them, right? Something that ideally moves at high speed, where quick decisions are made. So high accuracy, high speed. And then the other one: high volumes of data, like you're looking at a lot of stuff. Think about case law, right? And precedents. We have all of these attributes to certain jobs in our lives. So what you want to do is find all these examples, or the data that you have, put that all together, and say, "wow, I have this highly representative data set that is a lot of examples of what I do." OK? And then here's the rule, though. Imagine, if you will, that artificial intelligence isn't real. It's not a thing. It's just this island of I.T. people who are capable of taking on all your tasks. But the rule is you can't give them directions, only the examples we just talked about. If you do those two things and think with this kind of nuanced paradigm shift, and I don't mean to be pedantic in any way, then you've found your AI problems. Because what ends up happening? You take that representative data, you give it to that software, the imaginary A.I., of course, and what does it do? It illuminates insights. The very purpose of machine learning is to discover human patterns. So when I think about what's your favorite story on innovation? Well, by asking a new question, it's simply that. It's exactly that whole process. And I also like talking about A.I. or innovation in some different ways as well. Often we kind of umbrella everything. Everything is innovation. But it can be a singular noun, too. An innovation on the system. A new question that you're asking. So as it comes to what's the good of it? I think it can make us more human. I think we can get out of computer tasks that saturate our lives. Our jobs are too often computer jobs. And by the way, if an AI or automation could replace your job or someone in your workforce, that person shouldn't be doing that job. Right? That's not a person-job. So when we talk about the good, it's all about asking new questions. And I think that's special, particularly at this moment in time where we need to do that in society. And AI can help us get there.
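That "examples, not directions" rule can be sketched with a deliberately tiny Python example. The invoice-review task and its numbers are invented for illustration, and a trivial nearest-neighbor learner stands in for real machine learning; the point is only that the decision rule is never written down, it's inferred from labeled examples.

```python
# Instead of writing decision logic ourselves ("flag anything over
# 5000"), we hand over only labeled examples and let a 1-nearest-
# neighbor learner infer the pattern from them.

def nearest_neighbor_label(examples, query):
    """Return the label of the example whose value is closest to the query."""
    return min(examples, key=lambda ex: abs(ex[0] - query))[1]

# Historical examples of a high-volume, high-accuracy task:
# (invoice amount, whether a human flagged it for review).
examples = [
    (120, "approve"), (95, "approve"), (80, "approve"),
    (9500, "review"), (12000, "review"), (8700, "review"),
]

# No threshold rule exists anywhere in this code; the behavior
# comes entirely from the examples.
print(nearest_neighbor_label(examples, 150))    # near the small invoices
print(nearest_neighbor_label(examples, 10000))  # near the flagged ones
```

If you can assemble a representative pile of such examples for a task you do with high accuracy, at high speed, over high volumes of data, that is, in the spirit of the rule above, a candidate AI problem.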
Katie Trauth Taylor: [00:15:25] I love that. Thank you so much for sharing. And you go into this idea in the book about how AI or machine learning, what really drives it is the data that we put into it. And so one of the things that I love is, you know, people, when it comes to artificial intelligence, they can feel like it’s a very large concept, very far away from them, right? It’s this idea of robots taking over the world. And you really kind of contrast that in this book and say that’s fine. You know, I’m not saying those conversations aren’t valid, but if you’ve got a fire at your door and there are immediate issues to resolve when it comes to the way the AI is capable of making things happen right now, then that’s where we need to be focused. And that’s where we can either leverage it for good or protect against its misuses.
Michael Kanaan: [00:16:14] Exactly. When there’s a fire at the door, you’re not so much worried about the lightning in the distance.
Katie Trauth Taylor: [00:16:19] Right, right.
Michael Kanaan: [00:16:20] Every day I have conversations about, well, what about the killer robots? What about artificial intelligence on weapons? What about X, Y, whatever it may be. When today, the current state of artificial intelligence is creating dystopian societies. It's biasing against people. It's affecting hiring actions. And I know listeners can't see us, you and I are on a video right now, but it's just hiring more old white guys like myself.
Katie Trauth Taylor: [00:16:51] Oh, my gosh. That was one of the most powerful examples in the book. There is. Oh, my goodness. OK, if you can't read anything except for, like, one chapter, read "Bias in the Machine." I loved that chapter so much. Could you dive into the Microsoft Tay Twitter example and the Amazon hiring one? Or I can even repeat it back, because I read it again this morning. I love it so much. Not to put you on the spot, but…
Michael Kanaan: [00:17:14] Of course!
Katie Trauth Taylor: [00:17:15] But this idea of, you know, and this is a little bit getting into the “dark side” of things, but with machine learning come the same human biases, potentially, that we put into the research and the questions that we posed. The question, the way we pose it, has an impact on the data that gets collected as a result. And then our analysis can also be full of potential bias. And one of the great examples you share is [that] Amazon had a hiring algorithm around resumé reading.
Michael Kanaan: [00:17:43] Yeah, AI is like looking in a mirror, period, right? I think, to a certain degree, there's a subconscious aversion or distaste to looking in the mirror and knowing more about yourself, even if you don't quite understand it. Sure. And all it's going to do is reflect and formulate predictions of the current state of affairs. But this can be very good, though, right? Because the difference is that when Amazon identified that they were hiring many older white gentlemen, and the algorithm was just biasing toward that, it's like, well, of course they are. I mean, that's what many of our companies look like right now. We're trying to get past that in society. But the difference was they were held to account. Right? The difference is that people said, "that is unacceptable and we must change." When it came to Microsoft Tay, which, by the way, for people listening, was a Twitter bot that collected a whole bunch of, essentially, how we interact as humans. And surprise. It was basically the worst of us.
Katie Trauth Taylor: [00:18:52] Yeah.
Michael Kanaan: [00:18:53] Right? It was absolutely the worst of us.
Katie Trauth Taylor: [00:18:55] Yeah.
Michael Kanaan: [00:18:56] Very depressed…very…
Katie Trauth Taylor: [00:18:58] The algorithm was, yeah. The algorithm driving this sort of nineteen-year-old-profile bot named Tay was based only on the comments that she would get on her Twitter, her tweets. And so within hours Tay was putting out racist and sexist tweets, because those were the comments that were coming back on her first, her earliest tweets.
Michael Kanaan: [00:19:22] Her first “hello world.” Right?
Katie Trauth Taylor: [00:19:25] Yeah.
Michael Kanaan: [00:19:25] So that's, you know, we're talking about… What we're talking about here is, again, back to the point: machine learning applications are only designed to analyze data and formulate predictions, without guidance from us. But it's just based on data, which, again, is a reflection of us. Data has always existed, right? It's just that now we memorialize it. Like the tree that falls in the forest: does it make a sound? Well, of course it does. The question is whether something is there to record it. Nowadays, we have everything to record it. So if an algorithm's analysis is just based on data, that doesn't mean its output will be neutral or objectively fair, because biases will be reflected in our data. And when they are, it stands to reason that every subsequent strategy, analysis or prediction based on the data will be biased as well. And then if we make decisions on those answers to those questions, then of course the underlying biases will perpetuate in all of our lives. Most of us do believe, at the core of the matter, that we're fully aware of and consciously in control of our biases, inclinations and opinions, and that we can intentionally include or exclude them however we see fit during a never-ending day of decisions. Like not walking in front of a car: you're biased against that. That's not a good idea. But we're not in control of them. Or take the fact that I don't like olives. We're unable to separate ourselves from our biases, or our biases from ourselves, to get philosophical. And we're not even aware of the prejudices we hold, and we're unaware of the many ways they influence our behavior in answering those questions. So regardless of how objective, unbiased or enlightened each of us thinks we are, we have tendencies, aversions and distastes. It defines who we are. So the point is, when you're moving forward on an AI project in your organization or anywhere, you have to have representation of everyone to ask those questions up front.

What could be the tertiary side effects of this? And I think that's what is special, and why AI should be a topic for everyone. The future rock stars in artificial intelligence are ethicists, lawyers, teachers, parents. Right? So many more people need to be involved at the beginning. It's not just for those I.T. people, because the questions we're trying to solve and the questions we ask are really important. Now, back to the point, though: was Tay or the Amazon hiring algorithm bad in the long term? I don't think so. I don't think it was necessarily bad. I think it was a good thing. I think it illuminated something that perhaps we thought was true; we found out it is true, and we changed. They still do not have that algorithm in practice, and Tay doesn't exist anymore.
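The mechanism described here, biased historical decisions producing biased predictions, can be made concrete with a deliberately tiny sketch. The hiring records below are invented, and the "model" is just a majority vote per group rather than anything Amazon actually used, but it shows how learning from prejudiced decisions reproduces the prejudice.

```python
from collections import Counter

# Invented historical hiring records: (group, outcome).
# Group B was rarely hired in the past, reflecting human bias.
history = [
    ("A", "hire"), ("A", "hire"), ("A", "hire"), ("A", "no_hire"),
    ("B", "no_hire"), ("B", "no_hire"), ("B", "no_hire"), ("B", "hire"),
]

def train(records):
    """Learn the majority outcome per group from past decisions."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(history)
# Otherwise identical candidates get different predictions purely
# because the training data reflects past bias.
print(model["A"], model["B"])
```

Nothing in the training step is malicious; the bias rides in entirely on the data, which is why every subsequent prediction built on that data inherits it.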
Katie Trauth Taylor: [00:22:29] Right.
Michael Kanaan: [00:22:29] In other countries, though… And I'm looking at you, China and Russia and some other places. They don't get to say no. They don't get to say that's unfair. And by the way, I should rephrase: I'm looking at you, Chinese Communist Party. You, Russian Federation. Right? Not the Chinese people, not Russian citizens.
Katie Trauth Taylor: [00:22:53] Right, right.
Michael Kanaan: [00:22:54] And they don’t get a say in those questions. So I think it would be… It would be a travesty with the rights that we’re afforded here to not hold people to account, to not understand it, at least to the extent that we can communicate about it. To ask better questions, so that we don’t become more like that. That’s what… and I think that’s what’s special right now.
Katie Trauth Taylor: [00:23:19] Absolutely. You know, the fact that when the hiring algorithm at Amazon, when it was discovered that it was pushing more women's resumes to the side and elevating men's resumes, just because the machine learning looked at the history of hiring data and saw that there were more male resumes, it therefore interpreted that as being desirable and perpetuated it, not unlike the way that we did as humans, for decades and decades. And so… But you're right. I think there's something really powerful about that metaphor you used, holding a mirror up to ourselves. And the great end result of that is that Amazon no longer uses that algorithm, and if they would build one in the future, they will try to accommodate or change based on those biases. And you're bringing us to perhaps the most critical part of your book, which is: when companies and institutions are utilizing artificial intelligence in a globalized way, even if those companies are headquartered in different countries, the ways that innovation is put to use in different cultures and contexts can differ. And the rules and the regulations protecting against security and safety threats are also different. So can you speak to that aspect of A.I. and what we should be paying attention to?
Michael Kanaan: [00:24:47] Well, it's the question of, and I think we're of course just going to leap ahead a little bit, who really is responsible for making these choices? Is it the developer who made the A.I. and then put it on GitHub, and then somebody did something wrong with it? Right? Because its worldview, or the data that it was fed, was not representative of its scope of impact. I think of it like an X and Y axis. Right? On the Y axis, we have what I label worldview, right? Or data. Because data is akin to experience for a machine; that's how we learn, just from experience. On the X axis, you would have its scope of application. How many people is it affecting, and in which way? And is its worldview fair, representative of the number of people it's impacting? So how would this play out in real life? I certainly don't want an Alexa or a Google Home in my home that was only trained on Southern white gentlemen or people only from Northern California, because its scope of application is broader than that. It's in everyone's home. Now, if it was just, you know (this wouldn't be an AI solution), but like a telephone switch operator or something, right, then fine. Maybe its worldview doesn't need to be very large to perform that action. So when you start kind of mapping out where things fall on this X and Y axis, while we deal with explainability and all these anthropomorphized words of, well, why did AI make that decision and how do we de-bias things and whatever, at least we can start saying, "no, I think that's fair to its scope of influence or scope of impact." Right? And then the question is, when it's used poorly, whose fault is it? The person who made it? The company who owns it, on whatever platform or software they have? Is it the government? Sometimes when we talk about A.I., it's like we throw the kitchen sink and the whole kitchen out the window.
Katie Trauth Taylor: [00:27:07] Right.
Michael Kanaan: [00:27:08] If you kill someone with a hammer, it is not the fault of Ace Hardware or of Black & Decker or whoever made the hammer. It is you. It is what you do with it. It is what your organization did with it. Right? And we have to be accountable for those things. So the way that you see this play out broadly is: well, in different places they have different biases. Again, bias isn't necessarily a bad thing. You don't want to de-bias everything, right? If I don't like olives, I don't want my algorithm de-biased of the fact that I don't like olives and have it start serving me olive-filled dishes. Right? We just have to make sure that it's fair. But what I think is interesting is that we as citizens, and we as a government, do have a role to play. So, a thought experiment. Let's say you and I, Katie, are at one of these really large Fortune 500 publicly-traded companies, right? And the conversation is, well, we need to be morally and ethically and legally sound with artificial intelligence. That's the right thing to do. And I want to commend all these companies and their ethics boards. It's, I mean, truly, bravo. At the same time, let's imagine we're in that room. You probably have 10 or 15 really, really awesome, bright people sitting there saying, I want to do the right thing. Inevitably, you get about three minutes into the conversation, like we have here, and it leads to: well, we have to share that data so it can be representative of those people; we have to share that algorithm so that we get rid of this whole "you're an Apple, I'm a Droid" thing. Right? So that we can represent all and be ethically sound and do the right thing. And inevitably, in that room is also general counsel from a really reputable institution like Stanford Law. The attorney sits back there, raises his or her hand and says, hold on one second. You have a fiduciary responsibility to your shareholders not to do that. Right? I mean, because that's your intellectual property.

So as the conversation moves on, inevitably our own structure in some ways limits us. But who do we have a fiduciary responsibility to as citizens and as a government? Everyone. To everyone out there. So it calls for a reinvigoration of that conversation. Now, let's be clear as well, though, very quickly: you could say, well, yeah, in that case, if we want AI to be fair, there shouldn't be an Apple. There shouldn't be a Droid. There should just be one. Then all of a sudden, you start looking like China, with one platform like WeChat, where people don't have options. So you can see the slippery slope that can happen very, very, very quickly. What it really means at the end of the day, to the question you asked, is: you shouldn't throw the whole kitchen out the window, right? There are still frameworks in place that work, even though we said the AI word. Right? It's OK. But I think it's high time that we start, you know, carving out some new square holes for square pegs instead of trying to force the fit. And that comes from being informed, or at least generally aware, of the topic itself and the secondary or tertiary effects that could happen from doing one of these projects. And I think that kind of wraps up, "well, what do we need to think about right now?"
Katie Trauth Taylor: [00:31:09] Absolutely, that question of what companies do with their data and who that – who those actions benefit or harm is so complicated and of course, it varies by company to company.
Michael Kanaan: [00:31:20] We forget, you know, when you're not paying for something, you are the product, right? I mean, if you're not paying for that, you are the product for somebody else.
Katie Trauth Taylor: [00:31:29] We do. And I mean, we give up a lot of that in exchange for personalization. Right. Welcome. Welcome to Amazon, Katie. Here are some recommendations for you.
Michael Kanaan: [00:31:38] Yup, here are some recommendations. Here's your bunny face on TikTok. It's great, and I appreciate it. Right. It's an awesome capability. But think about the conversation you'd have to have with someone about that exchange: cost-free capability in their life, something that we're used to, while somewhere down the chain you're informing an algorithm that's keeping Muslim Uighurs at bay in China, right? I mean, if you were on that platform, vis-à-vis the AI is training on you, becoming more robust, and you can see how that long chain ends up panning out. That's an intellectually tough argument. You've got to really understand how that could even happen.
Katie Trauth Taylor: [00:32:29] Could you tap into that example, can you dive into that a little bit more for folks who are listening, who are less familiar with some of the implications?
Michael Kanaan: [00:32:38] Sure. We talked about the extent to which more data makes algorithms more robust and more performant at whatever tasks they were meant to do. So when you’re sitting on a platform and, for instance, you know, you put the bunny face on your face, well, that’s computer vision, right? I mean, AI is all around us, literally, when you open your phone and it looks at your face for facial ID so that you have security and privacy on your phone. That’s artificial intelligence. But let’s imagine, though, that perhaps that’s a platform from a company that you don’t quite agree with. Right? One that doesn’t quite see the world or your culture the same way you do. Well, interestingly enough, remember back to that data point: you’re training that artificial intelligence. Right? You’re making it more robust. And then you have to ask the question, well, tell me what a company like Baidu, Alibaba, or Tencent is doing with that stuff. And you might find out, after you go down the long chain, well, I actually don’t like that. That’s compromising someone else in the world. So that gets to the point, too, that everyone should be involved in this conversation, from consumer to developer to supplier. You’re a part of the AI chain in some way.
Katie Trauth Taylor: [00:34:07] Yeah, yeah. Absolutely.
Michael Kanaan: [00:34:08] And that’s why we want to be able to have, you know, foundationally robust, intelligent conversation on the topic.
Katie Trauth Taylor: [00:34:17] Absolutely. Thank you so much for pointing that out. As everyday consumers or citizens, we have a stake in this game. As innovation teams, we have a stake in this game. And like you said, maybe it’s not to take full responsibility for every misuse or use case that emerges after we’ve created something, but to have that conversation and, to the best of our knowledge, try to anticipate it. As innovation leaders, when we communicate up the chain and try to get buy-in for new projects and ideas, it’s quite critical that we spend at least a little time articulating what those other use cases might be, or seeking, you know, the opinions of experts who can help us think through that. And we can’t always know; that’s what’s so challenging. But we do our best to work with the ethics we have in front of us, the decisions we have in front of us.
Michael Kanaan: [00:35:05] You’re right. We can’t always know. And that’s OK. We’re going to make mistakes. The question is: did you have the right kind of intent? Did you do due diligence, right? Can you stand in front of someone and say, well, that had a side effect I did not realize, but here’s how we mitigated it and thought about it, and now we’ll change? And that’s OK. It’s OK. Dive in, dive into using AI in safe spaces. If you’ve got a lot of Excel files, you can use machine learning.
Katie Trauth Taylor: [00:35:41] Yes, absolutely.
Michael Kanaan: [00:35:42] If you… if you’ve got a lot of financial docs, you can use it. You know, there’s something for everyone.
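To make Kanaan’s “safe spaces” suggestion concrete, here is a minimal sketch of a first machine-learning experiment on spreadsheet-style tabular data, using pandas and scikit-learn. The column names and labels are hypothetical stand-ins for whatever your own Excel files contain, not anything specific Kanaan describes.

```python
# A minimal, low-stakes machine-learning experiment on tabular
# (spreadsheet-style) data, the kind of safe space Kanaan suggests.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for rows you might load from a real spreadsheet with
# pd.read_excel("invoices.xlsx"); these columns are hypothetical.
df = pd.DataFrame({
    "amount":    [120.0, 35.5, 980.0, 45.0, 1500.0, 60.0, 720.0, 25.0],
    "days_late": [0, 2, 30, 1, 45, 0, 20, 3],
    "flagged":   [0, 0, 1, 0, 1, 0, 1, 0],  # label: was this row flagged?
})

# Split features and label, holding out a quarter of rows for testing.
X, y = df[["amount", "days_late"]], df["flagged"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a small off-the-shelf classifier and check held-out accuracy.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

With real financial documents you would swap the toy DataFrame for `pd.read_excel` on your own files; the train/test/fit pattern stays the same.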
Katie Trauth Taylor: [00:35:47] Definitely. And I know we’ve talked a lot about data, numbers, but at Untold we often talk about data storytelling too. And so can you share with us some of your thoughts around storytelling, the role that it plays in AI and its success or not?
Michael Kanaan: [00:36:04] Storytelling is one of the most important things, and it’s something special about humanity that we can communicate stories, that we can imagine ourselves in the shoes of others without necessarily depicting or experiencing something ourselves. For instance, if I describe to you a woman running down the street with a bucket of water, right, and it is splashing everywhere, perhaps you’ve never done that. Maybe you have. But you’re like, oh, I can imagine that, right? Storytelling creates buy-in, and it creates experiences that we can understand and that are meaningful to us. And I think reading is important, too. I mean dedicated reading and storytelling. Right? So I think back to the seminal books in my life. They’re works like If You Give a Mouse a Cookie, Where the Wild Things Are, or maybe Goodnight Moon. And I know, I know, I’m referencing some children’s books, but don’t worry, I’m going somewhere.
Katie Trauth Taylor: [00:37:13] That’s fully half of my professional life right now, living and working at home with COVID: all of the books you just said, with my one-, four-, and five-year-olds.
Michael Kanaan: [00:37:23] Oh, wow, these are the best books, right? Those are my favorites.
Katie Trauth Taylor: [00:37:26] Oh yeah.
Michael Kanaan: [00:37:26] But there are also books like [Carl] Sagan’s Cosmos, or [Stephen] Hawking’s The Universe in a Nutshell. Or, you know, when I was 10 years old, reading Brian Greene’s The Elegant Universe over and over again. Maybe most recently [Yuval Noah] Harari’s Sapiens. And there are favorites like [Leo] Tolstoy, Virginia Woolf, [Aldous] Huxley, and so many more. These books and storytelling have something in common. People, since the dawn of language and, eventually, writing, have debated and discussed consciousness, theories of physics, biology, social realities, technology, and all the rest of the things that constitute the human experience. The average person hears about them. Shoot, I mean, we experience them every day, and we know of those words, but not always what those words mean. Essentially, every topic can be brought to light, but storytelling brings something to life and then inspires more. And I think that’s a distinction with a difference. So I look back and think of learning, which is really what we’re talking about here, right? Learning through storytelling. I think it’s centered around dialogues. Maybe that’s with someone else or others, but maybe that’s with yourself too, the internal one; that’s really important. And for me, when we talk about AI, the concepts of consciousness, experience, social order, biology, the whole human story come together in the story of AI. Now, when we talk about innovation, right, which is my personal one, we want to tell stories so that people can experience that idea and take the aspects of it that mean something to them, right? Don’t run down the street with that bucket full of water; walk, right?
That’s a lesson, you know, that we can take away, just like when we tell the story of an innovative group, or the creation of Post-it notes, or whatever it may be for you. There are things you can take away, and that is the value of storytelling to innovation.
Katie Trauth Taylor: [00:39:37] Thank you. Yes, absolutely. I really appreciate those points. It’s about buy-in, about creating an experience, about drawing upon the human capability for empathy. And just to wrap up our conversation, because I know we could talk all day. This has been wonderful, and I’m so grateful. It’s fascinating to me, again, this idea of the mirror and the way we need to think a little bit differently when we’re considering what applications to create with AI; it’s a change of mindset. It’s an interesting position for innovators to be in, because on one hand, you need to think about how to create the right circumstances for a computer to learn, and that’s very different from the way a human learns, at least to some degree, right? It’s about data and building the data set. And then we also still have to storytell to other humans to get buy-in for those efforts, to get feedback, to refine the approach, and to think about the impact it could have and how it’s going to better people’s lives, in whatever way that means. So it’s not an easy job to be someone who is innovating with AI right now. What I’m hearing from you is that it’s not just about being really smart, working with data, and building algorithms; being able to be a storyteller is still critical to the success of AI innovation.
Michael Kanaan: [00:41:02] It’s so renaissance, right? You have to become the Renaissance woman or man right now.
Katie Trauth Taylor: [00:41:12] Yeah.
Michael Kanaan: [00:41:12] The whole kit and caboodle: writing and storytelling, and technical proficiency, or at least enough of it that you can see what’s going on. And when you brought this up, I was thinking of the underlying theme of storytelling. There’s a quote often attributed to Einstein: “You don’t really understand something unless you can explain it to your grandmother.” And I think that’s true. But if Einstein had known my own grandmother, he would have altered his words slightly, and a more precise adage would be: your grandmother is likely the smartest person you’ll ever encounter, so if she doesn’t understand your explanation, it is sure that no one else will either.
Katie Trauth Taylor: [00:41:54] That’s right. I love it. Thank you for re[phrasing]…
Michael Kanaan: [00:41:56] And that is… That can be a theme to every aspect of our lives, every aspect of business, this connectedness and an ability to story-tell.
Katie Trauth Taylor: [00:42:08] Incredible. Thank you. Thank you so much, Michael. I really, really enjoyed your book. I know listeners will, too, and I really enjoyed this conversation. And thank you for flipping that quote on its head. It’s really lovely. I’m glad we could end on that note.
Michael Kanaan: [00:42:23] Thanks, Katie. It was awesome to be with you today.
Katie Trauth Taylor: [00:42:25] Talk to you soon.
Michael Kanaan: [00:42:26] Bye.
Katie Trauth Taylor: [00:42:29] Thanks for listening to this week’s episode. Be sure to follow us on social media and add your voice to the conversation. You can find us at Untold Content.
You can listen to more episodes of Untold Stories of Innovation Podcast.
*Interviews are not endorsements of individuals or businesses.