What AI Can Do for You Right Now, Not Years into the Future
Are you intimidated when you hear the word AI (artificial intelligence)? On today’s episode, Jon talks with Stuart Sherman about why you shouldn’t be. Stuart is the founder and CEO of IMC, a leader in artificial intelligence. Listen in and discover what you need to know about AI, what it can do for us right now, and how we can actually use it to make our lives easier.
“If there’s a process that you’re doing repeatedly, you should be looking to see if there’s a piece of software that will do it for you.”
-Stuart Sherman
Timestamps:
2:30 - Understanding AI
8:15 - Does AI learn from us when we use it?
9:55 - Two factors that you should know to effectively use AI for your business
9:55 - Why chatbots are not good for customer service and what to do instead
17:04 - How AI is revolutionizing the creation of software that makes technical work easier
18:45 - How to think like a data scientist and apply it to your business
22:38 - The importance of AI in data analysis
28:22 - Finding the software for your needs and overcoming the challenge of using it
Transcript:
Jon Voigt: Welcome to Agile Living: The Entrepreneur's Journey, the show dedicated to discovering how entrepreneurs and digital leaders are doing more with less. I'm Jon Voigt, your host and CEO of Agility, and we're on a journey across the country to learn from top digital entrepreneurs on how to live a more agile, adaptable, and fulfilling life. Thank you for joining me today, and let's dive in.
Jon Voigt: Hi there. To celebrate the Agile Living Podcast launch, I've got a massive giveaway, giving three lucky winners a chance to win some of my favorite things I use frequently. The first step in being agile is your mindset, and these prizes all help you with that process. First, a box of Bulletproof InstaMix. These are high-octane oils to fuel your day. I add it to my coffee or my tea almost daily. Second, a Fitbit Charge 3. I don't even have one of these yet. They're not yet released, and they look pretty cool, so it's coming out soon, and that will be the second prize. Third, a microbiome test by Viome. This is a leading test in the US that tests your gut biology and tells you exactly the foods you should be eating. It's pretty cool stuff. I did it about a year ago and saw the results, and it was really amazing, the things I should and shouldn't be eating.
Jon Voigt: Please note this contest will only be open to those in Canada and the US. I'm really sorry for anyone internationally. To win, you have to do the following: subscribe on iTunes to the show; go to the agilecommunity.com and enter your email so we have a way to contact you; and share the show with a friend who wants to start living a more agile lifestyle on Twitter or Facebook with the tag "Be more agile." That's it. Pretty simple. I can't wait for you to hear some of the episodes, and hopefully, you can start living a more agile life.
Jon Voigt: I'm joined today by Stuart Sherman. Stuart is the founder and CEO of IMC, a leader in artificial intelligence. It is amazing, all the talk of AI and how many companies and systems are adopting different levels of AI all over the place. Stuart, the real trick today is, how can people use AI to be quicker and adopt new things, new ways of doing things? It feels like another thing to take on and learn. It feels like we're just adding more and more onto our plate. Perhaps you can talk about what you guys are up to in the AI space, how you see AI helping businesses in the near future, and how we can use AI to get more done with less.
Stuart Sherman: Absolutely. I think that the important thing when people look at AI is that you have to start by not being intimidated and not saying, "Look, I don't know anything about AI. It's too confusing. There are so many terms: machine learning, reinforcement learning. All of these different names and concepts." Maybe start thinking about it like this: if you're an entrepreneur, what are the repeated processes and tasks that you do that are effectively low-value tasks but are important? A great example of this is that there are new AI tools that will book appointments for you.
Jon Voigt: Right.
Stuart Sherman: So, you can literally use a tool. You CC an email address, and that email address that you include will actually do the negotiating with your Outlook calendar and the person you want to have a meeting with, and set up and schedule a meeting.
Jon Voigt: Right, right.
Stuart Sherman: Well, you might need a human to do that, but you could replace that human with the AI, and suddenly, you have more control because it's dealing with it directly and you're seeing the outcome right away. You didn't have to walk over and talk to your PA or whatever it is. So, these kinds of small examples are things where you don't have to be an expert in AI.
Jon Voigt: Mm-hmm (affirmative).
Stuart Sherman: There's a lot of AI that exists that you're absolutely unaware of and is invisible, like when you talk to Siri or OK Google. You're actually talking to an AI. It's trying to figure out what you're trying to say and what you're talking about and...
Jon Voigt: Yeah ... Yes.
Stuart Sherman: Go ahead.
Jon Voigt: Let me toss in a little nugget there, because I saw something cool, and I think it'd be cool.
Stuart Sherman: Yeah.
Jon Voigt: Yeah. I was at a conference last weekend. They had a demo with one of the speakers showing how Google's new phone will actually call up from your phone. It will make the call itself, book a hair appointment, and negotiate the time slots and everything with the person, just by speaking, which is the opposite of Siri, and Google Now, and these things where you're just giving it commands. It's actually going out and doing things for you, which just blew my mind in terms of what it's doing. People just think of it as voice recognition, but voice recognition really is AI.
Stuart Sherman: Yes, absolutely, and I think that we don't understand that when we're using it, because you don't realize the complexity. I love the analogy that most people know how to drive a car; very few people know how to change their own oil.
Jon Voigt: Right.
Stuart Sherman: You could be a race car driver and not know how to change your own oil.
Jon Voigt: Right, right.
Stuart Sherman: The level of expertise is totally unrelated, so when you look at this stuff... Just to give you a technical example so that people can understand: you might say, "Joe approached the bank." Well, it takes AI to figure out what that means, because if I say, "Which bank did Joe approach?" you might say Bank of America, or TD Bank, or something like that, but if you knew that Joe was in a boat, you would realize that he was approaching the riverbank.
Jon Voigt: Right, right.
Stuart Sherman: So, it takes AI to start to understand context. So, when you ask Siri or OK Google a question, there's a lot going on behind the scenes there, which is AI, but it's invisible to you, and so the question is, do you need to know that, or do you just need to be able to use the tool?
Jon Voigt: Right, right, right.
Stuart Sherman: My answer is you need to be able to use the tool.
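[Editor's sketch: a toy illustration of the "Joe approached the bank" context problem described above, in Python. The cue words and rules are invented purely for illustration; real assistants resolve this kind of ambiguity with statistical models trained on large amounts of data, not hand-written lists.]

    # Toy word-sense disambiguation for "bank": look at the surrounding words.
    RIVER_CUES = {"boat", "river", "rowed", "shore", "fishing"}
    MONEY_CUES = {"deposit", "teller", "loan", "account", "atm"}

    def bank_sense(sentence: str) -> str:
        words = set(sentence.lower().replace(".", "").split())
        if words & RIVER_CUES:
            return "riverbank"
        if words & MONEY_CUES:
            return "financial bank"
        return "ambiguous"  # not enough context to decide

    print(bank_sense("Joe approached the bank."))             # ambiguous
    print(bank_sense("Joe rowed his boat toward the bank."))  # riverbank
    print(bank_sense("Joe asked the bank about a loan."))     # financial bank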
Stuart Sherman: Now, ultimately, I think ... and I have a regular talk that I give. The last time I gave it was last week in England, where I was talking about AI and really what it is, and I love to use the analogy that, basically, what AI is, is a single slice of your consciousness that's entirely focused. So, if I wanted to teach an AI how to recognize colors, I would just teach it how to recognize colors. You could say, "Okay, but there are green things that are round and green things that are square, and the human eye, because of the gradient and the lighting, might start to recognize, 'Oh, well, the round thing is slightly darker than the square thing on a flat surface,'" or something like that. The AI is not going to get fooled. It's going to just pick up the color, and it will say those two things are identical colors, because it's able to understand color, not shape.
Jon Voigt: Right.
Stuart Sherman: So, the things that confuse humans are not confusing to AI, and as a result of that, AI is capable of doing repeated tasks perfectly, as long as that repeated task is one that it understands and was programmed for.
Jon Voigt: Right, right, right.
Stuart Sherman: Yeah, so ...
Jon Voigt: How does that tie into the whole ... You said, "Understands and is programmed for," but I know everyone is talking about AI and data, and how you have to teach it, so it's almost programmed by learning as well. Right?
Stuart Sherman: Absolutely.
Jon Voigt: So, that's the next stage of AI, where the more it tries, and the more you tell it that's right or wrong, the better it gets, and that's where it starts to surpass our expectations.
Stuart Sherman: Correct. Yes, and there's a name for that, which we call reinforcement learning, where, basically, the AI learns something, and then it shows you what it learned, and you say, "Yes, this is right," or, "No, this is wrong."
Jon Voigt: Mm-hmm (affirmative).
Stuart Sherman: Anybody who's using modern photo editing software right now is doing this to a certain degree. With the lasso tools and stuff like that, it's making a guess, and then you're correcting the guess just a little bit. And with these kinds of examples, the question is, does the software get smarter? Well, if you're using Pandora, or Spotify, or even YouTube Music, and you're giving thumbs up and thumbs down to songs, you're supporting reinforcement learning.
Jon Voigt: Right, right.
Stuart Sherman: You're teaching it what you like and what you don't like, and then it's guessing better and better, and it gets to the point where it can literally read your mind.
Jon Voigt: Mm-hmm (affirmative). Scary thought.
Stuart Sherman: Yes.
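[Editor's sketch: the thumbs-up/thumbs-down feedback loop described just above, reduced to a few lines of Python. The genres and the simple scoring rule are invented for illustration; real services like Pandora or Spotify use far richer models, and formal reinforcement learning involves states, actions, and rewards over time.]

    from collections import defaultdict

    scores = defaultdict(float)  # how much the listener seems to like each genre

    def feedback(genre, thumbs_up):
        # Nudge the score toward what the listener just told us.
        scores[genre] += 1.0 if thumbs_up else -1.0

    def recommend(candidates):
        # Guess the option with the best track record so far.
        return max(candidates, key=lambda g: scores[g])

    feedback("metal", True)
    feedback("metal", True)
    feedback("easy listening", False)
    print(recommend(["metal", "easy listening", "jazz"]))  # prints "metal"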
Jon Voigt: So, I guess these are all little examples of how we use it on the consumer side and things like that, but when I talk to businesses, there are all these businesses and speakers talking about AI, and I've talked to so many of our friends in business, and either they're trying to incorporate it into some software or they're trying to build the whole thing, but there are a lot of businesses out there that just have no idea how to even use this. How do I even get started? It feels too complicated. I don't have AI data scientists and all these different things in-house. So how would someone get started, or where do you see this kind of technology really taking off, even though it already has?
Stuart Sherman: Yeah. So, I think that the challenge really with this stuff is how do you get started and what software exists to do it? Things that we used to consider incredibly complex are now very easy, so you can take a picture on your phone right now, and you can press the magic wand button or whatever it is, and it will tune the picture. If you're using some of the new Huawei phones, you can actually change where it's focused. This is all stuff that, 10 to 15 years ago, you'd have to have deep expertise in Photoshop to be able to do.
Jon Voigt: Yeah, yeah.
Stuart Sherman: So, one of the things I think that you need to be conscious of as a business is, what tools are starting to use AI that you can consume, that are available today?
Jon Voigt: Mm-hmm (affirmative).
Stuart Sherman: It used to be that you'd have to buy Photoshop. Now, you can buy just about any image editing software, and we're at the point now where it just comes with the camera.
Jon Voigt: Built-in.
Stuart Sherman: Yeah, exactly. It's built-in. So, I think that one of the things businesses need to be is just cognizant of what's available. If there's a process that you do regularly in your business, you should be starting to pay attention to what tools exist right now that would shorten that cycle time, whatever it is. Then, the next thing would be to say, "Do we have enough data to train a system?"
Jon Voigt: Right, right.
Stuart Sherman: So, typically, our clients are large, often multinational organizations that we're doing AI work for, and they sit on enormous amounts of data, and yet, we show up, and we say, "Okay, in order to train the system, you need this data," and they'll say, "Oh, we haven't been collecting that data. We don't have it."
Jon Voigt: Right, so just that one piece of data that they don't have. They have so many other things. They think they're set up.
Stuart Sherman: Right. So you can look at things like, for example, you're a large organization and you haven't been recording all your calls.
Jon Voigt: Mm-hmm (affirmative).
Stuart Sherman: Well, you can't analyze your calls if you haven't been recording your calls. So one of the things I think that any organization should be doing is saying, how do we hold on to data and make sure that we standardize it so that it won't change dramatically between 2018 and 2022?
Jon Voigt: Right. Because with changes you can't really have a level playing field to compare and-
Stuart Sherman: Exactly.
Jon Voigt: Right.
Stuart Sherman: Exactly. Look, I could own a fish-and-chips stand, and the data I may want to collect is what brand of soft drink they're ordering, what they order, one fish, two fish, fish with fries, fish with salad, all the different combinations and permutations, so that I can now build a model that understands, longer term, what I should stock so that food doesn't go bad. Because I might be overstocked on fish and understocked on fries or something like that. So in order to do that, I would need to keep my cash register receipts for the last five years, because of the data sample ... And this is a challenge that we have repeatedly: clients come to us and they go, "Yeah, okay, we've got 250 examples." Well, 250 examples isn't very much when you're trying to understand a pattern using a computer. You want 6,000 samples or 100,000 samples.
Jon Voigt: So how often are you brought in to actually build the samples?
Stuart Sherman: Yeah. Actually, that's one of the early things that we do. An example of AI gone wrong, in my opinion, today is chatbots, where companies are trying to implement chatbots for customer service. The problem is that I might phone the company and say, "I can't get into your goddamn system," or I could phone the company and say, "I'm terribly sorry, I seem to have forgotten my password." Well, what if the chatbot response is, "Don't worry, I'll help you reset your password"? The angry person doesn't want to hear that. The person who's apologetic does.
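[Editor's sketch: the one-size-fits-all chatbot problem in miniature, in Python. A keyword rule fires the same canned reply whether the customer is furious or apologetic; checking tone and routing angry messages to a live agent is closer to what Stuart recommends below. The keyword lists are invented for illustration; a production system would learn intent and sentiment from logged chats.]

    CANNED = {"password": "Don't worry, I'll help you reset your password."}
    ANGRY_WORDS = {"goddamn", "ridiculous", "useless", "furious"}

    def reply(message):
        words = set(message.lower().split())
        if words & ANGRY_WORDS:
            return "ROUTE TO LIVE AGENT"  # tone matters more than the keyword
        for keyword, canned in CANNED.items():
            if keyword in message.lower():
                return canned
        return "ROUTE TO LIVE AGENT"      # no rule matched; don't guess

    print(reply("I'm terribly sorry, I seem to have forgotten my password"))
    print(reply("I can't get into your goddamn system"))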
Stuart Sherman: So the challenge with these chatbots is that they're one-size-fits-all, because they're programmed using a logic chain that basically says, look for this string of language and respond with this string of language. So the example would be, I'm trying to teach your chatbot how to answer questions about the weather. I have a series of inputs: What's it like out? Is it hot today? Is it going to rain? Nice day out? What temperature is it? That's my training set, and then the answer is, it's this many degrees and sunny. Well, if I'm a sailor, this many degrees and sunny isn't enough information. I want to know that it's windy too, or that it's not a good day to go out sailing. If I'm a badminton player, I might also care about the wet. But if I'm just taking my dog for a walk, then it just doesn't matter.
Jon Voigt: Yeah. So that's where you take context over content, I guess.
Stuart Sherman: Exactly. So one of the things that we're working with companies to do is start chatting with live people, so that they're actually using their CSRs, just with an interface that allows the CSR to chat online with somebody. That kind of software is a dime a dozen; it doesn't cost much to configure and it's easy to implement, versus a full AI solution that your clients are probably going to feel angry with after you put it in, because it's not going to satisfy them. I'm sure we've all been through this where you phone a 1-800 number and you're like, "Operator, give me an operator, I want to speak to an agent." Speak to a damn agent. By the time you get one, you're frustrated as heck and you're like, "But I typed in my phone number already, why are you asking me for my phone number?" All of this stuff.
Stuart Sherman: So what we're doing is we're saying, look, chat online using live humans, CSRs, and then log those chats, so now you have a training set. So now what we can do, and we do this, is train up an AI to read the conversations and understand the nature of the kinds of questions people ask, how they feel when they ask them, what their sentiment and emotion is, and what it took to satisfy them. Once you know that, you can build the system.
Jon Voigt: Isn't that data set just so huge, because every conversation can go in a completely different way? It must be so difficult. And where is the technology now compared to where it may be in a couple of years, versus where it was a couple of years ago? Is it really moving that fast and getting that much more advanced?
Stuart Sherman: Yes, it is. So what I would say is that what we're seeing is kind of like a Moore's law in software now, where this stuff is growing exponentially. And not only that, but things that we used to literally have to write code for, and need programmers, before we could get them to a data scientist to do the data analysis on, now there are tools that we can just grab and use. It's kind of like speech to text. Speech to text used to be magic, and then Dragon NaturallySpeaking came out, and it was like, that's the only software that exists. Now you can find open-source speech-to-text tools, and you can customize them for nothing, and actually quite easily.
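[Editor's sketch: one way to "just grab" an open-source speech-to-text tool, using the Python SpeechRecognition package with the offline CMU Sphinx engine. Both packages have to be installed (pip install SpeechRecognition pocketsphinx), "call.wav" is a placeholder file name, and accuracy will be rough compared with commercial engines.]

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("call.wav") as source:       # placeholder recording
        audio = recognizer.record(source)          # read the whole file

    try:
        print(recognizer.recognize_sphinx(audio))  # offline, open-source engine
    except sr.UnknownValueError:
        print("Sphinx could not understand the audio")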
Stuart Sherman: So we're seeing this in the AI world. And then the next question that you asked is, isn't the quantity of data a problem? Well, remembering the amount of hard drive space that's available for such a cheap price, and the amount of processing power that's available at such a cheap price today, imagine it's next year and it's going to be twice as much. So this kind of thing is not the problem we used to perceive it to be.
Jon Voigt: It's just moving so fast. You also mentioned data scientists a couple of times here, and I know you and I talked before about the whole mindset of how we have to look at the world. I'd love you to touch on that comment, because it just blew me away when we talked about it the one time, and I really believe in that direction.
Stuart Sherman: Well, I think that we have to think like data scientists, and the question is, what is that like? The answer really is, you kind of have to look at everything around you as a bit of an experiment that needs to be validated. So to think like a data scientist would be to say, I have a hunch; what data could I get that would support my hunch? For example, I love this analogy and I use it very regularly: there are two men who were both born in and live in England right now. They're both world famous, or I wouldn't use them as an analogy. They were both born in 1948, three weeks apart. They're both in the top 1% of income earners in the UK, both male, both have been married multiple times, both Anglican, both love dogs. One of them is Prince Charles, the Prince of Wales, and the other one is Ozzy Osbourne, the Prince of Darkness.
Jon Voigt: Both princes.
Stuart Sherman: Exactly. You got it. So when you look at that, you have to think like a data scientist. If I knew this, then how would I test for the presence of Ozzy Osbourne versus the presence of Prince Charles? Because if I could understand that, it would make the difference in being able to successfully sell. Let's say Ozzy Osbourne walks into a Jaguar dealership and he wants to buy a car, and Prince Charles walks into the same dealership wanting to buy a car. Well, what would a car salesman do? A car salesman would size them up. To Prince Charles, whom he would of course probably recognize, he would say, you know what, a Jaguar is a fine British automobile, and it's made in Coventry by British workers, and it has a long pedigree of being British, and all of these facts. That pitch, although interesting, would not sell the car to Ozzy Osbourne.
Jon Voigt: Yeah. Different needs, different-
Stuart Sherman: Exactly. With Ozzy Osbourne, you would tell him it goes fast and chicks dig it, or something like that. I could say comical things that would make you laugh, but ultimately those are probably the things that would sell it to Ozzy Osbourne. So this is where thinking like a data scientist makes a difference, because a data scientist starts saying, okay, what are the differences here? What are the nuances? Even if you're an entrepreneur and you have a consulting-based business where all you're doing is selling your own personal consulting services to companies, you almost want to think like a data scientist in that scenario and say, what kind of buyer do I have here? What are the motivating factors? What are the past success factors I've had with somebody with this personality? How do I shape this story? What is my likelihood of success? This kind of thinking is useful anywhere.
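[Editor's sketch: the Prince Charles / Ozzy Osbourne point as a data scientist might frame it, in Python. On the coarse features the two records are indistinguishable, so the job is to find a feature that actually separates them and then tailor the pitch. Every field and value here is invented for illustration.]

    customers = [
        {"name": "Buyer A", "born": 1948, "country": "UK", "top_1pct_income": True,
         "married_multiple_times": True, "loves_dogs": True, "streams_heavy_metal": False},
        {"name": "Buyer B", "born": 1948, "country": "UK", "top_1pct_income": True,
         "married_multiple_times": True, "loves_dogs": True, "streams_heavy_metal": True},
    ]

    COARSE = ["born", "country", "top_1pct_income", "married_multiple_times", "loves_dogs"]
    a, b = customers
    print(all(a[f] == b[f] for f in COARSE))  # True: coarse data can't tell them apart

    def pitch(customer):
        # The extra, distinguishing feature is what changes the story you tell.
        if customer["streams_heavy_metal"]:
            return "It goes fast."
        return "A fine British automobile with a long pedigree."

    print(pitch(a))
    print(pitch(b))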
Jon Voigt: The interesting thing is, if you go back through the history of entrepreneurship and business, it's always been about measuring things. I don't know how many years ago, there were a lot of new frameworks that came up for business, and a lot of push about dashboards and measuring everything, measuring all your team, just metrics, metrics, metrics. This is kind of the next level on top of that. It's not just measuring the metrics, but also how you evaluate them. It's been hard to do that in the past because there's so much data, but AI is amazing at sifting through data.
Stuart Sherman: Absolutely. And the other thing, though, is that we as humans are deeply flawed, and we have to understand that. Where data tends to tell the truth, humans tend to lie to themselves. There are these things called heuristics, which are basically mental shortcuts. If I'm going to describe it in the easiest way, I would say that we all evolved with the heuristic that if it has big teeth, it eats meat and you should be afraid of it. If I was, I don't know, George Lucas and I'm trying to create a scary creature for Star Wars