Decoding the AI Dilemma: Unveiling the Hidden Pitfalls, Ethical Quandaries, and Future Realities

Decoding the AI Dilemma: Unveiling the Hidden Pitfalls, Ethical Quandaries, and Future Realities written by John Jantsch read more at Duct Tape Marketing

The Duct Tape Marketing Podcast with John Jantsch

In this episode of the Duct Tape Marketing Podcast, I interviewed Kenneth Wenger, an author, research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. We uncovered the intriguing world of artificial intelligence, exploring the complexities and ethical considerations associated with this rapidly evolving mainstream technology.

Key Takeaways:

In this insightful episode, Kenneth Wenger, author and CTO of Squint AI Inc, navigates the intricacies of our society’s dilemma with this rising technology: AI. He discusses its ethical considerations and societal impact, as highlighted in his book, Is the Algorithm Plotting Against Us? A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. Wenger describes the current state of AI, emphasizing the exponential progress in models built on the Transformer architecture. Unveiling the challenges and pitfalls, he stresses the need for responsible AI usage, exemplified by Squint AI’s mission. Our conversation also weighs the term Informed Automation against Artificial Intelligence and covers the future of this technology, envisioning AI systems with a deeper understanding of context and autonomy. Wenger’s thought-provoking insights provide a comprehensive guide for listeners, addressing the complexities of artificial intelligence and its potential impact on diverse industries.

 

Questions I ask Kenneth Wenger:

[01:44] What does Squint AI do?

[02:31] In the title of your book, why ask the question: Is the algorithm plotting against us?

[03:44] Where do you think we are in the continuum of the evolution of AI?

[07:56] Do you see a day where AI starts asking questions back?

[09:25] Can you give a layperson’s explanation of how AI works?

[15:30] What are the potential pitfalls of relying on AI?

[19:48] Can some of the so-called ‘informed decisions’ made by AI be wrong?

[24:14] Where can people connect with you and obtain a copy of your book?

 

More About Kenneth Wenger:

Get Your Free AI Prompts To Build A Marketing Strategy:

 

Like this show? Click on over and give us a review on iTunes, please!

Connect with John Jantsch on LinkedIn

 

This episode of The Duct Tape Marketing Podcast is brought to you by ActiveCampaign

Try ActiveCampaign free for 14 days with our special offer. Sign up for a 15% discount on annual plans until Dec 31, 2023. Exclusive to new customers—upgrade and grow your business with ActiveCampaign today!

 

John (00:07): Hello, and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Kenneth Wenger. He’s an author, research scholar at Toronto Metropolitan University and CTO of Squint AI Inc. His research interests lie in the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology. We’re going to talk about his book today, Is the Algorithm Plotting Against Us? A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. So Ken, welcome to the show.

Ken (00:44): Hi, John. Thank you very much. Thank you for having me.

John (00:46): So we are going to talk about the book, but I’m just curious, what does Squint AI do?

Ken (00:51): That’s a great question. So Squint AI is a company that we created to do some research and develop a platform that enables us to do AI in a more responsible way. So I’m sure we’re going to get into this, but I touch upon it in the book in many cases as well, where we talk about AI, ethical use of AI, some of the downfalls of AI. And so what we’re doing with Squint is we’re trying to figure out how do we create an environment that enables us to use AI in a way that lets us understand when these algorithms are not performing at their best, when they’re making mistakes and so on.

John (01:35): So the title of your book is Is the Algorithm Plotting Against Us? It’s a bit of a provocative question. I mean, obviously I’m sure there are people out there that are saying no and some are saying, well, absolutely. So why ask the question then?

Ken (01:53): Well, because I actually feel like that’s a question that’s being asked by many different people actually with different meaning. So it’s almost the same as the question of is AI posing an existential threat? It’s a question that means different things to different people. So I wanted to get into that in the book and try to do two things. First, offer people the tools to be able to understand that question for themselves and first figure out where they stand in that debate, and then second, also provide my opinion along the way.

John (02:26): And I probably didn’t ask that question as elegantly as I’d like to. I actually think it’s great that you asked the question because ultimately what we’re trying to do is let people come to their own decisions rather than saying, this is true of ai, or this is not true of ai. Right?

Ken (02:40): That’s right, that’s right. And again, especially because it’s a nuanced problem and it means different things to different people.

John (02:48): So this is a really hard question, but I’m going to ask you: where are we really in the continuum of AI? I mean, people who have been on this topic for many years realize it’s been built into many things that we use every day and take for granted. Obviously ChatGPT brought on a whole other spectrum of people who now at least have a talking vocabulary of what it is. But I remember, I’ve had my own business 30 years. I mean, we didn’t have the web, we didn’t have websites, we didn’t have mobile devices that certainly now play a part, but I remember as each of those came along, people were like, oh, we’re doomed, it’s over. So currently there’s a lot of that type of language surrounding AI, but where do you think we really are in the continuum of the evolution?

Ken (03:37): That’s a great question, because I think we are actually very early on. I think we’ve made remarkable progress in a very short period of time, but we’re still at the very early stages. If you think of AI, where we are right now versus where we were a decade ago, we’ve made some progress, but I think fundamentally, at a scientific level, we’ve only started to scratch the surface. I’ll give you some examples. The first models that were great really gave us some proof of this new way of posing questions: neural networks, which are essentially very complex equations. If you use GPUs to run these complex equations, then we can actually solve pretty complex problems. That’s something we realized around 2012. Then between 2012 and 2017, progress was very linear. New models were created, new ideas were proposed, but things scaled and progressed very linearly.

(04:39): But after 2017, with the introduction of the model that’s called the Transformer, which is the base architecture behind ChatGPT and all these large language models, we had another kind of realization. That’s when we realized that if you take those models and you scale them up, in terms of the size of the model and the size of the dataset that we used to train them, they get exponentially better. That’s when we got to the point where we are today, where we realized that just by scaling them, again, we haven’t done anything fundamentally different since 2017. All we’ve done is increase the size of the model, increase the size of the dataset, and they’re getting exponentially better.

John (05:18): So multiplication rather than addition?

Ken (05:22): Well, yes, exactly. Yeah. So the progress has been exponential, not just a linear trajectory. But again, because we haven’t changed much fundamentally in these models, that’s going to taper off very soon; that’s my expectation. And now, where are we on the timeline, which was your original question? I think if you think about what the models are doing today, they’re doing very elementary, very simple statistics. Essentially, the idea of these models being called artificial intelligence, I think it’s a bit of a misnomer, and it leads to some of the questions that people have, because there isn’t much deep intelligence going on. It’s just statistical modeling, and very simple at that. And then, where we are going from here and what I hope the future is, I think things are going to change dramatically when we start getting models that are able not just to do simple statistics, but are able to understand the context of what it is they’re trying to achieve and are able to understand the right answer as well as the wrong answer. So for example, they’re able to know when they’re talking about things they know and when they’re kind of skirting around this gray area of things they don’t really know about. Does that make sense?

John (06:43): Yeah, absolutely. I mean, I totally agree with you on artificial intelligence. I’ve actually been calling it IA; I think informed automation is kind of how I look at it, at least in my work. Prompts, asking questions, that’s kind of the street use, if you will, of AI for a lot of people. Do you see a day where it starts asking you questions back? Why would you want to know that? Or what are you trying to achieve by asking this question?

Ken (07:10): Yeah, so the simple answer is yes, I definitely do, and I think that’s part of what achieving a higher level intelligence would be like. It’s when they’re not just doing your bidding, it’s not just a tool, but they kind of have their own purpose that they’re trying to achieve. And so that’s when you would see things like questions essentially arise from the system is when they have a goal they want to get at and then they figure out a plan to get to that goal. That’s when you can see emergence of things like questions to you. I don’t think we’re there yet, but I think it’s certainly possible.

John (07:44): But that’s the sci-fi version too, right? I mean where people start saying the movies, it’s like, no, Ken, you don’t get to know that information yet. I’ll decide when you can know that.

Ken (07:56): Well, you’re right. The way you asked the question was more like, is it possible in principle? I think absolutely, yes. Do we want that? I mean, I don’t know. I guess part of it depends on what use case we’re thinking about, but from a first principles perspective, yeah, it is certainly possible to get a model to do that.

John (08:17): So I do think there are scores and scores of people whose only understanding of AI is: I go to this place where there’s a box, I type in a question, and it spits out an answer. Since you have both layperson and math in the title, could you give us the layperson’s version of how it does that?

Ken (08:37): Yeah, absolutely. Well, at least I’ll try, let me put it that way. A few moments ago I mentioned that these models, essentially what they are, they’re very simple statistical models. That phrase itself is a little bit controversial, because at the end of the day, we don’t know what kind of intelligence we have. So if you think about our intelligence, we don’t know whether at some level we are also a statistical model. However, when I say that AI today, in large language models like ChatGPT, is a simple statistical model, what I mean by that is that they’re performing a very simple task. So if you think of GPT, what they’re doing is they are trying essentially to predict the next best word in a sequence. That’s all they’re doing. And the way they’re doing that is that they calculate what are called probability distributions.

(09:35): So basically, for any word in a prompt or in a corpus of text, they calculate the probability that the word belongs in that sequence, and then they choose the next word with the highest probability of being correct there. Now, that is a very simple model in the following sense. If you think about how we communicate, we are having a conversation right now. I think when you ask me a question, I pause and I think about what I’m about to say. So I have a model of the world and I have a purpose in that conversation. I come up with the idea of what I want to respond, and then I use my ability to produce words and to sound them out to communicate that with you. It might be possible that I have a system in my brain that works very similarly to a large language model, in the sense that as soon as I start saying words, the next word that I’m about to say is one that is most likely to be correct, given the words that I just said. It’s very possible that’s true. However, what’s different is that at least I already have a plan of what I’m about to say in some latent space. I have already encoded in some form what I want to get across. How I say it, the ability to produce those words, might be very similar to a large language model, but the difference is that a large language model is trying to figure out what it’s going to say as well as coming up with those words at the same time.

(11:08): Does that make sense? So it’s a bit like they’re rambling, and sometimes if they talk for too long, they ramble into nonsense territory, because they don’t know what they’re going to say until they say it. So that’s a very fundamental difference.
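To make the idea Ken describes concrete, here is a minimal toy sketch of next-word prediction: raw scores for a handful of candidate words are turned into a probability distribution, and the highest-probability word is appended to the prompt. The vocabulary and scores are made up purely for illustration; a real large language model computes scores over its entire vocabulary from the prompt so far.

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next words and hand-made scores (not from any real model).
prompt = "The weather today is"
candidates = ["sunny", "cloudy", "purple", "running"]
scores = [3.1, 2.4, -1.0, -2.5]

probs = softmax(scores)
for word, p in zip(candidates, probs):
    print(f"{word:>8}: {p:.3f}")

# Greedy decoding: pick the highest-probability word and append it to the prompt.
best = max(range(len(probs)), key=lambda i: probs[i])
print(prompt, candidates[best])
```

In practice the model repeats this step word by word, which is why, as Ken puts it, it only discovers what it is going to say as it says it.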

John (11:24): Yeah, I have certainly seen some output that is pretty interesting along those lines. But as I heard you talk about that, I mean, in a lot of ways that’s what we’re doing: we’re querying a database of what we’ve been taught, the words that we know, in addition to the concepts that we’ve studied and are able to articulate. I mean, in some ways we’re querying that too; me prompting or me asking you a question, I mean, it works similarly. Would you say?

Ken (11:51): The aspect of prompting a question and then answering it is similar, but what is different is the concept that you’re trying to describe. So again, when you ask me a question, I think about it and I come up with, so again, I have a world model that works so far for me to get me through life, and that world model lets me understand different concepts in different ways. And when I’m about to answer your question, I think about it, I formulate a response, and then I figure out a way to communicate that with you. That step is missing from what these language models are doing. They’re getting a prompt, but there is no step in which they are formulating a response with some goal, some purpose. They are essentially getting a text and they’re trying to generate a sequence of words that are being figured out as they’re being produced, right? There’s no ultimate plan. So that’s a very fundamental difference.

John (12:57): I do want to come to what the future holds, but I want to dwell on a couple of things that you dive into in the book. Other than the fear that the media spreads, what are the real and obvious pitfalls of relying on AI?

Ken (13:18): I think the biggest issue and the real motivator for me when I started writing the book is

(13:28): That it is a powerful tool for two reasons. It’s very easy to use, seemingly. You can spend a weekend learning Python, you can write a few lines, and you can transform, analyze, and parse data that you couldn’t before, just by using a library. So you don’t really have to understand what you’re doing, and you can get some result that looks useful. But hidden in that process is the fact that you can take large amounts of data, modify it in some way, and get a response, get some result, without understanding what’s happening in the middle, and that has huge repercussions for misunderstanding the results that you’re getting. And then you might be using these tools in the world in a way that can affect other people. For example, let’s say you work in a financial institution and you come up with a model to figure out who you should approve for a credit line and who you shouldn’t.

(14:41): Now, right now, banks have their own models, but if you take the AI out of it, traditionally those models are thought through by statisticians, and they may get things wrong once in a while, but at least they have a big picture of what it means to analyze data: bias in the data, what the repercussions of bias in the data are, how you get rid of it. These are things that a good statistician should be trained to do. But now, if you remove the statisticians, because anybody can use a model to analyze data and get some prediction, then what happens is you end up denying and approving credit lines for people with repercussions that could be driven by very negative bias in the data. It could affect a certain section of the population negatively. Maybe there are some people that can’t get a credit line anymore just because they live in a particular neighborhood. There are many reasons why this could be a problem.

John (15:37): But wasn’t that a factor previously? I mean, certainly neighborhoods are considered as part of the, even in the analog models, I think.

Ken (15:46): Yeah, absolutely. So like I said, we always had a problem with bias in the data, but traditionally, you would hope two things would happen. First, you would hope that whoever comes up with a model, just because it’s a complex problem, has to have some statistical training, right? And an ethical statistician would have to consider how to deal with the bias in the data. So that’s number one. Number two, the problem that we have right now is that, first of all, you don’t need to have that training. You can just use a model without understanding what’s happening. And then what’s worse is that with these models, it’s very difficult to understand how the model arrived at a prediction. So if you get denied either a credit line or, as I talk about in the book, bail, for example, in a court case, it’s very difficult to argue, well, why was I denied this thing? If you go through the process of auditing it with the traditional approach where you have a statistician, you can always ask, so how did you model this? Why was this person denied in this particular case? In an audit with a neural network, for example, that becomes a lot more complicated.

John (17:00): So what you’re saying, one of the initial problems is that people are relying on the output, the data. I mean, even I use it in a very simple way. I run a marketing company and we use it a lot of times to give us copy ideas, give us headline ideas for things. So I don’t really feel like there’s any real danger in there other than maybe sounding like everybody else in your copy. But you’re saying that as people start relying on these to make decisions that are supposed to be informed, a lot of times predictions are wrong.

Ken (17:37): And so the answer is yes. Now, there are two reasons for that. And by the way, let me just go back to say that, of course, you have to think about this as a spectrum. There are cases where the repercussions of getting something wrong are worse than in other cases. So as you say, if you’re trying to generate some copy and it’s nonsensical, then you just go ahead and change it. And at the end of the day, you’re probably going to review it anyway. So the cost of a mistake there is probably lower than in the case of using a model in a judicial process, for example. Now, with respect to the fact that these models sometimes make mistakes, the reason for that, and the part that can be deceiving, is that they tend to work really well for areas in the data that they understand very well.

(18:36): So if you think of a dataset, right? They’re trained using a dataset, and for most of the data in that dataset, they’re going to be able to model it really well. And so that’s why you get models that perform, let’s say, 90% accurately on a particular dataset. The problem is that for the 10% they’re not able to model really well, the mistakes there are remarkable, and in a way that a human would not make those mistakes. So what happens in those cases is that, first of all, when we’re training these models, we say, well, we get a 10% error rate on this particular dataset. The one issue is that when you take that into production, you don’t know that the incidence rate of those errors is going to be the same in the real world. You may end up in a situation where you get those data points that lead to errors at a much higher rate than you did in your dataset. That’s just one problem.

(19:30): The second problem is that if your use case, your production application, is such that a mistake could be costly, let’s say in a medical use case or in self-driving, you have to go back and explain why the model got something wrong, and it is just so bizarrely different from what a human would get wrong. That’s one of the fundamental reasons why we don’t have these systems being deployed across safety-critical domains today. And by the way, that’s one of the fundamental reasons why we created Squint: to tackle specifically those problems, to figure out how we can create a set of models, or a system, that’s able to understand specifically when models are getting things right and when they’re getting things wrong at runtime. Because I really think it’s one of the fundamental reasons why we haven’t advanced as much as we should have at this point. When models are able to model the data well, then they work great. But for the cases where they can’t model that section of the data, the mistakes are just unbelievable. They’re mistakes that humans would never make, those kinds of things.
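A back-of-the-envelope sketch of the first problem Ken raises, that error rates measured on a curated dataset may not hold in production, might look like this. The error rates and the mix of “easy” versus “hard” inputs are invented purely for illustration:

```python
def expected_error(hard_fraction, err_easy=0.01, err_hard=0.90):
    """Overall error rate when easy and hard inputs have very different error rates."""
    return (1 - hard_fraction) * err_easy + hard_fraction * err_hard

# On the curated dataset, hard cases are 10% of examples: ~9.9% error,
# which gets reported as "roughly 90% accurate."
print(f"dataset error:    {expected_error(0.10):.1%}")

# In production, hard cases show up three times as often, and the overall
# error rate nearly triples even though the model itself hasn't changed.
print(f"production error: {expected_error(0.30):.1%}")
```

The point is that the headline accuracy depends on how often the hard cases appear, which the training dataset cannot guarantee about the real world.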

John (20:40): Yeah, and obviously that has to be solved before anybody’s going to trust sending a manned spacecraft guided by AI or something, right? I mean, when human life is at risk, you’ve got to have trust. And so if you can’t trust that decision-making, that’s certainly going to keep people from employing the technology, I suppose,

Ken (21:04): Or using them, for example, to help, as I was saying, in medical domains, for example, cancer diagnosis. If you want a model to be able to detect certain types of cancer given, let’s say, biopsy scans, you want to be able to trust the model. Now, any model is going to make mistakes. Nothing is ever perfect, but you want two things to happen. First, you want to be able to minimize the types of mistakes that the model can make, and you need to have some indication when the quality of the model’s prediction isn’t great; you don’t have that. And second, once a mistake happens, you have to be able to defend that the reason the mistake happened is because the quality of the data was such that even a human couldn’t do better. We can’t have models make mistakes that a human doctor would look at and say, well, this is clearly incorrect.

John (21:54): Yeah, yeah, absolutely. Well, Ken, I want to thank you for taking a moment to stop by the Duct Tape Marketing Podcast. You want to tell people where they can connect with you if you’d like, and then obviously where they can pick up a copy of Is the Algorithm Plotting Against Us?

Ken (22:09): Absolutely. Thank you very much, first of all, for having me. It was a great conversation. So yeah, you can reach me on LinkedIn, and for a copy of the book, you can get it both from Amazon as well as from our publisher’s website. It’s called workingfires.org.

John (22:22): Awesome. Well, again, thanks for stopping by. Great conversation. Hopefully maybe we’ll run into you one of these days out there on the road. Thank you.
