S2E9 – Design and AI

There is a lot of talk about AI these days. And when it comes to design and creatives, there is a lot of fear that AI will come to take our jobs. At Hidden By Design we believe that AI won't take your job. But if you don't start working with AI, then designers who do work with AI will take your job.

In this episode we talk about the Turing test. You will also learn what large language models and generative AI are, and how they are used. And a bunch more.

So buckle in, and have fun in this Episode of Hidden by Design.

Fun little quiz here at the end: how many times do Martin and I say the phrase Chat GTP..PTG..GPT wrong?


The Turing Test

1984 George Orwell

Deceptive Design

Large Language Models


Martin Whiskin 0:02
You’re listening to hidden by design a podcast about the stuff that you didn’t know about design. My name is Martin. And this is

Thorbjørn Lynggaard Sørensen 0:10
Hidden by design.

Martin Whiskin 0:11
Nailed it.

Thorbjørn Lynggaard Sørensen 0:12
Oh, yeah. And my name is Thorbjørn, the podcast starts

Martin Whiskin 0:18
and we should start recording now

Thorbjørn Lynggaard Sørensen 0:20
you’re not recording? So will you take this slide? And I’ll take the next one.

Martin Whiskin 0:26
So yeah, well, you haven't written this bit down, but I will ad lib. Welcome to Hidden by Design, a podcast about design for everyone. And today's episode is season two, episode nine: artificial intelligence. Is it really smart?

Thorbjørn Lynggaard Sørensen 0:45
I don't... I don't know. I wrote smart. But maybe it should be: is it really intelligent? I think I just realized that.

Martin Whiskin 0:55
Artificial intelligence: is it really intelligent? So?

Thorbjørn Lynggaard Sørensen 0:58
So we’re going to get an answer to that today? We’re also going to learn about the Turing test, Turing Turing test.

Martin Whiskin 1:08
I think it’s Turing

Thorbjørn Lynggaard Sørensen 1:10
Turing test. Yeah. So we're gonna learn about the Turing test, by a guy called Alan Turing. We're going to learn a little bit about what neural networks are, large language models, machine learning, generative AI. I'm going to try to kind of cover what these things are. We're gonna learn who Eliza was. And then we're going to go through things that AI is good at and bad at. And then in the end, or at some point at least in the conversation, we're going to talk about losing our jobs as designers, as voice actors. I'm pretty sure that you have already heard about, you know, Natural Readers. And

Martin Whiskin 1:58
Yes, yeah, there's a huge thing at the minute in the voiceover world, where the AI voice is going to kill everyone's career. And no is the answer, in short. There's a lot of people who are sort

Thorbjørn Lynggaard Sørensen 2:15
of like, Are you sure Martin?

Martin Whiskin 2:18
At least until I don't need it anymore. Yeah, so most of the people that I've spoken to, people who make videos and that sort of thing, they're still heavily on board with real voices. Because they understand the benefit of, you know, true human connection.

Thorbjørn Lynggaard Sørensen 2:36
So I'm going to have an opinion about that later. Because I'm seeing stuff as well, in just about every place. But I also think that would be an interesting discussion, actually.

Anyway, should we do the quote of the day?

Martin Whiskin 3:04
Yes, let's do that. Okay, so the quote of the day for today: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." And that is from Edsger W. Dijkstra, a Dutch computer scientist. Can you hook that quote up to the topic of AI for us? Because it feels like this quote is maybe a few decades old.

Thorbjørn Lynggaard Sørensen 3:41
It is very old, but so is, you know, the whole idea of artificial intelligence. And I have colleagues, like, really the smartest people I know. One of my big heroes is an engineer and my colleague, and when you ask him about it, he goes: AI is just a buzzword. Because this is technology that hasn't really developed, from a technical point of view, for the last, you know, 50 years, for a very, very long time. And I think it was in the 1950s that the Turing test was constructed. And it ties into this intelligence, because the argument here is that it's not really intelligent. And I think that's one of the key points. So if you look at why it's up these days, and why it's buzzing like it is, and everyone is talking about it: the big change isn't in technology and how it does this. The big change is accessibility. So with Chat GTP it all of a sudden became available to everyone. And not only available, but available in a workflow, in a way that everyone could actually use it and understand it. So these are things that just go way, way back. And Alan Turing was around at the same era as Eliza, which was the first chatbot. It was created in 1964 or '65 or something like that. It was a chatbot that was made as a psychotherapist... a therapist, a psychologist, I can't remember the difference, but you know, you decide which one it is. It was constructed so that it could give you answers, answers that felt human-like. And what Alan Turing did was he said: can a machine imitate a human to a degree where you, as the one who's reading it, can't tell the difference between a machine and a human? So you're having this conversation with a chatbot.
And if you're unable to see who's who... so the Turing test is where you put a computer and a human into different rooms. And then you have a second human who's chatting with both the computer and the human, and he has to guess who is who. And that's the Turing test. It's simply to say: right, can we get a computer to imitate a human? And I think in some of the early versions — it's only with written text — one of the first tasks was to guess the gender of the person you're talking to. A computer doesn't have a gender, but you would kind of make these tests and try to get it to do these things. And Eliza was striking back in the day, because you would just have a prompt, and you would ask a question, and it would answer you. And it felt eerie, it felt really, really strange that you were talking to a machine. But the technology — all of these things my colleague talks about, these large language models, and neural networks, machine learning — what it basically does is it looks at a lot of data, and then it tries to predict. I'm trying to really simplify it, but just imagine that you have a lot of data, a lot of information, a lot of written text, and the machine reads this, analyzes it, and looks at how you generally construct information: when you have a word, what word is most likely to be the next word? A small-scale, easy-to-understand version of this is the keyboard on your phone, right? You have this suggestion of the next word. That's a very small-scale version of AI: when you're sending a text message, it's going to suggest you the next word, based on the conversations you had in the past.
And some of these will, you know, collect data from Facebook Messenger, from WhatsApp, from text messages, in order to be able to construct, you know, what's most likely to be Martin's next word,

Martin Whiskin 9:12
normally a swear word
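The phone-keyboard prediction Thorbjørn describes can be sketched as a toy bigram model — an illustrative example added here, not anything the hosts built: count which word follows which in past text, then suggest the most frequent follower.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for every word, which words follow it in the training text."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    """Suggest the most frequent follower of `word`, like a phone keyboard."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# a tiny stand-in for "your past text messages"
corpus = (
    "i am going to the shop "
    "i am going to the shop "
    "i am going to the cinema"
)
model = train_bigrams(corpus)
print(predict_next(model, "am"))   # -> going
print(predict_next(model, "the"))  # -> shop (seen twice, cinema only once)
```

Real language models are vastly larger and look at much more than the single previous word, but the principle is the same: prediction from counted patterns, not understanding.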

Thorbjørn Lynggaard Sørensen 9:13
Yes. I also had cases where you kind of can't tell the difference between me texting with my girlfriend and texting with my friends. So it's basically just looking at a lot of different texts. And then you have human beings... even with these new Chat GTP models you will see this, right? They're trying to get information back on whether the prediction of what you wanted to know is good or not. So in Chat GTP, in most of these platforms, you have this "did this work or not?" feedback. Because it needs to know if the prediction is good or bad, so that it can learn, and that's what machine learning is all about, right? It's learning so that the prediction becomes better and better. And thinking about how this works, I was thinking about a game... I actually made a game with some friends many, many years ago. But there is also another game which illustrates this principle of machine learning in a nice way. So the idea is that you have this shape that's constructed of multiple shapes, and you have some animation, some gears, and the purpose of the shape is to get from one location to another. It can just be a box that rotates, right? And if it rotates the right way, it will eventually get to the goal, because it's just a straight line. Now, for every iteration, it's going to generate two or three new variants of that box. It's going to attach stuff to it, it's going to just do random stuff. And then it's going to see who crosses the finish line and who does not. And then it's going to remove the ones that don't cross the finish line and focus on the one that did, and then make small variations of that. And out of those, it's going to look at who crosses the finish line first and who doesn't, because some of them would rotate the other way around.
And they would just move away from the goal. So you kind of have this, you know, the machine keeps learning what the best result is. And then with artificial intelligence, you just scale that up. But just to go back to the quote of the day: it is just as interesting as whether a submarine can swim. Because in many ways you can say, well, it is swimming, because it's in water. But it's a machine. And in many ways it's not really intelligence, it's just trying to predict something.
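The finish-line game described above is essentially a hill-climbing, genetic-algorithm-style loop: keep the winner, breed small random variations, repeat. A minimal sketch — purely illustrative, with made-up numbers, not the actual game:

```python
import random

random.seed(42)

GOAL = 20    # position of the finish line
STEPS = 30   # moves each creature gets
POP = 8      # creatures per generation

def distance(genes):
    """How far right the creature ends up; the finish line is to the right."""
    return sum(genes)

def mutate(genes):
    """Copy the winner and randomly change one of its moves."""
    child = genes[:]
    child[random.randrange(len(child))] = random.choice([-1, 1])
    return child

# generation zero: creatures that move completely at random
population = [[random.choice([-1, 1]) for _ in range(STEPS)] for _ in range(POP)]

best = max(population, key=distance)
for generation in range(300):
    if distance(best) >= GOAL:
        break  # something crossed the finish line
    # discard the losers, keep the winner, breed small variations of it
    population = [best] + [mutate(best) for _ in range(POP - 1)]
    best = max(population, key=distance)

print(f"crossed the line after {generation} generations")
```

No creature "understands" where the finish line is; blind variation plus keep-the-best is enough to look purposeful, which is the point being made.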

Martin Whiskin 12:22
The way that I saw the quote, or felt the quote, was: the result of a submarine moving through water is the same, but it's not swimming. So the same thing is happening, just not how we know it, I guess?

Thorbjørn Lynggaard Sørensen 12:44
I think maybe, if I can change a word there: from "know it" to "perceive it". Because perception is everything in this conversation, in my opinion at least. The perception of what's happening is so extremely important. And that's the answer to "is artificial intelligence intelligent?" The answer is just flat out no, it's not. Not by a mile, not by a long shot. But it feels intelligent, right? And you see these conversations around on the internet where they're posting pictures of, you know, "I got it to say this", and that feels eerie, right? And it's not. It's just a machine.

Martin Whiskin 13:38
So, the eerie thing. There's an advert that I keep seeing at the minute on TikTok, and TikTok is flooded with AI voices, so people are using them to read the captions that they put on. And yes, even big brands are using it for their adverts. So there's a store over here called Asda, it's one of the biggest supermarkets here, and their adverts have the AI voice. The first thing that throws me out of the engagement of the advert immediately is the fact that it is, to me, a very wholesome British brand, and it's an American voice. So there's a disconnect there. But in the one that they're showing at the minute, the text on screen has capitalized the word Asda, as it is on the side of their stores. And the AI voice can't handle it. It doesn't read it as Asda, it sort of says "ad" or something like that. And that's one of the first words it says. So I'm watching an advert for Asda, thinking: it can't even say the word Asda. I'm thinking about that rather than absorbing the info from the advert.

Thorbjørn Lynggaard Sørensen 14:51
And obviously that's going to be fixed down the line, that's going to be improved upon. And I think that's where generative AI comes in. So we have these large language models, we have normal text, and then we have where it generates pictures, and it generates sounds, and all sorts of different things. And obviously it's going to get better. For me, at least, I think the biggest change, or the biggest thing that's going to happen... so now we're at this point, and that's part of the buzz, where everyone is amazed about what it can do — except for the ones who already knew that we've been able to do this for more than 60 years. The difference now is the accessibility, that I can actually use it, and also that we have machines that can process more data. So we're kind of scaling it in a different way, right? It's no longer just your phone trying to predict the next word, it's actually processing and giving you big, comprehensible answers. But when you look at it, the construction and the sentences and all of that is still pretty bad. And it will continue to be bad if you ask me, but it will get better, obviously, because it's learning — not learning in the way that a human being is learning, but it's becoming better at predicting what it should do, if that makes sense. And talking about AI in ads, right... so this, to me, if we talk about the dangers, or what's happening right now — there are so many things I want to say.

Martin Whiskin 16:45
And you're wondering if you're allowed to? It's our podcast,

it’s our podcast,

take ownership, take ownership.

Thorbjørn Lynggaard Sørensen 16:56
So, the use of AI in ads. This is what I see right now with ads: I'm seeing quantity over quality. I'm seeing ads responding to ads. In a moment you will see — I think we're already seeing it — it's generating stuff, and then traffic, and then leads. But these are not real leads. These are other people sitting in a marketing department liking your posts without really realizing it, and commenting on them. And you see this on LinkedIn too, it's slowly getting it, where it's low quality: we just need to flood the market with low-quality advertisement. I don't see that as a danger. I see that as a buzz thing that's going to go away in a moment, because it's not going to create real leads.

Martin Whiskin 17:53
The way I guess you could twist it into a danger is that it will dilute quality. Like you spoke about quality — it dilutes that element of people's campaigns. And the more that you see these poor-quality ads, because there's so many of them, the more it might become acceptable, because it just becomes the norm. Yeah,

Thorbjørn Lynggaard Sørensen 18:20
But so, there's an old term — I can't remember from way back — but there's this term called banner blindness. And banner blindness is basically... I don't know if you remember back in the good old warez days, where you were trying to download something from the internet, and one thing you knew for absolute sure was: do not click the banners, and do not click the big red button that says "Download Now" when you wanted to download something. And the funny thing about this is that our brains — and this is where we actually are intelligent — can learn things and avoid things like this. And so in the end, the big red button became invisible to us. And you see the same thing with the ads on, for example, Facebook: they become invisible to us, we don't see them. And I believe that the same thing will happen with poor-quality AI-generated ads.

Martin Whiskin 19:28
That's really it. So that's not machine learning, that's human learning. We become immune to them. Wow, that's really interesting. Because that's right, I completely just flick past most adverts on Facebook, for example, waiting for my friends' content, which very rarely appears anymore because no one uses Facebook. But

Thorbjørn Lynggaard Sørensen 19:52
Exactly, yeah. But also the amount of ads... so Twitter, TikTok, all of these platforms really, really have to be careful. Because the moment that you have to flick through too much, it becomes a hassle to find your friends. And machine learning and artificial intelligence will obviously end up in a place where these ads are the ones that people react to. So they will become more and more aggressive, and more and more, you know, clickbaity. And I think what some of these companies are losing sight of — we talked about that in the episode on deceptive design patterns — is that a computer doesn't have morals, it doesn't have ethics, it doesn't have the sense of what's right and wrong. So if what it's been set up to do is just to generate as many leads as possible, and the success criteria is people resting on that image before it goes on, we're going to see a lot more clickbaity stuff that caters to our curiosity. It's going to be really, really aggressive. I think the result of that is that people will then start leaving these platforms, finding places without ads.

Martin Whiskin 21:23
On TikTok, what I've noticed is that there will be human accounts who get really big, and their accounts grow, you know, incredibly. And then I will see a post of theirs, or a video of theirs, and I will stop. And then I'll realize it's an ad, because a marketing agency has seen this person is now popular and an influencer, so they're getting them to make their style of video as an ad. And that, for me, is a really positive thing, because they're using a human to do it. You know, they're picking that human because of their success, and they want to buy into that person's skill of making videos, and their brand, and that sort of thing, to push their own brand. Yeah,

Thorbjørn Lynggaard Sørensen 22:11
Exactly. But that's about the deceptive design, right? Are we pushing this because we are selling and giving value to our customers? Or are we doing this because we want to make money, so we don't care about how we actually present it? Anyway, I think, for me, it's interesting at least how this is being used these days.

Martin Whiskin 22:49
Someone told me — and I didn't research if this was a true fact or not — that TikTok is becoming more popular among the younger generation as a search engine. So they're looking for information from videos. And of course, if there are going to be AI videos in there, not always with the correct information, people are going to be learning the wrong things.

Thorbjørn Lynggaard Sørensen 23:11
Exactly. But at least I look at my own children, and I see that they are more aware. If you look at it, our parents' generation, born in, you know, the 40s and 50s, they had this idea that what you see in the news or read in the paper is truth. Right? If you read it, it is the truth. Then you have our generation, who grew up with more aggressive advertisement, more aggressive deceptive design being pushed, and a bigger accessibility to all of this information, where a lot of it is bullshit, right? And you still see people from the older generation going on Facebook, reading something that someone wrote and just taking it as truth, because it's a written medium. Now, our kids' generation, they know that when they reference Facebook or TikTok or any of these media, you have to have a proper source for you to actually use it for something in a conversation. Does it make sense what I'm saying?

Martin Whiskin 24:28
Yeah. There's a DJ over here — he's also an author, and he's written about stuff like this — and during the pandemic there were a lot of people posting stuff on Facebook, like: oh, my aunt Mabel said that COVID isn't real, it was just aliens implanting microchips in our fingernails. And because that was written down, people were believing it, and there was this whole movement of people who... it was conspiracy theories, basically. But they really, really believed that stuff.

Thorbjørn Lynggaard Sørensen 25:02
Yeah. And I hope, I really truly hope, there's a movement of being critical of the sources and where you get it from. And with artificial intelligence, it's more critical than ever. I don't want to go into that. But is there anything else that you would like to know about artificial intelligence generally, and what it is? I think one term we didn't talk about was garbage in, garbage out.

Martin Whiskin 25:32
Oh, okay. What is garbage in, garbage out?

Thorbjørn Lynggaard Sørensen 25:36
Well, I'm glad you asked. So basically, the whole idea — and I think that's what I was trying to get to with the text messages and predicting the next word and all of that stuff — is that if you feed an AI garbage, it's going to give you garbage. Because it's learning based on text: these large language models are just going to generate text based on the text they read. So if you feed it really untrue stuff, if you feed it garbage... and I guess this is going to be the difficult thing with AI: is it going to start generating based on what other AIs made? If it's trawling the internet for information, and you take a lot of blog posts, and the bloggers no longer write their own blog posts, then that information is going to get fed back into the machine. And that means that now you're actually feeding it garbage that it made itself. And so you have this loop, you know: AI talking to AI, AI generating AI. So if you don't have properly written texts that really have some solid content and some truth, what the machine will write and give back to you is going to be crap, it's going to be untrue. If you feed it lies, it will tell you lies. It will tell you, you know, a construction of what it thinks — or not thinks, but what it predicts — that you need to know. And that can be just an outright lie. And that's why, you know, search engines, I would guess that they are under pressure. Because you can ask Chat GTP about something, and then, in the end, ask: can you give me some real sources, so that I can see opposing views and stuff? Remain
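Garbage in, garbage out can be shown with a toy word predictor: if the only "data" the model ever sees is a false claim, a false claim is all it can give back. An illustrative sketch added here, not from the episode:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# the model's entire "knowledge" is one false claim, repeated
garbage = "the moon is made of green cheese " * 10
model = train_bigrams(garbage)

print(predict_next(model, "is"))  # -> made
print(predict_next(model, "of"))  # -> green
```

The model confidently continues the lie, because prediction reflects the training data, never checks it — which is exactly the loop problem when AI-generated text gets fed back in as training data.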

Martin Whiskin 27:47
critical? Exactly, yeah. And it reminds me of — have you read 1984, George Orwell?

Thorbjørn Lynggaard Sørensen 27:54

Martin Whiskin 27:54
So there's a bit in there where they are told to edit history. They have to remove all references in books and papers and things about past events in history that might damage the image of the government now — the changing of history. So when AI is giving you stuff learned from AI, the factual content could potentially get thinner and thinner and thinner each time. And I've certainly had instances where I've asked Chat GBT about something, and it's given me events and places and things that never even happened. Because it found it somewhere and took that as gospel. And that's the worry, I think: that history will be forgotten, will be changed, will be edited, and people will start to believe the wrong thing. So yeah, just stay critical and check your findings.

Thorbjørn Lynggaard Sørensen 28:54
Yeah. There's this quote — I can't remember who said it — about who wins the war. When two countries are at war with each other, the one who wins the war is not necessarily the one who succeeds on the battlefield; the one who wins the war is the one who decides what gets written into the history books afterwards. Because that's going to be the truth in the future. And I think that's very much to the point of what you're talking about, and the reference to this book: the ones who write history are the ones who actually decide how we are going to reflect on it. And that's a real danger, I think. Because one of the things that AI is really, really good at is just generating. So you can quickly generate posts, and conspiracy theories. If I want to generate 100 tweets about a political cause, I can do that. I don't have to really do anything other than just generate. And you see that on these different platforms: people didn't come up with these things, they just copy-paste from Chat GPT and put it in there. And that's really interesting.

Martin Whiskin 30:32
I've used it for, like you say, generating ideas when I'm tight for time. But if I'm writing a piece for a video or something, I need it to be me. So I never use it for copy and paste. It's always heavily edited, or just used for inspiration. So I'll say: give me five headers for this topic, and then I will write the content. But yes, because I am my business, I am my brand, everything I create needs to be very, very me. I can't rely on something that doesn't yet give that impression of me. Yeah.

Thorbjørn Lynggaard Sørensen 31:23
Yeah, and it won't, in the end. Although I think we will get to a point where they will be really good at impersonating. And going back to Eliza: the interesting thing they found when they had this chatbot, which was, you know, pretending to be a therapist, was that people would talk to it, they would chat with it. And what they found was that the humans chatting with it would tell Eliza really, really deep, dark secrets. They knew they were communicating with a computer, and they would tell it stuff that you wouldn't normally tell other people. And I think that goes back to that whole thing of perception: the perception of the text that you get, is it good or not, and other people being able to decipher that a machine made it. The moment that you can't — like the Turing test, the moment you can't see the difference — that's when it starts getting interesting, from that point of view. But talking about creativity, and talking about emotion, and empathy, and all of that stuff: it cannot do this. It absolutely cannot do any of this. And it's not going to be able to. I wouldn't say ever, but I don't think that's going to happen for the next couple of thousand years.
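Eliza's trick — making canned replies feel human — was mostly pattern matching that reflects the user's own words back. A minimal ELIZA-style sketch for illustration (these rules are invented here, not Joseph Weizenbaum's originals):

```python
import re

# a tiny ELIZA-style therapist: pattern -> reflective reply template
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please, go on."

def respond(message):
    """Return the first matching reflective reply, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?
print(respond("The weather is nice"))
# -> Please, go on.
```

There is no understanding anywhere in this loop, yet echoing someone's own words back as a question is enough to feel like being listened to — which is why people confided in it.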

Martin Whiskin 33:11
So, can you answer the question for the listeners? I think they all need to hear the answer. Because from what you've just said: should we, or should creatives, be afraid of losing their jobs?

Thorbjørn Lynggaard Sørensen 33:23
AI will not take your job. AI will not take anyone's job. However — and I saw this somewhere, I can't remember, I think maybe it was LinkedIn — AI won't take your job, but people who use AI will take your job. People who use AI will take the jobs of people who don't use AI. And that's going to happen, just as people who didn't use a computer lost their jobs to people who could use a computer, right?

Martin Whiskin 33:51
Yeah. So, I met a photographer the other day, a wedding photographer. And she was saying how, on a wedding shoot, she can take seven or eight thousand photos. And she's got a piece of software now that will filter out the bad ones. It will look for things that are out of focus, or whether someone's eyes are sharp, that sort of thing, and get rid of the bad ones. And another piece of software also knows her editing now, so it can apply some of the editing before she goes in and does the finishing touches. And she said it has saved her hours and hours and hours. Because how could you go through 7,000 photos? Oh my god, that would be a nightmare. And it's the people that are embracing the technology that will keep going.

Thorbjørn Lynggaard Sørensen 34:42
Yes. Yeah. And so, losing your job? No! Changing your job? Your job will change and be different, yes, definitely. And I think that's one of the things that scares people, because I think deep down everyone kind of knows this, right? As for whether AI will take over the world and destroy us — here's an interesting thing. I think there was a survey done, and 10% of the people who create AI and work with AI believe that the end of the world will be caused by AI. And this is different from previously, right? If you look at the dangers of movies, the dangers of books, the dangers of Dungeons and Dragons — all of these claims were always made by people who don't work with it and who don't understand it. Typically these things come out of ignorance. Now, 10% isn't really that much, but that's what's different: 10% of the people who actually know what this is about are saying we have to be careful here. And I think that's an interesting new thing.

Martin Whiskin 36:18
And on that rather joyous note,

Thorbjørn Lynggaard Sørensen 36:21
Oh, no, there's so much more. Can I just say one more thing? I have so many notes. So, no one will steal your job. But here's the thing: what it's being used for right now, and what it can be used for. And I think this is where I tend to agree with the 10%, and that's from an ethical standpoint — the perception of it, what I see and what I believe, and how I relate to AI and written text. Because all of a sudden we can generate a lot of text. And what we've been seeing for the last, I would say, 10-15 years is politicians using this to divide populations, and to create... you know this thing: if you tell the same lie enough times, it will become true. And then you see the danger of people believing what they read. If enough people say something, other people will start listening and believing it. And so the way that it's being used by some people is, in my opinion, very dangerous. Because you can create a lot of content, you can post a lot of stuff very, very fast, that has a very specific political target, and it will move things. And if we talk about the end of the world by AI: no, AI will not, you know, do stuff to end the human race, I simply don't believe it. But it's like a weapon, in some sense, in the hands of the wrong people. And if we don't handle that part of it, then I believe that the 10% of people who work with it might be right.

Martin Whiskin 38:31
So we went from a really sour note to end on, and then went even darker?

Thorbjørn Lynggaard Sørensen 38:36
No, no, you have to end on a positive. Because this is really great. Like, this is absolutely amazing, and as you say, you can use it for inspiration. The AI is so good at, you know, analyzing. So some things it is really, really good at: you get a medical paper from the hospital, and it tells you all sorts of stuff, and you don't know what it means. Or from your lawyer, right? God. You get these papers, and you can type into Chat TPS, like, take this piece of text and tell me what it means like I'm 12 years old. And it will take that information and explain it. If you're traveling to a different country, you can just say, please be a travel guide and tell me the points of interest in this city. It can be used if you're a student. And I think this is where it's very interesting as well. Many, many years ago, there was this tool called SCIgen, which was made as a joke in the scientific world to generate scientific papers, and it was basically just mumbo jumbo. But people then actually started using it to create fake papers that they could file, because no one was really looking, and so you could get a lot of papers. And the way that the scientific world is constructed is quantity over quality: the more papers you have to your name, the better it is. So they turned it around, and now it can be used to detect fake papers. And you can use AI to, like, detect fake news. All of these things are coming and being developed. So there are so many great, good things that you can use it for. Inspiration is just absolutely one: if you're stuck writing an essay at school, you can get it to generate interesting stuff that you can dig into, so that your own work can get better, and you can be inspired by it. What you don't want to do is copy-paste stuff, right? Because it will probably not be true or good. That's just bad.
So I really want to end on a positive note, because I think this development is just absolutely amazing. And we shouldn't be afraid. We should be aware. I think that would be my ending note. I'm so sorry, there are so many things, Martin.

Let me just repeat what we learned in this episode, right? So we learned a little bit about the Turing test. We learned about neural networks, large language models, machine learning and generative AI. Well, we didn't really cover generative AI, but generative AI is, you know, what is used to make pictures and sounds, right, instead of text. We learned about Eliza, the chatbot that's a therapist. We learned some things that AI is good at, and not so good at. And then we learned that we shouldn't be afraid of losing our jobs to AI, but we should be afraid of losing our jobs to people who use AI. And I think that's it.

Martin Whiskin 41:53
So use that, use AI, learn how to use... yeah,

Thorbjørn Lynggaard Sørensen 41:56
learn how to use it. And I had a whole part about prompting, like getting good results. But we'll do that in a different episode, in three years, when it's developed,

Martin Whiskin 42:10
when we’ve all lost our jobs.

Thorbjørn Lynggaard Sørensen 42:11
Do you have any closing words?

Martin Whiskin 42:16
Closing words? Let me think... Bye!

Thank you for listening to another episode of Hidden by Design. You can find out more about us at hiddenbydesign.net, or you can find us on LinkedIn. My name is Martin Whiskin, and this is Thorbjørn Lynggaard Sørensen. Yes, got it. That's good. You can also like, subscribe, and follow the podcast on all of the platforms. It's important to follow it on all of the platforms. Give us five stars, and an excellent review, please, as well. Thank you.

Thorbjørn Lynggaard Sørensen 42:46
Can I say something?

Martin Whiskin 42:47

Thorbjørn Lynggaard Sørensen 42:48
We love you. I said something anyway. I'm a bad boy.
