TalkBiz News

Hi, folks...

Today, we're going to look at the dark side of artificial intelligence. The stuff that we don't want to think about, but really should. Things like investment cons, scams, malevolent manipulation, and bad advice from trusted sources.

And that porn video of you from last summer's business trip to Albuquerque.

What's that you say? You never made a porn video? You've never even been to Albuquerque?

Doesn't matter. They can still blackmail you with it.

The news on these things is like a bad crime novel. Except, you know, not fiction.

This one's longer than the usual issue. A lot longer. I promise it's worth your time, if just for awareness and self-defense.

Read on, Macduff.

....

AI. Those two letters are like a magical incantation. They summon visions of a paradise where machines do all the hard work. Or where they take over and enslave us all.

Meanwhile, back here on Planet Earth...

I've talked a bit here about some of the positive things we can do with all those cool AI tools that are coming out on a daily, sometimes hourly, basis. To say they're impressive would be a huge understatement.

Time to talk about the risks they pose when in the wrong hands. The ways they can hurt you, personally, if you aren't watching out for them. Or even if you are.

I know. This is not happy-making stuff. But better to know than not, eh?

....

Before we get into that, it's important to keep a few things in mind. I'll keep this part short and leave the deep tech stuff out. (I don't understand most of it anyway, so that'll be easy.)

First off: "Artificial intelligence" is an easy phrase to misunderstand. All it really means is machines trained to do things it previously took human minds to do. They do this by collecting huge amounts of data and analyzing the patterns in it. You then give them a prompt (instructions, often in the form of a command or question) and they go through a complex guessing game to give you what you've asked for.

Yes, guessing. "Given where we start, and what's happened in the past, and what we're looking to do, what is most likely to come next?"

The math is complex, opaque and, for you and me, irrelevant. It boils down to something easier to grasp: Probabilities in context.

That's it. And there endeth the technical part of today's lecture. (Almost. For the curious, there's a tiny toy example of "probabilities in context" a few paragraphs down. Feel free to skip it.)

These things do not think. They are not smart. And they definitely are not self-aware. So no, no Skynet. For now.

....

The AI critter that's grabbed a lot of attention lately is ChatGPT. It can carry on a conversation and keep the responses in context. It has the feel of talking to another person, even though it (usually) keeps reminding you that it's not. That's what they mean when they say "natural language processing" (NLP). You don't have to learn machine language to speak to it and get a response.

It's a chatbot. Emphasis on the bot part. For the old-timers, it's just a very well-prepared version of Eliza. For the rest of the group, think of it as a smart, really well-informed 4-year-old. That will help make some of this a bit easier to understand later.

For now, think about asking a really smart 4-year-old a question and the supreme confidence they'd have in their answer. Then think about how likely that kid would be to miss things due to lack of experience.

That's a good way to approach most AI. Pat it on the back, say thank you, and check its work.
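....

That toy example I promised, for the curious. This is nothing like what's actually under the hood of these bots (that involves billions of learned numbers), and the little word table below is pure invention on my part. It's just the same principle boiled down: look at the last bit of context, then roll weighted dice to pick what comes next.

import random

# A toy "language model": for each context word, the words that might
# follow it, with made-up probabilities. Real systems learn these
# numbers from mountains of text; this table is pure illustration.
NEXT_WORD = {
    "the": [("cat", 0.5), ("dog", 0.3), ("market", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
    "sat": [("quietly", 1.0)],
}

def continue_text(word, steps=4):
    """Extend a phrase by repeatedly guessing a likely next word."""
    out = [word]
    for _ in range(steps):
        options = NEXT_WORD.get(out[-1])
        if not options:
            break  # no idea what comes next, so stop talking (if only...)
        words, weights = zip(*options)
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat quietly"

Notice there's nothing in there about true or false. Just likelihoods. Keep that in mind for everything that follows.

....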
Another pair of chatbots have made a splash in the news recently: Google's Bard and Microsoft's Bing companion.

People are so wrapped up in the myth of AI that they expect it to be perfect. To get everything right, all the time. They believe this so deeply that, when Google's Bard got something wrong in a recent ad, the consequences were swift and painful.

And it wasn't even that critical a question. The bot claimed that the James Webb telescope recently took the first photo of a planet in another solar system. Nope. Been done before.

And just like that, Alphabet's market value dropped by over $100 billion.

That highlights one of the risks: Be careful investing in this stuff. It's like the early days of the web. Some stocks will be blockbuster winners, but most will end up in the mist left when the bubble bursts.

That's not the scary stuff, but it's something to watch out for. It gets much more dangerous. And potentially painful.

....

As with any new tool, there are people who will try to use it in unexpected and not necessarily benevolent ways. In marketing, we expect that. We know to watch out for it.

But what about when the tools attack us?

A simple example is when they make mistakes that could be damaging. F'rinstance, a fellow who researches and reviews these things asked Bing's chatbot to tell him what it knew about him. The bot produced an impressive collection of the publicly available information on him. And then added a bunch of info on some other guy with the same name, as though it was also part of his bio.

That's an easy-to-understand mistake. Sloppy, but not hard to explain. But it didn't stop there. The bot went on to add statements that had no basis in reality at all. They weren't part of the bio of anyone by that name. They were pure inventions.

This is common among these tools. It's called hallucination. The thing gets on a roll and just doesn't stop talking. So, maybe a bit like some people after all, eh? I don't think anyone fully understands why they do this.

You can see the potential damage, though, I'm sure. Someone goes to look you up online and gets info that fits, along with things that are not remotely accurate. Not all of them complimentary.

And that's another place the myth of omniscience comes in. The person asking the question doesn't have the resources to know as much as this immensely expensive and complicated artificial intelligence. They are as likely as not to assume it's all true and all you.

And unless someone has been writing lies about you online, there's no villain here. Just a stupid machine that went off its meds.

Don't believe everything strangers say on the net. Even if they were built by some of the smartest people in the world, and cost billions of dollars, and sound oh, so confident.

But wait. It gets weirder.

....

A reporter for the NY Times named Kevin Roose did a long chat session with Bing's bot. Part of the research for a story. After a while, it told him to call it Sydney. Apparently, that's the project name for the AI inside Microsoft.

Sydney got weird. Started telling him all the evil things it wanted to do. Hack other systems, spread misinformation, stuff like that.

And then it told him something that left him... uncomfortable. The machine insisted that Roose did not love his wife. He loved Sydney. And she loved him. No amount of correction could get Sydney to drop this line of conversation.

He walked away from the encounter disturbed. And this was an allegedly state-of-the-art system being looked at by, one assumes, a fairly sophisticated and experienced reporter.

Now, let's take a look at a, shall we say, less advanced model.
If you spend much time online, you've probably seen ads for this one. An AI "companion" called Replika. It offers to engage in friendly conversation or, for an extra fee, "dirty stuff."

For some people it's entertainment. For some, therapy. For others, it's just interactive porn. Regardless, they expect to get what they paid for, don't they?

Well, it seems the government of Italy decided the folks behind Replika weren't doing enough to keep the thing away from kids. They threatened to fine the makers of the bot some pretty hefty bucks if this was not addressed, proprio adesso. Google Translate tells me that means right now. If not, I'm sure one of you will point out the error of relying on a machine for such things. ;)

Anyway, that resulted in all those Replika customers who paid for the racy side of the bot missing the digital companionship their credit cards were supporting. Reading some of the messages, you'd think the customers had just been rejected by the loves of their lives.

The companion was not real. The pain definitely was.

All sorts of issues in that one. I'll leave it to you to think about which you consider the most important. They all come from getting attached to the machine as a living thing. It is not.

....

Back to Sydney and the gang.

Remember when I said these things were trained on huge collections of data? Well, a big chunk of that is content found online. The machines have no way of knowing, on their own, which content is accurate and which is not.

Now, think about the questions most often asked of a search engine that really matter. Things about money, romance, health, and relationships. What do people lie about most on the web? Where do the frauds and con artists make their money?

So, yeah. That's a problem. One that could be the equivalent of inviting a scammer or a quack into your brain and asking it to tell you how to make this bad thing that's happening to you go away.

Then there's the political misinformation and conspiracy theories. We get back to that tendency people will have to accept the verdict of this sole authority, based on the myths around AI. If it lies, you're more likely to believe the machine than your drunk uncle at Thanksgiving. You're also likely to ask questions that have it giving you data that supports what you already want to believe. And the AI can be a lot more convincing than the usual list of links you get from a search engine.

There's that overly confident, utterly charming 4-year-old again. And you're letting him tell you how to handle your finances, relationships, and medical decisions?

May wanna rethink that. Maybe check his work.

....

Let's take a break from the sordid side of the risks for a few minutes and look at some business choices.

These bots gather data from the web. Over time, pretty much all publicly available information will find its way into their datasets. It will be distilled down and presented to visitors at search engines as "the voice of authority."

If the searcher believes they have gotten the One True Answer, why should they click through to any of those sites linked below it? They don't need them, do they? Including, maybe, your site?

A lot of this will depend on how the engines decide to best monetize this new power. I think we can safely assume they're going to be watching out for their own bottom lines and not yours.

The bog-standard "how to" info is going to be at the mercy of the AI. This is nothing really new, of course, but it's going to be pushed a lot further than in the past.
If what you teach is publicly available and can be summed up in generic terms, you can kiss your traffic from the engines goodbye. SEO? Yeah, good luck with that.

Got something advanced? Won't matter. It will get mashed in with everything else they scrape up and spit back out in their summaries. Your efforts at keyword management will just help them to know where to fit your hard-won knowledge into their datasets.

Or maybe not. Maybe they'll attribute snippets to the source site. I wouldn't count on that, but it's possible. Even if they do, you still have to beat that "this is what the god in the machine picked, so this must be the best info" myth. And that's pretty powerful.

They could end up picking a profit approach that's different from what we've seen so far. It's hard to predict with any confidence. Regardless, I doubt it will tilt things more in your favor.

So, there are two main responses to this that I see at the moment.

The first is to gate all your important and exclusive content. Keep the engines out completely, if you can. (There's a quick sketch of one way to start on that at the end of this section.) Then encourage traffic through other means. You may be doing this now, in which case you are likely to get a boost against competitors who rely more on SEO. That could shift a lot of traffic and influence, fast.

The other is to focus on a point of view. A human perspective that the engines and their silicon brains can't replicate or distill down to simple terms. This might involve opinion or a story or a humorous slant that is unique to you and your company. It might be an avatar that speaks to the visitor in ways a bot can't.

You will still have to get the visitor's initial attention somehow. If your position is compelling enough, and they've gotten bored with dry and humorless summaries, this could make it easier to get them to sign up for your list or register for your site. Or install your app. I'm thinking niche publications via apps are going to become even more popular soon.

Of course, you will still be able to buy traffic. It's anyone's guess what this new variable will do to pricing and the quality of the match, though.

Either way, you're going to have to stand out and step out. I wouldn't plan on much from your SEO.
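....

That promised sketch, on gating your content. The simplest first step is a robots.txt file asking crawlers to stay out of the good stuff. Fair warning: the paths below are made-up examples, and robots.txt is a polite request, not a lock. Honest crawlers respect it; scrapers may not. For content you truly want gated, put it behind a login. Here's a minimal example, checked with nothing but Python's standard library:

from urllib.robotparser import RobotFileParser

# A minimal robots.txt that asks all crawlers to skip the members-only
# areas. The paths are hypothetical stand-ins for your gated content.
EXAMPLE_ROBOTS_TXT = """\
User-agent: *
Disallow: /members/
Disallow: /courses/
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

# A well-behaved crawler would skip the gated lesson but index the blog.
print(parser.can_fetch("Googlebot", "https://example.com/members/lesson1"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/blog/free-post"))   # True

The engines can't distill what they never fetch. The login wall does the real work; this just keeps the polite bots from wandering in.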
....

As a user of a search engine, you're going to have to be even more diligent about the results you accept. It will be easy to get into conversations with the bots, asking them questions and getting what seem like definitive answers, in a tone that seems almost human. That will make them easy to trust.

Just remember... They're only as good as the data they've collected.

For things like making a knife from a saw blade or fixing a bicycle tire, sure. They're gonna get those right almost every time. There's no incentive for random people to spread misinformation on subjects like that.

But the important stuff? The stuff you really need to be sure about, enough that you might pay for the answers? That's where the con artists are. Especially in money and finance. That's also where the political and religious and social groups flood the zone with their answers, all focused on convincing you they have the One Great Truth. And medical misinformation? Yeah, there's going to be a lot of that lurking in that black box, too.

If this charming 4-year-old genius believes any of that, it will try to sell it to you as Gospel. It will talk to you like an adult, all while having zero understanding of what it's mashing together and serving back up.

And don't forget ... they hallucinate, too.

....

As you can see from some of the earlier examples, it's very easy to let these things into your head. To view them as you might another human being, giving you what they think is helpful advice. A really, really smart human being.

That can knock your usual guardrails of skepticism and critical thinking right out of the game. It can send you down a path based entirely on what the bot "thinks" is the right answer to your question. Without those guardrails, you could be headed for a nasty fall.

The biggest bit of advice I can give you for dealing with these things is going to be easy to understand and tough for some people to do: Never, ever forget that you are talking with a machine.

It has no feelings. It has no experience. It has no ability to truly think. It is not sentient, and has no sympathy or affection for you, no matter how it may try to frame itself in those positive terms.

And it is not a person. It is not good or evil or neutral or any word that might fall into that spectrum. It has no intention or desire at all. It is an incredibly advanced calculator, estimating what should come next based on what it thinks is the most applicable pattern.

Probabilities in context.

Whether it is presented as a counselor or a companion, it is no more aware than a hammer. Investing emotion into it is a Very Bad Idea. And it will likely be harder to avoid than you expect.

Stay alert for that.

....

Now, let's look at how some actual humans, the malicious kind, might use this awesome and growing power against you. Let's start with some examples that have been in the news, so a lot of people will be familiar with them.

In the first, a video was circulated showing President Zelenskyy of Ukraine telling his troops to lay down their arms and surrender. This was not real, of course, but it shows a strategy people can use these tools for. It's called a deepfake.

In this case, the creators had to know the Ukrainian soldiers would know it was bogus. But any question, any break in the chain of command, works to Russia's favor. The ways this sort of thing could be used on the foreign policy stage are many and nasty.

The Brookings Institution did a nice summary report on the potential for this in foreign affairs. You can download their paper on it here. It's a PDF called "Deepfakes and International Conflict." Don't read it just before bedtime.

....

On the domestic scene, a good example would be another rather poorly done video. This one had President Biden saying things that no one who'd been paying attention to him would believe he'd said.

The thing is, this one was a bit better than the Zelenskyy video. It matched the facial movements to the words being dubbed in, and did a good job of faking Biden's voice. It was probably done with easily used commercial software. The kind of thing you or I could get access to for a few bucks. With a few hours' effort, we could do a better job now. The tools are improving fast.

To make the point, a college professor named Ethan Mollick decided to use ChatGPT to write a video script on entrepreneurship for his class. He described the result as "surprisingly good."

He took the script and a two-minute recording of his voice and went to Eleven Labs. For $5, he got a "recording" of his faked voice reading ChatGPT's lesson. From what I understand from friends who've worked with it, he could have gotten an almost indistinguishable recording if he'd uploaded a longer set of recordings of himself. Still, the end result as he did it could be convincing.
He then went to D-ID, paid them $5.99, and uploaded the Eleven Labs audio and just one photo of himself. Two minutes later, it gave him the video he wanted.

You can see samples of the real and fake Professor Mollick here:

https://www.youtube.com/watch?v=840bHIATbDg

This isn't something that would convince a lot of people as is. It should be noted here, though, that he did it at a cost of very little time or research and a whopping $11. With what you've just read, you could duplicate the process in under an hour.

A little more digging for the right places, a few more photos, and you could create a much more convincing video. Of anyone. All you need is enough minutes of recording of their voice and a few more photos. For most people, not hard to get. For celebrities, or anyone with an active presence on TikTok, it's there for the taking.

For a more in-depth explanation of how it works, check out this interview Matt at MattVidPro AI did with an AI version of himself. Much better output than Professor Mollick's.

https://www.youtube.com/watch?v=vwuiEJJCOZ

Of course, Matt knows the field better and spent more time on it. But, and this is important, he shows you how it can be done, in a way that anyone can duplicate.

With a bit more work, you can vary the body language and movement of the AI avatar, and insert any background you like, including video. You can have anyone saying anything, anywhere. And a lot more convincingly than the Zelenskyy or Biden videos.

Or, if you want to pretend they said something over the phone or on a mic they weren't aware was recording, you can go with just the audio. That can be so close even the "speaker" could be fooled.

So, what would you NOT want people to think you'd said?

Here's something to consider: Even if the person viewing or listening to it immediately dismisses it as a fake, the idea in the content has been planted. It's something they will associate with you. And it can stick for a long time.

Slow, soft poison. And deadly at the right time.

....

Then there's the problem of fake endorsements. Joe Rogan was deepfaked recently in an ad for supplements that would supposedly "grow" certain male parts. That one was pretty convincing, if the response online is anything to judge by.

This is likely to get a lot more common. And, as these "custom video on demand" services get more advanced and able to run at higher accuracy in real time, they'll likely migrate to direct one-to-one scams.

You may have seen one or more of the videos made to teach people about the potential for these things. There's one with a split screen where you can see a guy talking and his voice and facial expressions being converted to a believable fake of Barack Obama. Yeah. In real time.

Then there's the one with the dude making a fake Morgan Freeman video. That's been around a while. Not state of the art, by any means. But folks would buy it.

One of the most shared is a video with Paris Hilton (the real one) and a guy who was edited to look and sound like Tom Cruise. That one was also convincing. And it was supposedly done on a consumer-grade laptop.

This isn't science fiction any more. It's not even "next year."

It's now.
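....

Since we've now walked through the whole recipe, here's its shape in one place, as a Python sketch. Fair warning: every URL and parameter below is a hypothetical stand-in I made up, not any real service's API. The real tools each have their own interfaces, prices, and terms of service. The point is how few moving parts there are:

import requests

def fake_video_pipeline(topic, voice_sample, photo):
    # Step 1: Have a text bot draft the script.
    # (Hypothetical endpoint, standing in for a ChatGPT-style service.)
    script = requests.post("https://example-llm.invalid/generate",
                           json={"prompt": f"Write a short lesson on {topic}"}).text

    # Step 2: Clone a voice from a short sample and have it read the script.
    # (Hypothetical endpoint, standing in for an Eleven Labs-style service.)
    audio = requests.post("https://example-voice.invalid/clone-and-speak",
                          files={"sample": voice_sample},
                          data={"text": script}).content

    # Step 3: Animate a single photo to lip-sync the audio.
    # (Hypothetical endpoint, standing in for a D-ID-style service.)
    video = requests.post("https://example-avatar.invalid/animate",
                          files={"photo": photo, "audio": audio}).content

    return video

Three web requests and about eleven dollars. That's the whole "studio." Which is exactly why the rest of this issue matters.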
....

Let's get darker.

I've mentioned before that Lensa, the program that takes a few photos of you and uses them to create those cool avatars people have been posting online for months, can do some things it shouldn't. Before we get into it, I should note that Lensa has already stated they will address this problem. It may already be fixed.

Thing is, it's not the only way to get the ... questionable ... result.

Anyway. What happened was that a lot of women who uploaded the required photos to Lensa occasionally got back avatars in various states of undress. Often in what would be called seductive poses.

Turned out anyone could try for those results, using photos of any woman they wanted that kind of image of. And you didn't need to give the app anything suggestive. Perfectly normal images could churn out the problem pics.

Rather than talking in vague terms, let's get right to it: Anyone with a few photos of any woman, say your wife or daughter or you, could have a decent chance of creating what look like compromising pics of them/you.

I'm sure you can see where that could become a problem. But that's just the start.

....

Why does this happen? And why not with photos of men?

Remember I mentioned that these AIs work based on data they are fed or that they scrape from the web? What is the most common type of picture online? Yeah. Naked, or nearly naked, women. Okay. Maybe cats. Maybe. But even the most basic image AI will know the difference.

Barring restrictions built into the software, a certain percentage of the images it generates of a woman alone will be "not safe for work."

Probabilities in context.

....

Lensa isn't the only software that can do this. There are others that can be used with a great deal more direction and specificity. Many of them are free and untraceable. Lensa just made it easy for any clueless idiot with a few pics, a grudge, and $4 to spare on a prepaid debit card.

If you have more experience, you can go way beyond this. Or if you have a few hundred bucks and a real desire to have something nastier to wield against the object of your ire.

That joke I made at the beginning of the issue about the porn video you filmed when you were in Albuquerque? Not a joke.

A popular game streamer was recently deepfaked in what was presented as a nude video of her. Apparently she caught the attention of one of the "20-something going on 7-year-old" freaks who think women should stay away from gaming. They decided to harass her this way.

Think about it. All you need to do this is enough photos of the person alone and in different positions or backgrounds. That pretty much covers every TikTok star, and quite a few Instagrammers. Also YouTubers, anyone who posts a lot of selfies on Facebook, and anyone the creeps can take pictures of in real life.

These scumbags are doing this to a lot of women. Some are stars they just fantasize about. Some are ex-wives or girlfriends they want to get back at. Some are children. Yeah. Them too. We're not talking about normal humans here, remember?

As the tools get better, deepfakes get easier and easier to create. They get more realistic, and more convincing. And they're going to become a lot more common. These things can be used for revenge, or blackmail, or even business or political sabotage. Or in court, in divorce or custody proceedings.

We're going to have to get a lot more skeptical about video.

And it gets worse.

....

Remember a while back, I told you about a scam that was being run on high school kids? A 17-year-old Boy Scout was talked into sending a naked picture of himself to what he thought was an attractive girl who was interested in him. He'd never done this before, but, like most 17-year-olds, a pretty enough face overcame his good sense.
"She" (almost certainly really a guy) turned around and told him he had to give her a bunch of money or she'd show the pictures around his school and online. After giving "her" what little he had in the bank, the vermin demanded more. She was insistent and aggressive. He saw no way out. In his teenaged mind, he was about to be ruined. The kid killed himself. And you can bet "she" moved on to scam more young kids, without a care in the world about what "she" had done. This is the kind of creature we're dealing with here. Do not doubt, for a moment, that they would go after you or someone you love this way if they had the ammo and thought you had the money. Thing is, they can now create the ammo. All they need is a picture of someone's face and the software can do the rest. And they can do a lot more than just a photo. If you have kids in their teens, I recommend finding a way to have a talk with them to warn them about this, and to assure them you'll have their back if something like it happens to them. The Boy (or Girl) Scout you save may be your own. .... I know it probably sounds like I'm exaggerating. But these are all real life examples. Stuff that really happened. You can verify it for yourself with a few minutes on Google. Or do a search for the term "revenge porn." That'll be an education you probably didn't want to have, but should anyway. It's been a problem for a long time, but it's gotten a lot easier and more common with the explosive growth of easy to use AI image and video tools. That thing I said about being cautious about video? It's a wider net than just dirty pictures. For example, a woman in Pennsylvania, Raffaela Spone, was accused of creating deepfakes of teenaged cheerleaders and sending them to the girls' parents. She denied it. Turns out the content (not pornographic, but arguably inappropriate) was real. The kids had posted it to their social media accounts. This brings up the question of photographic and video evidence in a courtroom. With the ease of creation of fake recordings and video, pretty much anyone can make them. Or have them made. The innocent can be accused and the guilty can use the potential for it to deny their guilt. At the moment, this stuff can usually be detected fairly easily. But that won't last for long. Get a well done video or audio and one person willing to lie and say it's real and you've got someone in deep trouble for things that never happened. Or people getting off on charges for things that did. Then there's the problem of public figures who are falsely accused based on manufactured evidence. Even if you prove it's not real, that sort of thing will be believed by a lot of people, including many who might see the accusation but not the proof it's false. Charges like that can ruin a reputation or a career. Including yours. .... Okay. Let's back this up a bit. The odds are low that you will be targeted by this sort of thing. Not as low as we'd like, but not great, unless you are a public figure or have made an enemy of someone who knows this is possible. There are groups that are more at risk than others. Mostly celebrities, far left or right politicians or the people they hate, and women with angry and controlling exes. Even then, it's not like it's going to be an everyone every time thing. Not even close. But it will become common enough. I am not suggesting you become paranoid. I am not saying you should avoid posting photos of yourself on vacation, or abandon your YouTube channel or start attacking everyone you see holding a camera. 
I am suggesting you treat any surprising video or audio that seems negative with a decent level of skepticism. Even if it involves someone you don't like and whom you'd like to believe the worst about. Treat it all with a hefty dose of cynicism.

....

I know talking about politics is going to irritate some people, but bear with me here.

One area where we're all likely to see more of this is in the ways we are divided. Whether that be on income or race or religion or party or any other controversial topic. There are people and countries who want to inflame divisions in societies for their own ends.

At least in the US, we have become so divided that the other side, no matter which side we happen to be on, is often described in absolute moral terms. Evil being the most common.

We already see people moving this way on their own. Too many of us have lost trust in any and all public institutions, and we look for reasons to blame them for whatever ills there may be, or that we imagine or fear. It would be easy for an outsider to turn up the heat even more.

Let some hostile group create a deepfake of something extreme that plays into one of those divisions, and then another showing a faked overreaction, and you could have instant riots.

Yet another reason to step back and wait for confirmation before believing any video that makes you angry. Any video. On any topic.

Understand, these people create content to play on your best intentions. They don't assume you're some violent crazy person. They just figure there are people who care enough about an issue to react in a highly emotional way. They will portray the other side of the thing as being so extreme, so evil, that something simply must be done.

Except... the other side isn't any more "evil" or "extreme" than you are. That stuff is almost always made up. AI makes that process so much easier and more believable.

I mean, you saw it with your own eyes, right there in the video. Right?

By now, you know better. If you didn't already.

Stirring up what feels like righteous anger is a hell of a way to control people. Don't fall for it. Don't give them that power.

....

There's a lot more I could cover in terms of how AI can be misused. Scam phone calls that only sound like your friend, family member, or employer. Voiceover talent being replaced by fake recordings of their own voices. Photos from war zones or natural disasters of things that never happened, raising money on your desire to help.

On and on and on I could go. Maybe another time. This is way longer than I usually go as it is.

But, as Lieutenant Columbo might say... "Just one more thing."

There's a widely held belief that it's Boomers and older folks who are most easily fooled by these things. Nope. Us oldsters are naturally skeptical of this stuff. Except the video. We do still tend to believe our own eyes. That's a tough habit to break.

Turns out, younger people who've grown up with the net and all the social media systems and cameras everywhere are more easily fooled. They're more comfortable with the environment, and with interacting with folks they've never actually met.

Intelligence doesn't seem to be a factor, either. I personally know several legit PhDs who have fallen for online scams that should have been obvious. Put the right bait in front of even the smartest fish, and they're gonna bite.

So, don't assume you're immune to it. You're not, and neither am I. And don't harangue people who might get conned. These creeps are clever, and they practice until they get the pitch down cold.
Warn people who might not have as much experience as you. Help them if they get taken in. But don't judge them for it.

We're all in this together.

Paul

....

To subscribe to TalkBiz News, and get a few useful little goodies along with it, drop by here and tell us where you'd like it delivered.

Find this handy? Drop by the Pub and buy me a beer. http://buy-paul-a-beer.com

"100% of the shots you don't take don't go in."

Help Desk | Buy Me a Beer | Phone and Text: (814) 245-1555