Don’t Take the Bait: New Wave of Social Media Scams

November 20, 2025
Hey, Jason, thank you for coming again. It's been about a month since we last talked, and a lot has evolved since that conversation. We talked before about a lot of historical breaches, some of the bigger stories out there, and how we got to a point where people are being attacked in a way we haven't seen before. Threat actors are going after individual accounts, they're going after your network, and they're leveraging that in ways that are really new and novel. So I'm excited to talk about what's evolved since then, and some of the AI capabilities that are pushing threat actors forward much faster and with much more sophistication than we've seen.

Yeah, absolutely, I'm looking forward to the conversation. AI continues to move at light speed, and what we're seeing now is that attackers are becoming very AI savvy. They can leverage state-of-the-art models and state-of-the-art techniques to do things faster, better, and cheaper than they could in the past.

All right, let's get into it.

All right. So I'd like to dig into the DMs and some of the phishing that's happening specifically through that channel. I recently read about a pretty big crypto phishing attempt that fooled a lot of users. Maybe you could talk about that a little.

Yeah, that happened on X, and it was kind of interesting. A user, typically a crypto influencer, would get a DM, a direct message, with a link in it, and the whole point of the DM was to entice the user to click on that link. Now, the link itself looks innocuous, think calendar.google.com, for example, but when you actually clicked on it, it didn't take you to Google.
It took you to another site that looked very similar to an X application authorization page, and if the user clicked OK, the malicious actors would then have complete control over the user's X account. And from there, they can do anything. They can send posts, they can send other DMs pretending to be that user. It's basically game over at that point, and the first thing they're going to do is lock that user out of their account, so then the user has to figure out how to work with the X folks to regain access. I think there's a cautionary tale there: be very cautious of DMs you get from people you don't closely know. If there are links in those DMs, be very suspicious of those links, even if the previews look legit. If you do click on a link and it takes you to another website, examine the URL in your address bar to make sure it's exactly what you thought it was going to be. And if it's not, close it immediately.

Normally, legitimate sources aren't coming through that route. They're going through more established channels of communication, right? Knowing who's reaching out to you and validating that they're a legitimate source is an important step to take.

Yeah, but the downside is it's getting harder and harder. Let's go back to the X example. If a user had their account taken over, the attacker could DM people that user knows, and those people wouldn't realize who they're actually speaking to, because the messages are coming from the legitimate user's account at that point.

Yeah, as an influencer, you're really a key target, because these threat actors know that once they're in, they're getting that elevated privilege, that elevated authority, into the community the influencer is connected with.
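That address-bar check, comparing the real hostname against what you expected, is easy to get wrong by eye, because attackers bury a trusted name inside a longer hostname or in the user-info part of a URL. A minimal sketch of the idea in Python follows; the `TRUSTED_HOSTS` allowlist here is a hypothetical example for illustration, not something from the conversation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; in practice the trusted
# hosts depend on what the link claims to be.
TRUSTED_HOSTS = {"calendar.google.com", "x.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL's real hostname is on the allowlist.

    Lookalike tricks this catches:
    - trusted name used as a subdomain: calendar.google.com.evil.example
    - trusted name smuggled into the userinfo part:
      https://calendar.google.com@evil.example
    """
    host = urlparse(url).hostname  # the real authority, userinfo excluded
    return host in TRUSTED_HOSTS

print(is_trusted_link("https://calendar.google.com/event"))           # True
print(is_trusted_link("https://calendar.google.com.evil.example/x"))  # False
print(is_trusted_link("https://calendar.google.com@evil.example/x"))  # False
```

The key detail is that everything before an `@` in the authority is user info, not the host, which is exactly how a lookalike URL can show a trusted name while pointing somewhere else.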
And like we talked about, these attacks are becoming easier and easier. There's almost zero risk and almost zero cost, so attackers can launch them at scale, and maybe only one in 100,000 people falls for it, but that's all they need.

Yeah. And the other key thing is what you mentioned earlier: the technologies that are evolving enable this to happen much more cheaply, at much higher scale, and much faster.

Yeah. And the other side of this is that attackers are becoming more sophisticated in who they target. They're thinking like marketers, like business people, at this point, so they're not just going to randomly spam a million people. They're going to figure out, okay, these are the top 100 crypto influencers, I'm going to specifically target them, and I'm not going to worry about anybody else.

And they're using AI tools to do that research, right? They can quickly go in and query: who are these influencers, what's their scale, what's the normal messaging they're putting out to their base? And they can plug into that to really socially engineer an attack.

Yeah, that's exactly right. They can look at historical tweets, they can profile, and they can tailor each message specifically to that person and speak in a language that resonates with them.

So outside of doing diligence, are there other things individuals can do, especially influencers with a big community, to make sure they don't fall for these pretty sophisticated attacks?

The best thing you can do right now is understand the techniques and tactics the attackers are using, and raise your awareness. Be very suspicious. Be very diligent, especially before you click on anything in a DM.
Yeah, I think people have gotten used to that with their texts. They have to apply the same level of diligence to their social networks, their DMs, even what's coming through their messaging apps and dating profiles. And if it's somebody in your social circle and you get a questionable DM or even an email from them, just pick up the phone: hey, Chris, did you send this to me? Is this real?

Yeah, that's probably the most foolproof way of determining whether something is legitimate or not.

I think another way they're coming at influencers is through fake brand collabs, right? Hey, you've won this sweepstakes. These are becoming more popular; businesses are creating large revenue streams through giveaways and sweepstakes you can participate in. So there are legitimate offers out there, and you do get real messages, but attackers are also leveraging that new path to create fake giveaways.

Yeah, absolutely. And in terms of broader tactics and techniques, it's basically the same thing: they're preying on human psychology. They're preying on, hey, I'm an influencer and somebody's recognized what I've been doing, this is great, so you're going to trust them more. Or they could be instilling a sense of urgency: hey, you've only got 24 hours to sign this paper or collaborate with us, or we're going to move on to somebody else.

So we've talked about influencers as targets. Let's say the attackers get into an influencer's account and have control of it. As someone who follows that influencer, how do you validate that what they're sending out is actually factual?

So there are a couple of ways.
One, you can go to their direct feed and see if they're talking about it in any of their other posts. Two, there are verification models on a lot of social networks. When Elon purchased Twitter, he kind of did away with the blue check mark as pure verification, but verification badges still exist on Instagram and the other social networks, so looking for that blue check mark or its equivalent is a good way of validating that this person is who they say they are.

Yeah, you make a really good point. Most influencers have multiple social outlets. They're not just on Twitter; they've got an Instagram account, they've got Threads, they've got other channels. And if they're not talking about it across the entirety of their platform, it's probably something you want to take pause on.

Yes, absolutely. And the big one I've seen is any sort of crypto giveaway. I don't know how many posts I've seen talking about the Elon coin, for example, and how they're going to airdrop to the first however many users. If somebody is influential in one market or one topic and suddenly, out of the blue, they say, hey, come buy my crypto, come buy my new coin: huge red flag.

Yeah, there's a lot of it where you see something from Bill Gates, and there's a video. They're using deepfake technology, which is readily available now. You can generate really high-quality video for next to nothing, almost at a free-trial level. So if Elon is supposedly talking about something through an influencer and it seems too good to be true, and Elon himself is not talking about it, or Bill Gates is not, or Mark Cuban is not, on their direct feeds, it's probably something you should be very cautious of.
Yeah, and AI-generated videos are getting so good these days. I feel like I'm a pretty savvy user, but even I'm struggling to distinguish a real video from an AI-generated one. I'll give you a good example. A couple of weeks ago, my Instagram feed was flooded with what we'll call cat reaction videos, and it was hilarious, because they were all kind of the same. It would be a woman in her kitchen with a cake shaped like a cat on the counter, and sitting next to her would be a cat watching what's going on. She would take the knife and cut the cake's head off, and the cat sitting next to her would suddenly attack her, because the cat thought it was a real cat. I saw three or four of those and thought they were legitimate. It was only after maybe the fifth or sixth one that I realized, hey, that cat's not really moving the way a normal cat would. I was like, oh, this is just AI generated.

Yeah, it's getting very difficult to tell the difference, not only in the video itself, but in the quality of the messages that are coming through and the sophistication and thought behind the overall strategy for an attack.

In terms of deepfakes, it's still a little bit tough to produce completely realistic ones. We're probably six months to a year out from them being indistinguishable from real video. But if you're copying a celebrity in a deepfake video, there are often tells. Maybe the voice seems a little bit off, or maybe their body shape is not quite what you expect. Say there's a Taylor Swift deepfake; I think she's about 5'10", but in the video she looks abnormally short. The face may not move quite as elastically as a natural face would. So if you examine them carefully, you can often find tells, but it's becoming harder and harder.

And I think you hit on an important point.
When we were here talking about this kind of attack a month ago, really none of the things we're talking about now were prevalent; the video capabilities weren't where they are today. That's just been a month. Go six months down the road, and the ability for threat actors to use these technologies to become incredibly compelling is kind of scary.

Yeah, and the price is coming down too. Sora 2, which is OpenAI's video generation tool, actually has protections; it won't allow you to do deepfakes of known individuals. But there are other models that will. Grok on X is a good example: you can make deepfakes of Donald Trump or other known celebrities, and it gives you fairly wide latitude as to what sorts of videos or images you can create. At some point, these are just going to be commoditized, where anybody can generate deepfakes of anybody at no cost.

Yeah, speaking of how sophisticated these deepfakes are getting, have you seen the video of Jake Paul?

No, I haven't seen that one yet.

We should take a look at the clip. All right.

I have an announcement in 3, 2, 1: I'm gay. I hope everyone here supports me. Here's my makeup set. Look at this. Oh my gosh, guys, it looks amazing. Let's turn into a kitty. I've got the rainbow dress on, I've got the rainbow claws, and I'm starting with a little bit of foundation to even everything out. I love the way this makes my skin look soft, and it gives me that smooth base for the whiskers. So we got the nose. All right, let's get into it. I'm starting with a little bit of foundation, just buffing it in with this brush. Super light, gives me a clean canvas, but still lets my freckles show through. Now, some pink blush around the cheeks, matches the fit today.
Blend it up toward the temple so it lifts everything, a little on the nose.

He's since come out and made it very clear that those were AI, but it fooled a lot of his audience. I mean, it's hilarious, it is funny, but a lot of his followers were fooled by it. And these are people who see him every single day. He's heavy on social, he's obviously got a big brand, and even those who are really heavy consumers of what he puts out weren't clear whether it was real or not.

Yeah. At face value, I can see how people would be fooled by it, but if you look closely, there are some things that can signal it's AI: the skin's a little bit too perfect, the lighting is a little bit too perfect, the teeth are a little bit too perfect. But like we were talking about earlier, this is November 2025; who knows where it's going to be next summer. It may be completely indistinguishable, even to trained individuals.

Yeah. Another avenue we're seeing a lot, coming through either emails or DMs, is these fake help desks. Facebook, for example: they're spoofing that and saying, hey, your account is going to be locked down in 24 hours, trying to get you to take some action.

Yeah, this is another type of scam that's been around forever: oh, your computer's been hacked, you need to contact us immediately to help get it cleaned up. Again, AI is just layering a level of professionalism on top of these DMs and messages that didn't exist a few years ago, maybe even a few months ago. People just have to be more diligent, or they're going to get fooled by these things. There's no other option. And it's incredibly hard for folks because, like we talked about earlier, people are busy, right?
They don't want to have to play detective on every single message they get, especially when there's a sense of urgency instilled into those messages: hey, I've got to take action now, or bad things are going to happen. So it's becoming increasingly challenging for folks to make a decision.

I think urgency is something we're seeing throughout all of these, and it's a red flag. When you see something driving a level of urgency, you've got to do this in 24 hours or you're going to miss out on this opportunity, that should make you pause. Any legitimate person or avenue of contact is going to give you appropriate time to take action, if it's something that's real.

Yeah. And while social networks haven't been great at this, I think financial institutions have been really good about it. They'll tell you straight up: hey, we're not going to send you a text message, we're not going to send you an email. There's going to be a message inside your app, or somebody's going to call you and verify that they're actually part of our organization.

Yeah. One of the best things an individual can do is, again, go to the source. They can reach out independently. If it's legitimate, the organization is going to be ready to field that call and channel it to the appropriate division.

That's absolutely right. But with some of these DMs we're talking about, it's challenging, right? Because, like we said, these accounts could have been taken over. So even if you do send a response DM back to the original source, if that account is taken over, you're not talking to the person you thought you were talking to. You're talking to the attacker at that point.
Another group that's being targeted pretty effectively is politicians. Being in Florida, we should probably talk about Marco Rubio and what happened with some deepfakes that went out and had some real national security implications.

Yeah, so voice is much easier to fake than video. With video, you can often see things that aren't quite right in terms of movement, the skin, or other physical attributes. With voice, it's much tougher because you don't have that visual aspect, and cloning a voice is super easy. Anybody can go out to ElevenLabs and clone a voice in five to fifteen minutes, and it's almost indistinguishable from the original speaker's voice. So then the question becomes, how do you thwart that? How do you stand up protections against those sorts of attacks? I don't think we have a great answer yet. It's a work in progress, and a lot of companies are working on it, but we don't have a great solution today.

I think what's interesting about the Rubio case is they actually used a communication channel that's typically tied to security. They used Signal. They didn't go through Telegram or some of these sketchier applications. They used Signal to send messages to some foreign ministers; I think there were text messages as well as some voice messages, and they were compelling.

Yeah, I think that speaks to why government officials shouldn't be using consumer-grade encryption for these sensitive types of communications.

Agreed. We hit on it a little earlier, but these account takeovers are getting more prevalent, and once attackers get the account takeover, they're really going hard, in a short period of time, after whoever that account is influencing.

Yeah, account takeovers have been a problem for a long time.
I used to be in email security, and we saw them constantly, and they're still a problem. I advise a company in Chicago, and they came to me recently with some questionable emails and asked me to take a look. The interesting thing was that the emails weren't from the company in question; they were from a third party, a partner of theirs. They didn't have visibility into that partner's security systems, so they just trusted all the email they were getting from that organization. What had happened is that this third-party partner's security was probably questionable, we don't know for sure, but one of the folks over there had their email account breached. They didn't know it, but the attacker sat in their mailbox and watched the conversations go by for some period of time, until they started talking about financial transactions. That's when the attacker jumped onto the email thread with lookalike email addresses, to try to convince people that, hey, the money shouldn't go over there, it should go over here instead, and "over here" is the account controlled by the attackers. So this is a constant problem. AI, again, is accelerating it, because you've got foreign nationals who are not necessarily native speakers who can grab large email threads, very easily understand the context and the way the words are communicated, and then inject response emails into that thread that seem completely native.

Yeah. And it speaks to the fact that some of these attacks evolve over time. Once they get in, they're not necessarily just shooting things out immediately.

That's right.

They're monitoring internal information and creating very targeted emails that fit into normal communications.

Yeah, oftentimes the attackers are going to jump in straight away, right?
But other times they can be very patient. They can just watch the communications go by and wait for the right moment, and then they're going to jump in, and that's when bad things happen.

Yeah, a lot of what we've talked about is getting into an account and immediately reaching out to a user base in a very expedient fashion and driving some urgency. But there's this other play, which is a lot more targeted and even more difficult to figure out.

Yeah. Potentially they can sit in these accounts for days, weeks, even months, and just monitor the communications until they feel the time is right.

Speaking of account takeovers, another one that just happened recently was this meme coin. They breached some pretty big names, Adele, Future, Tyler, Michael Jackson, I think Pink Floyd was involved, where they got hold of all their accounts, and then they created this meme coin and posted it to their accounts.

Yeah. It's hard to know what's real and what's not these days. Politics aside, when you've got a president who's launching a Trump coin, which is a meme coin, and a Melania coin, which is another meme coin, and both are legitimate, and people are making millions and millions of dollars on them, it's not outside the realm of possibility that your favorite celebrity may launch a meme coin. How do you know?

Yeah. So part of it as well is that one of the avenues we've been talking about is these direct attacks, the phishing attempts, reaching out through DMs and various other sources. But credential leaks are one of the biggest ways attackers can get into an account, and no one even knows; as a user, you didn't even do anything. Adele's account, or your personal account, or anyone's could be part of these data or credential breaches that are out there for threat actors to access.
Yeah, and that's where some of these fundamental security measures come into place: not using the same password on multiple sites, and making sure you have two-factor authentication enabled whenever possible, especially on your email. Have that locked down as much as possible, because once an attacker gets into your email account, it's game over. They're going to be able to access any other social account you have, more than likely.

Yeah. And I think looking at services that watch for that, out on the dark web or on known breach sites, and really monitoring the internet for the possibility that your credentials are out there, is an important step to take.

Yeah, absolutely. Because we're getting blasted with so much news these days, most people just don't have time to keep current with what's going on in terms of data breaches or credential leaks or anything else. So having a dedicated monitoring service really gives you a leg up, a sort of early warning system, so you know that something requiring your attention has happened and can take the appropriate steps to protect yourself and the rest of your social circle.

Yeah, it seems like every day there's a headline: a breach of a million credentials from here, from there. You mentioned not reusing passwords. It's an important point, because you could have a password that you used in, let's say, an old Gmail or Hotmail account that you're not using anymore, that got breached, and that information is out there. Or something you're using on a less secure site gets breached, and those credentials could be used against the core accounts you do use every day.

Yeah, it's a great point. I've been on the internet since the 1990s, and I may have used a password in 2005 on some site that I've completely forgotten about, right?
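One concrete way to check whether a password has already shown up in a known leak is the k-anonymity range model popularized by Have I Been Pwned: you hash the password locally, send only the first five hex characters of the SHA-1 digest to the service (its range endpoint is `https://api.pwnedpasswords.com/range/<prefix>`), and match the returned suffixes on your own machine, so the full password, and even its full hash, never leaves your device. A sketch of the client-side half, with the network call itself omitted:

```python
import hashlib

def hibp_prefix_suffix(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity range query.

    Only the 5-character prefix is ever sent to the breach-check
    service; the remaining 35 characters stay local.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range_response(suffix: str, response_text: str) -> int:
    """Scan the 'SUFFIX:COUNT' lines the range endpoint returns and
    report how many times this password appeared in known breaches."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # suffix absent: this password isn't in the corpus

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)  # 5BAA6  (first 5 hex chars of SHA-1("password"))
```

This is the same mechanism many password managers and monitoring services use under the hood; a nonzero count means the password should be retired everywhere it was used.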
But if I reuse that password today, I'm putting myself at risk, and there's not a really good way of keeping up to date with what's been breached and what impacts you directly.

Yeah. And there are tools out there that help you manage a diverse set of passwords, right? There are password managers, and there's SSO, using your Google account, to help break out what you're using particular passwords for. Are there any best practices on that front?

I'm a big fan of 1Password, but all web browsers these days have password managers built in, and macOS has a new password manager that can help you create unique passwords for every one of your accounts. And then the big one is 2FA, or MFA, multi-factor authentication: having to enter a code from a text message or email gives you an extra layer of protection. That means even if somebody does have your password, they're still not going to be able to get into your account unless they provide that extra piece of information.

And the way I look at it is, you've got to consider what you're accessing and the level of value associated with it. For your banking, you probably want a unique username and password, plus one that's unique for your social accounts and one that's unique for your email. Maybe you're using some lesser things; if you have a Roomba and you've got a throwaway account for it, it's probably okay to reuse that across a few other less critical accounts.

Generally, I would say that's fine. But the password manager tools are so good these days that there's really no reason not to use a unique password on every single website. Yeah, there's an extra couple of clicks to pull that password out of the password manager and put it into your login screen.
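The "unique password per site" advice doesn't require anything exotic: every password manager's generator boils down to drawing characters from a cryptographically secure random source. A minimal sketch using Python's `secrets` module, where the alphabet and the 20-character default are illustrative choices rather than any standard:

```python
import secrets
import string

# Illustrative alphabet: letters, digits, and a few common symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length: int = 20) -> str:
    """Generate a high-entropy password the way a password manager does,
    using the OS's cryptographically secure random number generator
    (never the `random` module, which is predictable)."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

With 72 possible characters per position, a 20-character password from this sketch has well over 100 bits of entropy, far beyond what credential-stuffing attacks can guess, which is why a manager-generated unique password per site blunts the breach-reuse problem discussed above.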
But to me, it's well worth the effort.

Yeah, I think it is. For most people, it's about figuring out where to put in the effort: how do you prioritize your time, and how do you prioritize your security overall? There's so much going on that leveraging tools like that, and leveraging a monitoring service to make you aware of something proactively, is a pretty critical step.

Yeah, absolutely. I think that's a good place for us to end. We've talked about everything from direct user attacks, whether you're an influencer or they're coming after your social network as an individual; we've talked about organizations being attacked that hold your information, which can leak out and become another avenue in; and some of the sophisticated methods coming after that. And really, the fact that in the last month much of this has started to become a reality, and we weren't talking about it a month ago. Potentially our next conversation is on agentic AI and how that's evolving things even further. I think the reality is, as a user, you've got to be very diligent and very skeptical today, and you've got to leverage the tools at your disposal to make sure your accounts are segregated and that you've got some awareness of what's happening out on the internet with respect to your credentials.

Yeah, absolutely. And again, the challenge is that people are so busy and flooded with so much information constantly that it's really difficult to filter out what's important and what's not, and leveraging third-party services can help with that.
Folks really need to protect their online identity like they protect their offline identity, like they protect their credit or the folks in their family. There's no choice, because at the end of the day, there's not a big divide anymore between your digital identity and your actual physical identity.

Well, thank you for your time, and thank you everyone for watching. Until we talk next: stay diligent, stay aware, and really be skeptical of anything coming across in your DMs, your emails, or any other communication that looks too good to be true and is driving a level of urgency that may not be necessary.

Yeah, I always love having these conversations. There's so much going on, things are moving so quickly, and folks just don't know what they should be aware of. So having these conversations that allow people to understand what's going on is always great.