June 15, 2021
PsyOps: Deep Dive into Social Engineering Attacks
Social engineering campaigns continue to be one of the primary methods adversaries use to gain an initial foothold in an organization. Red teams and Advanced Persistent Threat (APT) groups often use phishing techniques and pretexted phone calls to coerce users into disclosing sensitive information or executing malicious files. Even though social engineering exercises (in the industry and the real world) have become somewhat repeatable and stale, users continue to fall victim to this avenue of attack. In this post, we will cover some of the psychological concepts that drive successful social engineering. Before diving in, it is worth revisiting a time you were influenced into doing something you normally would not have done, and asking yourself what made you change your mind.
Email Spear Phish Campaigns
When an attacker crafts a scenario against an organization, the message can be framed to target an employee specifically or the company as a whole. Attackers may detail topics such as an employee's retirement plan, an HR violation, company sweepstakes, network updates, healthcare benefits, password expiry, and so on. The list is endless. Attackers may then impersonate a figure of authority or another trusted party within the organization. Finally, the attacker adds details and context to make the email look legitimate.
Most attackers base their campaigns on how their targets are likely to respond, a strategy often mirrored in the planning of tabletop exercises. Adversaries craft scenarios they predict their targets will fall for, spanning a variety of topics: sweepstakes, internal policy updates, 401(k) changes, miscellaneous IT issues, HR messages, and more. An attacker expects that some targets will fall victim to the campaign and others will not. A more successful campaign will take measures to ensure the message appears authentic and contextually accurate. But what if this part doesn't really matter?
Take your day to day and think about the mistakes you make on the job. These can be minor or major: a typo, a misread email, a misconstrued message, or a general miscommunication. Also think about how you react when you make a mistake. Depending on the person and the scenario, some people spiral, some shrug it off, and some never realize they made a mistake at all. Apply this to clicking a link you weren't supposed to, or submitting your password to a site cloned from your company's page, and you'll realize the actual error wasn't that different from making a typo. Combine this small mistake with a well-crafted campaign and most people might not even know they've made an error. An attacker knows there is a high probability, approaching certainty, that at least one person will fall victim to the campaign. It's simply exploiting human fallibility: everyone makes mistakes. Some readers are thinking, "I've never fallen for phishing, I'm security-aware, and I'll never fall for something like this," but you most likely have fallen for "something like this". We are all consumers to a degree, and we've all been influenced to act a certain way. A lot of security training focuses on always being vigilant and aware, but that is unfortunately impractical; humans simply cannot always be on guard. If you've ever felt tired after driving some distance, you'll relate to this sentiment well. There is no way to stop people from making mistakes, and that's what makes social engineering so effective.
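To make that intuition concrete, here is a minimal sketch in Python. The 5% per-recipient click rate and the campaign sizes are hypothetical values chosen for illustration, not figures from any real engagement; the point is only how quickly the odds of at least one victim approach certainty as the recipient list grows.

```python
# A minimal sketch of why "at least one person will click" approaches certainty.
# The click rate and campaign sizes below are hypothetical, illustrative numbers.

def probability_of_at_least_one_click(per_recipient_rate: float, recipients: int) -> float:
    """P(at least one click) = 1 - P(nobody clicks) = 1 - (1 - p)^n."""
    return 1 - (1 - per_recipient_rate) ** recipients

if __name__ == "__main__":
    for n in (10, 50, 200):
        p = probability_of_at_least_one_click(0.05, n)  # assume a modest 5% click rate
        print(f"{n:>3} recipients -> {p:.2%} chance of at least one click")
```

Under these assumptions, 200 recipients already put the attacker above a 99.99% chance of at least one click.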
The Science Behind It
In marketing, companies often use the psychology of urgency to manufacture scarcity. For example, "buy this now or it will never be available again." This is effective because it shifts an audience's mindset from passive to active: "I will buy this later" (passive) becomes "I will buy this right now" (active). Attackers may use urgency by itself or couple it with fear of consequence, for example, "Do this now or you will not be able to enroll later." Fear of consequence may trigger either an active or a passive behavior in the target. Active behaviors include reporting the email, complying with it, responding to it, or deleting it; a passive behavior would be reading the email and then ignoring it. For the sake of argument, assume there is a 50% chance of a recipient taking an active rather than a passive path. With a large enough sample size, the share of the audience that complies works out to roughly 12.5% in this hypothetical: 50% take an active behavior, half of those (25% of the total) report or delete the email, the other half (25%) comply with or respond to it, and splitting that again leaves about 12.5% who comply. In practice and during engagements, we've seen numbers closer to 30% and up. While this may not seem like a large number, it only takes one person for an attacker to gain a foothold. The percentage may fluctuate depending on organizational training or the attacker's aptitude for influencing people, but it will most likely never be 0%.
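For readers who prefer to see the branching spelled out, here is a rough sketch of the hypothetical behavior tree above. The 50/50 splits and the campaign size are the illustrative assumptions from the text, not measured data.

```python
# A minimal sketch of the hypothetical behavior tree described above.
# All splits are the illustrative 50/50 assumptions from the text, not measured data.

recipients = 1000                     # hypothetical campaign size
active = recipients * 0.50            # 50% take an active behavior
passive = recipients * 0.50           # 50% read the email and ignore it

report_or_delete = active * 0.50      # 25% of the total report or delete the email
comply_or_respond = active * 0.50     # 25% of the total comply or respond
comply = comply_or_respond * 0.50     # ~12.5% of the total actually comply

print(f"passive (ignore):     {passive:.0f}")
print(f"report or delete:     {report_or_delete:.0f}")
print(f"comply or respond:    {comply_or_respond:.0f}")
print(f"estimated compliance: {comply:.0f} (~{comply / recipients:.1%})")
```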
Another concept, Protection Motivation Theory, can also encourage victims to act in the name of mitigating risk. The theory relates perceived severity, likelihood, the effectiveness of a recommended response, and self-efficacy to the actions a person will take to protect themselves from a perceived threat. An email designed to exploit a person's fear can lead the target to follow a misguided recommendation they believe will reduce their risk but that actually puts them at risk.
Because humans are creatures of habit, attackers use this understanding to attack decision-making faculties. For example, an embedded link that uses an organization's domain name with a different top-level domain (TLD), such as acme.org (malicious) instead of acme.com (legitimate), may not appear suspicious to the target. Since there is no apparent reason to verify a seemingly safe and authentic site, a target may enter personal or sensitive information without additional consideration.
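Software, unlike a habituated reader, has no trouble spotting the swapped TLD. As a rough, defender-side illustration (acme.com, the sample URLs, and the helper function are hypothetical placeholders, not part of any particular tool), a link check only needs to compare each hostname against the domain it expects:

```python
# A minimal defender-side sketch: flag links whose hostname is not the expected
# registered domain. "acme.com" and the sample URLs are hypothetical placeholders.
from urllib.parse import urlparse

LEGITIMATE_DOMAIN = "acme.com"

def is_expected_domain(url: str, expected: str = LEGITIMATE_DOMAIN) -> bool:
    """Return True only if the link's host is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected or host.endswith("." + expected)

if __name__ == "__main__":
    for link in ("https://portal.acme.com/login",    # legitimate
                 "https://acme.org/login",           # same name, different TLD
                 "https://acme.com.evil.example/"):  # legitimate name used as a prefix
        print(f"{link:<40} expected domain: {is_expected_domain(link)}")
```

The same comparison also catches the related trick of prefixing the legitimate name onto an attacker-controlled domain, as in the third test case.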
Phone Pretext Campaigns
The psychological concepts at work in email phishing are just as dominant in pretexted phone calls. A common scenario involves an unsolicited caller impersonating IT personnel and asking for credentials to perform an update. At any large organization, especially in the ever-growing remote work culture, it is unlikely that employees will be familiar with IT staff, so an attacker can rely on that obscurity and impersonate IT staff simply by stating that they are. Employees tend not to verify the identity of the caller if the encounter seems benign. An attacker can also gather human intelligence (HUMINT) during these calls: elicitation techniques can steer targets into disclosing sensitive information or internal knowledge that lets the attacker blend in more convincingly. For instance, "I'm on the same page as you, but could you clarify this for me?" Phrasing like this also lets the attacker empathize with the target, which helps lower the target's suspicion altogether. At the same time, the attacker can assume a figure of authority, which leads to a higher probability of employee compliance. The Milgram Shock Experiment details the relationship between authority and obedience; the TL;DR is that individuals are more likely to comply with requests made under perceived authority.
On-Premises Campaigns
When attacking the human element, people at large behave in ways social psychology predicts. One such concept, useful for building familiarity to exploit trust relationships, is the mere exposure effect: the more a person is exposed to a stimulus, the more likely they are to rate it positively. In other words, the more person A sees or interacts with person B in a given environment, the more likely person A is to trust person B. So someone dressed in business casual clothing who repeatedly loiters around a business complex will not draw suspicion. Individuals who would otherwise be considered trespassers or loiterers are instead welcomed and politely greeted by security guards and receptionists. In situations where repeated exposure has built familiarity, most people will fly under the radar as someone who belongs in the space they're occupying. In this "act like you belong" scenario, most bystanders will never bat an eye, much less confront people who don't actually belong. Judging from experience and engagements alone, simply understanding how social groups interact should give a social engineer confidence in infiltration. Confidence game and compliance techniques will be the next topic of conversation, but we're out of time today.
Stay Tuned
Although this post does not offer a foolproof defense against social engineering, it should at least offer insight into the mind of a sophisticated attacker. The next post will cover using knowledge of enterprise security training to target specific deficiencies in physical security. It will also go over the compliance and confidence game techniques that are used, and can be used, in engagements and real-life scenarios. Consider this post a primer for that more detailed look at the offensive side. Thanks for reading.