Just like perimeter protection, intrusion detection and access controls, user and entity behavioral analytics ("UEBA") is one piece of the greater cyber risk management puzzle.
UEBA is a method that identifies potential insider threats by detecting people or devices exhibiting unusual behavior. It is the only way to identify potential threats from insiders or compromised accounts using legitimate credentials, but trying to run down every instance of unusual behavior without greater context would be like trying to react to every attempted denial of service attack. Is the perceived attack really an attack, or is it a false positive? Is it hitting a valued asset? Is that asset vulnerable to the attack? It is time for cyber risk management to be treated like other enterprise operational risks, not as a collection of fragmented activities occurring on the ground.
Analogous to fighting a war, there needs to be a top down strategic command and control that understands the adversary and directs the individual troops accordingly, across multiple fronts. There also needs to be related "situational awareness" on the ground, so those on the front lines have a complete picture and can prioritize their efforts.
When it comes to cyber risk management, that means knowing your crown jewels and knowing the specific threats to which they are vulnerable. If an information asset is not strategically valuable and does not provide a gateway to anything strategic, it should get far less focus than important data and systems. If an important asset has vulnerabilities that are likely to be exploited, they should be remediated before vulnerabilities that are unlikely to be hit. It seems logical, but few large enterprises are organized in a way that gives them a comprehensive understanding of their assets, threats and vulnerabilities, so that they can prioritize how they apply their protection and remediation resources.
Even "at the front", UEBA is only a threat detection tool. It uncovers individuals or technologies exhibiting unusual behavior, but it doesn't take into account greater context: the business context of the user's activities, associated vulnerabilities, indicators of attack, the value of the assets at risk or the probability of an attack. By itself, UEBA output lacks situational awareness, and still leaves SOC analysts with the task of figuring out whether the events are truly problematic. If the behavior, though unusual, is justified, it is a false positive. If the threat is to corporate information that wouldn't impact the business if compromised, it's a real threat, but one only worth chasing down after higher priority threats have been mitigated.

For example, suppose UEBA software flags an employee on the finance team logging into a human resources application he typically would not log into. UEBA is only informing the incident responder of a potential threat. The SOC still has to review the activity and determine whether it is legitimate; if not, check whether the user has privileges to access sensitive information in the application; examine the laptop for signs of compromise that may indicate a hijacked account; and then make what is at best a poorly informed guess that will often result in inaccurate handling. Just as important, even if the SOC analyst does all of that homework and handles the incident appropriately, without the right context they may have wasted a lot of time chasing down a threat that is of low importance relative to others in the environment.
A true "inside-out" approach to cyber risk management begins with an understanding of the business impact of losing certain information assets. The information assets that, if compromised, would create the most damage are the information CISOs, line-of-business and application owners, SOC investigators, boards of directors and everyone else within the company should focus on protecting the most. They should determine where those assets are located, how they may be attacked, if they are vulnerable to those attacks and the probability of it all happening. Once that contextualized information is determined, everyone within the company can prioritize their efforts to minimize cyber risk.
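The prioritization described above, combining business impact, vulnerability and likelihood, can be sketched as a simple scoring model. This is a minimal illustration, not the author's methodology: the asset names, scales and the multiplicative formula are all assumptions chosen to show why a high-severity vulnerability on a low-value asset can rank below a moderate one on a crown jewel.

```python
# Hypothetical risk-prioritization sketch: rank assets by the product of
# business value, vulnerability severity, and estimated attack likelihood.
# All names, scales and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    business_value: float      # 0-10: impact on the business if compromised
    vuln_severity: float       # 0-10: e.g. worst open vulnerability score
    attack_likelihood: float   # 0-1: estimated probability of exploitation

def risk_score(asset: Asset) -> float:
    # Simple multiplicative model: risk = value x severity x likelihood.
    return asset.business_value * asset.vuln_severity * asset.attack_likelihood

assets = [
    Asset("hr-database", 9.0, 7.5, 0.6),
    Asset("marketing-wiki", 2.0, 9.0, 0.6),   # worst raw severity, low value
    Asset("payment-gateway", 10.0, 4.0, 0.3),
]

# Remediate the highest-risk assets first, not the worst raw severity.
for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name}: {risk_score(a):.1f}")
```

Note that the marketing wiki carries the worst raw vulnerability score yet ranks last, which is the point of scoring in context rather than by severity alone.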
Steven Grossman is Vice President of Program Management, Bay Dynamics. He has over 20 years of management consulting experience working on the right solutions with security and business executives. At Bay Dynamics, Steven is responsible for ensuring our clients are successful in achieving their security and risk management goals. Prior to Bay Dynamics, Steven held senior positions at top consultancies such as PWC and EMC. Steven holds a BA in Economics and Computer Science from Queens College.
Insider threats can be the most dangerous threats to an organization -- and they’re difficult to detect through standard information security methods. That’s partially because the majority of employees unknowingly pose a risk while performing their regular business activities.
According to data we collected from analyzing the behaviors of more than a million insiders across organizations, in approximately 90 percent of data loss prevention incidents, the employees are legitimate users who innocently send out data for business purposes. They are exhibiting normal behavior to their peers and department, even though it might be in violation of the established business policy and a significant risk to their employer.
Adding to the challenge, IT and security teams are getting killed trying to make sense of the mountains of alerts, most of which do not identify the real problem because the insider is often not tripping a specific switch. They spot check millions of alerts, hoping to find the most pertinent threats, but more often than not end up overlooking the individual creating the actual risk. For example, a large enterprise we worked with had 35 responders spot-checking millions of data loss prevention incidents, and even with such heavy manpower, they would most often focus on the wrong employees. Their investigation and remediation efforts were not prioritized, and in turn, they couldn’t make sense of the abundance of alerts because they were looking at them one by one.
To build an effective insider threat program, companies need to start with a solid foundation. It’s critical they identify the most important assets and the insiders who have the highest level of access to those assets. Then, they should practice good cybersecurity hygiene: ensure data loss prevention and endpoint agents are in place and working; check that access controls are configured so that insiders can only access information they need; establish easily actionable security policies, such as making sure insiders use strong and unique passwords for their corporate and personal accounts; and encourage a company-wide culture that focuses on data protection through targeted security awareness training and corporate communication surrounding security.
Once the foundation is in place, monitor users’ behaviors and respond accordingly. By understanding their behavioral patterns, companies can identify when employees are acting unusually, typically an indicator that the user is up to no good -- or is being impersonated by a criminal. For example, when you go through a security checkpoint at the airport, the officers checking your identification ask you questions. They do not care about your responses; they are mainly looking at how you respond. Do you seem nervous? Are you sweating? They watch your behavior to determine if you could be a potential safety risk -- this same principle applies to insider threat programs.
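Baselining a user against their own history, as described above, can be sketched with a simple statistical test. This is an illustrative toy, not a description of any vendor's analytics: real UEBA models combine many signals, but even a z-score against a per-user baseline shows how a sudden spike stands out. The data and threshold below are assumptions.

```python
# Minimal behavioral-baseline sketch: flag a user's daily activity count as
# unusual when it deviates strongly from that user's own history (z-score).
# History, activity counts and the threshold are illustrative assumptions.
from statistics import mean, stdev

def is_unusual(history: list[int], today: int, threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # Perfectly constant history: any change at all is unusual.
        return today != mu
    return abs(today - mu) / sigma > threshold

# A finance employee's daily logins to an HR application over two weeks:
# almost always zero, with an occasional legitimate one-off.
history = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
print(is_unusual(history, 12))   # a sudden spike of 12 logins stands out
print(is_unusual(history, 0))    # a quiet day does not
```

As the article stresses, a True here is only the start of an investigation, not a verdict.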
When creating insider threat programs, oftentimes security teams focus on rules: they define what’s considered abnormal or risky behavior and then the team flags insiders whose actions fall into those definitions. However, this method can leave many organizations vulnerable -- chasing the latest attack, rather than preventing it. Rules are created based on something risky someone did in the past, which led to a compromise. The criminals can easily familiarize themselves with the rules and get past them. Rules do not help detect the “slow and low” breaches where insiders take out a small amount of information during a lengthy period of time so that the behavior goes undetected by security tools. And they do not combine activities across channels, such as someone accessing unusual websites and trying to exfiltrate data.
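The "slow and low" pattern above can be sketched as a rolling per-user budget: each transfer stays under the per-event rule a rules engine would check, but the cumulative total over a long window gives the insider away. The window length, thresholds and event data here are illustrative assumptions, not values from the article.

```python
# "Slow and low" detection sketch: individual transfers stay below a
# per-event alert threshold, but a rolling per-user sum over a long window
# exposes the cumulative exfiltration. All thresholds are assumptions.
from collections import defaultdict, deque

WINDOW_DAYS = 30
PER_EVENT_LIMIT_MB = 50.0    # a per-event rule would alert only above this
CUMULATIVE_LIMIT_MB = 500.0  # rolling-window budget per user

def rolling_alerts(events):
    """events: iterable of (day, user, megabytes), in chronological order."""
    windows = defaultdict(deque)   # user -> deque of (day, mb) in the window
    totals = defaultdict(float)    # user -> sum of mb currently in the window
    alerts = []
    for day, user, mb in events:
        windows[user].append((day, mb))
        totals[user] += mb
        # Drop transfers that have aged out of the rolling window.
        while windows[user] and day - windows[user][0][0] >= WINDOW_DAYS:
            _, old_mb = windows[user].popleft()
            totals[user] -= old_mb
        # Every event passes the per-event rule, yet the total betrays it.
        if mb <= PER_EVENT_LIMIT_MB and totals[user] > CUMULATIVE_LIMIT_MB:
            alerts.append((day, user, totals[user]))
    return alerts

# 20 MB a day never trips a 50 MB per-event rule, but adds up to 600 MB
# in a month; the rolling budget fires on day 25, at 520 MB.
events = [(day, "contractor-42", 20.0) for day in range(30)]
print(rolling_alerts(events)[0])
```

Combining such cross-channel totals (web uploads, email, removable media) per user, rather than alerting event by event, is exactly the gap the article says pure rules leave open.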
Enterprises need to understand what’s normal versus abnormal and then further analyze that behavior to determine if it’s malicious or non-malicious. By focusing on a subset of insiders -- those who access a company’s most critical data -- and how they normally behave, they can create a targeted list of individuals who need investigating. In a large enterprise, the list can be long, and organizations need to optimize how they respond. For employees who are non-maliciously endangering the company, companies should provide targeted security awareness training that specifies exactly what each person did to put the company at risk and how they can minimize that risk. Most employees acting in good faith will be more careful once they understand the risk they pose to their employer. For third-party vendor users, organizations should share with the main vendor contact exactly who is putting them at risk and what those users are doing. Then, the vendor can handle the situation accordingly, reducing everybody’s risk.
An insider threat program should also focus on monitoring performance and communicating progress and challenges to C-level executives and the Board of Directors. Enterprises should show them what they are doing and the impact of their investment in security tools and programs, as well as explain any challenges they need to overcome. An effective insider threat program requires support from the most senior leaders in an organization. With everyone on the same page, organizations can constantly reassess their program to truly understand their security alerts and reduce the likelihood of setting off false red flags, ensuring they’re catching and predicting the real threats, and removing them before they do any long-term damage.