Keeping insiders – malicious or otherwise – from being a threat

There are a variety of threats that IT faces all the time, but the trickiest to deal with are usually those that originate in-house. Whether it's careless users putting systems at risk or malicious ones taking advantage, these insiders can cause serious damage to their organizations.

Despite insider threats being a huge concern for many organizations, there are often serious gaps in how companies defend against them.

Most don't yet address insider threats in their security programs. Many that do simply squeeze them into their acceptable use policies.

But according to a presentation at the 2015 RSA Conference, insider threats might not fit so nicely into existing policies – and companies should specifically address these threats in their own policies and make sure their vendors do the same.

Not one kind of threat

Part of the difficulty in nailing down insider threats is that there’s no single cause of insider incidents, according to Summer Fowler and Randall Trzeciak of the CERT Division of Carnegie Mellon University.

"There's a difference between an insider who is motivated to do something and the insider who does or doesn't do something that causes harm," Trzeciak said. "So there isn't one insider threat that is applicable to every situation."

Examples of these threats can include:

  • Sabotage. Insiders with an ax to grind who take down services from the inside or delete data.
  • Intellectual property theft. Users who steal company secrets for their own gain (such as taking contacts to a new job) or to sell to the competition.
  • Fraud. Employees who steal other employees’ personnel or payroll records for financial gain.

And non-malicious insiders can just as easily give up records through a phishing attack or a lost device, for instance.

Instituting a policy

According to Fowler and Trzeciak, the easiest solution for detecting insider threats – tools that flag unusual activity – isn't effective on its own. These tools can help detect when users do something suspicious, but they can't always determine the intent behind an action when it's done maliciously.

“The challenge is that technology tools won’t always capture that. It may be a user’s responsibility to move or delete data, so doing it won’t raise any red flags,” Trzeciak observed.
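Tools of this kind typically compare a user's current activity against a historical baseline and flag large deviations. As a purely hypothetical illustration (not a tool described in the talk), a crude version of that check might look like this – note that it can flag *what* deviates, but says nothing about *why*, which is exactly the gap Trzeciak describes:

```python
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """Flag users whose activity today deviates sharply from their baseline.

    history: dict mapping user -> list of past daily file-access counts
    today:   dict mapping user -> today's file-access count
    Flags any user whose count exceeds mean + threshold * stdev.
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough data to establish a baseline
        baseline, spread = mean(counts), stdev(counts)
        if today.get(user, 0) > baseline + threshold * spread:
            flagged.append(user)
    return flagged

# Hypothetical sample data: bob suddenly accesses far more files than usual
history = {
    "alice": [10, 12, 11, 9, 13],
    "bob": [8, 7, 9, 8, 10],
}
today = {"alice": 12, "bob": 250}
print(flag_anomalies(history, today))  # flags bob's spike, not his intent
```

If moving or deleting that data were part of bob's job, the same numbers could be entirely legitimate – which is why such tooling needs a policy around it.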

In addition to these tools, organizations should also have policies for insider threat mitigation.

These policies should be:

  • Collaborative. The researchers recommend integrating the policy with IT, building security, HR and other parties to ensure all stakeholders have their say. Make sure to run it by a legal department to stay in compliance with employment laws.
  • Consistently enforced. Users need to know your policies aren’t recommendations, they’re hard-and-fast rules. Uneven or inconsistent enforcement is also begging for trouble in court.
  • Thorough. Background checks and other pre-hiring screening may be necessary to keep bad actors out.
  • Supported at all levels. The most important step to getting buy-in is to assure users you’re not on a witch hunt. “The purpose of your program should be to protect the 99.9% of people who are not threats,” Fowler said. “It’s looking out for the good people.”

Insiders in the cloud

One other important factor to consider: Insider threats aren’t limited to your own people.

A cloud service provider’s employee could just as easily access or manipulate your data – and that could be even more difficult to detect.

Ask providers to tell you what their insider threat programs cover before you sign any contracts.

“Make sure you have access to the audit data,” urged Fowler. “Ask whether the cloud service provider will follow your rules and regulations.”

And find out what would happen in the event of an insider attack, including when and if you would be notified and who would face the financial consequences.
