
There are three certainties in life: death, taxes, and more data leaks.

This week, records of 6 million Verizon customers were exposed, adding Verizon to the long list of companies that have fallen victim to data exposures in recent years. This time, the breach occurred in the Amazon Web Services (AWS) environment of a Verizon customer service vendor, Nice Systems.

Luckily, the breach was relatively limited in scope, but more than anything, it highlighted the risks associated with relying on third-party vendors to secure sensitive information. By storing its data in S3 buckets within Nice Systems’ environment, Verizon inherently relied on its partner’s security practices and configurations. Nice Systems secures data for 85 of the Fortune 100 companies, essentially making it a “super partner.” With such an interconnected network among security vendors, a future breach could be even more devastating, exposing the customer data of multiple companies at once.

Leaks from AWS environments are becoming more frequent, with the previous breach occurring only three weeks ago, but they cannot be blamed on Amazon’s security practices. The company has invested heavily in protecting the security of the AWS platform itself, so the obvious question is: why do these leaks still occur?

Amazon’s Shared Responsibility Model

One answer may be that AWS operates on a shared responsibility model. Within this system, Amazon is responsible for the security “of” the cloud. This involves protecting the platform against intrusion, as well as monitoring for fraud and abuse. Amazon also takes responsibility for its managed services, such as DynamoDB, RDS, Redshift, Elastic MapReduce, and WorkSpaces.

On the other hand, customers are responsible for security “in” the cloud. In-cloud security is an essential part of the equation, because it covers all of the non-managed AWS services a customer runs. Customers are responsible for configuring those services securely, using AWS in a secure manner, and keeping their own operating systems and applications patched and up to date.

Best Practices to Prevent Another Leak

To prevent future data leaks, prudent organizations should undertake the following best practices:

  1. Ensure S3 buckets containing sensitive or regulated data are not publicly accessible. Given the cause of the recent AWS data leaks, organizations should revisit the configuration of their S3 buckets and confirm that buckets containing sensitive data are not publicly accessible.
  2. Enable CloudTrail across all AWS regions and turn on CloudTrail log validation. Enabling CloudTrail in every region captures activity everywhere, including global services that are not region-specific, such as IAM and CloudFront. CloudTrail log validation complements this by identifying any changes made to a log file after its delivery to the S3 bucket, adding a layer of security that verifies the integrity of the log files.
  3. Enable access logging for CloudTrail S3 buckets. CloudTrail S3 buckets contain log data that is crucial to activity monitoring and forensic investigations. Enabling access logging for these buckets tracks access requests and helps identify unauthorized access attempts.
  4. Encrypt highly sensitive data. Protected Health Information (PHI), Personally Identifiable Information (PII), financial data, and intellectual property should be encrypted before being uploaded to a cloud service such as AWS. Employ encryption both in transit and at rest to ensure end-to-end data security.
  5. Encrypt CloudTrail log files at rest. Encrypting log files at rest prevents unauthorized users from reading or exposing them. Only the users who require the S3 buckets for their jobs should have access and decryption permissions for those buckets.
  6. Require Multi-Factor Authentication (MFA) to delete CloudTrail buckets. Hackers frequently delete CloudTrail logs in order to delay detection; requiring MFA to delete CloudTrail buckets makes it far harder for an attacker to cover their tracks. In general, MFA is a good practice for controlling unauthorized access across all cloud services, as it adds a layer of defense against common threats such as brute-force attacks and stolen credentials.
  7. Enforce a single set of Data Loss Prevention (DLP) policies across all applications and services. Enterprises using AWS need to inventory and standardize their DLP policies in order to protect vulnerable information. This applies to PII, such as social security numbers and account credentials, as well as other sensitive data like credit card numbers. If an enterprise also uses cloud services such as Box, Salesforce, or Office 365, the same set of DLP policies should be applied across all of them to prevent enforcement gaps.
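Several of the steps above, such as requiring encrypted transport and rejecting unencrypted uploads to an S3 bucket, come down to attaching the right bucket policy. Below is a minimal, hypothetical sketch in Python that assembles such a policy document; the bucket name, statement IDs, and helper function are illustrative assumptions, not a specific vendor’s implementation.

```python
import json


def build_restrictive_bucket_policy(bucket_name):
    """Build an illustrative S3 bucket policy (hypothetical helper) that
    denies non-TLS access and rejects uploads lacking server-side
    encryption. Adapt the ARNs and conditions to your environment."""
    arn = f"arn:aws:s3:::{bucket_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Reject any request that is not made over HTTPS.
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [arn, f"{arn}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
            {
                # Reject uploads that do not request server-side encryption.
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"{arn}/*",
                "Condition": {
                    "Null": {"s3:x-amz-server-side-encryption": "true"}
                },
            },
        ],
    }


# "example-audit-logs" is a placeholder bucket name.
policy = build_restrictive_bucket_policy("example-audit-logs")
print(json.dumps(policy, indent=2))
```

The `aws:SecureTransport` and `s3:x-amz-server-side-encryption` condition keys are standard IAM policy conditions; the resulting JSON could then be attached to a bucket with a tool such as the AWS CLI (`aws s3api put-bucket-policy`) or an SDK.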

By applying these best practices, organizations take an important first step toward securing their data and preventing a repeat of the Verizon leak and the countless leaks before it. It is only a first step, however; enterprises can go further by securing the applications they deploy in the cloud with an additional set of security procedures. This week’s Verizon data breach proves once again how important proper cloud security is in a cloud-first world.

 

To explore the value of investing in security awareness training to reduce the risk of annualized phishing attacks, check out this comprehensive research report by Aberdeen’s Derek Brink. 

 

Sekhar Sarukkai is the Chief Scientist at Skyhigh Networks. He has over 20 years of experience in enterprise networking, security, and cloud services development.
