

Wednesday, 18 February 2015

Protecting Code in the Cloud

There are several motivations for protecting the source code of an application. Modern applications are increasingly the result of an expensive, intensive development cycle and should be protected against adversaries: you don't leave high-profile assets unprotected, and your source code should be no exception. Besides the economic factor, there is also a security concern. Source code is often the first place crackers, or adversaries in general, will look when they want to bypass some form of authentication. The general rule still holds: "Never rely on security through obscurity". Nevertheless, you should not make it too easy either; raising the bar may discourage the attacker from digging any further. The problem, in particular with distributed applications, is that (malicious) users download the application to their own devices and have full control over the execution environment. Nothing stops an adversary from dumping your application and trying to reverse engineer (RE) your source code. Several steps can be taken to protect your source code against reverse engineering.

Solutions

Before we dive into solutions against reverse engineering, it is important to stress that there is no such thing as a silver bullet when it comes to protecting your code. It is not a solvable problem, but it is very much a manageable one. Attackers always work with one important question in mind: "Is the gain I get out of this worth the effort I have to put in?". It is your job to slow the attacker down as much as you can; in the ideal case the attacker decides it is not worth the effort. Several defences against reverse engineering are available to a developer, obfuscation being the most important one.

Obfuscation

Obfuscators take perfectly human-readable source code and transform it into a hard-to-read form. The program remains 100% syntactically and semantically correct; it behaves exactly as before. As a result, an attacker who decompiles your application is confronted with source code that is very difficult to read. There is a catch, however: if the application uses a public API (for example, system calls to the OS), these calls cannot be obfuscated. Starting from such calls, the attacker can work his way back into the code by renaming methods one by one. It has been proven numerous times that this is an effective de-obfuscation method, although it consumes a huge amount of time and requires a lot of insight into the obfuscated code.
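As a toy illustration of what identifier renaming, one common obfuscation step, does to readability (the function below, check_license, is a made-up example and not taken from any real obfuscator):

    # Readable original: the intent is obvious from the names.
    def check_license(license_key: str) -> bool:
        return license_key.startswith("VALID-") and len(license_key) == 22

    # After renaming identifiers the behaviour is identical,
    # but the intent is much harder to recover.
    def a(b: str) -> bool:
        return b.startswith("VALID-") and len(b) == 22

Note that the calls to the public string API (startswith, len) stay visible after renaming; exactly these untouched entry points are what an attacker uses to work back into the obfuscated code.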

Runtime Guards

Guards are pieces of code that are inserted into your source at strategic places. These guards monitor and check the runtime environment. Some common checks that guards can evaluate:

  • Is the application running on a rooted or jailbroken device?
  • Is it running under a debugger?
  • Is instrumentation being used to place hooks in the code?
  • Resource verification: have resources been tampered with?

Guards can shut down the application if one of these checks detects an unwanted execution environment.
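A minimal sketch, in Python, of what such a guard could look like; the resource path and expected hash are placeholders, and real guards (root/jailbreak detection, hook detection) are platform-specific and considerably more involved:

    import hashlib
    import sys

    def debugger_attached() -> bool:
        # In CPython, a tracing debugger installs a trace function.
        return sys.gettrace() is not None

    def resource_tampered(path: str, expected_sha256: str) -> bool:
        # Compare the resource on disk against a hash baked in at build time.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() != expected_sha256

    def runtime_guard(resource_path: str, expected_sha256: str) -> None:
        # Shut the application down if the environment looks untrusted.
        if debugger_attached() or resource_tampered(resource_path, expected_sha256):
            sys.exit("Untrusted execution environment detected.")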

Use Encryption

Where possible, deploy cryptography to protect modules of the application that are not currently in use. Only decrypt and load modules when they are actually needed. This is considered a job for experts, as it is not an easy task to perform successfully.
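A minimal sketch of the idea, assuming the third-party cryptography package and a key obtained securely at runtime; the module name, file path and key-handling helper below are placeholders:

    import types
    from cryptography.fernet import Fernet  # pip install cryptography

    def load_encrypted_module(name: str, encrypted_path: str, key: bytes) -> types.ModuleType:
        # Decrypt the module's source in memory; the plaintext never touches disk.
        with open(encrypted_path, "rb") as f:
            source = Fernet(key).decrypt(f.read()).decode("utf-8")
        module = types.ModuleType(name)
        exec(source, module.__dict__)  # run the decrypted source inside the fresh module
        return module

    # Usage sketch:
    # key = obtain_key_from_secure_storage()          # hypothetical helper
    # billing = load_encrypted_module("billing", "billing.py.enc", key)
    # billing.create_invoice(...)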

 

In conclusion, it should be clear that protecting code is a very serious and important task. It should be a standard step in your release cycle before you distribute your application.

Saturday, 22 November 2014

Anatomy of a data breach

The adoption of Software-as-a-Service, and cloud solutions in general, is becoming popular. However, we are trusting these cloud providers with our most valuable asset: our data. Data breaches happen all the time, as cloud providers are attractive targets for hackers who are after credit card details or other sensitive information. Certain questions arise when you migrate your applications, and with them your data, to the cloud. What can we do with data migrating to the cloud? What kind of security measures can be taken? The good guys need to be right 100% of the time; the bad guys only need to be successful 1% of the time. Krishna Narayanaswamy, Chief Scientist and founder at Netskope, gave a talk about what data breaches are and how to prevent them while using cloud applications.

Main causes

Ideally, the right security procedures and technologies are in place to ensure sensitive and confidential information is protected when using cloud resources. A first cause of data breaches is that the majority of companies skip important practices such as vetting the security practices of cloud service providers and conducting audits and assessments of the information stored in the cloud.

Another main cause of data breaches is that more and more companies allow their employees to bring their own devices to work (smartphones, laptops and tablets). In the worst case, they allow (or do not actively prevent) the use of these devices to handle company data. Shadow IT, discussed further below, is a third main cause of data breaches.

A last cause that can trigger a data breach is a system glitch, which, quite frankly, is not something a company can protect itself against once it has chosen to trust a cloud provider. It is therefore vital, when selecting a cloud provider, to ask about the procedures and mitigations the provider puts in place to protect its customers against data breaches caused by system failures.

The causes mentioned above give rise to two types of data breaches: unintentional and intentional. Intentional data breaches happen when hackers or former employees actively attack a cloud provider and extract valuable company data from the cloud. Hackers bypass the technology, but former employees can simply use their unrevoked access credentials to log in to the cloud provider. It is therefore vital that access rights are also enforced in cloud applications, following the principle of least privilege.

Unintentional data breaches happen when an employee leaks data by accident. For example, an employee using a SaaS application, say Gmail, needs to send an e-mail with an attachment to a competitor; by mistake he attaches a file containing valuable company data. This is classified as an unintentional breach, and the competitor now has the valuable data.

The multiplier effect

An interesting parameter to know when a data breach occurs is its economic impact. A study executed by the Ponemon Institute and funded by IBM shows that the cost of a data breach is increasing. It is estimated that the theft or loss of a single customer record costs $145. Now consider a data breach in which 10,000 customer records are compromised: its economic impact is $145 x 10,000 = $1.45 million. We call this the multiplier effect.

A simple approach can decrease this economic impact: for example, don't store backups in the cloud and reduce the amount of customer data that is stored in the cloud. A balance should be found between the risk of storing data in the cloud and the economic impact if a breach were to occur.

Measure, analyze, act

Use solutions or metrics to rate cloud service providers and check their enterprise readiness. For example, a file-sharing service with a 'fake' download button that redirects you to an advertisement page is not an enterprise-ready cloud service. Watch for abnormal events and scan for anomalies to detect possibly malicious activity. A three-step approach: measure, analyze and act.

Measure the cloud services in your company and discover which applications are running. Analyze what these applications do at a deeper level: how do they handle your data, do they use secure network protocols, do they deploy encryption in the cloud, who holds the decryption key, and so on. And lastly, act. Plot a course of action based on risk and look at the usage of each service: is it critical? This is where the security strategy comes into play and decides how to incorporate cloud services in the company and the extent to which these services are used.
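As a minimal, hypothetical illustration of the "analyze" step, the sketch below flags users whose cloud upload volume suddenly jumps far above their own baseline; the data layout and the 3-sigma threshold are assumptions, not part of the talk:

    import statistics

    def flag_anomalous_uploads(daily_upload_mb, sigma=3.0):
        # daily_upload_mb maps a user name to a list of daily upload volumes (MB),
        # oldest first. A user is flagged when the most recent day deviates
        # strongly from that user's own history.
        flagged = []
        for user, history in daily_upload_mb.items():
            if len(history) < 8:
                continue  # not enough history to judge
            baseline, latest = history[:-1], history[-1]
            mean = statistics.mean(baseline)
            stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
            if (latest - mean) / stdev > sigma:
                flagged.append(user)
        return flagged

    # Example: a user who normally uploads ~10 MB/day suddenly uploads 500 MB.
    # flag_anomalous_uploads({"alice": [9, 11, 10, 12, 10, 9, 11, 10, 500]}) -> ["alice"]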

SaaS vendor responsibilities

In the SaaS model, the enterprise data is stored outside the enterprise boundary, at the SaaS vendor end. Consequently, the SaaS vendor must adopt additional security checks to ensure data security and prevent breaches due to security vulnerabilities in the application or through malicious employees. This involves the use of:

  • strong encryption techniques for data security;
  • hashing of sensitive login information such as passwords (see the sketch after this list);
  • fine-grained authorization to control access to data and auditing access logs;
  • off-loading sensitive information to dedicated servers.
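To illustrate the second point, below is a minimal sketch of salted password hashing using PBKDF2 from the Python standard library; the iteration count is just a reasonable default, not vendor guidance:

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # assumed cost parameter; tune to your hardware

    def hash_password(password: str):
        # Return (salt, derived_key); store both, never the plain password.
        salt = os.urandom(16)
        derived_key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, derived_key

    def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(candidate, expected_key)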

A lot of different consumer cloud services are available nowadays. If you want a good, data-driven IT management strategy in your company, you need to acknowledge that besides the sanctioned IT cloud solutions there is also something we call "shadow IT": consumer services running in your company network without being sanctioned by the IT department. The fact that employees bring these services into the workplace can pose significant security issues.

It is estimated that today 72% of IT professionals do not know the scope of shadow IT at their companies. Looking at the number of cloud services in use, we see a big increase: in the second quarter of 2014, 738 cloud services were in use in enterprises; in the third quarter this number rose to 831. For the whole picture over several quarters, see the figure below (source: skyhighnetworks.com):

[Figure: number of cloud services in use in enterprises, per quarter (source: skyhighnetworks.com)]

There are several reasons for this rise:

  • we're early in the adoption cycle for cloud;
  • there is a convergence of several forces driving innovation in software;
  • with the availability of open source software components and low-cost platforms such as Amazon AWS, it's cheaper than ever to launch an application over the internet;
  • we are recovering from the financial crisis of 2008, and as a result many startups are solving problems in new ways.

Companies are realizing that "shadow IT" is becoming a problem. Not only may these services fail to comply with company policies, but worse, they can pose serious security risks. As a result, the first reaction of companies was to try to block these services from their networks. This proved unsuccessful for several reasons, one being that cloud services regularly introduce new URLs that are not yet blocked by the company firewall.

A solution to this problem could be to embrace the cloud adoption lifecycle. Skyhigh offers a solution for this, based on three principles (a rough sketch of the first step follows after the list):

  1. Discover: Gain complete visibility into all cloud services in use and an objective risk assessment across data, business, and legal risk;
  2. Analyze: Identify security breaches and insider threats, analyze usage patterns to understand demand for cloud services, and consolidate subscriptions;
  3. Secure: Seamlessly enforce security policies including encryption, data loss prevention, and coarse and granular access control.
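As a hypothetical sketch of the "Discover" step, the snippet below counts which external hosts show up in outbound proxy logs; the log format (one URL per line) is an assumption, and a real deployment would map hosts to a cloud-service registry with risk ratings:

    from collections import Counter
    from urllib.parse import urlparse

    def discover_cloud_services(proxy_log_lines):
        # Count how often each external host is contacted, as a first
        # visibility step; assumes each log line contains a full URL.
        counts = Counter()
        for line in proxy_log_lines:
            host = urlparse(line.strip()).hostname
            if host:
                counts[host] += 1
        return counts.most_common()

    # Usage sketch:
    # with open("proxy.log") as log:
    #     for host, hits in discover_cloud_services(log)[:20]:
    #         print(host, hits)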

In conclusion, it should be stressed that it is extremely important to search for shadow IT in your company and track down the usage of these services. This can prevent sensitive data from leaking, which can be of vital importance to an enterprise. Using a dashboard tool like Skyhigh could be a good approach, as it gives an easy overview of all the cloud services running in your company, and where necessary you can open a more detailed information panel for the service in question.

Source: http://www.skyhighnetworks.com

This will be the first article in a series based on talks given at the Cloud Security Alliance Conference in Rome on 19 and 20 November. The first keynote presentation was given by Kevin Walker, Vice President and Assistant CISO at Walmart.

His talk focused on how Walmart changed its view on information security, how it improved its security frameworks, and what efforts and resources were needed to achieve this.

The talk started with a bit of history. Information security became a big issue around 1998, when the bad guys started to realize that a lot could be gained by compromising a system and stealing data. That called for some form of information management and security. To this day it is still very much an ongoing battle in which enterprises, and cloud providers in particular, should always stay ahead of the attackers, preferably by a couple of steps. Something to think about: we use 60-year-old technology to protect our most valuable assets, namely the username-password mechanism. It is remarkable that we could not do better than that in all these years (multi-factor authentication aside, and even that is still built on top of usernames and passwords).

In order to enable modern-day information security we should change our state of mind: it is not a matter of "if a breach could occur", but of "when a breach will occur". This should push modern-day information security towards a more agile approach:

  • improve detection of breaches;
  • reduce response time;
  • improve containment of breaches;
  • reduce recovery time, preferably avoid the need for recovery.

In order to achieve this, Walmart has invested a huge amount of resources in guidelines for how their developers write and deploy code. All the guidelines are based on the idea that you can't buy time, but you can save time and give it back. To determine which guidelines and tools Walmart needs to provide for its developers, they used the history of individual developers. They went back, in some cases up to two years, to examine the code of these individual developers and check which bugs or vulnerabilities they had introduced. As a result, developers could be grouped using their "bad habits" as a criterion. For example, there may be a group of developers who introduce a lot of XSS vulnerabilities. These developers can follow special training focused on eliminating those bad habits. That is good for the enterprise and good for the individual developer, who can take this with him if he chooses to work for another company.

In addition to this specialized training, they developed IDE plugins that work like a kind of spellchecker, highlighting possible security vulnerabilities. All these efforts should improve the quality of the code. To assess these guidelines, they needed some kind of measurement system for code quality. This is exactly what they built: a system that scans code with a focus on several aspects (in total they leverage 12 different variables), so developers can track the progress of their code quality. This information can then be used as feedback for adjusting certain mechanisms or guidelines, and to stimulate individual developers to write more secure code of higher quality.
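As a toy sketch of the kind of check such a "spellchecker" plugin might run (the patterns below are purely illustrative and certainly not Walmart's actual rules):

    import re

    # Illustrative patterns only; real tools rely on full static analysis.
    SUSPICIOUS_PATTERNS = {
        "possible SQL injection (query built by string formatting)": re.compile(r"execute\(.*[%+].*\)"),
        "possible hard-coded secret": re.compile(r"(password|secret|api_key)\s*=\s*['\"]"),
    }

    def scan_source(source: str):
        # Return (line number, message) pairs for lines matching a suspicious pattern.
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            for message, pattern in SUSPICIOUS_PATTERNS.items():
                if pattern.search(line):
                    findings.append((lineno, message))
        return findings

    # scan_source('cursor.execute("SELECT * FROM users WHERE id = %s" % uid)')
    # -> [(1, "possible SQL injection (query built by string formatting)")]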

To conclude with some numbers: using this approach, Kevin Walker estimated that his team gave back 15,000 developer hours through its guidelines, because far less time is spent rewriting and fortifying code.

@Saasifisecured on twitter