
An interesting presentation to follow was the one in which ENISA highlighted the actions the European Union is taking to guide cloud adoption by governments and to put a firm and secure cloud strategy into place. It does this by providing frameworks and procurement guidelines for adopting cloud services. We go into more detail on risk assessment and management for Small and Medium Enterprises (SMEs), and we end with the challenges that remain for the financial and e-Health sectors.

ENISA is active in several fields:

  • providing recommendations to member states on security best practices;
  • policy implementation;
  • mobilizing communities and raising security awareness;
  • providing hands-on experience.

The goal of their cloud strategy guideline frameworks is twofold: to facilitate the adoption of cloud computing by home users and companies, and to support the adoption of governmental cloud computing.

Saturday, 22 November 2014 00:00

Anatomy of a data breach

The adoption of Software-as-a-Service, and of cloud solutions in general, is becoming popular. However, we are trusting these cloud providers with our most valuable asset: our data. Data breaches happen all the time, as cloud providers are attractive targets for hackers who are after credit card details or other sensitive information. Certain questions arise when you migrate your applications, and with them your data, to the cloud. What can we do with data migrating to the cloud? What kind of security measures can be taken? It is obvious that we (the good guys) need to be right 100% of the time; the bad guys only need to be successful 1% of the time. Krishna Narayanaswamy, Chief Scientist and founder at Netskope, gave a talk about what data breaches are and how to prevent them while using cloud applications.

Main causes

Ideally, the right security procedures and technologies need to be in place to ensure sensitive and confidential information is protected when using cloud resources. A first cause of data breaches is that the majority of companies skip important practices such as vetting the security practices of cloud service providers and conducting audits and assessments of the information stored in the cloud.

Another main cause of data breaches is that more and more companies allow their employees to bring their own devices to work (smartphones, laptops and tablets). In the worst case, they allow these devices to handle company data, or at least do not actively prevent it. Shadow IT is another main cause of data breaches.

A last cause that can trigger a data breach is a system glitch, which, frankly, is nothing a company can protect itself against once it chooses to trust a cloud provider. It is therefore of vital importance, when selecting a cloud provider, to ask about the procedures and mitigations the provider puts in place to protect its customers against data breaches due to system failures.

The causes mentioned above give rise to two types of data breaches: unintentional and intentional. Intentional data breaches happen when hackers or former employees actively attack a cloud provider and extract valuable company data from the cloud. Hackers bypass technology, but former employees can simply use their unrevoked access credentials to log in to the cloud provider. It is thus of vital importance to ensure access rights are also enforced in cloud applications, following the principle of least privilege.

Unintentional data breaches happen when an employee leaks data by accident. For example, an employee using a SaaS application, say Gmail, needs to send an e-mail with an attachment to a competitor; by mistake he attaches a file with valuable company data. This is classified as an unintentional breach, and the competitor now has the valuable data.

The multiplier effect

An interesting parameter to know when a data breach occurs is its economic impact. A study executed by the Ponemon Institute and funded by IBM shows that the cost of a data breach is increasing. It estimates that the theft or loss of a single customer record costs $145. Now consider a data breach in which 10,000 customer records are compromised: its economic impact is as high as $145 x 10,000 = $1.45 million. We call this the multiplier effect.
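As a quick back-of-the-envelope illustration of the multiplier effect (the $145 figure comes from the Ponemon study cited above; the record counts are hypothetical):

```python
# Multiplier effect: the per-record cost scales linearly with breach size.
COST_PER_RECORD = 145  # USD per lost or stolen record (Ponemon/IBM, 2014)

def breach_cost(records_compromised: int) -> int:
    """Estimated economic impact of a breach, in USD."""
    return COST_PER_RECORD * records_compromised

print(breach_cost(10_000))   # 1450000  -> the $1.45 million from the example
print(breach_cost(100_000))  # 14500000 -> ten times the records, ten times the cost
```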

A simple approach can decrease this economic impact: for example, don't store backups in the cloud, and reduce the amount of customer data that is stored there. A balance should be found between the risk of storing data in the cloud and the economic impact if a breach were to occur.

Measure, analyze, act

Use solutions or metrics to rate cloud service providers and check their enterprise readiness. For example, a file-sharing service with a 'fake' download button that redirects you to an advertisement page is not an enterprise-ready cloud service. Check for abnormal events and scan for anomalies to detect possibly malicious activities. A three-step solution: measure, analyze and act.

Measure the cloud services in your company and discover which applications are running. Analyze what these applications do at a deeper level: how do they handle your data, do they use secure network protocols, do they deploy encryption in the cloud, and who holds the decryption key? And lastly, act: plot a course of action based on risk and look at the usage of each service. Is it critical? This is where the security strategy comes into play, deciding how to incorporate cloud services in the company and to what extent they may be used.
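A minimal sketch of this measure-analyze-act loop; the service attributes, weights and thresholds are invented for illustration and do not reflect Netskope's actual rating model:

```python
# Measure -> analyze -> act: a toy risk rating for discovered cloud services.

discovered_services = [  # "measure": inventory of apps seen on the network
    {"name": "FileShareX", "uses_tls": False, "encrypts_at_rest": False, "critical": False},
    {"name": "CRM-SaaS",   "uses_tls": True,  "encrypts_at_rest": True,  "critical": True},
]

def risk_score(svc: dict) -> int:
    """The "analyze" step: a crude score; higher means riskier."""
    score = 0
    if not svc["uses_tls"]:
        score += 2  # insecure network protocols
    if not svc["encrypts_at_rest"]:
        score += 2  # no encryption in the cloud
    return score

for svc in discovered_services:  # "act": plot a course of action based on risk
    score = risk_score(svc)
    if score >= 3 and not svc["critical"]:
        print(f"block {svc['name']} (risk {score})")
    elif score > 0:
        print(f"review {svc['name']} (risk {score})")
    else:
        print(f"sanction {svc['name']}")
```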

SaaS vendor responsibilities

In the SaaS model, enterprise data is stored outside the enterprise boundary, at the SaaS vendor's end. Consequently, the SaaS vendor must adopt additional security checks to ensure data security and prevent breaches due to security vulnerabilities in the application or through malicious employees. This involves the use of:

  • strong encryption techniques for data security;
  • hashing of sensitive login information (see the sketch after this list);
  • fine-grained authorization to control access to data and auditing access logs;
  • off-loading sensitive information to dedicated servers.
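As a sketch of the hashing point: a minimal salted, slow key derivation using only Python's standard library. The iteration count is an illustrative assumption and should be tuned to your hardware:

```python
# Hash login secrets with a salted, slow KDF; never store plain passwords.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to your latency budget

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```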

A company like Google has a lot of security challenges to tackle: not only the scale of its services but also the complex structure of its applications makes it hard to implement a good security strategy. Nevertheless, Google seems to do a pretty good job of securing its platforms, so it is interesting to know how. Eran Feigenbaum, Director of Security at Google Apps, gave a keynote presentation on this topic; what follows is a summarized article.

First and foremost, as the Chief Information Security Officer (CISO) of a company, one should ask oneself: "What am I going to protect? What are my valuable assets?" The answer to these questions depends heavily on the kind of business you're in. Some examples:

  • intellectual property
  • financial data
  • resources
  • customer data

In the case of Google, the main valuable asset is the latter: customer data. Google collects a huge amount of customer data generated by the usage of its services. As this is its main source of revenue, it is pretty clear that its security strategy is focused on protecting this valuable asset.

Technology, Scale, Agility

When you look at the security strategy that Google tries to enforce, you can see the focus rests on three main pillars: technology, scale and agility. If you need certain technology, you need the know-how to develop it. Whatever you do at Google, be it security work or deploying software, you always need to keep in mind the scale at which you're operating. And when it comes to security, agility has to be a main concern of your strategy, ranging from responding quickly to security incidents to applying patches for zero-day exploits. You have to be prepared; your team has to be drilled.

Security was too often a matter of ticking off a checkbox, which is the wrong way of doing security when you look at the consequences of that approach. Quite often the security division of a company finds an issue right before an application is scheduled to go live. The right action would be to block the application from going live and resolve the issue; in practice that seems to be rather optimistic thinking. What really happens is: "Launch the application, and we will take care of the issues in an update." History has shown that, unfortunately, these issues remain a source of security-related problems.

Google is in the unique position of controlling the entire stack it uses, from the design of the CPU chip and motherboard up to the servers and server racks. An advantage of this approach is that you can build machines from the ground up, without installing useless components. Fewer components, fewer security issues. For example, Google servers have no video output, as it is not needed. Go back to the fundamentals, to the security basics.

Gmail is a great example; take Eran's own e-mail account. He is a Google Director, so it is safe to say his e-mails must not be leaked or compromised. Gmail chops an e-mail up into many small pieces and distributes those pieces over the whole environment, even mixing them with regular user data. Each piece gets a random filename, and its content is obfuscated or encrypted, depending on the application. To deal with loss, every chunk of e-mail has 6 live copies: 3 distributed within the same datacenter and 3 across different datacenters. This means that if one server, or even a whole datacenter, fails, there are still enough ways to recover the e-mail.
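A toy model of this chunk-and-replicate idea; the chunk size, the XOR stand-in for real encryption and the datacenter names are all invented, and the real Gmail pipeline is of course far more involved:

```python
# Split a message into chunks, give each a random name, obfuscate the content,
# and place 6 live copies: 3 in one datacenter, 3 spread over others.
import os

CHUNK_SIZE = 64  # bytes; arbitrary for this example

def store_message(message: bytes, key: bytes) -> dict:
    placements = {}
    for i in range(0, len(message), CHUNK_SIZE):
        chunk = message[i:i + CHUNK_SIZE]
        name = os.urandom(8).hex()                   # random chunk filename
        obfuscated = bytes(b ^ key[j % len(key)]     # stand-in for real encryption
                           for j, b in enumerate(chunk))
        placements[name] = {
            "data": obfuscated,
            "replicas": ["dc1-a", "dc1-b", "dc1-c",  # 3 copies in one datacenter
                         "dc2-a", "dc3-a", "dc4-a"], # 3 copies elsewhere
        }
    return placements

layout = store_message(b"Subject: quarterly numbers ...", os.urandom(16))
print(len(layout), "chunks stored, 6 replicas each")
```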

Nowadays, one of the biggest issues cloud providers struggle with is that customers are looking for a silver bullet: for them, that would be a way to put trust in the cloud provider, which, at the end of the day, is what it is all about. Unfortunately, no such thing exists. There is no method X such that if you deploy X, all systems are 100% secure. We need to approach this in another way and work towards security "built in depth": several security measures working together to make it almost impossible for attackers to breach the application.

Three main threats

At Google, they identify three main threats: loss of hardware, physical intruders (nearly impossible) and network security. The loss of hardware is mitigated by a unique inventory system. Every hard disk has a unique serial number, which is tracked in the inventory system, and every time an operator needs to do something with a disk, it gets logged. If a hard drive goes missing, the datacenter is shut down and no data moves in or out of it. A Vice President needs to come on site to open the datacenter again.

Physical intruders are not really a problem at Google; it is nearly impossible to breach the datacenters physically: laser-guided alarm systems, motion detectors, nets to catch cars trying to breach the wall, and so on.

Devices on the Google network cannot talk to other devices unless they get explicit permission to do so. This means that connecting a device to the Google network won't result in any communication unless the other device has been given permission to talk to yours.
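In other words, the network is default-deny. A minimal sketch, with hypothetical device names and grants:

```python
# Default-deny networking: traffic flows only where an explicit grant exists.
allowed_pairs = {("build-server-17", "artifact-store-03")}  # hypothetical grants

def may_talk(src: str, dst: str) -> bool:
    return (src, dst) in allowed_pairs

print(may_talk("build-server-17", "artifact-store-03"))  # True: explicitly granted
print(may_talk("new-laptop-42", "artifact-store-03"))    # False: no grant, no traffic
```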

"Someone needs the key to the store"

Often the question is raised: "Who at Google can read which of my data?" This is a valid question, and one Google should deal with and be transparent about. They address it with a Role-Based Access Control system: every employee is assigned a role accompanied by the set of permissions he needs to be able to do his job. These permission sets are fine-grained and assigned following the principle of least privilege. In addition, every call an employee performs is logged. Once a month the employee receives an e-mail containing all his logged actions, a psychological reminder that everything he does is logged and known to the system.
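A minimal sketch of such an RBAC-with-audit-log setup; the roles, permissions and digest below are invented for illustration and do not describe Google's actual system:

```python
# Role-based access control where every call is logged, plus a monthly digest.
import datetime

ROLE_PERMISSIONS = {  # hypothetical roles with least-privilege permission sets
    "support-engineer": {"read_ticket_metadata"},
    "storage-operator": {"replace_disk", "read_inventory"},
}

audit_log: list[tuple[str, str, str, bool]] = []  # (time, employee, permission, granted)

def call(employee: str, role: str, permission: str) -> bool:
    granted = permission in ROLE_PERMISSIONS.get(role, set())
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    audit_log.append((timestamp, employee, permission, granted))  # every call is logged
    return granted

call("eran", "support-engineer", "read_ticket_metadata")  # allowed by the role
call("eran", "support-engineer", "read_message_bodies")   # denied: not in the role

def monthly_digest(employee: str) -> list:
    """The reminder e-mail: everything this employee did, allowed or not."""
    return [entry for entry in audit_log if entry[1] == employee]

print(monthly_digest("eran"))
```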

In conclusion, we see that Google has put a great deal of work and resources into its security strategy and management, as is expected from one of the biggest cloud providers out there.

A lot of different consumer cloud services are available nowadays. If you want a good data-driven IT management strategy in your company, you need to acknowledge that besides the sanctioned IT cloud solutions there is also something we call "shadow IT": consumer services running on your company network without being sanctioned by the IT department. Employees bringing these services into the workplace can pose significant security issues.

It is estimated that today 72% of IT professionals do not know the scope of shadow IT at their companies. Looking at the number of cloud services in use, we see a big increase: in the second quarter of 2014, 738 cloud services were in use at enterprises; in the third quarter this number rose to 831. For the whole picture over several quarters, see the figure below (source: skyhighnetworks.com):

[Figure: number of cloud services in use per enterprise, by quarter; source: skyhighnetworks.com]

There are several reasons for this increase:

  • we're early in the adoption cycle for cloud;
  • there is a convergence of several forces driving innovation in software;
  • with the availability of open source software components and low-cost platforms such as Amazon AWS, it's cheaper than ever to launch an application over the internet;
  • we are recovering from the financial crisis of 2008, and as a result a lot of startups are solving problems in new ways.

Companies are realizing that shadow IT is becoming a problem: not only may these services fail to comply with company policies, but, even worse, they can pose great security risks. As a result, the first reaction of companies was to try to block these services on their networks. This proved unsuccessful for several reasons, one of them being that cloud services regularly introduce new URLs that are not blocked by the company firewall.

A solution to this problem could be to embrace the cloud adoption lifecycle. Skyhigh offers a solution for this, based on three principles:

  1. Discover: Gain complete visibility into all cloud services in use and an objective risk assessment across data, business, and legal risk;
  2. Analyze: Identify security breaches and insider threats, analyze usage patterns to understand demand for cloud services, and consolidate subscriptions;
  3. Secure: Seamlessly enforce security policies including encryption, data loss prevention, and coarse and granular access control.

In conclusion, it should be stressed that it is extremely important to search for shadow IT in your company and trace the usage of these services. This can prevent sensitive data from leaking, which can be of vital importance to an enterprise. Using a dashboard tool like Skyhigh can be a good approach, as it gives an easy overview of all the cloud services running in your company; where necessary, you can open a more detailed information panel for the service in question.

Source: http://www.skyhighnetworks.com

Wednesday, 19 November 2014 00:00

Interactive Q&A with Cloud Providers

Things get interesting when you put some of the biggest cloud providers together on stage and basically give the crowd control over the questions asked. This interactive Q&A had a lot of interesting people on board:

  • Eran Feigenbaum, Director of Security, Google Apps
  • Tim Rains, Director, Trustworthy Computing, Microsoft
  • Kay Hooghoudt, Director Business Development Managed Cloud, Atos/Canopy
  • Cory Louie, Trust, Safety & Security, Dropbox
  • David Lenoe, Director, Secure Software Engineering, Adobe 

Here is a summary of the (short) Q&A:

Q: What are your opinions about information sharing of user data?

Adobe: Information sharing in a responsible way should be possible.

Dropbox: As a cloud provider, we realize trust is key. Some people forget that Dropbox itself is also a customer of services. Just as we expect the services we use to respect our desire for privacy, we commit to providing privacy to our customers.

Google: Eran shares Cory's view but adds that there should be a balance between trust and transparency. There's still a lot to do, but the whole "don't-use-my-data" model is too restrictive at this moment.

Microsoft: Information sharing has been a loaded term since Snowden published the NSA revelations. Let's go back to basics and see what we can do to give the maximum amount of privacy to our users. Furthermore, Microsoft will do everything in its power to make law enforcement respect every letter of the law. They will not share any data unless doing so is perfectly legally backed, and they will constantly challenge law enforcement to follow the law.

Atos: Shares this general view on trust.

In conclusion: trust is key, and cloud providers have a responsibility.

Q: What are your companies doing to move away from the classic username and password login mechanism?

Microsoft: Our team has set the goal of eliminating passwords and is experimenting with identity-based logins. However, Tim stresses that people still need to have a choice.

Google: Google says it has already reduced the number of username-password logins and moved to a somewhat risk-based mechanism with good usability. Eran acknowledges that the username-password mechanism is no longer sufficient: people reuse passwords or choose passwords with too little entropy. He also stressed that Google already uses additional mechanisms for alternative logins. Besides lowering the number of username-password prompts in Gmail, Google runs anomaly checks on user operations: which computer is the user sending from, how many transactions is he performing, what is his device fingerprint? This allows Google to scan for anomalies in usage and, if necessary, prompt the user for an additional authentication factor. Lollipop, the latest release of Android, also brings improvements: it will no longer ask for a password when the device is used in a trusted context. For example: if I am connected to my home Wi-Fi, don't ask for my code but let me bypass the lock screen.
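A toy version of such risk-based, step-up authentication; the signals, weights and thresholds are invented for illustration:

```python
# Score the login context; only anomalous logins get extra prompts.

def login_risk(known_device: bool, usual_country: bool, txns_per_min: int) -> int:
    score = 0
    if not known_device:
        score += 2  # unrecognized device fingerprint
    if not usual_country:
        score += 2  # login from an unusual location
    if txns_per_min > 30:
        score += 3  # anomalous operation rate
    return score

def decide(score: int) -> str:
    if score >= 4:
        return "block and alert"
    if score >= 2:
        return "prompt for a second factor"
    return "allow without a password prompt"  # the trusted-context case

print(decide(login_risk(known_device=True,  usual_country=True,  txns_per_min=3)))
print(decide(login_risk(known_device=False, usual_country=False, txns_per_min=3)))
```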

Dropbox: We are trying to solve this problem in an open-source way, building profiles of users and leveraging a lot of variables.

Adobe: Adobe adds that there will always be a struggle between usability and security and that there's still a lot of work to do in this field.

In conclusion: they all acknowledge this technology is outdated and we have to move to new mechanisms for authentication. There is still a lot of work to do.

Q: Information sharing on security breaches?

Adobe: We work with trusted groups where we disclose certain security vulnerabilities.

Google & Dropbox: We are more supporters of the idea to let everybody know about the vulnerabilities at once.

Microsoft: We have a dedicated webpage for disclosing breaches.

Kevin Walker (in the audience, Vice President at Walmart): This discussion should be obsolete. The customer has the right to know; information security should be handled better. Seize this unique moment as big cloud providers.

In conclusion, there are two main approaches: disclosure within a closed group of trusted people, and letting everybody know at the same time.

Note by Dario Incalza, SaaSificationSecurity's reporter on site: not all questions and answers are written down exactly as the panel members phrased them, since I added some personal interpretation, but the overall opinions are the same as I perceived them while attending the Q&A.

This will be the first article in a series based on talks given at the Cloud Security Alliance Conference in Rome, 19 and 20 November. The first keynote presentation was given by Kevin Walker, Vice President and Assistant CISO at Walmart.

His talk focused on how Walmart changed its view on information security, how it improved its security frameworks, and what efforts and resources were needed to achieve this.

The talk started with a bit of history. Information security became a big issue around 1998, when the bad guys started to realize that a lot could be gained by compromising a system and stealing data; this called for some form of information management and security. To this day it is very much an ongoing battle in which enterprises, and cloud providers in particular, should always stay ahead of the attackers, preferably by a couple of steps. Something to think about: we use 60-year-old technology, the username-password mechanism, to protect our most valuable assets. It's unbelievable that we couldn't do better than that in all these years (excluding multi-factor authentication, which is still based on username and password).

In order to enable modern-day information security we should change our state of mind: it's not a matter of "if a breach could occur" but of "when a breach will occur". This should push modern information security mechanisms towards a more agile approach:

  • improve detection of breaches;
  • reduce response time;
  • improve containment of breaches;
  • reduce recovery time, preferably avoid the need for recovery.

In order to achieve this, Walmart has invested a huge amount of resources in guidelines for how their developers write and deploy code. All the guidelines are based on the idea that you can't buy time, but you can save time and give it back. To determine which guidelines and tools Walmart needs to provide for its developers, they used the history of individual developers: they went back, in some cases up to two years, through each developer's code and checked which bugs or vulnerabilities it introduced. As a result, developers could be grouped by their "bad habits". For example, one group of developers writes a lot of XSS vulnerabilities; these developers can follow special training focused on eliminating those habits. That is good for the enterprise and good for the individual developer, who can take the training with him if he chooses to work for another company.
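A sketch of how such grouping by bug history might look; the developers, counts and vulnerability classes are made up:

```python
# Group developers by the vulnerability class they introduce most often.
from collections import defaultdict

bug_history = [  # (developer, vulnerability class) mined from past commits
    ("alice", "xss"), ("alice", "xss"), ("alice", "sqli"),
    ("bob", "sqli"), ("bob", "sqli"),
]

per_dev: dict = defaultdict(lambda: defaultdict(int))
for dev, bug_class in bug_history:
    per_dev[dev][bug_class] += 1

training_groups = defaultdict(list)
for dev, counts in per_dev.items():
    worst = max(counts, key=counts.get)  # the developer's dominant bad habit
    training_groups[worst].append(dev)

print(dict(training_groups))  # {'xss': ['alice'], 'sqli': ['bob']}
```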

In addition to these specialized trainings, they developed IDE plugins that work like a kind of spellchecker, highlighting possible security vulnerabilities. All these efforts should improve the quality of the code. To assess the guidelines, they needed some kind of measurement system for code quality. That is exactly what they built: a system that scans code with a focus on several aspects (in total they leverage 12 different variables) and lets developers track the progress of their code quality. This information can then be used as feedback for adjusting certain mechanisms or guidelines, and it stimulates individual developers to write more secure code of higher quality.
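A 'security spellchecker' in miniature, assuming two classic XSS smells as its only rules; a real plugin would use much richer analysis than regular expressions:

```python
# Flag suspicious source lines the way a spellchecker underlines typos.
import re

RULES = {
    "possible XSS (unescaped request data)": re.compile(r"echo\s+\$_(GET|POST|REQUEST)"),
    "possible XSS (innerHTML from URL)": re.compile(r"innerHTML\s*=.*(location|document\.URL)"),
}

def lint(source: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in RULES.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {label}")
    return findings

sample = 'echo $_GET["name"];\nel.innerHTML = document.URL.split("#")[1];'
print("\n".join(lint(sample)))
```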

To conclude with some numbers: using this approach, Kevin Walker estimates that his team gave 15,000 developer hours back through its guidelines, since far less time is spent rewriting and fortifying code.

@Saasifisecured on twitter