
Netsparker Exhibited at RSA Conference 2014

Netsparker Web Application Security Scanner will be exhibited at the RSA Conference by our reseller Portcullis at booth 2134.

The 2014 expo is being held from the 24th to the 28th of February and is the biggest to date. With more than 350 exhibitors, the expo is divided into two main halls: the North Expo and the South Expo. If you are at the RSA Conference in San Francisco, head down to booth 2134 and visit our reseller.


2013/2014 Web Vulnerability Scanners Comparison - Netsparker Confirmed as a Market Leader

Earlier this month, information security researcher and analyst Shay Chen released the 2013/2014 Web Application Vulnerability Scanners Benchmark, in which he compared 63 different web vulnerability scanners, also known as web application security scanners.

The comparison contains a wealth of information, and for those who have the time, it is worth diving into and analyzing all of the results. We have of course already done our homework; we analyzed the results and are more than happy with them: Netsparker Web Application Security Scanner smoked the competition and is second only to IBM AppScan, a scanner that costs much more than Netsparker, with only a 4% difference.

Hence when you also include price in the equation, Netsparker is the best web vulnerability scanner with the best return on investment; not only does IBM AppScan carry an expensive price tag, but users still have to spend a lot of time verifying its findings, as opposed to Netsparker, which automatically checks its own findings and reports no false positives.

How did Netsparker Perform When Compared to Other Scanners?

There are several different angles from which you can look at the results to determine which is the best web vulnerability scanner for you. To start off with, below are the graphs for each web vulnerability class tested in this benchmark:

SQL Injection Vulnerabilities Detection

SQL Injection Vulnerabilities detected by the web vulnerability scanners

Netsparker detected all of the 136 SQL injection vulnerabilities like most of the other web vulnerability scanners. Only NTOSpider and N-Stalker did not detect all SQL injection vulnerabilities.

Cross-Site Scripting Vulnerabilities Detection

Cross-Site Scripting Vulnerabilities detected by the web vulnerability scanners

Netsparker detected all of the 66 cross-site scripting vulnerabilities like most of the other web vulnerability scanners. Only BurpSuite and N-Stalker failed to detect all XSS vulnerabilities.

Path Traversal / Local File Inclusion Vulnerabilities Detection

Path Traversal and Local File Inclusion Vulnerabilities detected by the web vulnerability scanners

Here is where things start to get interesting; IBM AppScan and Netsparker are in a league of their own when it comes to detecting path traversal and local file inclusion vulnerabilities.

According to the benchmark, IBM AppScan detected all vulnerabilities while Netsparker missed 30. However, when these vulnerabilities were scanned individually, Netsparker identified them all. This led us to discover a very rare bug that only occurs with a particular custom 404 configuration (it is quite hard to trigger this bug in the real world, hence we did not know about it before this benchmark). We are looking into this issue and addressing it. So if you scan for these vulnerabilities individually, or if they were on a different website, Netsparker would identify them all. Feel free to download the benchmark and test it yourself to see this.

Third placed NTOSpider missed 154, HP WebInspect missed 228, Acunetix WVS missed 348 and so on.

XSS via RFI Vulnerabilities Detection

XSS via Remote File Inclusion Vulnerabilities detected by the web vulnerability scanners

In this case only Netsparker, IBM AppScan and HP WebInspect detected all 108 XSS via RFI vulnerabilities. Next in line is NTOSpider, which detected 86 instances, followed by Acunetix, which detected 84 instances.

Unvalidated and Open Redirects Vulnerabilities Detection

Open and Unvalidated Redirect Vulnerabilities detected by the web vulnerability scanners

HP WebInspect detected the most unvalidated redirect vulnerabilities by detecting 15 out of 60, followed by Netsparker and IBM AppScan with 11 detections.

Old Backup Files Detected

Old backup files detected by the web vulnerability scanners

Acunetix WVS leads the pack in this test by detecting 60 out of 184 backup files, followed by BurpSuite with 46 detections and SyHunt with 34 detections, with the rest trailing behind.

Total Number of Identified Vulnerabilities

All Web Application Vulnerability Classes

After going through each individual vulnerability class chart, it is now time to add up all the vulnerabilities and see how the scanners performed overall. As per the chart below, Netsparker and IBM AppScan were the only two automated web vulnerability scanners to identify more than 1,000 web application vulnerabilities. Both scanners lead thanks to their excellent detection of critical path traversal and LFI vulnerabilities.

Total number of web application vulnerabilities detected by the web vulnerability scanners

Netsparker detected 1,112 vulnerabilities and is second only to IBM AppScan, which detected 1,147 vulnerabilities. Next in line is NTOSpider with 958 vulnerabilities, then HP WebInspect with 917 vulnerabilities, followed by Acunetix, which detected 819 vulnerabilities. BurpSuite, SyHunt and N-Stalker follow with 791, 716 and 484 identified vulnerabilities respectively.

Direct Impact Web Application Vulnerabilities

Below is another chart showing how many direct impact vulnerabilities each web vulnerability scanner detected. By direct impact we mean critical vulnerabilities that, if exploited, could affect the operations of the web application and the business itself; hence the “Old backup files” and “Unvalidated / Open redirects” vulnerabilities are excluded from this chart.

Total number of direct impact web application vulnerabilities detected by the web vulnerability scanners

As we can see, after excluding non-direct impact vulnerabilities the performance of the two major players was unaffected, while the performance of all the other scanners, especially the last four in the group, dropped drastically. This shows that both IBM AppScan and Netsparker are more focused on identifying critical vulnerabilities.

False Positives and Web Security Scans Time Consumption

Compared to previous years' benchmarks, all web vulnerability scanners improved their detection rate and managed to reduce the number of reported false positives. Funnily enough, Netsparker, the only false positive free web vulnerability scanner, reported 3 false positive SQL injection vulnerabilities. How did this happen?

To start off with, Netsparker ships with an exploitation engine that is automatically triggered once a vulnerability is detected. If the vulnerability can be exploited, it is not a false positive.

During these tests Netsparker detected all of the 136 SQL injections and reported 3 additional ones. Netsparker's exploitation engine confirmed all of the 136 valid SQL injections but was unable to confirm the 3 additional false positive ones, which were explicitly marked as unconfirmed.

The Time Efficiency Factor - Netsparker Still Leads the Way

Even though Netsparker reported 3 false positive SQL injection vulnerabilities, it still leads the pack. When using Netsparker, the user only has to verify the 3 unconfirmed vulnerabilities.

On the other hand, none of the other web vulnerability scanners have an exploitation engine, hence the user has to confirm all reported vulnerabilities. Therefore, in this case, a normal user would have had to confirm 136 SQL injection vulnerabilities, which can take quite a bit of time!

Which is the Best Web Vulnerability Scanner?

The best web vulnerability scanner is the one that detects the most vulnerabilities, is the easiest to use and can automate most of your work. As we all know, users have to verify a scanner's findings, therefore automated vulnerability confirmation should also be considered in the equation. Verifying findings is a time consuming process, and you are certainly better off spending that time remediating issues rather than verifying findings.

How to Choose the Best Web Vulnerability Scanner for You?

Although the above statistics are a good indication of who the web application security market leaders are, don't base your judgement on these facts alone. There is no better way to determine which tool is best for you than by getting your hands dirty and scanning some of your own test websites with a number of different web vulnerability scanners.

If you are new to this geeky world of automated scanning, the article how to evaluate web vulnerability scanners will give you a better insight into how to choose the right web scanner for you. And if you'd like to learn more, read the Getting Started with Web Application Security article.

What is Next for Netsparker Web Application Security Scanner?

Of course we are very happy that even though we are the youngest contender in this industry, we are already up there with the major players such as IBM AppScan, although we have to admit that it would have been even more awesome if we had beaten them as well.

We have done very well in identifying almost all critical vulnerabilities and can see that our lowest point is detecting old backup files on websites. We never really focused on this type of issue since the cost of identification is high and the value of such findings is relatively low. However, we will ship these checks as an option in upcoming releases, so users will be able to enable them during a web application security scan. We will continue working hard to ensure that Netsparker is easy to use and can automatically detect as many web application vulnerabilities as possible.

Last but not least we would like to thank Shay Chen for all his professional work and dedication.

Understand Your Web Application Better with Netsparker Knowledge Base Nodes

Malicious website hack attacks do not just happen when someone successfully exploits a web application vulnerability. Many attacks are successful because an attacker discovered some hidden admin interface while analysing the developer comments, or because the attacker found some debug information that gave him enough information to connect directly to the backend database.

Therefore your web vulnerability scanner of choice should report much more than just exploitable web application vulnerabilities. And that is where Netsparker excels; Netsparker Web Application Security Scanner is not just an automated web vulnerability scanner that automatically identifies vulnerabilities in web applications, it is a complete web security tool that also highlights other security issues which are typically not classified as “vulnerabilities” but might help attackers craft a successful hack attack against a web application.

Netsparker provides the user with a complete detailed analysis of the target web application. All of this information is centralized in knowledge base nodes, which can be found in the sitemap section. Below is a list of all knowledge base nodes and what information they present to the user:

Out of Scope Links

In the Out of Scope Links knowledge base node Netsparker will list all the links found in the target web application that do not fall under the scanning scope, hence they won't be scanned.

Therefore, from this knowledge base node users can determine what was not scanned and why, so they can fine-tune their security scan settings should they wish to also scan those links and their content.

Out of Scope Links Knowledge Base Node on Netsparker

Interesting Headers

In the Interesting Headers knowledge base node Netsparker will list all the unusual HTTP headers encountered during the security scan of the target web application. Such information is very useful for quality assurance teams; it typically leads them to discover legacy or unused components which are still being called because some code has been left enabled in the system.

Such information can also help security professionals uncover details about the target web application that they did not know of; for example, they can find out if a load balancer is in use, or determine the version of some of the server components for more targeted testing.

Interesting Headers knowledge base node in Netsparker

Web Pages with Inputs

In this knowledge base node Netsparker will list all of the target web application's pages that have an input. This list can be used by developers and quality assurance members for further manual testing. Security professionals find such information useful as well, since it gives them an overview of the attack surface of a web application, which is where web applications are typically attacked.

List of Web Pages with inputs in Netsparker knowledge base node

MIME Types

In this knowledge base node Netsparker will list all the MIME types discovered on the target web application. Under each MIME type Netsparker will also list all the files of that type. Such information is very handy in case further manual testing is required. It also helps security professionals spot any unusual file or type served by the server.

Netsparker will show all the different MIME types used on the target website in a knowledge base node

File Extensions

In this knowledge base node Netsparker will list all the different file extensions identified on the target web application. Under each extension it will also list all the files with that extension. Although this might not contain a lot of juicy information, it helps security professionals determine what is being served from the web application.

List of different file extensions identified on target web application listed in Netsparker knowledge base node

Email Addresses

In this knowledge base node Netsparker will list all the email addresses identified on the target web application. Although having clear text email addresses on a website is not a vulnerability in itself, it is good to know that email addresses are published on the website.

List of email addresses identified on target web application listed in Netsparker knowledge base node

Embedded Objects

In this knowledge base node Netsparker will list all the embedded objects, such as Flash files or ActiveX components, discovered on the target web application and their locations.

Embedded objects identified on target web application listed in Netsparker knowledge base node

External Scripts

In this knowledge base node Netsparker will list all the external scripts identified on the target web application. An external script from a non-trusted source should be considered a security risk, since it might be tampered with by someone else to execute malicious JavaScript on the target web application. Such tampering might result in a stored or permanent cross-site scripting vulnerability.

Information in this knowledge base node might also help users determine if the target web application has already been hacked, for example if malware is being distributed via an injected script. Trusted and untrusted third-party scripts used on your web application are also listed here.

List of external scripts identified on target website in Netsparker knowledge base node

External Frames

In this knowledge base node Netsparker will list all the frames on the target web application which originate from an external source. Similar to external scripts, external frames might be the result of an already hacked website, hence it is good for security professionals to know all of the external references in a web application.

List of external frames identified on target web application in Netsparker knowledge base node

Comments

In this knowledge base node Netsparker will list all the source code comments identified on the target web application and highlight keywords which might contain sensitive information. This is probably the most overlooked security issue of all, and it could lead to sensitive information disclosure.

For example imagine a developer leaves the below comment on the web application:

<!-- similar to admin pages in /hiddenadmin/ -->

If such a comment is found by a malicious attacker, he or she knows that there is some sort of hidden admin area which might give them more information or access to the admin portal. It is very typical for developers to leave very sensitive information in web applications, such as connection strings, administrative account credentials and much more.

Netsparker will automatically find and crawl identified paths in the comments but there is much more that can be left in the comments by the developers.

Developer comments in source code identified in target web application listed in Netsparker knowledge base node

Netsparker also allows users to add new entries to the list of sensitive comment keywords so they are alerted once such an entry is identified in the source code comments. Users can also modify the existing patterns from the Comments node in the Netsparker settings, as seen in the below screenshot.

Configuring developer comments highlights in Netsparker web application security scanner
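To illustrate the concept, below is a minimal sketch (not Netsparker's actual implementation) of how HTML comments can be extracted and checked against a list of sensitive keywords; the keyword list and sample HTML are hypothetical.

import re

# Hypothetical keyword patterns, similar to the configurable list described above.
SENSITIVE_PATTERNS = [r"admin", r"password", r"todo", r"debug", r"connection\s*string"]

def find_sensitive_comments(html: str) -> list[str]:
    """Return HTML comments that contain any of the sensitive keywords."""
    comments = re.findall(r"<!--(.*?)-->", html, flags=re.DOTALL)
    return [
        comment.strip()
        for comment in comments
        if any(re.search(p, comment, flags=re.IGNORECASE) for p in SENSITIVE_PATTERNS)
    ]

# Example usage with the comment from the article.
page = '<html><!-- similar to admin pages in /hiddenadmin/ --><body></body></html>'
print(find_sensitive_comments(page))  # ['similar to admin pages in /hiddenadmin/']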

JavaScript Files

In this knowledge base node Netsparker will list all the JavaScript files identified on the target website. Security professionals can refer to this centralized list to check that all JavaScript on the target website is secure and being used appropriately, rather than having to browse through the website, find the files manually and risk missing some of them.

List of JavaScript files identified on target web application by Netsparker listed in the knowledge base node

Cookies

In this knowledge base node Netsparker will list all the cookies used by the target website. Cookies can disclose a lot of information about the target website that attackers can use to craft a malicious attack. For example cookies can store strings such as “admin=false” or “debugging=1”.

From this node security professionals have access to a centralized list of all cookies so they can analyse them one by one and identify any information disclosure issues.

Cookies used on target website listed in a Netsparker knowledge base node

Web Services

In this knowledge base node Netsparker will report any identified web services running on the target web application and their operations.

List of identified web service in Netsparker knowledge base node

Identifying All Web Application Security Holes

As this article highlights, there is much more to web application security than just identifying and remediating exploitable vulnerabilities, and this is where Netsparker Web Application Security Scanner plays an important role. Web security professionals should take advantage of such tools and use all of the information they provide to their advantage.

Netsparker centralizes all of this information to help security professionals better understand the target web application and identify any security issues that are not “exploitable vulnerabilities” yet expose information that malicious attackers can use to mount a successful hack attack.

Netsparker Chosen as Finalist in Red Herring Top 100 Europe Awards

Netsparker has been announced as a finalist for the 2014 Red Herring Top 100 Europe Awards, a prestigious list honoring the year’s most promising private technology ventures from the European business region.

The Red Herring editorial team selected the most innovative companies from a pool of hundreds from across Europe. The nominees are evaluated on 20 main quantitative and qualitative criteria: they include disruptive impact, market footprint, proof of concept, financial performance, technology innovation, social value, quality of management, execution of strategy, and integration into their respective industries.

This unique assessment of potential is complemented by a review of the actual track record and standing of a company, which allows Red Herring to see past the “buzz” and make the list a valuable instrument for discovering and advocating the greatest business opportunities in the industry.

"This year was rewarding, beyond all expectations" said Alex Vieux, publisher and CEO of Red Herring. "The global economic situation has abated and there are many great companies producing really innovative and amazing products. We had a very difficult time narrowing the pool and selecting the finalists. Netsparker shows great promise and therefore deserves to be among the Finalists. Now we’re faced with the difficult task of selecting the Top 100 winners of Red Herring Europe. We know that the 2014 crop will grow into some amazing companies that are sure to make an impact."

“It is a privilege for all of the team at Netsparker to be selected as a finalist for the prestigious Red Herring Top 100 Europe awards,” said Ferruh Mavituna, CEO of Netsparker Ltd, the company behind Netsparker Web Application Security Scanner. “When you consider that Netsparker is the youngest commercial web vulnerability scanner in the industry, this is a great feat for the team! This clearly shows that we are heading in the right direction and that we have already left an imprint on the web application security industry by developing innovative and easy to use software.”

Finalists for the 2014 edition of the Red Herring 100 Europe award are selected based upon their technological innovation, management strength, market size, investor record, customer acquisition, and financial health. During the months leading up to the announcement, hundreds of companies in the telecommunications, security, Web 2.0, software, hardware, biotech, mobile and other industries completed their submissions to qualify for the award.

The Finalists will present their winning strategies at the Red Herring Europe Forum in Amsterdam, April 7-9, 2014. The Top 100 winners will be announced at a special awards ceremony the evening of April 9 at the event.

Scan Your Web Applications with Your Xbox and PlayStation from the Comfort of Your Sofa

Game consoles such as Microsoft's Xbox One and Sony's PlayStation 4 have become very popular, and almost every computer enthusiast has one. Today it is possible to install Netsparker on Xbox One and PlayStation 4 and launch automated web application vulnerability scans using the game controller from the comfort of your sofa.

April 1, 2014 London - Netsparker Ltd today announced the release of Netsparker Web Application Security Scanner Console Edition, the leading false positive free web vulnerability scanner that simulates malicious hackers and enables web application security professionals to automatically identify vulnerabilities and other security issues in their web applications.

The latest version of Netsparker can be installed on popular next gen game consoles such as Xbox One and PlayStation 4 to allow security professionals to launch web vulnerability scans from the comfort of their sofa.

“Remote working is becoming really popular and nowadays it is common for people to work from home. We took this a step further and started supporting popular game consoles to further ease the job of security professionals,” said Ferruh Mavituna, CEO of Netsparker. “We are really thrilled with these updates. Now security professionals can also share their scans on the popular online game platforms with their friends and colleagues.”

Features Highlights of Netsparker Console Edition:

Scan your Web Applications from the Comfort of your Sofa

Launch web vulnerability scans with Netsparker from your Xbox One or PlayStation 4 game console at home. You can control Netsparker with your console's game controller. The game controller will vibrate when a vulnerability is found.

Share Scan Results on Your Favourite Social Gaming Network

You can live broadcast your scans using social video sharing sites like twitch.tv and Ustream. Your friends can enjoy watching the scans and can comment on them. You can also share vulnerability details on popular social platforms like Twitter and Facebook.

Co-op Exploitation

You are not alone against the evil vulnerabilities, call your friends for support. You can invite your friends to join your scanning session and exploit the vulnerabilities together!

Your Body as the Controller

By performing some easy body gestures, you can control Netsparker without using a game controller. A single clap will exploit the vulnerability and a double clap will retest it. Stopping a scan is as easy as doing the splits.

Harness the Power of Your CPU, GPU and Memory

With their 8-core CPUs, hefty GPUs and loads of memory, next gen game consoles will make Netsparker scans take significantly less time to complete. Netsparker will utilize GPU power to perform complex attacks against the DOM surface of the target web app.

Challenge your Friends, Earn Achievements and Game Points

It has never been this fun to scan web applications. Netsparker will award you with achievements and game points during scans. You can challenge your friends to earn more points or to unlock that SQL Injection Ninja achievement.

Your scans on the cloud, Sync scans between your console and PC

You have started a scan at work and it is still running when you need to leave for the weekend. Now you can transfer that scan to your game console and continue scanning at home. All your scans are stored safely in the cloud and synced with all your devices.

Availability

We are going to release a beta version of Netsparker Console Edition later this month. Please contact us if you want to participate in the beta program.

About Netsparker Ltd

Netsparker Ltd is a young and enthusiastic UK based company. Netsparker is focused on developing a single automated web security product, the false positive free Netsparker Web Application Security Scanner. Netsparker management and engineers have more than a decade of experience in the web application security industry that is reflected in their product, Netsparker Web Application Security Scanner. Founded in 2009, Netsparker’s automated web vulnerability scanner is one of the leading security tools and is used by world renowned companies such as Samsung, NASA, Skype, ING and Ernst & Young.

Note: A lot of people actually believed this! Thumbs up to the Netsparker team for pulling off this April Fools' joke.

Are Your Web Applications Vulnerable to Heartbleed SSL Vulnerability?

Are you using SSL on your web applications and websites? Scan them with the new version of Netsparker Web Application Security Scanner to find out if they are vulnerable to the latest critical SSL vulnerability, Heartbleed. Top sites such as Yahoo! and Flickr are vulnerable to the Heartbleed bug.

The Heartbleed bug is a serious vulnerability in the popular OpenSSL library, which is used to provide SSL functionality on web servers. The vulnerability allows malicious hackers to steal private information. Once exploited, the attacker can access sections of the web server's memory where sensitive data such as users' passwords are stored. This also means that the attacker can retrieve the web server's private key and hence decrypt any encrypted information sent to the websites and web applications running on the web server itself.

One of the easiest exploits is hijacking sessions by accessing cookies and requests from the web server's memory. Since the Heartbleed vulnerability affects the OpenSSL library, Microsoft's IIS (Internet Information Services) web server is not affected by the issue.

Identify Heartbleed Vulnerability in Your Web Applications

Netsparker can automatically identify the Heartbleed SSL vulnerability in your web applications. Netsparker will not simply check the version of the OpenSSL library you are running, but will send the necessary requests to perform a full scale Heartbleed vulnerability check.

Netsparker reporting a Heartbleed vulnerability

If you are already using Netsparker, upon starting up the scanner it will automatically check for updates and alert you to download the latest update. Alternatively launch the product and click Check for Updates from the Help drop down menu.

If you are not yet using Netsparker, we recommend you download the Netsparker Trial Edition to see for yourself how, within a minute or two, you can launch automated web vulnerability scans against your websites and web applications and identify vulnerabilities that might leave you and your business exposed to malicious hacker attacks.

For more detailed information about the Heartbleed SSL vulnerability, refer to the Heartbleed article on Wikipedia.

Is Your Web Vulnerability Scanner Approved by PCI?

A question we are frequently asked is whether Netsparker Web Application Security Scanner is a PCI approved scanner or tool, or PCI DSS compliant. In this article we will give a brief overview of what PCI DSS is and how businesses can use a web vulnerability scanner such as Netsparker to ensure that their own business and that of their customers is PCI DSS compliant, and we will also explain why Netsparker, or any other web vulnerability scanner, cannot be a PCI approved scanner.

Introduction to PCI DSS

PCI DSS stands for Payment Card Industry Data Security Standard; a set of rules businesses should adhere to if they accept payments via credit cards, to ensure the security and privacy of their customers and their records.

For a more detailed explanation and in-depth analysis about PCI DSS compliance you can read the article PCI Compliance - The Good, The Bad, and The Insecure.

Therefore, if your business accepts payments by credit card, it has to be PCI compliant. If it is not PCI DSS compliant you risk losing your merchant account, and thus won't be able to accept credit card payments and do business. So the next question that comes to mind is: how can your business become PCI DSS compliant?

Is Your Business PCI DSS Compliant?

There are a number of different ways your business can become PCI compliant. Large businesses typically have their own Internal Security Assessor (PCI ISA) who does the annual Report on Compliance (ROC) for them. They can also hire an external PCI QSA (Qualified Security Assessor) to do the audit.

Smaller businesses can also hire an external PCI QSA or complete the PCI SAQ (Self-Assessment Questionnaire) on their own each year. There are also PCI ASVs (Approved Scanning Vendors), organizations which provide automated security services by scanning the internet-facing environments of a business, such as websites, web applications and firewalls, and validating whether the target is PCI DSS compliant.

What Tools Are Used In PCI DSS Security Audits?

Regardless of which auditing option you choose for your business to become PCI DSS compliant, your websites and web applications, even internal ones, have to be audited for vulnerabilities. And as we have seen in a previous article, Why Web Vulnerability Scanning Needs to be Automated, it is virtually impossible to manually audit today's websites and web applications.

As a matter of fact, Qualified Security Assessors, Approved Scanning Vendors and Internal Security Assessors use automated tools to do PCI compliance audits. They all use a web vulnerability scanner to scan websites and web applications and help them uncover vulnerabilities.

Can a Web Vulnerability Scanner Be PCI Compliant?

As we have seen, an organization or even an individual can be a PCI approved or qualified vendor, assessor etc., but the tools they use, i.e. the actual software products such as a web vulnerability scanner, can never be PCI compliant or an approved PCI scanner.

When you think about it, this makes a lot of sense. An automated web vulnerability scanner, or any other security software typically used during PCI security audits such as a network scanner, can be used by both experienced and inexperienced users; therefore there is no guarantee that the tool is used correctly and that the results are accurate.

Web Vulnerability Scanner for PCI DSS Audits

If you are a PCI QSA or ASV, or are doing your own PCI compliance audits, do not look for a web vulnerability scanner or any other automated tool that is approved by PCI, since you'll never find one.

What is important is that the web vulnerability scanner you choose can help you make the job easier, allows you to automate many repetitive tasks, can generate PCI DSS reports and can be easily integrated with other systems you use, especially if you are planning to provide PCI ASV services or become a PCI QSA.

Being PCI DSS compliant is important, but don't forget the purpose of PCI: to ensure that your business is secure. Therefore, when choosing a web vulnerability scanner you should choose the one that detects the most vulnerabilities and helps you make your website more secure, rather than just PCI compliant.

Don't Waste Your Testing Team's Talents - Automate the Repetitive

Many companies shy away from automated testing: it cannot replace manual testing, they reason, and so why invest so much in it? This view can be defended for user interface testing, where humans are faster and more subtle than automated testing and the testing itself is less arduous than the automation of it.

But it falls short of the reality of web security testing; security requires, above creativity and intuition, cycles consisting of hundreds of repetitive tests. These tests will take humans days if not weeks to complete, but require none of the qualities that humans bring to testing, not to mention that some of the security testing cannot be completed manually.

Automated VS Manual Web Security Scanning

Let's compare two common scenarios. Two companies with similar web applications are getting ready to deploy their first versions. One company decides to invest in automated security testing by using a web vulnerability scanner before the first version is deployed, while the second team wants to save the initial investment and performs only manual testing.

Automating Web Application Security Testing

The first team's automated security testing offers excellent coverage of the web application, performing thousands of tests in a few hours against hundreds of possible attack vectors, and the automated web security scanner never skips an input or neglects a field. While the faults discovered by the scanner are fixed (and, in turn, verified by the scanner), the testers invest their time in researching and testing logical vulnerabilities, where their intelligence and skill are truly needed. Because they do not have to manually perform the most time-consuming security tests, such as detecting common web application vulnerabilities, their schedule is flexible enough to allow additional tests, created as the testers become more familiar with the web application and its logic.

Manual Web Application Security Testing

The second team, meanwhile, wastes days on performing SQL injection tests and struggles to deal with unforeseen events. The tester who was supposed to go over all of the payment forms quits mid-project, the newest tester is slower than the schedule allows for and lacks experience in the security field, and two testers accidentally spend three days performing the same tests. Time runs out before all security testing is performed, and those tests that are performed suffer in quality because of tester fatigue: testers skip inputs, miss fields and cannot develop their testing skills, so that logical vulnerability testing is limited to its original design, rather than utilizing the testers' creativity to expand as they become more familiar with the web application.

The First Results

Both companies deploy their web based products and users start signing on for the services. The first team's product offers excellent security, although of course the users are oblivious to this. The second team's product's security is compromised, but being a young product it has not yet attracted the attention of hackers, and the flaws are not found.

Keeping Up with Changes in Web Applications

While preparing for the second version of the web based product, both teams lose a couple of testers. The first team's security automation for the old features is already complete, and a new tester adds automation for the new features; since the web application security scanner itself knows which tests to perform and how, the new tester's inexperience is not a factor. Freed from the task of performing injection, inclusion and other repetitive tests, the team has the time to learn and thoroughly test all features, new and old.

The second team is not so lucky: their API tester has quit, leaving only brief manual testing scripts and an assumption that the first version was perfectly tested. While trying to fill the knowledge gap, the second team considers and dismisses the idea of a full security check on their second version. Since they are short on time, they begin to economise. They focus the API security testing on new features, limit their browser testing to the latest version of each browser, and cut corners on input testing - testing only some of the fields, with some of the possible inputs. On top of that, their new testers aren't very knowledgeable in security, and because of the time pressure are given poor instructions and no testing scripts. They test only what they know, leaving whole features open to multiple attack vectors. They deploy a product that has accumulated two versions' worth of security flaws.

As the third version approaches, both teams want to work in short testing iterations that complete a cycle once a week, rather than perform one large cycle every month or two. The first team runs its automated tests every week, and begins to release mini-versions: one or two new features or updates every week. The second team finds that they require two weeks to perform each security cycle, before they can move on to testing features; rapid iterations and weekly releases are impossible, unless they forego security testing.

Worse yet, the second team is now set in its ways and is falling behind the field. Because they work manually, they cannot mimic a malicious hacker's automated work and their view of security risks is distorted by this difference in work methods. They continue testing the same attack vectors in the same manner, while hackers, using ever-improving automated tools, have found new types of security risks they can exploit, and new ways to exploit old risks. The first team's web application security scanner is updated by its vendor's research team; the second team's manual testing keeps looking at the same things in the same way, checking for the same result.

Web Application Vulnerabilities Have No Price

And so it goes, version after version: the first company's automated testing is always faithfully and quickly performed, and always checks for the latest types of web application vulnerabilities, so the company heads towards success.

The second company, meanwhile, accumulates testing gaps, with each of its versions introducing more security risks. Eventually, the second company's product is hacked and user details are published. The product loses clients, the business gets a bad reputation, and its future becomes uncertain.

Automate Web Application Security Testing

You can't, and shouldn't want to, take the human element out of testing; humans are intuitive, creative and natural analysers, and you want them testing where automated tools cannot. But the best way to utilize your testing team's skills is by automating the days' worth of technical, robotic testing on which they're currently being wasted and which no human can perfectly perform. A web application security scanner does what computers do best - running hundreds or thousands of operations in a matter of hours without missing anything - and lets humans do what they do best - solve logical problems.


What Can We Learn from Ebay Hack Attack?

If you work in the web application security or information security industry, you have surely heard about the latest eBay hack attack. This article looks into what might have happened and highlights a few points that will help you keep malicious hackers at bay.

How Did eBay Get Hacked?

It all started when eBay posted a message asking all of its users to change their passwords due to a data breach its corporate infrastructure had suffered. The breach involved a database which contained sensitive information. Quoting eBay:

"The database which was compromised between late February and early March, included eBay customers’ name, encrypted password, email address, physical address, phone number and date of birth. However, the database did not contain financial information or other confidential personal information."

Could the eBay Hack Attack Have Been Avoided?

Definitely! Such attacks can always be avoided, which is why we need to learn from them. To start off with, it could be debated whether two-factor authentication would have lessened the impact of this security breach. But what if the malicious hacker managed to gain access to the two-factor authentication tokens as well, especially if the database server and web server resided on the same machine?

The company also stated that "Cyberattackers compromised a small number of employee log-in credentials, allowing unauthorized access to eBay's corporate network". There are many possibilities as to how the employees' accounts might have been compromised; the attacker might have launched a social engineering attack against the employees, tricking them into executing a malicious file, or possibly exploited a cross-site scripting vulnerability to impersonate the employees.

It is also possible that the attacker compromised the database of a specific eBay application, which in turn revealed the employees' login credentials that could be used to gain access to eBay's infrastructure.

How to Avoid Having Your Websites Hacked

So far eBay has not disclosed technical information about how the malicious attackers managed to obtain employee credentials. Though, as history has taught us, most breaches start with a social engineering attack or by exploiting web application vulnerabilities. A single vulnerable application within the infrastructure could lead to a full compromise of the web servers, database servers and the whole infrastructure, as happened some years ago when the Apache Foundation servers were hacked. With that being said, here are some necessary steps you must take to secure your web servers, database servers and the whole web farm.

Keep Web Server Operating Systems and All Software Updated

This is the first layer of security that should be applied. The operating system on which the web servers, database servers and all other network services run should always be updated with the latest security patches. Along with it, the actual server software also needs to be updated.

This might be a common sense security best practice for many, but a lot of the security incidents and breaches that happen to this day still occur due to the use of unpatched software or operating systems.

Separation of Concerns (Server Roles)

Web servers typically reside in the DMZ (Demilitarized Zone), which is exposed to untrusted networks where anonymous users (a.k.a. website visitors) can interact with the web servers and send data to them, thus making them more prone to attacks. If your web server is compromised, and the database server is installed on the same machine, the chances of the attacker gaining access to the database are very high.

Hence having separate servers for different roles increases the security of your web farm and also limits the damage should one of the servers be hacked. Apart from improving security, there are also performance and scalability benefits to running a database server and a web server on different boxes.

Always follow the Principle Of Least Privilege

When installing a database server, web server or any other type of network service you should always follow the "Principle of Least Privilege". This means that your web server should only be granted the minimum privileges it needs to access the database. It should not have any administrative rights to make changes to the database structure, such as dropping tables; most of the time the web server only needs read and write access.
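As an illustration, here is a minimal sketch of granting only read and write access to a dedicated database account, assuming a PostgreSQL database accessed through the psycopg2 library; the connection details, role, database and table names are hypothetical.

import psycopg2

# Connect as an administrative user to set up a least-privilege account
# (connection details are placeholders).
conn = psycopg2.connect(host="db.internal", dbname="shop", user="dba", password="***")
conn.autocommit = True

with conn.cursor() as cur:
    # Create a dedicated role for the web application with no special rights.
    cur.execute("CREATE ROLE webapp LOGIN PASSWORD 'use-a-strong-secret'")
    # Grant only the data access the application actually needs:
    # read and write on its tables, but no DDL rights such as DROP TABLE.
    cur.execute("GRANT SELECT, INSERT, UPDATE, DELETE ON orders, customers TO webapp")

conn.close()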

Many make the mistake of assigning more privileges than needed because "it always works". It is better to spend a few extra hours testing the setup than to grant full access to a network service and get hacked. This also helps limit the damage in case one of the servers is hacked into.

Limit Remote Access

Ideally, all servers in the web farm, such as web servers and database servers, should only be accessible from the local network; admin interfaces should not be accessible from the internet.

In case remote access is required, use whitelisting to make sure that only a handful of IP addresses are able to access the administrative interfaces. Remote connections should be established using a cryptographically secured mechanism such as SSH or a virtual private network.
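As a conceptual sketch, not tied to any particular web server or framework, the whitelisting logic boils down to comparing the client address against an explicit allow list before serving an administrative page; the addresses below are examples only.

from ipaddress import ip_address, ip_network

# Hypothetical allow list: the office network and a single VPN endpoint.
ADMIN_ALLOW_LIST = [ip_network("203.0.113.0/24"), ip_network("198.51.100.10/32")]

def is_admin_access_allowed(client_ip: str) -> bool:
    """Return True only if the client IP falls inside the whitelist."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ADMIN_ALLOW_LIST)

print(is_admin_access_allowed("203.0.113.55"))  # True
print(is_admin_access_allowed("192.0.2.99"))    # False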

Another handy tip that is typically overlooked: educate all employees not to use public computers or insecure wireless access points to connect to the corporate network and infrastructure.

Uninstall Unnecessary Network Services

By default, a server operating system installation has a number of network services such as FTP, DNS and SMTP enabled. Shut down, and if possible disable or uninstall, any network services that won't be used on the server. The more network services you have running on the server, the more possible points of entry and exploitation a malicious attacker has.

Do Not Use Shared Hosting

While shared hosting might be the cheapest solution, if an attacker manages to gain access to another site on the same server, the attacker will have direct access to your web server or database server. Shared hosting is not for anyone who is serious about web security. For a more detailed explanation of all the disadvantages shared hosting has in terms of web application security read Shared Hosting and Web Application Security – The Opposites.

Use a Web Vulnerability Scanner

As mentioned before, most of the security breaches occur due to an exploitable web application vulnerability. With that being said, encourage everyone involved in the design, development and testing of web applications to use a web vulnerability scanner to automatically detect critical vulnerabilities such as SQL Injection and Cross Site Scripting vulnerabilities.

Ideally businesses should also have a staging server where new code and website changes are tested and scanned for vulnerabilities before they are uploaded to the live server.

Frequently Monitor and Audit Everything

Log files are not generated just to consume hard disk space. They are there to be analysed frequently so you can identify any suspicious behaviour and potentially block an attack before it actually happens.
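As a simple illustration of what such analysis can look like, the sketch below scans a web server access log for a few common attack signatures; the log path and the patterns are examples only and are no substitute for a proper log monitoring solution.

import re

# Example signatures of common web attacks (far from exhaustive).
SUSPICIOUS_PATTERNS = [
    r"union\s+select",   # SQL injection probing
    r"<script",          # cross-site scripting payloads
    r"\.\./\.\./",       # path traversal attempts
]

def flag_suspicious_requests(log_path: str) -> list[str]:
    """Return log lines that match any of the suspicious patterns."""
    flagged = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(re.search(p, line, flags=re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
                flagged.append(line.rstrip())
    return flagged

# Hypothetical usage; adjust the path to your web server's access log.
for entry in flag_suspicious_requests("/var/log/nginx/access.log"):
    print(entry)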

Ideally log files should be stored in a segregated area so malicious users cannot tamper with them in case there is an attack, hence allowing you to trace back all malicious activity to identify the problem and close down any security hole the attacker exploited.

Also, frequent auditing of systems and web applications is a must. What is secure today might not be secure tomorrow. New attack variants and exploits are discovered on a daily basis, and only by using the latest tools and scanners can you ensure that your whole infrastructure is secure.

Keep Yourself Informed

Last but not least, keep yourself informed! Tools are handy and do help you secure a web server farm and web applications, but staying informed keeps you a step ahead of malicious attackers. The internet is full of web security articles, and you should subscribe to security blogs and newsletters to keep yourself informed.

If a new type of attack is trending on the internet and you are well informed about it, you can check upfront whether your web applications are vulnerable to that type of attack. Also, web applications and their security are evolving every day; hence, by keeping yourself informed you are always well equipped with the knowledge needed to secure web applications and web farms.

Netsparker Scan Policies Feature Highlight Video

Many web application security experts have to scan tens, and sometimes even hundreds, of different web applications every year. And since every web application has its own setup and configuration, such as URL rewrite rules and custom 404 pages, users have to reconfigure the web vulnerability scanner each time they scan a different web application.

Scan policies in Netsparker allow you to save a specific Netsparker configuration setup so you do not have to configure the scanner each time you scan a different web application, thus saving a lot of time and improving your productivity.

Scan policies also allow you to specify which types of web application vulnerability checks you want to launch during a web application security scan. For example, you can scan a web application just for SQL injection and cross-site scripting vulnerabilities.

To find out more about Scan policies in Netsparker and learn how to configure new ones or modify existing ones watch the video below.

Passwords vs. Pass Phrases - An Ideological Divide

This whitepaper is part of a three-part series covering a wide breadth of topics on passwords, next-generation security and plenty more. In this installment, we look at passwords: what, where, and why. Then we look at the future evolution of the process, and what is to come next in this series.

Preface

When we think of passwords, what usually comes to mind? Oftentimes, we default to those incredibly complex and difficult-to-remember combinations: A6C.Goo4sp-s, t00d1fficult2remember!, and so forth. The requirements seem to get tougher and stranger with each passing year, and they actually do. Barely a week goes by before yet another breaking news article comes out about yet another large web service being attacked and its password database being compromised.

As offline password cracking methods become simpler and quicker -- a topic we will visit in a later paper in this series -- so too do password ideologies grow more complex. And, as with most things in the computer security industry, this is an uphill battle that security engineers must fight with ever more challenging methods to combat would-be intruders. But, as a general axiom of computer security, more safety means less convenience, and sometimes even less productivity. Indeed, to fight this battle, engineers must impose unruly requirements on their users. But it need not be this way.

Increasingly, security engineers are employing newer and more challenging methods against attackers, yet they try to keep these requirements relatively easy or intuitive for their users. The methods vary widely, and often are unique to specific industries (bank authentication versus Google, for example). Even the networks themselves that handle authentication have been beefed up in ways never before utilized nor necessary in years past, mainly due to recent increasing threats to security and concerns over privacy. However, through all of this, one thing has become a clear certainty: this fight will never stop. So how do security engineers stay one step ahead of hackers?

Security By Obscurity - Does More Complex Mean More Secure?

As aforementioned, passwords have been the dominant authentication mechanism. In fact, the concept of a password has been around since the Roman military of 200 B.C., so it is no wonder that passwords are still the most common method of logging in to a website, or anything else for that matter. It should thus come as common sense that, with the reliance on password authentication as a means of security, the weaker or easier to guess a password is, the more likely it is to be compromised.

So, throughout the ages, passwords became more and more complex, usually by length or obfuscation (code words, alphanumeric substitution, and so forth).  This holds most true today, especially in web authentication. In fact, when we think of website password requirements today, it is typically something similar to the following:

  • Eight to sixteen characters
  • Minimum of three of these categories:
    • Uppercase letter
    • Lowercase letter
    • Number
    • Special character
  • Must not consist of your prior N passwords

Seems common enough, right?  But there are some noticeable problems that have been well-known and discussed in security circles for decades now, and yet they still perpetuate themselves to this day.  While the whole requirement group itself is a mess, there are two particularly important elements that cause the whole thing to be a miserable failure: Complexity and length requirements.
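For concreteness, below is a minimal sketch of what enforcing a policy like the one listed above looks like in code; the exact rules (8 to 16 characters, three of four character classes) follow the example requirements, and the password history check is omitted.

import re

def meets_typical_policy(password: str) -> bool:
    """Check a password against the example policy described above."""
    if not 8 <= len(password) <= 16:
        return False
    # Count how many of the four character classes are present.
    classes = [
        bool(re.search(r"[A-Z]", password)),         # uppercase letter
        bool(re.search(r"[a-z]", password)),         # lowercase letter
        bool(re.search(r"[0-9]", password)),         # number
        bool(re.search(r"[^A-Za-z0-9]", password)),  # special character
    ]
    return sum(classes) >= 3

print(meets_typical_policy("P4ssw0rd!"))   # True: length ok, four classes present
print(meets_typical_policy("password1"))   # False: only two classes present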

Password Complexity - Make it Difficult to Guess, And Remember

Specifying a complex password can be very hard

Example password requirements from Yahoo!, requiring 8 to 32 characters, including letters and numbers.

As depicted with the examples at the start of this topic, passwords typically consist first of some form of simple word complexity, often via alphanumeric obfuscation -- a method of substitution rendering the password obscure or unintelligible. This is common with passwords that substitute numbers or special characters for letters or even entire words. This yields passwords like P4ssw0rd, 2fast2furious, m0v!ngaw4y, and other similar variations.

The caveat to this type of complexity is that making a password obscure or unintelligible, with the aim of making it harder on attackers, comes with the unintended consequence that the user is often incapable of memorizing the jumbled mess of alphanumeric and special characters. To add even more problems to the mix, password length restrictions are employed, which cause further difficulties in memorization. We will explore these problems a little later in this article.

The true reason for password complexity lies in the first and biggest misconception of password-based authentication security: longer password equals more secure. This is based purely on a mathematical theory that, at its time, held some weight. However, in modern password cracking realms, its benefit is negligible at best and useless at worst. The theory purports that if you have an N-length password, each additional character set adds more complexity, and mathematically this is indeed true. An 8-character password consisting of only lowercase letters and numbers has 36^8 possible combinations (26 letters plus 10 numbers iterated over 8 characters). Add uppercase and this increases to 62^8. This posits, then, that more character type requirements yield mathematically more secure passwords. Realistically, however, this is not true.

Consider that modern password hash brute force software can guess at nearly five billion passwords per second. If you assume a passphrase akin to the xkcd joke of 44 bits (in their example, "correct horse battery staple", a 28-character password), a SHA1-encoded password hash can be cracked in roughly an hour, at worst. (We dig more into passphrases later in this article, as well.) If on a six-character password you require upper- and lower-case letters, the maximum possible combinations would be 52^6. Add the requirement for numbers as well (increasing the combinations to 62^6), and at that cracking rate the maximum theoretical difference amounts to barely a few seconds.
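To make the arithmetic concrete, the short calculation below reproduces these keyspace figures and the corresponding worst-case brute-force times, using the article's assumption of roughly five billion guesses per second.

GUESSES_PER_SECOND = 5_000_000_000  # cracking rate assumed in the article

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Time to exhaust every password of the given alphabet and length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 44 bits of entropy (the xkcd-style passphrase example).
print(f"44-bit passphrase: {2 ** 44 / GUESSES_PER_SECOND / 3600:.1f} hours")

# 8 characters, lowercase + digits (36 symbols) vs. adding uppercase (62 symbols).
print(f"36^8 keyspace: {worst_case_seconds(36, 8) / 3600:.2f} hours")
print(f"62^8 keyspace: {worst_case_seconds(62, 8) / 3600:.2f} hours")

# 6 characters, upper/lowercase (52 symbols) vs. adding digits (62 symbols).
print(f"52^6 keyspace: {worst_case_seconds(52, 6):.1f} seconds")
print(f"62^6 keyspace: {worst_case_seconds(62, 6):.1f} seconds")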

Mathematically, however, it is true that as you increase the exponent value (the character count), the maximum also increases exponentially. This is about the part where most security engineers stop and call it a day, narrowly focusing on part two of the password security-through-obscurity genre: increase the password length, increase the security. But is that sufficient?

Password Length - Size Does Matter

In early 2000, the University of Cambridge Computer Laboratory performed a password study involving around 400 students. The study came to many conclusions, most notably that random passwords are more difficult to remember than mnemonic passwords, and that passwords exceeding 6 characters become increasingly difficult to memorize. In fact, some subjects were never capable of memorizing their passwords. Though it is not directly discussed in that paper in particular, it does certainly hint at a considerable complexity problem with random or 'non-human' passwords.

Importantly, the Cambridge study highlighted the observation that participants had difficulty memorizing passwords beyond six characters.  This is pretty much a well-known fact to anyone who has ever had to make a password for anything.  We, as humans, are an associative-memory bunch, and in memorizing random data with no real order or correlation we find ourselves incapable of storing this information to memory in any real recallable fashion.  Think of it like a database with no index, just random data floating about.  This is why mnemonic devices work so well for many of us when studying for exams, as we can associate something psychologically tangible to the data.  This is also why we very commonly use remarkably easy-to-guess passwords -- password1, anybody?

To compensate for the fact that our "blink182"-simple passwords exist in readily available password cracking dictionary lists and are, in general, often very easy to guess, security engineers focus on increasing the cost of the password.  The term 'cost,' in this sense, refers to the direct difficulty or iteration count of an encryption or hashing algorithm.  In the SHA-256-based crypt scheme, for example, the default of 5,000 iterations over the hashing formula constitutes the cost; the more iterations, the more 'expensive' an encryption or hashing scheme.  Similarly, in a raw password sense, it is assumed that the more difficult a password is to guess (the 'cost' in this context, via character length), the more 'expensive' and thus more difficult it is to break.  Indeed, a three-character password is highly insecure and can be brute-force guessed by a mediocre computer in a matter of seconds (if even that long), so naturally the logic flows that a longer password is more secure.

That is all well and good, but this also places entirely too much focus on one end of the spectrum: minimum length.  The almost never-discussed elephant in the room still exists, that of maximum length, and why in the mathematically holy name of Pythagoras does a maximum limit even exist?

Maximum Password Length - How Does That Even Make Sense?

It is true, one must admit, that at one point in ancient times -- Okay, so the 70's and 80's are not ancient, so to speak, but in Internet time the 70's is like the Roman era.  Right?  Anyway... -- systems were incapable of transmitting more than a certain length of characters, for some pragmatic reason or another.

For example, the industry-standard communication protocol for EEP4 PIN pads in common Automated Teller Machines at one point required exactly a four-digit Personal Identification Number (PIN).  This was required because, due to the technology limitations of the security systems in place, the encoding of the PIN required an exact length of four digits.  As technology progressed, so too did the maximum length requirements -- PINs can be six or so digits now; admittedly not a lot of progress, but it is progress nonetheless.

That explains systems where the protocols required exact, specific lengths, but what about arbitrary encryption, transmission, and storage methods?  Take, for example, MD5 and SHA1, two powerhouse heavyweights in password hashing that are also used to generate checksum hashes of whole file or archive downloads, often millions of bytes in length (whereas a password is typically 6-12 bytes on average).  If an MD5 or SHA1 hash has no real input length limit (ignoring the possibility of hash collisions, whereupon multiple strings result in the same password hash), then why would any software developer or security engineer seriously consider imposing a password length limit?

For quite some time now, it has been a long-standing joke as to why maximum password length exists, especially to this day.  One of the theories is that originally in the 1970's, DES-based crypt truncated a password string after the 8th byte, thus anything beyond an eight-character password was pointless and a maximum-length policy was born.  However, simple math and observation would yield that the 70's were 40 years ago, and technology has evolved just a little bit since then, namely in the security field.  As is part of the joke, no one really knows why anyone still enforces maximum length beyond the disappointing but default answer, "It just has always been this way."

The only remotely plausible reason to continue keeping such a requirement anymore may be something to the effect of, "Why must users generate 40-character passwords consisting of obfuscated letters and numbers?"  But this presumes a common assumption of password styling that we have yet to sincerely question: Is a complex mess of obfuscation and easily forgettable gibberish the only type of password ideology that is reasonable or acceptable?

Pass Phrases, A Better and More Modern Approach

Let us examine a hypothetical, but common situation: You are sitting at your computer, registering your new account on the latest social media trend, Trendr.cu ... or something, whatever.  Anyway… You set your username, your email, that picture of you at the Halloween party last year that you still cannot remember too well, and everything else looks great.

But now you are stuck on the dreaded and much-hated password requirements part of the registration.  Minimum 8 characters, Must have special characters, on and on.  You struggle to search your mind for a secure password, while not using that typical password you use practically everywhere else.  After a few attempts that do not meet the minimum requirements, you finally settle on something, enter it twice, and complete the registration.  Now it is just three days later and you cannot remember if that was a letter O or a zero.  Did you put the exclamation point in this password, or was that only when you were trying to meet the minimum requirements?  And now your account is locked and needs reset.  Wonderful…

Although the reality of this occurrence is not really measured at large, it is probably quite reasonable to assume this happens to perhaps thousands of new users each week, and that is probably just on Facebook alone.  The reality of this statistic applied to all websites would likely be so staggeringly large it may warrant a Congressional hearing.  But this need not be so.  We have grown so used to the archaic password requirements of ages past that we quite often do not give even a moment's consideration to the usage of passphrases, such as partial or complete sentences.

We default to the difficult-to-remember combinations of alphanumeric sequences that produce less human-legible content than a palm slapped upon a keyboard randomly.  Perhaps one reason for this is maximum length restrictions.  But even so, a passphrase can easily fit within an unreasonably short maximum password length requirement.  For example, the sentence The door is open. is 17 characters, including the punctuation at the end.  A large number of password requirements arbitrarily cut off at 20 characters, so even a short sentence such as that may fit.  No, the reason no one really uses passphrases is because, simply put, no one really thinks to do so.

Really, when you think about it, how many websites prompt you for a passphrase in lieu of a password?  Probably none that you ever use.  Sure, some may offer unique and interesting new methods, such as two-way SSL, key fob tokens, or other cool goodies – We will focus more on these in a later Passwords installment – but none ever really correct the issue of passwords versus passphrases, nor even suggest to a user that they apply a different approach.

For an end user, this can often make them default to focusing exclusively on difficult passwords that are complex and near impossible to memorize, as previously discussed.  Psychology calls this many things: subconscious persuasion, herd mentality, or any number of other conditioned response names. We do this because our response to a password requirement is triggered from previous experience, and we know no different.

In fact, this really boils down to us being almost subconsciously trained, for some absurd reason or another, to never use one very important single character: a space.  It is simply because of this forbidden keyboard character – as well as being subliminally coaxed by the word ‘word’ in password – that we just never even give a sentence as a password a moment’s consideration.

Surprisingly, a considerable number of password-based authentication systems actually forbid spaces.  Much like the laughed-at maximum password length phenomenon, no one seemingly has a good explanation as to why spaces are still banned, either.  An exceedingly small portion of old authentication systems may have a legitimate reason for this – an archaic authentication string tokenizer that uses spaces as separators, perhaps – but this is 2014, and it is long past time for those to be changed.

Websites can easily parse any other user input that contains spaces, so there is practically no reason why any system that exists today cannot parse a password field with spaces. However, we can even ignore the whole concept of using spaces entirely and still achieve the desired result, simply by implementing progress: make the concept of passwords extinct, and teach about and encourage the use of passphrases only.  But how?

Implementing Progress; Or, How Do We Fix This

Content providers, website hosts, or whatever title befits them, they all have the ability to be the change necessary to properly educate users on a modern approach to password-based authentication, as well as influence other organizations to follow a similar suit.  And it really is quite simple to fix this long-standing problem, too.  It only takes three simple steps:

Prompt users for passphrases and not passwords

As we mentioned in the previous section, this problem is largely due to psychological conditioning of website users at large.  This is something that can be very easily cured, but it requires content providers to be willing and participatory to push for this change.  If a content provider stops prompting for passwords and starts prompting for passphrases, it will open a bit of dialogue with that user so they understand the key differences and change their approach to password-based authentication.  In fact, simply by prompting differently, it may help many users achieve an “A ha!” moment of epiphany, ushering in a different mode of thinking even when using other websites that are not yet participatory in passphrase prompting.  The fix can be something almost as easy as performing a sed or other similar find-and-replace command on your website content to replace all instances of “password” with “passphrase”.  Obviously, a little more descriptive work would be required, but if nothing else, appropriate prompting is likely the most important change a content provider should make.
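The find-and-replace idea can be scripted in whatever tooling a site already uses; the sketch below does it with a small Node.js script instead of sed, and the templates/ directory and .html extension are purely illustrative assumptions about where the markup lives.

// Minimal find-and-replace sketch: swap user-facing "password" wording for
// "passphrase" across template files. Paths and extensions are illustrative.
const fs = require('fs');
const path = require('path');

function replaceInDir(dir) {
  for (const entry of fs.readdirSync(dir)) {
    const fullPath = path.join(dir, entry);
    if (fs.statSync(fullPath).isDirectory()) {
      replaceInDir(fullPath);
    } else if (fullPath.endsWith('.html')) {
      const original = fs.readFileSync(fullPath, 'utf8');
      const updated = original
        .replace(/\bPassword\b/g, 'Passphrase')
        .replace(/\bpassword\b/g, 'passphrase');
      if (updated !== original) fs.writeFileSync(fullPath, updated);
    }
  }
}

replaceInDir('./templates'); // assumed location of the site's markup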

Provide detailed but brief literature on the differences and why a passphrase is better

Of course, a content provider really should not just stop at changing their prompting or phrasing without explaining what a passphrase is, or why it is important to use a passphrase in lieu of a typical password.  This literature could simply add to or replace any password popups a registration or authentication user interface already presents.  However, it is important that this passphrase information does not focus on the side of brevity, but rather that it gives a detailed though simple explanation of the concept and how to produce a good passphrase.  This is important because it is necessary to reeducate both your end-users and the public at large on good passphrase opportunities and safety precautions.  Remember, competitor and non-competitor content providers alike will notice your changes, too, and this will hopefully influence them to implement a similar approach.

Eliminate all maximum length restrictions on passphrases

Notice we did not mention “… or other absurd requirements” in that title.  When you consider the content of a passphrase, requirements that often seem absurd for passwords are actually not really that absurd at all for a passphrase.  Putting upper- and lower-case letters, numbers, and special characters in a passphrase becomes incredibly simple when you change the focus from a single word to a phrase or sentence.  The 12 angry men., though probably an easily guessable passphrase, satisfies an absurd password requirement with no difficulties, all because we thought of those requirements in the form of a sentence instead of a single word.  Certainly, however, we do not want to limit ourselves to such a short and guessable passphrase as The 12 angry men., so of course one absurd requirement must still go: maximum length restrictions.  This may require some additional software changes, such as eliminating any maximum string length conditional checks, removing any potential truncation (hopefully no website or other code still does this, and if so, for shame!), ensuring spaces are allowed, and utilizing strong and secure one-way password hashing mechanisms that will support this (this is also a topic we will discuss more in a later Passwords installment).  All in all, this fix really should not require much of any effort for any content provider.  And if it does, perhaps it may be time to consider redesigning the authentication system.
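As a rough illustration of what "no maximum length, no forbidden characters" can look like in code, here is a hedged Node.js sketch; the function names, the 16-character minimum, and the choice of scrypt are assumptions made for the example, not a prescription.

// Sketch of passphrase handling with no maximum length and no character bans.
// Names, limits and the scrypt parameters below are illustrative assumptions.
const crypto = require('crypto');

function validatePassphrase(passphrase) {
  // Enforce a sensible minimum only; spaces and punctuation are welcome,
  // and there is deliberately no maximum length check and no truncation.
  if (typeof passphrase !== 'string' || passphrase.length < 16) {
    throw new Error('Passphrase must be at least 16 characters; a short sentence works well.');
  }
}

function hashPassphrase(passphrase) {
  validatePassphrase(passphrase);
  const salt = crypto.randomBytes(16);
  // scrypt is a deliberately expensive, memory-hard one-way function; a
  // dedicated password hash such as bcrypt or argon2 is an equally valid choice.
  const derived = crypto.scryptSync(passphrase, salt, 64);
  return salt.toString('hex') + ':' + derived.toString('hex');
}

console.log(hashPassphrase('The 12 angry men went home early.'));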

Wrap-up: So What Have We Learned?

Earlier, we looked at the evolution of a password: what the concept of a password is; where a password comes from through the progression of authentication, both analog and digital; and why a password is now an archaic notion.  Every day, we punish ourselves by trying to memorize nonsensical jumbles of letters; some of us succeed at it, usually with insecure passwords, and many of us forget.

Even 14 years ago in 2000, the University of Cambridge found memorization to be measurably difficult, and yet we continue on with passwords, expecting it to one day work out for us.  Indeed, as the old adage goes, “The definition of insanity is doing something over and over, and expecting new results,” so are we insane for thinking a mess of keyboard presses will one day be secure and memorable?  Well, no, of course not!  We are, quite simply, merely unenlightened to the alternative, glorious path of passphrases, a concept that has yet to be mainstream.  But you – content provider and user alike – you can change that path, simply by progressing your promoted ideology from passwords, to passphrases.

Why QA Pros Should Be More Involved in Web Security

Now’s a good time to be in the software QA business. Studies are showing – and many business people are realizing – that the quality of their software has a direct bearing on the overall security of their IT environment. At the heart of this are QA professionals. In many situations, they have the final say about software bugs that ultimately create security risks. There’s a lot of pressure on these team members, but there can also be a lot of reward in knowing that they play an integral role in web application security.

Many businesses approach web security testing in the wrong way. Simply put, they don’t involve their QA staff in performing in-depth web security testing. Be it running web vulnerability scans or performing manual analysis, having QA pros work as part of the team to help find security flaws can really work to your advantage. QA specialists typically possess the exact traits needed to find web security vulnerabilities and security issues, including:

Quality Assurance Professionals Have the Training

Many QA professionals have degrees in computer science. Many are former developers. Because of this experience, QA professionals understand the core essentials of software development. That’s the first (big) step towards understanding, finding, and fixing application security flaws.

Quality Assurance Professionals are Bug Hunters

The essence of a good security tester, hacker, hunter, or whatever term you want to use is having the wherewithal to know what to seek out. They understand the value in “breaking” stuff and they know how to break it. They also know that using good tools is critical to their success.

Some QA professionals perform all of their tests the old-fashioned way: manually. However, the wise person performing software QA knows his time is limited. An automated web vulnerability scanner and its accompanying tools can reap tremendous rewards in the hands of a QA professional because he knows exactly what he’s looking at and the behaviors that should be expected from the application.

Quality Assurance Professionals Have the Dedication and Patience Needed to Excel

QA professionals are in the business of finding bugs and security holes in any type of software and web applications because that’s what they love doing. QA staff have to know a lot about a lot but that’s okay because software testing is often the only thing that they do.  Even if software testing is super repetitive work, they’re good at it because of this repetition and because that’s how their mind works best.

All in all, there are plenty of reasons that every organization should have QA staff performing web security testing. The more eyes you have on web security the better. Security is all about quality and QA pros can be a great fit for the job.

Passwords vs. Pass Phrases – Weaknesses Beyond the Password

This whitepaper is part of a three-part installment covering a wide breadth of topics on passwords, security, next-generation, and plenty more.  In this installment, we look at the modern two-fold weaknesses of passwords, and easy methods to fix these problems.

Preface

In the previous article Passwords vs. Pass Phrases - An Ideological Divide we looked at several factors that weaken password-based authentication security, namely on the side of the end-user.  The concept of a password in and of itself is inherently flawed, and many of the surrounding security or enforcement strategies are equally flawed and antiquated.  We – being both the content providers and end users alike – operate on a password ideology that is decades old and utilizes some principles no one in the security industry can reasonably justify anymore. (Maximum password length, anyone?)  But given the progressive nature of the Internet and the unforgiving speed at which everything changes therein, this is something we can easily change.

Password-based authentication need not be such an archaic pillar of security any longer.  Indeed, as we previously went over, content providers must inspire an ideology of passphrases, and end users must deeply understand and implement this concept.  However, a good security engineer worth his weight in firewall appliances knows that a proper and functional security posture of a well-built and maintained system requires multiple layers of security.  A modernized approach to password ideology is only one of several necessary steps for a highly-secured system.  Next, content providers must ensure that the underlying technology can survive a data breach when – not if – it happens.

Authentication – Why it is Important to Protect Every Bit of Data

Unless you are a developer or a systems administrator, not a whole lot of thought goes into a common password-based authentication mechanism.  You enter your username or email, your password often masked by black dots, a button click starts some magical voodoo that happens behind the curtain, and voila!  You are now logged in and your session is validated for some predetermined length of time.  But as you, our well-learned reader, certainly will know, far more goes into an authentication system.

When an authentication occurs, the user’s supplied data is submitted as-is, most often in plaintext form.  This presents an extremely important, yet often overlooked challenge to developers and administrators: securing the user’s data before it even makes it to you.  More often than not, you will find many large-scale organizations that use plaintext, unprotected authentication systems.  A simple glance at the address bar shows many do not even use SSL – or TLS, as it should properly be called – on their authentication systems.  In fact, some smaller self-hosted online stores may know such security is both wise and required for PCI compliance – a topic we have previously discussed at length – but they will for some inexplicable reason completely gloss over end user authentication.  Logically, they should ask themselves: Why would a hacker desire only credit card details, but never any login information?

The First Layer: A Secured Line

Consider that you run a website catering to students attending a higher-education institution, such as a university.  As has become commonplace among many businesses and organizations, college campuses often provide free WiFi internet access, usually restricted to their students and staff only.  Oftentimes these WiFi connections come in two varieties: encrypted and unencrypted.  The encrypted option usually requires some level of setup on your local computer before it works properly, which is why many universities offer the open, unencrypted option for those who either cannot make it work properly or need to obtain the instructions on how to do so.  Unfortunately, once students have connected to the unencrypted internet access, nearly all of them are going to mindlessly forego the encrypted path and just keep surfing.

Now say some black hat hacker in the student lobby of whichever university is your largest visitor has a wireless network auditing tool such as a WiFi Pineapple.  This device allows him to mimic the WiFi signal the university provides, inspect the traffic, and pass it along, potentially compromised.  He quietly sits and sniffs all the unencrypted traffic in that lobby and passes it along, those students being blissfully unaware that all of their unencrypted traffic is plaintext treasure for this hacker.  He can accomplish this using Firesheep, a Firefox plugin that allows capturing other users' WiFi network traffic right in the browser (its release prompted Facebook, Gmail, and many others to employ SSL-only connections).  If your hypothetical website for students is running an authentication mechanism unencrypted – no end-to-end TLS certificate handshake of any sort – you have now exposed your end users at that campus to sniffing attacks made as simple as a dongle and a browser addon.  It may not seem that important, but what if you had the next Facebook (which started on college campuses) and were taken down by poor authentication measures?  That billion dollar dream is now gone.

It is critically important that the security of password-based authentication starts the moment an end user arrives at your website.  Before they even begin to provide any data to your servers, the channel between you and them must be secure.  This is almost always done using an end-to-end TLS certificate (commonly called “SSL”, though that is actually a misnomer).  Trusted TLS certificates can be easily obtained for free and are accepted by almost all browsers and other systems that honor such security certificates.  However, while end-to-end traffic security is critical and crucial, there are still more layers to a reasonably secured infrastructure.
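As one illustration of enforcing that secured channel, the sketch below uses the Express framework (an assumption, not something the article prescribes) to refuse to serve a login route over plain HTTP; behind a load balancer or proxy you would also need to mark that proxy as trusted for req.secure to be meaningful.

// Illustrative Express middleware that redirects any plain-HTTP request to
// HTTPS before the authentication flow ever sees user data.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // req.secure is true when the connection used TLS.
  if (req.secure) return next();
  return res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
});

app.post('/login', (req, res) => {
  // ...authentication logic runs only over the encrypted channel...
  res.send('ok');
});

app.listen(8080);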

The Second Layer: Secured Storage

In order to ensure a valid authentication occurs, the system the end-user is authenticating into must compare the challenge password with a previously established comparison.  This is often stored in some secured format.  (Of course, some organizations choose to store all passwords in plaintext form– not a good idea, obviously – but we will get more into that later.)  Typically this storage security is completed using a mathematical algorithm called a cryptographic hash function, a formula that takes in an arbitrary length password and returns a fixed-size string in the form of a password hash.   For example, if we take a lesson from the previous article in this series and generate a passphrase – This is a password. – then we are left with a password hash of 07997f833c2d709d2e5fcd7666858d8c.
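A minimal sketch of that fixed-size digest idea, using the built-in Node.js crypto module; whether it reproduces the exact hash quoted above depends on the exact byte string used in the original example, trailing period included.

// Hashing an arbitrary-length passphrase down to a fixed-size digest.
const crypto = require('crypto');

const passphrase = 'This is a password.';
const md5Hash = crypto.createHash('md5').update(passphrase).digest('hex');
console.log(md5Hash); // a 32-character hex digest, as in the example above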

Commonly, web and web-like password-based authentication mechanisms utilize simple hashing functions, such as MD5 (used in the previous example) or SHA1, even after both have been proven considerably weak for many years. This is likely due to the simplicity of hash creation and comparison with both functions, requiring only hashing the plaintext password supplied by the user and performing a direct string comparison to the stored hash in the user table.  In fact, this manner of hash comparison can be, and often mistakenly is, completed within the database query that fetches the stored hash itself.  It seems secure enough, and it is incredibly easy to implement in code, so why bother with anything more complex?  Indeed, that is apparently the common and acceptable approach to password storage security, but unfortunately it is a dangerously lazy one, too.

What is Wrong With Simple Hashing?

First, when generating a password hash, you absolutely want each hash to be unique.  No two differing passwords should ever generate the same password hash.  However, both MD5 and SHA1 have been found to have an uncomfortable likelihood of two passwords generating the same password hash – known as a hash collision.  MD5 can have no more than 3.4 x 10^38 possible unique password strings before a collision must occur.  SHA1 even has a probability formula to determine collision likelihood.  The fact that these are known severe mathematical flaws with both cryptographic hash functions should be reason enough to abandon their use for password hashing.  However, the extremely minimal modern cost of both functions is truly the most damning element.

In the security industry, a password hash’s strength is determined by the cost of the cryptographic hash function itself.  The term 'cost,' in this sense, refers to the direct difficulty or iteration count of an encryption or hashing algorithm.  In terms of difficulty, this can be thought of as an exponential curve relative to time for each additional character in a password (essentially as steep as f(x) = 2^x – see Figure 1 below).  Cost is also used in terms of how much additional mathematical work is applied to a cryptographic hash function.  In the SHA-256-based crypt scheme, for example, the default of 5,000 iterations over the hashing formula implies a cost of repeating and applying the SHA-256 mathematical formula 5,000 consecutive times – thus the more iterations, the more 'expensive' (and often, more secure) an encryption or hashing scheme is.  However, neither MD5 nor SHA1 in typical web system deployments offers the ability to iterate over the formula for additional security, so their cost lies solely in the amount and type of characters the end user types.  This is the first among simple hashing functions’ many flaws.

Time needed to crack password hashes

Figure 1 - Example of the exponential-like curve of a password's cost in terms of character length versus time
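To see what iteration count does to cost in practice, the sketch below times PBKDF2, chosen here only because it ships with Node.js and exposes an iteration parameter; the 5,000 figure mirrors the SHA-256 crypt default mentioned above, and the timings are purely illustrative.

// Illustration of 'cost' as iteration count: the same hashing work applied
// thousands of times. Timings will vary by machine.
const crypto = require('crypto');

const passphrase = 'This is a password.';
const salt = crypto.randomBytes(16);

for (const iterations of [1, 5000, 500000]) {
  const start = process.hrtime.bigint();
  crypto.pbkdf2Sync(passphrase, salt, iterations, 32, 'sha256');
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(iterations + ' iterations: ' + elapsedMs.toFixed(2) + ' ms');
}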

For quite some time, it was considered that MD5 and SHA1 were reasonably secure, since the technology for brute-force cracking the hashes themselves was markedly limited.  The cost of an MD5 or SHA1 hash was substantial enough to hinder the Intel and AMD CPUs of the day from directly brute-force guessing at a hash.  In 2007, however, nVidia released a C programming library for their Cuda and Tesla series graphics processors.  This led to all sorts of new projects being designed for GPU usage, linear algebra being a considerably large one.  It was not until 2010 that the most frightening aspect of GPU technology became serious headline news, when researchers at Georgia Tech published findings that GPU technology was extremely successful at password hash cracking.  And not just extremely successful, but so much so that all previously conceived notions of cost have been rendered wholly obsolete.

Various usages of Hashcat – a password hash cracking utility – have shown CPU to GPU comparisons with Radeon GPUs pushing upwards of 90 times faster than top-of-the-line Intel or AMD CPUs in comparative MD5 brute force tests.  When you put this into terms of the exponential amount of time it takes to brute force a password hash for each additional character, the results are staggering.  Where an MD5 password hash may take 450 years with a higher powered multicore CPU, a modern GPU may be able to do it in 5 years at worst.  A 20 year wait on a CPU is less than 2 months on a GPU.  Today, however, GPUs fare far better than this.  Much of the GPU research data available is around three years old, which is centuries in terms of Moore’s Law. This has proven to be a potential nightmare for content providers utilizing standard password storage methods.

Consider that the IGHASHGPU password hash brute forcing software claims the ability to attempt 3.7 billion MD5 hashes or 1.4 billion SHA1 hashes per second.  If you assume a passphrase akin to the xkcd joke of 44 bits (in their example, "correct horse battery staple", a 28-character password), a SHA1-encoded password hash by this measure could conceivably be cracked in a matter of hours, at worst.  Simply put, using simple password hashing is a welcome invitation for mass password compromise.  As presumptuous as that statement is, it is quite true.
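Plugging those published rates into the same worst-case arithmetic used earlier makes the point plain; the password shapes below are illustrative choices, and a real attack would finish even sooner by using dictionaries and masks rather than exhausting the keyspace.

// Worst-case exhaustion times at the GPU rates quoted above.
const MD5_PER_SECOND = 3.7e9;
const SHA1_PER_SECOND = 1.4e9;

const keyspace = (charsetSize, length) => Math.pow(charsetSize, length);

// An 8-character lowercase+digit password (36^8) falls in minutes on one GPU.
console.log((keyspace(36, 8) / MD5_PER_SECOND / 60).toFixed(1) + ' minutes'); // ~12.7

// Even 9 characters of mixed case and digits (62^9) is months, not centuries.
console.log((keyspace(62, 9) / SHA1_PER_SECOND / 86400).toFixed(0) + ' days'); // ~112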

Password Data Mass Compromises – Even the Mighty Can Fall

Over the past five years, several dozen major organizations, corporations, and even government entities have fallen victim to attackers infiltrating their servers and extracting massive password hash dumps.  This has become such a common and recurring event that public projects have begun to appear to document these occurrences and provide a database of the dumps.  Many of these victims even had advance warning of the impending attacks, and employed highly skilled teams of security engineers, yet they still could not stop their attackers from obtaining password data.  Truly, if a hacker (or group thereof) has a strong enough desire to gain entry into your systems, they most likely will eventually find a way in.

In December 2010, Gawker Media—one of the most popular social media blog networks, consisting of a conglomeration of eight different websites—found itself the unfortunate victim of a massive password database compromise.  LinkedIn—a very large social media networking website tailored specifically to professional relationships—found itself in the same unfortunate circumstance in June of 2012.  The social media gaming giant RockYou found its thirty million users’ passwords compromised in December 2009 (this one was unique due to the fact that RockYou stored all of its passwords plaintext, not cryptographically secured).  Just from January through April 2014 alone, over ten million cryptographic password hashes—possibly more—were released to the public from hacks against enormous media behemoths like Comcast, Yahoo!, and AOL.  Now, as of June 2014, even eBay has found itself victim of a mass compromise, reporting a whopping potential 145 million compromised password hashes.

Indeed, if a hacker is persistent and skilled enough, they may invariably gain access to their target at some point or another.  Even with hundreds of thousands of dollars of equipment, personnel, monitoring, and everything else watching the front gate—which, unarguably, are efficient and necessary tactics big players like Comcast and eBay utilize—eventually someone may be able to break through that hole in the fence that no one is looking at—no one except the attacker, that is.  So much focus is put on the common entry point of a website that no one considers to continue layering the security on deeper.  If not the actual authentication system itself, then how else are hackers able to gain entry, and why are they able to obtain such large treasure troves?

A Firewall Behind the Firewall: Protect the Data at the Database Itself

When your cryptographic password hash data is stored, it is just as critically important to isolate the hashes as it is to have secure hashes.  The purpose of hashing user passwords is indeed to prevent an attacker from learning your users’ passwords and potentially compromising other accounts they hold elsewhere.  But as we have seen with the widespread use and failures of MD5 and SHA1, even securing your users’ passwords is not enough.  Looking past the strength of the password hash used, why should an attacker even have access to the password hashes to begin with?

This all starts primarily with poor database security, which can stem from any number of bad (in)security habits: unsanitized user input, dangerous or buggy code, non-segregated data, poor access control lists, and many more.  Unsanitized user input – which gives rise to what the industry knows as SQL injection, a topic we have discussed at great length previously – is the crux of most critical web security failures, especially the ones that yield database treasure troves of password hash dumps.  Since its inception, the Open Web Application Security Project (OWASP) has assembled a top ten list of web security vulnerabilities.  Every year that list has been assembled, SQL injection has made the list.  Furthermore, nearly every single compromise of a password hash database in the past several years has been possible at least in part because of SQL injection.  We have exhausted this topic before – as have many hundreds of other organizations, corporations, even governments – and yet it still remains consistently the most damaging attack vector.
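For concreteness, here is the anti-pattern and its fix side by side, sketched with the Node.js 'mysql' driver purely as an illustration; the table and column names are assumptions, and any driver or ORM with parameter binding achieves the same effect.

// SQL injection anti-pattern versus a parameterized query.
const mysql = require('mysql');
const connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'site' });

function findUserUnsafe(email, callback) {
  // BAD: user input is concatenated straight into the SQL string, so an
  // email value of "' OR '1'='1" changes the meaning of the statement.
  connection.query("SELECT id, pw_hash FROM users WHERE email = '" + email + "'", callback);
}

function findUserSafe(email, callback) {
  // GOOD: the driver binds/escapes the value separately, so it is always
  // treated as data, never as SQL.
  connection.query('SELECT id, pw_hash FROM users WHERE email = ?', [email], callback);
}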

Before we cover SQL injections much further, we must once again and briefly harken back to our two-part series on PCI compliance – a merchant regulatory security standard organized by the major credit card corporations of the world: Visa, MasterCard, American Express, Discover, and Japan Credit Bureau – to revisit some topics that are incredibly important to every aspect of web security.  Whether a content provider’s data is as simple as RockYou’s, or as critical as multi-million dollar banking, the six categories of PCI compliance are highly applicable to nearly any line of business that has a web-facing authentication portal.  Of course, some PCI compliance requirements are potentially inapplicable – not every website can restrict data access at a digital or physical level, depending on their hosting scenario – but the core concept still holds valid: restrict and secure the data with multiple layers of security.

Indeed, securing the code that runs the website should be the only step required.  However, it may be impractical or infeasible to completely review and secure every SQL query in the code used (by one comparison, Drupal has had over 20,000 lines of code committed, WordPress over 60,000 lines, and Joomla! over 180,000 lines).  (The recent Heartbleed bug in the OpenSSL library is an excellent example of software with thousands of lines of code being used without inspection by thousands of users.)  Or, it may simply be impossible to do so because the code is encoded, such as with SourceGuardian or ZenCrypt.  Even with all these impracticalities, a content provider can still potentially shield against many of these attacks by using layers of firewalls.

Typically this might include some adaptive solution that rides on top of iptables or ipfw (depending if you are using Linux or a BSD variant, respectively), or perhaps a reactive Host Intrusion Detection System (HIDS) such as OSSEC, although these are often more complicated than desired and not exactly purpose-built for these uses.  Instead, a content provider may wish to utilize a Web Application Firewall, which is designed specifically for these tasks.  While there exist several enterprise-level solutions that are both a WAF and database firewall (sitting between your web application and your database), there are many open-source solutions, such as ModSecurity and IronBee, that perform remarkably well.

A Web Application Firewall is not always bulletproof, either, and may still allow a SQL injection or other attack to slip through.  A common theme you may have noticed in this paper is our frequent mention of the word “layers,” and for good reason, too.  A Web Application Firewall by itself is not enough, nor is securing the code a website runs, nor monitoring alone, and so forth.  However, when a content provider combines all of these approaches, frequently performs thorough web security audits and penetration scans, and encourages end-users to practice strong and modern security standards, the probability of a mass compromise drops significantly.

Wrap-up: Making Intrusions Fruitless to Attackers

Of course, no one will ever be able to prevent every single type of attack and have 100% assurance that no one may ever gain unauthorized access to their systems.  However, a content provider can implement layers of strong security standards to make any such intrusion fruitless for the attacker.  Sure, they may be able to deface a website or mess with some content, but if a content provider employs strong layers of security, the damage may be kept within that scope, or even less.  A highly-secured communication pathway to the user, use of very expensive cryptographic password hash functions, layers of firewalls, and data integrity and security checks from user to database and every step in between – all of these and more are critical components to ensuring your systems do not meet the same fate and embarrassment that even the largest organizations have unfortunately suffered.

How to Configure URL Rewrite Rules in Netsparker

By configuring URL rewrite rules in Netsparker Web Application Security Scanner, you enable it to automatically detect, crawl and attack all parameters in URLs. This article explains how to use the wizard to easily configure URL rewrite rules in Netsparker. Read the article URL Rewrite Rules and Web Vulnerability Scanners for more information on URL rewrite rules.

Step 1: Create a New Scan Policy

The first step to configure URL rewrite rules is to create a new Scan Policy. For more information on Netsparker Scan Policies and how to create a new one, watch the video below.

Step 2: Configure URL Rewrite Rules Using the Wizard

  1. Once you create the new Scan Policy click on URL Rewrite from the left pane as shown in the below screenshot.

Creating a new Scan Policy in Netsparker Web Application Security Scanner

  2. Tick the option Use Custom URL Rewrite Rules and click New to launch the URL Rewrite Rules wizard.

Note: Should you wish to configure the URL Rewrite rules manually in Netsparker, without using the wizard you can simply click on the Placeholder Pattern and RegEx Pattern input fields to populate them manually.

  3. In the first step of the wizard, specify a URL that matches the URL rewrite rule you want to add, such as http://www.example.com/movie/fight-club/

Add URL Rewrite Rule wizard in Netsparker

  4. In the second step of the wizard, tick the path segment that contains a parameter value and specify the parameter name. As seen in the screenshot below, the parameter value is fight-club and the parameter name we entered is movie.

Configuring the values and parameters in the Netsparker URL rewrite rule wizard

Note: If there are multiple parameters in the URL you can specify all of them in this step as per the example in the screenshot below, where the URL also includes a parameter called year, a parameter called month and a parameter called movie.

Configuring multiple parameters in the URL rewrite rules wizard of Netsparker

  5. Click Finish and the placeholder pattern and regular expression are automatically generated. Click on any of the values to modify them manually, for example to write your own regular expression.

Configured URL Rewrite Rule in Netsparker

Test Configured URL Rewrite Rules

Once you are ready, you can test the rules by clicking the Test button next to the URL, then click OK to save the new Scan Policy and launch the web vulnerability scan.

Netsparker URL Rewrite Rules Configuration Video Tutorial

Below is a short Netsparker video tutorial which shows you how to:

  • Configure a URL Rewrite Rule using the Wizard
  • Configure a URL Rewrite Rule with multiple parameters in the URL
  • Configure a URL Rewrite Rule Manually

URL Rewrite Rules and Web Vulnerability Scanners

URL rewrite rules and automated web vulnerability scanners are not exactly the best of friends; typically, URL rewrite technology hinders automated vulnerability scans. This article explains what URL rewrite rules and URL rewrite technology are, why URL rewrite technology hinders automated web vulnerability scans, and what Netsparker did to enable users to easily scan websites which use URL rewrite technology with Netsparker Web Application Security Scanner.

Introduction to URL Rewrite Rules and URL Rewrite Technology

Web application developers use URL rewrite rules to hide parameters in the URL directory structure, typically to make it easier for search engines to index all the pages on a website. Another advantage of using URL rewriting technology is that symbols such as question marks and equals signs do not appear in URLs, making them easier to remember.

URL Rewrite Rule Example

A URL rewrite rule is like a translator; it tells the web server software how to change the URL that users see and use in a web browser into a format the web application understands. An example follows:

When you browse a movie collection library, the URL typically looks something like: http://www.example.com/movie/fight-club/

Using a URL rewrite rule the web server converts the above URL to a specific format as shown below, so it can retrieve the data from the back end database to show the movie details to the website visitor:

http://www.example.com/library.php?movie=fight-club

From the above example we can determine that the subdirectory movie in the first URL is actually a parameter in the file library.php that accepts inputs, which in this case is the movie name fight-club.
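A rewrite rule of this kind usually lives in the web server's configuration, but the mapping itself is just a pattern; the JavaScript sketch below expresses the hypothetical movie rule as a regular expression purely to illustrate how the friendly URL maps back to its real parameter.

// Hypothetical rewrite rule for the movie example above.
const rewriteRule = /^\/movie\/([^\/]+)\/?$/;

function rewrite(path) {
  const match = path.match(rewriteRule);
  if (!match) return path; // no rule applies, pass the path through unchanged
  return '/library.php?movie=' + encodeURIComponent(match[1]);
}

console.log(rewrite('/movie/fight-club/')); // "/library.php?movie=fight-club"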

Web Vulnerability Scanners and URL Rewrite Rules Problems

Parameters in URLs are not Scanned

A common problem web vulnerability scanners have when scanning web applications that use URL rewriting technology is that they are unable to identify parameters in the URLs; they assume these are directories rather than parameter names or values, and therefore such parameters are not scanned.

For example when scanning the URL http://www.example.com/movie/fight-club/ the scanner would think that both movie and fight-club are directories, while in reality movie is a parameter and fight-club is a value.

Prolonged Vulnerability Scans

As a matter of fact, the above problem can lead to prolonged scans and incorrect scan results. For example, if the web vulnerability scanner is scanning a movie database that contains 100,000 films, since the scanner is unable to identify that there is a parameter and a value in the URL, it will think they are all different pages and therefore try to crawl and scan them all.

If memory problems and other exceptions are not handled properly by your scanner this could also lead to the software crashing on you, leaving you with no results and a number of wasted hours. If you do not configure URL rewrite rules in Netsparker it will heuristically identify the pattern and will limit the scan to avoid having prolonged scans and incorrect results.

Configuring URL Rewrite Rules is a Difficult Process

Since URL rewrite technology has become really popular in web applications, many commercial web vulnerability scanners allow users to configure the scanner so it can identify the parameters in the URLs and scan them.

But even though web vulnerability scanners can be configured to scan websites using URL Rewrite Rules, there are several other problems users typically face:

  • Configuring of URL rewrite rules support is very difficult
  • User must know how to write regular expressions
  • User should have access to web server configuration files

Therefore, unless you are the developer of the web application itself or have a deep understanding of it, and unless you have direct access to the configuration files, it is virtually impossible to configure URL rewrite rules on the scanner; at best it is a very hard and time consuming task.

Web Applications Are Not Properly Scanned for Vulnerabilities

And even if you manage to configure URL rewrite rules in your web vulnerability scanner you are in for more problems, or rather, there are a number of limitations to how the scanners scan the web application.

As a security precaution typically web applications do not accept HTTP requests which are already “translated”, such as http://www.example.com/library.php?movie=fight-club. In fact by default .NET web applications do not accept such HTTP requests. It can get even worse when scanning MVC web applications because such applications use a different approach to URL rewriting.

As a matter of fact, while Netsparker can scan MVC web applications many other web vulnerability scanners cannot, even when URL rewrite rules are configured.

But once you configure the URL rewrite rules in your scanner, it typically sends these types of HTTP requests, i.e. translated queries. In this case, even though the web application security scanner reports that the scan ran successfully, in reality most of the HTTP requests were denied and the parameters in the URLs were not scanned, thus providing a false sense of security.

Netsparker and URL Rewrite Rules Support

As we have just seen in this article, many web vulnerability scanners have several shortcomings when it comes to scanning web applications that use URL rewrite technology. They are very difficult to configure and also give a false sense of security, since parameters are not actually scanned.

On the other hand when we implemented URL rewrite rules support in Netsparker we wanted to ensure that they are very easy to configure and that all parameters in the URLs are actually scanned correctly.

During the web vulnerability scan Netsparker sends normal HTTP requests to the web application just like an attacker would, to ensure that such requests are accepted by the web application and that all parameters in the URLs are properly scanned to identify any potential vulnerabilities they might be vulnerable to. With Netsparker Web Application Security Scanner it is also possible to scan pages which have more than one parameter in the URL.

Easily Configure URL Rewrite Rules in Netsparker

For more details on how to configure URL rewrite rules in Netsparker using the user friendly wizard read Configuring URL Rewrite Rules Support in Netsparker.


DOM Based Cross-site Scripting Vulnerability

Today Cross-site Scripting (XSS) is a well known web application vulnerability among developers, so there is no need to explain what XSS is. The most important thing developers should understand about a Cross-site Scripting attack is its impact; an attacker can steal or hijack your session, carry out very successful phishing attacks and effectively do anything that the victim can.

DOM Based XSS simply means a Cross-site Scripting vulnerability that appears in the DOM (Document Object Model) instead of in the HTML. In reflected and stored Cross-site Scripting attacks you can see the vulnerability payload in the response page, but in DOM based Cross-site Scripting the HTML source code and the response of the attack will be exactly the same, i.e. the payload cannot be found in the response. It can only be observed at runtime or by investigating the DOM of the page.

Simple DOM Based Cross-site Scripting Vulnerability Example

Imagine the following page http://www.example.com/test.html contains the below code:

<script>
    document.write("<b>Current URL</b> : " + document.baseURI);
</script>

If you send an HTTP request like http://www.example.com/test.html#<script>alert(1)</script>, simply enough, your JavaScript code will get executed, because the page writes whatever you typed in the URL into the page with the document.write function. If you look at the source of the page, you won’t see <script>alert(1)</script> because it is all happening in the DOM, done by the executed JavaScript code.

After the malicious code is executed by the page, you can simply exploit this DOM based Cross-site Scripting vulnerability to steal the user’s cookies or change the page’s behaviour as you like.

DOM XSS Vulnerability is a Real Threat

Various research and studies identified that up to 50% of websites are vulnerable to DOM Based XSS vulnerability. Security researchers have already identified DOM Based XSS issues in high profile internet companies such as Google, Yahoo and Alexa.

Server Side Filters Do Not Matter

One of the biggest differences between DOM Based XSS and Reflected or Stored XSS vulnerabilities is that DOM Based XSS cannot be stopped by server-side filters. The reason is quite simple; anything written after the "#" (hash) will never be sent to the server.

Historically, the fragment identifier, a.k.a. the hash, was introduced simply to scroll the HTML page to a certain element; however, it was later adopted by JavaScript developers to keep track of pages and various other things in AJAX applications, most often referred to as the hash-bang "#!".

Due to this design, anything after the hash is never sent to the server. This means all server-side protection in the code will not work for DOM Based XSS vulnerabilities. As a matter of fact, other types of web protection, such as web application firewalls or generic framework protections like ASP.NET Request Validation, will not protect you against DOM Based XSS attacks either.
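A quick way to convince yourself of this is to compare what the browser actually transmits with what client-side script can read; the request lines below are a sketch, and the exact encoding of the fragment varies by browser.

// For http://www.example.com/test.html#<script>alert(1)</script>
// the browser sends only:
//
//   GET /test.html HTTP/1.1
//   Host: www.example.com
//
// The fragment never leaves the browser, yet client-side code can read it:
console.log(location.hash);    // the fragment, possibly percent-encoded
console.log(document.baseURI); // the full URL used by the example above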

Input & Output, a.k.a. Source & Sink

The logic behind DOM XSS is that an input from the user (source) reaches an execution point (sink). In the previous example our source was document.baseURI and the sink was document.write.

What you need to understand though is that DOM XSS will appear when a source that can be controlled by the user is used in a dangerous sink.

So when you see this, you either need to make the necessary code changes to avoid being vulnerable to DOM XSS, or you need to add encoding accordingly.

Below is a list of sources and sinks which are typically targeted in DOM XSS attacks. Note that this is not a complete list but you can figure out the pattern, anything that can be controlled by an attacker in a source and anything that can lead to script execution in a sink.

Popular Sources

  • document.URL
  • document.documentURI
  • location.href
  • location.search
  • location.*
  • window.name
  • document.referrer

Popular Sinks

  • HTML Modification sinks
    • document.write
    • (element).innerHTML
  • HTML modification to behaviour change
    • (element).src (in certain elements)
  • Execution Related sinks
    • eval
    • setTimeout / setInterval
    • execScript

Fixing DOM Cross-site Scripting Vulnerabilities

The best way to fix DOM based cross-site scripting is to use the right output method (sink). For example, if you want to use user input to write to a <div> element, don’t use innerHTML; use innerText or textContent instead. This solves the problem, and it is the right way to remediate DOM based XSS vulnerabilities.

It is always a bad idea to use user-controlled input in dangerous sinks such as eval. 99% of the time it is an indication of bad or lazy programming practice, so simply don’t do it instead of trying to sanitize the input.

Finally, to fix the problem in our initial code, instead of trying to encode the output correctly, which is a hassle and can easily go wrong, we simply use element.textContent to write it into the content like this:

<b>Current URL:</b> <span id="contentholder"></span>

<script>
document.getElementById("contentholder").textContent = document.baseURI;    
</script>

It does the same thing but this time it is not vulnerable to DOM based cross-site scripting vulnerabilities.

Netsparker Web Application Security Scanner 3.5 Features Highlight

We are happy to announce the new Netsparker Web Application Security Scanner version 3.5. It has been quite a while since you last heard news of a major version update from us, which is not normal. Typically we release major updates much more frequently, though as you will notice, in this version we implemented a new crawling engine and several other new features, which is no small feat. So even if it took us some time to release this new version, we are very happy with the outcome and we are sure you will be as well.

Automated DOM Based Cross-site Scripting Security Tests

The Netsparker Cross-site Scripting engine already scans websites for XSS vulnerabilities that can be exploited by sending payloads through HTTP requests. With this new version of Netsparker, the scanning engine can now also detect another category of Cross-site Scripting vulnerabilities: DOM based cross-site scripting. These vulnerabilities are usually exploited by setting an XSS payload in the web page’s location hash value. If the page doesn’t validate this input, chances are a DOM based vulnerability lies there. From this version on, Netsparker will scan your web pages for DOM Based Cross-site Scripting vulnerabilities. If your website makes use of location hash values for various purposes, scanning your website for DOM Based Cross-site Scripting vulnerabilities is crucial.

DOM XSS reported by Netsparker Web Application Security Scanner

The DOM Based Cross-site Scripting tests can take quite some time to complete, especially when scanning pages with lots of elements. Therefore the default scan policy in Netsparker has DOM Based Cross-site Scripting checks disabled, but you can always scan your site using the Extensive Security Checks policy or create your own policy to find DOM Based Cross-site Scripting vulnerabilities. In the meantime we are already working on several improvements to ensure DOM based XSS tests are as fast as all other security checks.

Read the article DOM based Cross-site Scripting Vulnerabilities for a more detailed explanation of this vulnerability variant.

Custom & Easy URL Rewrite Configuration

Websites today make use of a technique called URL rewriting to have more readable URLs and a higher search engine ranking. Using URL rewriting, websites replace the ugly-looking regular GET parameters in URLs with more readable URL path segments. Previous versions of Netsparker tried to automatically detect whether the site being scanned has URL rewriting in place. This version of Netsparker introduces Custom URL Rewrite Configuration, which allows you to configure the scanner by providing the URL rewrite patterns of the target website. Once configured, Netsparker will be able to attack URL segments which play the role of a parameter. This also helps in reducing the number of attacks for URLs with the same pattern.

Configuring URL rewrite rules in Netsparker Web Application Security Scanner is as easy as ABC

Read URL Rewrite Rules and Web Vulnerability Scanners for more detailed information on URL rewrites and check out Configuring URL Rewrite Rules in Netsparker Web application Security Scanner to see how easy it is to configure URL rewrite rules in Netsparker to scan parameters in the URL.

Ignore a Vulnerability From Scan Results

You may want to exclude a specific vulnerability from a scan result so it won’t appear in the reports. These could be the kind of vulnerabilities that are informational and make the report too verbose, or ones you believe no longer exist. Using this new feature, you can ignore a vulnerability from a specific scan by right-clicking it in the sitemap tree, as shown in the screenshot below.

It is possible to exclude a vulnerability from a web vulnerability scan result and not having it in the report with Netsparker

Chrome Based Web Browser Engine & Crawler

Previous versions of Netsparker used the Internet Explorer engine for performing DOM operations and JavaScript execution. There were several problems with using IE as the web browser engine in Netsparker. First and foremost, every Windows installation comes with a different version of IE, which is not always updated to the latest version. This caused us problems when developing features for several versions of IE at once. Another problem is that older versions of IE were missing some basic features of recent standards, which caused standards-based web applications to behave unexpectedly. Given these problems, we decided to replace our web browser engine with a Chrome-based engine. We have also replaced the web browser pane in the recording phase of our Form Authentication wizard.

The new crawling and web browser engine opens up a number of new opportunities for Netsparker, and we will continue developing it to ensure that Netsparker can automatically crawl and scan an even wider variety of web applications built with different technologies.

Complete Change Log for Netsparker Web Application Security Scanner 3.5

For a complete detailed changelog of what is new and improved in the latest version of Netsparker please visit the Netsparker Web Application Security Scanner Change Log.

Netsparker Allows SECWATCH to Provide Affordable and Efficient Web Application Security Audits

“Like everyone else we evaluated Netsparker along with a number of other commercial scanners, though we immediately noticed that Netsparker was what we were looking for,” Henk-Jan, SECWATCH Founder

Providing Efficient and Affordable Web Application Security Audits

SECWATCH is a Dutch company that provides penetration testing, security auditing and compliance checks to a variety of organizations in the Netherlands and abroad, ranging from small businesses to enterprises. They have been leading the security industry for a number of years because of their unique approach and the clear, practical advice and remediation suggestions they provide to their customers.

As part of their service offerings, SECWATCH do web application security audits. Originally they started off by using a combination of open source web security tools and manual web application security audits.

But as the demand for their web security services grew, and the web applications they were auditing became bigger and more complex, they encountered several pitfalls: security tests were taking much longer to complete, making them unaffordable, and the open source tools did not cope well with the size and complexity of the enterprise-level web applications they were auditing.

“We were doing manual web security audits with a variety of open source security scanners and manual validation testing. As web applications became more complex we started noticing that the tools started reporting a lot of both false positives and false negatives,” said Henk-Jan, Founder of SECWATCH. “The scan results the tools were producing impacted our procedures and also our prices. The more complex the web applications were, the more time we were spending manually checking all of the scanners’ results, making the whole process way too expensive."

Moving Towards Automated and Cost Effective Web Vulnerability Scanning

Because of the problems SECWATCH were encountering while delivering their web application security services, and to ensure they could continue to provide top notch service at an affordable price, they had to look for an automated web vulnerability scanner.

Like many other organizations that need such a tool, SECWATCH were not just looking for a good web vulnerability scanner; they were looking for a complete solution. They needed software that would enable them to automate the process and save time by producing accurate results, and a software company that would be there when they needed support.

Switching to Netsparker Web Application Security Scanner

“Like everyone else we evaluated Netsparker along with a number of other commercial scanners, though we immediately noticed that Netsparker was what we were looking for,” said Henk-Jan. “To start off with it detected web vulnerabilities that other solutions didn’t detect. It is easy to use and setup, it generates easy to read findings and reports that we can implement into our base workflows. Netsparker pricing also allowed us to keep on providing web security audits which include manual testing and validation at an affordable price.”

Sticking to Netsparker Web Application Security Scanner

As many security professionals know very well, web application security is not a straightforward business. So when buying a web vulnerability scanner it is not just about how good the scanner is, and how many vulnerabilities it can detect, but it is also about the support the software company can provide you with and the continuous development of the scanner.

SECWATCH have been using Netsparker alongside several other tools for over three years now, with Netsparker being the leading tool for their web security audits. They do not intend to switch to another solution any time soon because, as Henk-Jan states, “We have contacted Netsparker support several times because when using such an advanced tool it is normal to question some things, or even some results sometimes. Netsparker’s support response was beyond expectation in terms of time, availability and providing the actual solution”.

Netsparker also releases updates and new product versions frequently to ensure that all of its users stay a step ahead of malicious attackers. Each new update and version contains new web application security tests and a number of features that enable users to automate as much of the process as they can.

About SECWATCH

SECWATCH specializes in providing solutions for information and network security. SECWATCH's unique approach and vision ensures that your business is optimally protected. SECWATCH not only looks at hardware and software solutions, but also at organizational aspects, such as solid security policies and their enforcement. SECWATCH sees information security as an integral business process, and therefore, in addition to technical recommendations, they give advice in the areas of management, organization and business structure. This makes the SECWATCH approach unique in the industry.

About Netsparker Web Application Security Scanner

Netsparker Web Application Security Scanner is an industry leading automated web vulnerability scanner developed by Netsparker Ltd. It is very easy to use and automates most of the web security scanning. An out of the box installation of Netsparker is able to scan a wide variety of web applications, therefore users do not need to spend hours configuring the software. Netsparker is the only web vulnerability scanner to automatically verify detected web vulnerabilities, thus reporting no false positives. Netsparker is used by world renowned companies such as Samsung, NASA, Skype, ING Bank and Ernst & Young.

How Fast is Your Web Vulnerability Scanner?

How fast is your automated web vulnerability scanner? This is probably the most common question a penetration tester asks when evaluating an automated scanner. Time is money, and for businesses it is very important that a security professional, or anyone employed to ensure the security of websites and web applications, finishes the penetration tests in the shortest time possible.

There are many factors that affect the duration of a complete automated web application vulnerability scan. And apart from the automated scan itself, there are several other factors that determine how long a penetration test can take. For example, how long does it take the user to configure the web scanner? How many post-scan tasks are required to fully complete the penetration test, such as verifying the scanner's findings?

When you look at it from a business owner's point of view, all these factors matter; it is not just about the scanner's speed but about the whole picture. In this article we will look into each factor that makes up a complete web security scan and what can affect the scan duration, or rather, the complete process of securing websites and web applications.

Size and Complexity of Target Web Application

The complexity and size of the target web application play a major role. For example, an automated web vulnerability scanner will take longer to scan a web application of 10 pages with 10 inputs on each page than a website of 1,000 pages with only 20 inputs in total. The difference in scan duration arises because the scanner attacks inputs, i.e. the attack surface, and static pages, which have no attack surface, are largely ignored.

Apart from visible inputs, a website may also have custom 404 error pages, URL rewrite rules and heavy use of the DOM. Depending on all these factors and how they are configured, the automated scan duration can vary.

Web Vulnerability & Security Checks

Netsparker gives users the ability to create new Scan Policies and select which types of web application vulnerability checks should be run during an automated security scan. For example, the scanner will require much less time to finish a scan that checks the target website for SQL injections only, as opposed to scanning it with a wider variety of vulnerability checks.

You should also use Scan Policies to ensure that the scanner is not performing any redundant checks. For example, if your web application runs on PHP and uses Apache and MySQL, you can safely disable all ASP.NET, IIS and Microsoft SQL Server security checks. By doing so you can drastically decrease the scan duration.

Because of the nature and complexity of vulnerability checks, some vulnerability checks take much longer to complete than others. For example the vulnerability check for DOM based cross-site scripting can take quite a long time to complete because of the way the check is done.

Web Server Response Time

Complementing the points above, the web server response time also plays a major role in the duration of an automated web security scan. The number of HTTP requests a scanner sends during an automated scan depends on the complexity and size of the target website and on the type of vulnerability checks configured. During a typical web security scan the scanner sends thousands of HTTP requests. If the web server response time is very high, i.e. the web server takes a considerable amount of time to respond to the requests sent by the scanner, the scan duration is prolonged.

If the web server response time is very high, check the web server’s CPU and memory usage, the load on the database and several other counters that might be contributing to high web server response time.

Internet Connection Speed

Regardless of how many HTTP requests the automated web vulnerability scanner sends and how fast your web server responds to them, if the internet connection between the scanner and the target web application is slow, the automated web security scan can take a considerable amount of time to complete.

If the internet connection is a problem, try to launch the security scan from a computer that is on the same network as the web server, thus eliminating a factor that might affect the scan duration.

Web Vulnerability Scanner Speed

Many users think that the duration of a web application vulnerability scan depends only on the speed of the scanner, but as we have just seen it depends on several other factors. Of course the speed of the automated security tool you are using affects the scan duration, but it is probably the least significant factor. For example, Netsparker can be configured to send 25 concurrent HTTP requests to the target web server, but can the web server and web application handle such a load?

Technically speaking, 25 concurrent connections is like emulating 25 browsers, except that the scanner sends its HTTP requests back to back, unlike real visitors, who typically pause between one action and the next. So before configuring the number of concurrent connections in the web vulnerability scanner, confirm that the target web server and web application can handle it and that there is enough bandwidth to cater for it.

Other Factors Affecting Web Vulnerability & Security Scans

So far we have only looked into the technicalities of the factors that can affect the duration of an automated scan, but a complete web vulnerability scan is not just about the automated scan. To fully justify the investment a business has made, the scan duration should be measured from when the user starts configuring the web vulnerability scanner until he or she has verified the tool's findings.

Easy to Use Security Tools

It is a well known fact that most security software is difficult to set up and run. For example, because of the wide variety of web applications in use today, it can sometimes take a considerable amount of time to configure the tool prior to launching a web vulnerability scan. Therefore when evaluating several different web vulnerability scanners, make sure that the one you go for is easy to use and automates as much of the configuration process as possible by fine-tuning itself.

An easy to use web vulnerability scanner does not only mean you need less time to launch a web security scan; it also means that people from other departments can use the software. For example, if the software requires a lot of details about the target web application, most probably only the developers can use it. But if the software is easy to use and automates most of the configuration tasks, even software testers or quality assurance team members can use it, allowing developers to focus on development and remediation of security issues and enabling your business to reduce the cost of securing web applications.

Verifying Reported Web Vulnerabilities

This is probably the lengthiest post-scan procedure: verifying the vulnerabilities that an automated web vulnerability scanner reported. Apart from being a lengthy and daunting task, manually verifying web vulnerabilities requires a lot of knowledge and skill. Sometimes even the most seasoned web security professionals cannot manually verify some of the scanner's findings.

The good news is that Netsparker automatically verifies detected vulnerabilities by exploiting them in a safe and read only manner, therefore you can almost eliminate this task and reduce the time you require to complete a web vulnerability scan. There are also several other advantages to using a false positive free web vulnerability scanner, because false positives have a very negative impact on web application security.

Hardware Specifications

Last but not least, the computer hardware on which you run the scanner also affects the scan duration and speed. A web vulnerability scanner needs processor power and consumes memory to operate properly, like all other software. Web vulnerability scanners perform millions of calculations and store a lot of temporary data during a normal web application security scan (depending on the size of the target website), therefore the more processor power and memory you have, the faster the security scanner can execute its calculations.

In terms of memory, 2 GB of RAM is barely enough to run a modern web vulnerability scanner; 4 GB is much better, especially if you are scanning large web applications. In terms of processor, the faster the processor and the more cores it has, the faster the web vulnerability scanner can do its calculations. A modern dual core processor should suffice, but as with everything else, the bigger the better.

How Fast is Your Web Vulnerability Scanner?

As we have just seen, there are many different factors that might affect the duration of an automated web vulnerability scan, and while some of them are obvious, most of them are not. Also, when evaluating web vulnerability scanners do not just look at them from the technical or developer point of view; look at them from a business owner's point of view so you can choose the scanner with the best return on investment for your business.

Ruby on Rails Security Basics

Recently I have been working on my first Rails application. Even though I have been working with Ruby for a number of years, this is the first time I've developed an application using the Rails framework. By trade, I'm a Security Tester; however, I do like to work on software projects in order to keep my skills sharp and practice what I preach.

Rails has some security features built in and enabled by default; however, I also recommend installing some additional Gems to cover security features Rails lacks by default. This article explains the basic Ruby on Rails built in security features and which Gems I recommend installing.

Ruby on Rails Built in Security Features

I’m a great believer in secure by default and making security easy for developers. Some may argue that by making security easy, it will make developers pay less attention to security and possibly lead them to making more security mistakes. Kind of like a horse with blinkers on. In reality I think it is probably a balance, don’t make security invisible to the developer but instead make it just easy enough for them to implement correctly.

So be warned! Don't just rely on Rails' built in security features thinking that they offer a 100% effective way of mitigating the vulnerabilities they were designed to prevent. Instead, learn how to use them correctly and know their limitations.

Preventing Cross-Site Scripting (XSS)

To help prevent Cross-Site Scripting (XSS) vulnerabilities we sanitise input and encode output using the correct encoding for the output context.

Sanitising Input

Rails makes sanitising input easy with its Model View Controller (MVC) design. Any data stored or retrieved from a database should pass through a Model, so this is a great place to sanitise our stored data. Using Active Record Validations within our models we can ensure that data is present and/or in a specific format.
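
As a minimal sketch (the model and attribute names here are hypothetical), a validation that ensures a username is present, limited in length and restricted to a safe character set could look like this:

class User < ActiveRecord::Base
  # Require a username and only allow letters, numbers and underscores
  validates :username, presence: true,
                       length: { maximum: 30 },
                       format: { with: /\A[a-zA-Z0-9_]+\z/ }
end

Validations like these will not stop every XSS payload on their own, but they reduce the attack surface before the data ever reaches a View.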

You can also sanitise input/output within your View using the sanitize method. The sanitize method ‘will html encode all tags and strip all attributes that aren’t specifically allowed’. Let’s pass it a common XSS payload and see how it reacts:

<%= sanitize '<img src=x onerror=prompt(1)>' %>

The above will output:

<img src="x">

As we can see the sanitize method has allowed our img tag with the src attribute, but it has removed the onerror event attribute. By default, if we don't whitelist which tags/attributes we want, Rails will make the decision for us based on what it believes is 'safe'.

If we whitelist the src and onerror attributes, our XSS payload is executed:

<%= sanitize '<img src=x onerror=prompt(1)>', attributes: %w(src onerror) %>

The above will output:

<img src="x" onerror="prompt(1)">

Encoding Output

In modern versions of Rails, strings output in the View are automatically encoded. However, there may be occasions when you want to encode HTML output yourself. The main output encoding method in Rails is called html_escape; you can also use h() as an alias. The html_escape method 'escapes html tag characters'.

Let’s pass it a common XSS payload and see how it reacts:

<%= html_escape '<img src=x onerror=prompt(1)>' %>

The above will output:

&lt;img src=x onerror=prompt(1)&gt;

As we can see the html_escape method has converted the < and > characters into html entities, ensuring the browser does not interpret them as markup.

This is the same output as we would see if we simply passed a string, thanks to Rails’s default encoding:

<%= "<img src=x onerror=prompt(1)>" %>

The above will output:

&lt;img src=x onerror=prompt(1)&gt;

But don't forget what we said earlier! Just because modern versions of Rails encode strings in Views by default does not mean that XSS cannot happen. One example is within the href value of a link (using the link_to method), as shown below.
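
For example, if user-supplied data ends up in the href of a link, output encoding alone will not stop a javascript: URL from executing when the link is clicked. The snippet below is only an illustration; @user.website is a hypothetical attribute holding user-supplied data:

<%= link_to 'My website', @user.website %>
<!-- If @user.website contains "javascript:alert(1)", clicking the rendered link runs the script -->

In cases like this, validate or whitelist the URL scheme before rendering the link.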

Preventing Cross-Site Request Forgery (CSRF)

Modern versions of Rails protect against CSRF attacks by default by including a token named authenticity_token within HTML responses. This token is also stored within the user's session cookie; when a request is received, Rails checks one against the other. If they do not match, an error is raised.

It is important to note that Rails’s CSRF protection does not apply to GET requests. GET requests should not be used to change the application’s state anyway and should only be used to request resources.

Although enabled by default, you can double check that it’s enabled by seeing if the protect_from_forgery method is within the main ApplicationController.
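
As a rough sketch, the relevant part of the controller looks something like this (recent Rails generators add the with: :exception option; older versions simply call protect_from_forgery with no arguments):

class ApplicationController < ActionController::Base
  # Raise an exception when the CSRF token is missing or does not match
  protect_from_forgery with: :exception
end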

Preventing SQL Injection

Rails uses an Object Relational Mapping (ORM) framework called ActiveRecord to abstract interactions with a database. ActiveRecord, in most cases, protects against SQL Injection by default, however, there are ways in which it can be used insecurely which can lead to SQL Injection.

Using ActiveRecord we can select the user with the supplied id and retrieve that user’s username:

User.find(params[:id]).username

The above will return the username of the user whose user id matches the one supplied via the params hash. Let’s take a look at the SQL query generated by the code above on the backend:

SELECT  "users".* FROM "users"  WHERE "users"."id" = ? LIMIT 1  [["id", 1]]

As we can see from the SQL query above, when using the find method on the User object, ActiveRecord binds the id to the SQL statement, protecting us from SQL Injection.


What if we wanted to select a user matching a username and password, as commonly seen in authentication forms? You might see something like this:

User.where("username = '#{username}' AND encrypted_password = '#{password}'").first

If we supply a username with the value ') OR 1-- the corresponding SQL query on the backend becomes:

SELECT  "users".* FROM "users"  WHERE (username = '') OR 1--' AND encrypted_password = 'a')  ORDER BY "users"."id" ASC LIMIT 1

By injecting our specially crafted SQL, we have told the database to return all rows from the users table where the username is an empty string or where the condition 1 is true. Since the OR 1 condition holds for every row, the query matches all of the data in the users table, and the .first call simply returns the first user, bypassing the authentication check.
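
A safer way to write the same lookup, as a minimal sketch, is to let ActiveRecord bind the values instead of interpolating them into the SQL string:

# Placeholder conditions: ActiveRecord escapes the supplied values
User.where("username = ? AND encrypted_password = ?", username, password).first

# Or, equivalently, hash conditions
User.where(username: username, encrypted_password: password).first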

For some great examples of how not to use ActiveRecord, here’s a great resource which I suggest you check regularly to ensure you don’t have any of the examples within your code - http://rails-sqli.org/

Ruby on Rails Security Gems

As we have seen, Rails offers many built in security features to help protect our applications, data and users from web based attacks. But we also saw that these have their limitations. For security features that Rails does not offer by default there are always Gems, lots and lots of Gems. Here are some of my favourites.

devise

Devise is a popular authentication and authorisation Gem for Rails. It offers secure password storage using bcrypt to hash salted passwords, user lockouts, user registration, forgotten password functionality and more.

Although Devise’s own README states “If you are building your first Rails application, we recommend you to not use Devise”, I would ignore this statement. If you’re security aware and have built applications in other frameworks before, I don’t see any issue with using Devise for your first Rails application.

URL: https://github.com/plataformatec/devise
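
As a minimal sketch of what using Devise looks like (assuming a User model and that the devise gem is already in your Gemfile), you enable the modules you need directly in the model:

# app/models/user.rb
class User < ActiveRecord::Base
  # Password hashing and sign-in, registration, password reset,
  # account locking and basic email/password validations
  devise :database_authenticatable, :registerable,
         :recoverable, :lockable, :validatable
end

Devise also ships with generators that create the initializer, routes and migrations for you.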

brakeman

Brakeman is a Static Code Analysis tool for Rails applications. It searches your application’s source code for potential vulnerabilities. Although it does report the occasional False Positive, personally, I think this is a great Gem and one I would definitely recommend running against your application before going into production. Even better, run it after every commit.

URL: https://github.com/presidentbeef/brakeman

secure_headers

Developed by Twitter, SecureHeaders is a Gem that adds security related HTTP headers to your application's HTTP responses: headers such as Content Security Policy to help protect against Cross-Site Scripting (XSS) attacks, HTTP Strict Transport Security (HSTS) to ensure your site is only accessible over secure HTTPS, X-Frame-Options and others.

URL: https://github.com/twitter/secureheaders
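
As a rough sketch of the kind of initializer the gem expects (the configuration API has changed between versions, so treat the exact option names as assumptions and check the README for the version you install):

# config/initializers/secure_headers.rb
SecureHeaders::Configuration.default do |config|
  config.hsts = "max-age=31536000; includeSubDomains"
  config.x_frame_options = "SAMEORIGIN"
  config.x_content_type_options = "nosniff"
  config.csp = {
    default_src: %w('self'),
    script_src: %w('self')
  }
end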

rack-attack

Developed by Kickstarter, Rack::Attack is a Gem for blocking and throttling abusive requests. Personally, I use Rack::Attack to prevent forms from being abused; for example, instead of implementing a CAPTCHA on a submission form, I use Rack::Attack to ensure it is not submitted too many times in a short space of time. This should prevent automated tools from abusing the form submission. It also supports whitelisting and blacklisting of requests.

URL: https://github.com/kickstarter/rack-attack
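
As an illustration (the path and the limits below are made up for this example), a throttle rule that caps how often a single IP address can submit a contact form might look like this:

# config/initializers/rack_attack.rb
# Allow at most 5 POSTs to /contact from one IP address every 60 seconds
Rack::Attack.throttle("contact-form/ip", limit: 5, period: 60) do |req|
  req.ip if req.path == "/contact" && req.post?
end

Requests over the limit are rejected with an error response until the period expires, so legitimate users are simply slowed down rather than blocked outright.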

codesake-dawn

Codesake::Dawn is similar to brakeman in that it scans your source code for potential vulnerabilities. However, Codesake::Dawn also has a database of known vulnerabilities which it uses to check your Ruby, Rails and Gem versions for known issues.

URL: https://github.com/codesake/codesake-dawn

Ruby on Rails Code Quality Gems

Sloppy and messy code leads to bugs and some bugs may have security implications. Better quality code is more secure code. Let’s take a look at what Gems we can use to ensure our code is nice and clean.

rails_best_practices

The rails_best_practices Gem is a great Gem for ensuring your code is adhering to best practices. It will help you make your code more readable and eloquent by scanning through it and giving you suggestions on how to improve the syntax.

URL: https://github.com/railsbp/rails_best_practices

rubocop

Rubocop is not specific to Rails and can be used for any Ruby application. It uses the Ruby Style Guide as a reference to scan your code and ensure you adhere to it. Things like variable naming, method size, using outdated syntax, etc.

URL: https://github.com/bbatsov/rubocop

Conclusion

Rails does a lot of things right when it comes to security. When developing a Rails application it feels like Rails has your back. However, don't let this lull you into a false sense of security and stop caring about it. As we have seen, there are pitfalls, and it only takes one mistake for your users table to end up on Pastebin.

No post on the Netsparker blog would be complete without a Netsparker plug. Everything we’ve talked about in this post is mostly source code related. As well as looking at your application’s source code, you should also ensure it is scanned with a heuristic scanner like Netsparker. You can take every development precaution, read every line of code, but this does not mean you will catch every single vulnerability. Netsparker web vulnerability scanner should be used in conjunction with what has been discussed above throughout your Security Development Lifecycle, as well as when in production.

Further Reading

Below is a list of URLs from where you can find more information about Ruby on Rails security and best coding practices.

https://www.owasp.org/index.php/Ruby_on_Rails_Cheatsheet
http://api.rubyonrails.org/classes/ActionView/Helpers/SanitizeHelper.html
http://guides.rubyonrails.org/active_record_validations.html
http://api.rubyonrails.org/classes/ERB/Util.html#method-c-html_escape
http://guides.rubyonrails.org/security.html
http://api.rubyonrails.org/classes/ActionController/RequestForgeryProtection.html
http://rails-sqli.org/
