
Netsparker used to Identify Thousands of Vulnerabilities

“Netsparker is a fantastic tool and is very light to use. Security Reports are easy to comprehend and helped programmers close web vulnerabilities quickly. It has become an essential tool for our consulting team. I will not hesitate to recommend Netsparker to anyone!”– Sujit Christy, Director at Layers-7 Seguro Consultoria Pvt Ltd.

Web Application Security Services

Layers-7 specializes in IT security and provides businesses from all industries with security consultation, services and solutions, with the aim of helping them find the right balance between security and their primary business mission.

As part of these services, a team of security professionals from Layers-7 ensures the security of thousands of websites and web applications run by its customers and accessed by millions of visitors every month. With customers operating in industries as varied as manufacturing, aviation, retail and government, Layers-7 encounters all kinds of custom-built web applications, most of which are tightly integrated with other custom-made software.

Automating Web Application Security Services

To meet growing customer demand, keep identifying web vulnerabilities such as SQL injection, and work more efficiently while still providing an excellent service, the team of experts at Layers-7 had to automate web application security testing. They needed an automated web application security scanner that:

  • Is fast and reliable
  • Can scan websites running on the Apache web server
  • Identifies vulnerabilities in web applications built in PHP, .NET and Java
  • Generates industry standard reports, such as developer and executive reports
  • Reports no false positives, so the team spends its time securing web applications rather than verifying findings

Best Automated Web Application Security Delivered by Netsparker

After evaluating several other automated web vulnerability scanners such as AppScan and Acunetix WVS, Layers-7 chose the Netsparker web application security scanner because it is the only one that “detects all type of vulnerabilities without reporting false positives. Our team does not need to spend hours configuring Netsparker because an out of the box installation is good to scan almost all type of web applications,” said Sujit Christy.

“Apart from the product, the Netsparker sales and technical support teams are awesome. They answer all kinds of questions, are exceptionally prompt, and no question from our team was ever treated as trivial,” added Layers-7 director Sujit Christy.

About Layers-7

Layers-7 was founded with the vision of providing enterprise level security consulting and services to private and public sector organizations.

About Netsparker

Netsparker is a young and enthusiastic UK based company. Netsparker is focused on developing a single web security product, the false positive free Netsparker Web Application Security Scanner. Founded in 2010, Netsparker is one of the leading web application security scanners and is used by world renowned companies such as Samsung, NASA, Skype, ING and Ernst & Young.


Netsparker Version 3 is Available for Download

We are happy to announce the new version of Netsparker Web Application Security Scanner. In the last couple of weeks everyone in the team has worked tirelessly so that Netsparker version 3 could be released on time as planned. Thanks to our awesome team (especially our QA lead Onur, who is actually getting married tomorrow!).

What is New in Netsparker 3

The new Netsparker Version 3 has several new features that will make the difficult process of finding web application vulnerabilities much easier and quicker. Apart from the new features, the existing scanning engine and security checks have been improved to make sure all web application vulnerabilities are being detected. Netsparker version 3 also includes a number of new security checks.

For more detailed information about the new features and improvements in Netsparker Version 3, you can refer to the post  Netsparker Version 3 Highlights. Refer to the Netsparker change log for a complete list of all the changes including bug fixes.

Netsparker New Reduced Prices and Discounts

Make sure you also check the Netsparker pricing page. We have reduced the license renewal prices by more than 40% and are also giving a discount of 35% to anyone who buys the Netsparker license for 3 years. If you need more information about the new Netsparker pricing or other discounts we give for multi seat licenses, just drop us an email on sales@netsparker.com.

An official Netsparker version 3 press release is available here.

Upgrading Your Netsparker to Version 3

If you are already using Netsparker, a pop-up window with the upgrade details will show up the next time you run Netsparker. You can also click Help > Check for Updates at any time to check for updates manually.

If you have problems with the upgrade or any product related questions, get in touch with our support by sending us an email on support@netsparker.com

Download the Netsparker Version 3 Trial

It only takes a couple of minutes to launch a scan with Netsparker Web Application Security Scanner and identify web vulnerabilities on your websites and web applications.

Download the trial edition of Netsparker and check if your web applications are vulnerable to cross-site scripting and other web vulnerabilities such as SQL Injection today.

Why Web Vulnerability Testing Needs to be Automated

For those doing business in the 21st century, automation is the name of the game. It applies to more general areas of business such as manufacturing and inventory control but it also applies to more technical areas of IT such as web application security. Any time a business process is not automated, it costs more time, effort, and money – resources that cannot be squandered.

When people are involved in accomplishing business tasks, especially skilled labor in the case of web application security and penetration testing, it creates a considerable burden on everyone. Looking at web vulnerability testing, many resources are required during:

  • Project scoping
  • Information gathering
  • Scanning for web application vulnerabilities
  • Vulnerability identification and validation
  • Reporting
  • Remediation

In any given organization these factors typically involve numerous people: developers, QA analysts, project managers, network administrators, web application developers, information security managers, auditors, and management. Even third-party vendors are often pulled into web security assessment projects. With this many highly paid staff members working toward a common goal, every business has to automate as much as possible to avoid confusion and expensive bills.

The question becomes: Why? Why is automation important? Every situation is different but there are some commonalities. For starters, you run the risk of duplicated effort when redundant tests are performed. When you have numerous complex web applications, as most of today’s online businesses do, this can add up to a considerable amount of unnecessary work. Another issue that management doesn’t fully understand is that there is not enough knowledge or time to perform manual web vulnerability testing on all web applications all the time. No one is that smart, much less that good at time and project management. If web application security testing is not automated using a proven automated web application security scanner that can test for thousands of potential security flaws, some, if not all, of the serious web application vulnerabilities can be overlooked. Web security testing then goes from being a seemingly benign IT project to a serious business liability.

For example, imagine a custom-made, web-based enterprise resource planning (ERP) system. Such a system would have hundreds, if not thousands, of visible entry points (attack surfaces), and many others “under the hood”, all of which need to be checked for web application vulnerabilities such as SQL injection and cross-site scripting.

Using real-life numbers, imagine the ERP system has 200 entry points that need to be checked against 100 different web application vulnerability variants. That means the penetration tester needs to launch at least 20,000 security tests. If every test took just 5 minutes to complete, it would take a web security specialist around 208 eight-hour business days to complete a proper web application security audit of the ERP system.
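The arithmetic behind these figures is simple enough to verify; the following Python sketch just reproduces the back-of-the-envelope calculation described above (assuming eight-hour business days).

```python
# Back-of-the-envelope calculation for the hypothetical ERP audit described above.
entry_points = 200         # visible inputs / attack surfaces
variants_per_input = 100   # vulnerability variants to check per input
minutes_per_test = 5       # time spent on a single manual test

total_tests = entry_points * variants_per_input       # 20,000 individual tests
total_hours = total_tests * minutes_per_test / 60     # roughly 1,667 hours of work
business_days = total_hours / 8                       # roughly 208 eight-hour days

print(total_tests, round(total_hours), round(business_days))
```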

An automated web application security scanner such as Netsparker can scan a much bigger custom ERP system against a much larger number of web application vulnerability variants in a matter of hours. And unlike a human, an automated security scanner will not forget to scan an input parameter or get bored while trying different variations of a particular attack.

When doing a manual web application security test, you are also restricting the test to the vulnerabilities known to the penetration tester. When using an automated web vulnerability scanner such as Netsparker, on the other hand, you are making sure that all parameters are checked against all types of web application vulnerability variants. By using Netsparker you are also ensuring that no false positives are reported in the scan results, so you do not need to allocate time to validating detected vulnerabilities.

Popular information security studies also underscore the importance of vulnerability testing automation. Year after year this research points to the same underlying causes of information risk, such as insufficient resources, lack of visibility, and uninformed management. Each of these elements can be addressed by automating security testing processes.

There is no perfect way to test for web security vulnerabilities. However, one thing is for sure: going about it manually and relying on staff expertise alone can be an exercise in futility that you cannot afford, because it costs your business a lot of money and some web application vulnerabilities may still go undetected. Do what is best for your business: integrate automation into the web vulnerability testing discussion and into the web application software development life cycle. When using an automated web application security scanner you find more vulnerabilities.

There are issues where automation will not help and manual testing needs to take place, but you don’t want your security team checking an input for 100 different possible issues one HTTP request at a time, or trying to analyze the output of a fuzzer. Free your team members’ time so they can focus their efforts on the tasks that will actually benefit from their expertise.
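To make the contrast concrete, here is a minimal sketch, not Netsparker's implementation, of what checking a single input against many issue variants looks like once it is scripted instead of being done one manual HTTP request at a time. The target URL, parameter name, payload list and error signatures are all hypothetical placeholders.

```python
import requests  # third-party HTTP client (pip install requests)

TARGET = "https://staging.example.com/search"  # hypothetical test target that you own
PARAM = "q"                                    # hypothetical input parameter
PAYLOADS = ["'", "\" OR \"1\"=\"1", "<script>alert(1)</script>"]  # tiny sample set
SIGNATURES = ["SQL syntax", "ODBC", "<script>alert(1)</script>"]  # naive issue indicators

for payload in PAYLOADS:
    response = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    # Flag responses that echo the payload unencoded or leak database error strings.
    if any(signature in response.text for signature in SIGNATURES):
        print(f"Possible issue with payload {payload!r} (HTTP {response.status_code})")
```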

Web Application Security Testing should be part of QA Testing

A typical software and web application development company has a testing department, or a QA (quality assurance) team, that constantly tests the software and web applications developed by the company to ensure that the products work as advertised and have no bugs. Larger software companies also invest hundreds of thousands, if not millions, of dollars in software to automate some of the testing procedures and ensure that the product is of high quality.

Web Applications Still have a lot of Bugs

So how come websites and web applications are still getting hacked every day? For example, just a couple of days ago the Istanbul Administration site was breached by a hacker group called RedHack via an SQL injection (more info). In March 2013, Ben Williams released a white paper called “Hacking Appliances: Ironic exploits in security products”. The whitepaper includes details about web application vulnerabilities found in the administrator web interface of several security gateway devices that could be used to bypass the security device and gain administrative access. The whitepaper can be downloaded from here (pdf). In April 2013 a remote code execution vulnerability that allows a malicious hacker to execute code on the victim’s web server was identified in two of the most popular caching WordPress plugins (more info). And the list goes on and on.

How come these types of bugs (also known as development mistakes), which when exploited could put customers’ data and business at risk, are not identified by the testing department or QA team?

Only the Functionality of Web Applications is Tested

While software companies have departments dedicated to identifying functionality bugs, most of them do not have any sort of security testing procedure in place. In fact, when a developer adds a new button to a web interface, there are typically documented procedures that the testing department follows to test the functionality of the button, but there are no procedures to test the functionality underneath the button and to check whether it can be tampered with or exploited.

This mostly happens because many companies still differentiate functionality (QA) and security testing, or the management is unaware of the implications an exploited security issue might have on the customers’ business.

Web Applications Should be Checked for Vulnerabilities during SDLC

Security testing of web applications, and of any other sort of software, should be included in the software development life cycle (SDLC) alongside the normal QA testing. If a security vulnerability is found at a later stage, or by a customer, it is an embarrassment for the business and it will also cost much more to fix. So just as developers are expected to do unit testing when they write new code for a new function, the testing department should be expected to test and confirm that the new function is secure and cannot be exploited.

Even if the developers follow good secure coding practices, or say that they do not need a specific tool for security testing, rigorous web application security testing should still be performed by the testing department to ensure there are no web application vulnerabilities.

Developers typically say that they follow good coding practices, yet when they finish they still check their own code several times, and the company still invests money and builds departments to test that code. So why not check the code for web application vulnerabilities as well? Unless the developers are seasoned hackers, their code should never be released to the public before it has been through a proper security audit.

After all, a security vulnerability is like any other software bug. For example, if an input field in a web application allows the user to enter his name, the developer restricts the input of that field to letters only. The testing department will check that only letters are allowed as input and that the input is stored in the right place. While at it, the team might as well check whether special characters are allowed, or whether encoded input is executed by the web application. If it is, then it is a bug that falls under the security category.
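As a hypothetical illustration of folding such a check into ordinary QA work, the sketch below extends a functional test of a name field with two security-flavoured assertions. The validate_name and render_profile functions are stand-ins for whatever validation and display code the application actually uses.

```python
import html
import re

def validate_name(value: str) -> bool:
    """Stand-in for the application's validation rule: letters only."""
    return bool(re.fullmatch(r"[A-Za-z]+", value))

def render_profile(name: str) -> str:
    """Stand-in for the page that later displays the stored name."""
    return f"<p>Hello {html.escape(name)}</p>"

# Functional QA check: valid input is accepted.
assert validate_name("Alice")

# Security-flavoured QA checks: special characters and script payloads are rejected,
# and whatever is displayed comes back encoded rather than executable.
assert not validate_name("Robert'); DROP TABLE users;--")
assert not validate_name("<script>alert(1)</script>")
assert "<script>" not in render_profile("<script>alert(1)</script>")

print("All checks passed")
```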

Automatically Scanning for Web Application Vulnerabilities

If the developers and testers are not into web application security, don’t fret. QA team members can use an automated web application security scanner to detect vulnerabilities in the code. Automated web application security scanners allow users to detect vulnerabilities in web applications even if they are not security experts. Such software helps the team understand the vulnerabilities and trains developers to write more secure code in the future. By automating web application security testing you also save money and time, and ensure that no vulnerability is overlooked, as explained in the article Why Web Vulnerability Testing Needs to be Automated.

Developing Secure Web Applications and Software

As we have seen, there are plenty of reasons and several advantages to including security testing of web applications alongside functionality testing. You can never assume that a web application is secure, in the same way that you can never assume that it functions properly, which is why companies invest in testing and QA teams. After all, web application vulnerabilities are normal software functionality bugs!

Netsparker 3.0.5.0 Released

This is a minor update to Netsparker Standard / Professional editions which contains minor bug fixes and enhancements for vulnerability database and fingerprinting tables.

Improvements

  • Updated vulnerability database
  • Updated fingerprinting tables for WordPress and Movable Type
  • Improved the language used in knowledge base templates

Bug Fixes

  • Fixed a bug to prevent auto update message dialog when the auto update setting is disabled
  • Fixed a bug in meta tag parser to match the correct generator version

Upgrade your Netsparker

If you have a valid Netsparker Professional or Standard license then all you need to do is click "Help > Check for Updates" to update to Netsparker 3.0.5.0.

Netsparker 3.0.7.0 Released

This is a minor update to Netsparker Standard / Professional editions which contains minor bug fixes and enhancements.

Improvements

  • Updated OWASP Top Ten 2010 classifications for SVN and CVS vulnerabilities

Bug Fixes

  • Fixed a critical bug where vulnerability template rendering was broken on systems with IE8
  • Fixed a bug where some vulnerabilities were not reported due to a race condition
  • Fixed a bug that occurred when a scan file is imported and the related scan policy file is missing
  • Fixed a syntax error in the Cookie Not Marked As Secure vulnerability template

Update

If you have a valid Netsparker Professional or Standard license then all you need to do is click "Help > Check for Updates" to update to Netsparker 3.0.7.0.

Should you pay for a Web Application Security Scanner?

Solving the Commercial vs Non Commercial (free) Software Dilemma

In today’s commercial world nothing is available for free, or so most of us think. Within 10 minutes of searching on the internet for a web application security scanner for web application penetration testing, I found more than 50 non commercial web application security scanners, i.e. scanners that are available for free. While some only scan for specific vulnerability classes such as SQL Injection or Cross-site Scripting, others seem to be fully blown security scanners.

So why should I pay for a web application security scanner when so many are available for free? How can I justify the cost of buying a commercial web security scanner to management?

Both commercial and non commercial web application security scanners have their pros and cons. In this series of blog posts we will look at several aspects and requirements you would typically look for when owning such software.

Automation of Web Application Security Scanning

Commercial web application security scanners have built-in automated crawlers and scanners, so scanning a web application for vulnerabilities is a very straightforward process. Most non commercial web application security scanners do not have automated crawlers and scanners, and require a lot of manual intervention to detect vulnerabilities in web applications. For example, most of them have to be configured as a proxy server to capture the traffic while you manually browse the web application. Afterwards you have to manually trigger a number of security scripts to analyse the recorded session.

Although it is not guaranteed that a commercial automated web application security scanner will crawl all areas of a web application and  identify all attack surfaces, automating the process is always more efficient than manually crawling a web application. What are the guarantees that a penetration tester will access all areas and inputs in a web application and have them all recorded? Also, sometimes it is almost humanly impossible to manually crawl a large web application such as a modern ERP system.

Further reading about automation: If you would like to read more about automating web application vulnerability finding, the blog post Why Web Vulnerability Testing Needs to be Automated highlights and explains the benefits of automating web application vulnerability findings.

Ease of Use and Documentation

As explained in the previous section, some of the non commercial web application security scanners require you to go through several procedures to identify vulnerabilities in a web application. And this is just the tip of the iceberg. Some of them have several prerequisites or run in a very specific environment, so you might encounter several difficulties even when trying to install them. And then they include a myriad of options or only run in command line.

By contrast, an out of the box installation of a commercial web application security scanner boasts an easy to use graphical user interface and can automatically crawl and scan a wide variety of web applications. All you need to do is fire up a user friendly wizard, configure a few options, and you are ready to start the scan within minutes. Even though most of the procedures are automated, commercial scanners still have a good number of options that allow you to tweak the scanner when needed. And if you do not know how to do something, you can always RTFM :)

Continuous Development and Advancements - Scan for the Latest Security Checks

How many times have you started a project only to drop it at a later stage because you do not have time for it? The same happens with many non commercial projects and web application security scanners. Such scanners are typically developed by very talented developers and penetration testers as a hobby, or to automate their own work. After some time they end up online and some of them become very popular, but most of them are rarely updated or are discontinued altogether. Maybe the day pizza is free and there is no rent to pay, such projects will survive for much longer.

Commercial web application security scanners come from a very different background. They are developed and maintained by financially secure software companies and successful businesses. Large amounts of money are invested in these type of projects. Software companies can afford good developers and can also invest in their own research departments. This type of financial stability guarantees a product built using the latest cutting edge technology that can detect the latest types of web application vulnerabilities. As long as the business is profitable, the product can be maintained and will continue to improve.

In fact while many commercial web application security scanners are updated at least once a month, unfortunately most of the non commercial web security scanners are not even updated once a year.

Ability to Crawl and Scan Custom Web Applications

Modern web applications are developed using a wide variety of development frameworks such as PHP, .NET, ASP, ColdFusion, Ruby, Java and more. They also include several security features such as anti-CSRF mechanism and use different types of authentication mechanism. And since SEO has become vital for all types of online businesses most of them have SEO features such as search engine friendly URLs (URL Rewrite) and custom 404 error pages.

While it is impossible for any commercial or non commercial scanner to support all of the above development frameworks and customizations, an out of the box installation of a commercial scanner supports a wide range of development platforms. For example, Netsparker has anti-CSRF token support to automatically scan websites which have built-in CSRF attack protection. Others automatically detect URL rewrite rules and adapt to the scenario, thus reporting fewer or no false positives*1.

From the crawling and framework support point of view, non commercial web application security scanners work a little bit differently. While they might not have specific support for a particular web application feature or framework, because many of them require manual intervention they can be used to crawl sections of the web application and detect “low hanging fruit” vulnerabilities.

Latest Web Application Vulnerabilities Checks

Most non commercial web application security scanners have a lot of shortcomings; limited time and resources hold back the development of such projects and the research of new vulnerabilities. Therefore most non commercial scanners only detect a specific vulnerability class, or a limited number of them.

Commercial web application security scanners can detect a wider range of web application vulnerabilities because they have good financial backing. Most security software companies developing security scanners have a budget that is invested in researching new vulnerabilities and security trends. Some also have their own testing departments where engineers constantly launch vulnerability scans against vulnerable and non vulnerable web applications to ensure a stable vulnerability detection rate. The results of such exercises can also be used to advise web application developers of vulnerabilities in their web applications. The blog post Are Hackers a Step Ahead? An Analysis using Web Application Vulnerabilities contains statistics of vulnerabilities discovered during such tests.

Professional Software Support

Isn’t it frustrating when you encounter a problem and support is not available? I myself used to rely religiously on non commercial software but after several pitfalls, I had to call it a day. When I encountered a problem, support was very limited, or in some cases it simply didn’t exist. Even worse, sometimes you find out that someone else encountered the same problem months ago and no one has answered their forum post yet. That is not good enough in anyone’s book.

If I encounter a problem while performing a penetration test at a customer’s site, I want to solve it as soon as possible. I cannot ask the customer to wait until a solution is found; it is simply not professional. And what if a solution is never found? A web application security scanner is the type of software that businesses depend on. You can download a non commercial email client and change it anytime, or live without it for a couple of days. But web application security scanners are used to secure business web applications that are constantly evolving and under attack, by penetration testers who have strict deadlines and cost a lot of money. One does not change a web security scanner overnight after just ten minutes of research.

That is why professional support and technical documentation are a must. I would gladly pay for support even when using non commercial software. Support is a lifesaver when you are stuck with your back against the wall.

Integration with Development Systems and Professional Reporting

The scope of a web application security scanner is to detect vulnerabilities and not to generate reports or integrate with other systems. Granted, but when working as part of a team or with customers, reports have to be generated and scans have to be triggered automatically by other systems; typically scanners are integrated in the software development lifecycle. So even if most of us technical people don’t like it, integration and reporting are vital features that make a web application security scanner a complete security solution. The more collaboration and integration features a security scanner has, the more the chances of it being integrated in large development projects and penetration testing teams.

Businesses quickly understood the industry’s needs and grasped the opportunity to sell more copies of their scanners. Nowadays most commercial web application security scanners have their own reporting engine or can export scan results in XML format so they can be parsed and imported into bug tracking systems. Some of them can also be integrated with web application firewalls or other types of defense systems.

And this is another area where non commercial web application security scanners fail to deliver. Again, this is not the developers’ fault; it could be down to limited resources, or lack of support from the major players with which they have to integrate.

Conclusion

As we have seen in this series of blog posts, it is impossible and somewhat unfair to compare non commercial and commercial web application security scanners. It is like taking a race car to a racetrack and racing it against a standard road car. This does not mean that non commercial web application security scanners are inferior to commercial ones; it just means that they were built for a different purpose and thus cannot be compared.

Large companies with dedicated penetration testing teams, or large web application development companies, want an easy to use web security solution that can be easily integrated into their SDLC, helps their staff finish the job quickly, generates professional reports, and automates most of the recurring, boring processes to reduce the headcount. Businesses do not want to invest money in training their staff on how to use software, or to hire someone solely to run it.

On the other hand, if you are experimenting around and willing to learn more about web application security (and have the time for it), or working on small projects, most probably a non commercial web application security scanner is the right option for you.
 
*1 To know more about false positives and their impact on web application security scanning, read The Problem of False Positives in Web Application Security and How to Tackle Them.

Oakland University uses Netsparker to Protect its Web Applications from Hacker Attacks

Oakland University needed to protect its web applications from security flaws, programming errors and other threats. It needed a solution that was compatible with its existing security audit tools and a variety of web development frameworks. The university chose to use Netsparker Web Application Security Scanner, a market leading solution that continuously scans and protects web applications from the rising threat of malicious attacks.

Safeguarding the university’s web applications from attack

Oakland University is a highly respected public university in Oakland County, Michigan. It has nearly 20,000 students and runs an extensive range of bachelor’s and graduate programs, offering professional, master’s and doctoral degrees. It is the only major research university in Oakland County, supporting major research institutions including the Center for Biomedical Research, the Center for Robotics and the renowned Eye Research Institution.

The Oakland University William Beaumont School of Medicine is a collaborative, diverse, inclusive, and technologically advanced learning community, dedicated to enabling students to become skillful, ethical, and compassionate physicians, inquisitive scientists who are invested in the scholarship of discovery, and dynamic and effective medical educators.

The university has a number of websites and web applications used daily by university staff and students. This includes student portals, faculty web applications and the Oakland University’s official websites. These provide core services vital to the university’s daily running. If they were hacked or went down due to a programming error or malicious attack, confidential information could be at risk of being lost or stolen. A systems failure would also impact staff and students who rely on the university’s online services to manage their daily lives.

Dan Fryer, a Senior Windows System Engineer, and Dennis Bolton, a Network Security Analyst, are responsible for managing the security of the university’s web servers. These servers host websites and web applications built in multiple web development frameworks, including Java, PHP, .NET, Ruby, Perl and Python, which run on both IIS and Apache Tomcat web server technology.

Fryer and Bolton needed a web application security solution that could be set up and left to automatically scan for web application vulnerabilities. With an already heavy workload, the solution needed to be quick and easy to manage. It also needed to be compatible with the university’s multiple web development frameworks and its existing security audit tools. The Netsparker Web Application Security Scanner ticked all the boxes.

Web Application Security Solution

After assessing the available options, Fryer and Bolton decided to use Netsparker, the only false positive free web application security scanner on the market. It has a built-in exploitation engine that confirms vulnerabilities, and it can be set up to automatically test all the university’s web applications for flaws that leave them exposed to hackers.

With full support for AJAX and JavaScript, Netsparker is fully compatible with all the university’s web development technologies. It is also fully up-to-date on all the latest potential security flaws and vulnerabilities that can be exploited by hackers.

“Since the university’s web applications are frequently changing to adapt to the students’ and university’s needs and because malicious attacks are becoming more sophisticated, it is important that we keep on scanning all of them frequently for the latest type of security threats to ensure that no vulnerabilities are left undetected,” said Fryer, “We chose Netsparker because it is more tailored to web application security and has features that allow the university to augment its web application security needs.”

Fryer now uses Netsparker Web Application Security Scanner to run monthly scans and also does web application security checks on demand. Once a scan is complete, reports on confirmed flaws and vulnerabilities are generated in PDF or XML format. These are handed to the university’s IT security team (on which Bolton serves) for analysis and to advise on fixes. The IT team then rescans all of the university’s web applications to confirm that reported vulnerabilities are fixed and the university’s web applications are secure.

A ‘hands-off’ solution that saves time and offers reassurance that web applications are secure

Checking for and eliminating web application security threats can be a very time consuming and repetitive task. Netsparker, however, provides Oakland University’s IT team with a host of user friendly features that make the process quick and easy to manage.

Scans are scheduled and left to run automatically, while at-a-glance reporting and actionable insights ensure the university’s IT team knows exactly what to do. There is no time wasted checking for web application vulnerabilities manually or having to figure out a solution; all the information is provided for them. This has enabled the university’s IT team to gain more time to focus on other tasks, with the reassurance of knowing that the university’s web applications are secure and free from vulnerabilities at all times.

“We chose Netsparker since it is very easy to use. It helped our team increase the visibility into the security of our web applications,” explained Fryer.

About Oakland University

Oakland University is a top-rated academic institution in southeast Michigan offering 132 bachelor’s degree programs and 124 graduate degree and certificate programs. As a state-supported institution of higher education, Oakland University has a three-fold mission: It offers instructional programs of high quality that lead to degrees at the baccalaureate, master’s and doctoral levels, as well as programs in continuing education; it advances knowledge and promotes the arts through research, scholarship, and creative activity; and it renders significant public service. In all its activities, the university strives to exemplify educational leadership in a diverse and inclusive environment.

About Netsparker Web Application Security Scanner

Netsparker is an industry leading automated web application security scanner developed by Netsparker. Netsparker management and engineers have more than a decade of experience in the web application security industry that is reflected in their product. Netsparker is a very easy to use web application security scanner that automates most of the web application security scanning. Since an out of the box installation of Netsparker is able to scan a wide variety of web applications, web security experts, penetration testers and QA people do not need to spend countless hours tweaking and configuring the security scanner. Netsparker is revolutionising web application security by being the only web application security scanner to automatically verify detected web vulnerabilities, thus reporting no false positives.

About Netsparker Ltd

Netsparker is a young and enthusiastic UK based company. Netsparker is focused on developing a single web security product - the false positive free Netsparker Web Application Security Scanner. Founded in 2009, Netsparker is one of the leading web vulnerability scanners and is used by world renowned companies such as Samsung, NASA, Skype, ING and Ernst & Young.


Shared Hosting and Web Application Security - The Opposites

Shared Hosting is Simple but Lacks Flexibility

So you are feeling entrepreneurial and want to start a fresh, new website for your idea. Great! Starting a website these days is very easy, and various hosts offer a plethora of options, ranging from WYSIWYG wizard website generators to the standard old-school style of "throw your website files here" data dump. These are all well and good, but they come with some rather serious problems and risks.

First and foremost, these options are very simple and do not afford you much flexibility.  Say your website starts out simply as a restaurant menu page, and as you grow you find the need to add a take-out shopping cart or online reservation handler.  Oops!  That shared hosting solution only allows for very limited software (and many do not even allow for anything beyond static images and web pages for security purposes).  There is, however, a far more sinister problem with shared hosting, and that is security -- or rather, lack thereof.

Shared Hosting Means Shared Everything, Including Hack Attacks

The reason shared hosting is called "shared" is because you are sharing the web server that hosts your website with dozens, hundreds, perhaps even thousands of other websites. Remember how I said many shared web hosts do not allow for anything beyond static images and web pages? The reason is that there is not a whole lot a shared web host can do, in terms of web application security, to explicitly separate all the websites they host. However, even limited software execution does not inherently protect the websites hosted.

For example, your little restaurant website could be hosted on the same server as a website that has irritated a particular hacking group. Hacking groups are infamous for using distributed denial of service (DDoS) attacks on websites they dislike. A DDoS attack is not selective to a single website hosted on a shared server, so it can take down the entire server, including your website and web applications. You have no way to protect against this and, indeed, no way of knowing you were on a shared server with a targeted website.

Less Simple, More Effective, Complex Hosting: More Variables Means More Problems

Say that shared web host lets you and others use software on your website (such as PHP or Perl).  That is great for you, because now you can probably run that take-out shopping cart or online reservation software you have grown into needing.  But wait!  Now you're handling personally identifiable information, and you want this to remain secure.  So you do everything you can to ensure your website is not vulnerable to malicious web attacks... but have all the other websites on your shared web server done the same?  Probably not.

When a web server runs scripting software, it does so as a single 'user'.  To explain this simply, this is done to partially prevent, by file system permissions, unauthorized access to the web server from compromising the rest of the server itself.  However, that does not mean each individual's website is secure, as often all websites and web applications running on the web server run under the same web server user.  Since a web host running the Apache web server typically runs under the 'apache' user, all websites are run under that user.  So if foobar.com's code is insecure, it does not matter how secure your shopping cart or online reservation system is, your data is still easily accessible via a hacking attack.
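A minimal sketch of what this means in practice, using purely hypothetical paths: when every hosted site's code executes as the same 'apache' user, a script belonging to one site can simply read a neighbouring site's configuration file, because file permissions are evaluated against that shared user rather than against individual site owners.

```python
import getpass

# Hypothetical configuration file belonging to another customer on the same server.
NEIGHBOUR_CONFIG = "/var/www/foobar.com/config.php"

print("This script runs as OS user:", getpass.getuser())  # e.g. 'apache' on a shared host

# Both sites execute under the same user, so nothing at the file-system level
# prevents this read; any database credentials in the file are exposed.
with open(NEIGHBOUR_CONFIG) as config:
    print(config.read())
```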

To combat this, web hosts typically set up individual user accounts for each hosted website and install what is called "su" software (standing for switch user), such as suPHP for PHP and suEXEC for Perl. This type of software limits the access of the website software to only the website it belongs to. This, however, assumes that the web host is properly configured and the su software is secured (nearly all server security vulnerabilities are caused by poor configuration). Furthermore, do not forget that you need a database for your shopping cart and online reservation systems, and that, too, must be secured by the web host.

What Are My Other Web Application Hosting Options?

Many reputable and well-aged web hosts have grown wise and experienced enough to learn from past mistakes and ensure their systems are as secure as possible. But they still cannot prevent every conceivable variable, and shared web hosting opens the door to many, perhaps too many, web security vulnerabilities.  So the question remains: How do I host my website, but do so securely without shared hosting?  Unfortunately, when you approach this end of the spectrum, you lose that "easy" aspect of shared hosting and enter into more complicated arenas, such as virtual private servers (VPSs).

For example, if your website runs on WordPress (as many do), WordPress itself suggests a few places to host. These have been time- and hacker-tested and proven about as reliable as can be, while still retaining the ability to gently customize your WordPress-based website. If, however, you need more (again, we revisit the shopping cart or online reservation systems), perhaps it is time to invest in a VPS.

Not Simple, Very Effective and Highly Complex Hosting: Now You Are Playing With the Big Boys

A virtual private server is essentially an entire bare server (think of the ability to do your own shared hosting, if the VPS's resources allow it). The 'V' stands for 'Virtual', meaning that it is not technically a steel-and-silicon server humming away in a data center, but rather one of many emulated servers on a steel-and-silicon server. Think of it as shared whole servers rather than simply shared website hosting. That is about as deep as we will go with VPS descriptions, as anything more would be its own article of pages and pages of explanation. But we digress...

VPSs are very cheap.  Some websites showcase incredibly inexpensive VPS deals, and a few even go the extra step to be extremely transparent about those deals (describing past community experience, company establishment, and so forth), so cheap does not necessarily mean unreliable.  If you grow even larger still, you can even begin looking into dedicated or co-located server hosting, but that, again, is discussion for another article.  However, bear in mind that once you go the VPS route or above, all of the web application security falls on your shoulders, as well as the running of the web server, database server, and so on.  Everything comes at a price.

Wrap-Up, To Go

All in all, there will be at least some risk no matter how you host your website.  Shared hosting is easy, but puts you at the mercy of your web host and whomever else they host on your server. VPS hosting puts the onus of everything on you, but effectively removes others from the equation.  At the end of the day, only you can decide what is best for your website's hosting, but keep in mind that every option requires some level of sacrifice.  Now quit worrying about your website and get back to your restaurant.

Getting developers on board to transition from part of the problem to part of the process

Web application security often focuses more on software than it does on people. That can be a dangerous approach. Why? Because at the root of every security success or failure is a person or a team of people, namely software developers.

Your developers are key players in the web application security equation. They are often the unsung heroes who prevent many security problems from ever occurring, or who close down web vulnerabilities once they are identified. Yet in the real world they are often portrayed as a large part of the security problem. It doesn’t have to be that way.

Many, arguably most, software developers are analytical thinkers. They see business issues and technical challenges from a logical perspective. This approach to problem solving is exactly what’s missing – and what we need more of – in order to improve web application security over the long haul.

So how can you get, and keep developers on board with web application security once and for all? It’s not that difficult. Here are four things you can start doing today:

  • Explain the "why" of application security in terms they will understand. It’s not about bits and bytes or encryption or input validation but rather the business. Show them the standards and regulations (i.e. NIST 800-53, PCI, or OWASP Top 10) that must be complied with. Explain that the business has to produce secure software for reasons X, Y, and Z, and here’s how they impact you in your position as a developer.
  • Encourage developers to focus on specific areas of security that have been the most problematic for your organization and others. The OWASP Top 10 2013 is a great place to start but you’ll have other areas that are unique to your business. Share security research reports and statistics with them to show the impact web security flaws can have on your business. Find out your own unique pain points and come up with ways for management to incentivize web application developers to make sure those pesky web application security vulnerabilities aren’t introduced into your web applications.
  • Find someone on the development team that you know is willing to take the lead on software security. Some developers will be better at this than others. It should be obvious who the best person is to help evangelize security initiatives within the organization. Work with this person so you can both demonstrate that security matters and you’re doing what it takes to minimize your business risks.
  • Share hacking tools and techniques. Your own web vulnerability scanner is a great tool for showing how vulnerabilities are uncovered and exploited. Beyond that, a simple web browser combined with a malicious mindset can do wonders for things such as manipulating the application’s login mechanism and workflow/logic. Once developers ‘get’ the what, why, and how of application exploitation they can change their own mindset and approach to how they develop software.

The growing focus on web application security underscores the importance of developer involvement in the application security process. Don’t be afraid to step up and make things happen. If you don’t, odds are no one else will until they’re forced to, and that’s not good for business.

Netsparker 3.0.12.0 Released

This sixth version 3 update is a minor update to the Netsparker Standard and Professional editions which contains new signatures in the vulnerability database of known applications.

Improvements

Updated vulnerability database with several new Drupal and PHP security checks.

Updating Netsparker

Launch Netsparker and click “Check for Updates” from the Help drop down menu to update Netsparker to version 3.0.12.0.

14 Years of SQL Injection and still the most dangerous vulnerability

Ever since the advent of the computer, there have always been people trying to hack them.  William D. Mathews of MIT discovered a flaw in the Multics CTSS password file on the IBM 7094 in 1965; John T. Draper ("Captain Crunch") discovered a cereal toy whistle could provide free phone calls around 1971; The Chaos Computer Club, the Cult of the Dead Cow, 2600, the infamous Kevin Mitnick, even computing godfather Alan Turing and his World War II German Enigma-cipher busting Bombe, all and more have participated in hacking computers for as long as computers have existed.

Through the 1980s and 1990s, the world began to see the advent of the personal computer, the internet, and the world wide web.  Telephone lines in millions of homes began screaming with the ear-piercing tones of dial up connections.  AOL, CompuServe, Juno, and more began providing home users with information portals and gateways to the web.  The information age was born; as was the age of information security (and, indeed, insecurity).

As websites began to form by the thousands per day, so did the technology behind them. Websites went from merely being static pages of text and images to dynamic web applications of custom-tailored content. HTML, CSS, and JavaScript grew into bigger and better systems for stitching content together in the browser, and the browser itself evolved, through Internet Explorer, Netscape, Firefox, Chrome, and more. PHP and Perl CGI, among others, became the languages of choice for backend website scripting, generating in real time the HTML and other elements browsers would render. Database systems came and went, but MySQL became the most popular. In fact, a lot of things came and went -- Dot-Com bubble, anyone? -- but one thing always remained: web application security.

The (In)Security Watchmen - OWASP and Others

In December 2001, the Open Web Application Security Project (OWASP) was established as an international not-for-profit organization aimed at web security discussions and enhancements. For practically its entire existence, OWASP has kept track of perhaps every type of hack that could be done. Everything from social engineering, poor authentication systems, cross-site scripting, SQL injection, general software vulnerabilities, and more, OWASP kept track of and encouraged the web community to continually secure everything as best as possible. As with the growth of the world wide web, things came and went, and with the efforts of OWASP and its participants the hacks that were popular were no exception. However, of all types of security intrusions, almost the only one that has constantly and consistently remained in the top ten is injection (usually, and almost exclusively, SQL injection).

An injection is defined by OWASP as occurring "when untrusted data is sent to an interpreter as part of a command or query." Typically, this grants an attacker unauthorized access to data within a database through a web application, or grants them the ability to insert new or alter pre-existing data. It happens because, quite simply, the web application inserts user input directly into a database query without any type of sanitization.
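A minimal sketch of the difference, using Python's built-in sqlite3 module purely for illustration: the first query concatenates user input straight into the SQL text and is injectable, while the second passes the same input as a bound parameter, so it can never alter the structure of the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "bob' OR '1'='1"  # attacker-controlled value

# Vulnerable: the value is concatenated directly into the query text.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("Concatenated query returned:", rows)   # every user is returned

# Safe: the value is sent separately as a bound parameter.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print("Parameterized query returned:", rows)  # nothing is returned
```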

Immediately, one thinks, Why would anyone allow unsanitized data to enter a database query?  Indeed, if we had an answer for this question we would probably be receiving billion-dollar U.S. Department of Defense contracts right now.

Interlude: What is a SQL Injection?

We want to pause before we continue further and ensure you, our treasured reader, understand what a SQL injection is and the technical aspect behind it. For purposes of brevity and focus, we will assume from here on that you understand the concept of a SQL injection, how it works, and basic ways to prevent it.

If not, first read our article What you need to know about SQL Injection and keep an eye out for our future publications, as we continue to look into this constant security problem.  It is important that you understand the technical side behind a SQL injection, as it helps to highlight the simplicity and, indeed, the absurdity of the repetition of this security vulnerability.

Resume Play: A History Lesson about SQL Injection

For as long as relational databases have existed, so too have SQL injection attack vectors. Since 1999, the Common Vulnerabilities and Exposures dictionary has existed to keep track of, and alert consumers and developers alike to, known software vulnerabilities. Since 2003, SQL injections have remained in the top 10 list of CVE vulnerabilities, with 3,260 such vulnerabilities recorded between 2003 and 2011.

In 2012 a representative of Barclaycard claimed that 97% of data breaches are a result of SQL injections. Between late 2011 and early 2012, in the space of only about a month, over one million web pages were affected by the Lilupophilupop SQL injection attack. The year 2008 saw an incredible economic disruption as a result of SQL injections. Even the official United Nations website fell victim to a SQL injection attack in 2010.

All these stats (excluding, of course, the CVE count) are from within the past three years. Just three years. It is no surprise that in 2011 the United States Department of Homeland Security, MITRE, and the SANS Institute all named SQL injection the number one most dangerous security vulnerability. So why, after more than 14 years, is it still the number one, seemingly unfixable, vulnerability?

Low Hanging Fruit Vulnerabilities, Or: By Blunder We Learn ... Or Not

In a recent study at Goldsmiths University of London, a group of researchers came to the conclusion that our brains are hardwired such that we as humans just do not (easily) learn from our mistakes.  Perhaps it is simply that developers see and are even fully cognizant of the faults in developing software, but they are mentally incapable of progressing past those recurring gaffes.  Perhaps they are not seeing the proverbial forest for the trees or, specifically, they understand the technical details but not the big picture of applying that knowledge.

As far as low hanging fruit goes, SQL injections present themselves as the most likely guarantee an attacker has of easily gaining illegitimate access to a website or other SQL-backed system, simply based on the probability of success, if 14 years of historical statistics are to be believed.  This is primarily because of the most obvious problem: We are still using relational SQL databases.

Were we to use NoSQL database systems such as MongoDB or CouchDB, none of these attacks would ever happen, or at least nowhere near as easily or as commonly as SQL injections. That is not to say that NoSQL is completely and one hundred percent safe, but rather that it would immediately solve the problem of SQL injections.

But that is not the real cause, nor even a reasonably viable solution. The real reason lies in the fact that software and web application developers do indeed seem to suffer from the University of London's conclusion: that humans cannot easily learn and adapt once they (or, by observation, others) mess up. It also probably does not help that the easiest and most common information on integrating relational SQL databases with common languages, such as PHP, almost never provides the proper and safest methods of integration, so perhaps some of the blame also lies in a near-complete lack of valuable educational material. Combine these with over-worked developers given unreasonable deadlines or requirements, and it makes for a wicked trifecta of low-hanging-fruit vulnerabilities.

Minimal Effort, Easy Reward; Exploiting a “Low Hanging Fruit” Vulnerability

By comparison, a Distributed Denial of Service (DDoS) attack requires careful coordination and leverages hundreds to tens of thousands of compromised systems, whereas a SQL injection attack can be accomplished from a single computer with patience, trial and error, some ingenuity, and a little luck. It really does not take much skill at all to complete a SQL injection attack. In fact, a script kiddie can do so with absolutely no understanding of SQL injections whatsoever, by using any of the freely available tools. They truly are that easy.

Perhaps some SQL injection attacks result from lazy development or malpractices, but in reality, there are three big commonly repeated mistakes that allow SQL injections to occur.  They include the following:

Ignorance of the Least Privilege principle

Quite simple, yet frequently ignored, this principle states that a user, process, or other entity shall have only the least privileges necessary to complete its tasks. For example, an application writing to a log table does not need DELETE or UPDATE privileges on it, and yet database administrators commonly grant a service all possible privileges rather than tailor the permissions to exactly what is needed.
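As a hedged example (assuming a PostgreSQL database and the psycopg2 driver; the connection string, table and role names are hypothetical), granting an application account only what a log table actually needs looks like this:

```python
import psycopg2  # PostgreSQL driver, used here purely for illustration

conn = psycopg2.connect("dbname=shop user=admin")  # hypothetical administrative connection
cur = conn.cursor()

# The application only ever appends to and reads from the log table,
# so its account gets INSERT and SELECT and nothing else.
cur.execute("GRANT INSERT, SELECT ON app_logs TO app_user")

# Make sure nothing broader lingers from earlier, overly generous grants.
cur.execute("REVOKE UPDATE, DELETE, TRUNCATE ON app_logs FROM app_user")

conn.commit()
cur.close()
conn.close()
```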

Conglomeration of Sensitive Data

There is no reason to keep credit card data in the same database as your news articles. There is also no reason to store passwords in plaintext or with poor hashing techniques. If you segment and distribute your data, then your database and its contents become a far less valuable target. Would you keep all your belongings in your home, or would you keep some in your safe deposit box?

Blindly Trusting Unsanitized User Input

This is why SQL injections happen. When user input is not sanitized, an attacker can mount a SQL injection attack, amplified by the two points above. Once an attacker gains access via unsanitized input, readily available sensitive data and unlimited privileges give them everything they need to wreak havoc.

That is it. Those three simple problems have caused over one million web pages to be compromised in under a month, including the United Nations' and several other high profile websites, and have consistently kept SQL injection in OWASP's top ten list. It is almost absurd, given how simple these three problems are, that SQL injections keep happening. So what can developers do?

Later in our series of SQL injection articles, we will go over the more technical details of a SQL injection attack and how to protect against it. But for now, the most important point we can stress is that developers and systems administrators must not fall prey to the three problems we have mentioned. Developers need to ensure they grant only the least privileges a web application needs, segregate or encrypt data so that a database becomes a far less valuable target, and, most importantly, always sanitize user input! These are incredibly simple techniques that, if applied as consistently as SQL injection ranks in the top ten list, could eliminate SQL injection from that list for the first time since it was created.

Automatically Detect SQL Injection Vulnerabilities in your Web Applications

One quick and easy way to check whether your websites and web applications are vulnerable to SQL Injection is to scan them with an automated web application security scanner such as Netsparker.

Netsparker is a false positive free web application security scanner that can be used to identify web application vulnerabilities such as SQL Injection and Cross-site scripting in your web applications and websites. Download the trial version of Netsparker to find out if your websites are vulnerable or check out the Netsparker product page for more information.

Netsparker 3.0.14.0 Released

This seventh version 3 update is a minor update to the Netsparker Standard and Professional editions which contains new signatures in the vulnerability database of known applications and several bug fixes.

Improvements

  • Updated vulnerability database (PHP, osCommerce, Python).

Bug Fixes

  • Fixed a critical bug where some report templates weren't printing all vulnerability instances.
  • Fixed a bug in the DOM/JavaScript parser that caused some ASP.NET postback links not to be crawled.

Updating Netsparker

Launch Netsparker and click “Check for Updates” from the Help drop down menu to update Netsparker to version 3.0.14.0.

SQL Injection–Understanding and Protection

As we mentioned in our previous article on the history of SQL injections, the SQL injection web vulnerability has consistently been on the top ten list of attack styles for a solid 14 years, and it shows no sign of leaving that position any time soon. Furthermore, six years ago our CEO Ferruh Mavituna released his must-have SQL Injection Cheat Sheet. Needless to say, understanding the SQL injection vulnerability and protecting against it is a priority we cannot stress enough.

Simple SQL Injection Example

In this article, we continue from our previous discussion and move on to the technical side of SQL injections, as it is imperative to know exactly what a SQL injection is. Otherwise, how can you protect against it? To completely understand a SQL injection, one must know how it works. Taken from the OWASP example page, consider the following PHP code:

mysql_query("SELECT * FROM accounts WHERE custID='" . $_GET['id'] . "'");

With that query, you could go to the following URL:

http://example.com/app/accountView?id=' or '1'='1

Notice the placement of single quotes in the URL.  This turns the SQL query into the following:

mysql_query("SELECT * FROM accounts WHERE custID='' or '1'='1'");

Protecting Web Applications from SQL Injection

Surely, there must be a way to simply sanitize user input and make SQL injection infeasible. Unfortunately, it is not that simple. There are perhaps an infinite number of ways to sanitize user input, from globally applying PHP's addslashes() to everything (which may yield undesirable results), all the way down to sanitizing "clean" variables at the time the SQL query itself is assembled, such as wrapping the above $_GET['id'] in PHP's mysql_real_escape_string() function. However, applying sanitization at the query itself is a very poor coding practice and difficult to maintain or keep track of. This is why database systems employ prepared statements.
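
To illustrate the query-time escaping approach mentioned above before moving on to prepared statements, here is a minimal sketch of how the earlier vulnerable query could be patched. The mysql_* functions are shown only because the original example uses them; they are deprecated, and prepared statements remain the preferred fix.

// Escape the user-supplied value at the point the query is assembled.
// Every single query needs this treatment, which is exactly why it is hard to maintain.
$id = mysql_real_escape_string($_GET['id']);
mysql_query("SELECT * FROM accounts WHERE custID='" . $id . "'");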

Prepared Statements

When you think of prepared statements, think of how printf works and how it formats strings.  Literally, you assemble your string with placeholders for the data to be inserted, and apply the data in the same sequence as the placeholders.  SQL prepared statements operate on a very similar concept, where instead of directly assembling your query string and executing it, you store a prepared statement, feed it with the data, and it assembles and sanitizes it for you upon execution.  Great!  Now there should never be another SQL injection again.  So why, then, are SQL injection attacks still, for over 14 years, constantly one of the biggest and most prevalent attack methods?

Insecure SQL Queries are a Problem

Simply put, it perhaps boils down to web application developer laziness and lack of education and awareness.  Insecure SQL queries are so extremely easy to create, and secure SQL queries are still mildly complex (or at least more complex than generic and typical in-line and often insecure queries).  In the example above, a malicious hacker can inject anything he or she desires in the same line as the SQL query itself.

Example and Explanation of an SQL Prepared Statement

However, with prepared statements, there are multiple steps.  No major database system operates like printf (with everything occurring within the same statement on the same line).  MySQL, directly, requires at least two commands (one PREPARE and one EXECUTE).  PHP, via the PDO library, also requires a similar stacking approach, such as the following:

// $dbh is assumed to be an existing PDO connection to the database.
$stmt = $dbh->prepare("SELECT * FROM users WHERE USERNAME = ? AND PASSWORD = ?");

// The user-supplied values are bound to the placeholders and are never parsed as SQL.
$stmt->execute(array($username, $password));

At first glance, this is not inherently problematic and, on average, adds only an extra line or two to each SQL query. However, as this requires extra caution and effort on the part of already tired and taxed developers, oftentimes they may get a little lazy and cut corners, opting instead to just use the easy procedural mysql_query() as opposed to the more advanced object-oriented PDO prepare().

Besides this, many developers just stick with what they know to get the job done; they generally learn the easiest and most straightforward way to execute SQL queries rather than showing genuine interest in improving what they know. But this could also be an issue of lack of awareness.

Deeper Into the Rabbit Hole of SQL Injection Security

Say, however, this is not a case of lazy developers, or even a lack of prepared statements -- or, more precisely, say the software itself and its security are out of your hands. Perhaps it is impractical or infeasible to completely secure the SQL queries in the code you use (by one comparison, Drupal has had over 20,000 lines of code committed, WordPress over 60,000 lines, and Joomla! over 180,000 lines), or it may simply be impossible because the code is encoded or obfuscated. Whatever the case, if you do not have control over the code you may need to employ different, more advanced, "outside the box" protections.

Non Development Related SQL Injection Protection

Running Updated Software

First and foremost, always ensure you are running the most up-to-date software you can. If you are using WordPress or any other CMS framework, keep it updated! The same goes for PHP, your web server software such as Apache and nginx, and your database server (MySQL, Postgres, or others). The more recent the version of your software, the lower the chance of it having a vulnerability, or at least a widely-known one. This also extends to your other software, such as SSH, OpenSSL, Postfix, and even the operating system itself.

Block URLs at Web Server Level

Next, you should employ methods to minimize your exposure to potential SQL injection attacks. You could perhaps go for a quick and easy match against common SQL query keywords in URLs and simply block them. For example, if you run Apache as your web server, you could use the following two mod_rewrite lines in your VirtualHost directive, as explained below:

RewriteCond %{QUERY_STRING} [^a-z](declare|char|set|cast|convert|delete|drop|exec|insert|meta|script|select|truncate|update)[^a-z] [NC]

RewriteRule (.*) - [F]

This is indeed quite clever, but it does not protect against everything.  SQL injection parameters can still be passed via POST values or other RESTful-type URLs, not to mention there are tons of different ways to bypass this kind of generic blacklisting.

Securing the Database and Privileges

You can also ensure your database itself is as secure as possible. In the information security field, there exists a concept known as the principle of least privilege. Effectively, this principle states that a user or program should have only the very least privileges necessary to complete its tasks. We already do this practically every day with Linux file permissions, so the concept is in no way foreign, and it is equally applicable to databases. There is probably no reason why your log table should have anything beyond INSERT privileges, so you should not simply GRANT ALL PRIVILEGES because it is easier.
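
As a minimal sketch of what this could look like for a MySQL-backed application, the statements below create a dedicated, INSERT-only account for a logging component. The account, password and database names are hypothetical, and such provisioning would normally be done by a database administrator rather than by the web application itself.

// Connect as an administrative user (a one-off provisioning script, not the web application).
$rootPassword = 'root-password-here';   // placeholder
$admin = new PDO('mysql:host=localhost', 'root', $rootPassword);

// Create a dedicated account for the logging component...
$admin->exec("CREATE USER 'app_logger'@'localhost' IDENTIFIED BY 'use-a-strong-password'");

// ...and grant it INSERT only: it cannot read, modify, delete or drop anything.
$admin->exec("GRANT INSERT ON appdb.logs TO 'app_logger'@'localhost'");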

Segregating Sensitive and Confidential Data

Similarly, you might consider separation of data as a defense in depth approach, rather than conglomerating it into a single source. When you step back and think about it, it is probably not a very wise idea to keep your (hopefully PCI-compliant) customer credit card data stored in the same database as your forums, which are running an outdated and highly vulnerable version of phpBB, right? Not only would the principle of least privilege be very applicable in this situation, but going so far as to entirely separate out your more sensitive data is a very sage approach. To think about it another way, would you keep all your most important paperwork inside your house, or would you keep some in a safe deposit box, too? The same concept applies to sensitive data.

Analyzing HTTP Requests Before Hitting the Web Application

Another option is the use of more detailed firewall systems. Typically this might include some adaptive solution that rides on top of iptables or ipfw (depending on whether you are using Linux or a BSD variant, respectively), or perhaps a reactive Host Intrusion Detection System (HIDS) such as OSSEC, although these are often more complicated than desired and not exactly purpose-built for these uses. Instead, you may wish to utilize a Web Application Firewall, which is designed specifically for these tasks. While there exist several enterprise-level solutions that are both a WAF and a database firewall (sitting between your web application and your database), there are many open-source solutions, such as ModSecurity and IronBee, that perform remarkably well.

The Truth about SQL Injection Web Vulnerability

There exists no real magic wand answer that fixes SQL injections and protects your web applications from them, although PHP is attempting a more brute force approach of its own. As of PHP 5.5, the procedural mysql_* extension is deprecated and will eventually be removed entirely, which will require future software projects to switch either to MySQLi or PDO MySQL in order to continue to work. This is good, since it forces developers into a system that handles prepared statements with relative ease, although it still requires stacking a few operations. However, as many developers operate in a coding golf style, attempting to complete work in as few lines or characters as possible, many will unfortunately still opt for a single-line straight query over a two-line prepare.
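
For completeness, here is a minimal sketch of what such a migration might look like using MySQLi prepared statements; the connection details and table name are placeholders, and the PDO approach shown earlier is equally valid.

// Hypothetical connection details, for illustration only.
$db = new mysqli('localhost', 'app_user', 'app-password-here', 'appdb');

// The user-supplied value is bound to the placeholder and is never parsed as SQL.
$stmt = $db->prepare('SELECT * FROM accounts WHERE custID = ?');
$id = $_GET['id'];
$stmt->bind_param('s', $id);
$stmt->execute();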

There still exist other options to account for any development shortcomings, including but not limited to privilege limitations, data separation, web application firewalls, and many other approaches. But until these options are employed as consistently as SQL injection attacks are, we may never see the day that injection-style attacks escape the OWASP Top 10 list. Be the change that is needed to ensure data and web application security, and keep your databases safe from SQL injections!

ING EURASIA IT Audit Team Chooses Netsparker to Detect Web Application Vulnerabilities

“As opposed to other web application scanners we used, Netsparker is very easy to use and does not require a lot of configuring. An out of the box installation of Netsparker Web Application Security Scanner can detect more vulnerabilities than any other web application security scanner we have used so far” said Perry Mertens, Audit Supervisor within the ING Insurance EurAsia IT Audit team. An international financial institution such as ING Insurance, with offices all over the world, remote employees and so on, depends heavily on web applications. Web applications such as internal portals, external portals, life insurance and investment management websites, as well as online banking web applications, are used to share data between all of the corporation’s offices and employees, and are also used by ING customers and other businesses to access their bank accounts and finances. This means that a lot of focus has to be placed on security.

Automated and Easy to Use Web Application Security Solution Needed

The IT Security Audit team at ING performs audits to ascertain whether numerous websites and web applications are solid and secure. Most of these web applications are custom built, using a wide variety of commonly used web frameworks and underlying infrastructure.

Why ING IT Audit Team Chose Netsparker Web Application Security Scanner

If you are auditing many web applications each year, you have to make sure that the right tools are used to detect all web application vulnerabilities, keep malicious hackers out and make sure the customers’ money is left intact.

The ING EurAsia Audit team chose Netsparker over several other web application security scanners because it is a very easy to use web application security scanner: penetration testers do not need to spend hours configuring it because by default it supports a wide variety of web application technologies and implementations, it generates meaningful reports, and it is very affordable.

“Netsparker Identified More Vulnerabilities and Reported No False Positives”

“When we were evaluating web application security scanners, Netsparker was the scanner that identified most vulnerabilities without requiring any configuration changes. It also identified several SQL injection and cross-site scripting vulnerabilities that other scanners did not identify” said Perry Mertens, Supervisor Auditor at the ING EurAsia IT Audit team.

About ING

ING is a global financial institution of Dutch origin, currently offering banking, investments, life insurance and retirement services to meet the needs of a broad customer base.

About Netsparker Web Application Security Scanner

Netsparker Web Application Security Scanner is an industry leading automated web vulnerability scanner developed by Netsparker Ltd. Netsparker management and engineers have more than a decade of experience in the web application security industry, which is reflected in their product. Netsparker is a very easy to use web application security scanner that automates most of the web application security scanning process. An out of the box installation of Netsparker is able to scan a wide variety of web applications, so web security experts, penetration testers and QA engineers do not need to spend countless hours tweaking and configuring the software. Netsparker is revolutionising web application security by being the only web application security scanner to automatically verify detected web vulnerabilities, thus reporting no false positives. Netsparker is used by world renowned companies such as Samsung, NASA, Skype, ING and Ernst & Young.


Netsparker 3.0.15.0 Released

The new version of Netsparker is a minor update to the Netsparker Standard and Professional Editions which contains several new signatures in the vulnerability database of known web applications.

Improvements

  • New security checks in the vulnerability database for Apache, MySQL, WordPress, osCommerce and MediaWiki

Updating Netsparker

Launch Netsparker and click “Check for Updates” from the Help drop down menu to update Netsparker to version 3.0.15.0.

Top 10 Mistakes when Performing a Web Vulnerability Assessment

We all make mistakes; it’s human nature. In Information Technology, there are numerous mistakes, oversights, and blunders that are repeated consistently day after day. But given what there is to lose when it comes to web application security, why not learn from the mistakes of others so you don’t get burned?

Here are the top 10 mistakes, all based on assumptions, that you need to be aware of when seeking out the real business risks in your web vulnerability assessments:

1. Assuming everyone is on board with what you’re doing, i.e. the web application security audit. Many people, including key players such as developers and compliance managers are often out of the loop on vulnerability assessments. Getting all the right people involved, in advance, will help ensure smooth testing and project success.

2. Not dedicating the same amount of resources for all web applications. Focusing on the critical web applications is good but you eventually need to find and fix all the web security vulnerabilities that can cause problems. This includes seemingly harmless marketing websites, content management systems, intranet and portals, and the web interfaces for your network devices. Remember that a malicious user only needs to find 1 exploitable vulnerability for his malicious attack to succeed.

3. Assuming you’ve properly tested from all angles. Think outside the box. You need to test your web applications both without and with user authentication, from in front of and behind the firewall or WAF, IPS controls, etc. to ensure that all web application vulnerabilities have been uncovered.

4. Using complex tools without knowing how they operate. Most of the tools used nowadays, like web application vulnerability scanners, automate most of the tasks for you and make the process of identifying vulnerabilities quite easy. However, there are other tools that are quite complicated to use, and only seasoned experts are able to fully exploit their capabilities. Therefore, always make sure that you know your tools inside out and what repercussions they might have when used.

5. Assuming that just because a vulnerability wasn’t uncovered, it doesn’t exist. As with human diseases, there is always a chance that something is lurking undetected in your web environment. Be careful so you don’t get caught off guard with a false sense of security. Take every necessary step to ensure all web application vulnerabilities are identified.

6. Relying on third parties for the security of your web applications. This is especially dangerous in the context of cloud services, hosting providers and the like. Regardless of the situation, make sure you fully understand how these web systems are being tested and secured, and never take anything for granted. If need be, do your own research and ask around for more information before subscribing to a cloud service or using a hosted service.

7. Expecting a fix for reported vulnerabilities without following up. Developers may not even hear about the problem because management won’t tell them about the vulnerabilities you identified. If they do, they could have their own set of priorities that keeps them from addressing the security issues that matter to you. So always liaise and follow up with the responsible contact to ensure all reported vulnerabilities have been remediated.

8. Assuming that your developers will learn from their mistakes and not repeat the same coding problems again in the future. Unless developers really understand what the issue is, and the business invests in training them to write secure code, you will keep on identifying the same vulnerabilities in newly developed web applications.   

9. Assuming that a “secure” web application is a “compliant” web application and vice versa. This is reason enough to get your auditors and compliance managers involved to ensure that risks are known and business assets are being properly protected.

10. Expecting management to understand your findings and continue supporting your web vulnerability testing program. Simply uncovering web application vulnerabilities isn’t necessarily going to create a sense of urgency for others. You need to make it known what’s at stake, for example by showing what a malicious attacker can gain by exploiting a detected vulnerability.

If you would like your web applications to be secure and not end up hacked, don’t ignore these issues or repeat the mistakes of others. As long as you understand this, remain vigilant with your vulnerability detection programs, and never let your guard down, you’ll be well ahead of the curve.

How to Evaluate Web Application Security Scanners

Ask 20 penetration testers which web application security scanner they prefer and use, and you will get 20 different answers, if not more. Every web vulnerability scanner has its own pros and cons and, as with almost everything else, what works for Mr X does not necessarily work for Mr Y. Therefore you shouldn’t simply base your purchasing decision, or build a web application security program, on what colleagues say or think. You have to do some testing and come to your own decision.

This guide will explain how to evaluate web application security scanners and help you choose the right web security tool that fits your requirements.

Make a List of Requirements

Before getting your hands dirty with web security scanning, compile a list of the requirements you currently have. The most typical requirements are:

  • Automate most of the tasks; there are several advantages to automating the identification of web application vulnerabilities. The most common reason management can think of is to save time, but it is not just about that, as explained in the article Why Web Vulnerability Testing Needs to be Automated.
  • Lower the costs of web application security by doing in-house scanning rather than hiring an expensive, seasoned penetration tester or service.
  • Increase the coverage; as opposed to a penetration tester, an automated web application security scanner has an extensive set of heuristic web vulnerability checks that are frequently updated by a number of researchers and security experts. This allows users to identify all types of web application vulnerabilities in custom made web applications. Some scanners also have a vulnerability database for known web applications such as WordPress and Joomla, which also comes in handy if your business or customers are using such web applications.

In short, automated web application security scanners are mostly required to save time and to ensure that all technical web vulnerabilities are identified. Once you have listed your requirements, proceed to gather as much information as you can about the websites and web applications you will be scanning.

Document all the Web Applications to be Scanned

The next step in the selection process is to document the web applications you will be scanning with the automated web security scanner. During this stage it is important to identify the most common factors of the web applications. For example, if something is implemented on 2% of the web applications you are scanning, then you can test such parts manually. But if something is implemented on 60% of the web applications, or more, then its testing should definitely be automated. The list of questions below will help you build a list of the web applications’ common factors:

1. Which development frameworks are used to build the web applications?

2. Is there any type of authentication mechanism used on the web applications?

3. What type of database backend server is used?

4. Are URL rewrite rules being used?

5. Do you need a client side certificate to access the web applications?

6. Are there any client-side scripts that need to be executed and tested?

7. Are Custom 404 error pages being used?

8. Are there any types of protection mechanisms implemented on the web applications, such as Anti-CSRF mechanism?

Once you have the list of common factors, you can shortlist the web application security scanners you would like to evaluate. For example:

1. If all your web applications are developed with PHP and .NET, you do not need to test automated tools that cannot scan and identify vulnerabilities in such web technologies. For example some web application security scanners cannot automatically crawl, scan and identify vulnerabilities in web applications built with .NET, hence there is no need to test such scanners.

2. If authentication is implemented, the automated web security scanners you want to test must support the type of authentication mechanism that is being used, and can be configured to automatically login to the website during a web security scan.

3. If URL rewrite rules are being used for search engine friendly URLs, the scanner should be able to identify such URL structures and scan the website properly, without getting stuck in some sort of infinite loop and without requiring extensive configuration changes.

4. If the web applications you want to scan have some sort of mechanism such as anti-CSRF, the scanner should still be able to automatically scan the web application without missing any parts of it. Note: Such mechanisms are typically barriers for most of the automated tools.

Testing Web Application Security Scanners

Once you have completed all the paperwork it is time to get your hands dirty and start scanning websites. Download the trial or demo version of each web vulnerability scanner you would like to evaluate. If the trial is not available for download, contact the software vendor.

Before You Start a Web Security Scan

If you are new to automated tools such as web application security scanners, there are some things you need to know about before launching a security scan. Here is the list of points:

How Web Application Security Scanners Work

Before doing anything else, one needs to understand how such automated tools work. First, the scanner crawls the target website or web application and identifies all possible attack entry points and parameters. During this stage the crawler will access every link it discovers, including links in client-side scripts. During the scanning stage, the scanner sends specially crafted HTTP requests which include a payload used to test whether the website is vulnerable or not.
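
As a rough illustration of the scanning stage, the sketch below sends a single crafted request and looks for a database error in the response. It is greatly simplified: the URL, parameter and error string are hypothetical, and real scanners use far more payloads and much smarter detection logic.

// Send one crafted request: append a single quote to the parameter value.
$ch = curl_init('http://test.example.com/app/accountView?id=' . urlencode("1'"));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// A database error in the response strongly suggests the input reaches a SQL query unsanitized.
if ($response !== false && stripos($response, 'You have an error in your SQL syntax') !== false) {
    echo "Parameter 'id' looks injectable.\n";
}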

Although automated scanners are designed not to be intrusive, there is still a small probability that such attacks might tamper with your web application or database, depending on how secure your web application already is. In light of this, we move to the next point.

Always Scan Realistic Test and Staging Web Applications

First and foremost, it is important to scan realistic web applications, i.e. web applications that are similar, or ideally identical, to the real live ones you will be scanning. Unfortunately, many people evaluate web application security scanners against deliberately vulnerable web applications such as DVWA (Damn Vulnerable Web Application) and OWASP WebGoat. These types of web applications have been built for educational purposes and might be completely different from the web applications your business or customers use.

Therefore, do not base your findings on any such demo sites. Don’t forget that a web application security scanner can easily cover thousands of different vulnerability variations, whereas these simple test systems will only show how the scanner behaves against 10 of those thousands of vulnerability cases.

Therefore, run your tests against a test or staging website if you would like to see how the web application security scanner performs. It might take some time to get used to the automated tools, so stay safe and scan test websites at first. What is for sure is that with time you will get used to the tools and should soon be able to scan live websites. In fact, it is recommended to scan both staging and live websites, because some vulnerabilities might only be introduced when switching from the staging server to the live server.

Evaluating Web Application Security Scanners and the Results

Now that you know what you need and how to evaluate the software, it is time to fire up the scanners. Below are some points which, if followed, should help you determine which automated web application security scanner best fits your requirements.

Web Application Coverage

During a scan, check the list of all crawled objects. Ensure that the sitemap representation of your web application includes all the files and their variations, scripts, client-side scripts, input parameters, directories and so on that are on your website. If not all objects are listed, it means that the crawler is not able to automatically crawl all of the web application, and thus might not identify all vulnerabilities.

Web Vulnerability Reporting

Once the automated web security scans are finished, start comparing the findings of the scanners:

  • Which web application security scanner detected the most vulnerabilities?
  • Which web application security scanner reported the fewest false positives?
  • Are there any vulnerabilities that were not detected by any of the scanners (false negatives)?

By finding the best ratio between the three points mentioned above, you should find the ideal automated web security scanner. You definitely do not want a scanner that reports hundreds, if not thousands, of vulnerabilities, most of which are false positives.

You might ignore false positives or mark them manually. However, this approach does not scale well: once you start using your scanner regularly on large-scale web applications, false positives become a bigger problem, a lot of extra time is spent verifying them, and you and your users will start losing faith in the scanner. To learn more about false positives and the impact they have on a penetration test, refer to the article The Problem of False Positives in Web Application Security and How to Tackle them.

On the other hand, you do not want a scanner that misses many of the most commonly exploited vulnerabilities. The best automated scanners are typically those which report all of the most commonly exploitable web application vulnerabilities, such as SQL injection and cross-site scripting, while reporting the fewest false positives and requiring the least configuration.

After all, each automated scan should always be accompanied by a manual penetration test, so that those one-off vulnerabilities which are rarely, if ever, exploited in real life attacks can be identified manually. But at least the bulk of the work is done, and developers can already start concentrating on remediating the reported vulnerabilities.

Integration with Other Web Security and Development Tools

Another important feature, or set of features, to look for in web application security scanners is how easily they can be integrated. For example, does the scanner output the scan results in standard formats that can be parsed by other tools, such as XML? Can an automated web security scan be launched via the command line or batch files? The more of such features it has, the easier it is to integrate web application security scanning into the SDLC of your web applications.
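
To illustrate why standard output formats matter, here is a minimal sketch of post-processing a scan report in PHP. The file name and the element names (vulnerability, severity, type, url) are purely hypothetical, since every scanner uses its own report schema.

// Load a (hypothetical) XML scan report and list the high severity findings.
$report = simplexml_load_file('scan-report.xml');

foreach ($report->vulnerability as $vuln) {
    if ((string) $vuln->severity === 'High') {
        echo $vuln->type . ' at ' . $vuln->url . "\n";
    }
}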

And for those penetration testers who always like to dig deeper into web applications, can the scanner import the output of other web security tools typically used during manual security audits, such as proxy servers or HTTP analysers? Support for such tools is an important factor to consider as well, since the more integration capabilities a scanner has, the more of the process can be automated.

Choosing your Web Application Security Scanner

As we have seen, there are several criteria you should look into when choosing a web application security scanner. What is important is that you focus on your own needs rather than relying on others’ opinions. Ensure that the automated tool you are about to choose can scan the web applications you want to audit and allows you to automate as much as possible, to save time, money and other resources.

PCI Compliance - The Good, The Bad, and The Insecure

Does having a PCI compliant website and business mean they are bulletproof or, better still, hacker proof? The first part of this PCI compliance article looks into and explains the shortcomings of compliance, focusing specifically on the popular Payment Card Industry (PCI) compliance standard.

A History Lesson When You Thought Your School Days Were Over

Around 600 BC, the first coins were minted in Greece and Persia, now known as Iran. Their usage was simple: the coins represented a currency value that was used in lieu of previous generations' barter systems. But, as with everything else in the world, innovation and progress led to further complexities. Eventually we evolved to a stage where currency and the use thereof went from a gold standard, to a paper and minted coin system, and now to nothing more than ones and zeros on computer servers.

As that slow progression of change occurred, laws and security surrounding digitized transactions needed to evolve as well. Originally, keeping your money safe was as simple as keeping your coin purse or wallet safe. However, with the advent of magnetic strip cards, digitization of money flow, and finally the Internet and shopping online, there began to grow a need for security methodology to protect that data. Your wallet and your bank were no longer the only two things that protected your money; the places you shopped at were now held responsible as well.

In the beginning, the major credit card companies had their own individual methods: Visa's Card Information Security Program, MasterCard's Site Data Protection, American Express's Data Security Operating Policy, Discover's Information and Compliance, Japan Credit Bureau's Data Security Program, and many others. Each company's program was designed to ensure an additional level of protection of consumer financial details at the merchants themselves, typically by policy restrictions on the storage, processing, and transmission of cardholder data.

This, however, made for many fragmented systems that sometimes varied wildly from card company to card company, which proved quite difficult, if not outright impossible, to adhere to completely because in some cases, one card company's policies violated another's. In fact, often merchants would not accept certain card types simply due to the restrictions imposed. In order to combat this variety and make for a simplistic and easier-compliance policy, the card companies assembled in late 2004 and formed a global alliance. Thus the Payment Card Industry Data Security Standard (PCI DSS) and its managing agency, the Payment Card Industry Security Standards Council (PCI SSC), were born.

PCI compliance, as it is called, became a regulatory standard for all merchants, big or small, that processed transactions involving cardholder data. Organized into six logically related groups with twelve overall requirements, compliance required several key features: A secure network, protection of cardholder data, vulnerability mitigation, access control measures, monitoring and penetration testing, and finally, adherence to a strict and well-defined security policy. This all seems well and good, and proper adherence to these guidelines should yield no breaches of security. Except those are the exact problems: guidelines and adherence.

A Rule is Only as Good as Its Enforcement

The first problem is that PCI compliance is, for the most part, a guideline, not a law. The difference is that while laws have a governing body, strict definitions, and a form of enforcement with punishment for illegal acts, the PCI compliance guidelines are loosely defined by a conglomeration of banking institutions with nearly no oversight (although plenty of punishment, in the form of massive private-industry fines, is still levied by this regime). In fact, compliance feels almost voluntary, given the rather limited amount of control and oversight. The PCI SSC has no registration entity to handle any sort of listing or inspection of compliant members, nor does it explicitly restrict merchants from participation for non-compliance (unless a merchant finds itself the victim of compromise and, thus, under the watchful eye, and punishment, of the PCI DSS). Because of this, merchants are almost exclusively responsible not only for their own ability to adhere to PCI standards, but for their own inspection and certification of compliance as well.

There exist many entities that 'certify' a merchant's PCI compliance, such as ScanAlert, VeriSign, TRUSTe, and others. Some of these entities also proudly tout "Hacker Safe" badges to throw on a merchant's website to assure its customers that their data is secure and the merchant follows the guidelines of PCI compliance to a satisfactory degree. However, the duty falls solely on the merchant to obtain this certification, and there is no real requirement for such certification, either. This lack of certification requirement and self-policing obviously leads to the potential problem of little, if any, adherence to PCI standards. Couple this with the fact that such "Hacker Safe" emblems are, in fact, a welcome mat to black-hat hackers, and you have quite a dangerous concoction of voluntary uncertainty and insecurity. We will cover more on this later in the article.

PCI Compliance Applies Only To Big Boys, Like Amazon. Right?

Wrong. PCI compliance extends to all merchants, big and small, though, most especially big because they are in the limelight of the show. In order to realize the depth and importance of this, we first need to have a basic understanding of how banking institutions and digitized currency works. At its core, you have the Federal Reserve, the Automated Clearing House (ACH) system for Electronic Funds Transfer (EFT) payments directly via bank accounts, and even systems for wire transfers, both domestic (such as FedWire and the Clearing House Interbank Payments System (CHIPS)) and international (such as the Society for Worldwide Interbank Financial Telecommunication (SWIFT)). However, these strictly and federally regulated systems only handle the final processing of payments in between banking institutions, and almost never are used directly by merchants. Instead, merchants will file their transactions through larger entities known as payment processors, which do not fall under the same federal regulation standards as these federal institutions.

When you go to a brick-and-mortar store or shop online and use your credit or debit card, your transaction is processed through a payment processor. In a matter of a few seconds, your card information is provided from the merchant (the store you are shopping at) to a card payment processor. That payment processor checks the cardholder data via the banking institution responsible for that card (such as Visa or MasterCard, as well as a banking institution in the case of debit cards), performs a series of anti-fraud checks, then sends final authorization or denial to the merchant for your transaction. These payment processors typically handle hundreds of thousands, sometimes millions of transactions a day, for millions of cardholders. As such, payment processors are held to the utmost strictest compliance level, known as Compliance Level 1. But not even this highest and most restrictive level of compliance is a perfect guarantee of the safety and security of cardholder data.

How the Mighty Fall - When Compliance is Not Enough

As a payment processor, Heartland Payment Systems handles electronic and other transactions for over 250,000 businesses within the United States, completing over 11 million transactions a day for more than $120 billion USD a year. As one of the largest payment processors in the United States, Heartland is held to the strictest adherence of Compliance Level 1. In fact, as one technology analyst has stated, Heartland effectively leads the way for the entire industry.

It then came as a shock to the industry when Heartland announced that in 2008 it had fallen victim to one of the largest mass compromises in history, in which more than one hundred million card numbers were compromised to such a degree that entire cards could be duplicated perfectly. But Heartland was not alone in this. Other companies, such as Hannaford Brothers and TJX Companies, also fell victim to the same attack that hit Heartland, yielding an estimated grand total of over 250 million compromised card numbers -- nearly one card number for every person in the United States at that time. And that was neither the first nor the last time.

A few years later in 2012, Global Payments Inc. reported yet another mass compromise: Over ten million card numbers. Even extremely large first-level merchants themselves have fallen victim to massive data breaches, such as the attacks that hit Sony in 2011, compromising the cardholder and other personally identifiable information of over twenty million of Sony's PlayStation Network users. In fact, in 1999, approximately one out of every 1,200 transactions turned out to be fraudulent, resulting from compromised cardholder data. There are some estimates that claim over ten million card numbers are involved in mass compromises every year, resulting in billions of dollars lost due to fraud.

Even merchants certified PCI compliant and "Hacker Safe" are quite heavily vulnerable. As mentioned earlier in this article, not only is certification a voluntary effort, but it has even proven to be a star-bright, mile-wide target for attacks. Take, for example, the "Hacker Safe" website seal provided by ScanAlert Inc., a provider of one of the most prominent and widely-used security seals at the time. Touted on major corporations' websites, such as those of Johnson & Johnson, Sony, and Warner Bros, ScanAlert finally garnered the attention of its largest competitor, McAfee, which acquired ScanAlert in October 2007. The acquisition came as no surprise considering the business effectiveness of a "Hacker Safe" badge (most websites notice an average 14% boost in conversion rates, yielding quite a high rate of return).

However, barely three months after its acquisition, ScanAlert found itself under fire for proving at times ineffective at delivering on its badge's promise. In January 2008, technology retailer Geeks.com put out a notice to its customers that it had become the victim of a mass compromise of customer data -- while being certified "Hacker Safe" by ScanAlert. This was later asserted to be due to the generic method of vulnerability scanning and automated certification ScanAlert incorporated. According to McAfee itself, 90% of ScanAlert's "Hacker Safe" daily scans are performed by automated systems with no manual intervention or oversight, designed to look for common SQL injection, cross-site scripting, and other general web application security flaws. After the Geeks.com hack, one security research organization claimed to be able to penetrate nine out of ten "Hacker Safe" certified websites well enough to easily access customer financial data. Not only this, but hackers may also find a "Hacker Safe" badge to be a challenge. Where they might normally ignore a website and move on to the next once their basic vulnerability scanning tools fail, a "Hacker Safe" badge may prompt hackers to pursue further, more advanced methodologies to prove the badge wrong. Such seems true, especially in Geeks.com's case.

The most interesting and damaging part is that all of these entities -- Heartland, Hannaford Brothers, TJX Companies, Global Payments Inc., Sony, Geeks.com, and so many others -- shared one important thing in common: They were all either certified PCI-compliant (or worse, were the PCI compliance certifiers) at the time of their attacks.

Subscribe to our blog via RSS or follow Netsparker on Twitter or Facebook to be automatically notified once the second part of the article PCI Compliance – The Good, The Bad, and The Insecure is released.

PCI Compliance - The Good, The Bad, and The Insecure - Part 2

If Compliance is Not Enough, What Else is Needed to Secure Web Applications?

As we have seen in part 1 of this article, PCI compliance is a good idea in the abstract; however, it should be viewed only as a starting point, given its rather minimalistic and generic approach to meeting compliance requirements. One of the largest problems with PCI compliance is the absolute lack of real, technical requirements. For example, the very first requirement is to have a firewall designed to protect cardholder data. That sounds good on paper, but nothing actually says how or to what degree this firewall must protect data.

Consider that any random Joe McSysadmin can throw a firewall on their network and call themselves compliant, and they would be technically correct. But that would not actually protect their network and web applications in any realistic way unless that firewall was finely and appropriately tuned, which is not thoroughly detailed in any real way under the PCI guidelines. Indeed, most merchants find themselves meeting the requirements at the most basic and minimal levels necessary, which properly explains the annual amount of cardholder data that gets compromised. Instead, merchants should go well above and beyond the basic and often ambiguous generalities of PCI compliance requirements.

As mentioned earlier, there are six categories of PCI compliance, each with a subset of rules. The following details a good starting point and some additional steps all merchants should follow when attempting to become PCI compliant:

A Complete Guide to Having PCI Compliant Web Applications and Business

Build and Maintain a Secure Network

1. Install and maintain a firewall configuration to protect cardholder data: Just installing and configuring a basic firewall is not enough, even if it meets the PCI requirements. It is also imperative that all externally-facing systems (and, indeed, even some internal-only systems) not only be properly configured with adaptive and well-tuned firewalls, but that the firewall logs be frequently inspected as well. And by adaptive, we mean that the firewall should be improved, both automatically and manually, in response to the traffic it actually sees, including but not limited to rate-limiting or outright blocking questionable traffic and alerting security engineers of any possible trouble. This is not only to prevent external threats from gaining entry, but also to prevent insider threats from gaining access they should not have (hence the prior mention of internal-only systems).

This is not limited to your web servers either, but applies to any system on your network, such as your employees' desktop computers. In 2011, RSA Security - an American computer and network security organization used in both high-level corporate business and government contracting - fell victim to a social engineering and trojan horse attack that rendered their SecurID two-factor authentication tokens useless, all due to an employee desktop compromised by a simple infected email attachment. Most insider threats are not intentionally committed by disgruntled employees, but in fact occur through poor computing practices on insecure networks.

2. Do not use vendor-supplied defaults for system passwords and other security parameters: Time and time again, network engineers install routers with cisco:cisco username/password combinations, thinking, "Surely, no one will make it in this far." Wrong. The same can be said for practically anything that comes supplied with defaults, be they passwords or configurations. There exist plenty of black-hat scanners that search for fresh installations of WordPress, phpMyAdmin, and various other easy-access web applications and software during that brief period just after installation when default passwords have not yet been changed. Just this momentary exposure can wreak havoc on an administrator's setup, or even the whole network.

It is also worth mentioning that this requirement should cover any defaults, including configuration such as ports and version replies. There exists no reason to leave SSH port 22 open to the world unless you are running a shell server, in which case that shell server should never be even the slightest bit connected to cardholder data to begin with. There also exists no reason to leave the full version reply in the Apache web server response headers. In fact, wherever possible, the most minimal information should be supplied, or none at all if it is not critically necessary. The less potential attackers can glean from your surroundings, and the fewer entry points made available to them, the more secure your systems will be.

Protect Cardholder Data

3. Protect stored cardholder data: This requirement should go without saying, but often gets ignored or mostly overlooked once the first requirement is completed. For example: under PCI compliance, it is required that CVV numbers not be stored whatsoever, and that cardholder data such as the card number, ZIP code, and cardholder name all be stored in an encrypted format. All too often, neither of these two requirements is met. Some eCart software provides this functionality already, but does an ineffective job of protecting the keys used in the encryption/decryption process. What good is a lock if you leave the keys in it? This sort of mass compromise is easily preventable by two simple methods of data protection:

  • One-Way Encryption: Do not store cardholder or personally identifiable information in a decryptable form unless it is absolutely necessary, such as for recurring charge payments or saving cardholder data for future quick payments. If you absolutely must store cardholder data for whatever reason, and have no reasonable need to retrieve it later, then hash it using a highly secure one-way algorithm, such as salted SHA512 (a minimal sketch follows this list).
  • Store Keys Offsite: If you absolutely must store cardholder data and have reasonable need to retrieve it later, then keep your encryption methods offsite (or, if multiple servers are infeasible, inaccessible to the publicly-facing services, such as by process chroot and permissions).  One way you can do this is by running a service on a system that is inaccessible from your publicly-facing servers or services (e.g. via SSH or NFS for separate systems; open permissions shared processes; or any other access methods) that takes only two actions: Receive cardholder data to encrypt and store, or charge existing stored cardholder data with a defined cost (such a command could be like: charge client #123 with $29.95 USD to their payment method #2).  This service would never return cardholder or personally identifiable data via any query, thus preventing that data from ever being compromised.  This service could be coupled with exclusive access to your cardholder database as well for simplicity, just so long as - again - it does not return any privileged data.
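
As a minimal sketch of the one-way approach in PHP, assuming a per-record random salt (the variable names are hypothetical):

// Generate a random, per-record salt and a one-way SHA-512 digest.
$sensitiveValue = 'example-value-to-protect';   // placeholder for the data you must not expose
$salt = bin2hex(openssl_random_pseudo_bytes(16));
$digest = hash('sha512', $salt . $sensitiveValue);

// Store only $salt and $digest; the original value cannot be recovered from them,
// but a value submitted later can still be verified by repeating the same hash.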

Additionally, many eCart owners often store all of their data in a single database with shared permissions, including cardholder data.  If a website owner runs a web forum that charges for premium services such as access to exclusive hidden forums, and stores that forum's data in the same database or via the same access controls as the cardholder data, that cardholder data is as good as compromised the moment a vulnerability exists and is exploited on that forum, which happens exceedingly often.  For this reason, it is critical to employ a Separation of Privileges and Segregation of Data set of principles, as follows:

Separation of Privileges: Take the prior example of a web forum. If you must run both on the same server, then separate out the permissions of each one's access. If you are running forum.com, then set both forum.com and www.forum.com up as one segregated web application (such as nginx running with php-fpm for speed and application server security, listening on ports 80 and 443). Then, set up store.forum.com to handle your premium forum access eCart purchase system as an isolated, segregated system. This could be done via suPHP with individual system users for the forum and the eCart. Another, more traditional method that keeps nginx in front of both services involves a complex chroot setup, jailing off a second nginx instance. Couple this with a chrooted php-fpm instance, and this could work. However, a simpler method for full service segregation would involve running the eCart in an Apache instance with mod_php under a different system user and group with strict permissions (similar manual and module chroot methods are still applicable for highly restrictive security if desired). This Apache instance would listen localhost-only on a different, publicly-firewalled port (e.g. 8080, firewalled in a deny state just in case Apache is misconfigured to listen on public IP addresses), and the nginx instance would proxy SSL requests through to it for this sub-domain.

Segregation of Data: With Separation of Privileges, the access point is secured, but the data it uses is not ... yet. To address this side of the problem, we employ the concept of data segregation. First, this involves more of the prior concept - Separation of Privileges - where you restrict the logins and access controls between your forum and eCart database users. Next, provide individual databases for each element: one for the forums, one for the eCart and cardholder data. So long as your eCart application remains secure, so, too, will your cardholder data, regardless of the security of your forums application (assuming no elevation of privileges occurs, of course). The one caveat is the security of your eCart, which can also fall victim to vulnerabilities. In mid-2012, one of the largest eCart services was exploited through multiple attacks and vulnerabilities, so even your eCart system can become the insecure entry point. A clever additional approach would involve custom-coding a localhost-only and publicly-firewalled network listener service. By listening for only two commands ("store [clientID] [cardholderData]", "charge [clientID] [amount]"), having exclusive access to the encryption keys via a unique user and group ID, permissions, and perhaps a chroot environment, and having exclusive access to the database user and tables with cardholder data, this service - with a little additional coding and hacking, such as generating the payment method plugin for your eCart application - could act as a middleman between the eCart and the sensitive data itself. It may seem a bit much, but indeed nothing is overkill when it comes to strict security.
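
To make the idea concrete, below is a heavily simplified sketch of such a listener in PHP. The port and command format are hypothetical, the actual encryption, storage and charging logic is omitted, and a real implementation would also need authentication, strict input validation, logging and robust error handling.

// Listen on localhost only; the port should additionally be firewalled from the outside.
$server = stream_socket_server('tcp://127.0.0.1:9090', $errno, $errstr);

while ($conn = stream_socket_accept($server, -1)) {   // -1: wait indefinitely for the next connection
    $line = trim(fgets($conn));
    // Accept exactly two commands, and never return stored cardholder data.
    if (preg_match('/^store (\d+) (.+)$/', $line, $m)) {
        // ... encrypt and store $m[2] for client $m[1] (omitted in this sketch) ...
        fwrite($conn, "OK\n");
    } elseif (preg_match('/^charge (\d+) ([0-9.]+)$/', $line, $m)) {
        // ... charge the stored payment method of client $m[1] with amount $m[2] (omitted) ...
        fwrite($conn, "OK\n");
    } else {
        fwrite($conn, "ERR\n");
    }
    fclose($conn);
}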

4. Encrypt transmission of cardholder data across open, public networks: This requirement definitely goes without saying.  Simply put, if you are handling cardholder data between your servers and your clients' computers without encryption, you have no business running an eCart system to begin with.  You absolutely must encrypt this traffic, and you must do so with reliable, trustworthy SSL certificates.  Free single sub-domain certificates are available, as are plenty of commercial-grade, small-business to professional eCommerce levels of certificate options.

One side that sometimes gets overlooked is the communication from your servers to your payment processor, and every step in between.  Most payment processors now accept only secured communication methods.  However, if you have a middle step in the process -- such as a shopping cart mirror server hosted in a different data center that transmits the cardholder data, unencrypted, to your central database server before it ever reaches the payment processor -- that in-between traffic crosses public networks and must be strongly encrypted, just like the traffic between your servers and your clients.
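
In practice, the server-to-server rule boils down to never disabling certificate verification for convenience.  A minimal sketch, using only Python's standard library and a hypothetical internal endpoint, might look like this:

```python
#!/usr/bin/env python3
"""Minimal sketch of transmitting data between your own servers over verified TLS.

The endpoint URL and the payload fields are placeholders; the point is that the
default SSL context verifies the peer certificate and hostname instead of being
switched off for convenience.
"""
import json
import ssl
import urllib.request

# Default context: certificate and hostname verification are enabled.
context = ssl.create_default_context()

payload = json.dumps({"clientID": "example", "amount": "10.00"}).encode("utf-8")
request = urllib.request.Request(
    "https://central-db.example.com/api/queue",  # hypothetical internal endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request, context=context, timeout=10) as response:
    print("in-between hop responded with HTTP", response.status)
```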

Maintain a Vulnerability Management Program

5. Use and regularly update anti-virus software: As stated all the way back in Requirement #1 (Install and maintain a firewall configuration to protect cardholder data), protecting your externally-facing systems is a duty you must keep up, and, as also noted there, your employees' terminals matter just as much.  However, a piece of software is only as good as the user running it, so this requirement should also include thorough education in good computing practices.  In the aforementioned 2011 RSA Security hack, ineffective anti-virus and firewall software, coupled with the poor practice of opening unsafe email attachments, ultimately led to the $66 million USD loss RSA Security suffered as a result.

6. Develop and maintain secure systems and applications: Probably one of the most important requirements of PCI compliance, this one acts as an umbrella over the others, re-asserting the significance of security in general and web application security in particular.  As mentioned in Requirements #1, #5, later in #9, and several others, security is of the utmost importance with regard to cardholder data -- this cannot be over-stressed.

Good firewalls and anti-virus services; encryption when crossing public channels; encrypted storage of cardholder data, authentication tokens and passcodes (perhaps even two-factor authentication or biometrics); and, as Requirement #10 will add, detailed and secured logging of all privileged activity.  All of these and more, you may notice, are repetitiously repeated in recurring repetition, repeatedly.  Why?  Because if they were not some of the most problematic failures of PCI compliance, there would be no reason to continually drive these points home.

Implement Strong Access Control Measures

7. Restrict access to cardholder data by business need-to-know: As with the next requirement, #8, access restrictions are a crucial element of protecting cardholder data, particularly where privileged personnel are concerned.  Unfortunately, this requirement is often overlooked at the service level.  In the technology industry there is a principle known as Least Privilege.  As its name implies, it involves granting a service or user the least amount of privilege necessary to do the job, including revoking temporary privileges once they are no longer needed.  This principle should not be foreign to our readers; we have discussed it several times already, and for good reason.  Indeed, as covered in our SQL injection articles, restricting permissions to the bare minimum required is a common concept, and as Linux administrators, for example, we apply the same methodology to stored data in the form of filesystem permissions.  So, too, should the concept be applied wherever possible, especially in environments that handle sensitive information such as cardholder data.

In Requirement #3, we used the example of a web forum coupled with an eCart for premium access.  In that scenario, the principles of Separation of Privileges and Segregation of Data are further reinforced by applying Least Privilege to database permissions.  Of course, Least Privilege is not exclusive to database permissions.  The concept applies to everything that has any level of access: on-disk stored data, backups, employee file stores, communication pathways, command and control systems, even the contents of the access control lists themselves (it is unwise to tell an intruder what they must infiltrate next in order to gain the desired escalated privileges).  In any and every possible area, Least Privilege should be applied and strictly enforced to minimize the damage when -- not if -- a hacker ultimately gains access to a service.
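
As a small, concrete illustration of Least Privilege at the filesystem level, the sketch below locks an encryption key file down to a single dedicated service account; the 'ecart' user and the key path are assumptions for illustration, not prescriptions:

```python
#!/usr/bin/env python3
"""Minimal sketch of Least Privilege at the filesystem level (run as root).

The 'ecart' service account and the key path are illustrative assumptions:
only the dedicated eCart user may read the encryption key, nobody else.
"""
import grp
import os
import pwd

KEY_FILE = "/etc/ecart/cardholder.key"  # hypothetical key location

# Resolve the dedicated, unprivileged service account (assumed to exist).
uid = pwd.getpwnam("ecart").pw_uid
gid = grp.getgrnam("ecart").gr_gid

# Owner-only, read-only access: no group and no world permissions at all.
os.chown(KEY_FILE, uid, gid)
os.chmod(KEY_FILE, 0o400)
```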

8. Assign a unique ID to each person with computer access: This may seem like a relatively simple requirement, but it has quite a few layers of complexity beyond the obvious.  As mentioned in Requirement #5, good education in computing practices should be mandatory for anyone with privileged access to sensitive data, but there are other important aspects too.  For example, what good is a unique ID if the systems using those authentication methods are insecure?  By 'insecure', this does not only mean an insecure network or a poor anti-virus posture; it very much includes the computing practices of the people holding those unique ID logins.  As mentioned in Requirement #6, all systems involved, including those requiring unique ID authentication, should be secure.  Equally important, though, are the security practices of the possessors of those unique ID authenticators.  This covers many things: a strong understanding of social engineering approaches, how to compute safely, reducing high-risk exposure on social networks and similar arenas, and so forth.  Again, proper education in secure computing practices can never go too far.

Also, as mentioned in the prior requirement, #7, this requirement does not apply exclusively to actual personnel, but to services as well.  In line with the principle of Least Privilege, each service should possess its own unique, exclusive access unless sharing access is absolutely necessary (and even then, sharing should be avoided wherever possible by using protected communication pathways between the services instead).  Consider it another way: if a user can cause damage by sharing his or her credentials, then a service exploited by a hacker can cause just as much damage when its access is shared among other services.

9. Restrict physical access to cardholder data: For some merchants this requirement may be beyond their direct control, especially in the case of an online store.  However, simply using a reliable and trustworthy hosting provider adequately meets the compliance needs of this requirement.  It does also include ensuring that the server hosting your online shopping system is accessible only by you or by other users properly privileged under Requirement #8 above, for example by avoiding shared hosting (a topic we have addressed previously) or other arrangements that would give unauthorized users privileged access (such as a hypervisor terminal with VPS hosting).

And, of course, physical security obviously includes the systems you and other privileged users have physical access to, whether permanently installed or not.  Over the past several years, major corporations and, indeed, even the United States federal government itself have all fallen victim to massive security breaches due to failed physical security, most often due to unsecured laptops illogically carrying enormous troves of highly sensitive personally identifiable information.  Setting aside the absurdity of laptops carrying vast amounts of highly sensitive data in the first place, the lack of simple hard drive encryption led to the private information of several tens of millions of people being leaked to entities that had no business accessing it (resulting in billions of dollars of losses through identity theft, fraud and lawsuits).

Regularly Monitor and Test Networks

10. Track and monitor all access to network resources and cardholder data: Not just on merchant systems that handle cardholder data, but on practically every type of server imaginable, this often gets dismissed as unimportant when in fact it is an extremely valuable asset.  First, look at the monitoring side, valuable for uptime but also for security and rapid response.  It is unreasonable, and ultimately impossible, to manually check on services constantly to ensure their consistent uptime and reliability.  Many tools exist -- Nagios and Icinga are two of the most popular, among many others -- that let you monitor any conceivable service.  Furthermore, most monitoring software is remarkably simple to set up, requiring only knowledge of the services you wish to monitor.  For example, with the aforementioned Nagios and Icinga, system checks are performed by a series of check scripts or utilities.  Nagios and Icinga require only two things from these check scripts: an exit code (0 for OK, 1 for Warning, 2 for Critical, 3 for Unknown) and a single line of status text.  That really is all that is required.  And you can write a check script for practically anything -- CPU and memory utilization, properly formatted website output, TCP service replies, even the local weather around your remote data centers.  Anything and everything can be monitored, giving you visibility not only the moment any problem occurs, but also the moment any security issue erupts.  That brings us to the second side of this: logging.
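
Before moving on to the logging side, here is what such a check script can look like.  This is a minimal sketch against a hypothetical URL with made-up response-time thresholds; it simply honors the exit-code-plus-status-line contract described above:

```python
#!/usr/bin/env python3
"""Minimal sketch of a Nagios/Icinga-style check script, per the contract above:
exit 0/1/2/3 (OK/Warning/Critical/Unknown) plus a single line of status text.

The URL and the response-time thresholds are placeholders for whatever you monitor.
"""
import sys
import time
import urllib.request

URL = "https://www.forum.com/"   # hypothetical service to watch
WARN_SECS, CRIT_SECS = 1.0, 3.0  # response-time thresholds

try:
    started = time.monotonic()
    with urllib.request.urlopen(URL, timeout=10) as response:
        elapsed = time.monotonic() - started
        status = response.status
except Exception as exc:
    print(f"CRITICAL - request failed: {exc}")
    sys.exit(2)

if status != 200:
    print(f"CRITICAL - unexpected HTTP status {status}")
    sys.exit(2)
if elapsed >= CRIT_SECS:
    print(f"CRITICAL - response took {elapsed:.2f}s")
    sys.exit(2)
if elapsed >= WARN_SECS:
    print(f"WARNING - response took {elapsed:.2f}s")
    sys.exit(1)

print(f"OK - HTTP {status} in {elapsed:.2f}s")
sys.exit(0)
```

Dropped into the plugins directory and referenced from a command definition, a script like this behaves the same as any other Nagios or Icinga check.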

Sometimes real-time monitoring of your systems and services is not completely effective on its own.  Sure, it keeps you apprised of your uptime and general responsiveness, but a monitor is only as good as the things it monitors: it cannot watch for what it does not know to watch for.  While you may be able to spot most kinds of attacks as they occur, you will not see them all.  Thus, it is imperative to have a reliable, offsite logging system.  Why did we heavily underscore the word 'offsite'?  Well, we would not highlight something we felt no need to stress, now would we?  Think of it this way: a convenience store has security cameras and a system that records the images those cameras capture.  Would you leave the recording devices behind the counter, beside the register a robber is stealing from?  Of course not.  So why would you leave the records of an attacker's intrusion on the very server they are intruding upon?

There exist many solutions, both free and commercial, that allow you to store logs offsite.  The two simplest are a syslog variant and a dedicated offsite monitoring agent.  By default, most Linux and Unix varieties already have their own form of syslog operating locally.  To ship logs to an offsite collector, administrators can use either syslog-ng (which often comes standard on modern Linux distributions) or rsyslog; the two are largely interchangeable, with similar but not identical functionality.  A far better solution, however, is an offsite monitoring agent such as the open-source OSSEC -- a host-based intrusion detection system that can perform offsite logging, among many other features, and directly addresses this very requirement of PCI compliance.
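
Applications can also ship their own audit trail to the offsite collector directly.  Below is a minimal sketch using Python's standard logging module; the collector host name is a placeholder, and a production setup would prefer TCP or TLS transport over the plain UDP shown here:

```python
#!/usr/bin/env python3
"""Minimal sketch of shipping an application's audit trail to an offsite collector.

'loghost.example.com' is a placeholder for your remote syslog-ng/rsyslog host;
in production prefer TCP or TLS transport over the plain UDP used by default.
"""
import logging
import logging.handlers

logger = logging.getLogger("ecart.audit")
logger.setLevel(logging.INFO)

# Duplicate every record to the offsite collector (UDP port 514 by default).
offsite = logging.handlers.SysLogHandler(address=("loghost.example.com", 514))
offsite.setFormatter(logging.Formatter("ecart: %(levelname)s %(message)s"))
logger.addHandler(offsite)

logger.info("privileged action: admin 'alice' exported the transaction report")
```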

11. Regularly test security systems and processes: This requirement poses a rather tricky problem: a vulnerability scan is only as effective as the list of vulnerabilities it knows to scan for.  Indeed, it is impossible to truly account for all unknowns, so the best a vulnerability scan can do is check for the conceivable, known methods of intrusion.  It is therefore necessary to take an outside-the-box approach, where thinking abstractly like a hacker becomes a useful skill.  A skilled security engineer would not only run vulnerability scans, but could also perform wargame scenarios to attempt real-world tests of all sorts of intrusion methods, in order to find weaknesses and craft successful remediation.  This is not just limited to checking firewalls, ensuring anti-virus scanners are up to date, or verifying that traffic is encrypted.  It includes anything and everything -- even almost absurd roleplaying tests, such as:

  • Social Engineering: Real-world, unannounced tests of the personnel's ability to respond and resist being exploited as a security weakness
  • Insider Threat: Testing the damaging ramifications of a user's or service's access being compromised, whether accidentally or intentionally, such as in the case of a disgruntled employee
  • Response and Remediation: In the event of an ultimate catastrophic failure of security protocols, determining how quickly a security team can react and control the situation

Obviously, this short list of ideas is far from comprehensive, but it should give a good sense of how to truly and deeply test all systems (and, indeed, personnel as well) with effective, usefully abstract methods in order to develop a successful security posture.  They may border on the ridiculous, but you would be surprised how often these rarely tested, wide-open access points fail in the real world.  According to some recent security industry research, upwards of 70% of all cyber attacks involve insider threats in some part.  Testing not only the systems and services themselves, but also the people responsible for them, may prove invaluable.

Maintain an Information Security Policy

12. Maintain a policy that addresses information security: It is essential that a security team (even if that is just you in a solo enterprise) is well prepared for every possible scenario that can be thought up, because it is an absolute guarantee that hackers are doing the same, dreaming up new and innovative ways to gain illegitimate access to cardholder data.  A proper, prepared security team plans for everything from the seemingly impossible to the absolute worst, and everything in between.  You must plan for basic first-level response and remediation -- secured configurations, firewalls, anti-virus software, and communication encryption.  You must plan for cardholder data protection schemes -- encrypted data stores, physical and digital access restriction and control, Least Privilege, Separation of Privileges, Segregation of Data, and education of responsible stakeholders.  (Remember, in Requirement #5 and repeatedly thereafter, we mentioned the necessity of effective education in good security practices.)  And finally, you must plan for the known and, as best as possible, for the unknown, via regular and irregular testing methods, monitoring, and offsite storage for forensic research.

Simply put, if you find yourself unprepared for when -- again, when, not if -- an attack or intrusion occurs, you will also find yourself incapable of prompt reaction and mitigation.  Similarly, if you or others responsible for protecting cardholder data are incapable of fully protecting it, that data will -- not can, but will -- eventually fall into the wrong hands.  It is therefore fitting that this requirement is last, but certainly not least, in the list of PCI requirements, as it sums up the most important requirement of all: planning and being prepared.

PCI Compliance is just a Stepping Stone

As we hope this article has highlighted, there are nearly infinite ways to expand upon the very limited, basic starting guidelines of PCI compliance. The need to go beyond the minimums of PCI compliance should be well understood, particularly given the ramifications of failing to go well above and beyond those bare minimums. A breach in security will not only cost a business a large sum of revenue in lost sales (mainly from deservedly lost trust), but can also carry a very substantial cost in the form of levied PCI SSC fines. Couple all of this with contributing to the billions of dollars in annual losses from identity theft and the untold misery of millions of consumers, and the undeniable need for an almost fanatical level of security becomes quite clear.

PCI compliance is just a stepping stone up the Himalayan-sized mountain of information security. It presents perhaps the most modest beginning guideline for all merchants, both big and small, to build from. It would be pragmatically impossible for this article to cover every conceivable (and, indeed, inconceivable) way to expand on the basics of PCI compliance, especially given the infinite combinations of systems and services in a merchant's setup, so the onus of preparing and planning ultimately falls on the merchants themselves (and, of course, their security team). The task is difficult, though not impossible. It is, however, quite impossible if the bare minimum basics of PCI compliance are all that is ever implemented.

If only one thing is to be taken away from all of this, then at the very least take from Requirement #12 one simple thought: hope for the best, but absolutely plan for the worst; it can, and sometimes does, happen.
