
7 Common Web Application Development Security Misconceptions


Web application security is an often misunderstood topic, with many false beliefs held by developers and many others in the IT industry. These beliefs can be dangerous and are telltale signs of a lack of security understanding and experience.

Common misconceptions in web application development

Web application developers in particular should be aware of these common misconceptions. Apart from writing the code, developers are typically involved in the design stages of web applications, during which both the technical and security requirements are laid out. Hence web developers should have at least a basic understanding of web application security. Here are seven common security misconceptions to watch out for if you are a web application developer.

We Use a Web Framework, We Do Not Need to Worry About Security

Popular frameworks such as Ruby on Rails and Django are written with security in mind and help developers prevent the most common technical vulnerabilities, such as cross-site scripting and SQL injection. However, they do not protect against logical vulnerabilities.

Logical vulnerabilities are flaws in an application's business logic. For example, by modifying a URL or request parameter an attacker might complete a purchase without paying, as illustrated in the sketch below. It does not end there either: many frameworks only address common issues in their default usage, so when you start doing something different they will fail to protect against even simple issues such as DOM XSS (DOM-based cross-site scripting).
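
To make the logic flaw concrete, here is a minimal, hypothetical sketch (the route, parameter names and catalog are invented for illustration): the handler trusts a client-supplied price field, so even though the framework escapes every input and blocks SQL injection, an attacker who edits the request pays whatever they like.

```python
# Hypothetical sketch of a business-logic flaw; route, field names and catalog
# are invented for illustration. The framework's escaping stops XSS and SQL
# injection here, but nothing stops the client from choosing its own price.
from flask import Flask, request, jsonify

app = Flask(__name__)

CATALOG = {"ebook-101": 29.99}  # authoritative server-side price list


def charge_card(token, amount):
    # Placeholder for a real payment gateway call.
    print(f"Charging {amount} to card {token}")


@app.route("/checkout", methods=["POST"])
def checkout():
    item_id = request.form["item_id"]
    # VULNERABLE: the price is read from the request instead of the catalog,
    # so an attacker can simply POST price=0.01.
    price = float(request.form["price"])
    charge_card(request.form["card_token"], price)
    return jsonify({"item": item_id, "charged": price})

# The fix is a one-liner (price = CATALOG[item_id]), but no framework will add
# it for you: only the application knows its own business rules.
```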

Therefore, when using web frameworks do not rely solely on their security features; make sure you understand what they do and do not cover, and manually implement any additional checks that are needed. They do not all just work out of the box.

No One Wants to Hack Our Website

This is probably the most common misconception of all. You are a startup or a small business, so who would be interested in hacking your website or customer portal? Even if your company or web application itself is not of great value to an attacker, your visitors, your server's resources and the bandwidth you pay for are exactly what attackers are after.

Attackers do not really care who the target is. They simply use automated tools to scan large blocks of the internet and attack any vulnerable websites they find, as the simplified sketch below illustrates. Such mass, non-targeted attacks are very common, especially when vulnerabilities like Shellshock and Heartbleed are disclosed.
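
As a rough, hypothetical illustration of how such untargeted scanning works, an attacker's tool simply sweeps an address range and records whatever server banner each host returns, flagging anything that looks exploitable for follow-up. The range used here is a reserved documentation block; scanning networks you do not own is illegal.

```python
# Simplified sketch of untargeted scanning: sweep an address block and grab each
# host's HTTP Server banner. The range below is a reserved documentation block.
import ipaddress
import socket


def grab_banner(ip, port=80, timeout=2):
    try:
        with socket.create_connection((str(ip), port), timeout=timeout) as sock:
            sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + str(ip).encode() + b"\r\n\r\n")
            response = sock.recv(1024).decode(errors="replace")
    except OSError:
        return None
    for line in response.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None


for ip in ipaddress.ip_network("192.0.2.0/28"):  # TEST-NET-1 placeholder range
    banner = grab_banner(ip)
    if banner:
        print(f"{ip}: {banner}")
```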

We Have Backups for Our Websites

Backups can help you restore a website after it has been hacked, but they are no substitute for good security. Even a temporarily hacked website can have serious consequences: being blacklisted by search engines, having sensitive user data stolen, phishing attacks launched against your visitors, and lasting damage to your business's reputation.

If a website was hacked, the image of the website in the backup contains the same vulnerability. Restoring it is therefore only a temporary fix; the attackers can simply find the vulnerability and exploit it again.

It Is Running in an Internal Network, No Need for Website Security

You can never be sure that threats will not come from an employee, or from an attacker who somehow gains access to your internal network. Is the confidential data in the internal CRM or ERP you are working on safe from a disgruntled or curious, security-savvy employee? Not to mention that the typical employee is not security savvy and is the main target of social engineering attacks. Web application security should therefore always be catered for.

It Is Secure Because It Is Only Accessible Via VPN

Just because people connect to your web application via a VPN, it does not mean that the application itself is secure. The same arguments I highlighted in relation to internal networks, such as disgruntled employees, network vulnerabilities and employees as victims of social engineering attacks, apply in this case as well.

The Website Runs on SSL (HTTPS)

Unfortunately this is another common misconception. SSL encrypts the data in transit between your website and a visitor's browser. Encryption prevents others from intercepting that data, but it will not stop attackers from exploiting vulnerabilities that your website might have.
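
A trivial way to see this: the hypothetical request below travels over TLS, so nobody on the network can read it, yet the SQL injection payload it carries reaches the application completely intact (the URL and parameter are placeholders).

```python
# TLS encrypts the transport, but the application still receives whatever the
# client sends, including an attack payload. URL and parameter are placeholders.
import requests

payload = "1' OR '1'='1"
response = requests.get(
    "https://example.com/products",  # encrypted in transit over HTTPS...
    params={"id": payload},          # ...but the payload arrives intact
    timeout=10,
)
print(response.status_code)
```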

We Have a Web Application Firewall

When configured properly, web application firewalls can help mitigate specific attacks such as the exploitation of cross-site scripting and SQL injection vulnerabilities, but they will not protect you from attacks that are not defined in the rules you supply them with. As explained in Getting Started with Web Application Security, even though web application firewalls (commonly known as WAFs) are definitely a good addition to your security portfolio, they have a number of shortcomings.

WAFs do not fix the underlying problem; they only add an additional protective layer on top of it. And considering the number of WAF bypass techniques that remain widely used today, you should not rely solely on a WAF but should always fix any security flaws the web application has.

There Are No Excuses. Web Application Security Should Always Be Catered For

These misconceptions can be very misleading, but there are no excuses. Web application security should always be catered for, ideally at every stage of the web application's development. There is no better way to avoid being hacked than building a secure web application in the first place, rather than protecting insecure code with other applications which might have their own vulnerabilities.

Emulate the malicious attackers; use automated web application security tools to identify vulnerabilities and security weaknesses in your web applications.


What are Scan Settings in Netsparker Cloud and How Can They Be Managed?


In Netsparker Cloud there are a number of scan settings you can configure prior to launching a web application security scan. These settings can be saved so they can be loaded and used for other web security scans at a later stage. Therefore when using scan settings you do not have to reconfigure the scanner before each scan. Below is a list of the configurable scan settings in Netsparker Cloud:

  • URL of target website to be scanned
  • Initial path of scan
  • Scan Policy
  • Scheduling options
  • Scope of the scan
  • URL rewrite mode
  • List of regular expressions (RegEx) to match URLs that should be excluded or included in the security scan
  • Custom cookies
  • List of URLs of pages which are not linked from anywhere in the website and must be included in the scan.

 Netsparker Cloud Scan Settings

Note: Scan Settings only apply to single website scans. They cannot be used for website group scans.

Default Settings Values

All the scan settings have a default value and, unless configured otherwise, the default value will be used for the web security scan. For example, if you do not configure the initial path of the scan, Netsparker Cloud will start scanning the website from the target URL you specified.

Managing Netsparker Cloud Scan Settings

All of the above scan settings can be saved so they can be loaded at a later stage and used for another web security scan. Therefore, if you want to scan a number of websites using the exact same scan settings, configure all the settings without specifying the URL and save them.

Saving Netsparker Cloud Scan Settings

Managing scan settings in Netsparker Cloud

  1. Once you configure the scan settings, click the Manage button highlighted in the above screenshot.

Name the scan settings

  2. Specify a name for the scan settings and click the Save As New Settings button.

Using Saved Scan Settings for a Web Security Scan

Select the scan settings from the drop down menu

To load the saved settings for a new security scan select them from the Scan Settings drop down menu at the top of the New Scan page.

Updating Saved Security Scan Settings

To change or update saved scan settings, follow the procedure below:

  1. Select the scan settings you would like to update from the Scan Settings drop down menu in the New Scan page.
  2. Do all the necessary changes and click the Manage button next to the Scan Settings drop down menu once ready.

Updating scan settings in Netsparker Cloud

  3. Click the Update button to save the new changes to the saved scan settings.

Alternatively, you can save the updated scan settings as a new entry by clicking the Save As New Settings button and specifying a new name.

Deleting Scan Settings

To delete saved scan settings, load them from the Scan Settings drop down menu, click the Manage button and then click the Delete button.

An Introduction to the Digital Black Market, Also Known as the Dark Web


In this article, we explore the evolution of the digital black market.  We first explore how the concept of currency evolved into a digital, anonymous form – a critical component of the modern black market in vulnerabilities and illicit goods.  Digital black markets existed long before these anonymous digital currencies – called cryptocurrencies because they use peer-to-peer encrypted and signed communications, similar to BitTorrent – but, much like their analogue predecessors, those earlier markets relied on cash or bank accounts.  Cryptocurrencies mitigated the traceability risks of those methods, but we explore how even this new, supposedly anonymous solution is not without risk or serious insecurity either.

From there, we learn how digital black markets began more or less with the Internet itself.  Starting on, and for a long time remaining on, decades-old Internet technology, we explore their slow, progressive evolution.  Systems like bulletin board systems (BBS), Usenet, and later websites on the budding World Wide Web provided a home for these activities.  We briefly explore why sellers (and buyers) take the risk, in spite of increased penalties and raids.

We continue to explore this “why” factor by examining one of these entities in person, discussing with the very people involved in these activities what their motives and agenda are.  We find that a rebellious attitude, which we discuss in the prior section, largely drives these users, even knowing they face potentially decades in prison for their crimes if caught.  From there, we discover how that attitude and new anonymizing technology solutions, namely TOR and Bitcoin, spawned a new, easy-to-use environment for even the everyday layman to purchase illicit goods.  Physical and digital illicit goods have become as easily accessible and point-and-click as an Amazon purchase, and now this has yielded a new generation of the underground Internet (“dark web”).

Finally, we explore how this almost social media-level of ease available on many dark web black market websites has resulted in a new genre: boutique exploits and software vulnerability trading.  We learn how this has ultimately created a realm of computer attacks (e.g. Windows 0days) and website attacks (e.g. web scanning and exploiting packages) that anyone can use with just a simple understanding of TOR, Bitcoin, and a knowledge of how to use a Web 2.0-prettified website interface.

Introduction

The concept of time and the things that happen during its slow progression is utterly staggering.  Did you know Cleopatra lived closer in time to the Moon landing than she did to the building of the Great Pyramid of Giza in Egypt?  Here is another mind-blowing realization: the concept of trading currency for goods – cash, coin, crops, cattle, whatever people deem valuable and worthy of trade – has existed for at least twice as long as recorded history itself.  It is estimated that recorded history began around 4,000 B.C., whereas currency in the form of bartering for goods has been around since at least 9,000 B.C., perhaps much longer.

For perhaps just as long, black markets have existed as well.  While a market is an economic institution where tangible currency in some form is exchanged for goods or services, a black market is similar, except that the currency exchange itself, or the goods or services being traded, is illegal.  This is done typically to circumvent regulatory powers, avoid detection, or simply to obtain illicit or illegal goods.

With everything becoming more advanced and evolving into a digital format, the world now relies on the Internet and electronic currency to continue uninterrupted at its unfathomably rapid pace.  Currency has seen its evolution from the bartering of grains and goats, to coins and promissory notes, to ones and zeros on a bustling stock exchange floor zooming by with millisecond trades.  Black markets also participated in this evolution, and while some of the illicit goods have remained unchanged (such as drugs), many new ones have been born of this evolution, such as software to take down websites and log people's keystrokes.

With the evolution of currency from commodity to fiat – that is, from being backed by tangible goods, like gold, to being valued solely by federated banking institutions – the federation of currency itself has come under fire as an indefinably corruptible system.  Through very recent advancements in technology and software, currency federation itself has started to become a thing of the past – a windfall digital black markets are more than happy to exploit.

Untraceable, Undetectable – Anonymous Currency

The great thing about cash is that aside from physical forensics (e.g. fingerprints, DNA, locality-specific trace elements, etc.), it is virtually untraceable.  Barring any form of surveillance, it leaves no record of who spent it.  You never see drug dealers on Breaking Bad use credit cards to buy illicit substances, because the transaction is traceable.  Cash is king, as the saying goes, because unlike digitized transactions where everything can and will be logged, traced, and archived, cash grants that anonymity and a better guarantee of not getting caught.  But cash is still physical.

Fast forward to today’s era, and everything relies on digital currency instead of cash.  Unlike some European countries, like Germany for instance, the United States and others rely heavily upon ethereal currency in the form of a 16-digit debit or credit card number in lieu of cash at every available opportunity.  Governments like this because it grants explicit and traceable detail for an indefinite length of time; black market merchants and customers despise it for those very same reasons.  This has inspired the desire for an anonymous and decentralized currency.  Many have been proposed, though almost all floundered due to poor planning, implementation, or simply lack of widespread adoption.  For the longest time, this was just a pipedream – until 2008 happened.

In 2008, an entity that went by the name of Satoshi Nakamoto (either a single person or team of people pretending to be one) released the Bitcoin payment system.  Designed as a decentralized virtual cryptocurrency (a currency exchange of whole or fractional ‘Bitcoins’ using cryptography to ensure the security and integrity of transactions), the system utilizes an anonymous public ledger of hashed addresses of both wallets and the Bitcoins they possess.  New Bitcoins are ‘mined’ and generated by doing complex computations spread across entire server farms, growing in complexity with each new Bitcoin (or BTC) generated.  No names, addresses, or any information is known except for a historical record of what wallets have what Bitcoins (or slices of Bitcoins).  The Bitcoin system itself also affords a level of peer-to-peer always-online security, being unable to succumb as easily to things like Distributed Denial of Service (DDoS) attacks – a fate PayPal experienced in 2010 when it chose to freeze Wikileaks’ account.  While this system affords quite an astoundingly high level of anonymity, it still has its risks of being traced to individual users. 
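
The "complex computations" behind mining are a proof-of-work search: miners hash candidate block data until the digest meets a difficulty target, and raising that target makes each new coin exponentially harder to produce. The toy sketch below captures the idea with SHA-256; real Bitcoin mining applies the same principle to full block headers at an enormously higher difficulty.

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest starts with a given
# number of zero hex digits. Real mining applies the same idea to Bitcoin block
# headers at a vastly higher difficulty.
import hashlib
import time


def mine(block_data, difficulty):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1


start = time.time()
nonce, digest = mine("example transactions", difficulty=5)
print(f"nonce={nonce} hash={digest} found in {time.time() - start:.1f}s")
# Each extra leading zero multiplies the expected work by 16, which is why the
# computation required per new coin keeps growing.
```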

Because it has no federated agency or nation-state backing its fiat value, nor any commodity to back it with tangible value, the trade equivalency of BTC-to-U.S. Dollar can fluctuate incredibly based purely on a commonly agreed-upon value between unofficial major monitoring and trading markets, or “Bitcoin exchanges.”  The value of BTC has fluctuated from a few cents to as high as $1,200 USD.  These values, though, can change dramatically for any reason and at any time.  When one of the largest modern digital black markets fell in 2013, the value of BTC plummeted 15% almost immediately.  It recovered, but then when China banned its financial institutions from interacting with Bitcoins, the value dropped over 50%, by more than $500 USD.  Extreme momentary fluctuations like “flash crashes” were known to happen occasionally, too, such as one case where the value went from $700 to $100 and back to $655 in a single morning.  Lately, the value of BTC has averaged around $250 USD per BTC throughout 2015 thus far.

Bitcoin is also highly unstable in terms of fiduciary reliability.  BTC are stored in a wallet.dat file which for the longest time was unencrypted on each person’s local installation of the Bitcoin client.  Steal someone’s wallet.dat file and you possess any BTC associated with the Bitcoin addresses it holds.  Various organizations, often the exchanges themselves, fell victim to this massive security flaw (one that went unrepaired for a surprisingly long time).  AllCrypt, a smaller Bitcoin exchange, found itself victim to a SQL injection hack against its WordPress installation that compromised over $11,000 USD of BTC.  Many others lost far more from wallet compromises – Bitcoinica losing $90,000 USD in 2012, Bitfloor losing over $250,000 USD later that year, Inputs.io losing over $1.3 million USD in 2013 – most due to compromises of their websites via SQL injections or other web application vulnerabilities that can typically be identified automatically with an automated web application security scanner. The largest theft of all – over 850,000 BTC totaling $460 million USD – occurred against the Mt. Gox Bitcoin exchange for many reasons: website source code was not version controlled, all code changes bottlenecked with one person, and a myriad of long-unpatched website security flaws allowed for the biggest BTC heist ever.
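
The web application flaw behind several of these breaches is mundane. A hedged sketch of the pattern (table and column names are invented, and sqlite3 stands in for the real database): concatenating user input into a SQL statement lets an attacker rewrite the query, while a parameterized query treats the same input as plain data.

```python
# Sketch of the SQL injection pattern behind many of these breaches. Table and
# column names are invented; sqlite3 stands in for the real database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, btc_address TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '1AliceAddr'), ('bob', '1BobAddr')")

user_input = "nobody' OR '1'='1"

# VULNERABLE: the input is pasted into the query, so the OR clause dumps every row.
rows = conn.execute(
    f"SELECT name, btc_address FROM users WHERE name = '{user_input}'"
).fetchall()
print("concatenated query returned:", rows)

# SAFE: a parameterized query keeps the input as data; no row matches the literal string.
rows = conn.execute(
    "SELECT name, btc_address FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)
```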

Even with the extreme instability of Bitcoin value and wallet security (eventually rectified), the anonymity it affords coupled with its widespread adoption has crowned it king of cryptocurrency.  Others have spawned to try and take the throne, DogeCoin being among the most amusing (currently around $170 USD per 1 million Doge).  Many different websites with a wide range of services and products now accept BTC and some derivative types, such as WordPress.com, Overstock.com, TigerDirect.com, even Tesla Motors.  The argument has been made clear by users of the Internet (and real-world customers, too, with BTC gaining increased adoption in the brick-and-mortar market): anonymous cryptocurrency is here to stay – and indeed will be the currency of choice in digital black market schemes.  This all, however, raises an interesting question: if anonymous cryptocurrency came into popular usage only in 2008, how then did digital black markets exist for decades prior?

How Digital Black Markets Began – A Brief History

When everything in the world started the shift from analog to digital – or rather, from non-networked and offline to networked and online – so, too, did the idea of a black market.  Originally, the concept began on systems like Usenet newsgroups.  In the early days of the Internet, the World Wide Web did not yet exist, so people lucky enough to afford a 2,400 baud modem – the typical modem most users could enjoy for nearly 20 years – had to make do with simpler methods: email, bulletin board systems (BBS), and Usenet.  Some bulletin boards, like the MindVox BBS, hosted these users, but Usenet groups are really where the concept flourished.  The Usenet groups were often more difficult to find, too, which was a major reason for their popularity in this particular realm.  Depending on the host – and hosts were nowhere near as ubiquitous as they have been over the past decade – the content was sometimes heavily policed, but not always.  With the proper newsreader client and server information, users could eventually find their way to these various shady Usenet groups to perform their transactions; the groups usually trafficked in illegal pornography, questionably legal electronics or software, and other items one would reasonably expect from such nefarious places.

  Usenet Newsgroup

Figure 1-Usenet newsgroup (Source: Newsgroup Reviews Blog)

Normally these markets employed less reliable exchange methods like physical trade, leaving cash at one drop point and retrieving the purchase elsewhere.  This was not much different from the way some black market trading had existed for centuries.  The only difference at that point was a new medium of communication, so, in general, the products remained the same.  Then the World Wide Web epoch happened.  Internet connections became more widely available, and the ease of navigation and accessibility of information portals on the web helped shift the black market to an online platform, following the digitization of goods and information.

Nielsen’s Law of Internet Bandwidth is an observation that states the average bandwidth speed will grow by 50% each year.  From 1983 to 2014, data suggests this has remained true.  Computer systems became more prevalent in households, and because of the ubiquity of always-on broadband systems with speeds increasing at a near-linear rate, Internet connectivity became a supporting element of life itself.  Everything shifted with it, not just black markets.  Currency, as we mentioned earlier, became a digital thing, as did practically everything else: music and movies, financial and transaction information, and the software to corrupt it all.  Everything could be uploaded and downloaded with expedience, and legal or not, everything began to find its way through the series of tubes.
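
Compounding 50% per year over that 31-year span shows just how dramatic the growth is; the short calculation below uses a 300 bit/s modem purely as an illustrative 1983 baseline.

```python
# Nielsen's Law: roughly 50% bandwidth growth per year, compounded from 1983 to
# 2014. The 300 bit/s starting point is purely an illustrative baseline.
years = 2014 - 1983
growth = 1.5 ** years
print(f"growth factor over {years} years: {growth:,.0f}x")
print(f"300 bit/s in 1983 -> about {300 * growth / 1e6:,.0f} Mbit/s in 2014")
```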

This, of course, led to an explosion in online black market activity.  With the increased reliance on computers and the Internet, the market to exploit it grew as well.  This began with tools that generally were more precisely targeted – utilities like Back Orifice and Sub7 – intended for sale to users called “script kiddies” (a pejorative term meant to indicate their newbie level in many regards).  With the growth of websites built by Internet startups and enthusiasts, closed-source tools were developed and sold to allow script kiddies and beyond to scan, analyze, and attack those websites.  Sold for anywhere between $20 and $500 on average (some even for a few thousand dollars), these tools generally relied on large swaths of unpatched bugs or vulnerabilities in widely popular software like phpMyAdmin, vBulletin, phpNuke, and other content management systems and administrative panels.  In a way, they were similar to modern automated web security scanners – except rather than searching for both known and unknown vulnerabilities and providing suggestions to fix them, they searched for specific web application vulnerabilities and provided utilities to viciously exploit them.  Some even granted time-shared access to botnets like ZeuS to bludgeon websites with unrelenting DDoS or focused Layer-7 attacks, like authentication brute forcing or comment spam.  Of course, these tools eventually found themselves on free file sharing platforms like KaZaA and LimeWire, but not before they found plenty of sales on online black markets first.

With all the digitization of information and a black market to follow, however, the financial aspect remained the same, consisting of either cash at a drop point (far less common as time went on) or wired digital currency from one bank account to another.  Usually the risk was gambled, and direct account-to-account transfers were done – checking account wire, debit cards, PayPal, MoneyPak cards, et cetera – while other times a questionable escrow might be used.  Buyers and sellers were still presented with a possibility of law enforcement detection and seizure.  Like those digitized transactions, the web activity itself could also be logged, traced, and archived, even with precautions like encrypted traffic and attempts at anonymizing oneself.  But despite the risks involved, thousands of people still took the chance every day.

Why Take the Risk?

Whatever the risk involved, some users of the Internet felt that being told what they can and cannot possess, physical or digital, was akin to oppression.  Telling someone they cannot do something is among the easiest ways to inspire them to do it.  Human beings are fascinating creatures for a lot of reasons, in particular that for even as logical as we are, we still do some incredibly foolish or backwards things.  For example, if you tell a human, “No, you cannot do that, it is not allowed,” they will find any conceivable method to do that which they are forbidden to do, even if it is utterly pointless to do so.  This is especially apparent in children, but just as much so in adults.  Thus, the theory goes, if you tell humans they are not allowed to purchase particular objects, or own certain items in ways they feel they are entitled to, then Hell or high water they will find a way to do it.  This is undoubtedly a core reason, if not the core reason behind black markets existing – that rebellious nature inherent to self-awareness – and it remains just as true today as it did in the Assyrian era when black market bazaars first started to exist.  Perhaps it could be argued that digital black markets – maybe not all, but some – exist under the belief of free information, free knowledge, and freedom to do with your purchase as you see fit, corporate content makers’ controlling wishes be damned.

It all sounds very Robin Hood, like some 21st century “copy from the rich corporations, and give downloads to the poor consumers online” idea, but it also takes a horrifying turn from righteous to indignant and sometimes even downright disgusting with the click of a mouse.  Within the ranks of these hackers that felt ethical in their liberation of data, some existed that wanted to profit from burning others.  More questionable digital content came into the trade, including malware and virus sales, purchasable vulnerability information and more.  With both the copyright industry coming down on the DRM circumvention and copyright infringement groups, and government alphabet soups (FBI, DOJ, SEC, etc.) coming down on the virus trade, these groups began going further underground.  They found that to minimize risk, they would need to utilize end-to-end anonymity software, thus fostering a new iteration of digital black markets on a network colloquially termed the “dark web.”

These black markets are often hidden, operating on encrypted and anonymized networks such as The Onion Router (TOR).  Much like Bitcoin, TOR exists to anonymize the data that passes through it.  It operates by utilizing a network of anonymous relays that traffic hops through.  (Basically, a user connects through an entry node, then their traffic hops through various relays until exiting from an exit node.  The traffic is encrypted in layers between the user’s computer and the exit node, providing anonymity – that is, so long as the user’s computer does not somehow leak what it is doing and does not connect to an attempted de-anonymizing entry node.)  Because of such a deep level of anonymity, a great deal of TOR traffic is illicit; over one-third of all TOR traffic consists of dark web black market traffic, including drugs, illicit goods, fraud, and much worse.
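
The layering described in the parenthetical above can be sketched in a few lines: the client encrypts the payload once per relay, and each hop peels off exactly one layer, so no single relay sees both who sent the message and what it says. This is a conceptual illustration only, using symmetric Fernet keys; it is not the actual TOR protocol or its key exchange.

```python
# Conceptual sketch of onion routing: wrap the payload in one encryption layer
# per relay, then let each relay strip exactly one layer. This is NOT the real
# TOR protocol (which negotiates circuit keys), just the layering idea.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

# Client side: encrypt for the exit relay first and the entry relay last,
# so the entry relay's layer is the outermost one.
onion = b"GET /hidden-service HTTP/1.1"
for key in reversed(relay_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay, in path order, removes only its own layer and learns nothing else.
for position, key in enumerate(relay_keys):
    onion = Fernet(key).decrypt(onion)
    print(f"relay {position} peeled one layer")

print(onion)  # only after the exit relay's layer is removed is the request visible
```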

As illicit or illegal activity was increasingly shuttered throughout the normal, everyday dot-com Internet, the perpetrators of such enterprises found their way to alternative, essentially lawless outlets for their questionable activity.  Initially, it began with communities cropping up on new domains thanks to hosting companies in less policed nations like Russia and Ukraine being welcoming to anything and everything.  The advent of cloud computing made it much easier to host illicit or malicious activity, and the hosting country and its policing no longer seemed relevant, with one study putting Amazon AWS as the largest malware distribution host during part of 2013.  However, when the owner of TVShack.net found himself facing extradition solely because of the Top-Level Domain (TLD) being .net, many illicit and sinister websites began abandoning their domains in favor of ccTLDs (Country Code Top-Level Domains), like .it, .cc, and others.  (Because .net is a United States TLD, the U.S. government argued extradition was valid since crimes were committed using U.S. property, even though both the owner and the hosting itself resided in the United Kingdom.)  Then, when even changing TLDs proved ineffective for some organizations (due to increasingly large-scale international cooperative and coordinated raids), they began to find a safer home in places accessible via the TOR-network-only .onion TLD.  This is because the .onion TLD helps mask the IP address of the server hosting the content, keeping it safe from government raids and host shutdowns.  The risk of being discovered or caught was no longer as big a concern as it had been in decades prior.

A Jump Down the Rabbit Hole

The increasing popularity of anonymizing systems and methodologies such as TOR and unlogged VPNs created an underground, hidden environment where nothing was too extreme to host or sell anymore; their world was finally free.  These havens of anything-goes became the go-to for anyone wishing to participate in anything with minimal fear of being caught, coming with the unsurprising caveat of often being very seedy, immoral, and even hostile.  In one such community, users communicate over the IRC protocol secured and accessible only via the TOR proxy system.  The IRC network found itself home to various forms of trolls, including conspiracy theorists, “men’s rights activists” (Internet users that argue on behalf of anti-feminism), and others.  A large number of the channels (IRC chatrooms) were dedicated specifically to topics users enjoyed trolling, with some users occasionally finding the time to band together and commit “raids” – organized efforts of ruination that typically involved hacking a website, smearing someone’s reputation, publicizing confidential or proprietary information, and various other similar activity.  Sometimes raids similar to these yielded information dumps not unlike the ones you would see in the news, like Hacking Team and others.  A few channels even existed solely for Schadenfreude, so depraved that they gave users a place to enjoy and share nonconsensual pornography and real-life gore.  One thing, however, that found itself as a common foundation between these communities, no matter what their focus or amount of trolling, is that they always had a few channels surprisingly active with users happily trading digital and physical goods.  All for just a Bitcoin or two.

In wholly unregulated one-on-one sales markets like this IRC network, reliability proves to be the trait most difficult to find, because the anonymity leads to a lack of liability.  Communities like this IRC network deeply valued good reviews of sellers to establish credibility and trustworthiness.  Even buyers sometimes had to be vetted, depending on the risk to the seller.  This is because some of the wares being sold consisted of some of the most illegal and unsavory items one can think of.  Some sellers sold physical and tangible goods, ranging from narcotics and marijuana to even guns and explosives.  Since Bitcoins provided a way to arrange payment, the physical goods had to be figured out next.  This requires an interesting dance – the buyer trusting the seller with an address, the seller securing packaging to inhibit detection – but nothing new.  Even comedian Mitch Hedberg joked nearly two decades ago that his postal delivery guy was unwittingly a drug dealer – “and he’s always on time.”

Drug trafficking is an obvious use of any black market.  The ubiquity of drug sales, though, afforded the opportunity for those same buyers to see other things of interest that they otherwise normally would not have found.  Some sellers in this IRC community specialized only in software utilities, such as ransomware, Trojans, and vulnerability information (e.g. unpatched exploits not yet known to the outside world, Wordpress and its plugins being a recently popular category), items they claimed were traded and sold almost as commonly as drugs at times.  (In fact, an interesting correlation was made by a couple community members: buyers of viruses and vulnerabilities sometimes also bought drugs from the same place, or at least engaged in discussions thereof.  The more complex or expensive a virus or vulnerability they bought, the more complex their experiences and preferences in drugs appeared to be.)  The difficulties both buyers and sellers faced were of little consequence, either.  In fact, when asked in the IRC network why they took the risk and sold what they sold, some users seemed almost flummoxed, as if they had been asked why they eat food or breathe air.  The responses varied, but the mentality seemed almost groupthink among them: It is a free-market right to sell anything people want, and no one has the right to tell people what they cannot buy or sell.

This sort of laissez-faire mentality seemingly permeates a significant amount of the dark web and could perhaps be largely responsible for the popularity of the concept of an Internet black market.  One user even suggested that the dark web is perhaps the modern evolution of a decentralized, truly pure democratic libertarian utopia.  It became quickly apparent that at least part of the reason for their activities was as some form of protest against centralized government and economy, harkening back to that rebellious nature inherent to self-awareness mentioned earlier.  In effect, their activities were simply because they were told they were not allowed to do so, which clashed with their firmly held, almost dogmatic belief in no leadership and no real rules.  This was especially apparent in those selling vulnerabilities and drugs.  Much less Robin Hood and much more Mikhail Bakunin.

The political and ideological aspect is far more prevalent in the fast-moving pace of chat environments and significantly less strong or abrasive in more easily moderated communities such as web forums.  In fact, when you make your way out of the anarchistic side of the dark web where the hidden IRC networks lie, the chaotic vibe of the IRC networks mostly disappears and gives way to well-organized and cleanly designed websites.  These, too, find themselves hidden behind .onion domains accessible only via TOR-enabled web browsers.  Everything from the Grams darkweb search engine to various popular wikis (colloquially called The Hidden Wiki, even though it actually consists of a group of unrelated, individual wikis), and more, many looking professionally designed and well-programmed.  That is, assuming you know where to find the right 16-character .onion addresses.

 A browser successfully configured to use the TOR network

Figure 2-TOR Browser (Source: Tor Project)

Silk Road – From Han Dynasty to Internet Underground

In around 200 B.C., the Han dynasty of China created what became the most renowned and famous network of trade routes of all time, the Silk Road.  Fast-forward a couple of millennia, and in February 2011 a website bearing the same name opened its doors within the TOR network.  Operated by a user named Dread Pirate Roberts (a name borrowed from the awesome cult-classic movie The Princess Bride), the organization saw slow but predictable beginnings in the illegal drug trade.  After just a couple of months and a little help from a Gawker article, this new Silk Road organization quickly overtook the historical infamy of its name, establishing itself as the go-to organization for dark web trading, even being dubbed by some journalists the “dark web Amazon.com”.  Almost anything imaginable could be bought there, from drugs to software vulnerabilities, even hitmen, apparently.

 The Silk Road website

Figure 3-Screenshot of Silk Road (Source: John Ribeiro, PC World)

In just over two years, Silk Road went from a brand new, relatively unknown black market to one of the biggest illegal trafficking criminal enterprises on the Internet, with one study suggesting upwards of 18% of all American drug users obtained their product through the website.  Software vulnerability sales saw an increase during this time, too, as the popularity of a dark web black market provided more opportunity to exchange such information.  However, with the increasing popularity of Silk Road came the full focus of the F.B.I. and other law enforcement agencies.  Utilizing some tricks to remove the anonymity of Bitcoins, users of the website began finding themselves caught.  It was not long until Dread Pirate Roberts – unmasked as Ross Ulbricht – was himself caught, the Silk Road website shut down, and over 144,000 Bitcoins were seized, valued at the time by some estimates at over $28 million USD.  But the damage was done, so to speak.  The denizens of the dark web learned that a black market can successfully exist on TOR, and that a system can be engineered to establish reliability, trust, and credibility between buyers, sellers, and the organization that hosts their transactions.

The joint task force raid on Silk Road was not effective for long; it merely fractured the centralized nature of Silk Road into dozens of smaller, more difficult to trace entities that rapidly grew to take its place.  Shortly after the first was shut down, a replacement was swiftly brought online, fittingly titled Silk Road 2.0.  That, too, found itself shuttered by a massive, coordinated international raid – Operation Onymous – exactly one year to the day that it opened, taking with it the organizations and hidden servers of over 400 other TOR-anonymized .onion domains.  Blue Sky, Cloud Nine, Hydra, Pandora, and over 50 other similar black markets were shuttered.  But with each raid came more publicized information about how the proprietors of these websites got caught, and with each iteration the next generation of hosts learned how better to hide themselves.  Silk Road 2.0 and over 50 other black market organizations fell during Operation Onymous, but more like Agora and Evolution took their place.  Less than a year later, Evolution abruptly disappeared, its administrators absconding with over $12 million USD in escrow, but Agora and a few others remain.  One of the markets that got shut down – Hydra – had an ironically fitting name for the whole process: cut off one head and two more grow back in its place.  (The IRC network mentioned earlier has since disappeared during the writing of this article, reportedly shut down or seized during a recent multi-organizational raid similar to Operation Onymous.)

With each raid and shutdown, the dark web black market fractured further and further into smaller pieces.  This has proven effective in keeping the organizations more hidden from law enforcement attention by divvying up the target on their back.  Rather than one giant target in the form of a single entity, segment the community into smaller, more specialized organizations.  This fracturing posed an additional problem to law enforcement: not only were the replacements getting better at hiding, but the battle itself made for great media.   Attempts to shutter these organizations created a Streisand Effect, bringing more attention to the very thing the government was trying to shut down.  News of each raid brought more publicity, and social media spotlights were shined on the dark web itself.  Huffington Post, Gawker, shared on Facebook and Twitter, Reddit threads and new sub-Reddit boards dedicated to the subject – news and social media began spreading the stories like wildfire.  Each retweet and like led more common, less tech-savvy users to discover the concept of a dark web and its black markets, and people began to flock to them.  The larger markets fell, but more single-business websites spawned to meet demand and sell specific items: some for drugs, a few for guns, and many new, user-friendly ones for the blooming digital ‘warez’ market.

The Warez of the Darkweb

Two things dominate the dark web black market today: drugs, of course, and financial fraud, like stolen credit card numbers or personal financial information.  The former only needed the existence of two systems to flourish explosively in the way that it did – an anonymous cryptocurrency, and an anonymizing proxy system.  Financial fraud, however, needed these and one more variable in the mix to complete its trifecta: a way to continue scraping financial information to defraud.  While malware trading was nothing new, a whole new generation of black market warez gained focus during the Silk Road era.  Unlike the anti-DRM circumvention mentality, these buyers and sellers sought not free information for the benefit of others, but proprietary information solely to profit off the harm of others.  Cheap, easy money by exploiting vulnerable computers – that was the sales pitch, and it has been largely successful.

For the longest time, fraud has been carried out via emails containing either infected attachments or information directing a victim to perform some action.  (Seriously, when is that Nigerian prince going to wire us our millions?)  Viruses and trojans have existed for about as long as computers have been networked together, usually developed by ambiguous, anonymous people or teams for their own isolated purposes.  Generally, it was the data these groups mined that found its way onto digital black markets in each of their iterations, and not the software used to procure the data itself.  This business model remained relatively stable for decades: a few savvy developers wrote the viruses, kept them to themselves and profited off the fruits of their labor.  The credit card numbers gathered, not the method to gather them, were the product, and sales were good.  Eventually, though, some began exploring selling the utilities themselves, and not just the viruses but also components thereof.  The billion-dollar industry of financial fraud was growing, evolution had to happen in order to support this growth, and taking the middleman out was the next logical step.

Components of malware, even complete kits, began being sold on these markets in specialized, customizable form, sometimes even as licensable software development kits (the irony of licensing black-market software is pretty amusing).  The growth of software development being a cool, trendy thing – be it in Go, HTML5, jQuery, ArnoldC – led to a more pwn-it-yourself DIY attitude.  Vulnerability descriptions, example and explanation code, even complete software packages that just needed some Makefile customization tweaks and a recompile became increasingly popular items of trade.  Specifically, vulnerabilities known as 0day exploits – named because, upon discovery, the victim software’s developers and white-hat security researchers have zero days to learn how to correct it – quickly grew as commonly sold items in this particular enterprise.  The reason is quite simple, really: most vulnerabilities that are exploited are “open source,” or already known to the public.  A 0day, however, is proprietary, granting the buyer exclusive rights to exploit it and profit the most from it.  These can come in multiple forms – exclusive sale of a vulnerability (this is an example of where buyer reliability and trustworthiness becomes an important factor), a sort of software development kit, or even boutique customizable kits ready for deployment, no compiler necessary.

 

Tox Boutique Ransomware Generator

A sort of interesting bell curve has happened with black market software exploit sales.  First, it started closed-source, with just the end result being sold (credit card numbers, financial history, etc.).  Then, it grew into a sort of trend to DIY your own, based on closely-guarded secrets (unpublicized 0day vulnerabilities, for example).  But now, with the Average Joe being the normal browsing customer of these black market shopping carts, the curve is returning back to closed source.  The DIY trend is still satiated, but boutique kits now grant a level of simplicity harkening back to an era of script kiddie easiness, requiring nothing more than a working knowledge of Bitcoin and TOR.  Turnkey web scanner and exploitation kits, botnet time-sharing, and malware generators are nothing new; the ability to sell your warez easily because no one has to run gcc is obvious.  Nowadays, 0days in Windows, for example, are frequently exploited for either zombie malware or ransomware – a virus that encrypts your personal data and requires you pay a ransom within a few days, or the key to unlock your data gets destroyed and you lose it forever.

Sometimes these utilities are sold as “white-hat” penetration testers or scanners, but in reality they are $250 exploit toolkits purpose-built for annoyance.  More complex systems come with a significantly higher price, but also a greater yield – a $4,000 USD investment grants you access to potentially hundreds of thousands, perhaps millions in profits (the five hackers behind the ZeuS toolkit bank account hack in 2010 successfully stole over $70 million).  All these, unfortunately, require you to get your hands dirty a good bit, though, and also require some fair amount of technical expertise.  This gave rise to point-and-click 0day exploit websites that handle everything for you, being about as simple as signing up for a social media account.  Systems like Tox let you do everything over TOR, use Bitcoin to receive payment (they, of course, take their 20% off the top automatically), and task you only with getting your boutique ransomware out there.  Just sign up anonymously and for free, create your malware, spread it, and wait for the Bitcoins.  How simple is that?

These warez have yielded incredible success, too.  Through both exploiting known, publicized vulnerabilities, as well as procuring 0day information and software, then combining the two, hackers belonging to the “Carbanak cybergang” managed to steal tens of millions of dollars through an absurdly successful data manipulation attack.  Ironically, supporters of cryptocurrency would claim this kind of attack is one of the main reasons for the existence of non-federated cryptocurrency: the hackers were able to make the banking systems think an account had an arbitrarily higher amount than it actually had by simply editing the balance, thereby allowing the withdrawal of magically generated money.  A series of successfully exploited 0days, some potentially exchanged via black market sales or trades, let the Carbanak group make ATMs freely give them $7.3 million.  The cybercrime industry itself is incredibly lucrative, costing over $114 billion USD every year, with an increase of 78% in just four years.  The profits black-hat hackers derive from this are sometimes more difficult to measure, but they, too, run into the tens of billions annually.

Windows 0days are not the only area seeing boutique point-and-click generator websites like Tox.  TOR-hidden website interfaces exist for many kinds of digital attacks, the profit aspect (for the attacker) often not even being an important component.  The script kiddie ideology of “doing it for the lulz” (doing something because it amuses the perpetrator) finds itself incredibly satiated in these boutique markets.  For fractions of a BTC ($10-$50 USD on average), a user can anonymously request that a segment of a botnet, such as ZeuS, perform some attack against a website, with no intention other than to annoy and disrupt the victim.  No longer does an attacker need to run a script on his or her computer, or compile any software, or connect to a secret IRC network, or do practically anything.  In fact, the attacker need not even have much technical prowess – just a basic, high-level, abstract understanding of web attacks.  Simply install TOR, find the right .onion website, pay a little BTC, choose your target and attack (DDoS and email spam, of course, being the most common), and click a button.  The guides and tutorials are so well documented that a Baby Boomer who has difficulty using their printer could reasonably perform these attacks.

What Can Be Done For Protection?

The quick answer is also the most uncomfortable one: not a whole lot actually can be done.  Government agencies know of these black markets and are working quickly to shutter them.  White-hat researchers and major software development firms like Oracle and Adobe are trying to covertly procure, or even outright buy, 0day vulnerability information in order to race toward fixes faster than the vulnerabilities can be exploited on a large scale.  Only so much can be done, however.  In May this year, the IRS declared over 10,000 tax accounts were compromised due to hackers having incredibly detailed information about taxpayers (like address history, credit report data, employment records and more).  Data breaches like this are not the result of a 0day on IRS.gov itself, but of the slow and steady progression of vulnerabilities being exploited over time against various organizations, or even of the large-scale spread of data logging and aggregation malware.

Protecting against this is as complex as asking, “How do we end poverty?”  For an end-user, good security practices are a major first step.  Good anti-virus and anti-malware software, like the free products from AVG or Malwarebytes, are crucial.  Not opening untrusted email attachments and careful web browsing habits such as SSL-connectivity verification, validation of website authenticity and reliability, and browser protection add-ons are all just as crucial, as well.  These steps alone can prevent an extremely large amount of potential fraud, as the overwhelming majority of email spam and DDoS attacks are sent from compromised desktop computers.

Content providers, especially those handling eCommerce online shopping carts and other personally identifiable information, have perhaps an even more complex duty.  Good development standards, such as following a Security Development Lifecycle (actively auditing and securing code during development), keeping systems and services updated with security patches, and maintaining a strong security posture within a network’s entire infrastructure (strict adherence to good practices, auditing, reporting, incident management and more) all help lessen the probability of falling victim to vulnerabilities, especially 0days.  Unknown vulnerabilities still pose a risk, but a good web security scanner can help reduce it.  While specific 0days may be unknown to a web scanner, scanners still have the ability to discover them and even provide advisories long before the impacted software is patched.  Discovering vulnerabilities and potential 0days reduces the probability of further intrusion and cascade effects.  (Almost all recent major security breaches required at least two different vulnerabilities to be exploited.)  Had one of those vulnerabilities been patched or protected against, things could have ended with far less fallout, both customer and financial, for the victims of those attacks.


Integrating Netsparker with Bug Tracking Systems to Easily Export Identified Vulnerabilities as Issues


The Send To Action feature in Netsparker Desktop allows you to integrate the web application security scanner with your issue tracking system, or source code management system. This integration allows you to import identified vulnerabilities as issues with just a few mouse clicks. It is possible to integrate Netsparker with the following systems:

  • Github
  • JIRA
  • TFS (Team Foundation Server)
  • FogBugz

This article explains how to integrate Netsparker Desktop with JIRA. You can use the same procedure to integrate Netsparker Desktop with the other systems mentioned above. Alternatively, create your own custom Send To Action to integrate the Netsparker web security scanner with any other system for which we do not have out-of-the-box support.

Configuring the Netsparker Integration with JIRA

Open the Netsparker Desktop Options

  1. Navigate to Netsparker Desktop options by clicking Options from the Tools drop down menu.
  2. Click the Extensions tab in the bottom left corner of the options.

Select a bug tracking system from the Send to Actions settings page

  3. Click the Add+ button and select the system you would like to integrate Netsparker Desktop with. For this example, we’ll select JIRA.

Specify the Connection Settings

Configure the connection details for the bug tracking system integration

  1. Enter the mandatory connection details:
    1. URL of the JIRA setup
    2. Username and Password
    3. Project Key
    4. Issue Type
  2. In the Vulnerability section you can specify the Body Template and Title format. Body templates are stored in "%userprofile%\Documents\Netsparker\Resources\Send To Templates"; to use your own custom templates, store them in this location.
  3. In the Optional settings you can specify:
    1. to whom the vulnerability should be assigned
    2. the reporter of the vulnerability
    3. the priority of the vulnerability
    4. the due date

Once you complete the required fields, click Test to confirm that Netsparker Desktop can connect to the configured system. The screenshot below shows a connection test confirmation with JIRA.

Once all is configured use the test to confirm connectivity

Exporting Reported Vulnerabilities to Projects on JIRA

Export the identified vulnerability as an issue on your bug tracking system with just a mouse click

Now that the integration is ready, to export an identified vulnerability to JIRA just right click the reported vulnerability and select Send to JIRA. Below is a screenshot of the SQL Injection that was automatically exported to JIRA.

An SQL injection identified with Netsparker and exported to JIRA automatically

Integrating Netsparker with Other Systems

Take advantage of this easy-to-set-up integration and integrate the Netsparker web application security scanner with your bug tracking and code management systems to improve automation, allowing you to do more in less time. And don't forget: if you use another system that is not listed above, or you would like to do any other sort of integration, you can create a custom Send To Action.

New SQL Injection in Joomla! CMS Allows Attackers Full Administrative Privileges When Exploited


A few hours ago Joomla! released version 3.4.5 of their CMS to address a critical unauthenticated SQL injection vulnerability that was identified by Asaf Orpani, a security researcher at Trustwave.

The Joomla! SQL Injection Technical Details

The SQL injection can enable an attacker to gain full administrative access to a target website when combined with other security weaknesses in Joomla! CMS. The SQL injection was discovered in a core module of Joomla! CMS; therefore, all websites running Joomla! CMS versions 3.2.* to 3.4.4 are affected by this vulnerability.

The technical details of the SQL Injection vulnerability and several other variations of it can be found in:

  • CVE-2015-7297
  • CVE-2015-7857
  • CVE-2015-7858

Considering how easy it is to exploit this vulnerability, and the popularity of Joomla! CMS, expect widespread attacks and thousands of Joomla! CMS websites to be hacked.

Netsparker Heuristically Detects The New SQL Injection in Joomla!

Both the Netsparker Desktop and Netsparker Cloud web application security scanners can already detect this new critical SQL injection in Joomla! CMS, therefore you do not need to update or wait for updates from us.

Netsparker web security scanner will heuristically identify the new SQL Injection in Joomla!

Netsparker scanners heuristically identify this new SQL injection in Joomla! CMS; they do not simply flag the vulnerability by checking which version of Joomla! CMS you are running on your website. The sketch below illustrates the difference between the two approaches.
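
A simplified way to picture the difference (this generic sketch is not Netsparker's actual engine and contains no Joomla-specific exploit logic; URLs and markers are placeholders): a version-based check merely fingerprints the application and compares the advertised version against a list of known-vulnerable releases, while a heuristic check sends a harmless probe and watches how the application actually responds.

```python
# Generic illustration of version-based vs heuristic detection.
# This is NOT how Netsparker works internally; URLs and markers are placeholders.
import requests

TARGET = "https://example.com/index.php?id=1"  # hypothetical parameterized page

def version_based_check(base_url):
    """Fingerprint the CMS; a real check would compare the advertised version against 3.2-3.4.4."""
    html = requests.get(base_url, timeout=10).text
    # Fails whenever the site hides or spoofs its version information.
    return "Joomla!" in html

def heuristic_check(url):
    """Send a harmless probe value and look for database error text in the response."""
    baseline = requests.get(url, timeout=10).text
    probe = requests.get(url + "'", timeout=10).text  # append a single quote
    error_markers = ("SQL syntax", "mysqli", "You have an error in your SQL")
    return probe != baseline and any(marker in probe for marker in error_markers)

print("version-based:", version_based_check("https://example.com/"))
print("heuristic:", heuristic_check(TARGET))
```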

Netsparker Partners with Secnesys in Mexico


We are pleased to announce our partnership with Secnesys, a Mexican organization focused on providing security services and consultancy to a wide variety of businesses operating in the financial, retail and education industry verticals.

Helping Mexican Businesses Build More Secure Web Applications

This partnership brings us closer to businesses and organizations operating in Mexico that would like to build and maintain more secure websites and web applications. Secnesys' years of experience in the security field mean that they will be reselling and supporting both the Netsparker Cloud and Desktop based scanners in Mexico. Hence organizations operating in Mexico can benefit from the expertise of a technical partner to help them plan and ensure the security of their web applications.

“We already resell a number of network and web security products to help customers build a defence layer to protect their websites and networks. By adding Netsparker scanners to our portfolio we will also be able to help organizations build more secure web applications and identify vulnerabilities in existing ones,” said Mr Mario Jaramillo, Secnesys Consulting Manager.

For more information about Secnesys, visit their website at http://secnesys.com/. If you would like to start reselling and supporting Netsparker web security scanners in your region or country, get in touch with us for more information.

Netsparker Exhibiting at Istanbul Security Conference


Netsparker is exhibiting Netsparker Desktop and Netsparker Cloud web application security scanners at the Istanbul Security Conference (IstSec) in Turkey.

The conference will be held on the 19th of November at the Bahcesehir University in Istanbul. For more information on the conference visit the Istanbul Security Conference website.

Visit the Netsparker Booth to See How Netsparker Web Security Scanners Can Help You Keep Your Business Websites Secure

If you will be at the Istanbul Security Conference come and speak to us and see how Netsparker web application security scanning solutions can help you ensure the long term security of all your websites and web applications. We would be more than happy to answer any questions you might have. We can also get you started with a full trial to see how many vulnerabilities and security flaws the Netsparker web application security scanners can identify on your websites.

So don’t forget to visit the Netsparker booth while at Istanbul Security Conference, even just to say hello. We look forward to meeting you there.

Introduction to Website Groups in Netsparker Cloud and How To Use Them


In Netsparker Cloud you can use the Groups feature to group a number of websites under a common identifier. By grouping websites you can scan all the websites in that group simultaneously using the same scan policy. Website group scans can also be scheduled, just like single website security scans.

Therefore the Groups feature is another tool in Netsparker Cloud that helps you ease the process of managing the security of many websites. This blog post gives an introduction to Groups in Netsparker Cloud and also uses examples to show how Groups can be used.

Why Use Website Groups in Netsparker Cloud?

Groups allow for better management of the security of all websites in your Netsparker Cloud account, especially if you have a large number of websites. For example you can group websites depending on their location, state or importance. Here are some practical examples:

Example 1: Staging vs Live Environments

You can use Netsparker Cloud to scan web applications during the different stages of development and also once they are live. Since you most probably also use different scan policies for each stage, you can place the staging and live websites in different groups. By doing so you can easily scan all live websites simultaneously using a specific scan policy, or all the websites on the staging server using another scan policy.

Example 2: Locations of Websites

Another example would be to use Groups to split websites depending on their location. For example, since laws differ between the US and the EU, it is normal to run US and EU based websites under different configurations. And since you have to use different scan policies, you can use Groups to easily scan all the websites in a specific location collectively.

Can a Website Be Included in More Than One Group?

Yes, a website can be included in more than one group. For example:

  • Company website (US, Critical groups)
  • Staging Company website (US, Staging, Non Critical groups)
  • Europe Employees Online Portal (EMEA, Critical groups)

The Default Group

By default your Netsparker Cloud account has a built-in group called Default. This group cannot be deleted and unless specified otherwise, the new websites you add to your Netsparker Cloud account will be automatically added to the Default group.

How Can You Create a New Website Group in Netsparker Cloud?

Creating a new website group in Netsparker Cloud

To add a new group in Netsparker Cloud simply click on the New Group node in the Websites sidebar menu, specify a group name and save it.

How Can I Add a Website To A Group?

To add a website to a group navigate to the website's settings and check the tickboxes of the group names you want the website to be part of.

Adding a website to a group in Netsparker Cloud

Overview of Security State of Websites in a Group

Get an overview of the security state of all the websites in the group from the Netsparker Cloud dashboard

To get an overview of the security state of all websites in a particular group, navigate to the Netsparker Cloud dashboard and use the groups drop down menu to select the group. Once you select the group, the Netsparker Cloud dashboard will be updated to reflect the security state of all the websites in the chosen group.

Scanning a Number of Websites Simultaneously with Netsparker Cloud

To scan a number of websites simultaneously in Netsparker Cloud you should launch a group scan. There are three different methods which you can use to launch a website group scan in Netsparker Cloud, all of which are documented below:

From the Manage Groups Node

Managing website groups in Netsparker Cloud

  1. Navigate to the Manage Groups node in the Websites sidebar menu
  2. Click the Scan button next to the group name to configure and launch, or schedule a web security scan.

From the Scans Sidebar Menu

Launch a website group scan from the Scans menu

  1. Navigate to the New Group Scan node in the Scans sidebar menu
  2. Select the group from the Website Group drop down menu and select a scan policy.
  3. Configure any scan options you need and click Launch to start the scan. Otherwise check the Enable Schedule option to configure scheduled website group scans.

From the Group Dashboard

You can also launch a scan from the group’s dashboard view by clicking the Schedule scans for this website group button. As with the other procedures mentioned in this article, proceed to configure, launch or schedule the web security scan.


Understanding the Differences Between Technical and Logical Web Application Vulnerabilities


Web application vulnerabilities can be split into two distinct categories: logical vulnerabilities and technical vulnerabilities. The main difference between the two categories is how they are exploited. Typically, to exploit a technical vulnerability the attacker takes advantage of a coding mistake, such as a lack of sanitization, that allows him to inject malicious code. To exploit a logical vulnerability, the attacker has to find a flaw in the way the web application makes decisions (the logic part), for example when the web application fails to check a user's permissions.

Therefore technical vulnerabilities can be easily detected with an automated web application security scanner but logical vulnerabilities cannot. Let’s look into the ins and outs of both vulnerability categories to find out the whys, whats and whens.

Technical Vulnerabilities

Two popular technical vulnerabilities that we will be looking at in this article are SQL Injection and cross-site scripting. They are considered technical vulnerabilities because, even though there are thousands of different ways to exploit a cross-site scripting or SQL Injection vulnerability, the outcome of a successfully exploited vulnerability is always the same.

How is a SQL Injection Vulnerability Detected?

An SQL Injection vulnerability allows the attacker to bypass the security mechanisms of a website and send SQL commands directly to the backend database. To find out whether a website is vulnerable to SQL Injection, the attacker tries to input malicious code in a website form's input field. If the website responds with a database error message, the website is vulnerable to SQL Injection.
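As a rough sketch of that detection logic (the target URL, parameter and error signatures below are illustrative; real scanners use far more payloads and far smarter response analysis):

import requests

# A few classic error-based SQL injection probes (illustrative only).
PAYLOADS = ["'", "\"", "' OR '1'='1"]
ERROR_SIGNATURES = [
    "you have an error in your sql syntax",    # MySQL
    "unclosed quotation mark",                 # Microsoft SQL Server
    "syntax error at or near",                 # PostgreSQL
]

def looks_sql_injectable(url, param):
    """Send simple probes in one parameter and look for database error messages."""
    for payload in PAYLOADS:
        response = requests.get(url, params={param: payload}, timeout=15)
        body = response.text.lower()
        if any(signature in body for signature in ERROR_SIGNATURES):
            return True
    return False

print(looks_sql_injectable("http://testsite.example/products.php", "id"))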

How is an XSS Vulnerability Detected?

A cross-site scripting vulnerability allows the attacker to bypass the security mechanisms of a website and inject malicious code that is executed when the victim accesses the website. To find out whether a website is vulnerable to cross-site scripting, the attacker tries to inject the malicious code via a website form's input field, for example in a forum or blog post. If the injected code is executed upon a page reload, the website is vulnerable to XSS.
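A very simplified reflection check along the same lines might look like this (hypothetical URL and parameter; a real scanner also has to handle encoding, DOM-based XSS and stored payloads):

import requests

def looks_reflected_xss(url, param):
    """Inject a uniquely marked script tag and check whether it is reflected unencoded."""
    payload = "<script>alert('xssProbe12345')</script>"
    response = requests.get(url, params={param: payload}, timeout=15)
    # If the payload comes back verbatim rather than HTML-encoded, the page will execute it.
    return payload in response.text

print(looks_reflected_xss("http://testsite.example/search.php", "q"))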

Predictable Outcome of Technical Vulnerabilities

Therefore, even though SQL Injection and XSS are exploited in different ways, the end result can be predicted, so it is easy to detect such types of vulnerabilities automatically. As a matter of fact, these are the types of security checks an automated web vulnerability scanner uses when scanning your website for vulnerabilities. It tries to inject malicious code into the website's input parameters and, depending on the response, determines whether there is a vulnerability or not.

The above are just examples of vulnerabilities in their simplest forms. In a real life environment, web applications are much more complex and there are hundreds of variants for each vulnerability class, so it is much easier said than done. Though at least with the above you can get the gist of it.

Logical Vulnerabilities

Like technical vulnerabilities, there are several different types of logical vulnerabilities, though not all of them can be classified under a specific vulnerability class, as the examples below show.

Access Control Logical Vulnerabilities

Access control flaws are a very common type of logical vulnerability. Take an accountancy web application as an example, where there are typically different user roles: users with the chief financial officer role have access to everything, while accounts clerks should only have access to the financial transactions of their own departments.

But what if, because of a flaw in the design, the web application allows the accounts clerks to see the financial records of each other's departments? An automated tool will never be able to detect such a flaw because it does not have the knowledge to determine what a user with an accounts clerk role should or should not be able to access.

In theory, you can configure a scanner to detect a number of logical vulnerabilities that are specific to your setup, but in practice it is not worth it. It would take you hours to figure out, and it would still only cover a limited number of logical flaws.

Other Types of Logical Vulnerabilities

Unlike access control security issues, there are several other logical vulnerabilities that do not fall under a particular class or category. For example, you want to buy a pair of shoes and notice that the web application stores some values in the URL, as shown below:

http://www.example.com/store/order.asp?itemid=4&price=50

What happens if you change the price to ten or one? Does the website still accept the order, which means you got a pair of shoes for $1? If so, the website's logic is vulnerable, and by exploiting it attackers can have a negative impact on the business. During a test, the scanner can change the value of the price parameter, but it is not able to determine whether that is a good or a bad thing.
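The fix for this class of flaw is to never trust values such as prices supplied by the client. A minimal sketch of the server-side check that should exist (the catalogue and handler below are hypothetical):

# Hypothetical server-side order handler: the price always comes from the
# catalogue, never from the request, so tampering with the URL has no effect.
CATALOGUE = {4: {"name": "Running shoes", "price": 50.00}}

def place_order(item_id, requested_price):
    item = CATALOGUE.get(item_id)
    if item is None:
        raise ValueError("Unknown item")
    if requested_price != item["price"]:
        # Reject and log instead of silently charging the tampered amount.
        raise ValueError("Price mismatch: possible parameter tampering")
    return {"item": item["name"], "charged": item["price"]}

print(place_order(4, 50.00))            # legitimate order is accepted
try:
    place_order(4, 1.00)                # tampered price is rejected
except ValueError as error:
    print("Rejected:", error)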

Identifying Logical Vulnerabilities

In most cases, logical vulnerabilities can only be identified by seasoned professionals who are familiar with the scope of the web application and the industry your business operates in. Therefore, if you use third party consultants, make sure you stick to the same ones if you're happy with their work.

And the good thing is that the more they are exposed to your web application, the better they will get at identifying logical vulnerabilities. On the other hand, someone who is new will not be able to determine who should have access to what, even if they have the technical expertise.

Ensuring a Comprehensive Web Application Penetration Test

Automated tools cannot identify all types of vulnerabilities, but neither can we. Automated scanners excel at doing the repetitive work and at ensuring that every single attack surface is checked for thousands of different types of vulnerabilities, thus saving you ample time. We humans have the upper hand in intelligence, but we are prone to making mistakes. We get tired and forget things. Imagine having to manually test hundreds of parameters on a single web application. How long do you think it would take you? And how would you ensure that after you have been at it for hours, and most probably days, you will not miss an attack surface?

Take advantage of the situation. Use automated tools. Use fast automated web application security scanners that allow you to finish the job as quickly as possible, so you have enough time to check for logical and other types of hairy vulnerabilities.

Security Weekly and Ferruh Mavituna Talk Automation and Scaling Up Web Application Security


Ferruh Mavituna, our CEO and product architect, was interviewed on last week’s Episode #442 of Security Weekly. For those who are not familiar with Security Weekly, it is the most popular weekly webcast show, in which host Paul Asadoorian discusses everything related to IT security with different industry leaders and security professionals.

During last week's interview, Ferruh Mavituna, Paul Asadoorian, Jeffrey Man and other security professionals looked into several aspects of web application security, such as:

  • Automation of identification of security flaws in web applications and time management for penetration testers and security professionals.
  • What can and cannot be automated in web application security, and what we might see in the future of automation.
  • Scaling up web application security; how to secure 100+ websites with limited resources?
  • Why integrating web application security into the SDLC has become even more important, now that large companies such as Facebook and Dropbox push new code to production multiple times a day.
  • How Bug Bounties are making young security researchers lazy by focusing only on the outcome rather than understanding the cause of the security issue.

Latest Report Points to a 45% Increase in Web Application Attacks


A few weeks back Alert Logic released their latest cloud security report. The report highlights the current rise in web application attacks. In short it states: “Businesses with a large volume of online customer interactions are targeted for web application attacks in order to gain access to sensitive customer & financial data”.

This 45% increase in attacks on web applications in 2014 also means that these types of occurrences represent at least 70% of all attack incidents on cloud-based web applications, where businesses typically store confidential data about their own business and customers.

What Is Driving This Increase in Web Application Attacks?

The growth and popularity of public cloud providers such as Rackspace and Amazon Web Services have seen all types of businesses shifting to more affordable and efficient cloud-based infrastructures. Hackers have also noticed this trend and have adapted their attack methodology accordingly. The biggest mistake many organizations are making is to assume that securing cloud-based web applications, software and data is the responsibility of cloud providers.

Therefore the biggest drivers of these attacks, and the related threat vectors, are the vulnerabilities in a business's customer-facing web applications, such as customer and online banking portals. What this means, in simple terms, is that the amount of online interaction a business has with customers determines the attack vectors that an attacker will use against it.

As the current trend shows, and as web application attacks continue to grow in volume, business owners need to maintain their own risk management protocols and not rely solely on the security of the cloud service provider. For this reason, using web application vulnerability scanners like Netsparker and Netsparker Cloud enables stakeholders to scan for and identify potential security weaknesses in their applications, such as Cross-Site Scripting (XSS) and SQL Injection vulnerabilities.

As we saw in late October this year, the “Talk Talk” hack was achieved through an SQL Injection exploit that allowed the attackers to access databases holding names, addresses and financial information of thousands of customers.

What are the Top 10 Types of Attacks per Industry?

  • Transportation: 77% Application Attack
  • Real Estate: 55% Application Attack
  • Advertising: 54% Application Attack
  • Retail: 55% Application Attack
  • Computing Services: 48% Application Attack
  • Manufacturing: 46% Application Attack
  • Mining: 70% Trojan
  • Healthcare: 39% Brute Force
  • Accounting/ Management: 37% Brute Force
  • Financial Services: 33% Brute Force

Moving Forward - Ensure the Security of Your Web Applications

2015 already saw a number of high-profile breaches that included a major Hollywood studio, the biggest online dating site, and a Telecommunications company, to mention just a few. According to Alert Logic, more than 85 million records were lost via data breaches, both from internal and external attackers.

The only ‘silver lining’ to these high-profile security breaches is that they highlight just how important it is for business owners to take all appropriate action to ensure their websites and web applications are secure. Download the Alert Logic Cloud Security Report for more detailed information and statistics.

Configuring and Managing Scan Policies in Netsparker Cloud


A Scan Policy is a set of instructions that tells the scanner and crawler what they should do during a web application security scan. In a Netsparker Cloud Scan Policy you can configure the list of web security checks the target website should be scanned for, as well as crawling and attacking, HTTP connection, autocomplete and several other settings.

The Need to Optimize Scan Policies

Before looking into how to manage scan policies in Netsparker Cloud, it is important to point out that what you configure in the Scan Policy can have an impact on the duration of the scan, hence it is important to optimize your scan policies. You can read more about this subject in the post Optimize Netsparker Scan Policies for Quicker and More Efficient Web Application Security Scans.

Use the Scan Policy Optimizer for Automated Optimization

The Netsparker Cloud online web application security scanning service has a built-in, wizard-based Scan Policy Optimizer which you can use to automatically create a Scan Policy for your target website(s) within just a few seconds. Should you wish to manually optimize the Scan Policies, you can still do so, as explained in this post.

Managing Scan Policies in Netsparker Cloud

You can manage the Netsparker Cloud scan policies from the Scan Policies node in the Policies menu.

Scan Policies in Netsparker Cloud

Default Scan Policies in Netsparker Cloud

By default Netsparker Cloud has the following Scan Policies:

  • Default Security Checks which includes all the security checks.
  • DotNet Policy which can be used to scan .Net applications.
  • WAVSEP which can be used to perform test scans on the Web Application Vulnerability Scanner Evaluation Project.

Default Scan Policies cannot be modified or deleted. If you would like to modify a default Scan Policy click the Clone button next to the Scan Policy name, modify it as per your requirements and save as a new scan policy.

Creating a New Scan Policy

To create a new Scan Policy you can either clone an existing Scan Policy by clicking Clone next to an existing scan policy name or create a new one by clicking New Scan Policy.

Creating a new Scan Policy in Netsparker Cloud

In the New Scan Policy page specify a name and description. Should you wish other users to use your scan policy tick the option Is Shared. For more information on sharing scan policies refer to the section Sharing Scan Policies further down in this post.

Configuring the List of Security Checks

By default all the security checks will be enabled in a new Scan Policy. Browse through the list and disable the security checks you do not want to run during a web security scan.

Configuring the list of web security checks in a scan policy

Configuring All the Other Options

All the other options in the Scan Policy such as the Crawling, Attacking and Ignored parameters will retain the default values unless configured otherwise. Hence you only need to configure those options you want to change.

Configuring more options in the Scan Policy

Sharing Scan Policies

The Scan Policies you create will be tagged as Mine and by default they can only be used by you, which is why they are also tagged as Private.

Shared and Private Scan Policies in Netsparker Cloud

If you tick the Is Shared option when creating a Scan Policy, specify the groups with which the Scan Policy should be shared, so anyone who has access to those groups can use it. Users can use and clone a shared Scan Policy, but they cannot modify it.

Configuring a scan policy as shared

Scaling Up and Netsparker Cloud Scan Policies

You might not necessarily need to optimize the Scan Policies when scanning a small number of websites, especially if they are not complex web applications. Though when scaling up to scan hundreds or thousands of websites, you cannot afford not to configure Scan Policies. The time you need to configure the Scan Policies will be much less than the time the scanner needs to scan complex websites. And anyway, with the automated Scan Policy Optimizer it will only take you a few seconds to optimize the Scan Policies.

Use Tasks in Netsparker Cloud to Ensure All Identified Vulnerabilities are Fixed And Improve Team Collaboration


Managing the development, upkeep and security of enterprise web applications is a difficult process, and unless you have the right tools you won’t go far. Enterprise web applications are very extensive in terms of functionality and are developed by a team of developers. Therefore ensuring that all identified vulnerabilities are fixed can be quite a feat, especially when you have to go through all the bureaucracy of large enterprises to chase developers. The task can become even more difficult when you have to manage the security of hundreds, or even thousands, of websites and web applications.

Netsparker Cloud was specifically designed for this; to help you and your team ensure the security of hundreds and thousands of websites. In fact, Netsparker Cloud is not just another online web application security scanner. It also has features to help you automate and improve the post-scan stage, making Netsparker Cloud a one-stop web application security solution.

In this post, we will look into Netsparker Cloud’s Tasks, a feature that helps you automate the process of alerting and chasing developers to fix vulnerabilities and security flaws.

What are Tasks in Netsparker Cloud?

Similar to a bug tracking system, Tasks allows you to assign identified web vulnerabilities as a task to a developer. Therefore, Tasks allow you to easily follow up on the progress of vulnerability fixes, automate the fix verification process (Netsparker Cloud does it automatically for you) and ensure that all vulnerabilities are fixed before the web application is live.

Tasks Lifecycle

A Task can have different statuses. This section explains what each status means and how to use them to automate as much of the follow-up and chasing process as possible.

Open Status

When a new task is assigned to a team member, it has an Open status. At this stage, the assignee receives an email with the details of the task and the identified vulnerability. The assignee can change a task’s status to Fixed or Ignored and can also assign the task to another Netsparker Cloud user.

Fixed Status

Once the assignee fixes the vulnerability and changes the task’s status to Fixed, Netsparker Cloud will automatically scan the fix. If the vulnerability is fixed, Netsparker Cloud will automatically mark the task as Completed, so you do not have to manually verify the fix. If the vulnerability is not fixed, the task is automatically reassigned back to the assignee.

Ignored Status

The assignee can also set a task’s status to Ignored. This status means that Netsparker Cloud will not take any further automated action in regard to this task. The status of an Ignored task can only be changed manually by a user.

NOTE: Each time a task’s status is changed, both the user who opened the task and the assignee are notified via email. Netsparker Cloud users can also manage the list of tasks from the Tasks node, as explained later in this article.

Working With Netsparker Cloud Tasks

Assigning a Task in Netsparker Cloud

To assign a task to a developer:

1. From the scan results select the vulnerability you want to assign as a task and click the +Create Task button in the top right corner.

Select a vulnerability and click the Create Task button

2. In the Create Vulnerability Task page, specify to whom the task should be assigned from the Assignee drop-down menu and, should you need to, add a note to provide more details. Click Save to create the task.

Assigning a vulnerability as a task in Netsparker Cloud

The assignee will receive an email notification about the new task assigned to him, as shown in the screenshot below:

Email notification from Netsparker Cloud alerting user a new task has been assigned to him

Fixing a Vulnerability and Updating a Task

The assignee can view the details of a task by clicking on the task’s name in the email or by logging in to the Netsparker Cloud dashboard and navigating to the Tasks > To Do node.

View the tasks assigned to you in Netsparker Cloud in the To Do section

Once the vulnerability is fixed the assignee can change the task’s status to Fixed by following the below procedure:

  1. Click on the Go To Task button
  2. Change the task status to Fixed, add a note should it be required and then click Save.

Mark a vulnerability as fixed so Netsparker Cloud will automatically check the fix

Once the vulnerability is marked as Fixed, Netsparker Cloud will automatically scan the target to confirm the fix. To see a list of tasks which are waiting to be tested, navigate to the Tasks > Waiting for Testing node in the Netsparker Cloud dashboard.

A vulnerability fix that will be checked automatically by Netsparker Cloud

Viewing a Task’s History

Netsparker Cloud keeps a record of every change in a task, for example when the status or the assignee of a task is changed. To view all the changes of a task refer to the History section at the bottom of the task’s page, which is shown in the below screenshot.

Viewing the history of the vulnerability task

Managing Tasks and Ensuring the Security of All Your Web Applications

Every Netsparker Cloud user has a Tasks node that they can access from the dashboard. From this node users can see the status of and detailed information about every task they have dealt with. The sub-nodes are listed below:

Netsparker Cloud Tasks Menu

To Do: this section lists all the tasks that have been assigned to you and that you need to take action on.

Assigned Tasks: this section lists all the tasks you assigned to other Netsparker Cloud users.

Waiting for Testing: this section lists all the tasks that are marked as fixed and are waiting to be tested automatically by Netsparker Cloud.

Completed Tasks: this section lists all the tasks that have been marked as fixed and Netsparker confirmed the fix. It also includes the tasks that have an Ignored status.

All Tasks: this section lists all the tasks a user has ever dealt with.

Getting Things Done with Netsparker Cloud Tasks

The Tasks functionality makes Netsparker Cloud a one-stop web application security solution. Netsparker Cloud goes beyond the remit of a normal web security scanner that is used only to detect vulnerabilities. It enables you to assign tasks and track their progress via the Tasks node on the dashboard. You can also follow up on the tasks assigned to team members and view the current status of each assigned task. This provides a full record of when and how security vulnerabilities were fixed and who fixed them. This one feature alone is enough to put Netsparker Cloud in a different class of web application security scanners.

Netsparker Heading to RSA Conference 2016 in San Francisco


Visit the Netsparker Booth at RSA 2016 in San Francisco, USA

This year Netsparker will be exhibiting at the RSA Conference in San Francisco, USA. The event will be held from February 29th to March 4th at the Moscone Centre.

Several Netsparker team members will be representing Netsparker at stand #N4326, and will be available to answer any questions you might have about automatically detecting SQL injection, XSS and other vulnerabilities with our web application security scanners Netsparker Desktop and Netsparker Cloud.

Visit the RSA Conference website for a copy of the agenda and for more information about the workshops and tracks that will be held.

Register for a Complimentary Hall Pass at RSA Conference 2016

Click here and use the code XENETSPRK16 during the registration process for a complimentary hall pass.

Don't forget to drop by our stand #N4326, for more information on how Netsparker can help you find vulnerabilities in your websites before a hacker does. If you do not have any questions, you should still pass by to say hello. Last year’s merchandise was a big hit, so come and check out the goodies we have for this year.

Looking forward to meeting you there!

Configuring and Managing Scan Profiles in Netsparker Desktop


Netsparker Desktop Scan Profiles allow you to save all of the pre-scan settings, so you can load them at a later stage and use them for other web application security scans.

Why Should You Use Scan Profiles?

If you frequently scan a number of different websites, each of which requires a different configuration, you can save the pre-scan settings for each individual website as a Scan Profile. The next time you need to scan a website, you can simply load its Scan Profile and launch the scan, rather than having to reconfigure the scanner each time.

Which Settings Are Saved in a Scan Profile?

 Highlighting the settings that are saved in a scan profile

All the scan settings you can configure from the Start a New Website Scan dialog box (highlighted in the above screenshot) are saved in the Scan Profile. These are:

  • Target URL
  • Scan Policy
  • Custom cookies
  • Crawling options
  • Scan Scope
  • Excluded and Included URLs
  • List of imported links
  • URL rewrite rules
  • Authentication settings

Highlighting the Changes in a Scan Profile

When you change any of the settings in the Start a New Website Scan dialog, the node in which the changes are made is marked in bold and underlined. This allows you to easily identify where the changes have been made. For example in the below screenshot the General node is highlighted because we enabled the option Pause Scan After Crawling.

 When you change a setting in Netsparker Desktop it is highlighted

This feature is also useful when you load a Scan Profile; you can quickly see which nodes in the profile have been modified.

How to Create a New Scan Profile in Netsparker Desktop

Once you configure all the settings in Netsparker Desktop, you can save them as a Scan Profile by clicking the arrow icon next to the Previous Settings button and selecting Save As New Profile… from the drop down menu.

 Saving a new scan profile in Netsparker Desktop

How to Load a Saved Scan Profile

If you want to use a saved Scan Profile click the arrow icon next to the Previous Settings button and select the profile’s name from the drop down menu. In the screenshot below we are loading the PHPTestsparker Scan Profile.

  Loading a saved scan profile in Netsparker Desktop

How to Change the Settings Saved in a Scan Profile

To change the settings saved in a Scan Profile, load the Scan Profile, make the required changes and then save them by selecting Save Profile from the drop down menu.

Default Scan Profiles in Netsparker Desktop

Default scan profiles in Netsparker Desktop

The Netsparker Desktop web security scanner has two built-in Scan Profiles: Default and Previous Settings.

- The Default Scan Profile has the default configuration.

- The Previous Settings built-in Scan Profile is used by the scanner to save the settings of the Scan Profile used in the previous scan. Therefore, even if you used a saved Scan Profile in a previous scan, its settings will be saved in the Previous Settings Scan Profile.

 

Managing Scan Profiles in Netsparker Desktop

Netsparker Desktop Scan Profiles are saved as XML files in the following location:

%USERPROFILE%\Documents\Netsparker\Profiles

You can delete or back up Scan Profiles directly from this location.
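For example, a small script along these lines (an illustration, not a Netsparker feature) could back up the profile folder before an upgrade or reinstall:

import os
import shutil
from datetime import datetime

# Copy the Netsparker Desktop scan profiles to a timestamped backup folder.
profiles = os.path.expandvars(r"%USERPROFILE%\Documents\Netsparker\Profiles")
backup = profiles + "-backup-" + datetime.now().strftime("%Y%m%d-%H%M%S")
shutil.copytree(profiles, backup)
print("Scan profiles backed up to", backup)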


The New Netsparker Web Security Scanners: Automated Configuration of URL Rewrite Rules, Scan Policy Optimizer and Proof of Exploitation


We are excited to announce the release of a new version of Netsparker Desktop, and an update for Netsparker Cloud web application security scanning service. There are quite a few new features to talk about, so let’s get started.

The new features, automatic configuration of URL rewrite rules and the Scan Policy Optimizer, will automate more of the pre-scan process for you, making the scanning of hundreds and thousands of websites an easier task. We are also introducing the new proof of exploitation, which will definitely ease the post-scan process for you, as explained further down in this post.

These new updates also include a number of new web security checks and several internal product improvements, such as the fully responsive Netsparker Cloud dashboard. Below is a highlight of the main features.

Automated Configuration of URL Rewrite Rules in Netsparker Web Security Scanners

Netsparker scanners no longer require you to configure URL rewrite rules. The new web security scanners will automatically configure the URL rewrite rules needed to scan all the parameters in URLs. Configured URL rewrite rules also mean more efficient scans.

Automatically configured URL rewrite rules in Netsparker Desktop

If you wish to manually configure URL rewrite rules in Netsparker scanners, it is still possible. Though if you do not have detailed knowledge of the target website’s setup, or have to scan hundreds or thousands of websites, you no longer need to get bogged down in such pre-scan tasks. Read the whitepaper Automating the Configuration of URL Rewrite Rules in Netsparker Web Application Security Scanners for more detailed information on this new unique technology.
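To illustrate what a rewrite rule accomplishes conceptually (this is a generic sketch, not Netsparker's rule syntax): a rule maps the segments of a "pretty" URL onto named parameters, so the scanner knows which parts of the path to attack.

import re

# Generic example: treat the second path segment of /product/<id>/review as a parameter.
RULE = re.compile(r"^/product/(?P<id>[^/]+)/review$")

def extract_parameters(path):
    """Return the attackable parameters hidden inside a rewritten URL path."""
    match = RULE.match(path)
    return match.groupdict() if match else {}

print(extract_parameters("/product/15/review"))   # {'id': '15'} - the scanner can now fuzz 'id'
print(extract_parameters("/about"))               # {} - no hidden parameters in this path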

Scan Policy Optimizer for Shorter & More Efficient Web Security Scans

Optimized scan policies mean shorter and more efficient scans, though not everyone has the time or knowledge to manually optimize web security scan policies. For this reason, our automation-obsessed engineers came up with the Scan Policy Optimizer: a wizard-based optimizer that enables you to optimize scan policies according to your target website within just a minute.

Scan Policy Optimizer Summary

Proof of Exploitation, So You Do Not Have To Verify All The Scanner Findings

Automatic exploitation of identified vulnerabilities is something we pioneered with the first release of the Netsparker web application security scanner. With such technology you do not have to manually verify all of the scanner’s findings, easing the post-scan process.

Ever since, we have been continuously improving this unique technology, and with this new release we are announcing a major improvement: proof of exploitation. Upon automatically exploiting a vulnerability, the scanner now also generates proof of the exploit. For example, in the case of a Command Injection, the scanner will send certain commands and show the server's response to them in the vulnerability report.

Proof of a command injection

Besides marking the vulnerability as “CONFIRMED”, Netsparker now provides conclusive proof as well.
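As a much-simplified illustration of the idea (the vulnerable endpoint and parameter below are hypothetical, and a real scanner is far more careful about the payloads it sends): inject a harmless command whose output is easy to recognize, then include the matching part of the response as evidence.

import re
import requests

def proof_of_command_injection(url, param):
    """Inject a harmless 'id' command and capture its output from the response as evidence."""
    response = requests.get(url, params={param: "127.0.0.1; id"}, timeout=15)
    evidence = re.search(r"uid=\d+\([^)]*\)", response.text)
    return evidence.group(0) if evidence else None

proof = proof_of_command_injection("http://testsite.example/ping.php", "host")
print("Proof of exploitation:", proof or "no evidence found")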

Export Identified Web Security Flaws as Issues into Github and Team Foundation Server with just a Click

You can now configure Send To actions in the Netsparker web application security scanner to migrate identified security flaws to Github and Team Foundation Server with just a single mouse click. All you need to do is configure the credentials and projects. Then simply right-click an identified vulnerability and select the server you would like it to be automatically added to as an issue in your projects.

Export identified web vulnerabilities to JIRA, Github and other bug tracking and source control systems

Responsive Netsparker Cloud Dashboard for Mobile and Tablet Users

The new updated Netsparker Cloud dashboard is fully responsive. Now you can check the status of your web application security scans from your mobile phone or tablet. There is no difference to accessing Netsparker Cloud from your portable device or your computer; you can still review scan results, assign vulnerabilities as tasks and launch new web application security scans.

Screenshots: list of scheduled and completed web security scans, summary of vulnerabilities identified on a target website, lists of tasks, dashboard, a reported cross-site scripting vulnerability, scan summary of a target website, and scan policies in Netsparker Cloud.

New Web Security Checks in Netsparker Desktop & Netsparker Cloud

Here are some of the new web security checks included in the latest version of the Netsparker web security scanners:

  • Hidden directory checks for detection of admin panels
  • Security checks for Windows short file/folder name disclosure
  • Ruby on Rails and RubyGems security checks such as:
    • checks for database configuration files
    • checks for version in HTTP responses
    • check if version is out of date
    • check for status of development mode
  • Backdoor checks for MOF Web Shell and DAws.
  • New attack patterns for "boot.ini" LFI checks.
  • New knowledge base nodes for SSL issues, CSS and slow pages

Improved Security Checks

  • MySQL "LIMIT" injection attack patterns.
  • MSSQL error based SQLi attack payloads.

Other Noteworthy Features & Improvements

  • New template for HIPAA compliance report
  • Windows 10 support
  • Added syntax highlighting in HTTP request and response viewers for XML, JSON, CSS, JavaScript etc
  • Several performance and memory management improvements

Complete List of What is New and Improved in New Netsparker Scanners

For a complete list of what is new and what has been improved in the latest versions of Netsparker Desktop and Netsparker Cloud refer to the changelog.

Automate More of Your Web Application Security

Web application security is difficult, hence the tools and services your business invests in should be easy to use and help you automate as much as possible. And this is exactly what Netsparker web security scanners do: they help you identify vulnerabilities in web applications and ensure they are fixed with the least possible effort from your end. Apply now for a free trial of Netsparker Cloud or download a demo of Netsparker Desktop to see the difference.

Netsparker Sponsors “Let's Encrypt”, the free, automated and open Certificate Authority


Let’s Encrypt

Today we are happy to announce that we sponsored Let’s Encrypt.

What Is Let’s Encrypt?

Let’s Encrypt is a free, automated, and open certificate authority (CA), run for the public’s benefit. The service is provided by the Internet Security Research Group (ISRG). In other words, Let’s Encrypt is a service which can provide you with a free TLS certificate so you can run your website on HTTPS.

Why Did We Sponsor Let’s Encrypt?

Many of you might have noticed that our website has always run entirely on HTTPS. The reason is simple: we believe that your browsing session should always be private. There are several other reasons why it is recommended to encrypt visitors’ sessions, but that is out of the scope of this announcement.

Yet not every organization, especially startups and open source projects, has the budget to get a TLS certificate to run their website on HTTPS. So we joined companies such as Facebook, Cisco and several others in supporting this cause, to make sure everyone can run their website on HTTPS, in turn making the internet a better place.

Do you own a website? Go ahead and get your TLS certificate for free from Let’s Encrypt. And if you contribute to an open source project, you can also get free web application security scans with our online scanner Netsparker Cloud.

Getting Started with Let’s Encrypt

Getting started with Let’s Encrypt is very simple: no verification emails or calls, simple configuration and no payments. It is all documented in their How it works section. For more detailed documentation, such as the Developer Guide, refer to their full documentation.

Ensure All the JavaScript Libraries Your Developers Use in Web Applications Are Not Vulnerable


Almost any type of modern web application uses some sort of popular JavaScript framework or library, such as AngularJS and jQuery. These JavaScript libraries gained a lot of popularity because they allow developers to easily build dynamic and interactive web applications, without the need to develop all the functionality themselves. They build the web application around the JavaScript library’s functionality.

The Importance of Keeping All JavaScript Libraries Up to Date

There are several other advantages to using such JavaScript libraries. For example, you are guaranteed a stable product because most of these libraries are thoroughly tested by the public. Though like any other software component, these JavaScript libraries have their own security issues. In fact, in 2015 we published a few advisories about vulnerabilities in JavaScript libraries.

If a JavaScript Library is Vulnerable, Your Web Application is Vulnerable as well

Therefore, unless the JavaScript libraries you use in your web applications are kept up to date, your web application might be vulnerable. And how do you ensure that none of your JavaScript libraries are vulnerable? By scanning your web applications with a Netsparker web security scanner.

Netsparker’s JavaScript Libraries Fingerprinting Engine

In both the latest Netsparker Desktop and Netsparker Cloud versions, which we announced this January, we included a new JavaScript Libraries engine. This new engine is able to identify the JavaScript libraries used on a target web application and their versions.

If an outdated JavaScript library is identified, which possibly could also be vulnerable, Netsparker web application security scanners will raise an alert and also report the vulnerabilities associated with that version of the library, as seen in the below screenshot.

A vulnerable version of AngularJS was identified on target website with Netsparker web application security scanner
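A heavily simplified sketch of this kind of fingerprinting might look as follows (this is not Netsparker's actual engine, and the version thresholds are made up for illustration): find library references in the page, read the version out of the script URL, and compare it against known-vulnerable releases.

import re
import requests

# Illustrative only: versions below these thresholds are treated as having known advisories.
KNOWN_ISSUES = {"jquery": "1.12.0", "angular": "1.5.0"}

def fingerprint_libraries(url):
    """Return (library, version) pairs found in the script src attributes of a page."""
    html = requests.get(url, timeout=15).text
    pattern = re.compile(r'src="[^"]*?(jquery|angular)[.-](\d+\.\d+\.\d+)[^"]*"', re.IGNORECASE)
    return [(name.lower(), version) for name, version in pattern.findall(html)]

for name, version in fingerprint_libraries("http://testsite.example/"):
    threshold = KNOWN_ISSUES[name]
    if tuple(map(int, version.split("."))) < tuple(map(int, threshold.split("."))):
        print(name, version, "is outdated and may contain known vulnerabilities")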

Which JavaScript Libraries Do Netsparker Scanners Detect?

The first version of the JavaScript libraries scanning engine can already fingerprint twenty of the most popular libraries, such as jQuery, jQuery-mobile, AngularJS, backbone.js and easyXDM. We will be updating the JavaScript libraries fingerprinting module with future updates, ensuring it can identify more JavaScript libraries. For a complete list of the JavaScript libraries, and to disable or enable the JavaScript fingerprinting module, open the Scan Policy Editor, as shown in the below screenshot.

The JavaScript Libraries check in the Scan Policy Editor

Netsparker Announces Better Coverage and Security Scanning of Single Page Applications (SPA)


Today we announced the release of Netsparker Desktop version 4.5.7 and Netsparker Cloud 20160129. With this release, we are shipping a new, updated version of the DOM parser. This means that Netsparker now has much better coverage and scanning capabilities for single page applications (SPA) and modern web applications that depend heavily on JavaScript.

What are the Under the Hood Improvements?

You were already able to scan single page web applications for vulnerabilities with previous versions of the Netsparker scanners. Though with these new updates we have seen a good improvement when it comes to coverage of both SPAs and web applications that use a lot of JavaScript. And coverage is very important, because unless a parameter is crawled, it won’t be scanned.

The new version of the DOM parser is able to simulate a user more accurately and has better handling of multiple levels of JavaScript interactions. For example, when it simulates a mouse click or a mouseover, it will detect all the new changes in the web application. The same applies when you are using Gmail: when you click Compose, a new section of the web application is opened with new input parameters. Netsparker can now deal with this kind of design much better than ever before.
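Conceptually, this is what the parser's event simulation is doing. The sketch below uses Selenium (assuming a local Chrome driver and a hypothetical single page application) to show the core idea: trigger an event, then diff the set of input elements to discover parameters that only exist after the interaction.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://spa.example/")                  # hypothetical single page application

def input_names():
    return {e.get_attribute("name") for e in driver.find_elements(By.TAG_NAME, "input")}

before = input_names()
driver.find_element(By.ID, "compose").click()      # simulate the user interaction
time.sleep(2)                                      # crude stand-in for an inter-event timeout
print("Inputs discovered only after the click:", input_names() - before)
driver.quit()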

We have also improved the automatic submission of forms in web applications. In previous versions of Netsparker, the scanner was only populating and submitting a form during the crawling and attacking stages, using the details specified in the Form Values section of the scan policy.

Configuring pre-defined form values in Netsparker web application security scanner

From this version onwards, Netsparker will also populate and submit forms according to the rules specified in the Form Values settings, even when analyzing client-side scripts. This means that it can bypass client-side checks, allowing it to do more thorough web security scans.
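The Form Values idea itself is simple: match each field name against a set of configured patterns and fill in a sensible value. A minimal sketch (the patterns and values here are made up, not Netsparker's defaults):

import re

# Illustrative pattern-to-value rules, similar in spirit to a Form Values configuration.
FORM_VALUE_RULES = [
    (re.compile(r"mail", re.IGNORECASE), "scanner@example.com"),
    (re.compile(r"phone|tel", re.IGNORECASE), "5555555555"),
    (re.compile(r"name", re.IGNORECASE), "Netsparker Test"),
]

def fill_form(field_names, default="1"):
    """Return a value for every form field, using the first matching rule."""
    return {
        field: next((value for pattern, value in FORM_VALUE_RULES if pattern.search(field)), default)
        for field in field_names
    }

print(fill_form(["email", "full_name", "phone_number", "coupon"]))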

Configuring the Netsparker JavaScript Analyzer

Even though an out-of-the-box installation of Netsparker web application security scanner is able to scan SPA applications without any problems, we included a number of new settings that allow you to fine tune the scanner, should you need to.

Configuring the DOM / JavaScript parser in a  Netsparker Scan Policy

The new JavaScript Analyzer settings can be configured from the JavaScript node in a Scan Policy. Below is a list of all the options:

Load Preset Values: Use this drop down menu to select a built-in preset of settings the scanner has.

DOM Load Timeout: This is the timeout for the page to load, including the downloading and browser rendering time.

DOM Simulation Timeout: This is the timeout for the whole parsing operation of a single page. In the case of a large application it might not be feasible to parse the entire application, since parameters are only identified until the timeout is reached. The value of this timeout can have an impact on the scan duration.

Interevent Timeout: This value defines how long the scanner should wait for a response after triggering a DOM/JS event. During this time no other DOM/JS events will be triggered by the scanner.

Max Simulated Elements: This value defines the maximum number of DOM elements the parser will simulate before terminating the parsing phase.

Skip Threshold and Elements to Skip: These two settings are used to specify how many elements should be parsed (Skip Threshold) before the parser starts skipping (Elements to Skip) some elements. For example, if the Skip Threshold is set to 1000 and Elements to Skip is set to 10, after simulating 1000 elements, the parser will not simulate elements 1001 to 1009. Element 1010 will be simulated. The idea behind these settings is to diversify the simulation.

Max Modified Element Depth: This setting specifies the maximum number of levels the DOM parser should follow when a DOM modification is triggered as a result of another simulation or modification. This can be used as a sort of infinite-loop protection.

For example, imagine a case where clicking a button creates another button, clicking that new button creates yet another, and so on. This depth setting allows you to control the maximum depth the simulation will go to in such cases.

Generate Debug Info: When this option is enabled, the DOM parser will write the diagnostics information to a log file in the scan folder, including data about the coverage. When this option is enabled, the scan may be slowed down and will use some additional disk space.

The Importance of Finding All Vulnerabilities on Your Web Applications


Web vulnerabilities checklist

Many businesses understand that it’s important to properly manage their web application security. But in truth, it goes far beyond the need to simply “avoid being hacked”. There are often serious liabilities associated with the failure to properly manage your security.

Unfortunately, many of those liabilities are an afterthought, until of course there is a security breach and it’s too late.

If you aren’t properly managing and implementing best practices, and even minimum compliance requirements, you could be putting the future of your business at risk.

In this post, we’re going to discuss the importance of scanning for, finding and removing all vulnerabilities within your web application. We’ll also tie this into the importance of complying with requirements such as PCI because these are often where the greatest liabilities exist.

Web Application Security and the Risk of Liability

A common question that tends to arise as a result of managing security vulnerabilities is whether or not the time and cost involved might provide a return on investment. Your initial inclination might be to look at the process of automated and manual security scanning as an expense, but that’s often not the case.

Although this makes sense from an accounting standpoint, you also need to consider the potential liabilities as a result of a security lapse.

For a small business, the result of a potential lawsuit could mean bankruptcy. Let’s take a quick look at a few examples:

As recently as January 2016, the New York Attorney General’s Office announced a settlement with Uber, under the terms of which they agreed to pay a fine of $20,000 and change their security practices to protect customer data. It’s small change for Uber, but a significant penalty for a small startup developing a web application.

At the other end of the spectrum, in late 2015, Wyndham Worldwide Corp. settled with the FTC and was required to implement a comprehensive security program after hundreds of thousands of customers had sensitive payment information exposed to hackers. Wyndham already had a security program in place, which obviously left at least one backdoor open. As a result of their breach, they now face 20 years of increased scrutiny. Imagine the added burden, both in terms of time and expense that your company would face as a result of a similar judgment.

The point I am trying to make is that you shouldn’t be looking at the security of your web application as an expense. Instead, consider the potential ROI that is achieved when you are able to thwart the most recent exploit in the wild. Being proactive is always the preferable choice.

Security Compliance is Not Enough

An organization like the PCI Security Standards Council helps to put standards in place to protect cardholder data around the world. The problem is, these PCI security standards are the minimum requirements needed to remain compliant. PCI DSS compliance is a great place to start, despite its nondescript nature. The fact remains that it only takes a single vulnerability to put thousands, or tens of thousands, of your customers’ confidential information at risk.

Web Application vulnerabilities and exploits are constantly evolving. Any compliance standard put in place today could be out of date as early as tomorrow as a result of a new exploit. This alone makes compliance a minimum standard, not an acceptable level of achievement.

Again, a large-scale security breach requires but a single vulnerability – be it XSS, SQL Injection or directory traversal. It’s simply not acceptable to be satisfied with finding “most” of the vulnerabilities in your web application – you need to find them all.

How You Can Find All Application Vulnerabilities

One of the risks of relying on standards such as PCI DSS is that just because you are compliant does not mean your application is secure. In contrast, if your web application is properly secured, you will have more confidence in your ability to maintain PCI DSS compliance. Although compliance is mandatory, a secure web application is more important.

It’s important to realize that there is no single solution capable of eliminating all web application security vulnerabilities. In almost all cases, effective scanning requires both automation and manual processes (i.e. human assessment). An approved PCI ASV will use a combination of these two methods as well, necessitated by the complexity of today’s web applications. If your current method of assessing vulnerabilities consists of only one of these methods, you need to reassess your security practices.

Your Business Depends upon You Exposing All Vulnerabilities

There is no debating the importance of maintaining compliance standards. In the event of a negligent breach, the penalties can be financially severe and the consequences onerous, as seen in the examples above.

Beyond the compliance requirements, it’s equally important to consider the potential impact on the goodwill of your business. The trust of your customers is an intangible asset that can take years to earn. A security breach which puts their personal and confidential information at risk can cause irrevocable damage.

The most important point to understand is that a single vulnerability can expose you to an untold amount of risk. The process of properly scanning your web application for all vulnerabilities should be considered a high-ROI activity.
