Thursday, April 15, 2010

Smartphone Application Security

I've been dabbling in smartphone application security for a while now, focusing primarily on the Android and iPhone platforms. I have done a few assessments of purchased applications, and I must say I am more than a bit concerned.

These mobile devices are unlike any mobile device we have had to deal with in the past. A smartphone is turned on most of the time, sits in a pocket or purse of its owner, and in some cases is connected directly to a work VPN. This model is similar to a laptop, but not really: a laptop is not always connected and most of the time is not physically on the owner outside the office, airport, or local coffee shop. Security on smartphone devices is generally left up to the owner (except in some cases when a company manages password policies on the devices).

Consumers routinely treat their mobile devices quite a bit differently than a laptop because they aren't as powerful. Until now, that is. With the recent release of smartphone devices with 1GHz processors, we have crept into a market that has for the most part been under the radar (unless we speak of BlackBerrys, which focus primarily on email and calendaring).

On the other side, the recent releases of the iPhone App Store and the Android Market open the smartphone platforms to many different applications that can do almost anything one can imagine. The problem is that these applications are seen as money by software developers and companies, and little time is spent on securing them. A recent review of a purchased application uncovered some serious vulnerabilities, one of which was storing the user's password as an unsalted SHA-1 hash in a properties file. As we all know, unsalted SHA-1 password hashes can easily be broken using rainbow tables. There were also client-side settings that could easily be modified by an attacker, such as account lockout, timeout, and password failure count. I feel like the application world in general realizes these things are bad practice, but in the smartphone and mobile application market there is a different mindset.
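
To make the point concrete, here is a minimal sketch of what that application could have done instead: derive a salted, iterated hash (PBKDF2, which ships with the standard Java crypto APIs) rather than storing a bare SHA-1 digest. The class and values below are my own illustration, not code from the reviewed application:

{code}
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHashDemo {
    // Derive a salted, iterated hash instead of a bare SHA-1 digest.
    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 160);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                               .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);   // random per-user salt
        byte[] digest = hash("s3cret".toCharArray(), salt);
        // Store salt + digest; a rainbow table built for plain SHA-1 is useless here.
        System.out.println(digest.length + "-byte derived key");
    }
}
{/code}

The per-user salt alone defeats precomputed rainbow tables, and the iteration count slows down any brute-force attempt on top of that.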

The mindset is that the mobile phone is this black box or walled garden (I actually had a vendor say that) and that all application data is somehow lost inside the phone. In reality, the hacker world has easily available rooting routines for both the iPhone and Android. This means that as white hat hackers we have to rate our vulnerabilities assuming a user/attacker has root access to the smartphone device, and by the same token we as developers have to build our applications with this in mind as well.

There is no walled garden. For instance, in general all applications downloaded from the iPhone App Store are installed as the "mobile" user. This means that, at the operating system level, any application technically has access to any other application's stored data. Once the iPhone is rooted or jailbroken, obviously all data is accessible. On Android, applications are installed directly on the device but have access to the SD card to store application data (as long as the user clicks OK when installing).

One example on Android is the "Touchdown" application. This application is wonderful for those users of Android phones who need calendaring access to their Microsoft Exchange accounts. Touchdown provides the email, notes, and calendaring functionality and stores data on the SD card, including all attachments. So upon initial sync, depending on how many days of data you sync, all attachments will be put directly on that SD card. The only flexibility here is to set a password at the Android level for the SD card (which I highly recommend), or to tune down the size of the attachments to sync to be very small (a Touchdown setting). I would say that in general most users do not understand they are storing their attachments in the clear on their SD card, and in most cases when using an Exchange account it is a business account. An application like this should take a more secure approach and go through a serious application security assessment, since a lot of its users are business users.
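
For developers, the distinction is easy to see in code. Below is a minimal sketch (the class and file names are hypothetical, and this is not Touchdown's actual implementation) contrasting Android's private internal storage with the SD card:

{code}
import java.io.File;
import java.io.FileOutputStream;
import android.content.Context;
import android.os.Environment;

public class StorageDemo {
    // Internal storage: the file is owned by this app's Linux UID
    // and is private to the app by default.
    void savePrivately(Context ctx, byte[] data) throws Exception {
        FileOutputStream out = ctx.openFileOutput("attachment.bin", Context.MODE_PRIVATE);
        out.write(data);
        out.close();
    }

    // External storage (the SD card): FAT-formatted, no file permissions,
    // readable by any app and by anyone who pops the card out of the phone.
    void saveOnSdCard(byte[] data) throws Exception {
        File f = new File(Environment.getExternalStorageDirectory(), "attachment.bin");
        FileOutputStream out = new FileOutputStream(f);
        out.write(data);
        out.close();
    }
}
{/code}

Anything written by the second method can be read by any other application, or by simply mounting the card on a PC.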

The new generation of smartphones has the power and means to perform most anything a typical business user would need a computer or laptop for. They can email, browse the web, create/edit documents, connect remotely to other computers, etc. This means we as users and developers need to treat them with the same respect we would a laptop or any other computing device that provides a way into our personal or business lives.

Friday, November 14, 2008

SOP Bypass and CSRF via Macro

A co-worker and I have been working diligently on a CSRF exploit via a macro written in VBA. The site we have been working on allows users to send Excel files back and forth. My co-worker found there is a nice little VB object called InternetExplorer.Application that basically allows you to invoke an instance of IE through a macro. The "InternetExplorer" ActiveX control gives the attacker full access not only to the persisted cookies, but also to the session/temporary cookies that exist in all currently open instances of IE. Note that the "same origin policy" *does not* apply: we can grab the DOM (including persisted and session cookies) of any site/origin.


One example we did to prove this was logging in at a site with remember-me disabled. We ran the script, which then showed all of my cookies at various domains, including the previously mentioned site, which only had session cookies. Note that the InternetExplorer ActiveX control does respect the HttpOnly flag.


Our proof-of-concept macro will recursively iterate through all your IE Favorites and request the currently available cookies for each one. If you happen to be logged in to that site via IE, an authenticated cookie will be displayed in a popup box. We all know what this means: once we create this new IE.App object, we can utilize any authenticated sessions a user may have open and send requests on behalf of the user. This IE.App object does not behave like the IE browser does. If you open two instances of your IE browser, they will not share session cookies between them because of Same Origin Policy (SOP) restrictions; invoking the IE.App object bypasses any SOP security settings. The code is pretty straightforward and is simplified below:


{pseudocode}

' Create an IE instance; it shares cookie state with the user's IE sessions
Dim ie As Object, fso As Object, f As Object, ts As Object, s As String
Set ie = CreateObject("InternetExplorer.Application")
Set fso = CreateObject("Scripting.FileSystemObject")

' Iterate the user's Favorites and visit each saved URL
For Each f In fso.GetFolder(Environ("USERPROFILE") & "\Favorites").Files
    ' Each .url shortcut stores its target on a "URL=" line
    Set ts = fso.OpenTextFile(f.Path)
    Do Until ts.AtEndOfStream
        s = ts.ReadLine
        If Left(s, 4) = "URL=" Then ie.Navigate Mid(s, 5)
    Loop
    ts.Close
    Do Until ie.ReadyState = 4: DoEvents: Loop ' 4 = READYSTATE_COMPLETE

    ' Now we get the cookies for that site (HttpOnly cookies are excluded)
    MsgBox ie.Document.cookie
Next

{/pseudocode}


Obviously our actual code is a much better PoC (bigger, faster, stronger), but the above is a quick synopsis. In order to have this code run, the victim must either be tricked into opening a malicious Excel/PowerPoint/Word document, or have really low browser security settings and visit a site with VBScript that tries to instantiate the object within the browser (this is THE WORST situation, but also pretty unlikely).


The general risk of this issue seems pretty low, although it is definitely a way around any form of CSRF protection, since we could potentially parse the DOM and send any CSRF token with the request.

We understand a macro is basically unmanaged VBScript code and can do anything on the user's system for which they are authorized, but the fact that the InternetExplorer ActiveX control allows for seeing all current session cookies is a bit scary.


We thought about disclosing this to Microsoft and may still do so, but since it is unmanaged code they will probably turn their heads. If anyone has any thoughts I would love to hear them.


Wednesday, November 12, 2008

CSRF attack through Macros? Really?

Sitting with my co-worker and wondering what our next move would be at exploiting another XSS attack, we both turned our minds toward Microsoft Excel. We had seen cases where we could get XSS attacks and even Excel attacks embedded in downloaded CSV files, and we were presented with an application allowing for similar functionality. My co-worker started looking for ways to embed nastiness in a macro, and sure enough we came across yet another attack vector for CSRF.

Like I said, the application at hand allowed users to upload Excel files and actually send these files to other users of the system. We found there is a "neat" little VBA object called InternetExplorer.Application which basically allows you to open an instance of IE from a macro. This "neat" little object provides almost all the functionality IE does, BUT since the attacker is now ALLOWED access to the file system through the VBA code, this could get interesting. Oh, did I mention this object allows access to a user's cookies?

Let's dig a little deeper. If I were able to write a quick little VBA macro that iterated through a user's "Favorites" links and requested cookies for every single site, what would happen? If the user is currently logged into any of these sites via normal IE, the cookie of the authenticated session would now be in the hands of the attacker. This means CSRF is happening through a macro, and in this case the attacker ends up with a bunch of authenticated sessions. This does work.

Now what if we wanted to take this a bit further and create a worm that not only stole session information but performed a CSRF attack against a favorite webmail provider, wherein it grabbed the contact list, created an email, attached an infected .xls file carrying our macro, and sent it to every one of our contacts? That is scary, and a zero-day in my mind.

Now the really scary thing about this is that we all know macros are bad and dangerous, but honestly, how many times have we gotten an .xls file and clicked "Enable Macros" because we were told to? Come on, be honest... we have all done it, and we will continue to do it until something bad happens.

So how do we protect against this? It is almost impossible without getting Microsoft to take this functionality away. With InternetExplorer.Application we have a fully functional browser with which we could send multiple requests, parse the returned DOM, and grab any CSRF token we need to present with each request.

This macro will obviously work in any MS application with macros enabled, and once the macro runs the user would never know what happened. As with any CSRF attack, the user would need to be logged into a site the macro knew about.

Thursday, August 30, 2007

iPhone Security

So I was one of the first in the world to purchase the much anticipated Apple iPhone. Needless to say this is one awesome, revolutionary device, but it still has some things Apple needs to work on.

My main concern about this little device is security. With most Palm or Windows Mobile devices you have the option of a strong password or some way of remotely killing the phone should it be lost or stolen. With the iPhone, the only option presented by Apple is a 4-digit PIN.

Now you must be thinking to yourself that this means there are 10,000 possible PINs (assuming any digit can be repeated). And after three failed tries the phone does lock you out for a short period of time (which keeps growing: 1 minute the first time, 5 minutes the second, and so on). My concern, however, is that since this phone has a finger touch screen, there is a serious problem. Every time you slide the unlock arrow and type in your PIN, your fingerprints are left in the exact same spots, which would allow anyone to pick up your phone and know the 4 digits you use to log in just by looking at the screen. So the number of possibilities drops from 10,000 to the 4! = 24 orderings of those digits (assuming the 4 digits are all different).
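
A quick sketch of that arithmetic (the digits here are arbitrary, and I assume the four smudged digits are distinct): enumerating every ordering of four known digits yields exactly 24 candidate PINs.

{code}
import java.util.ArrayList;
import java.util.List;

public class SmudgePins {
    // Enumerate every ordering of the digits revealed by the smudges.
    static void permute(String prefix, String digits, List<String> out) {
        if (digits.isEmpty()) { out.add(prefix); return; }
        for (int i = 0; i < digits.length(); i++)
            permute(prefix + digits.charAt(i),
                    digits.substring(0, i) + digits.substring(i + 1), out);
    }

    public static void main(String[] args) {
        List<String> candidates = new ArrayList<String>();
        permute("", "1234", candidates);  // four distinct smudged digits
        System.out.println(candidates.size() + " candidate PINs"); // prints 24
    }
}
{/code}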

So now someone could easily "guess" this PIN, log in to your phone, read your emails, send emails, make calls, steal contact information, and so forth. I posted this question recently on an Apple discussion board, and for some reason the post was locked and then deleted. Most answers ranged from "chill out" to "there is no personal information on the phone" to "it is not made for business." Although the phone is not made specifically for business, it still provides functionality to add a business email account over POP or IMAP, or through a VPN connection.

Some users replied that most Palm or mobile devices would have the same issue, but I would argue that with those devices a stylus can be used, and with the iPhone this is not the case.

I wonder if the only two good solutions are to keep the screen very dirty or spotlessly clean......

Anyone?

Thursday, April 27, 2006

Deniable Encryption

I have been doing quite a bit of research in the deniable encryption area. I have looked at both TrueCrypt and RubberHose. I quickly moved away from RubberHose because of its lack of Windows support and the lack of even a beta version.

TrueCrypt (http://www.truecrypt.org) is a very powerful little toolset that allows for many different TrueCrypt file systems, with the availability of hidden file systems. The product allows a user to specify a partition, file, USB key, floppy disk, or any other hard disk type on which to place a TrueCrypt "partition", with the possibility of hiding another partition within. The power of the hidden partition is that a user creates the outer, "known" encrypted partition with an "unknown" partition inside it. Each partition has its own password, and when the user utilizes the TrueCrypt tool to mount the partition, depending on the password used they will mount either the outer partition or the hidden partition. The good thing about this is that there is no feasible way to determine whether a hidden partition exists or not. TrueCrypt also allows passwords to be changed without re-encrypting the file, drive, or partition, and the application can be ported to a USB key so it does not need to be installed on every system the key is plugged into.
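
To illustrate the mount logic, here is a toy sketch of the concept only (this is not TrueCrypt's actual on-disk format or key derivation): think of the container as holding two header slots, where mounting tries the supplied password against each slot, and a slot that is not in use is just random bytes, indistinguishable from a header you lack the password for.

{code}
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class HiddenVolumeToy {
    static final byte[] MAGIC = "VOLUME_HEADER_OK".getBytes(); // 16 bytes

    static SecretKeySpec key(String pw) throws Exception {
        byte[] k = MessageDigest.getInstance("SHA-256").digest(pw.getBytes("UTF-8"));
        return new SecretKeySpec(Arrays.copyOf(k, 16), "AES"); // AES-128
    }

    static byte[] seal(String pw) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key(pw));
        return c.doFinal(MAGIC);
    }

    static boolean opens(byte[] slot, String pw) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key(pw));
        return Arrays.equals(c.doFinal(slot), MAGIC);
    }

    public static void main(String[] args) throws Exception {
        byte[] outer = seal("decoy-pass");  // header of the "known" outer volume
        byte[] hidden = seal("real-pass");  // or pure random filler if no hidden
                                            // volume exists -- looks identical

        String entered = "real-pass";       // password typed at mount time
        if (opens(outer, entered))       System.out.println("mount outer volume");
        else if (opens(hidden, entered)) System.out.println("mount hidden volume");
        else                             System.out.println("wrong password");
    }
}
{/code}

Without the right password, there is no way to tell whether the second slot holds an encrypted header or random filler, which is where the deniability comes from.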

A couple of things would be nice, though. One would be not needing to use the TrueCrypt interface at all to mount the file system; this would make it even more "secret" that there is a TrueCrypt filesystem on the drive. Another would be allowing multiple hidden partitions within a single outer partition. That was the one nice thing about RubberHose: it allowed up to 16.

Anyone have any experience with any other deniable encryption tools?

Monday, April 24, 2006

Application Security

Why is there an inherent disconnect between application development and security? Why are there still many web applications performing no input validation, except maybe on the login page? Why do some banking sites still only allow a password of 6-8 characters, with no special characters allowed?

I heard a funny one the other day, where one business unit was placing a limit on the size a password could be. This limit was 25 characters, and I must be crazy, but most of my passwords are much longer than that. I recommended they allow longer passwords (up to the system max, of course), but the development staff said it would take a while to get business sign-off. I am wondering how allowing a lengthier password or passphrase places any burden on the user; if anything, I am the one burdened by being limited to a 25-character password.
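
There is no storage argument for the limit either: if passwords are hashed before being stored, as they should be, the stored value is a fixed size no matter how long the passphrase is. A minimal sketch (the sample strings are my own):

{code}
import java.security.MessageDigest;

public class PassphraseLength {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        String shortPw = "25charsOrLess";
        // A 200+ character passphrase hashes to the same fixed-size digest.
        StringBuilder longPw = new StringBuilder();
        for (int i = 0; i < 10; i++) longPw.append("correct horse battery ");
        System.out.println(md.digest(shortPw.getBytes("UTF-8")).length);           // 32
        System.out.println(md.digest(longPw.toString().getBytes("UTF-8")).length); // 32
    }
}
{/code}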

Application security is seriously being looked at from a consulting perspective, but not much is being done to mitigate the risks and flaws found within applications. The biggest problem is there is no security in an application development lifecycle until the implementation phase, when users, groups, and roles are mapped into the application. There should be security involvement throughout the entire lifecycle, beginning with the requirements gathering phase, continuing through the development phase with penetration tests, and then into the implementation and maintenance phases.

This is not a new thought, but it still isn't being done, because security costs money and time. The real problem is there is a disconnect between development staff and what security is. I sit in meetings and ask developers if their site is secure, and they say things like "It uses SSL, so yes" or "it requires a user to login." If I ask a team of developers at another company whether their product has been through a formal penetration test based on standards like the OWASP Top 10 (http://www.owasp.org), I get answers that range from "Government agencies use our product" to "there are no coding changes needed."

The thing is, development staff need to be accountable for understanding the basic flow of a network: what a socket is, how TCP/IP works, what SSL is and what it really means. If they really understood an HTTP request and response, they would better understand what damage can be done. I amazed a developer recently when I bypassed the "security" checking on his site by using a proxy tool such as WebScarab or Paros. His input field validation was using client-side JavaScript. I showed him I could turn off JavaScript and do the same thing; then I changed the input field to its max of 40 characters. He said, "See, we at least verify the size." I then intercepted the request with WebScarab, changed it to be 4 million characters, and slowed the site down. Needless to say, I saw a lightbulb and the whites of his eyes.
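
The fix is to repeat every client-side check on the server, where the attacker's proxy cannot reach. A minimal servlet sketch (the class and parameter names are hypothetical, not from that developer's site):

{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CommentServlet extends HttpServlet {
    private static final int MAX_LEN = 40; // enforce the limit on the server

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String comment = req.getParameter("comment");
        // Client-side JavaScript checks are advisory only; a proxy like
        // WebScarab can rewrite the request, so validate here as well.
        if (comment == null || comment.length() > MAX_LEN) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "comment too long");
            return;
        }
        // ... safe to process ...
    }
}
{/code}

A real application would also cap the request body size before reading parameters, so a 4-million-character request gets rejected cheaply.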

Do we really have to wait for an attack to occur before action is taken? How do we put it in terms of money for management? There are actuarial tables for everything else, like how much it would cost a hotel to have a story released about a girl being raped in one of their rooms (saw it on Law and Order: SVU). It would cost the company much less to pay off the girl than to have a story released saying their hotels are not safe from predators. So how much would it cost a company if data was lost and a story was released to the public about a data breach through a web application? This might strike a technically savvy person as an interesting thought, but for some reason development and management staffs do not get it. What to do......

Wednesday, February 01, 2006

What's The Purpose of Infas

Over the past few weeks it has become apparent that there are many questions in today's society about what exactly an Information Assurance group is supposed to do. Are they there to be technical experts in every single platform and application that is brought into a company, so as to secure it in the best way possible? Are they there to maintain controls and non-repudiation logging to the utmost, so as to fall back on that when something bad happens?

The problem, as a business, is deciding where to place your focus. Should your security staff be focused on security and only security, or should there be more of a focus on protection from legal matters? Are the security staffers there to protect the data of the customers/clients, or are they there to protect the company?

I can't imagine that in ChoicePoint's case the customers were protected in any manner, and that being said, neither was the company, as they lost much more in soft costs and clients than they did in the courts.

I think the focus should be on protecting the data that keeps the company afloat, and directly protecting the company from SOX, GLB, and such. If we work to protect the data to the utmost extent, don't we inherently protect the company as well?