Thursday, August 30, 2007

iPhone Security

So I was one of the first in the world to purchase the much anticipated Apple iPhone. Needless to say, this is one awesome, revolutionary device, but it still has some things Apple needs to work on.

My main concern about this little device is security. With most Palm or Windows Mobile devices you have the option of a strong password or some way of remotely killing the phone if it is lost or stolen. With the iPhone, the only option Apple presents is a 4-digit PIN.

Now you must be thinking to yourself that this means there are 10,000 possible PINs (assuming any digit can be repeated). And after three failed tries the phone does lock you out for a short period, with the lockout getting longer each time: 1 minute the first time, 5 minutes the next, and so on. My concern, however, is that because this phone has a finger-touch screen, there is a more serious problem. Every time you slide the unlock arrow and type in your PIN, your fingerprints are left in exactly the same spots. Anyone who picks up your phone can tell which 4 digits you use to log in just by looking at the smudges on the screen. If the PIN uses 4 different digits, the number of possibilities drops from 10,000 to the 24 possible orderings of those digits (4! = 24).
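As a quick sanity check on that math, here is a minimal sketch, assuming the smudges reveal exactly four distinct digits and the attacker only has to work out the order:

from itertools import permutations

# Digits revealed by the smudge marks -- a hypothetical example PIN
revealed_digits = ['2', '5', '8', '0']

# Every ordering of four distinct digits is a candidate PIN
candidates = [''.join(p) for p in permutations(revealed_digits)]

print(len(candidates))   # 24, versus 10,000 with no smudge information

With the escalating lockout an attacker cannot burn through all 24 guesses quickly, but 24 is still a far cry from 10,000.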

So now someone could easily "guess" this PIN, log in to your phone, read your emails, send emails, make calls, steal contact information, and so forth. I posted this question recently on an Apple discussion board and for some reason the post was locked and then deleted. Most answers ranged from "chill out" to "there is no personal information on the phone" to "it is not made for business." Although the phone is not made specifically for business, it still provides the functionality to add a business email account over POP, IMAP, or through a VPN connection.

Some users replied that most Palm or other mobile devices would have the same issue, but I would argue that with those devices a stylus can be used, and with the iPhone that is not the case.

I wonder if the only two good solutions are to keep the screen very dirty or spotlessly clean......

Anyone?

Thursday, April 27, 2006

Deniable Encryption

I have been doing quite a bit of research in the deniable encryption area. I have looked at both TrueCrypt and RubberHose. I quickly moved away from RubberHose because of its lack of Windows support and the lack of even a beta release.

TrueCrypt (http://www.truecrypt.org) is a very powerful little toolset that can create many different TrueCrypt file systems, including hidden ones. The product lets a user place a TrueCrypt "partition" on a disk partition, a file, a USB key, a floppy disk, or any other type of hard disk, with the option of hiding another partition within it. The power of the hidden partition is that the user creates an outer "known" encrypted partition with an "unknown" partition inside it. Each partition has its own password, and when the user mounts the volume with the TrueCrypt tool, the password supplied determines whether the outer partition or the hidden partition is mounted. The good thing about this is that there is no feasible way to determine whether a hidden partition exists. TrueCrypt also allows passwords to be changed without re-encrypting the file, drive, or partition, and the application can be carried on a USB key so it does not need to be installed on every system the key is plugged into.
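Conceptually, the mount decision works something like the toy sketch below. This is only an illustration of the idea, not TrueCrypt's actual code or on-disk format; the header offsets, the toy_container, and the try_decrypt_header helper are all hypothetical stand-ins.

# Toy illustration of hidden-volume mounting -- NOT TrueCrypt's real implementation.
# The "container" here is just a dict mapping header offsets to the password that
# would decrypt the (imaginary) volume header stored at that offset.

OUTER_HEADER_OFFSET = 0          # outer volume header at the start of the container
HIDDEN_HEADER_OFFSET = 65536     # hypothetical offset for the hidden volume header

toy_container = {
    OUTER_HEADER_OFFSET: "decoy-password",
    HIDDEN_HEADER_OFFSET: "real-secret-password",
}

def try_decrypt_header(container, offset, password):
    # Stand-in for real header decryption: succeeds only with the right password.
    return container.get(offset) == password

def mount(container, password):
    # The same prompt handles both volumes; the password decides which one opens.
    if try_decrypt_header(container, OUTER_HEADER_OFFSET, password):
        return "outer (decoy) volume mounted"
    if try_decrypt_header(container, HIDDEN_HEADER_OFFSET, password):
        return "hidden volume mounted"
    # To an observer, a wrong password and "no hidden volume exists" look the same,
    # because unused space is indistinguishable from random data.
    raise ValueError("incorrect password or no TrueCrypt volume here")

print(mount(toy_container, "decoy-password"))        # outer (decoy) volume mounted
print(mount(toy_container, "real-secret-password"))  # hidden volume mounted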

A couple of things would be nice. One would be not needing to use the TrueCrypt interface at all to mount the file system; this would make it even less obvious that there is a TrueCrypt file system on the drive. Another would be allowing multiple hidden partitions within a single outer partition. That was the one nice thing about RubberHose: it allowed up to 16.

Anyone have any experience with any other deniable encryption tools?

Monday, April 24, 2006

Application Security

Why is there an inherent disconnect between application development and security? Why are there still so many web applications performing no input validation except, maybe, on the login page? Why do some banking sites still only allow a password of 6-8 characters with no special characters allowed?

I heard a funny one the other day: a business unit was placing a limit on the size a password could be. The limit was 25 characters, and call me crazy, but most of my passwords are much longer than that. I recommended they allow longer passwords (up to the system maximum, of course), but the development staff said it would take a while to get business sign-off. I am wondering how allowing a lengthier password or passphrase places any burden on the user; the only one burdened is me, stuck with a 25-character password.
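One argument for dropping the cap is that the application should not be storing the raw password anyway; if it hashes the password, the stored value is the same fixed length no matter how long the passphrase is, so the limit buys nothing. A minimal sketch, where the salt handling and iteration count are illustrative rather than a recommendation for any particular system:

import hashlib
import os

def hash_password(password, salt=None):
    # Derive a fixed-length hash from a password of ANY length.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100000)
    return salt, digest

# A 25-character password and a 100+ character passphrase both store as 32 bytes.
salt1, d1 = hash_password("p" * 25)
salt2, d2 = hash_password("correct horse battery staple " * 4)
print(len(d1), len(d2))   # 32 32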

Application security is seriously being looked at from a consulting perspective, but not much is being done to mitigate the risks and flaws found within applications. The biggest problem is that there is no security in the application development lifecycle until the implementation phase, when users, groups, and roles are mapped into the application. There should be security involvement throughout the entire lifecycle, beginning with the requirements gathering phase, continuing through the development phase with penetration tests, and then into the implementation and maintenance phases.

This is not a new thought, but it still isn't being done, because security costs money and time. The real problem is that there is a disconnect between development staff and what security actually is. I sit in meetings and ask developers if their site is secure, and they say things like "It uses SSL, so yes," or "It requires a user to log in." If I ask a team of developers at another company whether their product has been through a formal penetration test based on standards like the OWASP Top 10 (http://www.owasp.org), I get answers that range from "Government agencies use our product" to "There are no coding changes needed."

The thing is, development staff need to be accountable for understanding the basic flow of a network: what a socket is, how TCP/IP works, what SSL is and what it really means. If they really understood an HTTP request and response, they would better understand what damage could be done. I amazed a developer recently when I bypassed the "security" checking on his site using a proxy tool such as WebScarab or Paros; his input field validation was client-side JavaScript. I then showed him I could turn off JavaScript and do the same thing. Next I filled the input field to its maximum of 40 characters, and he said, "See, we at least verify the size." So I intercepted the request with WebScarab, changed the field to 4 million characters, and slowed the site down. Needless to say, I saw a lightbulb go on and the whites of his eyes.
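The fix is simply to enforce the same checks on the server, where the user cannot turn them off. A minimal sketch of the idea, using Flask only as a stand-in for whatever the real application runs on; the /comment route, the field name, and the 40-character limit are illustrative:

from flask import Flask, request, abort

app = Flask(__name__)
MAX_FIELD_LENGTH = 40   # the same limit the client-side JavaScript tried to enforce

@app.route("/comment", methods=["POST"])
def post_comment():
    value = request.form.get("comment", "")
    # Server-side validation: this runs no matter what the browser (or a proxy
    # tool like WebScarab) sends, so a 4-million-character payload is rejected.
    if not value or len(value) > MAX_FIELD_LENGTH:
        abort(400)
    # ... safe to process and store the value here ...
    return "OK"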

Do we really have to wait for an attack to occur before action is taken? How do we put it in terms of money for management? There are actuarial tables for everything else, like how much it would cost a hotel to have a story released about a girl being raped in one of its rooms (I saw it on Law and Order: SVU). It would cost the company much less to pay off the girl than to have a story released saying its hotels are not safe from predators. So how much would it cost a company if data were lost and a story about a data breach through a web application were released to the public? This might strike a technically savvy person as an interesting thought, but for some reason development and management staffs do not get it. What to do......

Wednesday, February 01, 2006

What's the Purpose of Information Assurance?

Over the past few weeks it has become apparent that there are many questions in today's society about what exactly an Information Assurance group is supposed to do. Are they there to be technical experts in every single platform and application that is brought into a company, so as to secure it by the best means possible? Are they there to maintain controls and non-repudiation logging to the utmost, so the company can fall back on that when something bad happens?

The problem, as a business, is where you place your focus. Should your security staff be focused on security and only security, or should the focus be more on protection from legal matters? Are the security staffers there to protect the data of the customers and clients, or are they there to protect the company?

I can't imagine that in ChoicePoint's case the customers were protected in any way, and that being said, neither was the company, as it lost much more in soft costs and clients than it did in the courts.

I think the focus should be on protecting the data that keeps the company afloat, and directly protecting the company from SOX, GLB, and the like. If we work to protect the data to the utmost extent, don't we inherently protect the company as well?

Monday, January 30, 2006

How much is your identity worth?

In CNN's article:

http://money.cnn.com/2006/01/26/news/companies/choicepoint.reut/index.htm?cnn=yes

ChoicePoint is settling for $15 million after losing 163,000 customers' data. $10 million of it will go to fines and the other $5 million will go to those affected. That means each person's identity is only worth a measly $30.67.

Granted, ChoicePoint lost much more money in the process through its stock and other fines, but in the end the big losers are the clients whose information was lost. What about those 500 or so cases of known identity theft from this loss? I am going to wager that they lost more than $30.

I think companies secure their systems and networks to avoid those fines from the government, but really the fines should go to pay those whose data was lost. If I were one of those 163,000 people, I would have turned around and gotten new credit cards, a new Social Security number, and, heck, who knows, even a new name. There is something wrong with this picture.

What do you think?

Secure Single Sign On

How do you perform Single Sign-On from one website to another in a secure manner?

There have been many requests recently to review different approaches to Single Sign-On from website 1 to website 2. The approaches have all been different, and the reviews rarely end with the design in its originally requested form, but I wanted some thoughts on other options to solve the SSO problem. (Website 1 and website 2 are completely separate companies with no financial ties.)

1. Have a hidden form in website 1 that can be posted to website 2. The hidden form includes an encrypted string (using the DES cipher) containing a user ID and a timestamp (good for 5 minutes).

I don't like a lot of things about this method. First and foremost, use a cipher that cannot be broken in a couple of hours; I recommended at least RC4, but I would prefer AES. Hidden fields are the security-by-obscurity of the middle ages: a hidden field is not hidden and can be modified very easily by the end user. The end user could then perform a brute force attack against the encrypted string and never be locked out on the other end.
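For the token itself, something along these lines would be a better starting point than DES. This is only a sketch, using Python's cryptography package as a stand-in for whatever library the teams actually use, with AES-GCM so that a tampered hidden field is also detected; the user ID field and the 5-minute window come from the proposal above.

import json, os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

TOKEN_LIFETIME = 300   # seconds -- the 5-minute window from the proposal
KEY = AESGCM.generate_key(bit_length=256)   # shared out-of-band between the two sites

def create_token(userid):
    payload = json.dumps({"userid": userid, "ts": int(time.time())}).encode()
    nonce = os.urandom(12)
    # AES-GCM both encrypts and authenticates, so a modified token simply fails to decrypt
    return nonce + AESGCM(KEY).encrypt(nonce, payload, None)

def verify_token(token):
    nonce, ciphertext = token[:12], token[12:]
    payload = json.loads(AESGCM(KEY).decrypt(nonce, ciphertext, None))
    if time.time() - payload["ts"] > TOKEN_LIFETIME:
        raise ValueError("token expired")
    return payload["userid"]

token = create_token("jsmith")
print(verify_token(token))   # jsmith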

I think a better approach is a server-side redirect over a dedicated VPN. Both sides can verify the request is coming from and going to the correct systems, the end user never has a chance to modify the encrypted string, and an external attacker cannot perform a brute force attack against website 2 without first authenticating to website 1.
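In rough terms, the handoff could look something like the sketch below: website 1's server pushes the token to website 2 directly over the VPN, and the browser only ever carries an opaque one-time reference. The URLs, endpoint names, and the requests library are purely illustrative assumptions, not any particular vendor's implementation.

import uuid
import requests   # illustrative choice of HTTP client for the server-to-server call

# Address of website 2's SSO endpoint, reachable only over the dedicated VPN
WEBSITE2_SSO_URL = "https://10.1.2.3/sso/handoff"   # hypothetical VPN-only address

def hand_off_user(token):
    # One-time reference the browser will carry instead of the encrypted token itself
    reference = uuid.uuid4().hex

    # Server-to-server push: the token never passes through the user's browser,
    # so there is nothing for the end user to tamper with or brute force.
    resp = requests.post(
        WEBSITE2_SSO_URL,
        data={"reference": reference, "token": token.hex()},
        timeout=5,
    )
    resp.raise_for_status()

    # Website 1 then redirects the browser to website 2 carrying only the reference
    return "https://www.website2.example/login?ref=" + reference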

I would like thoughts on this and other methods that people have used.