Web site security is a very important issue to me. I find it frustrating sometimes dealing with people who operate based more on superstition and urban legends than on solid principles. Part 4 is about some strange behavior I have seen in security groups and other insanity.
I explained in part 1 that a web site is secure if a user logs in using an email address and a password, as long as that password is sufficiently strong.
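To put a rough number behind "sufficiently strong": the strength of a randomly chosen password can be estimated as bits of entropy, which is just the length times the log of the character-set size. This is a minimal sketch of that arithmetic (the function names and the 94-character charset are my own illustrative choices, not anything from part 1):

```python
import math

def password_entropy_bits(length, charset_size):
    """Entropy in bits of a password of `length` characters drawn
    uniformly at random from a charset of `charset_size` symbols."""
    return length * math.log2(charset_size)

def expected_guesses(entropy_bits):
    """Expected number of guesses to find it: half the search space."""
    return 2 ** entropy_bits / 2

# A random 12-character password over printable ASCII (~94 symbols)
bits = password_entropy_bits(12, 94)
print(f"{bits:.0f} bits of entropy")  # roughly 79 bits
```

At around 79 bits, even an attacker making billions of guesses per second would need astronomically long to search the space, which is the sense in which a strong password alone carries the security.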
Within our company we are required to use a Virtual Private Network (VPN) in order to connect to different parts of the company. The corporation runs a large firewall that separates the outside Internet from the internal network, and there are strict security guidelines around anything to do with those connection points to the Internet. Our internal traffic does not run over public lines, so the encryption aspect of VPN is not essential. Still, even within the company, firewalls keep different parts of the company separate, and VPN is required to cross the bridge (or tunnel under it, I guess). This is because the IT folks lack confidence in normal web security. Let’s give them the benefit of the doubt and assume that there is some software they are required to host that is NOT secure. So they impose VPN as a way to ensure that only specific people can access those parts of the network.
The problem is that you cannot run two VPN sessions at a time, so we are always connecting and disconnecting VPN in order to access different resources. It certainly prevents some people from accessing information. At the same time it makes it much more difficult for the people who need the information to get it. Where is the tradeoff? This is really just an extension of the really-long password: the details needed to access the VPN have to be written down, and are thus likely to fall into the hands of hackers, so it really does not provide any real security to speak of. Ironically, the web addresses and the list of remote usernames & passwords are emailed around. It is common to see a printed-out copy. This is not security.
Recently I have been working to get others within the company to make better use of collaboration software (social business software) to work more effectively with each other. Whenever I talk to people in IT, the very first question is “how do I prevent people from using this?” That is not how they ask it, but that is what the question amounts to. They always want to know how to limit access and exclude people from various aspects of the system. I always point out to them that collaboration only works when everyone is included, but they don’t seem to get the irony. They are concerned only about what would happen in the very worst case: a rogue, disgruntled employee attempting to harm the system or leak information. It makes no sense to hold the IT people accountable for stopping a vandal of this sort, but they are.
Another crazy example is our three-strikes-you’re-out policy on user accounts. When attempting to log in, if you fail three times to type the correct password, your user account is permanently disabled. You then have to call a support number, talk to a person, and have your account re-enabled.
This is, of course, mind-bogglingly inappropriate for the rather obvious reason that any disgruntled person can lock you out of the system at any time. Anyone can attempt to log in with your userid three times, and your account is shut down. Even in the best case, this causes ten minutes of interruption.
This happens to me frequently when I test software that accesses system resources. Every time I call in to get my account unlocked, I ask the support person whether they think this rule is a good idea. Without exception, they believe it is absolutely essential to guard against brute-force attacks on your password. I point out that even thousands of guesses pose essentially no risk of finding my password, but they always defend the policy of cutting off access after three. I point out that simply a delay, or a temporary disabling of the account, would be just as effective and less costly. But of course, the cost of support is not an issue for the support people themselves; they just do what they are told. It is amazing, however, that management does not see the unnecessary waste in this. Not to mention the annoyance of being denied access to your account at any time.
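The delay-instead-of-disable idea can be sketched in a few lines. This is a hypothetical per-account throttle (the class and method names are my own, not from any real product): each failed attempt doubles the wait before the next attempt is accepted, capped at an hour, and a successful login resets the count. A brute-force attacker is slowed to a crawl, but a legitimate user who mistypes twice waits seconds, not a phone call.

```python
import time

class LoginThrottle:
    """Sketch of exponential-backoff login throttling, as an
    alternative to permanently disabling an account after
    three failed attempts."""

    def __init__(self, base_delay=1.0, cap=3600.0):
        self.base_delay = base_delay   # seconds to wait after the first failure
        self.cap = cap                 # never force a wait longer than an hour
        self.failures = {}             # userid -> (failure count, time of last failure)

    def allowed(self, userid, now=None):
        """May this userid attempt a login right now?"""
        now = time.time() if now is None else now
        count, last = self.failures.get(userid, (0, 0.0))
        if count == 0:
            return True
        # Required wait doubles with each consecutive failure.
        delay = min(self.base_delay * 2 ** (count - 1), self.cap)
        return now - last >= delay

    def record_failure(self, userid, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(userid, (0, 0.0))
        self.failures[userid] = (count + 1, now)

    def record_success(self, userid):
        # A correct password clears the failure history.
        self.failures.pop(userid, None)
```

Note that nothing here ever disables the account outright: the worst a vandal hammering your userid can inflict is a bounded wait, rather than a trip through the support desk.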
Furthermore, “security by obscurity” does not work. There is a strong desire to hide the security mechanisms from people, on the theory that the less a hacker can discover, the harder it is to break in. Unfortunately, this makes it harder for anyone else to know what is going on. There is a strong tendency to cover up any problems in security. If something fails in the security system, it is believed that exposing this in an error message might give a hacker an advantage, and so it is often hidden. The actual effect is to allow security problems to persist unnoticed and unattended. Far better would be to build a secure system, and then expose to everyone exactly how that security works. The “white hats” can then critique it and find flaws in it before the “black hats” get a chance to leverage the flaws.
In one situation, error messages were removed from a system I designed, for fear that those error messages would expose too much about the workings of the system. This is insanity: it is better to find and fix the cause of the problem quickly. Then not only will the hackers be deprived of internal information, but the users will also have a correctly operating and safe system. You do not make a system more secure by hiding these sorts of things.
It takes a lot of confidence to say “the network does not need that to be secure.” Many IT folks and site administrators fear that anything done to loosen the security has the potential to let a hacker in, and then they will be to blame. There is a huge temptation to enforce things through software without any consideration of the usability aspects.
I imagine much of this is designed in committee: someone suggests a way to make security tighter, and nobody can disagree or take a position against it without appearing to be against security. This is how superstitions arise out of misconceptions and fear.
The irony in all this is that at this point in history network security is probably the single most important issue facing everyone everywhere. I need not mention Shady Rat, and I am sure the real threats are not known to the public. Proper security measures need to be taken, and at the same time we desperately need to avoid the security superstitions.