A look at why some companies struggle to operate securely
By Chris Rasmussen, Systems Administrator
Large companies face many practical, real-life challenges when it comes to operating securely, including:
- Time
- Money
- Manpower
- Compatibility
- Culture
For the first three challenges listed above, the problem boils down to not having enough of those resources allocated to cybersecurity. The other two challenges, however, are not as straightforward. In this first installment, I explore the issue of compatibility, which refers to the ability of one computer, piece of software, etc., to work with another.
“There are two types of companies. Those who have been hacked, and those who don’t know they’ve been hacked.”
Paraphrase from a former FBI Director
It’s rare for me to go a week without seeing an article mentioning that “hackers exploited a known vulnerability that had not been patched.” This is tech-talk for “thieves got inside because somebody left the door unlocked even after being told it was unlocked.” Other common phrases include “the company did not use Multi-Factor Authentication (MFA)” and “the default password was still in use.” These are things everybody who works in cybersecurity has been told are bad practices, yet they remain commonplace. Why? How do companies, especially large ones, allow their cybersecurity to be anything less than secure?
Being secure digitally is like being healthy physically: it is far easier said than done. Eat healthy food and exercise. What could be simpler than that? Yet think about how many people know these two basic principles and still struggle with them. I know I sure do. Keeping computers and communications secure is similar: it sounds deceptively easy to manage. Throw in the issue of compatibility, and it becomes even more difficult to address.
“Don’t check that box. It will break everything.”
A software engineer providing technical support
The quote above referred to a security feature on a system the software engineer was helping me with, a feature that would enforce a government-instituted security protocol designed to provide heavy-duty protection against attacks. The problem is that computer systems can rarely handle this protocol. I equate it to demanding three forms of identification and a DNA sample from someone before you are willing to hold a conversation with them. Had I enabled that setting, the system would have lost its ability to communicate with anything that could not meet the rigid standard, making it useless.
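I can’t name the exact protocol here, but the dynamic is easy to sketch. Here is a rough Python illustration using strict TLS version enforcement as a stand-in: a client locked to TLS 1.3 simply refuses to finish a handshake with any server that cannot meet that bar.

```python
import socket
import ssl

def strict_probe(host: str, port: int = 443) -> None:
    """Try to talk to a server under a deliberately strict TLS policy."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # the "box" being checked
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{host}: OK, negotiated {tls.version()}")
    except ssl.SSLError as err:
        # Peers that cannot meet the bar fail the handshake outright,
        # which is exactly how checking that box breaks communication.
        print(f"{host}: handshake refused ({err})")

strict_probe("example.com")  # hypothetical target, for illustration only
```

One checkbox, and every peer stuck on an older protocol version goes silent.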
While it is uncommon to have security settings that come with instructions saying, “Nobody should ever use this,” it is incredibly common for security settings to break things when improperly configured. Sometimes, even properly configured security still breaks something.
“We do not recommend that administrators use this feature.”
Documentation on using MFA for a certain product
Multi-Factor Authentication (MFA, sometimes called 2FA) is a great example of a security feature that collides with compatibility. Using MFA makes a massive difference in security, but it cannot always be used. Some devices and applications cannot support MFA for a variety of reasons. In some cases, MFA only works most of the time, which means it must be disabled for special situations. Sometimes, MFA can only be enabled for certain users on a system, such as non-administrators, which is backwards. In those rare cases, the users who have the greatest need for MFA are the very ones unable to use it.
The majority of MFA systems that require active authentication (i.e., “enter the code to continue”) use a mobile phone for the second factor. Unfortunately, certain secure environments do not allow mobile phones, understandably, given the security risks the phones themselves pose. In other words, users cannot bring in the very device they need in order to work in that room. This makes MFA incompatible with the physical security requirements in place. How do we fix this?
One real-life solution is for a person to trigger the initial MFA prompt, run out to their car to access their phone, get the code, and run back into the building before the time expires.
So why don’t more people use MFA? Because some people can’t run fast enough.
There are, of course, less painful methods of handling it, but there are also circumstances where those solutions are not compatible with other security protocols. As a result, sometimes the only sane answer is to not use MFA at all.
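That ticking clock isn’t arbitrary. Most code-based MFA uses time-based one-time passwords (TOTP, RFC 6238), and a minimal Python sketch, with a made-up shared secret, shows why every code dies after a short window (30 seconds by default):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the count of 30-second steps since the epoch,
    # which is why the code expires and our runner is on a timer.
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret, for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

The phone and the server each compute this independently from the shared secret and the current time, which is also why the phone has to be nearby in the first place.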
“Today I learned not to update important things without asking.”
Me, after breaking important software by updating it
Patches and updates can also cause compatibility issues. At a practical level, they tend to be a bigger security issue than the lack of MFA, if only because literally all software has vulnerabilities, and patching is fixing a hole in the wall that the bad guys probably already know about. It’s hard to overstate how important patches are, yet they are often neglected. Time, money, and manpower are often to blame, but sometimes incompatibility is a problem that no reasonable amount of resources can overcome. Our old software cannot always run on our new hardware, and our new software cannot always run on our old hardware.
An unfortunate and shocking thing happens when we stop allowing outdated, insecure code to run on our computers: the things that were made with that insecure code stop working.
Obviously, we are supposed to upgrade to something safer, but many times, no such upgrade exists. I have heard horror stories about cash registers running on Windows 95 – in 2007. Why? Because nobody had created a suitable replacement program that handled that company’s niche requirements. They had to either stick with an outdated operating system or completely restructure the way they handled purchases. Completely restructuring is often too expensive or time-consuming, so companies choose to keep what they have.
Patches are usually more tolerable than replacing entire computers or switching to new programs, but again, if a patch prevents a hacking trick from working, it might also block legitimate activity. Complicated programs and suites ship documentation with compatibility matrices showing which parts of the software or hardware system work with which updates and dependencies. If those dependencies have dependencies of their own, upgrading becomes an exercise in pain. I once made a flowchart just to figure out whether I could upgrade a single module in a program, because that module needed other modules to be updated first.
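That flowchart was really just a transitive walk over a dependency graph. Here is a rough sketch of the same exercise in Python, with a hypothetical (and mercifully smaller) module map standing in for the real one:

```python
from collections import deque

# Hypothetical compatibility map: upgrading a key requires the listed
# modules to be upgraded too. My real chart was much, much bigger.
REQUIRES = {
    "report-engine": ["pdf-export", "auth-lib"],
    "pdf-export":    ["font-pack"],
    "auth-lib":      [],
    "font-pack":     [],
}

def upgrade_set(module: str) -> set[str]:
    """Return every module that has to move when `module` is upgraded."""
    seen, queue = {module}, deque([module])
    while queue:
        for dep in REQUIRES.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(upgrade_set("report-engine")))
# ['auth-lib', 'font-pack', 'pdf-export', 'report-engine']
```

One “simple” module upgrade quietly turns into four, and every extra hop is another chance for something to break.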
“How the heck did we get here, when we literally need COBOL programmers?”
New Jersey Governor
So, how do we address compatibility in cybersecurity?
First, foster communication within the organization. Somebody (usually somebody in IT) must know all the software, hardware, and services the company uses if there is any hope of minimizing security issues caused by incompatibility. That can only happen if individuals and groups keep the company aware of the systems they use for work. This includes phone apps used for work purposes. Yes, even Snapchat.
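Even a bare-bones inventory beats tribal knowledge. As a sketch, with every field and entry invented for illustration, something this small already gives IT a fighting chance:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One entry in a minimal 'what do we actually run?' inventory."""
    name: str
    version: str
    owner_team: str      # who to ask before patching or retiring it
    work_related: bool   # yes, personal phone apps used for work count

# Hypothetical entries, for illustration only.
inventory = [
    Asset("Windows Server", "2019", "infrastructure", True),
    Asset("PayrollSuite", "8.2", "finance", True),
    Asset("Snapchat", "mobile app", "marketing", True),
]

for asset in inventory:
    print(f"{asset.name} {asset.version} (owner: {asset.owner_team})")
```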
Second, updating existing software and incorporating new software need to be done deliberately and with foresight. Even with a map of the present, a route must be planned for a company to reach a secure future; upgrading willy-nilly or failing to consider the impact a single system might have on the corporate ecosystem can pave the way for exploitation by the bad guys.
Finally, in some cases we cannot fix compatibility issues – at least not directly. Sometimes, we end up having to run something that reached its End of Life (EoL) 15 years ago. In such instances, it is crucial that we are aware of this and the vulnerabilities that may result. If a piece of horribly insecure software must be used, maybe the vulnerabilities can’t be patched, but the software could at least be isolated from anything sensitive.
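Verifying that isolation is a chore of its own. Here is a small sketch that probes, from the quarantined machine, whether hypothetical sensitive services are still reachable. Every address is invented for the example, and any successful connection means the wall has a hole:

```python
import socket

# Hypothetical sensitive services the EoL box should NOT be able to reach.
SENSITIVE_SERVICES = [
    ("10.0.10.5", 1433),  # database server
    ("10.0.20.8", 445),   # file share
]

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this FROM the quarantined machine; every line should say "blocked".
for host, port in SENSITIVE_SERVICES:
    verdict = "REACHABLE (isolation has a hole)" if can_reach(host, port) else "blocked"
    print(f"{host}:{port} -> {verdict}")
```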
For more ideas on how you can put cybersecurity solutions in place for your organization, visit our website or reach out to us directly to schedule an appointment to meet with our cyber engineers.