Sunday, November 11, 2012

OS and Service Fingerprinting with Nmap

I decided that I wanted to have a network map of all the machines on my network containing information about the Operating System and services that are running on each one.  Furthermore, I want to include this data on my IDS running Snort + BASE. 

I'm running through this proof of concept scenario at the moment.  Don't complain about any code that I post below.  Again, I'm just doing a quick POC, so the code is fairly poorly written.  But it does work.  If you'd like to make it better, please feel free.  Please don't make functionality requests here.  If you would like to see a feature added, please make the changes yourself.  That's the beauty of having the code.  In other words, I'm doing this for me and sharing it with the world.  But in the end, it's for me.  So if you don't like it, I don't want to hear about it because I don't care.  Sorry for all that, just needed to get it out of the way, lest I become inundated with silly requests and negative opinions.

As far as OS and service fingerprinting goes, Nmap is fully capable of doing just that.  So why reinvent the wheel?  I first started trying to use Nmap along with a series of 'greps', but the command became long and, well, pretty horrible looking.

Then I realized I could output the data from an Nmap scan to XML format.  My command ended up looking like this:
nmap -A -T5 <IP Address(es) to scan> -oX output.xml

The above command will scan the hosts that you provide, attempting to identify the OS and services running on them.  I usually use a CIDR block for the range to scan, such as 192.168.1.0/24, but you can use any nmap accepted format.
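For what it's worth, the XML that Nmap writes looks roughly like this per host (heavily abbreviated, and the address, port, and match names below are made up for illustration):

```xml
<host>
  <status state="up"/>
  <address addr="192.168.1.10" addrtype="ipv4"/>
  <ports>
    <port protocol="tcp" portid="22">
      <state state="open"/>
      <service name="ssh" product="OpenSSH" version="5.9" conf="10"/>
    </port>
  </ports>
  <os>
    <osmatch name="Linux 2.6.32 - 3.2" accuracy="96"/>
  </os>
</host>
```

It's this structure that the parsing script below walks through, which is why a proper XML parser beats a pile of greps.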

I chose to use Perl to parse the output.xml file, because there is a great Perl module called Nmap::Parser that was built specifically for this sort of activity.

The script I have right now is below:

#!/usr/bin/perl -w

#
# Give the Nmap XML file as the only program argument
#

use strict;
use Nmap::Parser;
use DBI;
use DBD::mysql;

my $dbh = DBI->connect(
    'DBI:mysql:database=nmap;host=localhost',
    '<user>',
    '<password>',
    { RaiseError => 1, AutoCommit => 1 },
);

# Set the values of the SQL queries

my $dquery1 = "delete from osdata";
my $dquery2 = "delete from servicedata";

my $query = "insert into osdata (ip, name, vendor, name_accuracy, class_accuracy)
            values (?, ?, ?, ?, ?)";

my $query2 = "insert into servicedata (ip, protocol, name, port, product, version, confidence) values (?, ?, ?, ?, ?, ?, ?)";

# Prepare the statements for execution against the database
my $statement  = $dbh->prepare($query);
my $statement2 = $dbh->prepare($query2);

my $dstatement  = $dbh->prepare($dquery1);
my $dstatement2 = $dbh->prepare($dquery2);

# Execute the SQL delete statements to clear out the old results

$dstatement->execute();
$dstatement2->execute();

my $np = Nmap::Parser->new();

# Parse the input XML file
$np->parsefile($ARGV[0]);

# Get an array of all hosts that are alive
my @hosts = $np->all_hosts("up");

foreach my $host_obj (@hosts) {

    # Get the IP address and hostname of the current host
    my $addr  = $host_obj->addr();
    my $hname = $host_obj->hostname();
    if ($hname) {
        print "$addr\t$hname\n";
    } else {
        print "$addr\n";
    }

    # Identify the Operating System
    my $os     = $host_obj->os_sig();
    my $osname = $os->name();
    my $osacc  = $os->name_accuracy();
    my $osven  = $os->vendor();
    my $osacc2 = $os->class_accuracy();
    #print "$osname\t$osacc\t$osven\t$osacc2\n";
    $statement->execute($addr, $osname, $osven, $osacc, $osacc2);

    # Get a list of open TCP ports for this host
    my @tcp_ports = $host_obj->tcp_open_ports();

    # Enumerate the open TCP ports
    foreach my $tcp_port (@tcp_ports) {
        my $service = $host_obj->tcp_service($tcp_port);

        # Some of these fields may be undef for a given service,
        # so silence just those warnings in this block
        no warnings 'uninitialized';
        my $svcname = $service->name();
        my $svcport = $service->port();
        my $svcprod = $service->product();
        my $svcvers = $service->version();
        my $svcconf = $service->confidence();

        if (defined($svcname)) {
            $statement2->execute($addr, 'TCP', $svcname, $svcport,
                                 $svcprod, $svcvers, $svcconf);
        }
    }
}




You would need to replace <user> and <password> in the script with your database username and password.

For the sake of testing, I just created a new MySQL database called nmap along with two tables: osdata and servicedata.

mysql -uroot -p

mysql> create database nmap;

mysql> use nmap;
mysql> create table osdata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), name varchar(20), vendor varchar(20), name_accuracy int(3), class_accuracy int(3) );

mysql> create table servicedata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), protocol varchar(3), name varchar(20), port int(6), product varchar(20), version varchar(6), confidence int(3) );

After the fact, I went back and added a timestamp column to each table:

mysql> alter table `osdata` add `lastUpdated` timestamp;
mysql> alter table `servicedata` add `lastUpdated` timestamp;
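With both tables in place, the results can be inspected with an ordinary join.  This is just an illustrative query against the schema above, not something the script itself needs:

```sql
-- Hypothetical example: show each discovered service next to the OS guess for its host
SELECT s.ip, o.name AS os, s.name AS service, s.port, s.product
FROM servicedata s
JOIN osdata o ON s.ip = o.ip
ORDER BY s.ip, s.port;
```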

With the database created, I can simply run the script from above, which I have saved as nmap_parser.pl like this:

./nmap_parser.pl output.xml

The script will run and populate the new database tables with the results it finds.  Rather than checking whether rows already exist and turning the inserts into updates, the script simply deletes all the data in the osdata and servicedata tables each time it is executed.

My thought is that the nmap scan can be set as a cron job on the snort machine.  Then the nmap_parser script can also be set to run after that cron job completes. 
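Something along these lines would do it (the paths, schedule, and network range here are all hypothetical; the second job just needs to start after the scan has had time to finish):

```shell
# Hypothetical root crontab entries on the Snort box:
# scan the LAN at 2:00 AM, then parse the results at 3:00 AM
0 2 * * * /usr/bin/nmap -A -T5 192.168.1.0/24 -oX /var/tmp/output.xml
0 3 * * * /usr/local/bin/nmap_parser.pl /var/tmp/output.xml
```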

The next step will be to make modifications to the snort front-end, BASE.  I hope to be able to add a new menu item which will read in the data from the osdata and servicedata tables and display them in a friendly format in the BASE UI.  Not sure when I'll have time to get around to that.  But I'll be sure to post my results whenever I do.  And again, this is a work in progress, so I know much needs to be changed in the code I have provided today. 

Saturday, November 3, 2012

Post Hurricane Sandy RAID Rebuild

I am fortunate that where I live did not suffer much damage in the wake of the recent storm named "Sandy".  I think that we maybe got some 40-50 MPH winds and a fair bit of rain from the storm, but no major damage was done.  Most of our power lines are buried underground in this area, so I was happy that we never lost power during the storm.  We did, however, lose power the day after the storm had passed.  Probably as a side effect of the power company working to restore power for those who had lost it during the storm.

After power was restored, I went around the house turning on all of my computer and server equipment.  I didn't really do a thorough check, though.  Today, I went to put a file on my NAS and noticed that my NFS mount was not present on my workstation.  I tried mounting it manually and it just hung.  I tried pinging the NAS and got no response.  It was powered on, though.  It was time to hook up a monitor and keyboard to this usually headless server.

As soon as the monitor came up, I could see the problem.  The system was sitting at the GRUB menu screen.  This screen usually has a timeout that, when reached, boots the default selection.  This time, though, there was no timeout.  I thought to myself that something must be wrong.  I proceeded to make the selection and allow the system to boot.

As it booted I noticed that it said my software RAID array was in a degraded state and something about an invalid partition table.  I chose to let it boot anyway.  Once the system was up and running, I logged in and was able to determine that the RAID member with the problem was /dev/sda. 

Below are the steps I used to remove the array and add it back to begin rebuilding the array:

  • mdadm --manage /dev/md127 --fail /dev/sda1
  • mdadm /dev/md127 -r /dev/sda1
  • mdadm --zero-superblock /dev/sda1
  • mdadm /dev/md127 -a /dev/sda1

Now I'm using the next command to view the status of the rebuild:

  • watch cat /proc/mdstat

All I can do at this point is wait for the rebuild to complete.  Maybe one day I'll invest in a nice hardware RAID controller.

Sunday, October 21, 2012

BYOD

I thought I'd take a moment to give my opinion on BYOD (Bring Your Own Device).  I do not agree with BYOD in the workplace.  I don't see what advantages it brings.  Personal electronic devices have no place on a corporate network.  I can't even begin to imagine the types of security holes and malware infestations that end users would be connecting to the network.

The reasons why an IT department would not want this are obvious.  There are certainly any number of risks associated with plugging in devices that you have no control over.  There may be severely out-of-date software on these devices, malware, and who knows what other security risks.  However, I also can't see why end users would want this. 

If you need a smartphone, tablet, etc. to do your job efficiently, then these things should be provided by your place of business.  You should never have to spend your hard earned cash on tools needed to perform your job.  If your employer refuses to give you the tools you need, then maybe it's time to look for another place of employment. 

Personally, I have always maintained a line between my personal and my professional life.  In the past, when I was told that I needed to join a conference call from home, my response was that they needed to provide me with a phone or I would not be joining that meeting.  The result was that I got a company issued phone.  There's a difference between being outright insubordinate and protecting your own assets. 

I do sometimes feel bad for those people who genuinely prefer to use their own devices at work.  Because for every one of those people, there are a dozen others who would just use it as an excuse to play games or socialize all day on a presumably unmonitored device instead of working.

So if you're an end user who has been nagging your IT department to allow you to use your own device, please try to understand why they are telling you "no".  It's not because they want to feel powerful by telling you what you can and cannot do.  They are busy people, too.  Keeping a network safe and secure is a full time job.  They don't get to just plug in some appliance and set it and forget it.  They must constantly be analyzing intrusion attempts and attack vectors.  All the while patching software to minimize those attack vectors.  In addition to all that, they are still available whenever you forget your password.  So please, take it easy on those guys and gals.

Monday, October 15, 2012

See Percentage of Memory Used in Linux

You can use either of the following commands to see the percentage of memory used on a Linux system.  Keep in mind that all they're actually doing is adding together the memory-percentage figure reported for each process listed.  Depending on your input method, the results could vary a little, but should generally be in the same ballpark.

The first example below adds together everything in the 4th column of "ps" output. 

The second example takes input from top, run just once in batch mode.  Then it adds together the values in the 10th column.

ps aux | awk '{sum +=$4}; END {print sum}'

top -b -n 1 | awk '{sum +=$10}; END {print sum}'
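If you want to see what the awk portion is doing in isolation, here is the same column-summing idea run against a few fake ps-style rows (the %MEM values in column 4 are made up for illustration):

```shell
# Sum column 4 of some fake "ps aux"-style rows (values are invented)
printf '%s\n' \
  'root     1  0.0  0.4  init' \
  'user   812  1.2  2.5  bash' \
  'user   997  0.3  7.1  firefox' \
| awk '{sum += $4}; END {print sum}'
```

Here awk adds 0.4 + 2.5 + 7.1 and prints 10, which is exactly what it does with the real ps or top output.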

Friday, May 4, 2012

Snort to the Rescue!

I still use Base as a web frontend to my snort installation.  I know a lot of people are using things like Snorby now, but I think Base does everything I need it to do.  Anyway, I was looking at Base this afternoon and I noticed over 200 new alerts. 

All of the alerts were from my main router and they were of the type "ICMP Test".  Closer examination showed that the router was trying to ping a machine that was unreachable.  Since my router also acts as my DNS and DHCP server, I checked the syslog on that machine. 

The syslog was full of DHCP offers to the same IP address that snort was showing as unreachable.  I took the MAC address and ran it through an online MAC to vendor lookup and it showed me that it was a MAC from Motorola CHS.  I went through the house restarting all of my Motorola cable boxes.  Since doing that I noticed that the DHCP log shows that an acknowledgment was sent in response to the DHCP offer.  Snort has also stopped alerting for that particular ICMP Test. 
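As an aside, those MAC-to-vendor lookups only key on the OUI, the first three octets of the address, which you can pull out yourself on the command line (the MAC below is made up, not the actual one from my DHCP log):

```shell
# Extract the OUI (first three octets) from a made-up MAC address
mac="00:23:ab:12:34:56"
echo "$mac" | cut -d: -f1-3
```

That prints 00:23:ab, which is the part you paste into the vendor lookup.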

I guess one of the cable boxes just got hung up a bit.  It happens from time to time.  Usually I don't catch the problem until it is too late (e.g. my favorite TV shows aren't recording as scheduled in the DVR).  Thanks to Snort and Base, that won't be a problem tonight.

NTFSclone

I installed ntfsprogs on my Debian desktop because I have a Windows partition that I'd like to image onto my NAS.  I ran ntfsclone with the --save-image option and directed it to place the output in an NFS share on my NAS.  I started it last night and it's almost 60% of the way finished.

My lessons learned are as follows:
-  Software RAID sucks.  I should probably spring for a decent hardware RAID controller.
-  Consumer hard disks also suck for large file copies like this one.  Those cheap Western Digital disks in my NAS may have seemed like a great deal, but they just don't compare to higher-end SCSI disks.  The IO Wait is what's causing it to take so long.  It was over 50% when I last looked at it on the NAS.

I should really invest in better equipment at home :)

Update:  The ntfsclone imaging finally finished.  It turns out that I may have tracked down another culprit relating to the slow file transfer and the high iowait.  I have a 3-disk RAID 5 array in my Openfiler NAS.  Running mdadm -D /dev/md0 showed that one of the disks was faulty.  I rebooted the NAS and re-added that disk to the RAID array.  Right now it is in the process of rebuilding, so I'll have to wait a while to see how that goes.  Even if it comes back online okay, I'll still probably order an extra disk to add to the array as a spare. 

Snort: Sensitive Data

Man, I'll tell you, the sensitive data processor in snort was not designed to be used with web traffic.  If you've used it at all, you know it fires all the time when used in conjunction with regular web traffic.  It seems to throw alerts for detecting email addresses if it so much as finds the '@' symbol in a packet.  Any string of numbers in a packet makes it alert for finding supposed credit card numbers. 

Since in my current setup snort is processing all packets sent from my router, I'm going to have to disable sensitive data processing.  I guess if I was only monitoring traffic from my internal network, then there would be fewer of these alerts.  And then, the alerts I do get would probably at least be worth taking a look at.  Right now, though, I'm just getting flooded with false positives.