Tuesday, January 22, 2013

Maintain a Local IP Reputation Database for Free

There are a lot of IP reputation sites out there that maintain data on IP addresses and will let you know if those addresses are known for serving malware, spam, or other malicious content.  The good news is that you can use this data to create more effective whitelists and blacklists.  The bad news is that most of these services come at a cost, and the free ones usually slow you down by requiring a CAPTCHA before they will check an IP address for you.

Fortunately, the good folks over at AlienVault also maintain an IP reputation database that you can download for free.  After learning of this, my first thought was, "How can I make use of this?"  Then I thought, "Wouldn't it be cool to store their IP reputation list in my own database?"  And that's just what I did.

I wrote the following script and set it up to run as a cron job every two hours; AlienVault updates the downloadable list every hour.

#!/bin/sh

# Download the latest reputation list from AlienVault (-N only re-downloads if the remote file is newer)
wget https://reputation.alienvault.com/reputation.snort -P /tmp/ --no-check-certificate -N

# Keep only the lines that begin with an IP address
sed -n '/^[0-9]/p' /tmp/reputation.snort > /tmp/iprep.out

# Load the parsed data into MySQL
/path/to/loadiprep.sh

The above script uses wget to download the latest reputation.snort file from AlienVault.  You don't have to use this list with Snort; it just happens to have exactly the information I need for my database, so that's the one I went with from their website.

The file gets downloaded to your local /tmp directory, and then the sed command pulls out the IP addresses and reputation information and places them in /tmp/iprep.out.

Finally, it runs loadiprep.sh.  That script looks like this:

#!/usr/bin/perl

use strict;
use warnings;
use DBI;

my $opt_user     = '<user>';
my $opt_password = '<password>';
my $mydb         = 'reputation';
my $host         = 'localhost';

# Connect to the local MySQL reputation database
my $dbh = DBI->connect("DBI:mysql:$mydb:$host", $opt_user, $opt_password)
    or die "Connection Error: $DBI::errstr\n";

# Bulk-load the parsed list, replacing rows for IPs that are already present
my $query1 = "load data infile '/tmp/iprep.out' replace into table iprep fields terminated by ' # ' lines terminated by '\\n' (ip, reputation)";
my $statement = $dbh->prepare($query1);
$statement->execute();
$statement->finish;

$dbh->disconnect;

On my machine I created a MySQL database called "reputation".  In that database I created a table called "iprep".  The iprep table contains two columns called "ip" and "reputation".  The ip column contains the IP addresses and the reputation column contains, you guessed it, the reputation information.
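
If you want to set up the same thing, a table definition along these lines should work.  The column sizes are just reasonable guesses, and making ip the primary key is what lets the "replace" in the load data statement above update existing rows instead of piling up duplicates:

mysql> create database reputation;
mysql> use reputation;
mysql> create table iprep ( ip varchar(20) PRIMARY KEY, reputation varchar(255) );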

The loadiprep.sh script that is called by the first script will populate the database with the downloaded IP reputation information, and it will stay updated if you are running it with a cron job like I am.  Don't forget to change <user> and <password> in the above script to a username and password that have access to your reputation database.
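
For reference, a crontab entry along these lines covers the every-two-hours schedule (the path is just a placeholder for wherever you saved that first download script):

0 */2 * * * /path/to/getiprep.sh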

Right now my iprep table contains over 300,000 rows.  In looking for a way to query this data without the need to log into MySQL each time, I came up with the following:

#!/usr/bin/perl -w

use strict;
use DBI;

my $dbh = DBI->connect('dbi:mysql:reputation', '<user>', '<password>')
    or die "Connection Error: $DBI::errstr\n";

# The input file is given as the first argument: one IP address per line
my $logfile = $ARGV[0];
open(my $log, '<', $logfile) or die("Could not open log file.\n");

# Prepare the lookup once and reuse it for every address in the file
my $sql = "select ip,reputation from iprep where ip = ?";
my $sth = $dbh->prepare($sql);

while (my $line = <$log>) {
    chomp($line);              # remove the newline from $line

    $sth->execute($line)
        or die "SQL Error: $DBI::errstr\n";

    # Print each matching row (IP address and reputation information)
    while (my @row = $sth->fetchrow_array) {
        print join("          ", @row), "\n";
    }
}

close($log);
$dbh->disconnect;

If you save the above perl script as "queryrepdb.pl", you would execute it from the command line like this:  ./queryrepdb.pl somefile.txt, where somefile.txt contains a list of IP addresses that you want to check, each on its own line.  The script will return no output if none of the IP addresses are found in the database.  But if a match is found, it will print the IP address and reputation information to the console.

I took it a step further with another script:

#!/usr/bin/perl -w

use strict;
use DBI;

# The pcap file to check is given as the first argument
my $pcapfile = $ARGV[0];

# Use tcpdump to read the pcap, print the source and destination fields,
# strip the ports, and save the unique IP addresses to samp.txt
system(qq(tcpdump -tnr $pcapfile | awk '{print \$2; print \$4}' | grep -oE '^[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+' | sort -u > samp.txt));

my $dbh = DBI->connect('dbi:mysql:reputation', '<user>', '<password>')
    or die "Connection Error: $DBI::errstr\n";

# Prepare the lookup once and reuse it for every extracted address
my $sql = "select ip,reputation from iprep where ip = ?";
my $sth = $dbh->prepare($sql);

open(my $log, '<', 'samp.txt') or die("Could not open samp.txt.\n");
while (my $line = <$log>) {
    chomp($line);

    $sth->execute($line)
        or die "SQL Error: $DBI::errstr\n";

    # Print each matching row (IP address and reputation information)
    while (my @row = $sth->fetchrow_array) {
        print join("\t\t", @row), "\n";
    }
}

close($log);
$dbh->disconnect;

This script will also take a file as input.  This time, however, the file should be a pcap (packet capture) file, such as one created with tcpdump or Wireshark.

The script will then create a list of the unique source and destination IP addresses from the pcap using tcpdump to read the file.  The IP list is stored in a file called samp.txt.  That file is then read and the addresses are compared to the data that resides in the reputation database.  If no matches are found, the script will output nothing.  Again, if a match is found, the IP address and reputation information are printed to the console.

That last script only matches against the source and destination IP addresses in the pcap.  If there is a host domain listed in the packet, such as a visited URL, it is not looked up by this script.  However, I would encourage you to run an nslookup against the domain and then check the resulting IP address against the reputation database.
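
For example, something like the following would do it.  I'm using dig here rather than nslookup simply because its +short output is just the addresses, which makes it easy to feed straight into the query script; the domain and file name are only placeholders:

dig +short suspicious-domain.example > domain_ips.txt
./queryrepdb.pl domain_ips.txt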

If you just want to query it directly from mysql, the query is very simple:

use reputation;
select * from iprep where ip = 'xx.xx.xx.xx';
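
Or, if you would rather do it in one shot from a shell prompt, the mysql client's -e option works too (same placeholder credentials as before):

mysql -u <user> -p reputation -e "select * from iprep where ip = 'xx.xx.xx.xx';"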

If you come across a pcap with some addresses you aren't sure are malicious or not, this would be a great first step in determining whether you should be concerned.  It is not the be-all and end-all of deciding whether a host is safe, though, and you should still follow your normal network security procedures when determining if a host poses a threat to your network.



Sunday, November 11, 2012

OS and Service Fingerprinting with Nmap

I decided that I wanted to have a network map of all the machines on my network containing information about the Operating System and services that are running on each one.  Furthermore, I want to include this data on my IDS running Snort + BASE. 

I'm running through this proof of concept scenario at the moment.  Don't complain about any code that I post below.  Again, I'm just doing a quick POC, so the code is fairly poorly written.  But it does work.  If you'd like to make it better, please feel free.  Please don't make functionality requests here.  If you would like to see a feature added, please make the changes yourself.  That's the beauty of having the code.  In other words, I'm doing this for me and sharing it with the world.  But in the end, it's for me.  So if you don't like it, I don't want to hear about it because I don't care.  Sorry for all that, just needed to get it out of the way, lest I become inundated with silly requests and negative opinions.

As far as OS and service fingerprinting goes, Nmap is fully capable of doing just that.  So why reinvent the wheel?  I first started trying to use Nmap along with a series of 'greps', but the command became long and, well, pretty horrible looking.

Then I realized I could output the data from an Nmap scan to XML format.  My command ended up looking like this:
nmap -A -T5 <IP Address(es) to scan> -oX output.xml

The above command will scan the hosts that you provide, attempting to identify the OS and services running on them.  I usually use a CIDR block for the range to scan, such as 192.168.1.0/24, but you can use any nmap accepted format.

I chose to use perl to parse the output.xml file.  That's because there is a great perl module called Nmap::Parser.  It was built specifically for this sort of activity.

The script I have right now is below:

#!/usr/bin/perl -w

#
#
# Give the XML file as the only program argument
#

use strict;
use Nmap::Parser;          
use DBI;
use DBD::mysql;

my $dbh = DBI->connect(
    'DBI:mysql:database=nmap;host=localhost',
    '<user>',
    '<password>',
    { RaiseError => 1, AutoCommit => 1 },
);

# set the value of your SQL query

my $dquery1 = "delete from osdata";
my $dquery2 = "delete from servicedata";

my $query = "insert into osdata (ip, name, vendor, name_accuracy, class_accuracy)
            values (?, ?, ?, ?, ?) ";

my $query2 = "insert into servicedata (ip, protocol, name, port, product, version, confidence) values (?,?,?,?,?,?,?)";

# prepare your statement for connecting to the database
my $statement = $dbh->prepare($query);
my $statement2 = $dbh->prepare($query2);

my $dstatement = $dbh->prepare($dquery1);
my $dstatement2 = $dbh->prepare($dquery2);

# execute your SQL delete statements

$dstatement->execute();
$dstatement2->execute();
my $np = new Nmap::Parser;

# Parse the input XML file
$np->parsefile("$ARGV[0]");

# Get an array of all hosts that are alive
my @hosts = $np->all_hosts("up");


foreach my $host_obj (@hosts) {

    # Get the IP address and hostname of the current host
    my $addr  = $host_obj->addr();
    my $hname = $host_obj->hostname();
    if ($hname) {
        print "$addr\t$hname\n";
    } else {
        print "$addr\n";
    }

    # Identify the operating system
    my $os     = $host_obj->os_sig();
    my $osname = $os->name();
    my $osacc  = $os->name_accuracy();
    my $osven  = $os->vendor();
    my $osacc2 = $os->class_accuracy();
    #print "$osname\t$osacc\t$osven\t$osacc2\n";
    $statement->execute($addr, $osname, $osven, $osacc, $osacc2);

    # Get a list of open TCP ports for this host
    my @tcp_ports = $host_obj->tcp_open_ports();

    # Enumerate the open TCP ports and record each identified service
    foreach my $tcp_port (@tcp_ports) {
        my $service = $host_obj->tcp_service($tcp_port);

        no warnings;
        my $svcname = $service->name();
        my $svcport = $service->port();
        my $svcprod = $service->product();
        my $svcvers = $service->version();
        my $svcconf = $service->confidence();

        if (defined($svcname)) {
            $statement2->execute($addr, 'TCP', $svcname, $svcport, $svcprod, $svcvers, $svcconf);
        }
    }
}




You would need to replace <user> and <password> with your database username and password.

For the sake of testing, I just created a new MySQL database called nmap along with two tables: osdata and servicedata.

mysql -uroot -p

mysql> create database nmap;

mysql> use nmap;
mysql> create table osdata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), name varchar(20), vendor varchar(20), name_accuracy int(3), class_accuracy int(3) );

mysql> create table servicedata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), protocol varchar(3), name varchar(20), port int(6), product varchar(20), version varchar(6), confidence int (3) );

After the fact, I went back and added a timestamp column to each table:

mysql> alter table `osdata` add `lastUpdated` timestamp;
mysql> alter table `servicedata` add `lastUpdated` timestamp;

With the database created, I can simply run the script from above, which I have saved as nmap_parser.pl like this:

./nmap_parser.pl output.xml

The script will run and populate the new database tables with the results it finds.  Rather than checking whether rows already exist and switching the insert to an update in the script, each run simply deletes all of the data in the osdata and servicedata tables before repopulating them.

My thought is that the nmap scan can be set as a cron job on the snort machine.  Then the nmap_parser script can also be set to run after that cron job completes. 
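
As a rough sketch, that pair of cron jobs might look something like the following, with a gap between them so the scan has time to finish.  The schedule, target range, and paths are all placeholders you would adjust for your own network:

0 1 * * 0 nmap -A -T5 192.168.1.0/24 -oX /path/to/output.xml
0 3 * * 0 /path/to/nmap_parser.pl /path/to/output.xml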

The next step will be to make modifications to the snort front-end, BASE.  I hope to be able to add a new menu item which will read in the data from the osdata and servicedata tables and display them in a friendly format in the BASE UI.  Not sure when I'll have time to get around to that.  But I'll be sure to post my results whenever I do.  And again, this is a work in progress, so I know much needs to be changed in the code I have provided today. 

Saturday, November 3, 2012

Post Hurricane Sandy RAID Rebuild

I am fortunate that the area where I live did not suffer much damage in the wake of the recent storm named "Sandy".  I think we maybe got some 40-50 MPH winds and a fair bit of rain from the storm, but no major damage was done.  Most of our power lines are buried underground in this area, so I was happy that we never lost power during the storm.  We did, however, lose power the day after the storm had passed, probably as a side effect of the power company working to restore power for those who had lost it during the storm.

After power was restored, I went around the house turning on all of my computer and server equipment.  I didn't really do a thorough check, though.  Today, I went to put a file on my NAS and noticed that my NFS mount was not present on my workstation.  I tried mounting it manually and it just hung.  I tried pinging the NAS and got no response.  It was powered on, though.  It was time to hook up a monitor and keyboard to this usually headless server.

As soon as the monitor came up, I could see the problem.  The system was sitting at the GRUB menu screen.  This screen usually has a timeout that, when reached, boots the default selection.  This time, though, there was no timeout.  I thought to myself that something must be wrong.  I proceeded to make the selection and allow the system to boot.

As it booted I noticed that it said my software RAID array was in a degraded state and something about an invalid partition table.  I chose to let it boot anyway.  Once the system was up and running, I logged in and was able to determine that the RAID member with the problem was /dev/sda. 

Below are the steps I used to remove the array and add it back to begin rebuilding the array:

  • mdadm --manage /dev/md127 --fail /dev/sda1
  • mdadm /dev/md127 -r /dev/sda1
  • mdadm --zero-superblock /dev/sda1
  • mdadm /dev/md127 -a /dev/sda1

Now I'm using the next command to view the status of the rebuild:

  • watch cat /proc/mdstat
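
mdadm itself can also report on the rebuild; its detail output includes a "Rebuild Status" line showing the percentage complete:

  • mdadm --detail /dev/md127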

All I can do at this point is wait for the rebuild to complete.  Maybe one day I'll invest in a nice hardware RAID controller.

Sunday, October 21, 2012

BYOD

I thought I'd take a moment to give my opinion on BYOD (Bring Your Own Device).  I do not agree with BYOD in the workplace.  I don't see what advantages it brings.  Personal electronic devices have no place on a corporate network.  I can't even begin to imagine the types of security holes and malware infestations that end users would be bringing onto the network.

The reasons why an IT department would not want this are obvious.  There are certainly any number of risks associated with plugging in devices that you have no control over.  There may be severely out-of-date software on these devices, malware, and who knows what other security risks.  However, I also can't see why end users would want this.

If you need a smartphone, tablet, etc. to do your job efficiently, then these things should be provided by your place of business.  You should never have to spend your hard earned cash on tools needed to perform your job.  If your employer refuses to give you the tools you need, then maybe it's time to look for another place of employment. 

Personally, I have always maintained a line between my personal and my professional life.  In the past, when I was told that I needed to join a conference call from home, my response was that they needed to provide me with a phone or I would not be joining that meeting.  The result was that I got a company issued phone.  There's a difference between being outright insubordinate and protecting your own assets. 

I do sometimes feel bad for those people who just prefer to use their own devices at work, because for every one of those people, there are a dozen others who would just use it as an excuse to play games or socialize all day on a presumably unmonitored device instead of working.

So if you're an end user who has been nagging your IT department to allow you to use your own device, please try to understand why they are telling you "no".  It's not because they want to feel powerful by telling you what you can and cannot do.  They are busy people, too.  Keeping a network safe and secure is a full time job.  They don't get to just plug in some appliance and set it and forget it.  They must constantly be analyzing intrusion attempts and attack vectors.  All the while patching software to minimize those attack vectors.  In addition to all that, they are still available whenever you forget your password.  So please, take it easy on those guys and gals.

Monday, October 15, 2012

See Percentage of Memory Used in Linux

You can use the following commands to see the percentage of memory used on a Linux system.  Keep in mind that all they're actually doing is adding together the per-process memory-usage percentages.  Depending on your input method, the results could vary a little, but they should generally be in the same ballpark.

The first example below adds together everything in the 4th column of "ps" output. 

The second example takes input from top, by running just one time in batch mode.  Then it adds together the values in the 10th column.

ps aux | awk '{sum +=$4}; END {print sum}'

top -b -n 1 | awk '{sum +=$10}; END {print sum}'
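
If you want a rough cross-check of those numbers, you can also compute the percentage from the totals that free reports.  It won't line up exactly with the per-process sums, since free counts things like buffers and cache as used, but it's another quick data point:

free | awk '/^Mem/ {printf "%.1f\n", $3/$2*100}'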

Friday, May 4, 2012

Snort to the Rescue!

I still use BASE as a web frontend to my Snort installation.  I know a lot of people are using things like Snorby now, but I think BASE does everything I need it to do.  Anyway, I was looking at BASE this afternoon and I noticed over 200 new alerts.

All of the alerts were from my main router and they were of the type "ICMP Test".  Closer examination showed that the router was trying to ping a machine that was unreachable.  Since my router also acts as my DNS and DHCP server, I checked the syslog on that machine. 

The syslog was full of DHCP offers to the same IP address that Snort was showing as unreachable.  I took the MAC address and ran it through an online MAC-to-vendor lookup, and it showed that it was a MAC from Motorola CHS.  I went through the house restarting all of my Motorola cable boxes.  Since doing that, I've noticed that the DHCP log shows an acknowledgment being sent in response to the DHCP offer.  Snort has also stopped alerting for that particular ICMP Test.

I guess one of the cable boxes just got hung up a bit.  It happens from time to time.  Usually I don't catch the problem until it is too late (e.g. my favorite TV shows aren't recording as scheduled in the DVR).  Thanks to Snort and Base, that won't be a problem tonight.

NTFSclone

I installed ntfsprogs on my Debian desktop because I have a Windows partition that I'd like to create an image of on my NAS.  I ran ntfsclone with the --save-image option and directed it to place the output in an NFS share to my NAS.  I started it last night and it's almost 60% of the way finished.
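
For anyone wanting to do the same, the invocation looks roughly like this; the partition and the NFS mount point below are just examples, not my actual paths:

ntfsclone --save-image --output /mnt/nas/windows.img /dev/sda2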

My lessons learned are as follows:
-  Software RAID sucks.  I should probably spring for a decent hardware RAID controller.
-  Consumer hard disks also suck for large file copies like this one.  Those cheap Western Digital disks in my NAS may have seemed like a great deal, but they just don't compare to higher-end SCSI disks.  The IO Wait is what's causing it to take so long.  It was over 50% when I last looked at it on the NAS.

I should really invest in better equipment at home :)

Update:  The ntfsclone imaging finally finished.  It turns out that I may have tracked down another culprit relating to the slow file transfer and the high iowait.  I have a 3-disk RAID 5 array in my Openfiler NAS.  Running mdadm -D /dev/md0 showed that one of the disks was faulty.  I rebooted the NAS and re-added that disk to the RAID array.  Right now it is in the process of rebuilding, so I'll have to wait a while to see how that goes.  Even if it comes back online okay, I'll still probably order an extra disk to add to the array as a spare.