Monday, February 25, 2013

IP Reputation: Noting the Obvious

I previously mentioned using a local IP reputation database to help with packet analysis.  It's handy for looking up the reputation of a source or destination IP address, especially if your database aggregates reputation data from various sources.  Last night I decided to use that data in a blacklist with snort's reputation preprocessor.  Today I'm going to remove that blacklist, and I'll tell you why.

I was so ensnared by the idea of having a blacklist to block traffic to sites with bad reputations that I forgot one fundamental fact: many sites share the same IP address.  The obvious example is shared hosting, where something like Apache virtual hosts is used and each website on the server shares a single public IP address.

This means that if one of those sites is hosting malware files that you can download, then the reputation for the single public IP address will be bad, even if the other sites that are hosted on that IP address are clean and not hosting any malware or other baddies.

Because I didn't want to block traffic to legitimate sites, I turned on the preprocessor and enabled the requisite GID 136 rules to alert only, so no traffic would actually be stopped.
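For reference, the preprocessor itself is just a couple of lines in snort.conf.  Mine looked roughly like this, with the blacklist path pointing at the file generated from my reputation data (your path and memcap may differ):

preprocessor reputation: \
    memcap 500, \
    blacklist /etc/snort/rules/iprep-black.list

With the GID 136 rules set to alert instead of drop (136:1 is the blacklist hit, if memory serves), a match generates an event but the traffic still passes.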

After a while, I had thousands of alerts.  Spot checking many of the alerts, I noticed that they did not appear to have any untoward content.  I looked them up on clean-mx.de and realized that it was other sites hosted on those IP addresses that contained malicious content, not the ones that were being visited from my network.

If I had chosen to drop or block that traffic, then I would have stopped communications with legitimate sites that just so happen to be hosted on the same IP as a malicious site. 

If you're extremely security conscious and don't want any affiliation with malicious sites, then maybe you would want to block traffic to any IPs associated with any type of malicious content.  However, I don't think it's fair to punish other websites because they chose the wrong hosting provider or simply got stuck on the same IP address as a site hosting malware.

Tuesday, February 12, 2013

Snort Inline with AFPacket DAQ

I decided to put my snort instance into inline mode this past weekend.  It had been running for a while in passive mode, being fed traffic from a SPAN port on my switch.  I had been running it that way so I could do some initial tuning: disabling rules that don't apply to my environment, setting thresholds for other rules, and so on.
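If you're new to that kind of tuning, most of it lives in threshold.conf (or wherever your snort.conf points for event filters).  The SIDs and addresses below are made up, but the syntax is the standard suppress/event_filter syntax:

# Never alert on this rule when my vulnerability scanner is the source
suppress gen_id 1, sig_id 1000001, track by_src, ip 192.168.1.50

# Alert at most once per minute per source for a chatty rule
event_filter gen_id 1, sig_id 1000002, type limit, track by_src, count 1, seconds 60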

I placed it inline just inside my firewall.  So it is positioned on the network between my perimeter firewall and my internal network switch.  It will be able to inspect the traffic going out of my internal network to the Internet as well as the traffic coming into my internal network.

I thought for sure that when I switched to inline I wouldn't have any performance issues.  I thought wrong.

I changed my startup script to start snort with these options:  "snort -Q -i eth1:eth2 --daq afpacket -c /etc/snort/snort.conf"

Everything seemed to work well.  I could create drop rules that were effective and I could even block access from my internal network to certain web addresses.  The problem came when I tried to use speedtest.net.  I noticed it was very slow.  I was getting less than a megabit per second, even when choosing a nearby test server.

I tried disabling all of my rules in snort and disabling inline normalization, and nothing seemed to fix the problem.  I stopped the snort instance and manually created a bridge for eth1 and eth2 using brctl.  Without snort running, this worked fine and gave me the expected results.  That told me the problem wasn't the hardware or the way everything is networked; all signs were pointing to snort.
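For anyone who wants to run the same sanity check, the manual bridge is nothing fancy.  With snort stopped, something like this does it:

brctl addbr br0
brctl addif br0 eth1
brctl addif br0 eth2
ifconfig eth1 0.0.0.0 up
ifconfig eth2 0.0.0.0 up
ifconfig br0 up

When you're done testing, take it back down with ifconfig br0 down and brctl delbr br0.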

Last night I finally discovered the problem.  My snort instance runs as a KVM virtual machine on Proxmox, and when I set up the machine, I had chosen the virtio network drivers.  Changing this to the e1000 driver (I have Intel NICs in the physical host) fixed the bandwidth problem when running snort inline.  Now I'm getting the expected results from speedtest.net.
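If you'd rather make that change from the Proxmox shell than the web UI, I believe it's a one-liner with qm; the VM ID and bridge name here are placeholders for whatever yours are:

qm set 100 --net0 e1000,bridge=vmbr0

The VM needs to be restarted for the new NIC model to take effect.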

My guess is that when snort automatically bridges your inline interfaces, it either has performance issues with the virtio drivers or treats them as some kind of generic device.  Either way, it can technically bridge the ports and use them, but throughput suffers.


Thursday, February 7, 2013

IDS/IPS User Annoyances

As you know, I use snort as my IDS product of choice at home.  Sometimes I see questions from other snort users that just strike me as dumb questions.  Now, I understand if you were suddenly told by your boss, "Guess what?  You're the new person in charge of our IDS."  You have my sympathy; there will obviously be a learning curve if you've never managed an IDS or IPS.  The people I don't have sympathy for are the ones with fancy titles like Sr. Security Analyst or Senior Network Engineer who ask the dumb questions.  These people should just know better.

Below is a list of things I thought of off the top of my head that these people should know:

Place your snort sensor inside your firewall.  Put it as close as possible to the network segment you want to monitor.  Don't put it outside your perimeter firewall.  You don't need to care about everyone who is knocking at your door; if you want to keep them out, that's why you have a firewall.  If you place the sensor outside the firewall, you will get so many alerts that you won't be able to manage them.

Which brings me to my next point: snort is not a firewall.  Please don't treat it like one.  Before you ask whether snort can block certain traffic, the better question is whether your firewall can block it before it even makes it inside your network.  So if you want to do something like rate-limit certain traffic coming from outside into your internal network, do it at your firewall.
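To give one concrete example, here's a common iptables recipe for rate-limiting inbound SSH connection attempts at a Linux firewall; the port and the limits are just illustrative:

iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 --name SSH -j DROP

After a few new connections within a minute from the same source, further attempts get dropped at the firewall, and your IDS never has to care.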

Tune your IDS/IPS for your environment.  An IDS isn't a set-it-and-forget-it type of device.  Yes, you should go through all the rules and turn off the ones that do not apply to your environment.  If you don't have time, make time; it's that important to the performance of your sensor.  If you're not running any Windows machines, turn off all the rules that apply to Windows hosts.

Don't write rules for every piece of malware under the sun.  Let your antivirus software do its job.  You do have antivirus software, right?

Do not enable the portscan preprocessor.  It hurts performance and gives you very little value in return.  So what if someone ran a portscan?  A portscan is not an exploit.  It is not even necessarily a precursor to an exploit.  And trust me, someone is always port scanning your perimeter.  Know that and move on.
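If you're looking for it, in a stock snort.conf the line in question looks something like the following; just make sure it stays commented out:

# preprocessor sfportscan: proto  { all } memcap { 10000000 } sense_level { low }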

Don't use snort to block access to particular websites.  Again, this is a task better suited to your firewall or proxy.

Don't expect sympathy if you are worried about running a rule for a 10-year-old exploit that only affects certain older versions of software.  You had 10 years to move off that version; just because you chose not to does not make it my problem.  An IDS isn't designed to replace your patching process.  It's designed to buy you the time you need to get your software patched.  Once the software is patched, turn off that rule, because you don't need it any longer.

I'm sure that given enough time I could probably write a book with tips like the above.  Right now, however, I'm a little short on time and think that what I've said so far is a good start for anyone who manages an intrusion system.


Tuesday, February 5, 2013

Snort: Flow-IP Statistics Parser

I recently enabled flow-ip statistics in my snort.conf by editing the perfmon preprocessor line like so:

preprocessor perfmonitor: time 300 file /var/snort/snort.stats pktcnt 10000 flow-ip-file /var/snort/ipflow.csv flow-ip

As you can see, I am logging snort performance data to /var/snort/snort.stats every 5 minutes or 10000 packets.  In addition, I'm logging the flow-ip data to /var/snort/ipflow.csv.

The flow-ip data can be used to help identify the top talkers on your network and is useful for troubleshooting performance issues, such as CPU spikes, in snort.  But because the output is contained in a CSV file, it isn't very easy to read.

I wrote a small perl script that shells out to standard utilities like awk and sort to make it a little easier to read.  The script adds the total TCP bytes from Host A to the total TCP bytes from Host B and outputs the top 10 rows in descending order by the total TCP bytes sent between the two hosts.

It does not de-duplicate hosts, though; the calculation is done per line of the CSV.  So if the file contains multiple lines for the same host pair, and those lines all have more TCP traffic than the others, that pair will appear multiple times in the output, sorted only by the total TCP bytes on each line.

Feel free to modify and use the code any way you see fit.  You can execute the code by saving it to a file, such as flowipparser.pl and then calling it with your CSV file as an argument from the command line:  ./flowipparser.pl ipflow.csv

The output to the console will contain three columns:  Host A IP Address, Host B IP Address, and the total TCP bytes transferred between the two hosts.

Here's the code:

#!/usr/bin/perl -w

# Takes the flow-ip CSV file as the only argument.
$INPUTFILE = "$ARGV[0]";

# Pull host A, host B, and the TCP byte counts in each direction out of the
# CSV, keep the 10 biggest rows, sum the two directions, and sort by the total.
system qq(awk -F "," '{print \$1, \$2, \$4, \$6}' $INPUTFILE | sort -r -n -k3 | head | awk '{ print \$1, \$2, \$3+\$4 }' | sort -r -n -k3);



Tuesday, January 22, 2013

Maintain a Local IP Reputation Database for Free

There are a lot of IP reputation sites out there that maintain data on IP addresses and will let you know if a given address is known for serving malware, spam, or other malicious content.  The good news is that you can use this data to create more effective whitelists and blacklists.  The bad news is that most of these services come at a cost, and the free ones usually slow you down by requiring a captcha before each lookup.

Fortunately, the good folks over at Alienvault also maintain an IP reputation database that you can download for free.  After learning of this, my first thought was "How can I make use of this?".  Then I thought, "Wouldn't it be cool to store their IP Reputation list in my own database?".  And that's just what I did.

I wrote the following script and set it up to run as a cron job every 2 hours.  Alienvault updates the downloadable list every hour, so this keeps my local copy reasonably fresh.

#!/bin/sh

# Grab the latest list from Alienvault (-N only downloads if it's newer than our copy)
wget https://reputation.alienvault.com/reputation.snort -P /tmp/ --no-check-certificate -N

# Keep only the lines that begin with an IP address
sed -n '/^[0-9]/p' /tmp/reputation.snort > /tmp/iprep.out

# Load the result into the database
/path/to/loadiprep.sh

The above script uses wget to download the latest reputation.snort file from Alienvault.  You don't have to use the Snort-formatted list; it just happens to contain exactly the information I need for my database, so that's the one I went with from their website.

The file is downloaded to /tmp, and then the sed command pulls out the lines containing IP addresses and reputation information (each one is an IP address, a ' # ' separator, and a reputation label) and writes them to /tmp/iprep.out.

Finally, it runs loadiprep.sh.  That script looks like this:

#!/usr/bin/perl

use DBI;

# Database credentials -- change these to match your setup
$opt_user = '<user>';
$opt_password = '<password>';
$mydb = 'reputation';
$host = 'localhost';

$dbh = DBI->connect("DBI:mysql:$mydb:$host", $opt_user, $opt_password)
    or die("ERROR: $DBI::errstr");

# Bulk-load the parsed list.  Each line of /tmp/iprep.out is "ip # reputation",
# so the fields are split on ' # '.  REPLACE refreshes rows for IPs we already have.
my $query1 = "load data infile '/tmp/iprep.out' replace into table iprep fields terminated by ' # ' lines terminated by '\\n' (ip, reputation)";
my $statement = $dbh->prepare($query1);
$statement->execute();
$statement->finish;

$rc = $dbh->disconnect;

On my machine I created a MySQL database called "reputation".  In that database I created a table called "iprep".  The iprep table contains two columns called "ip" and "reputation".  The ip column contains the IP addresses and the reputation column contains, you guessed it, the reputation information.
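If you want to recreate the schema, something along these lines should work.  The column sizes are guesses on my part; the important bit is that ip is a unique key, since the 'replace' in the load script needs one to overwrite old rows instead of piling up duplicates:

create database reputation;
use reputation;
create table iprep ( ip varchar(45) primary key, reputation varchar(255) );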

The loadiprep.sh script called by the first script populates the database with the downloaded IP reputation information, and the data stays current as long as the cron job keeps running.  Don't forget to change <user> and <password> in the above script to a username and password with access to your reputation database.

Right now my iprep table contains over 300,000 rows.  In looking for a way to query this data without the need to log into MySQL each time, I came up with the following:

#!/usr/bin/perl -w

use DBI;

$dbh = DBI->connect('dbi:mysql:reputation','<user>','<password>')
    or die "Connection Error: $DBI::errstr\n";

# Prepare the lookup once; it gets executed for every address in the input file
$sql = "select ip,reputation from iprep where ip = ?";
$sth = $dbh->prepare($sql);

# The input file should contain one IP address per line
$LOGFILE = "$ARGV[0]";
open(LOGFILE, '<', $LOGFILE) or die("Could not open log file.");
foreach $line (<LOGFILE>) {
    chomp($line);              # remove the newline from $line

    $sth->execute($line)
        or die "SQL Error: $DBI::errstr\n";

    # Print any match: IP address, then its reputation
    while (@row = $sth->fetchrow_array) {
        print join("          ", @row), "\n";
    }
}

$dbh->disconnect;

If you save the above perl script as "queryrepdb.pl", you can execute it from the command line like this:  ./queryrepdb.pl somefile.txt, where somefile.txt contains a list of IP addresses you want to check, one per line.  The script returns no output if none of the addresses are found in the database; if a match is found, it prints the IP address and reputation information to the console.

I took it a step further with another script:

#!/usr/bin/perl -w

use DBI;

$PCAPFILE = "$ARGV[0]";

# Read the pcap with tcpdump and pull every dotted-quad IP off each line,
# which picks up both the source and destination addresses.  The unique
# list goes into samp.txt.
system qq(tcpdump -tnr $PCAPFILE | grep -oE '([0-9]{1,3}\\.){3}[0-9]{1,3}' | sort -u > samp.txt);

$dbh = DBI->connect('dbi:mysql:reputation','<user>','<password>')
    or die "Connection Error: $DBI::errstr\n";

# Same lookup as queryrepdb.pl, prepared once and executed per address
$sql = "select ip,reputation from iprep where ip = ?";
$sth = $dbh->prepare($sql);

$LOGFILE = "samp.txt";
open(LOGFILE, '<', $LOGFILE) or die("Could not open log file.");
foreach $line (<LOGFILE>) {
    chomp($line);

    $sth->execute($line)
        or die "SQL Error: $DBI::errstr\n";

    while (@row = $sth->fetchrow_array) {
        print join("\t\t", @row), "\n";
    }
}

$dbh->disconnect;

This script will also take a file as input.  This time however, the file should be a pcap (packet capture) file, such as one created with tcpdump or wireshark. 

The script creates a list of the unique source and destination IP addresses in the pcap by using tcpdump to read the file.  The list is stored in samp.txt, which is then read back so each address can be compared against the reputation database.  If no matches are found, the script outputs nothing; if a match is found, the IP address and reputation information are printed to the console.

That last script only matches against the IP addresses in the pcap.  If a host domain appears in the packets, such as a visited URL, it is not looked up by this script.  I would encourage you to resolve the domain yourself and then check the resulting IP address against the reputation database.
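If your shell is bash, process substitution turns that into a one-liner; the domain here is just an example:

./queryrepdb.pl <(dig +short www.example.com)

dig +short prints the resolved addresses one per line, which is exactly the input format queryrepdb.pl expects (any CNAMEs in its output simply won't match anything in the database).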

If you just want to query it directly from mysql, the query is very simple:

use reputation;
select * from iprep where ip = 'xx.xx.xx.xx';

If you come across a pcap with some addresses you aren't sure about, this is a great first step in determining whether you should be concerned.  It isn't the be-all and end-all of deciding whether a host is safe, though, and you should still follow your normal network security procedures when determining whether a host poses a threat to your network.



Sunday, November 11, 2012

OS and Service Fingerprinting with Nmap

I decided that I wanted to have a network map of all the machines on my network containing information about the Operating System and services that are running on each one.  Furthermore, I want to include this data on my IDS running Snort + BASE. 

I'm running through this proof-of-concept scenario at the moment, so don't complain about any code that I post below.  It's quick POC code, fairly poorly written, but it works.  If you'd like to make it better, please feel free.  Please don't make functionality requests here; if you would like to see a feature added, make the changes yourself.  That's the beauty of having the code.  In other words, I'm doing this for me and sharing it with the world, but in the end, it's for me.  So if you don't like it, I don't want to hear about it.  Sorry for all that; I just needed to get it out of the way, lest I become inundated with silly requests and negative opinions.

As far as OS and service fingerprinting goes, Nmap is fully capable of doing just that.  So why reinvent the wheel?  I first started trying to use Nmap along with a series of 'greps', but the command became long and well, pretty horrible looking.

Then I realized I could output the data from an Nmap scan to XML format.  My command ended up looking like this:
nmap -A -T5 <IP Address(es) to scan> -oX output.xml

The above command will scan the hosts that you provide, attempting to identify the OS and services running on them.  I usually use a CIDR block for the range to scan, such as 192.168.1.0/24, but you can use any nmap accepted format.

I chose to use perl to parse the output.xml file.  That's because there is a great perl module called Nmap::Parser.  It was built specifically for this sort of activity.

The script I have right now is below:

#!/usr/bin/perl -w

#
# Give the XML file as the only program argument
#

use strict;
use Nmap::Parser;
use DBI;

my $dbh = DBI->connect(
    'DBI:mysql:database=nmap;host=localhost',
    '<user>',
    '<password>',
    { RaiseError => 1, AutoCommit => 1 },
);

# Each run starts from a clean slate (see the note below for why)
my $dquery1 = "delete from osdata";
my $dquery2 = "delete from servicedata";

# Insert statements for the OS and service results
my $query = "insert into osdata (ip, name, vendor, name_accuracy, class_accuracy)
            values (?, ?, ?, ?, ?)";

my $query2 = "insert into servicedata (ip, protocol, name, port, product, version, confidence) values (?,?,?,?,?,?,?)";

# Prepare all the statements up front
my $statement  = $dbh->prepare($query);
my $statement2 = $dbh->prepare($query2);
my $dstatement  = $dbh->prepare($dquery1);
my $dstatement2 = $dbh->prepare($dquery2);

# Empty both tables
$dstatement->execute();
$dstatement2->execute();

# Parse the input XML file
my $np = Nmap::Parser->new();
$np->parsefile("$ARGV[0]");

# Get an array of all hosts that are alive
my @hosts = $np->all_hosts("up");

foreach my $host_obj (@hosts) {

    # Get the IP address (and hostname, if there is one) of the current host
    my $addr  = $host_obj->addr();
    my $hname = $host_obj->hostname();
    if (defined $hname && $hname ne '') {
        print "$addr\t$hname\n";
    } else {
        print "$addr\n";
    }

    # Identify the operating system and store it
    my $os     = $host_obj->os_sig();
    my $osname = $os->name();
    my $osacc  = $os->name_accuracy();
    my $osven  = $os->vendor();
    my $osacc2 = $os->class_accuracy();
    $statement->execute($addr, $osname, $osven, $osacc, $osacc2);

    # Enumerate the open TCP ports and store each detected service
    my @tcp_ports = $host_obj->tcp_open_ports();
    foreach my $tcp_port (@tcp_ports) {
        my $service = $host_obj->tcp_service($tcp_port);
        no warnings 'uninitialized';    # some service fields may be undef
        my $svcname = $service->name();
        my $svcport = $service->port();
        my $svcprod = $service->product();
        my $svcvers = $service->version();
        my $svcconf = $service->confidence();

        if (defined($svcname)) {
            $statement2->execute($addr, 'TCP', $svcname, $svcport, $svcprod, $svcvers, $svcconf);
        }
    }
}

$dbh->disconnect;

You would need to replace <user> and <password> with your database username and password.

For the sake of testing, I just created a new MySQL database called nmap along with two tables: osdata and servicedata.

mysql -uroot -p

mysql> create database nmap;

mysql> use nmap;
mysql> create table osdata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), name varchar(20), vendor varchar(20), name_accuracy int(3), class_accuracy int(3) );

mysql> create table servicedata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), protocol varchar(3), name varchar(20), port int(6), product varchar(20), version varchar(6), confidence int (3) );

After the fact, I went back and added a timestamp column to each table:

mysql> alter table `osdata` add `lastUpdated` timestamp;
mysql> alter table `servicedata` add `lastUpdated` timestamp;

With the database created, I can simply run the script from above, which I have saved as nmap_parser.pl like this:

./nmap_parser.pl output.xml

The script will run and populate the new database tables with its results.  Rather than checking whether rows already exist and turning the inserts into updates, the script simply deletes all the data in the osdata and servicedata tables each time it runs.

My thought is that the nmap scan can be set up as a cron job on the snort machine, with the nmap_parser script running right after the scan completes.
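A crontab entry along these lines should do it; the schedule, scan range, and paths are placeholders:

# Scan nightly at 2am, then reload the tables if the scan succeeded
0 2 * * * nmap -A -T5 192.168.1.0/24 -oX /path/to/output.xml && /path/to/nmap_parser.pl /path/to/output.xml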

The next step will be to make modifications to the snort front-end, BASE.  I hope to be able to add a new menu item which will read in the data from the osdata and servicedata tables and display them in a friendly format in the BASE UI.  Not sure when I'll have time to get around to that.  But I'll be sure to post my results whenever I do.  And again, this is a work in progress, so I know much needs to be changed in the code I have provided today. 

Saturday, November 3, 2012

Post Hurricane Sandy RAID Rebuild

I'm fortunate that my area didn't suffer much damage in the wake of the recent storm, "Sandy".  We got maybe 40-50 MPH winds and a fair bit of rain, but no major damage was done.  Most of the power lines in this area are buried underground, so I was happy that we never lost power during the storm.  We did, however, lose power the day after the storm had passed, probably as a side effect of the power company working to restore service for those who lost it during the storm.

After power was restored, I went around the house turning on all of my computer and server equipment.  I didn't really do a thorough check, though.  Today, I went to put a file on my NAS and noticed that my NFS mount was not present on my workstation.  I tried mounting it manually and it just hung.  I tried pinging the NAS and got no response.  It was powered on, though.  It was time to hook up a monitor and keyboard to this usually headless server.

As soon as the monitor came up, I could see the problem.  The system was sitting on the GRUB menu screen.  This screen usually has a timeout that, when reached, boots the default selection.  This time, though, there was no timeout, so I figured something must be wrong.  I made the selection manually and let the system boot.

As it booted, I noticed a message saying my software RAID array was in a degraded state, along with something about an invalid partition table.  I chose to let it boot anyway.  Once the system was up and running, I logged in and determined that the RAID member with the problem was /dev/sda.

Below are the steps I used to remove the failed member from the array and add it back to begin rebuilding:

  • mdadm --manage /dev/md127 --fail /dev/sda1     (mark the failing member as failed)
  • mdadm /dev/md127 -r /dev/sda1                  (remove it from the array)
  • mdadm --zero-superblock /dev/sda1              (wipe its old md superblock)
  • mdadm /dev/md127 -a /dev/sda1                  (add it back, which kicks off the rebuild)

Now I'm using the next command to view the status of the rebuild:

  • watch cat /proc/mdstat

All I can do at this point is wait for the rebuild to complete.  Maybe one day I'll invest in a nice hardware RAID controller.