As you know, I use snort as my IDS of choice at home. Sometimes I see questions from other snort users that just strike me as dumb questions. Now, I understand if you were suddenly told by your boss, "Guess what? You're the new person in charge of our IDS." You have my sympathy, and there will obviously be a learning curve if you've never managed an IDS or IPS before. The people I don't have sympathy for are the ones with fancy titles like Sr. Security Analyst or Senior Network Engineer who ask these questions. These people should just know better.
Below is a list of things I thought of off the top of my head that these people should know:
Place your snort sensor inside your firewall. Put it as close as possible to the network segment you want to monitor. Don't put it outside your perimeter firewall; you don't need to care about everyone knocking at your door. If you want to keep them out, that's what your firewall is for. If you place the sensor outside the firewall, you will get so many alerts that you won't be able to manage them.
Which brings me to my next point... Snort is not a firewall, so please don't treat it like one. Before you ask whether snort can block certain traffic, the better question is whether your firewall can block it before it even makes it inside your network. If you want to do something like rate-limit certain traffic coming from outside to your internal network, do it at your firewall.
Tune your IDS/IPS for your environment. An IDS isn't a set-it-and-forget-it device. Yes, you should go through all the rules and turn off the ones that do not apply to your environment. If you don't have time, make time; it's that important to the performance of your sensor. If you're not running any Windows machines, turn off all the rules that apply to Windows hosts.
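For example, you can disable whole rule categories by commenting out their include lines in snort.conf. A minimal sketch; the category file names below are typical, but check your own ruleset:
# snort.conf: keep the categories that apply to you...
include $RULE_PATH/exploit.rules
include $RULE_PATH/web-attacks.rules
# ...and comment out the ones that don't (no Windows hosts here):
# include $RULE_PATH/netbios.rules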
Don't write rules for every piece of malware under the sun. Let your antivirus software do its job. You do have antivirus software, right?
Do not enable the portscan preprocessor. It hurts performance and gives you very little in terms of value. So what if someone ran a portscan? A portscan is not an exploit. It is not even necessarily a precursor to an exploit. And trust me, someone is always port scanning your perimeter. Know that and move on.
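In snort.conf terms, that just means leaving the sfportscan preprocessor line commented out; the options shown below are illustrative defaults, not a tuning suggestion:
# preprocessor sfportscan: proto { all } memcap { 10000000 } sense_level { low }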
Don't use snort to block access to particular websites. Again, this is a task better suited to your firewall or proxy.
Don't expect sympathy if you are concerned about running a rule for a 10-year-old exploit that only affects certain older versions of software. You had 10 years to move off of that software version; choosing not to does not make it my problem. An IDS isn't designed to replace your patching procedure. It's designed to buy you the time you need to get your software patched. Once the software is patched, turn off that specific rule, because you don't need it any longer.
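If you manage your rules with a tool like Oinkmaster, retiring a rule is a single line in oinkmaster.conf; the SID below is made up for illustration:
disablesid 1002001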
I'm sure that given enough time I could write a book of tips like these. Right now, however, I'm a little short on time, and I think what I've said so far is a good start for anyone who manages an intrusion detection system.
Tuesday, February 5, 2013
Snort: Flow-IP Statistics Parser
I recently enabled flow-ip statistics in my snort.conf by editing the perfmon preprocessor line like so:
preprocessor perfmonitor: time 300 file /var/snort/snort.stats pktcnt 10000 flow-ip-file /var/snort/ipflow.csv flow-ip
As you can see, I am logging snort performance data to /var/snort/snort.stats every 5 minutes or 10000 packets. In addition, I'm logging the flow-ip data to /var/snort/ipflow.csv.
The flow-ip data can be used to help identify the top talkers on your network and is useful for troubleshooting performance issues, such as CPU spikes, in snort. But because the output is contained in a CSV file, it isn't very easy to read.
I wrote a small Perl script that shells out to awk and sort to make it a little easier to read. The script adds the total TCP bytes from Host A to the total TCP bytes from Host B and outputs the top 10 host pairs in descending order by the total TCP bytes sent between them.
It does not de-duplicate host pairs, though, and it does its calculation per line of the CSV. So if your CSV file contains multiple lines for the same pair of hosts, and those lines all happen to have more TCP traffic than the other lines in the file, that pair will be listed multiple times in the output, sorted only by the differences in total TCP bytes transferred.
Feel free to modify and use the code any way you see fit. You can execute it by saving it to a file, such as flowipparser.pl, and then calling it with your CSV file as an argument from the command line: ./flowipparser.pl ipflow.csv
The output to the console will contain three columns: Host A IP Address, Host B IP Address, and the total TCP bytes transferred between the two hosts.
Here's the code:
#!/usr/bin/perl -w
# Usage: ./flowipparser.pl ipflow.csv
$INPUTFILE = "$ARGV[0]";
# Fields 1 and 2 are the host IPs; fields 4 and 6 are each host's total TCP
# bytes. Sum them per line, then print the top 10 pairs by combined bytes.
system qq(awk -F "," '{print \$1, \$2, \$4+\$6}' $INPUTFILE | sort -r -n -k3 | head);
Tuesday, January 22, 2013
Maintain a Local IP Reputation Database for Free
There are a lot of IP reputation sites out there that maintain data on certain IP addresses and will let you know if those addresses have been known for serving malware, spam, or other malicious content. The good news is that you can use this data to create more effective whitelists and blacklists. The bad news is that most of these services come at a cost. And the free ones usually impede you by requiring a captcha to be entered before checking an IP address for you.
Fortunately, the good folks over at Alienvault also maintain an IP reputation database that you can download for free. After learning of this, my first thought was, "How can I make use of this?" Then I thought, "Wouldn't it be cool to store their IP reputation list in my own database?" And that's just what I did.
I wrote the following script and set it up to run as a cron job every 2 hours. Alienvault updates the downloadable list every hour.
#!/bin/sh
# Grab the latest reputation list, keep only the data lines (those that
# start with a digit), and load them into the database.
wget https://reputation.alienvault.com/reputation.snort -P /tmp/ --no-check-certificate -N
sed -n '/^[0-9]/p' /tmp/reputation.snort > /tmp/iprep.out
/path/to/loadiprep.sh
The above script will use wget to download the latest reputation.snort file from Alienvault. You don't have to use this list with Snort. It just happens to have exactly the information I need for my database in it, so that's the one I went with from their website.
The list gets downloaded to your local /tmp directory, and then I pull out the IP addresses and reputation information and place them in /tmp/iprep.out.
Finally, it runs loadiprep.sh. Despite the .sh extension, that one is actually a Perl script:
#!/usr/bin/perl
use strict;
use DBI;

my $opt_user = '<user>';
my $opt_password = '<password>';
my $mydb = 'reputation';
my $host = 'localhost';

my $dbh = DBI->connect("DBI:mysql:$mydb:$host", $opt_user, $opt_password)
or die "Connection Error: $DBI::errstr\n";

# Bulk-load the parsed list. Each line of /tmp/iprep.out is an IP address
# and its reputation info separated by " # ".
my $query1 = "load data infile '/tmp/iprep.out' replace into table iprep fields terminated by ' # ' lines terminated by '\\n' (ip, reputation)";
my $statement = $dbh->prepare($query1);
$statement->execute();
$statement->finish;
$dbh->disconnect;
On my machine I created a MySQL database called "reputation". In that database I created a table called "iprep". The iprep table contains two columns called "ip" and "reputation". The ip column contains the IP addresses and the reputation column contains, you guessed it, the reputation information.
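If you want to recreate that setup, the statements would look roughly like this; the column sizes are my own guesses, so size them to taste. Making ip the primary key is what lets the load data ... replace in loadiprep.sh refresh existing rows on each run instead of piling up duplicates:
mysql> create database reputation;
mysql> use reputation;
mysql> create table iprep ( ip varchar(20) primary key, reputation varchar(255) );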
The loadiprep.sh script that is called by the first script will populate the database with the downloaded IP reputation information, and it will stay updated if you run it from a cron job like I do. Don't forget to change <user> and <password> in the above script to a username and password that have access to your reputation database.
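For reference, a crontab entry for the two-hour schedule might look something like this, assuming you saved the download script as /path/to/getiprep.sh (a placeholder name of my choosing):
0 */2 * * * /path/to/getiprep.sh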
Right now my iprep table contains over 300,000 rows. In looking for a way to query this data without the need to log into MySQL each time, I came up with the following:
#!/usr/bin/perl -w
use strict;
use DBI;

my $dbh = DBI->connect('dbi:mysql:reputation', '<user>', '<password>')
or die "Connection Error: $DBI::errstr\n";

# Prepare the lookup once, then run it for every address in the input file.
my $sth = $dbh->prepare("select ip, reputation from iprep where ip = ?");

open(my $fh, '<', $ARGV[0]) or die "Could not open input file: $!\n";
while (my $line = <$fh>) {
chomp($line); # remove the newline from $line
$sth->execute($line) or die "SQL Error: $DBI::errstr\n";
while (my @row = $sth->fetchrow_array) {
print join(" ", @row), "\n";
}
}
If you save the above Perl script as "queryrepdb.pl", you would execute it from the command line like this: ./queryrepdb.pl somefile.txt, where somefile.txt contains a list of IP addresses you want to check, one per line. The script returns no output if none of the addresses are found in the database, but if a match is found, it prints the IP address and reputation information to the console.
I took it a step further with another script:
#!/usr/bin/perl -w
use strict;
use DBI;

my $PCAPFILE = "$ARGV[0]";

# Use tcpdump to read the pcap and pull out both the source and destination
# IPv4 addresses (dropping trailing ports), then store the unique list in samp.txt.
system qq(tcpdump -tnr $PCAPFILE ip | awk '{gsub(/:\$/, "", \$4); split(\$2, s, "."); split(\$4, d, "."); printf "%s.%s.%s.%s\\n%s.%s.%s.%s\\n", s[1], s[2], s[3], s[4], d[1], d[2], d[3], d[4]}' | sort -u > samp.txt);

my $dbh = DBI->connect('dbi:mysql:reputation', '<user>', '<password>')
or die "Connection Error: $DBI::errstr\n";

# Check every extracted address against the reputation table.
my $sth = $dbh->prepare("select ip, reputation from iprep where ip = ?");

open(my $fh, '<', 'samp.txt') or die "Could not open samp.txt: $!\n";
while (my $line = <$fh>) {
chomp($line);
$sth->execute($line) or die "SQL Error: $DBI::errstr\n";
while (my @row = $sth->fetchrow_array) {
print join("\t\t", @row), "\n";
}
}
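If you save this one as something like checkpcap.pl, you can run it the same way as the last script, with the pcap as the argument: ./checkpcap.pl capture.pcap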
This script also takes a file as input. This time, however, the file should be a pcap (packet capture) file, such as one created with tcpdump or Wireshark.
The script uses tcpdump to read the pcap and builds a list of the unique source and destination IP addresses, which it stores in a file called samp.txt. That file is then read, and each address is compared against the data in the reputation database. If no matches are found, the script outputs nothing; if a match is found, the IP address and reputation information are printed to the console.
That last script only matches against the source and destination IP addresses in the pcap. If a host domain appears in the packet data, such as a visited URL, this script does not look it up. However, I would encourage you to run nslookup against the domain and then check the resulting IP address against the reputation database.
If you just want to query it directly from mysql, the query is very simple:
use reputation;
select * from iprep where ip = 'xx.xx.xx.xx';
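Or, if you want a one-shot query from the shell without opening a mysql session, something like this works (substitute your own credentials and address):
mysql -u <user> -p reputation -e "select * from iprep where ip = 'xx.xx.xx.xx';"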
If you come across a pcap with some addresses you aren't sure about, this is a great first step in determining whether you should be concerned. It isn't the end-all, be-all of deciding whether a host is safe, though; you should still follow your normal network security procedures when determining whether a host poses a threat to your network.
Sunday, November 11, 2012
OS and Service Fingerprinting with Nmap
I decided that I wanted to have a network map of all the machines on my network containing information about the Operating System and services that are running on each one. Furthermore, I want to include this data on my IDS running Snort + BASE.
I'm running through this proof-of-concept scenario at the moment, so don't complain about any code I post below. It's a quick POC, so the code is fairly poorly written, but it does work. If you'd like to make it better, please feel free. Please don't make functionality requests here; if you would like to see a feature added, make the changes yourself. That's the beauty of having the code. In other words, I'm doing this for me and sharing it with the world, but in the end, it's for me. So if you don't like it, I don't want to hear about it. Sorry for all that, just needed to get it out of the way, lest I become inundated with silly requests and negative opinions.
As far as OS and service fingerprinting goes, Nmap is fully capable of doing just that, so why reinvent the wheel? I first started trying to use Nmap along with a series of greps, but the command became long and, well, pretty horrible looking.
Then I realized I could output the data from an Nmap scan to XML format. My command ended up looking like this:
nmap -A -T5 <IP Address(es) to scan> -oX output.xml
The above command will scan the hosts that you provide, attempting to identify the OS and services running on them. I usually give it a CIDR block for the range to scan, such as 192.168.1.0/24, but you can use any nmap-accepted target format.
I chose to use Perl to parse the output.xml file, because there is a great Perl module called Nmap::Parser that was built for exactly this sort of thing.
The script I have right now is below:
#!/usr/bin/perl -w
#
#
# Give the XML file as the only program argument
#
use strict;
use Nmap::Parser;
use DBI;
use DBD::mysql;
my $dbh = DBI->connect(
'DBI:mysql:database=nmap;host=localhost',
'<user>',
'<password>',
{ RaiseError => 1, AutoCommit => 1 },
);
# set the value of your SQL query
my $dquery1 = "delete from osdata";
my $dquery2 = "delete from servicedata";
my $query = "insert into osdata (ip, name, vendor, name_accuracy, class_accuracy)
values (?, ?, ?, ?, ?) ";
my $query2 = "insert into servicedata (ip, protocol, name, port, product, version, confidence) values (?,?,?,?,?,?,?)";
# prepare your statement for connecting to the database
my $statement = $dbh->prepare($query);
my $statement2 = $dbh->prepare($query2);
my $dstatement = $dbh->prepare($dquery1);
my $dstatement2 = $dbh->prepare($dquery2);
# execute your SQL delete statements
$dstatement->execute();
$dstatement2->execute();
my $np = Nmap::Parser->new;
# Parse the input XML file
$np->parsefile("$ARGV[0]");
# Get an array of all hosts that are alive
my @hosts = $np->all_hosts("up");
foreach my $host_obj (@hosts) {
# Get the IP address of the current host
my $addr = $host_obj->addr();
my $hname = $host_obj->hostname();
# hostname() can come back empty when nmap resolved no name for the host
if ($hname) {
print "$addr\t$hname\n";
} else {
print "$addr\n";
}
#Identify the Operating System
my $os = $host_obj->os_sig();
my $osname = $os->name();
my $osacc = $os->name_accuracy();
my $osven = $os->vendor();
my $osacc2 = $os->class_accuracy();
#print "$osname\t$osacc\t$osven\t$osacc2\n";
$statement->execute($addr, $osname, $osven, $osacc, $osacc2);
# Get a list of open TCP ports for this host
my @tcp_ports = $host_obj->tcp_open_ports();
# Enumerate the open TCP ports
foreach my $tcp_port (@tcp_ports) {
my $service = $host_obj->tcp_service($tcp_port);
# product, version, and confidence may be undef when nmap cannot identify
# them; DBI inserts undef bind values as NULL
my $svcname = $service->name();
my $svcport = $service->port();
my $svcprod = $service->product();
my $svcvers = $service->version();
my $svcconf = $service->confidence();
if (defined($svcname)) {
$statement2->execute($addr,'TCP',$svcname,$svcport,$svcprod,$svcvers,$svcconf);
}
}
}
You would need to replace <user> and <password> with your database username and password.
For the sake of testing, I just created a new MySQL database called nmap along with two tables; osdata and servicedata.
mysql -uroot -p
mysql> create database nmap;
mysql> use nmap;
mysql> create table osdata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), name varchar(20), vendor varchar(20), name_accuracy int(3), class_accuracy int(3) );
mysql> create table servicedata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), protocol varchar(3), name varchar(20), port int(6), product varchar(20), version varchar(6), confidence int (3) );
After the fact, I went back and added a timestamp column to each table:
mysql> alter table `osdata` add `lastUpdated` timestamp;
mysql> alter table `servicedata` add `lastUpdated` timestamp;
With the database created, I can simply run the script from above, which I have saved as nmap_parser.pl like this:
./nmap_parser.pl output.xml
The script will run and populate the new database tables with the results it finds. Rather than checking whether rows already exist and switching inserts to updates, the script simply deletes all the data in the osdata and servicedata tables each time it is executed.
My thought is that the nmap scan can be set as a cron job on the snort machine. Then the nmap_parser script can also be set to run after that cron job completes.
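For what it's worth, a sketch of what that crontab might look like; the schedule, paths, and scan range below are placeholders, not a recommendation:
# Every night at 2:00 AM: scan the LAN, then load the results into MySQL
0 2 * * * nmap -A -T5 192.168.1.0/24 -oX /tmp/output.xml && /path/to/nmap_parser.pl /tmp/output.xml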
The next step will be to make modifications to the snort front-end, BASE. I hope to be able to add a new menu item which will read in the data from the osdata and servicedata tables and display them in a friendly format in the BASE UI. Not sure when I'll have time to get around to that. But I'll be sure to post my results whenever I do. And again, this is a work in progress, so I know much needs to be changed in the code I have provided today.
Saturday, November 3, 2012
Post Hurricane Sandy RAID Rebuild
I am fortunate that where I live did not suffer much damage in the wake of the recent storm named "Sandy". I think that we maybe got some 40-50 MPH winds and a fair bit of rain from the storm, but no major damage was done. Most of our power lines are buried underground in this area, so I was happy that we never lost power during the storm. We did, however, lose power the day after the storm had passed. Probably as a side effect of the power company working to restore power for those who had lost it during the storm.
After power was restored, I went around the house turning on all of my computer and server equipment. I didn't really do a thorough check, though. Today, I went to put a file on my NAS and noticed that my NFS mount was not present on my workstation. I tried mounting it manually and it just hung. I tried pinging the NAS and got no response. It was powered on, though. It was time to hook up a monitor and keyboard to this usually headless server.
As soon as the monitor came up, I could see the problem. The system was sitting on the GRUB menu screen. This screen usually has a timeout that, when reached, boots the default selection. This time, though, there was no timeout. Something had to be wrong. I made the selection manually and allowed the system to boot.
As it booted I noticed that it said my software RAID array was in a degraded state and something about an invalid partition table. I chose to let it boot anyway. Once the system was up and running, I logged in and was able to determine that the RAID member with the problem was /dev/sda.
Below are the steps I used to remove the array and add it back to begin rebuilding the array:
- mdadm --manage /dev/md127 --fail /dev/sda1 (mark the failing member as failed)
- mdadm /dev/md127 -r /dev/sda1 (remove it from the array)
- mdadm --zero-superblock /dev/sda1 (wipe its old RAID superblock)
- mdadm /dev/md127 -a /dev/sda1 (add it back, which kicks off the rebuild)
Now I'm using the next command to view the status of the rebuild:
- watch cat /proc/mdstat
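If you'd rather take a one-shot look than leave watch running, mdadm can also report the rebuild progress directly:
- mdadm --detail /dev/md127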
All I can do at this point is wait for the rebuild to complete. Maybe one day I'll invest in a nice hardware RAID controller.
Sunday, October 21, 2012
BYOD
I thought I'd take a moment to give my opinion on BYOD (Bring Your Own Device). I do not agree with BYOD in the workplace; I don't see what advantages it brings. Personal electronic devices have no place on a corporate network. I can't even begin to imagine the kinds of security holes and malware infestations that end users would bring onto the network.
The reasons an IT department would not want this are obvious. There are any number of risks associated with plugging in devices you have no control over: severely out-of-date software, malware, and who knows what other security issues. However, I also can't see why end users would want this.
If you need a smartphone, tablet, etc. to do your job efficiently, then these things should be provided by your place of business. You should never have to spend your hard earned cash on tools needed to perform your job. If your employer refuses to give you the tools you need, then maybe it's time to look for another place of employment.
Personally, I have always maintained a line between my personal and my professional life. In the past, when I was told that I needed to join a conference call from home, my response was that they needed to provide me with a phone or I would not be joining that meeting. The result was that I got a company issued phone. There's a difference between being outright insubordinate and protecting your own assets.
I do sometimes feel bad for those people who just prefer to use their own devices at work. Because for every one of those people, there are a dozen others who would just use this as an excuse to play games or socialize all day instead of working on a presumably unmonitored device.
So if you're an end user who has been nagging your IT department to allow you to use your own device, please try to understand why they are telling you "no". It's not because they want to feel powerful by telling you what you can and cannot do. They are busy people, too. Keeping a network safe and secure is a full time job. They don't get to just plug in some appliance and set it and forget it. They must constantly be analyzing intrusion attempts and attack vectors. All the while patching software to minimize those attack vectors. In addition to all that, they are still available whenever you forget your password. So please, take it easy on those guys and gals.
Monday, October 15, 2012
See Percentage of Memory Used in Linux
You can use the following commands to see the percentage of memory used on a Linux system. Keep in mind that all they're actually doing is adding together the memory-percentage column for every listed process. Depending on your input method, the results can vary a little, but they should generally be in the same ballpark.
The first example adds together everything in the 4th column of ps output:
ps aux | awk '{sum +=$4}; END {print sum}'
The second takes its input from top, run just once in batch mode, and adds together the values in the 10th column:
top -b -n 1 | awk '{sum +=$10}; END {print sum}'
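For comparison, you can also derive a percentage from the totals that free reports. This is a sketch; the column layout of free varies between versions, so verify the fields on your system first:
free | awk '/^Mem:/ {printf "%.1f\n", $3/$2*100}'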