Sunday, November 11, 2012

OS and Service Fingerprinting with Nmap

I decided that I wanted to have a network map of all the machines on my network containing information about the Operating System and services that are running on each one.  Furthermore, I want to include this data on my IDS running Snort + BASE. 

I'm running through this proof of concept scenario at the moment.  Don't complain about any code that I post below.  Again, I'm just doing a quick POC, so the code is fairly poorly written.  But it does work.  If you'd like to make it better, please feel free.  Please don't make functionality requests here.  If you would like to see a feature added, please make the changes yourself.  That's the beauty of having the code.  In other words, I'm doing this for me and sharing it with the world.  But in the end, it's for me.  So if you don't like it, I don't want to hear about it because I don't care.  Sorry for all that, just needed to get it out of the way, lest I become inundated with silly requests and negative opinions.

As far as OS and service fingerprinting goes, Nmap is fully capable of doing just that.  So why reinvent the wheel?  I first started trying to use Nmap along with a series of 'greps', but the command became long and, well, pretty horrible looking.

Then I realized I could output the data from an Nmap scan to XML format.  My command ended up looking like this:
nmap -A -T5 <IP Address(es) to scan> -oX output.xml

The above command will scan the hosts that you provide, attempting to identify the OS and services running on them.  I usually use a CIDR block for the range to scan, such as 192.168.1.0/24, but you can use any nmap accepted format.

I chose to use Perl to parse the output.xml file, because there is a great Perl module called Nmap::Parser that was built specifically for this sort of activity.

The script I have right now is below:

#!/usr/bin/perl

#
# Parse Nmap XML output (via Nmap::Parser) and load the OS and service
# fingerprint data into MySQL.
#
# Give the XML file as the only program argument
#

use strict;
use warnings;
use Nmap::Parser;
use DBI;    # DBI loads DBD::mysql automatically via the DSN below

my $dbh = DBI->connect(
    'DBI:mysql:database=nmap;host=localhost',
    '<user>',
    '<password>',
    { RaiseError => 1, AutoCommit => 1 },
);

# set the values of your SQL queries

my $dquery1 = "delete from osdata";
my $dquery2 = "delete from servicedata";

my $query = "insert into osdata (ip, name, vendor, name_accuracy, class_accuracy)
            values (?, ?, ?, ?, ?)";

my $query2 = "insert into servicedata (ip, protocol, name, port, product, version, confidence) values (?,?,?,?,?,?,?)";

# prepare your statements for the database

my $statement   = $dbh->prepare($query);
my $statement2  = $dbh->prepare($query2);
my $dstatement  = $dbh->prepare($dquery1);
my $dstatement2 = $dbh->prepare($dquery2);

# empty both tables before loading the results of the new scan

$dstatement->execute();
$dstatement2->execute();

my $np = Nmap::Parser->new();

# Parse the input XML file
$np->parsefile($ARGV[0]);

# Get an array of all hosts that are alive
my @hosts = $np->all_hosts("up");

foreach my $host_obj (@hosts) {

    # Get the IP address (and hostname, if Nmap found one) of the current host
    my $addr  = $host_obj->addr();
    my $hname = $host_obj->hostname();
    if ($hname) {
        print "$addr\t$hname\n";
    } else {
        print "$addr\n";
    }

    # Identify the Operating System (guard against hosts with no OS match)
    if (my $os = $host_obj->os_sig()) {
        my $osname = $os->name();
        my $osacc  = $os->name_accuracy();
        my $osven  = $os->vendor();
        my $osacc2 = $os->class_accuracy();
        $statement->execute($addr, $osname, $osven, $osacc, $osacc2);
    }

    # Enumerate the open TCP ports and record each identified service
    # (undef values are inserted as NULLs)
    foreach my $tcp_port ($host_obj->tcp_open_ports()) {
        my $service = $host_obj->tcp_service($tcp_port);
        my $svcname = $service->name();
        my $svcport = $service->port();
        my $svcprod = $service->product();
        my $svcvers = $service->version();
        my $svcconf = $service->confidence();

        if (defined($svcname)) {
            $statement2->execute($addr, 'TCP', $svcname, $svcport,
                                 $svcprod, $svcvers, $svcconf);
        }
    }
}

$dbh->disconnect();

You would need to replace <user> and <password> with your database username and password.

For the sake of testing, I just created a new MySQL database called nmap along with two tables: osdata and servicedata.

mysql -uroot -p

mysql> create database nmap;

mysql> use nmap;
mysql> create table osdata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), name varchar(20), vendor varchar(20), name_accuracy int(3), class_accuracy int(3) );

mysql> create table servicedata ( id INT AUTO_INCREMENT PRIMARY KEY, ip varchar(20), protocol varchar(3), name varchar(20), port int(6), product varchar(20), version varchar(6), confidence int(3) );

After the fact, I went back and added a timestamp column to each table:

mysql> alter table `osdata` add `lastUpdated` timestamp;
mysql> alter table `servicedata` add `lastUpdated` timestamp;

With the database created, I can simply run the script from above, which I have saved as nmap_parser.pl like this:

./nmap_parser.pl output.xml

The script will run and populate the new database tables with the results it finds.  Rather than checking whether rows already exist and switching inserts to updates, the script simply deletes everything in the osdata and servicedata tables each time it is executed.

My thought is that the nmap scan can be set as a cron job on the snort machine.  Then the nmap_parser script can also be set to run after that cron job completes. 
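
Sketched out (the paths and schedule here are placeholders for whatever your setup uses, and the scan belongs in root's crontab since -A implies OS detection), a single entry that chains the two commands guarantees the parser only runs after the scan completes:

0 2 * * * /usr/bin/nmap -A -T5 192.168.1.0/24 -oX /var/tmp/output.xml && /usr/local/bin/nmap_parser.pl /var/tmp/output.xml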

The next step will be to make modifications to the snort front-end, BASE.  I hope to be able to add a new menu item which will read in the data from the osdata and servicedata tables and display them in a friendly format in the BASE UI.  Not sure when I'll have time to get around to that.  But I'll be sure to post my results whenever I do.  And again, this is a work in progress, so I know much needs to be changed in the code I have provided today. 

Saturday, November 3, 2012

Post Hurricane Sandy RAID Rebuild

I am fortunate that where I live did not suffer much damage in the wake of the recent storm named "Sandy".  I think that we maybe got some 40-50 MPH winds and a fair bit of rain from the storm, but no major damage was done.  Most of our power lines are buried underground in this area, so I was happy that we never lost power during the storm.  We did, however, lose power the day after the storm had passed.  Probably as a side effect of the power company working to restore power for those who had lost it during the storm.

After power was restored, I went around the house turning on all of my computer and server equipment.  I didn't really do a thorough check, though.  Today, I went to put a file on my NAS and noticed that my NFS mount was not present on my workstation.  I tried mounting it manually and it just hung.  I tried pinging the NAS and got no response.  It was powered on, though.  It was time to hook up a monitor and keyboard to this usually headless server.

As soon as the monitor came up, I could see the problem.  The system was sitting on the GRUB menu screen.  This screen usually has a timeout that, when reached, boots the default selection.  This time, though, there was no timeout.  I thought to myself that something must be wrong.  I proceeded to make the selection and allow the system to boot.

As it booted I noticed that it said my software RAID array was in a degraded state and something about an invalid partition table.  I chose to let it boot anyway.  Once the system was up and running, I logged in and was able to determine that the RAID member with the problem was /dev/sda. 

Below are the steps I used to remove the array and add it back to begin rebuilding the array:

  • mdadm --manage /dev/md127 --fail /dev/sda1
  • mdadm /dev/md127 -r /dev/sda1
  • mdadm --zero-superblock /dev/sda1
  • mdadm /dev/md127 -a /dev/sda1

Now I'm using the next command to view the status of the rebuild:

  • watch cat /proc/mdstat

All I can do at this point is wait for the rebuild to complete.  Maybe one day I'll invest in a nice hardware RAID controller.
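
If you'd rather not stare at /proc/mdstat, mdadm can report on the rebuild directly; run something like the following and look for the "Rebuild Status" and "State" lines (once it finishes, the state should go back to clean):

mdadm --detail /dev/md127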

Sunday, October 21, 2012

BYOD

I thought I'd take a moment to give my opinion on BYOD (Bring Your Own Device).  I do not agree with BYOD in the workplace.  I don't see what advantages it brings.  Personal electronic devices have no place on a corporate network.  I can't even begin to imagine the types of security holes and malware infestations that end users would be bringing onto the network.

The reasons an IT department would not want this are obvious.  There are certainly any number of risks associated with plugging in devices that you have no control over.  There may be severely out-of-date software on these devices, malware, and who knows what other security risks.  However, I also can't see why end users would want this.

If you need a smartphone, tablet, etc. to do your job efficiently, then these things should be provided by your place of business.  You should never have to spend your hard earned cash on tools needed to perform your job.  If your employer refuses to give you the tools you need, then maybe it's time to look for another place of employment. 

Personally, I have always maintained a line between my personal and my professional life.  In the past, when I was told that I needed to join a conference call from home, my response was that they needed to provide me with a phone or I would not be joining that meeting.  The result was that I got a company issued phone.  There's a difference between being outright insubordinate and protecting your own assets. 

I do sometimes feel bad for those people who just prefer to use their own devices at work, because for every one of those people, there are a dozen others who would just use this as an excuse to play games or socialize all day on a presumably unmonitored device instead of working.

So if you're an end user who has been nagging your IT department to allow you to use your own device, please try to understand why they are telling you "no".  It's not because they want to feel powerful by telling you what you can and cannot do.  They are busy people, too.  Keeping a network safe and secure is a full time job.  They don't get to just plug in some appliance and set it and forget it.  They must constantly be analyzing intrusion attempts and attack vectors.  All the while patching software to minimize those attack vectors.  In addition to all that, they are still available whenever you forget your password.  So please, take it easy on those guys and gals.

Monday, October 15, 2012

See Percentage of Memory Used in Linux

You can use the following commands to see the percentage of memory used on a Linux system.  Keep in mind that all they're actually doing is adding together the memory percentage used for each process listed.  Depending on which command you use, the results can vary a little, but they should generally be in the same ballpark.

The first example below adds together everything in the 4th column of "ps" output. 

The second example takes input from top, by running just one time in batch mode.  Then it adds together the values in the 10th column.

ps aux | awk '{sum +=$4}; END {print sum}'

top -b -n 1 | awk '{sum +=$10}; END {print sum}'
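
As a sanity check against the kernel's own numbers (note that the per-process sums above double-count shared memory, and the one-liner below counts buffers/cache as "used", so expect some drift between the three), you can compute used/total from free:

free | awk '/Mem:/ {printf "%.1f\n", ($3/$2)*100}'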

Friday, May 4, 2012

Snort to the Rescue!

I still use Base as a web frontend to my snort installation.  I know a lot of people are using things like Snorby now, but I think Base does everything I need it to do.  Anyway, I was looking at Base this afternoon and I noticed over 200 new alerts. 

All of the alerts were from my main router and they were of the type "ICMP Test".  Closer examination showed that the router was trying to ping a machine that was unreachable.  Since my router also acts as my DNS and DHCP server, I checked the syslog on that machine. 

The syslog was full of DHCP offers to the same IP address that snort was showing as unreachable.  I took the MAC address and ran it through an online MAC to vendor lookup and it showed me that it was a MAC from Motorola CHS.  I went through the house restarting all of my Motorola cable boxes.  Since doing that I noticed that the DHCP log shows that an acknowledgment was sent in response to the DHCP offer.  Snort has also stopped alerting for that particular ICMP Test. 

I guess one of the cable boxes just got hung up a bit.  It happens from time to time.  Usually I don't catch the problem until it is too late (e.g. my favorite TV shows aren't recording as scheduled in the DVR).  Thanks to Snort and Base, that won't be a problem tonight.

NTFSclone

I installed ntfsprogs on my Debian desktop because I have a Windows partition that I'd like to create an image of on my NAS.  I ran ntfsclone with the --save-image option and directed it to place the output in an NFS share to my NAS.  I started it last night and it's almost 60% of the way finished.
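
For reference, the command was along these lines, with the device and image path being placeholders for your own Windows partition and NFS mount point:

ntfsclone --save-image --output /mnt/nas/windows.img /dev/sda1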

My lessons learned are as follows:
-  Software RAID sucks.  I should probably spring for a decent hardware RAID controller.
-  Consumer hard disks also suck for large file copies like this one.  Those cheap Western Digital disks in my NAS may have seemed like a great deal, but they just don't compare to higher-end SCSI disks.  The IO Wait is what's causing it to take so long.  It was over 50% when I last looked at it on the NAS.

I should really invest in better equipment at home :)

Update:  The ntfsclone imaging finally finished.  It turns out that I may have tracked down another culprit relating to the slow file transfer and the high iowait.  I have a 3-disk RAID 5 array in my Openfiler NAS.  Running mdadm -D /dev/md0 showed that one of the disks was faulty.  I rebooted the NAS and re-added that disk to the RAID array.  Right now it is in the process of rebuilding, so I'll have to wait a while to see how that goes.  Even if it comes back online okay, I'll still probably order an extra disk to add to the array as a spare. 

Snort: Sensitive Data

Man, I'll tell you, the sensitive data preprocessor in snort was not designed to be used with web traffic.  If you've used it at all, you know it fires all the time when used in conjunction with regular web traffic.  It seems to throw alerts for detecting email addresses if it so much as finds the '@' symbol in a packet.  Any string of numbers in a packet makes it alert for finding supposed credit card numbers.

Since in my current setup snort is processing all packets sent from my router, I'm going to have to disable sensitive data processing.  I guess if I was only monitoring traffic from my internal network, then there would be fewer of these alerts.  And then, the alerts I do get would probably at least be worth taking a look at.  Right now, though, I'm just getting flooded with false positives. 

Saturday, April 28, 2012

Underscore in Hostname

I had to add this line to my Bind9 configuration for my home DNS server:
check-names master warn;

Had to do this mainly because I was getting crap like "bad owner name (check-names)" in my syslog every time my Android phone tried to join the network.  Apparently, the underscore isn't a valid hostname character as far as DNS is concerned, and Android phones tend to have one in their hostnames.  The annoying thing about this is that you can't change the hostname on a non-rooted phone.

But anyway, using the line up above in your named.conf file will stop your syslog from adding these alerts.  I think it's probably not the best thing to do, but until my phone uses a normal hostname, I have to do it to avoid sifting through tons of useless syslog alerts.
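
For context, check-names can be set at the options level (or per-zone); the relevant chunk of named.conf ends up looking roughly like this (the directory line is just a stock Debian placeholder):

options {
        directory "/var/cache/bind";
        check-names master warn;
};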

Interestingly enough, before adding that to clean up the log, Snort was also firing each time my phone joined the network, citing it as a possible Bind9 DoS attempt.  So even if I wasn't checking my logs regularly, snort would have at least let me know that I should pay closer attention to my phone. 

IPtables Port Spanning -- Sort of

I installed Snort on a Debian virtual machine in my home test lab.  I thought it would be fun to learn more about it.  What better way to learn than to just start using it?

I pretty much just followed one of the guides available at snort.org to get snort installed with PulledPork for downloading updated rulesets.  I've actually installed snort before, but it was several years ago and on Slackware instead of Debian.  I have noticed some things that are new to snort since the last time I installed it.  Things like the sensitive data preprocessor.  No, SDP, you're wrong.  None of those web packets contain CCNs or SSNs.  I actually disabled SDP in the end.  If you just comment out the sensitive data preprocessor line in snort.conf and the sensitive-data-rules line, you'll probably find that snort will die often and complain about something with "sd_pattern".  Just look in the snort.rules file and comment out the alert lines containing "sd_pattern".  That should solve that problem.  Unless, of course, you really do need to look for sensitive data.  In that case, don't turn it off.  But do get ready for loads of false positives.
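
To make that concrete (a rough sketch only -- the exact lines vary by snort version and ruleset), the pieces to comment out look something like this.  In snort.conf:

# preprocessor sensitive_data: alert_threshold 25

And in snort.rules, any alert line that uses the sd_pattern keyword, e.g.:

# alert tcp $HOME_NET any -> $EXTERNAL_NET ... (msg:"SENSITIVE-DATA Email Addresses"; ... sd_pattern:5,email; ...)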

Having an IDS installed is not fun if you can't get packets for it to process, though.  I don't really care about chatter inside my network between the different machines.  I just wanted to process any packets sent or received through my router (machine running Debian and iptables).

I own a switch, not a hub, so plugging in the physical network connection used by the snort VM won't do any good.  And my switch is a cheap, unmanaged switch.  So I can't do a span port on it, either.

Then I learned about the TEE target in iptables.  It will copy the packets passing through the firewall and forward them over the network to another machine.  Yes, it's probably not the most secure method in the world, but it was exactly what I needed for my home network.  Kind of like a simulated span port.

However, it was a pain to get going on Debian stable.  In fact, you have to use xtables-addons along with iptables.  But it's not easy to get working correctly because of the versions of xtables-addons and iptables in the stable repos.  Instead I just ended up upgrading my router to Debian testing.  It was the easy way out.  That way everything is a new version and I got a nice shiny new 3.x kernel.  Once you get your system to recognize the TEE target and the --gateway option, you can add something like this to your iptables rules:

iptables -t mangle -A PREROUTING -j TEE --gateway x.x.x.x
(where x.x.x.x is the IP of the machine you want to receive the packets.)
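
One caveat, and this is my own reading rather than anything from the docs: the mangle PREROUTING rule only copies packets arriving at the router, so if you also want to mirror traffic the router itself generates (DNS replies, etc.), you'd add a matching rule to the OUTPUT chain:

iptables -t mangle -A OUTPUT -j TEE --gateway x.x.x.x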

Now that that's over, I can get back to configuring snort.  Any traffic coming into or out of my network through my router will also get copied over to snort for processing.  Yay!


Saturday, March 31, 2012

Irony


Optical Character Recognition with Linux

My company currently uses ABBYY products to perform OCR on PDF documents.  I thought it would be fun to see what open source alternatives existed out there for this purpose.  After a little bit of searching, I found that most people seemed to agree that Tesseract is one of the more accurate OCR programs available that is also open source.  I decided to give it a try on a Debian virtual machine.

I first tried using Tesseract on a TIFF file with cursive handwriting.  This failed miserably and just gave me a bunch of garbage as text output.  Maybe if I put forth the time and effort to "train" Tesseract I could get this to work somewhat.  But then again, everybody's cursive handwriting is different, so I can't ever see getting this to work reliably. 

I then tried a PDF document with typed text.  Tesseract wouldn't even read the PDF, as is.  I first had to convert it into an image file (I chose TIFF, again).  This worked much better.  I'd say it was probably about 90 - 95% accurate.  I then tried the same PDF in Windows using ABBYY Corporate Edition version 10.  ABBYY had much better results.  In fact, it was almost 100% accurate.  I have to give the point to ABBYY on this one.
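
If it helps anyone, the conversion-plus-OCR step looked roughly like this (the file names are placeholders, and the -density value is just a commonly suggested starting point):

convert -density 300 document.pdf -depth 8 page.tif
tesseract page.tif output

The recognized text ends up in output.txt.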

Finally, I tried scanning a gas station receipt and OCR-ing it.  Using Tesseract produced more garbage.  This time, though, ABBYY also produced garbage.  Granted, though, this receipt was very crumpled and the text on the receipt was printed very lightly.  I had to strain my eyes to read it myself, so I can't blame the two pieces of software for not being able to properly OCR it.

The fact is that I don't really have any personal uses for OCR software.  That makes it difficult for me to think up more testing scenarios.  My results seem to show that if I needed to OCR documents for a business, I'd probably put my trust in the commercial ABBYY product line.  The Tesseract project does show a lot of promise, though.  Tesseract did seem to perform much faster and use far less resources than ABBYY.  With some improved out-of-the-box accuracy, my recommendation could certainly shift in favor of Tesseract.

Saturday, March 17, 2012

Ethernet Auto-Negotiation

I've probably mentioned this before, but I'm surprised how many people I encounter who still don't get how auto-negotiation works.  The easiest way to spot network administrators who just don't get it is to analyze their network for duplex mismatches.  And you usually don't end up looking for those until you have users complaining about "slowness" over the network.

The problem is that most people think auto-negotiation will work if they just set the network interface in their computer to use it.  This will not guarantee a successful negotiation of the correct duplex setting.   That is because auto-negotiation requires each end of the connection to be set for auto-negotiation.  That means that in addition to the NIC in the computer, the switch that it is connected to must also be set for auto-negotiation.  This is defined as part of the IEEE 802.3 standard for Ethernet connections.

The standard says that if only one end of the connection is set for auto-negotiation, the speed (100Mbps, 1Gbps, etc.) will still be detected fine, but the duplex setting on the auto-negotiating end will fall back to half-duplex.  If you want to see how you are negotiated from your PC, then you would need a tool from your NIC manufacturer.  Some manufacturers like Broadcom do offer such tools that you can install, but not all manufacturers will provide such a tool.
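
On a Linux box, at least, you don't need a vendor tool; ethtool will show you what the link actually settled on (eth0 being whatever your interface is called):

ethtool eth0

Check the Speed, Duplex, and Auto-negotiation lines in its output.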

A more complex example would be using small, unmanaged switches with managed switches.  Those cheap unmanaged switches usually do not allow you to specify the duplex setting for each port.  Therefore, they use auto-negotiation by default.  So if you connect one to a port on a managed switch that is explicitly set for, let's say, full-duplex, then you will have a duplex mismatch resulting in a half-duplex connection.  In this case, even if you checked the connection with a tool from a workstation connected to the unmanaged switch, you would not get correct results.  The tool would most likely show a full-duplex connection.  That is because the connection from the PC to the unmanaged switch may be fine.  The issue would be with the backbone connection from the unmanaged switch to the managed switch.

The easiest remedy for these types of duplex mismatches is to simply always use auto-negotiation on all of your devices.   Almost all modern devices are going to use auto-neg by default.  So anything you purchase and connect to your network will always negotiate the proper duplex as long as all of your network devices are configured for auto-negotiation.

Remember, it takes two for auto-negotiation over Ethernet networks.  

Wednesday, March 7, 2012

Overheard Conversation at Work

I just had to share this with the world.  I was just sitting at my desk and caught the tail end of a conversation between two coworkers.  They were talking about the disk configuration in a server. 

Person #1 --  How are the disks configured?
Person #2 --  When data is written to one drive, it is also written to another drive.
Person #1 -- Okay, the disks are mirrored.
Person #2 -- No, it isn't mirrored.  It's RAID-ed. 

I wasn't a part of the conversation, so maybe Person #2 meant something else.  But he just kept describing disk mirroring.  While there are RAID levels for mirroring, it can also be called just that...mirroring.   One isn't right and the other wrong.

You really had to be there.  It was just how Person #2 was talking down to Person #1 acting like saying "mirroring" in this situation was completely wrong.  Could you imagine if someone was telling you that, let's say, RAID1 does not equal "disk mirroring"?

In this case, I think the person had just learned the term "RAID" and was trying to shun any other words to describe what his pretty new word was describing.

Sunday, March 4, 2012

Debian Wheezy & GDM3 :: Disable User List

I wanted to disable the user list that shows on the GDM3 greeter.  You know, where you can point and click on a username and then enter the password.  I don't find this very secure, since knowing the username is half the battle for someone breaking into your computer.  And Jesus F-ing Christ was it hard to disable!  Hey, Gnome devs, please dear God, start working on some better system settings tools.  Stuff like this should be easier.  I mean, I can put up with this whole one desktop environment to rule them all (PCs, tablets, and phones) mentality that seems to be infecting the industry right now.  I'd even say that things like Unity and Gnome 3 are growing on me the more I use them.  But even Microsoft still gives me a bunch of Control Panel applets with the Windows 8 Consumer Preview.  There are almost no important settings which can be modified in Gnome 3 or Unity by using a point and click method.  If I can do it by editing a file or running a command on a server, then great.  But at the end of the day, I'd like to simply point and click on my desktop system. 

Sorry for the rant.  Back to the problem at hand...

At first I had tried things like adding a line to /etc/gdm3/greeter.gconf-defaults to disable the user list, as is documented all over the Internet.  This did not work. 

I finally found my solution on the Linux Mint forum.  The command I had to run was:

sudo gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory --type bool --set /apps/gdm/simple-greeter/disable_user_list true
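
To verify the mandatory setting took, reading the key back (read-only this time) should print 'true':

gconftool-2 --direct --config-source xml:readonly:/etc/gconf/gconf.xml.mandatory --get /apps/gdm/simple-greeter/disable_user_list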

Saturday, March 3, 2012

Debian :: Upgrade Stable to Wheezy

I've been running Debian Stable on my laptop and desktop at home for a couple of months now.  Even though I thought I could live with fairly out of date software in the repositories, I decided last night that I couldn't.  So I set out to upgrade to Wheezy (Testing). 

I simply changed every mention of "stable" to "testing" in my /etc/apt/sources.list.  Then I ran sudo aptitude update and sudo aptitude dist-upgrade.  The second command would exit every once in a while, so I'd have to run it again.  I just accepted whatever it told me it needed to do.  I kept running it until it finally didn't show any packages that needed to be upgraded.  At first, I had tried using sudo apt-get dist-upgrade, but after a while I couldn't get it to run anymore -- it kept showing unresolvable dependency errors.  Aptitude worked fine, though.  After it was done, I crossed my fingers and rebooted. 
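
In case it saves someone some typing, the whole dance boiled down to roughly this (the sed is a blunt instrument and just a sketch -- eyeball your sources.list afterward, since "stable" can appear in places you don't want changed):

sudo sed -i 's/stable/testing/g' /etc/apt/sources.list
sudo aptitude update
sudo aptitude dist-upgrade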

On the desktop, after the reboot, I was greeted with a text login instead of with gdm3.  I ran sudo apt-get install gdm3 gnome-core.  Rebooted again, and still no graphical login.

I thought the issue might have been related to my NVIDIA drivers (I have a GTS 250 video card), so I uninstalled all the nvidia driver stuff that I had installed.  After that I tried installing the binary driver from NVIDIA's website.  It just SIGTERM-ed after accepting the license agreement. 

I re-installed all the nvidia dkms stuff and then deleted any xorg.conf that I had in /etc/X11.  Then I created a directory called /etc/X11/xorg.conf.d and (as root -- not with sudo) ran the command echo -e 'Section "Device"\n\tIdentifier "My GPU"\n\tDriver "nvidia"\nEndSection' > /etc/X11/xorg.conf.d/20-nvidia.conf.
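
That echo just writes out a minimal device section; the resulting /etc/X11/xorg.conf.d/20-nvidia.conf contains:

Section "Device"
	Identifier "My GPU"
	Driver "nvidia"
EndSection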

This time after restarting, I got my gdm3 login screen that I was hoping for.  I logged in and everything seemed to be in order.  I brought up the dash and decided to type in the name of some software to search for.  As soon as I hit a key, the dash would crash.  This happened every time I tried to type something into the dash's search box.  I was able to fix that by clearing the contents of a file and then making the file immutable with the chattr command:  echo > ~/.local/share/recently-used.xbel && sudo chattr +i ~/.local/share/recently-used.xbel.

None of my NFS mounts were mounting as specified in /etc/fstab, and I discovered I needed to install nfs-common again using apt-get.  I also installed gnome-tweak-tool so I could have some more control over the Gnome environment.  One of the big things I used the tweak tool for was adding minimize and maximize buttons to windows.

Now the only problem I have is that the Software Center doesn't open for me.  Well, the window opens and it looks like it's trying to do something for a minute, but then just leaves me with a blank window.  I'm not too terribly concerned about it because I can still use aptitude, apt-get, and synaptic to install software.  It would be nice if it still worked, but I'm betting it will be fixed in a future update.  Either that or I'll end up fixing it myself or just re-installing at some point.   

Monday, February 6, 2012

Final Fantasy XIII-2 Ending Rant

Sorry, but this is a non-technical post.  So if you're only looking for the system administration type stuff, you can skip over this one.  I just needed a place to rant.  This post also contains a spoiler for the new video game Final Fantasy XIII-2, so stop reading unless you want to see it.

With that said, on to the rant...

I've been a huge fan of the Final Fantasy franchise ever since its early days.  I'm also more forgiving than most people of the titles in the franchise that are deemed unpopular.  Probably because the ones that are good are so good, in my opinion, that I'm willing to look past the shortcomings of the other titles.  For instance, I thought Final Fantasy XIII was a good game.  I certainly don't think it was the best game in the series, but I also didn't jump on the "this game sucks" bandwagon that many chose to. 

I purchased a copy of Final Fantasy XIII-2 just a couple of days after its release.  I spent most of this weekend playing it non-stop.  I thought the game was great.  It was fun and it looked beautiful.  It seemed like Square Enix had accounted for and rectified most of the complaints people had with the first title.  As is usual for me, I wanted to first see the story and ending, so I played through the game quickly and at high enough levels just to get by.  This usually works out until some boss battle at or near the end in which I have to deal with a very precarious balance of healing and attacking in order to achieve a very narrow victory.  This time was no exception.  The final boss battle lasted just about an hour for me.  It was quite a rush, so I was thrilled that I had finished it and sat back to watch the ending. 

Well, the ending takes a turn for the worse and I start wondering what is going to happen next to resolve things.  Then, blammo!  There are those three little words..."To be continued". 

I immediately turned to the internet to see what plans Square Enix has for another sequel.  And I'm shocked at what I find.  Apparently, SE has announced to the press that there are no plans at this time for a sequel to FF XIII-2.  Instead they are suggesting that the game's true ending will be experienced via downloadable content.  And that's where I start to have a problem. 

So, as far as I understand, what they are trying to tell me is that they charged me full price for an incomplete game.  On top of that, they are suggesting that I spend more money on content that will complete that game.  Seems a lot like highway robbery to me.

I'm not 100% against downloadable content.  After all, it can be used to breathe new life into an old game.  Charging me to download side quests that aren't part of the original game is fine.  But the ending?!  When I pay almost $60 for a video game, I expect it to have a beginning, middle, and end.  You've left off the ending this time.  Do I get some of my money back now?  Or surely you're at least going to let everyone who bought the game access the ending for free, right?
 

Monday, January 16, 2012

Home Virtualization Server

I chose to install Citrix XenServer 6 on my server.  The bad news is that Citrix doesn't offer a management console for Linux or a web console, so it's next to impossible to manage your virtual environment without a Windows machine.  I found a project called OpenXenManager that tries to fill that void.  I installed it on my Ubuntu machine by following some instructions I found (see below).

sudo apt-get install subversion python-glade2 python-gtk-vnc
svn co https://openxenmanager.svn.sourceforge.net/svnroot/openxenmanager openxenmanager 
cd openxenmanager/trunk 
./openxenmanager 

I first had to edit the openxenmanager file in the trunk directory and replace the occurrence of python2 with python before it would run correctly.

I'm glad I was able to find OpenXenManager because it does allow me to do a lot of management tasks directly from my Linux workstation.  However, it is not a polished product.  It will occasionally lock up or crash.  I also didn't see a way to activate my XenServer installation from the OpenXenManager interface.  Same goes for applying patches to XenServer.  But I was able to use it to install a few Windows server guests.

Ironically, it doesn't work so well for Linux guests.  At least for the Ubuntu 10.04 releases I have tried to install using it.  Actually, it will install an HVM Linux guest without any problems.  But with HVM you don't get features like live migration.  For that, the virtual machine needs to be running in paravirtualized mode (PV).  Doing this from OpenXenManager is a lot more work than doing it from the Windows-based XenCenter console.  And even when I did this from OpenXenManager, I still couldn't access the console of the VM.  I could, however, access the console from XenCenter.  Maybe this has something to do with the fact that the latest SVN release of OpenXenManager is still the equivalent of XenCenter 5.something instead of the latest version 6 release.

Speaking from experience, if you have a motherboard that supports it, make sure you disable AMD's Cool & Quiet feature in your BIOS.  Before I disabled this, VM guest installation was incredibly slow.  It sped up tremendously with this feature disabled.

Other than the stuff I mentioned above, XenServer 6 is doing a great job.  Honestly, if you're running this in a Windows environment, I don't think you'll have any issues.  Most of my complaints so far are based around the fact that Citrix isn't planning to offer a Linux management console.  Even better would be a web-based management interface.  Something that would work across most platforms with a browser. 

Switching Desktops at Home

Yesterday I changed out my primary desktop machine at home.  I decided my existing desktop would be better served as a host server for my virtual machines.  Since I still like having a machine other than my laptop, I found an old HP Pavilion a220n in my junk collection (c'mon you know you have stuff lying around like this, too).  I installed Ubuntu 11.10 on it and it's running surprisingly well.  There is a slight delay when launching programs and it certainly isn't nearly as "zippy" as my six core machine, but I think it'll be fine for a while.  Since I still have a four core processor that I'm not using, I may look at ordering a motherboard and building a new desktop.  Until then, the old pavilion will have to do.

Update:   I decided the delays on the pavilion when launching applications were a bit too much for me.  If I had a decent video card that would fit in one of the PCI slots in that machine, it might improve.  I think the onboard video is my main bottleneck (or at least a bottleneck).  I remembered that I had another AMD Athlon 64 system in my closet.  I have since hooked that system up and loaded Ubuntu on it.  It seems to be running much more smoothly.  I think I'll stick with it for a while. 

Tuesday, January 3, 2012

Web Conferencing and VOIP

I realized that I was missing something with OpenMeetings.  With WebEx, I get a phone number for the conference that participants can dial into.  It looks like OpenMeetings was designed only for web-based conferencing.  From what I've read on mailing lists, the devs suggest that you hire someone familiar with the telecom aspect to help build an integration for you.  This is a little disappointing, but I still like OpenMeetings as a conferencing tool. 

This also made me realize that I don't really have any experience with VOIP.  Well, at least outside of the enterprise with an Avaya PBX.  I'd like to do something on a smaller scale with something like Asterisk.  I think I'll start coming up with a plan to introduce an Asterisk box into my home.  I don't have a land line, so it would kind of fill that void.

I'll post more about this as things start to develop with it.  I'm fairly lazy, so it might take a while.   

Linux and Active Directory Integration

I've been down the road of integrating Linux machines with Active Directory before.  I thought I had pretty much done it all.  I've installed the Unix tools on Windows domain controllers before.  I've set up a Samba 3 PDC that used OpenLDAP for authentication for all Linux and Windows clients.  I've even set up an alpha release of Samba 4 for Active Directory services instead of a Windows domain controller.  My favorite option thus far has been using Samba 4.  Of course, Samba 4 isn't quite ready for prime time yet.  It is just an alpha release at this point, after all.  Some things still don't work quite right (or at all), such as allowing Exchange to extend the schema during installation.

I'm writing now to tell you about something I just found out about.  It's called Likewise Open.  Actually, I think its most recent name is PowerBroker Identity Services Open Edition.  I'm still going to call it Likewise Open, though.  In short, it's an application that allows you to very easily join a Linux machine to an Active Directory domain.  Once joined, you can log in to your Linux box using your AD credentials.  It even supports changing your AD password from Linux.  You know, if you're required to change your password every so often by policy.

The tool, like most Linux apps, supports command line usage.  It also has a simple GUI that allows you to join or remove a workstation from AD.  If you use a Debian based distro like Ubuntu or Mint, it's even available in the repos.  I tested it on Ubuntu 10.04 LTS and joined a Windows 2003 AD domain in a matter of minutes.  The only tweak I had to make was to replace the "hosts" line in /etc/nsswitch.conf with "hosts:  files dns".  After that, joining the domain was a cinch.
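
For the record, the install and join were roughly this (the domain and account names are placeholders; on Ubuntu the package is called likewise-open):

sudo apt-get install likewise-open
sudo domainjoin-cli join example.com Administrator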

I'm very impressed with Likewise Open.  I can definitely see how valuable it could be in an enterprise environment.  BeyondTrust, the company that makes the product, has some commercial versions with extra features as well.  According to their website, with the "enterprise" version of their product you can define Group Policy Objects for your Linux machines.  You can even manage AD (think ADUC) from a Linux workstation.  They offer a trial download on their site, but you have to fill out a form and someone from BeyondTrust will contact you with the download details.  That is what is keeping me from testing what sounds like an awesome product.  If you want me to buy your product, just let me download a trial immediately.  Don't make me talk to one of your sales goons.  I'd rather your product speak for itself instead of having to listen to some sales pitch.  Don't get me wrong, I'm very intrigued by what you have to offer, but I'm not handing over personally identifiable information for something that I might decide isn't even a good match for me or my company.

But as far as Likewise Open goes, you should all try it out.  You'll be amazed at how easy it is to install and start using.