Tag Archives: Linux

Checking OS Version Across Multiple Hosts with Ansible

Often when you are maintaining a large number of servers, it is useful to be able to query those systems all at once to find out information like IP address, configured hostnames, and even the OS version.  

In this post we’re going to focus on pulling the OS version from multiple systems at once with a single Ansible playbook.  

First, let’s get an idea of how our directory structure should look in the end, then we’ll break things down:

Ansible playbook and file-system layout:

[~/sources/ansible]
$ tree .
.
├── get-os-version.yaml
├── group_vars
│   └── linux
├── hosts
│   └── home
│       ├── archive.jbldata.com
│       ├── jbllnxwks.jbldata.com
│       └── yoga2.jbldata.com
└── host_vars

4 directories, 5 files

Now that we know how things should look in the end, let’s set up our host configurations.

Host configurations:

[~/sources/ansible]
$ tree hosts/
hosts/
└── home
    ├── archive.jbldata.com
    ├── jbllnxwks.jbldata.com
    └── yoga2.jbldata.com

1 directory, 3 files

[~/sources/ansible]
$ cat hosts/home/archive.jbldata.com 
[linux]
archive.jbldata.com

As you can see, the host configurations can be fairly simple and straightforward to start, following the directory structure outlined above. Now let’s set up our group_vars.

Group configuration:

[~/sources/ansible]
$ cat group_vars/linux 
---
ansible_ssh_private_key_file: /home/jbl/.ssh/jbldata_id_rsa

In this case, each of the servers I’m dealing with is secured by a password-protected SSH key, so I’m setting up my group vars to reference the correct SSH private key to use when connecting to these servers.  Pretty simple so far?  Great, now let’s look at the playbook.
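
Anything else that every host in the [linux] group has in common can live in this same file. As a hedged illustration only (the extra values below are hypothetical and not part of my actual setup), group_vars/linux could be extended like this:

---
# group_vars/linux - settings shared by every host in the [linux] group
ansible_ssh_private_key_file: /home/jbl/.ssh/jbldata_id_rsa
# Hypothetical extras, shown only to illustrate the idea:
ansible_user: jbl
ansible_python_interpreter: /usr/bin/python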

The Ansible playbook:

[~/sources/ansible]
$ cat get-os-version.yaml 
---
- name: Check OS Version via /etc/issue
  hosts: linux
  tasks:
  - name: cat /etc/issue
    shell: cat /etc/issue
    register: etc_issue
  - debug: msg="{{etc_issue.stdout_lines}}"

This playbook is very simple, but does exactly what we need.   Here we are specifying the use of the ‘shell’ module in order to execute the cat command on our remote servers.

We use the ‘register’ keyword to save the resulting output of the command in a variable called ‘etc_issue’.  We then use the ‘debug’ module to print the contents of that variable.

When executing a command via the ‘shell’ module, there are several return values that we have access to, which are also now captured in the ‘etc_issue’ variable. In order to access the specific return value we are interested in, we use ‘debug’ to dump the STDOUT return value specifically via ‘etc_issue.stdout_lines’.

Now we have an Ansible playbook and associated configuration that allows us to quickly query multiple servers for their OS version.
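
As an aside, the same kind of information is also exposed by the facts Ansible gathers during the ‘setup’ task you’ll see in the output below, so a variation of this playbook could skip the shell command entirely. Here’s a minimal sketch of that approach (a sketch only, not the playbook used for the run below):

---
- name: Check OS Version via Ansible facts
  hosts: linux
  tasks:
    # ansible_distribution and ansible_distribution_version are standard
    # facts collected by the setup module during fact gathering.
    - debug: msg="{{ ansible_distribution }} {{ ansible_distribution_version }}"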

It’s important to note that since I’m using password-protected SSH keys, I start the SSH agent before I execute the playbook.  This only has to be done once for repeated runs of the same playbook within your current terminal session, for example:

[~/sources/ansible]
$ ssh-agent 
SSH_AUTH_SOCK=/tmp/ssh-82DKhToCuPUu/agent.4994; export SSH_AUTH_SOCK;
SSH_AGENT_PID=4995; export SSH_AGENT_PID;
echo Agent pid 4995;


[~/sources/ansible]
$ ssh-add ~/.ssh/jbldata_id_rsa
Enter passphrase for /home/jbl/.ssh/jbldata_id_rsa: 
Identity added: /home/jbl/.ssh/jbldata_id_rsa (/home/jbl/.ssh/jbldata_id_rsa)

Now, we’re ready to execute the ansible playbook.  Here’s the resulting output:

[~/sources/ansible]
$ ansible-playbook -i hosts/home get-os-version.yaml 

PLAY [Check OS Version via /etc/issue] *****************************************

TASK [setup] *******************************************************************
ok: [jbllnxwks.jbldata.com]
ok: [yoga2.jbldata.com]
ok: [archive.jbldata.com]

TASK [cat /etc/issue] **********************************************************
changed: [yoga2.jbldata.com]
changed: [jbllnxwks.jbldata.com]
changed: [archive.jbldata.com]

TASK [debug] *******************************************************************
ok: [archive.jbldata.com] => {
 "msg": [
 "Ubuntu 14.04.3 LTS \\n \\l"
 ]
}
ok: [yoga2.jbldata.com] => {
 "msg": [
 "Ubuntu 14.04.5 LTS \\n \\l"
 ]
}
ok: [jbllnxwks.jbldata.com] => {
 "msg": [
 "Debian GNU/Linux 5.0 \\n \\l"
 ]
}

PLAY RECAP *********************************************************************
archive.jbldata.com : ok=3 changed=1 unreachable=0 failed=0 
jbllnxwks.jbldata.com : ok=3 changed=1 unreachable=0 failed=0 
yoga2.jbldata.com : ok=3 changed=1 unreachable=0 failed=0

And that’s pretty much it!  Now we just have to add more hosts under our hosts/ configuration, and we can query as many servers as we want from a single command.  Happy orchestrating!

rbenv and multiple local ruby version installs (like perlbrew)

When I was using perl as my primary development language, I had a platform of tools in place to make my perl development fun and productive. This included tools like Perl::Dancer, DBIx::Class, cpanm, and perlbrew. Perlbrew was a tool I used to maintain multiple versions of perl in my local development environment, so that I could test my code against multiple perl and module versions to ensure that it worked on the largest range of platforms ( and to avoid dependency related bugs ).

This allowed me to run my code against Perl 5.10, 5.12, 5.14, and so on, each with its own module base, fully isolated from the others.

Now I’m working with many different tools these days, and haven’t had the opportunity to work with other languages to the extent that I’ve worked with Perl, but I have been playing with Ruby and Golang. Using Ruby, I immediately thought that I would like to play with multiple versions of Ruby without altering the ‘system’ ruby on my workstation. A quick search for ‘perlbrew for ruby’ led me to rbenv, which seems to be exactly what I was looking for.

Some examples of how rbenv works:

# list all available versions:
$ rbenv install -l

# install a Ruby version:
$ rbenv install 2.0.0-p247

# Sets a local application-specific Ruby version by writing the version name to a .ruby-version file in the current directory.
$ rbenv local 1.9.3-p327

# Sets the global version of Ruby to be used in all shells by writing the version name to the ~/.rbenv/version file.
$ rbenv global 1.8.7-p352

# Sets a shell-specific Ruby version by setting the RBENV_VERSION environment variable in your shell
$ rbenv shell jruby-1.7.1

# Lists all Ruby versions known to rbenv, and shows an asterisk next to the currently active version.
$ rbenv versions
  1.8.7-p352
  1.9.2-p290
* 1.9.3-p327 (set by /Users/sam/.rbenv/version)
  jruby-1.7.1
  rbx-1.2.4
  ree-1.8.7-2011.03

# Displays the currently active Ruby version, along with information on how it was set.
$ rbenv version
1.9.3-p327 (set by /Users/sam/.rbenv/version)

# Displays the full path to the executable that rbenv will invoke when you run the given command.
$ rbenv which irb
/Users/sam/.rbenv/versions/1.9.3-p327/bin/irb

Ansible Playbooks – Externalization and Deduplication


Externalization and Deduplication

Developers who understand the concepts of modularity and deduplication should immediately recognize the power behind being able to include settings and commands from external files.   It is seriously counter-productive to maintain multiple scripts or playbooks that have large blocks of code or settings that are exactly the same.   This is an anti-pattern.

Ansible is a wonderful tool; however, it can often be implemented in counter-productive ways.  Let’s take variables, for example.

Instead of maintaining a list of the same variables across multiple playbooks, it is better to use Variable File Separation.

The Ansible documentation provides an excellent example of how to do this.  However, I feel that the reasoning behind why you would want to do it falls short in describing the most common use case: deduplication.

The documentation discusses the possible needs around security or information sensitivity.  I believe that deduplication should be added to that list.  Productivity around how playbooks are managed can be significantly increased if they are implemented in a modular fashion using Variable File Separation, or vars_files.   This, by the way, also goes for use of the include_vars module.
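
As a quick sketch of what that looks like (the file names and variables here are hypothetical, not taken from a real project), a shared variable file can be referenced by any number of playbooks through vars_files:

---
# vars/common.yaml - values shared by multiple playbooks
syslog_server: 192.168.0.10
ntp_server: 192.168.0.11

---
# some-playbook.yaml - one of the many playbooks referencing the shared file
- name: Configure logging
  hosts: linux
  vars_files:
    - vars/common.yaml
  tasks:
    - debug: msg="Logs will be shipped to {{ syslog_server }}"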

Here is a list of reasons why you should immediately consider a deduplication project around your Ansible playbook variables:

Save Time Updating Multiple Files

This may seem like a no-brainer, but depending on the skills and experience of the person writing the playbook, this can become a significant hindrance to productivity.   Because of Ansible’s agent-less and decentralized nature, playbooks can be written by anyone who wants to get started with systems automation.  Often, these are folks without significant proficiency in programmer-oriented text editors such as Vim, Emacs, or Eclipse, or without experience with command-line tools like awk, sed, and grep.

It is easy to imagine a Java developer without significant Linux command-line experience opening up one playbook at a time, and modifying the value for the same variable, over and over… and over again.

The best way for folks without ninja text-editing skills to stay productive is to deduplicate, and store common variables and tasks in external files that are referenced by multiple playbooks.

Prevent Bugs and Inconsistent Naming Conventions

In a perfect world, everyone would understand what a naming convention is.  All our variables would be short enough to type quickly, clear enough to convey their purpose, and simple enough that there would never be a misspelling or typo.  This is rarely the case.

If left unchecked, SERVER_1_IP can also become SERVER1_IP, Server_1_IP, and server_1_ip: all different variable names across multiple files, referencing the same value for the exact same purpose.

This mess can be avoided by externalizing this information in a shared file.

Delegate Maintenance and Updates to Variables That Change Frequently

In some environments, there may be playbook variables that need to change frequently.  If these variables are part of some large all-encompassing playbook that only some key administrators have access to be able to modify, your teams could be left waiting for your administrator to have free cycles available just to make a simple change.  Again, deduplication and externalization to the rescue!  Have these often-changing variables externalized so that users who need these changes immediately can go ahead and commit these changes to very specific, isolated files within your version control system that they have special rights to modify.
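
Here’s a hedged sketch of how that delegation might look (the file name and variable are hypothetical): the frequently-changing values live in their own small file, which the playbook loads at run time with include_vars, so the people who own those values never have to touch the playbook itself.

---
- name: Deploy application
  hosts: linux
  tasks:
    # vars/release.yaml is a small, frequently-updated file that the
    # application team can modify without touching this playbook.
    - include_vars: vars/release.yaml
    - debug: msg="Deploying version {{ app_version }}"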

Cleaner Version Control History (and therefore Audit History)

If you have the same variables referenced by multiple files, and you make changes to each of those files before you commit them to version control, then your version control history can become a complete mess.  Your version control history will show a change to a single value affecting multiple files.  If you come from a software development background, and are familiar with the concept of code reviews, then you can appreciate being able to look at a simple change to a hard-coded value (or a constant), and see that it only affects one or two files.

I hope the reasons above convince some of you to start browsing your playbook repositories for possible candidates for deduplication.  I really believe that such refactoring projects can boost productivity and execution speed for individuals and teams looking to push changes faster while minimizing obstacles around configurations shared by multiple systems.  Send me a note if this inspires you to start your own deduplication project!

Installing CentOS 6.4 from a Net Install Image on a Virtual Host

An Opportunity To Play Around with CentOS

One of the personal projects that I’ve always had itching away at the back of my mind was the urge to revamp my home network monitoring and security.  One of the tools that I love using for network monitoring is Xymon.  However, this gives me an opportunity to do things slightly different.  I have decided to give CentOS a go instead of my typical choice of Debian for a Linux distro in a server environment.  I am curious to see what advancements have been made in the RPM world, and I’d like to keep my Red Hat skills up to date.  What better way to do so than to set up a CentOS server with some production tools and services on it :)

Pre-installation Setup

So here we are, I have the CentOS Netinst (Net Install) image loaded into a VM, and I boot up the guest.

Since this is a fresh install on a 20GB virtual disk, I’m going to select “Install or upgrade an existing system” here.

I press “enter” and lots of console logging and scrolling action takes place.

Eventually I am prompted to “test the media”.  Usually this is referring to a physical CD typically used to install the OS on a physical server.  To me the phrasing feels a bit antiquated in this day of cloud services.

In any case, I still say yes, hoping that it will catch any errors in the ISO image file before I run into a bug during the installation process.  Better safe than sorry.

After the installation media is “successfully verified” to be OK, I try to move forward with the installation.

Be sure to note that after your media is verified OK, the installer may decide to eject your CD media, in order to give you an opportunity to test other media.

Since I have no other media to test, this is actually kind of annoying. In order to continue with the installation, I have to go into the VM settings and re-connect the CDROM to the VM.

Select your language and keyboard options if the defaults are not suitable.  Otherwise, just  move past these dialogues by selecting “OK”, or hitting enter.

 

When you are asked “What type of media contains the installation image?”, select “URL”.

What is the “Cloud”?

The “Cloud” Will Save Us!

You hear about it every day, “cloud services”, “cloud storage”, “the cloud as a platform”. But what is the “cloud” really? The definition of what the “cloud” is, is different for everyone.  Some believe it is the implementation of a certain group of technologies, such as web servers, virtual hosts, and GUI frameworks.  Others believe it is a philosophy for modern software development and implementations – in particular web-based and mobile implementations. Others still see the “cloud” as simply a way of out-sourcing infrastructure – yet still somehow see the need to have dedicated “Cloud Administrators”.

So what is the “Cloud” really?  I offer my humble opinion below.

In With the Old, Out with the New

Virtualization has been around for a very long time, so has Software as a Service (SaaS) and Platform as a Service (PaaS).  These technologies have been with us in different forms and iterations since the time of X11.  Of course, these technologies have evolved significantly over time, but that does not make them revolutionary, merely evolutionary.

I keep hearing phrases and comments to the effect of “the cloud changes everything”, when in fact it really doesn’t.  It is simply another form of outsourcing.  The real benefit of today’s “cloud” technology is that it makes (or seems to make) management of infrastructure easier.  But convenience always comes with a price.

Easier? Maybe Not So Much.. Especially For Seasoned Professionals

The easier things are, the more often you are likely to do them.  If it becomes easier to deploy apps via Amazon EC2/S3, or to a DotCloud instance, then there is a strong likelihood your organization will deploy more of them.  Instead of managing infrastructure, you are now concerned with managing deployment practices, configuration standards, and code-bases. Not to mention the human resources required to maintain those applications going forward.

The infrastructure “problem” doesn’t go away, it’s just relocated – it’s now someone else’s problem.

Over-Reacting and Under-Utilizing

When organizations frantically down-size their teams in a drastic attempt to remain modern, it bothers me; saddens me really, because deep down I know that the new cloud-based technologies these organizations are hoping to take advantage of are simply re-iterations and re-implementations of the same technologies they’ve always had to deal with.  HTTP, CSS, SSH, and Linux, for example.  It is quite likely that most companies with significant IT resources already have people who are skilled enough to rip through the implementation of “cloud” technologies, armed only with their previous experiences, and the core “problem-solver” attitude that they’ve always had, that doesn’t go away with time.

“Not enough Cloud experience.” Really? Do you mean using a GUI web interface to setup a remote host?  Or perhaps you mean the command-line configuration that needs to be done to YAML formatted text files in order to get a Rails application up and running?  Of course old-hat Systems Administrators or Web-Application developers don’t know “precisely” how it all works – the first time around.  But after the effort is put in to get the application up and running, to document the setup and check it into version control, and to automate as much of the time-consuming or repetitive manual tasks as much as possible, the rest is, as they say, “cake”.  What you need to focus on is developing the kind of people who can do all of this, and have fun with it.  This is how you effectively re-train.  This is how you retain good talent.  You have to allow the people you have to show you they can adapt.  It is a waste of experience to let people go because their experience is not up-to-date.  That’s not their fault.

More Of The Same, Spot The Patterns

Newer scripting languages and frameworks are being hyped as if they can do things that have never been done before.  I’ve seen this with the likes of Ruby, Python, and Perl. Despite the fact that Perl has one of the largest, organized, stable, and well-tested libraries  of any programming language to date (the CPAN), it doesn’t get the same kind of love that newer languages like Ruby and Python do, especially in corporate environments.  Sometimes it in fact does pay to re-invent the wheel, but most often it does not.

In Conclusion

If you are still trying to figure out what the “cloud” really is, know that it is simply a string of technologies that have been around for a long time, re-branded to look new and cool (for marketing purposes), and bundled with some new management tools and remote storage to make things “easier”.

To sales and marketing folks, it could simply mean trendy and cool.  To developers, it may mean LAMP or MEAN.  To systems and infrastructure people it could mean hyper-visors, virtual machines, and software containers.  To DevOps folks, it may involve Puppet, Chef, and Ansible automations, or Continuous Integration.

To recruiters and hiring managers, it often means Amazon AWS and Spring Framework Experience.  And to end-users, it typically means anything they can access from all of their phones, laptops, tablets, and PCs simultaneously.

The “Cloud” means many things to many different people.  My humble opinion? At its core – at the heart of all the technology and implementation that has made it possible – are tools, software, and individual experience that have been around since the beginning, and it is ALL based on the concept of Open Communication, and the spirit and foundation of Free and Open-Source Software.

Jolicloud is of the Awesome

So if you haven’t heard of Jolicloud (http://www.jolicloud.com/), then you need to download and install it now. It’s an Ubuntu-based OS (a self-proclaimed “Cloud OS”) specifically designed for netbooks, and it rocks. I have Jolicloud installed on my Samsung N110 netbook, and I use it for everything from e-mail to games (snes9x) to work (Perl/Vim/Screen). Now what makes Jolicloud super-awesome is that it treats web applications no differently from desktop applications. Each application gets its own icon on the “Home screen”. It’s also socially aware – it can connect to Facebook and allow you to search for applications and/or people who’ve used those applications, so that you can ask them questions and get guidance on the tools you’re trying to use.

The interface is very slick – big icons and a clean method of navigation to the lesser used functions of a standard Gnome/Ubuntu desktop. The most-awesomest part is that once you load up a terminal, you have full access to the command-line and all Ubuntu apt repositories.

Jolicloud isn’t just for netbooks! I’ve also installed it on my Acer Veriton (similar to the Acer Revo), and am using it as a media center OS. Jolicloud also comes in an “express” edition, which allows you to install it under Windows, where it will come up as a secondary OS option in the Windows boot-loader.

If you have a netbook, nettop, or any light-weight PC, then install Jolicloud. Highly recommended.

Using DZEN with Xmonad to view Currently Active Network Shares

Currently Xmonad is my window manager of choice, because it’s clean, functional, and removes all the unnecessary crap that most modern desktops usually come with by default.

Although Xmonad is very cool, there are still some things it lacks in terms of functionality. Much of this is made up for by the use of Xmobar, Trayer, and other Xmonad-compatible plugins and applications. I recently came across another one of these applications, and it turned out to be an exciting find. The tool is called Dzen.

Dzen is a desktop messaging tool which allows you to easily write some useful scripts, and have the output of those scripts become part of your desktop interface. Many examples of how this works are available on the Dzen website, some of which are as follows:

  • CPU Monitoring graphs
  • dmesg log monitoring
  • Notification of system events which are commonly found in syslog
  • E-mail or twitter alerts shown on your desktop as they come in
  • Custom calendar alerts
  • and much more..

Now this idea is not new – I remember there being a project called “OSD” (on-screen display) which essentially allowed you to do the same thing. However, I think OSD was meant as more of a single-message notification system, rather than the way that Dzen works, with master and slave windows, and the ability to implement menus, etc.

In any case, I decided to give Dzen a try, and am happy with the tool that I’ve been able to whip up. For the longest while, I wanted the ability for my xmonad environment to tell me, at a quick glance, what network mounts and removable devices I currently have mounted. I’m sure that this kind of information is easily available on many bloated desktops, including GNOME and KDE, but I was looking for something simple, small and configurable. Didn’t find it, so I ended up writing my own – with the help of Dzen.

Here are a couple of screenshots of how it looks:

Dzen “Active Mounts” widget (mouse out): [screenshot: dzen-1]

Dzen “Active Mounts” widget (mouse over): [screenshot: dzen-2]

I wrote the scripts fairly quickly, so I’m sure they could be written better, but I think they will provide those of you who are interested, a good example of how to implement a regularly updated notification widget with Dzen.

The scripts are written to check for changes in the mount list, and only update Dzen when a change is detected. It is written in two components:

1) A perl script which captures the mount information in the exact format that I want, and
2) a bash script which handles loading Dzen

Here’s the source code (perl script):

#!/usr/bin/perl

# Written by J. Bobby Lopez  - 27 Jan 2010
# Script to -be loaded- by the 'dzen-mounts.bash' script
# This script can also be run by itself, if you want to dump a
# custom plain-text table of your network shares or removable
# devices.
#
# This script is meant to be utilized by the Dzen notification system
# Information on Dzen can be found at http://dzen.geekmode.org/

use strict;
use warnings;

use Data::Dumper;
use Text::Table;

my @types = qw( cifs ntfs davfs sshfs smbfs vfat );

sub getmounts
{
    my @valid_mounts; # to hold mounts we want
    my @all_mounts = split (/\n/, `mount`);
    foreach my $mount (@all_mounts)
    {
        foreach my $type (@types)
        {
            if ( $mount =~ m/$type/ )
            {
                push (@valid_mounts, $mount);
            }
        }
    }
    return @valid_mounts;
}

sub getsizes
{
    my @mounts = getmounts();
    my @list;
    foreach my $mount (@mounts)
    {
        my @cols = split (/\ /, $mount);
        my @df_out = split (/\n/, `df -h $cols[2]`);
        $df_out[1] .= $df_out[2] if defined($df_out[2]);
        $df_out[1] =~ s/[[:space:]]+/\ /;
	    my @df_cols = split (/[[:space:]]+/, $df_out[1]);
        push (@list, ([@df_cols]));
    }
    return @list;
}

my $tb = Text::Table->new(
	"Filesystem", "Size", "Used", "Avail", "Use%", "Mounted on"
);
$tb->load(getsizes());
print "Active Mounts\n";
print $tb;

And the bash script:

#!/bin/bash

# Script to load Dzen with output from 'dzen-mounts.pl' script
# Written by J. Bobby Lopez  - 27 Jan 2010
#
# This script utilizes the Dzen notification system
# Information on Dzen can be found at http://dzen.geekmode.org/

function mountlines
{
        LINES=`perl dzen-mounts.pl|wc -l`;
        echo "$LINES"
}

function freshmounts
{
        OUTPUT=`perl dzen-mounts.pl`;
        echo "$OUTPUT"
}

function rundzen
{
        OUTPUT=`freshmounts`;
        MOUNTLINES=`mountlines`;
        echo "$OUTPUT" | dzen2 -p -l "$MOUNTLINES" -u -x 500 -y 0 -w 600 -h 12 -tw 120 -ta l &
        PID=`pgrep -f "dzen2 -p -l $MOUNTLINES -u -x 500 -y 0 -w 600 -h 12 -tw 120 -ta l"`;
        echo "$PID"
}

function killdzen
{
        PID="$1"
        if [ ! "$PID" ]; then
            MOUNTLINES=`mountlines`;
            PID=`pgrep -f "dzen2 -p -l $MOUNTLINES -u -x 500 -y 0 -w 600 -h 12 -tw 120 -ta l"`;
        fi

        if [ "$PID" ]; then
            #echo "Killing $PID..";  # DEBUG STATEMENT
            kill "$PID";
        fi;
}

function checkchanges
{
    while true; do
        NEW=`freshmounts`;
        #echo "$NEW - new";  # DEBUG STATEMENT
        if [ "$OLD" != "$NEW" ]; then
            killdzen "$PID";
            rundzen;
            #echo "$PID started";  # DEBUG STATEMENT
            OLD="$NEW";
            #echo "$OLD - old updated"  # DEBUG STATEMENT
        fi
        sleep 1;
    done
}

checkchanges

You can also download the scripts in a tgz archive here. Enjoy!

Back to Basics – Very Simple Log Monitoring with Perl


There are many many tools out there which allow you to monitor and view your system or networking logs in several different ways.  Sometimes though, you may find yourself looking for a specific feature that none of these tools currently provide.  Whenever your goals are very specific, and you don’t want to use a big feature-full program to accomplish a simple task, you may want to consider writing your own tool.

Below is a simple Perl program I wrote which does just that. All the requirements of the program are within the script itself (using the __DATA__ handle at the bottom of the file). The only thing you may need to install on your system to get this to work is the File::Tail CPAN package.

#!/usr/bin/perl -w

use strict;
use File::Tail;

my @patterns = <DATA>;  # filter patterns are read from the __DATA__ section below
my $file = File::Tail->new ("/var/log/syslog");
while ( defined(my $line=$file->read) )
{
    my $match = &filter($line);
    if ( $match eq "no" )
    {
        print $line;
    }
}

sub filter ()
{
    my $line = $_[0];
    my $match = "no";
    foreach my $test (@patterns)
    {
        chomp($test);
        if ( $line =~ m/$test/ )
        {
            $match = "yes";
        }
    }
    return $match;
}

__DATA__
PROTO=UDP SPT=67 DPT=68
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.111.*PROTO=TCP.*DPT=80
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.102.*PROTO=TCP.*DPT=80
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.111.*PROTO=TCP.*DPT=443
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.102.*PROTO=TCP.*DPT=443
ACCEPT IN=vlan1 OUT=br0.*DST=192.168.0.101.*PROTO=UDP.*DPT=1755
ACCEPT IN=vlan1 OUT=br0.*DST=192.168.0.101.*PROTO=TCP.*DPT=1755
ACCEPT IN=br0 OUT=vlan1 src=10.100.0.1.*PROTO=UDP.*DPT=53
JBLLNXWKS dhclient

__END__

This program will monitor the end of the file (like the Unix ‘tail’ command) and check for new log entries. When it detects new lines in the log, it will filter those lines through the patterns defined at the end of the script (under __DATA__) and display everything except the lines that match those filter patterns.

You’ll probably notice that the filter lines are regular expressions, which makes this script more powerful than doing filtering by simple full-string comparison.

Aside from simply printing the output to STDOUT, you could use regular expressions to pop pieces of each line into an array or hash, in order to do calculations, such as how many entries had a source IP of X, or destination port of Y, etc.

It’s definitely a good thing to keep in mind that whatever software you could possibly need is already out there on the internet, and possibly open source. However, it’s also good to keep in mind that YOU can create a tool yourself to accomplish your specific task; all it takes is a little self-confidence, effort, and patience.

Configuring X.Org Display Resolutions Under Ubuntu 8.04

Problems with High Resolution Monitors and X.Org

I recently bought a 28″ LCD monitor, the I-INC iF281D. I bought the monitor to increase my usable desktop workspace on my Ubuntu 8.04 (Hardy) Linux workstation. I do a lot of programming, systems administration, log analysis and monitoring – and I usually have three or more X Terminals open at any one time (with some mixture of “top”, “vim”, “tail”, and “screen” open), so the more desktop space I have, the better.

When I first attempted to configure my new monitor using the ‘nvidia-settings’ GUI application that comes with NVIDIA’s Linux driver, I noticed that it didn’t present all the resolutions that the monitor supported.  NVIDIA’s Settings GUI was limiting the available resolutions to a maximum of 1280×800. This was odd, because the monitor supported resolutions up to 1920×1200, and supported DDC, so the GUI should have been able to pick up the monitor specifications and provide configuration options for all supported features (resolutions, refresh rates, etc.)

At first I thought that the problem was with the NVIDIA driver I was using, and so I downloaded the latest driver from NVIDIA’s website, recompiled, and tried again.  However, I was still not able to get the resolutions that the monitor supported.


My 28 Inch Desktop @ 1920x1200

Solving The Problem

After much more digging and research, I found that the problem was not with the monitor, or the NVIDIA driver, but with the mode settings in ‘xorg.conf’. What I wasn’t aware of was that the Horizontal Sync rate definition within my ‘xorg.conf’ directly reflects the resolutions that will be available to me.

For example, the ‘nvidia-settings’ application set the Horizontal Sync and Vertical Refresh to low (safe) values by default. Therefore the available resolutions were limited. Once I figured out what my maximum Horizontal Sync and Vertical Refresh rates were, I was able to achieve the resolutions that the monitor was capable of.

There are XFree86 Video Timing HOWTOs available if you want to get into the gory details of how to calculate the correct xorg.conf settings for your specific monitor. However, if you’re a programmer like me, then you’ll want to skip this step, expecting that someone else out there must have already been through this, and has likely created a tool to make our lives easier.

Lo and behold! Xtiming is a great web tool which helps you calculate your Horizontal Sync and Vertical Refresh Rate settings. Simply enter the resolution that you are trying to achieve, and Xtiming will tell you the settings you’ll need in your ‘xorg.conf’ file to get it.

For example, if you leave empty all the other values that Xtiming asks you for, and simply enter “1600×1200” for Visible Resolution, and “60” for Refresh Rate, then click “Calculate Modeline”; you’ll see that Xtiming returns the Mode Line that you should use, along with (and most importantly!) the Horizontal Sync rate you will need to achieve that resolution at the specified refresh rate:

Modeline "1600x1200@60" 176.70 1600 1632 2296 2328 1200 1224 1236 1261
Horizontal sync frequency: 75.9 kHz

I personally found that I didn’t need to use the “Modeline”, but the Horizontal Sync Frequency was essential. Here’s an excerpt of my ‘xorg.conf’ file using the above settings:

(...)

Section "Monitor"

    # HorizSync source: xconfig, VertRefresh source: xconfig
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "AUO"
    HorizSync       30.0 - 75.9
    VertRefresh     60.0
    Option         "DPMS"
EndSection

(...)

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinViewXineramaInfoOrder" "DFP-0"
    Option         "TwinView" "1"
    Option         "metamodes" "DFP-0: 1440x900 +1600+0, DFP-1: nvidia-auto-select +0+0"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

I’m using TwinView because I have a dual monitor setup. However, now that I have a 28″ display, dual displays don’t really seem to be a requirement any more :)

Setting Up SSH Keys

Here is my attempt at a very quick and dirty guide to setting up SSH Keys using OpenSSH. If you are looking for a way to securely login to one or more boxes, without being prompted for your password every time, then using SSH Keys is probably your best bet.

Here we go..

Using SSH keys allows you to SSH from one host to another in a more secure manner, or (optionally) without the need for a password.

Let’s name our example hosts:

  • local host is called my_pc
  • remote host is called devhost



On my_pc (the host you are SSH’ing from):

ssh-keygen -t dsa

This will generate two files (a key pair), for example:

id_dsa – this is your private key
id_dsa.pub – this is your public key

ssh-keygen will ask you for a password/pass-phrase. At this point, you can enter a pass-phrase, or just hit [enter] to use a blank password.

  • Note: if you are creating a key for a user account, you should always use a pass-phrase! Only consider omitting pass-phrases when the key is being used for one-off automated system to system transactions.

If you haven’t done so already, give your private key a descriptive name, like my_pc.id_dsa.

If it doesn’t exist, create the file ‘~/.ssh/config‘, with the following contents:

Host devhost
    IdentityFile ~/.ssh/my_pc.id_dsa

Note that the ‘config‘ file can be configured with multiple private keys. Make sure that devhost is resolvable by hostname, or this will not work.

(Note: I’ve had some trouble using IP’s in the ‘config‘ file)

Make sure that ‘my_pc.id_dsa’ is only readable/writeable by its owner. Make sure that ‘config’ is only writable by its owner.



On devhost (The host you are SSH’ing to):
Copy the public key from my_pc to devhost, and append its contents to the end of the ‘authorized_keys2’ file, like so:

cat id_dsa.pub >> ~/.ssh/authorized_keys2

Note that the ‘authorized_keys2’ file can hold multiple public keys. Make sure that ‘authorized_keys2’ is only readable/writeable by its owner.

You’re finished! You should now be able to SSH from my_pc to devhost using SSH keys, and without the need for a password if you so desire.



Troubleshooting:
Use ‘ssh -v’ to enable verbose debugging when testing SSH connectivity.

This was tested with OpenSSH on Ubuntu 8.04 LTS, and I’ve used this same method successfully with previous versions of OpenSSH, and on other Debian-based operating systems. Your mileage may vary depending on your OS and version of OpenSSH.

Mindnet.Ca Now Running Ubuntu Linux

After a long run of having the Mindnet.Ca server running off a Knoppix Live CD, I figured it was time to pull everything together and put www.mindnet.ca, the Mindnet CVS, and The Bag Of Holding (TBOH) all on a properly installed and configured server. After careful consideration with regards to security, performance, and especially stability, I’ve decided to have the server run on Ubuntu Linux. Ubuntu is a Debian GNU/Linux based distribution that has a more consistent release cycle. Ubuntu is a fairly fresh (new) distribution, so its development model still needs to mature; however, the project looks promising.

Too Much Work Work

So I haven’t really been able to code anything lately, as we’ve been quite busy at work for the past few weeks. Right now some of the big issues we’ve had with clients are e-mail issues. We’re still dealing with the remnants of the MyDoom virus (and its derivatives) causing problems with Exchange servers. I’m so glad I’m in the Linux camp, because Exchange is just not something I want to deal with regularly. I mean, come on, what kind of mail server has SMTP as an “option”?

Mounting EXT2FS under Windows

I found a cool utility that allows you to mount Ext2 filesystems under Win2k – it’s called Ext2FSD. It’s Open Source, and it works pretty well. I was able to play MP3’s nicely off the mounted partition while playing X-Tension (got tired of the in-game music).

I also found a nice Palm app called CryptoPad. Those of you who were smart enough to realize that information on your Palm is insecure because it’s all plain-text (pdb’s don’t count, they’re easily exportable), probably got a hold of SecureMemo from Certicom. Certicom was giving the app away for free, however they stopped development on it, and don’t support it or provide it on their web-site anymore. CryptoPad is better, since it’s Open Source, and it uses blowfish encryption (strong, clean, yum).