Category Archives: Computing

All about computers and information technology. Everything from discussions on cool desktop software and hip electronic gadgets, to in-depth discussions on chipset design and the science of programming languages.

Checking OS Version Across Multiple Hosts with Ansible

Often when you are maintaining a large number of servers, it is useful to be able to query those systems all at once to find out information like IP address, configured hostnames, and even the OS version.  

In this post we’re going to focus on pulling the OS version from multiple systems at once with a single Ansible playbook.  

First, let’s get an idea of how our directory structure should look in the end, then we’ll break things down:

Ansible playbook and file-system layout:

[~/sources/ansible]
$ tree .
.
├── get-os-version.yaml
├── group_vars
│   └── linux
├── hosts
│   └── home
│       ├── archive.jbldata.com
│       ├── jbllnxwks.jbldata.com
│       └── yoga2.jbldata.com
└── host_vars

4 directories, 5 files

Now that we know how things should look in the end, let’s set up our host configurations.

Host configurations:

[~/sources/ansible]
$ tree hosts/
hosts/
└── home
    ├── archive.jbldata.com
    ├── jbllnxwks.jbldata.com
    └── yoga2.jbldata.com

1 directory, 3 files

[~/sources/ansible]
$ cat hosts/home/archive.jbldata.com 
[linux]
archive.jbldata.com

As you can see, the host configurations can be fairly simple and straightforward to start, following the directory structure outlined above. Now let’s set up our group_vars.

Group configuration:

[~/sources/ansible]
$ cat group_vars/linux 
---
ansible_ssh_private_key_file: /home/jbl/.ssh/jbldata_id_rsa

In this case, each of the servers I’m dealing with is secured by a password-protected SSH key, so I’m setting up my group vars to reference the correct SSH private key to use when connecting to these servers.  Pretty simple so far?  Great, now let’s look at the playbook.

The Ansible playbook:

[~/sources/ansible]
$ cat get-os-version.yaml 
---
- name: Check OS Version via /etc/issue
  hosts: linux
  tasks:
    - name: cat /etc/issue
      shell: cat /etc/issue
      register: etc_issue
    - debug: msg="{{etc_issue.stdout_lines}}"

This playbook is very simple, but does exactly what we need.   Here we are specifying the use of the ‘shell’ module in order to execute the cat command on our remote servers.

We use the ‘register’ keyword to save the resulting output of the command in a variable called ‘etc_issue’.  We then use the ‘debug’ module to print the contents of that variable.

When executing a command via the ‘shell’ module, there are several return values that we have access to, which are also now captured in the ‘etc_issue’ variable. In order to access the specific return value we are interested in, we use ‘debug’ to dump the STDOUT return value specifically via ‘etc_issue.stdout_lines’.

Now we have an Ansible playbook and associated configuration that allows us to quickly query multiple servers for their OS version.

It’s important to note that since I’m using password-protected SSH keys, I load the key into ssh-agent before I execute the playbook.  This only has to be done once for repeated runs of the same playbook within your current terminal session, for example:

[~/sources/ansible]
$ ssh-agent 
SSH_AUTH_SOCK=/tmp/ssh-82DKhToCuPUu/agent.4994; export SSH_AUTH_SOCK;
SSH_AGENT_PID=4995; export SSH_AGENT_PID;
echo Agent pid 4995;


[~/sources/ansible]
$ ssh-add ~/.ssh/jbldata_id_rsa
Enter passphrase for /home/jbl/.ssh/jbldata_id_rsa: 
Identity added: /home/jbl/.ssh/jbldata_id_rsa (/home/jbl/.ssh/jbldata_id_rsa)

Now, we’re ready to execute the ansible playbook.  Here’s the resulting output:

[~/sources/ansible]
$ ansible-playbook -i hosts/home get-os-version.yaml 

PLAY [Check OS Version via /etc/issue] *****************************************

TASK [setup] *******************************************************************
ok: [jbllnxwks.jbldata.com]
ok: [yoga2.jbldata.com]
ok: [archive.jbldata.com]

TASK [cat /etc/issue] **********************************************************
changed: [yoga2.jbldata.com]
changed: [jbllnxwks.jbldata.com]
changed: [archive.jbldata.com]

TASK [debug] *******************************************************************
ok: [archive.jbldata.com] => {
 "msg": [
 "Ubuntu 14.04.3 LTS \\n \\l"
 ]
}
ok: [yoga2.jbldata.com] => {
 "msg": [
 "Ubuntu 14.04.5 LTS \\n \\l"
 ]
}
ok: [jbllnxwks.jbldata.com] => {
 "msg": [
 "Debian GNU/Linux 5.0 \\n \\l"
 ]
}

PLAY RECAP *********************************************************************
archive.jbldata.com : ok=3 changed=1 unreachable=0 failed=0 
jbllnxwks.jbldata.com : ok=3 changed=1 unreachable=0 failed=0 
yoga2.jbldata.com : ok=3 changed=1 unreachable=0 failed=0

And that’s pretty much it!  Now we just have to add more hosts under our hosts/ configuration, and we can query as many servers as we want from a single command.  Happy orchestrating!

How to Backup an Ubuntu Desktop (12.04, 14.04)

Source: http://askubuntu.com/questions/9135/how-to-backup-settings-and-list-of-installed-packages

Warning: Read about caveats in the link above before use

#-------------------------------------------------------

## The backup script
 dpkg --get-selections > ~/Package.list
 sudo cp -R /etc/apt/sources.list* ~/
 sudo apt-key exportall > ~/Repo.keys
 rsync --progress /home/`whoami` /path/to/user/profile/backup/here
## The Restore Script
 rsync --progress /path/to/user/profile/backup/here /home/`whoami`
 sudo apt-key add ~/Repo.keys
 sudo cp -R ~/sources.list* /etc/apt/
 sudo apt-get update
 sudo apt-get install dselect
 sudo dpkg --set-selections < ~/Package.list
 sudo dselect

#-------------------------------------------------------

Who this is for: users who make normal, regular use of their computer, have done minimal or no configuration outside their home folder, and have not messed with startup scripts and services; users who want their software restored to how it was when they installed it, with all customizations kept in their home folder.

Who this will not fit: server geeks, power users with software installed from source (restoring the package list might break your system), and users who have changed the startup scripts of some applications to better fit their needs. Caution: there is a big chance that any modifications outside home will be overwritten.

Ansible Playbooks – Externalization and Deduplication


Externalization and Deduplication

Developers who understand the concepts of modularity and deduplication should immediately recognize the power behind being able to include settings and commands from external files.   It is seriously counter-productive to maintain multiple scripts or playbooks that have large blocks of code or settings that are exactly the same.   This is an anti-pattern.

Ansible is a wonderful tool; however, it can often be implemented in counter-productive ways.  Let’s take variables, for example.

Instead of maintaining a list of the same variables across multiple playbooks, it is better to use Variable File Separation.

The Ansible documentation provides an excellent example of how to do this.  However, I feel that the reasoning behind why you would want to do it falls short of describing the most common use case: deduplication.

The documentation discusses the possible needs around security or information sensitivity.  I believe that deduplication should be added to that list.  Productivity around how playbooks are managed can be significantly increased if they are implemented in a modular fashion using Variable File Separation, or vars_files.  This, by the way, also goes for use of the include_vars module.

Here is a list of reasons why you should immediately consider a deduplication project around your Ansible playbook variables:

Save Time Updating Multiple Files

This may seem like a no-brainer, but depending on the skills and experience of the person writing the playbook, this can become a significant hindrance to productivity.  Because of Ansible’s agentless and decentralized nature, playbooks can be written by anyone who wants to get started with systems automation.  Often, these are folks without significant proficiency in programmer-oriented text editors such as Vim, Emacs, or Eclipse, or without shell-scripting experience around command-line tools like awk, sed, and grep.

It is easy to imagine a Java developer without significant Linux command-line experience opening up one playbook at a time, and modifying the value for the same variable, over and over… and over again.

The best way for folks without ninja text-editing skills to stay productive is to deduplicate, and store common variables and tasks in external files that are referenced by multiple playbooks.

Prevent Bugs and Inconsistent Naming Conventions

In a perfect world, everyone would understand what a naming convention was.  All our variables would be small enough to type quickly, clear enough to convey their purpose, and simple enough that there would never be a misspelling or typo.  This is rarely the case.

If left unchecked, SERVER_1_IP can also become SERVER1_IP, Server_1_IP, and server_1_ip: all different variable names across multiple files, referencing the same value for the exact same purpose.

This mess can be avoided by externalizing this information in a shared file.

Delegate Maintenance and Updates to Variables That Change Frequently

In some environments, there may be playbook variables that need to change frequently.  If these variables are part of some large all-encompassing playbook that only some key administrators have access to be able to modify, your teams could be left waiting for your administrator to have free cycles available just to make a simple change.  Again, deduplication and externalization to the rescue!  Have these often-changing variables externalized so that users who need these changes immediately can go ahead and commit these changes to very specific, isolated files within your version control system that they have special rights to modify.

Cleaner Version Control History (and therefore Audit History)

If you have the same variables referenced by multiple files, and you make changes to each of those files before you commit them to version control, then your version control history can become a complete mess.  Your version control history will show a change to a single value affecting multiple files.  If you come from a software development background, and are familiar with the concept of code reviews, then you can appreciate being able to look at a simple change to a hard-coded value (or a constant), and see that it only affects one or two files.

I hope the reasons above convince some of you to start browsing your playbook repositories for possible candidates for deduplication.  I really believe that such refactoring projects can boost productivity and execution speed for individuals and teams looking to push changes faster while minimizing obstacles around configurations shared by multiple systems.  Send me a note if this inspires you to start your own deduplication project!

Examples of recursion, in Perl, Ruby, and Bash


This article is in response to the following question posted in the Perl community group on  LinkedIn:

I’m new to PERL and trying to understand recursive subroutines. Can someone please explain with an example (other than the factorial ;) ) step by step, how it works? Thanks in Advance.

Below are some very simplified code examples in Perl, Ruby, and Bash.

A listing of the files used in these examples:

blopez@blopez-K56CM ~/hello_scripts 
$ tree .
 ├── hello.pl
 ├── hello.rb
 └── hello.sh
0 directories, 3 files
blopez@blopez-K56CM ~/hello_scripts $


Recursion example using Perl:

– How the Perl script is executed, and its output:

blopez@blopez-K56CM ~/hello_scripts $ perl hello.pl "How's it going!"
How's it going!
How's it going!
How's it going!
^C
blopez@blopez-K56CM ~/hello_scripts $ 

– The Perl recursion code:

#!/usr/bin/env perl
use Modern::Perl;
my $status_update = $ARGV[0]; # get script argument

sub hello_world
{
    my $status_update = shift; # get function argument
    say "$status_update";
    sleep 1; # sleep, or eventually crash your system
    &hello_world( $status_update ); # execute myself with argument
}

&hello_world( $status_update ); # execute function with argument


Recursion example using Ruby:

– How the Ruby script is executed:

blopez@blopez-K56CM ~/hello_scripts 
$ ruby hello.rb "Doing great!"
Doing great!
Doing great!
Doing great!
^Chello.rb:7:in `sleep': Interrupt
    from hello.rb:7:in `hello_world'
    from hello.rb:8:in `hello_world'
    from hello.rb:8:in `hello_world'
    from hello.rb:11:in `<main>'
blopez@blopez-K56CM ~/hello_scripts $

Note: In Ruby’s case, stopping the script with CTRL-C returns a bit more debugging information.

– The Ruby recursion code:

#!/usr/bin/env ruby
status = ARGV[0] # get script argument

def hello_world( status ) # define function, and get script argument
    puts status
    sleep 1 # sleep, or potentially crash your system
    return hello_world status # execute myself with argument
end

hello_world status # execute function with argument

Recursion example using Bash:

– How the Bash script is executed:

blopez@blopez-K56CM ~/hello_scripts $ bash hello.sh "..nice talking to you."
..nice talking to you.
..nice talking to you.
..nice talking to you.
^C
blopez@blopez-K56CM ~/hello_scripts $ 

– The Bash recursion code:

#!/usr/bin/env bash

mystatus=$1 # get script argument

hello_world() {
    mystatus=$1 # get function argument
    echo "${mystatus}"
    sleep 1 # breath between executions, or crash your system
    hello_world "${mystatus}" # execute myself with argument
}

hello_world "${mystatus}" # execute function with argument

Managing Your SMTP Relay With Postfix – Correctly Rejecting Mail for Non-local Users


I manage a few personal mail relays that I use for relaying my own mail and for experimentation purposes (mail logs are a great source of unique and continuously flowing data that you can use to try out different ideas in GUI, database, or parser development).  One of them was acting up recently.  I got a message from my upstream mail-queue host saying that they’ve queued up quite a bit of mail for me over the last few weeks, and that I should investigate, as they do want to avoid purging the queue of valid mail.

Clearly I wanted to avoid queuing up mail on a remote server that is intended for my domain, and so I set about understanding the problem.

What I found was that there was a setting in my /etc/postfix/main.cf that, although technically a valid setting, was incorrect for the role that this mail server was playing.  Specifically, the mail server was supposed to be rejecting email completely for non-local users, instead of just deferring it with a “try again later” message.

In this case, I’m using Postfix v2.5.5. The settings that control this configuration in /etc/postfix/main.cf are as follows:

  • unknown_local_recipient_reject_code
  • local_recipient_maps

local_recipient_maps

local_recipient_maps defines the accounts that this mail server will accept and relay mail for. All other accounts would be “rejected” by the mail server.

However, how rejected mail is treated by Postfix depends on how it is configured, and this was the problem with this particular server.

For Postfix, it is possible to mark a message as “rejected”, but actually have it mean “rejected right now, but maybe not permanently, so try again later”. This “try again later” will cause the e-mail message to be queued on the upstream server, until it reaches some kind of retry time-out and delivery is once again attempted. Of course this will fail again, and again.

This kind of configuration is great for testing purposes, because it allows you to test the same messages over and over again without losing them, or to queue them up so that they can be reviewed to ensure they are indeed invalid e-mail messages. However this is not the state you want your mail server to be in permanently. At some point once things are ready for long-term (production) use, you want your mail server to actually reject messages permanently.

unknown_local_recipient_reject_code

That is where unknown_local_recipient_reject_code comes in. This configuration property controls what the server means when it “rejects” a message. Does it mean right now, or permanently?

The SMTP server response code to reject mail permanently is 550, and the code to reject mail only temporarily is 450.

Here is how you would configure Postfix to reject mail only temporarily:

unknown_local_recipient_reject_code = 450

And here is how you set Postfix to reject mail permanently:

unknown_local_recipient_reject_code = 550

In my case, changing the unknown_local_recipient_reject_code from 450 to 550 is what solved the problem.

In summary, if you ever run into an issue with your Postfix mail server where you believe mail is set to be REJECTED but it still seems to be queuing up on your upstream mail relay, double-check the unknown_local_recipient_reject_code.  For reference, here are the relevant settings from my corrected /etc/postfix/main.cf:

# Local recipients defined by local unix accounts and aliases only
local_recipient_maps = proxy:unix:passwd.byname $alias_maps

# 450 (try again later), 550 (reject mail)
unknown_local_recipient_reject_code = 550

References
http://www.postfix.org/LOCAL_RECIPIENT_README.html
http://www.postfix.org/postconf.5.html#unknown_local_recipient_reject_code

My Fun With Necrolinguaphilia

Last night I attended a talk given by Dr. Damian Conway (of Perl Best Practices fame) titled “Fun With Dead Languages”.  Although this is a talk that Damian had given previously, it is the first time that I heard it, and I’m so glad I did!

I was the first to arrive at the Mozilla office building at 366 Adelaide, and so was able to score a sweet parking spot right across the street (no small feat in downtown Toronto).

I arrived and introduced myself to Damian as he was preparing for his delivery shortly before a herd of approximately 70 hackers (according to Mozilla) from all language and computing backgrounds started pouring through the meeting room doors to be seated.

Damian has a very energetic style of presentation, and was able to hold our attention while covering everything from the virtual extinction of the Gros Michel Banana, to the benefits and efficiencies of stack-based programming (using PostScript as an example).  He compares many, very different languages including Befunge, Brainfuck, Lisp, and Piet, and suggests that a great place to look for new ideas is what he calls the “Language Morgue”, where he includes languages such as Awk, Prolog, Cobol… and even C++ as examples of dead languages and language paradigms.

Mr. Conway also dived into excruciating detail on how the Latin natural language can be used as an effective computer programming language, and has even gone so far as to write a module called Lingua::Romana::Perligata, which he has made available on the CPAN.

I also had the special treat of sitting right behind Sacha Chua who brilliantly sketched notes of the entire talk in real-time.  I haven’t had the pleasure of formally meeting Sacha just yet (didn’t even say “hello”, my bad!) as I didn’t want to distract her.  Aside from having my mind blown by Damian’s talk, I was also being mesmerized by Sacha’s artistic skills, and so I do feel somewhat justified in keeping my mouth shut just to absorb everything that was going on right in front of me (front-row seats FTW!).

20130806 Fun with Dead Languages - Damian Conway

Sacha has made her “Fun With Dead Languages” sketch notes publicly available on her blog for everyone to review and enjoy, and has placed it under a Creative Commons license, so please share freely (and drop her a note to say “thanks!”).

Overall, I learned a lot from this talk and appreciate it immensely.  The energy of the audience made the discussion that much more enjoyable.  If you are interested in programming languages or language theory in general, I suggest you attend this talk the next time Damian decides to deliver it (or find a recording if one happens to be available?).  Damian Conway’s insights and humorous delivery are well worth the brainfuck ;)

Installing CentOS 6.4 from a Net Install Image on a Virtual Host

An Opportunity To Play Around with CentOS

One of the personal projects that I’ve always had itching away at the back of my mind was the urge to revamp my home network monitoring and security.  One of the tools that I love using for network monitoring is Xymon.  However, this gives me an opportunity to do things slightly different.  I have decided to give CentOS a go instead of my typical choice of Debian for a Linux distro in a server environment.  I am curious to see what advancements have been made in the RPM world, and I’d like to keep my Red Hat skills up to date.  What better way to do so than to set up a CentOS server with some production tools and services on it :)

Pre-installation Setup

So here we are, I have the CentOS Netinst (Net Install) image loaded into a VM, and I boot up the guest.

Since this is a fresh install on a 20GB virtual disk, I’m going to select “Install or upgrade an existing system” here.

I press “enter” and lots of console logging and scrolling action takes place.

Eventually I am prompted to “test the media”.  Usually this is referring to a physical CD typically used to install the OS on a physical server.  To me the phrasing feels a bit antiquated in this day of cloud services.

In any case, I still say yes, hoping that it will catch any errors in the ISO image file before I run into a bug during the installation process.  Better safe than sorry.

After the virtual disk is “successfully verified” to be OK, I try to move forward with the installation.

Be sure to note that after your virtual disk is verified OK, the installer may decide to eject your CD media, in order to give you an opportunity to test other media.

Since I have no other media to test, this is actually kind of annoying. In order to continue with the installation, I have to go into the VM settings and re-connect the CDROM to the VM.

Select your language and keyboard options if the defaults are not suitable.  Otherwise, just  move past these dialogues by selecting “OK”, or hitting enter.

 

When you are asked “What type of media contains the installation image?”, select “URL”.

What is the “Cloud”?

The “Cloud” Will Save Us!

You hear about it every day, “cloud services”, “cloud storage”, “the cloud as a platform”. But what is the “cloud” really? The definition of what the “cloud” is, is different for everyone.  Some believe it is the implementation of a certain group of technologies, such as web servers, virtual hosts, and GUI frameworks.  Others believe it is a philosophy for modern software development and implementations – in particular web-based and mobile implementations. Others still see the “cloud” as simply a way of out-sourcing infrastructure – yet still somehow see the need to have dedicated “Cloud Administrators”.

So what is the “Cloud” really?  I offer my humble opinion below.

In With the Old, Out With the New

Virtualization has been around for a very long time, so has Software as a Service (SaaS) and Platform as a Service (PaaS).  These technologies have been with us in different forms and iterations since the time of X11.  Of course, these technologies have evolved significantly over time, but that does not make them revolutionary, merely evolutionary.

I keep hearing phrases and comments to the effect of “the cloud changes everything”, when in fact it really doesn’t.  It is simply another form of outsourcing.  The real benefit of today’s “cloud” technology is that it makes (or seems to make) management of infrastructure easier.  But convenience always comes with a price.

Easier? Maybe Not So Much.. Especially For Seasoned Professionals

The easier things are, the more often you are likely to do them.  If it becomes easier to deploy apps via Amazon EC2/S3, or to a DotCloud instance, then there is a strong likelihood your organization will deploy more of them.  Instead of managing infrastructure, you are now concerned with managing deployment practices, configuration standards, and code-bases. Not to mention the human resources required to maintain those applications going forward.

The infrastructure “problem” doesn’t go away, it’s just relocated – it’s now someone else’s problem.

Over-Reacting and Under-Utilizing

When organizations frantically down-size their teams in a drastic attempt to remain modern, it bothers me; saddens me really, because deep down I know that the new cloud-based technologies these organizations are hoping to take advantage of are simply re-iterations and re-implementations of the same technologies they’ve always had to deal with.  HTTP, CSS, SSH, and Linux, for example.  It is quite likely that most companies with significant IT resources already have people who are skilled enough to rip through the implementation of “cloud” technologies, armed only with their previous experiences, and the core “problem-solver” attitude that they’ve always had, that doesn’t go away with time.

“Not enough Cloud experience.” Really? Do you mean using a GUI web interface to setup a remote host?  Or perhaps you mean the command-line configuration that needs to be done to YAML formatted text files in order to get a Rails application up and running?  Of course old-hat Systems Administrators or Web-Application developers don’t know “precisely” how it all works – the first time around.  But after the effort is put in to get the application up and running, to document the setup and check it into version control, and to automate as much of the time-consuming or repetitive manual tasks as much as possible, the rest is, as they say, “cake”.  What you need to focus on is developing the kind of people who can do all of this, and have fun with it.  This is how you effectively re-train.  This is how you retain good talent.  You have to allow the people you have to show you they can adapt.  It is a waste of experience to let people go because their experience is not up-to-date.  That’s not their fault.

More Of The Same, Spot The Patterns

Newer scripting languages and frameworks are being hyped as if they can do things that have never been done before.  I’ve seen this with the likes of Ruby, Python, and Perl. Despite the fact that Perl has one of the largest, most organized, stable, and well-tested libraries of any programming language to date (the CPAN), it doesn’t get the same kind of love that newer languages like Ruby and Python do, especially in corporate environments.  Sometimes it does in fact pay to re-invent the wheel, but most often it does not.

In Conclusion

If you are still trying to figure out what the “cloud” really is, know that it is simply a string of technologies that have been around for a long time, re-branded to look new and cool (for marketing purposes), and bundled with some new management tools and remote storage to make things “easier”.

To sales and marketing folks, it could simply mean trendy and cool.  To developers, it may mean LAMP or MEAN.  To systems and infrastructure people it could mean hyper-visors, virtual machines, and software containers.  To DevOps folks, it may involve Puppet, Chef, and Ansible automations, or Continuous Integration.

To recruiters and hiring managers, it often means Amazon AWS and Spring Framework Experience.  And to end-users, it typically means anything they can access from all of their phones, laptops, tablets, and PCs simultaneously.

The “Cloud” means many things to many different people.  My humble opinion?  At its core, at the heart of all the technology and implementation that has made it all possible, are tools, software, and individual experience that have been around since the beginning, and it is ALL based on the concept of Open Communication, and the spirit and foundation of Free and Open-Source Software.

Just Another Perl Hacker

Sometimes writing small snippets of code can be meditative.

The other day, I realized that, even though I happen to be, among other things, Just Another Perl Hacker, I never bothered to write my own JAPH signature.  So I went ahead and wrote up a very simple (but effective) one.  Once it was complete, it dawned on me that other perl hackers may appreciate the ability to generate a signature like my own.

Since perl is all about code reuse and sharing, I figured I would write up a JAPH signature generator so that anyone can have an awesomely obfuscated JAPH signature like I do.

Firstly here’s my JAPH signature:

$_='Kvtu!Bopuifs!Qfsm!Ibdlfs-!K/!Cpccz!Mpqf{';
@_=split//;foreach(@_){print chr(ord()-1)}

You can run the JAPH signature by copying and pasting it into a text file (e.g., japh_sig.pl), and running it with:

perl japh_sig.pl

Which returns:

Just Another Perl Hacker, J. Bobby Lopez

You can also run the JAPH signature straight off the command line (with ‘perl -e’), but you have to replace the single quote characters in the string with double-quote characters, for example:

perl -e '$_="Kvtu!Bopuifs!Qfsm!Ibdlfs-!K/!Cpccz!Mpqf{";@_=split//;foreach(@_){print chr(ord()-1)}'

This is all just for fun of course, but if you do end up using my JAPH signature generator, please let me know by sending me a quick message on Twitter to @jbobbylopez.

Have Fun!

The JAPH Signature Generator

#!/usr/bin/env perl
##############################################
# USAGE:
#   perl japh.pl This is my awesome signature
#
# OUTPUT:
#   Your JAPH Signature:
#       $_='Uijt!jt!nz!bxftpnf!tjhobuvsf';
#       @_=split//;foreach(@_){print chr(ord()-1)} 
#
#   Returns:
#       This is my awesome signature
#
# AUTHOR: J. Bobby Lopez <jbl@jbldata.com>
##############################################

use strict;
use warnings;
use feature 'say';

my $offset = 1;
my $signature = join (" ", @ARGV);
my $obfus_sig;

my @obfus = ();
my @sig = split //, $signature;

foreach my $c (@sig)
{
    push @obfus, ( chr ( ord($c) + $offset ) );
}

$obfus_sig = join ("", @obfus);

say <<"OUT";

Your JAPH Signature:
\t\$_='$obfus_sig';
\t\@_=split//;foreach(\@_){print chr(ord()-$offset)} 

OUT
print "Returns:\n\t";
@_=split//,$obfus_sig;foreach(@_){print chr(ord()-$offset)};
say;
1;

Why You Shouldn’t be Sharing “Live” Documents by E-mail

 

If you and your team, in 2013, are still sharing Microsoft Office (Word, Excel) documents via internal corporate e-mail, I’ve got news for you.  You’re doing it wrong.

“Live documents” are documents that are actively being updated and collaborated on by multiple people.  Collaborating on these documents by e-mail is a process that you should avoid.  It is a process that can eat away at your team’s productivity precious minutes at a time, and can severely impede your team’s work-flow and ability to stay synchronised.

I’ve been involved with projects where this method of collaboration was adopted.  Whenever I recognize this to be the case, I would immediately share my concerns, and try to suggest better ways of getting the team organized. There are always better ways to do it.

One of the biggest problems with e-mail document sharing is that there is no tracking or accountability.  There is no way to easily know what version of the document you have in your possession.  Is it the latest?  Perhaps it’s new enough?  Ever had to find an email with a document attachment, and ended up trying to craft clever little search terms to search your inbox?  Even if you find the document you were looking for, there is no way for you to know whether it is the last official revision, unless a system is implemented to allow “official” versions of the document to reside in a central location.

If there is a point-person in charge of managing this kind of set-up (for example, a simple system implemented with shared folders), and the maintainer ends up leaving the company for any reason (vacation, short-term disability, lay-off), then you still end up in a bad situation.  Without someone actively maintaining the structure of the document store, things will end up getting messy very quickly.  Users will begin storing documents in arbitrary locations (whatever feels right at the time), and before you know it, you will have to start yet another document archive clean-up project.

Version control is ubiquitous, and it is here to stay.  Any company (in any industry, not just IT) not seriously considering a process for document revision control should at least make it a point to have the discussion at least once a year.  You may find that your current document handling processes are actually a significant time waster, and that implementing a document management system could save you a lot of time (or money) over the long run.

There are many document sharing and collaboration technologies available today.  Some of the more popular include Sharepoint (if you are a Microsoft shop) or Documentum.  There are also many open source (free) packages, such as Drupal, Joomla, and Liferay.  There are even projects like Etherpad that make collaboration just plain fun.  You can also roll-your-own (if you are so inclined) by developing a custom system on top of foundational version control software such as Git or Bazaar, as I personally have done in the past.

Do your research when considering a content management system.  Some important considerations you might want to make include:

  • Is it easy to set up?
  • Is it easy to use?  Does it blend well with our team’s work-flow?
  • Is it safe?  Is it easy to make backups?
  • What kind of security mechanisms does it have built-in?
  • Is it easy to get our data out of the system (strong import/export functionality), in the event that we decide to move to another system in the future?
  • Is it cross-platform, or does it tie us to a specific platform (operating system)?
  • Is the cost worth the investment for a company our size?

The important thing here is to start thinking about it.  Be open to evaluating multiple products before you decide on a system that blends best with your organization’s work-flow. Software is about solving problems, which includes eliminating routine and time-consuming tasks.  If your company is not continually looking at new ways to improve efficiencies via clever (and practical) software implementations, then it will eventually be left in the dust as more efficient start-ups and entrepreneurs bring their shiny new productivity platforms to the game.

Adventures with Ubuntu 12.04 and Linux Mint 14 (Nadia)

Over the last week I’ve been playing around with Ubuntu 12.04 (Precise Pangolin) and Linux Mint 14 (Nadia).  Although I can appreciate Linux Mint (it is indeed very elegant), I think I will be sticking with Ubuntu 12.04 LTS for the time being.

My affection for the Unity interface that comes with Ubuntu 12.04 stems from the fact that I’ve been a heavy user of Mac OSX over the last year.  Before that, I was using Ubuntu 9.04, but the UI was heavily modified and stripped down, as I was a heavy user of the Xmonad window manager.

Having that experience with Xmonad, which is essentially a high-productivity, tiling window manager; and later working with the MacBook Pro (Late 2011) OSX environment, I’ve come to appreciate how important it is to have a powerful desktop UI that also gets out of your way.  The Unity interface follows that line of thinking, and is a real treat to work with once you start getting the hang of it.

There are some drawbacks to Unity, especially with regard to how applications are organized within the launcher, however I find that overall it will be a very rewarding environment to work in.

Luckily I have all my Vim and GNU Screen configuration files checked into version control, so it was easy enough for me to get GVim and all my other cross-platform apps up and running in my new desktop environment with minimal fuss.

Some screen shots of my desktop environment below:

The only real problems I ran into with Ubuntu 12.04 were hardware related. I’m running an ASUS S56CM Ultrabook, which has an oddly integrated Nvidia GT635M GPU.  So for now, I need to run my graphics-intensive (OpenGL) applications via Bumblebee v3.0; however, once that was set up, everything worked fantastically!

The Apache Software Foundation Celebrates the 17th Anniversary of the Apache HTTP Server with the release of v2.4

World’s most popular Web Server powers nearly 400 million Websites across the globe

Numerous enhancements make Apache HTTP Server v2.4 ideally suited for Cloud environments. They include:
•    Improved performance (lower resource utilization and better concurrency)
•    Reduced memory usage
•    Asynchronous I/O support
•    Dynamic reverse proxy configuration
•    Performance on par, or better, than pure event-driven Web servers
•    More granular timeout and rate/resource limiting capability
•    More finely-tuned caching support, tailored for high traffic servers and proxies.

Read the full press release at The Apache Foundation’s blog.

UK Government To Demand Data On Every Call And Email

[techweekeurope.co.uk] UK Government To Demand Data On Every Call And Email

Plans could force ISPs and phone operators to hand over records on all phone calls, emails, Tweets and Facebook messages

[telegraph.co.uk] Phone and email records to be stored in new spy plan

Details of every phone call and text message, email traffic and websites visited online are to be stored in a series of vast databases under new Government anti-terror plans.

This story also made the Slashdot front page.

Using CouchDB with Perl?

I’ve been working away on a project where I’ll be using CouchDB and Perl, and was searching the ‘net for information on CouchDB CPAN modules.

There were, of course, a lot of CouchDB modules that turned up on MetaCPAN; however, I couldn’t figure out which one I should bother messing around with.  I was looking for something simple and straightforward, similar to the module that was posted in full-source format on Apache CouchDB’s “Getting Started with Perl” guide.

I looked at CouchDB::Client, but found the implementation a little scattered – there are no functions documented that explain how to deal with couchdb documents, only for getting info on and creating databases (the tests in ‘t/’ weren’t very helpful here either).  And the functions don’t say anything about document id’s, which would have been nice.

I also looked at AnyEvent::CouchDB, but again there seemed to be too much going on.. too many methods for doing many things that I won’t need to do.

The “Getting Started with Perl” guide talks about a module called Net::CouchDB, a module that is curiously missing from CPAN as far as MetaCPAN and search.cpan.org are concerned – but Jeremy Zawodny wrote up a nice guide called “Hacking with CouchDB” that uses this module, showing its clear interfaces with function names like:

  • $cdb->create_db
  • $cdb->put, and
  • $cdb->new

These are of course in contrast to confusing function names like:

  • $cdb->couch(), or
  • $cdb->replicate()

.. that you’d have to deal with in AnyEvent::CouchDB.

Eventually MetaCPAN led me to “CouchDB::Simple”, which sounded a lot more like what I was looking for.  I didn’t get very far with it however, since the install failed.  I e-mailed the author to give him a heads up, since I don’t think it was my environment at fault (perlbrew perl 5.14.2 + cpanm).  I’ll give it another try this week and see if I can get through that hurdle.

 Update (2012-01-25):  Gave AnyEvent::CouchDB a try, things didn’t go as smoothly as I was hoping.  I can attribute some of this struggle to my lack of familiarity with CouchDB.  Also gave DB::CouchDB a try and got a lot further, but ran into a problem with “bad_content_type” error messages when attempting to post a document to the database.  A little reading and I found that this error can easily be triggered by JSON syntax errors.. but isn’t that what the module is supposed to handle?  I’m thinking I may now give this a try with WWW::Curl or some such, since I’m not having too much luck with CouchDB specific modules… but I’m not done yet :)
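In the meantime, for anyone who wants to sidestep the wrapper modules entirely, here is a minimal sketch of talking to CouchDB’s HTTP API directly using HTTP::Tiny and JSON::PP (both ship with perl 5.14).  This is only illustrative: the database name, document id, and the assumption of a stock CouchDB instance listening on localhost:5984 are all mine, not taken from any of the modules discussed above.

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

use HTTP::Tiny;
use JSON::PP qw(encode_json decode_json);

my $couch = "http://localhost:5984";   # assumes a default local CouchDB
my $db    = "test_db";                 # hypothetical database name
my $http  = HTTP::Tiny->new;

# Create the database (CouchDB answers 201, or 412 if it already exists)
my $res = $http->put("$couch/$db");
say "create db: $res->{status}";

# Create a document with a known id via PUT /db/docid
my $doc = { type => "note", body => "hello from HTTP::Tiny" };
$res = $http->put(
    "$couch/$db/example_doc_1",
    {
        headers => { "Content-Type" => "application/json" },
        content => encode_json($doc),
    }
);
say "create doc: $res->{status}";

# Fetch the document back and decode the JSON response
$res = $http->get("$couch/$db/example_doc_1");
if ( $res->{success} ) {
    my $fetched = decode_json( $res->{content} );
    say "fetched body: $fetched->{body}";
}

Updating a document works the same way: GET it, change the fields, and PUT it back including the _rev value that CouchDB returned, otherwise the server rejects the write with a conflict.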

 

Just had an idea for a multi-player, multi-controller, single interface, multi-achievement gaming environment.

Imagine a multiplayer game where two or more people are playing the same game simultaneously, and controlling the same character. I don’t mean controlling parts of the character, I mean the whole character (for example, if implemented in a first-person shooter, or RPG like Oblivion).

The game starts, and you are both playing the same character at the same point in time. The way this works is that each player is playing an instance of that character in the same world. Each player is able to make decisions and do things however they see fit with their character instance. The character that has the highest achievement score after a major decision or event becomes the save-point for the next period of play. So whoever makes the better decision, or whoever fights the best and delivers the most damage and kills the bad guy, wins that round, and the game continues forward from that point.

This could be applied to games like Oblivion, where multiple people are playing the same character, and the one who kills the vampire, or the one who is able to pick the lock, wins that “encounter” and receives a separately counted set of points (tied to the player, not the character), and the game is saved and continues from that point.

I think this would be a great game to play, you can jump in and out any time you want, and the game will continue to move forward because of the other players. Consider this idea GPL’d.

Playing With Prime Numbers

I’ve been toying around with functional programming, and recently came across a perlmonks thread discussing multiple ways to calculate prime numbers.  One of the things I noticed about many of the examples was that almost all of them used loops of some sort (for, when, etc).  So I decided to tackle the problem without using any loops.  Instead, I’ll just use recursive functions.

Firstly, here’s the perlmonks thread: Prime Number Finder

And here’s the solution I came up with:

#!/usr/bin/env perl

use strict;
use warnings;
use 5.010;

$DB::deep = 500;
$DB::deep = $DB::deep; # Avoids silly 'used only once' warning

no warnings "recursion";

# Identify primes between ARG0 and ARG1

my ($x, $y, $re_int, $result);
my ($prime, $is_int);

$x = $ARGV[0];
$y = $ARGV[1];

$is_int = sub {
    my $re_int = qr(^-?\d+\z);
    my ($x) = @_;
    $x =~ $re_int
      ? 1
      : 0;
};

$prime = sub {
    my ( $x, $y ) = @_;
    if ( $y > 1 ) {
        given ($x) {
            when ( $is_int->( $x / $y ) ) {
                return 0;
            }
            default {
                return $prime->( $x, $y - 1 );
            }
        }
    }
    else { return 1; }
};

$result = sub {
    my ( $x, $y ) = @_;
    if ( $x <= $y ) {
        if ( $prime->($x, $x-1) ) {
            say $x;
        }
        $result->( ( $x + 1 ), $y );
    }
};

$result->($x, $y);

When running this code with larger numbers, I would eventually run into “deep recursion” warnings, which is why I’ve had to use no warnings "recursion"; and set $DB::deep to a specific value higher than 100 (which is the default). $DB::deep is a debugging variable used specifically to limit recursion depth, in order to prevent long-running or infinite recursive operations.

The method I’m using here to calculate prime numbers isn’t the most efficient, since I’m not doing anything to reduce the amount of numbers I have to test at each cycle. However, adding some extra intelligence to this, such as the filtering used by the Sieve of Eratosthenes (an “ancient Greek algorithm for finding all prime numbers up to a specified integer.”) should be doable.
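For reference, here is a rough sketch of what that kind of filtering could look like in Perl.  Note that this is my own illustrative example (not from the perlmonks thread), and unlike the recursive approach above it happily uses plain loops; the point is just to show the sieve idea of crossing off multiples instead of test-dividing every candidate.

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

# Sieve of Eratosthenes: print all primes up to a limit.
my $limit = $ARGV[0] // 100;

my @is_prime = (1) x ( $limit + 1 );   # assume everything is prime to start
@is_prime[ 0, 1 ] = ( 0, 0 );          # 0 and 1 are not prime

for my $n ( 2 .. int( sqrt($limit) ) ) {
    next unless $is_prime[$n];
    # Cross off every multiple of $n, starting at $n squared
    for ( my $m = $n * $n; $m <= $limit; $m += $n ) {
        $is_prime[$m] = 0;
    }
}

say for grep { $is_prime[$_] } 2 .. $limit;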

I’ll be keeping an eye out for other solutions, since I’m sure there are many (especially in perl), but so far this one seems to be fairly fast and clean. I’m looking forward to what Math::BigInt can offer here as well, if anything.

Playing with Factorials, Haskell, and Perl

I’m currently making my way through a book called “Seven Languages in Seven Weeks” by Bruce A. Tate.  So far it’s been an interesting read, but I’m far from finished.

One of the things in the book that caught my eye was a recursive factorial function in Haskell, which seemed so simple, that I had to see what it would look like in perl.

So I wrote up the following perl snippets to calculate factorials.  There are, of course, multiple ways to do it as I’ll describe below.  There are also (likely) many other ways which I haven’t thought of, so if you have an interesting solution, please share.

One of the things that really caught my attention was how simple the syntax was for writing something so complex.  Recursion is a fairly simple idea once you’ve seen it in action – a function that executes itself.  However, the implementation of recursion in a given programming language can be somewhat difficult to comprehend, especially for new programmers or those without programming experience.

Although I haven’t dived into Haskell quite yet, it seems to make implementing a factorial function so simple, that I kind of stumbled when trying to understand it, thinking I was missing something.. but it was all there in front of me!

Firstly, let’s clarify what a factorial is (from wikipedia):

In mathematics, the factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example,

5! = 5 × 4 × 3 × 2 × 1 = 120

 

So the factorial of 5 is 120.  Or 5! = 120.  Let’s look at the Haskell example from the book.

let fact x = if x == 0 then 1 else fact (x - 1) * x

The above line is saying “if x is 0, then the factorial is 1 – otherwise, call myself with (x - 1), multiplied by x”

Let’s look at this in ghci (the Haskell console):

[jbl@watchtower tmp]$ ghci
GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
Prelude> let fact x = if x == 0 then 1 else fact (x - 1) * x
Prelude> fact 5
120
Prelude>

After seeing how easy it was to implement the recursive factorial function in Haskell, here are my attempts in perl.

Firstly, using a loop:

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

my $nni = $ARGV[0] ? $ARGV[0] : 5;

for my $i ( 1..($nni - 1) )
{
    $nni = $nni * $i;
    say $nni;
}

This first example doesn’t implement a function, and is really just bad (but still working) code. It requires that your base number be global and alterable, in this case $nni.

Now, lets try it with an actual function:

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

my $nni = $ARGV[0] ? $ARGV[0] : 5;

sub fact 
{ 
    my ($nni) = @_;
    return !$nni ? 1 : fact( $nni - 1 ) * $nni;
}

say fact($nni);

This second method works similarly to the Haskell implementation. It implements a function that calls itself, without any looping required.

However, it’s still not as concise as the Haskell version, so lets try again:

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

my $nni = $ARGV[0] ? $ARGV[0] : 5;
my $fact;
$fact = sub { my ($nni) = @_; !$nni ? 1 : $fact->( $nni - 1 ) * $nni };
say $fact->($nni);

Aha, now we’re getting somewhere. In this third example, the fact() function is anonymous, and we’re assigning it to $fact via reference. This allows us to use $fact like an object with a single method that does the factorial calculation.

Although this is pretty much as concise as I was able to get it while taking readability into account, here’s a final example that goes a step further:

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

my ($nni, $fact);
$nni = $ARGV[0] ? $ARGV[0] : 5;
$fact = sub { !$_[0] ? 1 : $fact->( $_[0] - 1 ) * $_[0] };
say $fact->($nni);

This last example uses perl’s pre-defined variable @_ which automatically holds a list of function arguments by default. I usually avoid doing this, since it hurts readability, especially for those who don’t live and breathe perl on a daily basis.

To my surprise, it would seem that Haskell has Perl beat (at least in this example) as far as readability + conciseness is concerned.

I haven’t spent much time playing golf here to reduce the number of lines or characters beyond the last example, but if anyone does come up with a tighter solution, please let me know!


Edit (20111005T22:43:50): Here’s a version I found that uses the Math::BigInt module

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";
use Math::BigInt lib=>'GMP';

my $b = Math::BigInt->new($ARGV[0]);
say $b->bfac();

This version is likely much faster, since the Math::BigInt package is intended to be used in situations where large integers are being handled.

Here’s the post I found with examples written in other languages as well: Factorial Challenge: Python, Perl, Ruby, and C

Now Listening: Deadmau5 – Faxing Berlin (Grifta Dubstep Remix)

Now that I’ve updated my Arch Linux 64 desktop with some newer packages, I’ve come across a few surprises. For one, XBMC works! I was so peeved when I couldn’t get it to work before. XBMC is an awesome media player for Linux (and the original Xbox).

The latest releases of XBMC include a visualization plugin that I’ve been trying to get my hands on for the longest time, called ProjectM. No other media player that I was comfortable installing had a ProjectM plugin, including VLC and Totem.

Anyway, I now finally have XBMC running (and I can even run it in a window, under Xmonad.. which looks freaking awesome.)

Happy Friday!

Understanding the Concepts of Classes, Objects, and Data-types in Computer Programming


Every once in a while I get into discussions with various people about computer programming concepts such as objects, classes, data-types, methods, and functions.

Sometimes these discussions result in questions about the definition of these terms from those who have some programming experience, and sometimes these questions come from those who have no background in computer science or familiarity with programming terminology whatsoever.

I usually attempt to answer these questions using various metaphors and concepts that I feel the individual or group may be able to relate to.

It is quite possible that even my own understanding of these concepts may be incorrect or incomplete in some way. For the sake of reference and consistency, I am writing this brief article to explore these concepts in the hope that it will provide clarity into their meaning and purpose.

 

So, what is a data-type?

Some languages, like C, have a strict set of data-types. Other languages, like C++ and Java, offer the developer the ability to create their own data-types that have the same privileges as the data-types that are built into the language itself.

A data-type is a strict set of rules that govern how a variable can be used.

A variable with a specific data-type can be thought of in the same way as material things in the real world. Things have attributes that make them what they are. For example, a book has pages made of paper that can be read. A car is generally understood to be an automobile that has four wheels and can be used for transport. You cannot drive a book to the grocery store, in the same sense that you cannot turn the pages of a car.

A data-type is a specific set of rules for how a variable can be used.

 

Data-types in computer programming may include examples such as:
_____________________
Object    Type
======    ====
Int       number, no decimal places
Float     large number with decimal places
Char      a plain-text character
_____________________

 

More familiar, real-world examples may include:
_____________________
Object     Type
=======    ====
Bucket     Strong, can hold water, has handle
Balloon    Fragile, can hold a variable amount of air, elastic, portable
Wheel      Round, metal, rubber, rolls
_____________________

 

In languages like C++, there are core data-types such as the ones found in C. However, C++ also offers developers the ability to create their own data-types.  Providing developers the ability to create their own data-types makes the language much more flexible. We more commonly refer to a user-defined data-type by the more popular term, class.

In C++, a class is a user-defined data-type [1]. That’s all it is. It provides the developer the ability to create a variable (or object) with specific attributes and restrictions, in the same way that doing “int dollars = 5;” creates an object called “dollars” whose attribute is to have a value which is strictly an integer. In the real world, a five-dollar bill cannot be eaten (technically), and it cannot be driven like a car to a grocery store (even though that’s where it will likely end up).

An object is a variable that has been defined with a specific data-type. A variable is an object when it is used as an instance of a class, or when it contains more than just data. An object in computer programming is like an object in the real world, such as a car or a book. There are specific rules that govern how an object can be used, which are inferred by the very nature of the object itself.

The nature of computer programming means that developers have the ability to redefine objects, for example making the object “book” something that can be driven. In the real world however, we know that you can call a car a book, but it’s still a car. The core understanding of what a car is has been ingrained within us. Although “car” is simply a three letter word (a symbol, or label), there are too many people and things in the world that depend on the word “car” having a specific definition. Therefore objects in the real world cannot be as easily redefined as their counter-parts in computer programming (however, it is still possible [2]).


So what is a method?

In computer programming, we have things called “functions”. A function is an enclosed set of instructions which are executed in order to generate (or “return”) a specific result or set of results. You can think of a function as a mini program. Computer programs are often created by piecing together multiple functions in interesting and creative ways.

Functions have many names, and can also be referred to as subroutines, blocks, and methods. A method is a function which is specifically part of a class, or a user-defined data-type, which makes a method an attribute of an object – something that the object is capable of doing.  Just like in the real world, methods can be manipulated and redefined for an object, but not for that object’s base class.  A book can be used to prop up a coffee table, but that does not mean that books are by definition meant to be used in this way.
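To make the class/object/method relationship a little more concrete, here is a tiny sketch in Perl (the language used for most of the code on this blog) rather than C++.  The Book class, its attributes, and its read_page() method are all made up for this example; they are not from Stroustrup’s book.

#!/usr/bin/env perl

use strict;
use warnings;
use feature "say";

# A user-defined data-type (a class) called Book.
package Book;

sub new {
    my ( $class, %args ) = @_;
    my $self = {
        title => $args{title},
        pages => $args{pages},    # attribute: books have pages
    };
    return bless $self, $class;   # $self is now an object of type Book
}

# A method: something a Book object is capable of doing.
sub read_page {
    my ( $self, $n ) = @_;
    say "Reading page $n of '$self->{title}'";
}

package main;

my $book = Book->new( title => "The C++ Programming Language", pages => 1024 );
$book->read_page(224);

# A Book can be read, but it has no drive() method; calling
# $book->drive() would die, because books are not cars.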


Enlightenment achieved!

I’m not really sure where I was going with all of this, but the above should be sufficiently lucid.   I was motivated to write this after recently referencing Bjarne Stroustrup’s “The C++ Programming Language”.  If you’ve ever asked yourself the question “what is an object?” or “what is a class?”, then the above descriptions should serve as a useful reference.

[1] “The C++ Programming Language – Special Edition”, page 224.

[2] For example, the definition of “phone” has been redefined several times in recent history, from the concept of a dial-based phone, to cell phones, to modern smart-phones.

 

Coke Zero taste in my mouth, and my legs are rather numb

So I’m sitting here in front of my 28-inch I-INC monitor again, after the longest while. A couple of months ago we decided to mount the I-INC on the wall in our bedroom, and move the 32-inch Samsung LCD TV into the basement to serve as my replacement monitor. This was a big mistake. The reasons were relatively justified. We recently bought a 55-inch Samsung plasma to replace our now puny 32-inch Samsung in the living-room, and I wanted to start using my original XBOX again because of another personal quest: to start playing DDR again with my Red-Octane dance pad.

The Red-Octane dance pad didn’t work with the 360, so I had to use the original XBOX. However, the original XBOX did not support HDMI connectors, so I had to find a way to connect it to my monitor in the basement. Since this didn’t seem feasible, I decided that replacing my 28-inch I-INC LCD monitor with my 32-inch TV would be a good idea. Again, bad idea.

Anyway, after several weeks of utilizing an awkward 1366×768 resolution with 75 DPI fonts, I decided it was time to admit I was wrong, and revert to using the 28-inch monitor (I put the 32-inch in the bedroom, like I should have done all along).

Someone Hacked My Web Server

So I just found out that someone hacked into my web server recently. I’m not sure when they started poking around, but I saw some significant activity around December 17th.

I say “hacked” instead of “cracked” or defaced/damaged because I haven’t seen any actual malicious activity, just a lot of WordPress PHP scripts which had some eval code added to the top.

I’ve backed up the hacked PHP scripts and will try to decipher them later. The scripts are basically a bunch of PHP evals of statements encoded in base64. I could probably decode them quickly via a small Perl script to change all the evals to print statements, and then use the equivalent of perltidy to make them readable in order to find out exactly what they were trying to do.
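
If I do get around to it, the first pass would probably look something like this rough sketch (assuming the injected code follows the common eval(base64_decode('...')) pattern; the real payload may be nested or obfuscated differently):

#!/usr/bin/perl
use strict;
use warnings;
use MIME::Base64 qw(decode_base64);

# Read a suspect PHP file and print the decoded contents of any
# eval(base64_decode('...')) calls so they can be inspected safely.
my $file = shift or die "usage: $0 infected-file.php\n";
open my $fh, '<', $file or die "cannot open $file: $!\n";
my $code = do { local $/; <$fh> };
close $fh;

while ( $code =~ m/eval\s*\(\s*base64_decode\s*\(\s*['"]([A-Za-z0-9+\/=]+)['"]\s*\)\s*\)/g )
{
    print "---- decoded block ----\n";
    print decode_base64($1), "\n";
}

Each decoded block can then be tidied up and read at leisure to figure out exactly what it was doing, without ever letting it execute.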

In any event, it’s likely they still have some backdoor set up, because it seems they got root access, or at least the ability to write a file with root permissions into the DocumentRoot, so I’ll have to keep an eye out.

I’ve upgraded the system to Lenny (it was running Debian Etch, so yeah, I’m at fault there) and upgraded WordPress from 2.3.x to the latest 3.0.4. I blew away the hacked WordPress instance and installed WordPress from scratch, along with some other things which hopefully will alert me when something like this happens again.

To the person responsible – I’m not running this web server as some sort of proof of my skill set, it’s simply a personal web server which I am hosting myself because I don’t very much like to be pushed into the idea of cloud computing and hosting my stuff on blogspot, etc. I think it’s good to be able to host your own applications and services, and not be tied down to services provided by Big Corp.

My message to you is this: use your head. It was probably fun to try to break in, but actions like this are what’s causing people to embrace cloud computing with open arms, and eventually Big Corp will be hosting everyone’s data, and the freedom that you have to learn how to manipulate PHP will be non-existent because we’ll all be stuck in AOL hell.

If you want to do something cool and interesting, why not try using your skills to help people?

If anyone’s interested in taking a look at the encoded PHP, here’s what looks to be one of the primary sources: style.css.php.  Note that the script is basically all on a single, really long line, so some text editors may have trouble displaying it.

S5 Presentation Software, XMind, Freemind, and mm2s5

I’m tired and a bit wired, but I figured I’d put a few words together just to purge my messy mind. So today I’d like to talk about presentation software (a la PowerPoint), mind-mapping software, and how to get from one to the other in an interesting way.

I’ve been a mind-mapping fanatic for many years, as far back as 2004 if I recall correctly. Back then (and even up to today) I’ve used and loved the free and open-source mind-mapping software called Freemind [http://freemind.sourceforge.net/wiki/index.php/Main_Page]. It’s a great little piece of Java software which provides a friendly UI for brainstorming and outlining using mind-maps.

These days, I use a mix of Freemind and XMind to do my day-to-day brainstorming and planning. XMind is like Freemind (in fact, I’m sure it borrowed many ideas from that project), but has a nicer UI, and many more options in terms of layout, tagging, markers, etc. I find that I jump between the two often, until my brainstorming takes on a life of its own, at which point I will stick to one or the other for the remainder of the map creation.

I recently had to put together a presentation for the Toronto Perl Mongers group to discuss, well, Perl.. and VMware. And of course I whipped out Freemind and XMind to start the brainstorming process. XMind has a nice feature that allows you to export your mind-maps to an MS PowerPoint or OpenOffice Impress type format, which is great and exactly what I needed. The problem, though, is that this feature is not free; it comes as part of XMind’s online subscription services for their “professional” version of the product. Even though the price is fairly reasonable, and I’m sure at some point I may just bite the bullet and subscribe, I wasn’t ready to do that just yet. So I was on the hunt for some way to convert my mind-map into some kind of presentation.

To their credit, one thing that XMind does do properly is allow you to export your XMind maps to Freemind’s .mm format. This is great, because Freemind itself has multiple freely accessible export formats, including exports to OpenOffice.org and PDF. However, I wasn’t satisfied; I was looking for something that would do the job more completely.

Eventually I came across a neat little HTML/Javascript based presentation tool called S5, which stands for “Simple Standards-Based Slide Show System”. This tool was exactly what I was looking for! Its small, clean, no-fluff implementation meant that I could whip up a professional looking presentation without the need to load up any bulky software aside from Firefox. The problem remained, though, that my data was still in XMind (and Freemind) formats. I was considering writing a tool that would convert Freemind XML files into S5 HTML documents, which would have been fairly easy since both formats are fairly open and clear, however that would have taken a good deal of time, and time is one thing that I never seem to have enough of these days.
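
Had I gone down that road, the converter probably would have started out as something like the rough sketch below. This is only my guess at the shape of the problem, assuming XML::Simple for parsing the .mm file and S5’s convention of one div class="slide" per top-level branch; a real converter would also need to emit the S5 boilerplate header, handle deeper nesting, and escape the text properly.

#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;

# Very rough Freemind (.mm) to S5 slide-body converter: each child of
# the root node becomes a slide, and its children become bullet points.
my $mm_file = shift or die "usage: $0 map.mm\n";
my $map = XMLin( $mm_file, ForceArray => ['node'], KeyAttr => [] );

my $root = $map->{node}[0];
print qq{<div class="presentation">\n};
foreach my $slide ( @{ $root->{node} || [] } )
{
    print qq{<div class="slide">\n<h1>$slide->{TEXT}</h1>\n<ul>\n};
    foreach my $point ( @{ $slide->{node} || [] } )
    {
        print "<li>$point->{TEXT}</li>\n";
    }
    print "</ul>\n</div>\n";
}
print "</div>\n";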

So I went hunting on the plains of Google to see if anyone was experiencing the same problem I was, and if they did anything about it. And what do you know! I found a project on Google Code that does exactly that! The project is called (reasonably enough) mm2s5, and does a wonderful job at converting my Freemind mind-maps into S5 Presentation format!

Anyone who’s interested in finding a nice way to brainstorm and turn their ideas into presentations should seriously consider trying these tools out, they’re fantastic, and they’re free!

Work on CPAN-API and Perl Modules Indexing

Since the last TPM meeting in October, some of the TPM members have been working diligently to improve the CPAN search experience by re-architecting CPAN search from the bottom up. I’ve joined the design team in the hopes of providing the Perl community with a much-improved CPAN experience.

As most Perl developers are aware, search.cpan.org is great for finding useful libraries and modules, but horrible at providing any significant information which relates modules to each other, or providing useful meta-information or statistics which can be used to make better decisions on which modules to use, let alone deploy in a production environment.

If you are interested in taking part in the CPAN-API community project, please contact me, or visit the CPAN-API project site on GitHub.

CPAN-API: https://github.com/CPAN-API/cpan-api/wiki/
Toronto Perl Mongers: http://to.pm.org/

Jolicloud is of the Awesome

So if you haven’t heard of Jolicloud http://www.jolicloud.com/, then you need to download and install it now. It’s an Ubuntu-based OS (a self-proclaimed “Cloud OS”) specifically designed for netbooks, and it rocks. I have Jolicloud installed on my Samsung N110 netbook, and I use it for everything from e-mail to games (snes9x) to work (Perl/Vim/Screen). Now what makes Jolicloud super-awesome is that it treats web applications no differently from desktop applications. Each application gets its own icon on the “Home screen”. It’s also socially aware – it can connect to Facebook and allow you to search for applications and/or people who’ve used those applications, so that you can ask them questions and get guidance on the tools you’re trying to use.

The interface is very slick – big icons and a clean method of navigation to the lesser used functions of a standard Gnome/Ubuntu desktop. The most-awesomest part is that once you load up a terminal, you have full access to the command-line and all Ubuntu apt repositories.

Jolicloud isn’t just for netbooks! I’ve also installed it on my Acer Veriton (similar to the Acer Revo), and am using it as a media center OS. Jolicloud also comes in an “express” edition, which allows you to install it under Windows, where it will come up as a secondary OS option in the Windows boot-loader.

If you have a netbook, nettop, or any light-weight PC, then install Jolicloud. Highly recommended.

Diving in with Arch Linux

The Problem

The time had come for me to “invest” in getting some new equipment. The only workstation that I had up until recently was a company laptop which I had toted back and forth between VMware and my home office. I keep my personal documents on removable storage, but that doesn’t really help when you don’t have a workstation at home, so lugging the laptop around with me was a must.

Don’t get me wrong, I have systems, but they’re mostly systems running as file servers or VM servers doing various little things automagically, and they’re not sitting in or around my actual desk at home. Also, my printers/scanner at home relied on my laptop to be of any use. It was time to fix all of these unnecessary grievances.

The Dilemma

For the past couple of weeks I had been thinking hard about what kind of system I should buy – should it be a powerful, modern desktop system with lots of RAM and a screaming CPU/video card? Or should it be a powerful laptop/notebook which would serve as a desktop replacement? Should I go for the i3, i5, or i7 processor? ATI or Nvidia? What kind of budget was I looking at?

All of these questions plagued me for quite some time (okay, not that long.. I admit I’m a bit of an impulse buyer). I’ve spent long enough thinking about this that I realized a lot about myself. For one, I’m not a gamer. I was once one of those people who would have been ecstatic about getting next-gen hardware to play the latest power-hungry games. Not any more.. and not for quite some time. The last time I seriously played a PC game was about 3 years ago. When I say “seriously”, I mean played it regularly, at least once a week. The last game I played was one that I was really into: X2, from the X series of space combat simulators.

Since then, I’ve touched a game or two, on and off, but the fascination is no longer there. I’m more interested in hacking around with open source programs and becoming a better developer.

The Solution

Since I wasn’t going to focus on gaming and media for my new system purchase, this opened the door to a lot of possibilities that I hadn’t considered, and some unexpected disappointments. First off, since I wasn’t going to plop $1,000.00 on a single system, I could, theoretically, buy two lower-powered systems. And that’s exactly what I did. Instead of going with a full-fledged desktop or power-house laptop, I ended up buying an Acer Aspire Revo net-top unit as my primary workstation, and a Samsung N110 netbook as my portable. This Revo is awesome! It has 2GB of RAM (upgradable to 4), an Nvidia ION chipset, and an Intel Atom processor (dual-core). I didn’t need much more than this for my purposes; it was perfect. The Samsung N110 was also a nice little beauty. It had an Atom processor with integrated graphics, but was light, pretty, and had a 6-cell battery, which meant that it would last about 8 hours during heavy use. I quickly installed Jolicloud Express on the netbook, and have been very happy with it ever since.

The Disappointment (In myself)

The disappointment that I experienced was not in the purchase or the hardware, but in the fact that I hesitated for a long time to wipe away the Revo’s bundled OS to install Linux. The OS that the Revo came with was Windows 7 Home edition (the Samsung netbook had Windows XP). I haven’t used Windows as my primary OS in years, and have always been proud to say so. For the last four years or so, I’ve been using Ubuntu (severely customized), and before that I was using Debian. When I initially started up the Revo, I was impressed by the Windows 7 user interface, the nice colors, the clean lines, and the fact that it picked up all my hardware. It was pretty simple, and I have to admit somewhat alluring. I’m definitely not the little hacker I was 10 years ago. I don’t have time to spend hours hacking away into the wee morning just on my OS configuration. At least that’s what I keep telling myself :) But then it dawned on me – that’s how I got where I am today, by embracing curiosity, and defying conformity. That’s where life becomes interesting and liberating, and that’s where I feel at home. All these thoughts of nostalgia hit me shortly after I hard-reset the Revo, and Windows 7 came up saying “system wasn’t shut down correctly – use safe mode” or something to that effect. There was no way for me to tell it to disregard the unclean boot-up; it persisted in asking me to go into safe mode, with no specific explanation. That’s when I wished I had a GRUB prompt or command line handy.

Diving in with Arch Linux

After coming to my senses, I realized that I definitely didn’t want to go back to using Ubuntu for my primary workstation. For a while I’ve been feeling like Ubuntu has lost much of its luster, especially for someone like me who loves simplicity and minimalism over fancy GUIs and extra features. I wanted a distribution that tried to stay at the cutting edge with its packages, but didn’t screw with the basics of Linux so much that you’re forced to use GUIs to configure your OS. Debian didn’t fit the bill here – it’s great for servers – rock solid, but it’s not that great if you want a cutting-edge workstation without having to compile things from source.

After a little bit of reading and browsing distrowatch.com, I came across Arch Linux (which I’ve known of only in passing before), and decided that this was the OS for me. The Arch Linux community is small enough that I could make some significant contributions without much effort. The distribution itself is awesome, very clean, and very minimal. And most importantly, all of the system configurations are done by editing text files!

The Arch Way

Installing Arch was relatively straight-forward (IMO). It wasn’t as easy as installing, say, Linux Mint, but it also wasn’t as hard as installing Debian 3.0 either. The installation dialogs were ncurses based, but they were descriptive, linear, and logical. When it came time to supply arguments for the initial configuration of the packages I selected, they were all text files (very well documented) which I could edit with vim! I think at that point I knew that I was about to embrace a distribution that was very special indeed. This distro was going back to basics, and not flooding its users with fancy splash screens and progress meters; it was doing the needful, and it was doing it well.

I still have a lot more to learn about Arch, as I’ve only scratched the surface so far. I’ve been able to set up sound (with ALSA) and video using the latest Nvidia drivers. I’ve configured Xmonad as my window manager, and have gotten a handle on how to query and install packages with “pacman”, the Arch package manager. The only real problem that I’ve run into is setting up CUPS for my printers. After some research, it seems that the version of CUPS (1.4.3-2) available in the Arch packages is the latest version available from the CUPS source repository, and that I may have to downgrade it (to 1.3.9) in order to get my printers working.
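
For anyone else starting out with Arch, day-to-day package management really does boil down to a handful of pacman commands (standard flags, nothing unusual assumed here):

pacman -Ss cups       # search the repositories for a package
pacman -Si cups       # show details about a repository package
pacman -S cups        # install a package (run as root)
pacman -Syu           # sync the package database and upgrade the system
pacman -Qi cups       # query details about an installed package
pacman -R cups        # remove an installed package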

Overall, I like what I see so far with Arch. I expect to post more on my experiences with it as I learn.

Synchronizing Xymon’s ‘bb-hosts’ Configurations

I’ve been using Xymon (formerly known as “Hobbit”) for a long time.  In most situations, I have Xymon running in a redundant configuration, with two or more instances of Xymon working together to monitor a network.

Even though Xymon works very well, a single change to the primary server’s configuration file (the “bb-hosts” file) means that you have to make the same change to all other ‘bb-hosts’ files in all other Xymon instances.

There are some creative ways to eliminate the drudgery of updating all these files any time a change to the primary file is necessary.  One method, for example, would be to have the master file exported via NFS to all the other Xymon server instances, and each of those instances would sym-link to that primary ‘bb-hosts’ file from their local mount of that NFS export.

I don’t like the NFS export idea, because if the primary server has a problem, and the NFS export is no longer available, all instances of Xymon would break – badly.

Instead, I’ve opted for automatically synchronizing the ‘bb-hosts’ file across all Xymon instances via the use of apache, cron, a sym-link, and a simple bash script.

Here’s  how it works:

  • On the primary Xymon instance, sym-link ‘/home/xymon/server/etc/bb-hosts’ to ‘/var/www/bb-hosts’.
  • On the other instances of Xymon, run a bash script which grabs the primary server’s ‘bb-hosts’ via HTTP, does some simple comparisons, and over-writes the local Xymon ‘bb-hosts’ if changes are detected.
  • Automate this script with cron (a sample crontab entry follows the script below).

Perhaps the trickiest part of doing this is the actual script used to grab, compare, and over-write the ‘bb-hosts’ file for the other instances of Xymon.  The script I’ve written below grabs the primary ‘bb-hosts’ file, and does a simple MD5 comparison with md5sum, and if it detects a change in the ‘bb-hosts’ file, it will send an e-mail to notify me that this change has occurred, along with details on what has changed.

Here’s the script:

#!/bin/bash

REMOTE_BB_HOSTS="/tmp/bb-hosts"
LOCAL_BB_HOSTS="/home/xymon/server/etc/bb-hosts"
BB_HOSTS_DIFFS="/tmp/bb-hosts-diffs"

wget http://somewebhost.domain.com/bb-hosts -qO "$REMOTE_BB_HOSTS"

LOCAL_MD5=`md5sum $LOCAL_BB_HOSTS  | cut -d " " -f 1`
REMOTE_MD5=`md5sum  $REMOTE_BB_HOSTS  | cut -d " " -f 1`

#echo "$LOCAL_MD5"
#echo "$REMOTE_MD5"

if [ "$LOCAL_MD5" != "$REMOTE_MD5" ]; then
        echo "Generated by $0" > $BB_HOSTS_DIFFS;
        diff $LOCAL_BB_HOSTS $REMOTE_BB_HOSTS >> $BB_HOSTS_DIFFS;
        cp $REMOTE_BB_HOSTS $LOCAL_BB_HOSTS;
        mail -s "Xymon: monitor-02 bb-hosts updated" alertme@email.com < $BB_HOSTS_DIFFS;
fi
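
And to automate it, a crontab entry along these lines on each secondary Xymon server does the trick (the script path here is just an example; put it wherever you keep your local admin scripts):

# sync the bb-hosts file from the primary Xymon server every 5 minutes
*/5 * * * * /home/xymon/server/ext/sync-bb-hosts.sh > /dev/null 2>&1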

If you need a way to keep your Xymon 'bb-hosts' files in sync, something along the lines of the above script just may be what you're looking for. If you're currently accomplishing the same thing in an interesting way, please post a comment and let me know!

Using DZEN with Xmonad to view Currently Active Network Shares

Currently Xmonad is my window manager of choice, because it’s clean, functional, and removes all the unnecessary crap that most modern desktops usually come with by default.

Although Xmonad is very cool, there are still some things that it’s lacking as far as functionality goes. Much of this is made up for by the use of Xmobar, Trayer, and other Xmonad compatible plugins and applications. I recently came across another one of these applications, and it turned out to be an exciting find. The tool is called Dzen.

Dzen is a desktop messaging tool which allows you to easily write some useful scripts, and have the output of those scripts become part of your desktop interface. Many examples of how this works are available on the Dzen website; some of them include:

  • CPU Monitoring graphs
  • dmesg log monitoring
  • Notification of system events which are commonly found in syslog
  • E-mail or twitter alerts shown on your desktop as they come in
  • Custom calendar alerts
  • and much more..

Now this idea is not new – I remember there being a project called “OSD” (on-screen display) which essentially allows you to do the same thing. However, I think OSD was meant as more of a single-message notification system, rather than the way that Dzen works, with master and slave windows, and the ability to implement menus, etc.

In any case, I decided to give Dzen a try, and am happy with the tool that I’ve been able to whip up. For the longest while, I wanted the ability for my xmonad environment to tell me, at a quick glance, what network mounts and removable devices I currently have mounted. I’m sure that this kind of information is easily available on many bloated desktops, including GNOME and KDE, but I was looking for something simple, small and configurable. Didn’t find it, so I ended up writing my own – with the help of Dzen.

Here are a couple of screenshots of how it looks:

Dzen “Active Mounts” widget (mouse out):
dzen-1

 

Dzen “Active Mounts” widget (mouse over):
dzen-2

 

I wrote the scripts fairly quickly, so I’m sure they could be written better, but I think they will provide those of you who are interested with a good example of how to implement a regularly updated notification widget with Dzen.

The scripts are written to check for changes in the mount list, and only update Dzen when a change is detected. The tool consists of two components:

1) A Perl script which captures the mount information in the exact format that I want, and
2) a Bash script which handles loading Dzen.

Here’s the source code (perl script):

#!/usr/bin/perl

# Written by J. Bobby Lopez  - 27 Jan 2010
# Script to -be loaded- by the 'dzen-mounts.bash' script
# This script can also be run by itself, if you want to dump a
# custom plain-text table of your network shares or removable
# devices.
#
# This script is meant to be utilized by the Dzen notification system
# Information on Dzen can be found at http://dzen.geekmode.org/

use strict;
use warnings;

use Data::Dumper;
use Text::Table;

my @types = qw( cifs ntfs davfs sshfs smbfs vfat );

sub getmounts
{
    my @valid_mounts; # to hold mounts we want
    my @all_mounts = split (/\n/, `mount`);
    foreach my $mount (@all_mounts)
    {
        foreach my $type (@types)
        {
            if ( $mount =~ m/$type/ )
            {
                push (@valid_mounts, $mount);
            }
        }
    }
    return @valid_mounts;
}

sub getsizes
{
    my @mounts = getmounts();
    my @list;
    foreach my $mount (@mounts)
    {
        my @cols = split (/\ /, $mount);
        my @df_out = split (/\n/, `df -h $cols[2]`);
        $df_out[1] .= $df_out[2] if defined($df_out[2]);
        $df_out[1] =~ s/[[:space:]]+/\ /;
        my @df_cols = split (/[[:space:]]+/, $df_out[1]);
        push (@list, ([@df_cols]));
    }
    return @list;
}

my $tb = Text::Table->new(
	"Filesystem", "Size", "Used", "Avail", "Use%", "Mounted on"
);
$tb->load(getsizes());
print "Active Mounts\n";
print $tb;

And the bash script:

#!/bin/bash

# Script to load Dzen with output from 'dzen-mounts.pl' script
# Written by J. Bobby Lopez  - 27 Jan 2010
#
# This script utilizes the Dzen notification system
# Information on Dzen can be found at http://dzen.geekmode.org/

function mountlines
{
        LINES=`perl dzen-mounts.pl|wc -l`;
        echo "$LINES"
}

function freshmounts
{
        OUTPUT=`perl dzen-mounts.pl`;
        echo "$OUTPUT"
}

function rundzen
{
        OUTPUT=`freshmounts`;
        MOUNTLINES=`mountlines`;
        echo "$OUTPUT" | dzen2 -p -l "$MOUNTLINES" -u -x 500 -y 0 -w 600 -h 12 -tw 120 -ta l &
        PID=`pgrep -f "dzen2 -p -l $MOUNTLINES -u -x 500 -y 0 -w 600 -h 12 -tw 120 -ta l"`;
        echo "$PID"
}

function killdzen
{
        PID="$1"
        if [ ! "$PID" ]; then
            MOUNTLINES=`mountlines`;
            PID=`pgrep -f "dzen2 -p -l $MOUNTLINES -u -x 500 -y 0 -w 600 -h 12 -tw 120 -ta l"`;
        fi

        if [ "$PID" ]; then
            #echo "Killing $PID..";  # DEBUG STATEMENT
            kill "$PID";
        fi;
}

function checkchanges
{
    while true; do
        NEW=`freshmounts`;
        #echo "$NEW - new";  # DEBUG STATEMENT
        if [ "$OLD" != "$NEW" ]; then
            killdzen "$PID";
            rundzen;
            #echo "$PID started";  # DEBUG STATEMENT
            OLD="$NEW";
            #echo "$OLD - old updated"  # DEBUG STATEMENT
        fi
        sleep 1;
    done
}

checkchanges

You can also download the scripts in a tgz archive here. Enjoy!

Xmonad: For Hardcore Desktop User Interface Efficiency

Long-time Linux/Unix hackers know of the plethora of window managers and user interfaces that have been and currently are available for Linux and BSD operating systems.  I’ve had great times in the past trying out different window managers such as Enlightenment, Sawfish, Blackbox, IceWM, xfwm, KDE, Gnome, and others.  These days the two most popular, which are shipped with the more popular distributions (Fedora, Ubuntu), are KDE and Gnome.

However, I remember back in the day when I was using Enlightenment, or Ratpoison, doing strange and cool things (at the time) like applying transparencies to your windows and modifying the window borders to be anything but normal and square.

I used to share screenshots of my desktop with others who were also into “desktop eyecandy”, where I’d have floating or docked Window Maker panels, and monitoring applets anchored to the desktop as if they were part of the background wallpaper.. and this was around 1999.  Those were fun times.

One of the more interesting things that I was into at the time was increasing the efficiency and usability of my desktop by trying to reduce the need to reach for my mouse.  I was very accustomed to this already, being a user of vi and the GNU Screen terminal multiplexer, but the window managers never seemed to attain the same level of “hacker cool”.  That is, of course, until I came across Ratpoison. Ratpoison was exactly what the name implied: a window manager that killed your dependency on the mouse (or rat).  It was awesome, but it wasn’t scalable and didn’t evolve much to keep up with modern technological advancements and requirements such as multi-monitor support.

I thought that those days were long lost, until I recently had the urge to streamline my desktop environment.  I now have a 28″ monitor, and was certain there was a better way to interact with the desktop than the standard Ubuntu/Gnome environment.  So I went looking.  I started looking of course at things I was already familiar with – I looked up Ratpoison to see if there were any major improvements over the years.

I took a look at Ratpoison again, but it was showing its age.  I looked at its successor, StumpWM, but I didn’t feel the love.  Then I tried out Xmonad, created by Spencer Janssen, Don Stewart, and Jason Creighton – and written in Haskell.  I immediately fell in love.

If you haven’t used GNU Screen, Gnome Multi-Terminal, Ratpoison, or any minimalist window manager before, then it will be hard to explain why Xmonad is worth your time.  Instead, visit the Xmonad website here: http://www.xmonad.org/

Here are some suggestions on how to get Xmonad working on Ubuntu 8.10:

Install Xmonad:

apt-get install xmonad

We’re going to create another X window session, so that we don’t mess with your existing one. That way, if you don’t like Xmonad, you can go back to using your existing window manager without worrying about breaking your configuration.

Set up your second X window session. Press “ctrl + alt + f2” – this will take you to the command-line terminal where you will start your second X session. Start the session using following command:

xinit -- :1 vt12

This will start up another X session which will sit at virtual terminal 12 – meaning that you have to press 'ctrl + alt + F12' to get to it.

Once at your new X session, you should see nothing more than a plain old xterm window. Type 'xmonad', and the terminal window should now be maximized. Xmonad is now running.

Type ‘man xmonad’ to view the help documentation on how to use it.  It’s pretty straight forward, and a joy to use!

Recession, War, Politics, Poverty…. Software Development?

The way things are these days, you’d think that I, like I would imagine many other people in the world, would be thinking about money, the recession, the potential for war between countries who have been flirting with the bomb, my mother and the sale of her house, poverty in Africa, and the general suckage (is that a word?) in the world.

But no, I’m not thinking about those things.  What’s on my mind most of the time is software development and programming.  I’m constantly thinking about what I’m good at, what I suck at, and what I need to do to get better.  Is that selfish?  Let me answer that – yes, it is very selfish, but I don’t necessarily believe that selfishness is always a bad thing (part of me can relate to Ayn Rand’s philosophy of rational self-interest).

The question though is not “is this selfish?” Rather, the question I’m putting out there is “is this normal?” There are enough things going on right now in my life, dealing with situations and people that I find simply unreasonable, that I’m finding it hard to identify what is “reasonable” any more, because what I see as unreasonable seems to be the norm for the majority.

So is it wrong to think about my career and personal development during times of stress? I feel it to be instinctive to focus on your strengths during times of uncertainty, but what do others out there think? Do you feel that in times of stress, you should cut away from what you’re used to and try something new, or go on vacation? Or do you believe that it’s the perfect time to share with others, give back to your community or family and try to increase your karma (if you believe in such things)? These courses of action are not mutually exclusive, but it helps to identify what needs focus if they’re not jumbled together.

If this post seems a little incoherent, it’s 1am, and my eye-lids have been drooping constantly since I started typing.
Have a good night all :)

Converting Freemind Mind-maps Directly to Perl Hash Trees

freemind

I use Freemind quite a bit for brainstorming and as an outliner.  One of its better uses for me is to hammer out an idea for a Perl hash tree very quickly.  The problem is that once I have the hash tree exactly the way I want it in Freemind, I have to manually re-create the hash tree in Perl source, with all the required formatting.

This is no longer the case, as I’ve written a quick and dirty “freemind2perl” script (below) which takes a Freemind mind-map file, and converts it into a Perl hash tree automagically.  I’m not sure if it will work with all versions of Freemind, but mind-map files (.mm files) are XML based, and the format really hasn’t changed across versions.

Just save the script below as ‘freemind2perl.pl’ and run it with ‘perl freemind2perl.pl yourmap.mm’.  It requires the “XML::Simple” Perl module to be installed.

Here’s the script (click here to download):

#!/usr/bin/perl -w

use strict;

use XML::Simple;
use Data::Dumper;

my $xml = new XML::Simple;
my $mm_file = shift;
my $data = $xml->XMLin("$mm_file");
my $clean;

sub prep_clean
{
    my $data = shift;
    my $clean;

    foreach my $key ( keys %{ $data } )
    {
        if ( $key eq "TEXT" )
        {
            $clean->{$data->{$key}} = 1;
        }

        if ( $key eq "node" )
        {
            if ( ref( $data->{$key} ) eq "HASH" )
            {
                $clean->{$data->{'TEXT'}} = prep_clean(\%{$data->{$key}});
            }

            if ( ref( $data->{$key} ) eq "ARRAY" )
            {
                my $sub_hashes = {};
                for ( my $i = 0; $i <= $#{$data->{$key}}; $i++)
                {
                    foreach my $sub_hash ( \%{ $data->{$key}[$i] } )
                    {
                        my $subout = prep_clean( $sub_hash );
                        $sub_hashes = { %$sub_hashes, %$subout };
                    }
                }
                $clean->{$data->{'TEXT'}} = $sub_hashes;
            }
        }
    }
    return $clean;
}

$clean = prep_clean( \%{ $data->{'node'} } );

print Dumper(\$clean);

exit;

VMware vSphere 4 Announced!

Working at VMware, I (virtually) had a front-row seat to the VMware vSphere simulcast on April 21.  It was an exciting event – everyone was anxious to hear what our industry partners (Cisco, Intel, Dell, etc) had to say about the new product.  The overall excitement and energy shown by these companies was impressive.

I think what I liked most was Steve Herrod’s “Blackberry Demo”, which showed how resilient the platform was even to extreme hardware failure.  I don’t think many people truly understand what this technology means for disaster recovery and disaster avoidance – it essentially eliminates the risk.  I know it’s a big claim, but if your company does its due diligence, has an appropriate and active back-up strategy for all critical systems, and has proper 2×2 redundancy of systems in place to make sure that there are no single points of failure, you can essentially have 99.999% uptime at a fraction of the cost of doing all of this on physical systems. Small businesses can now experience the stability of software and services which were previously enjoyed only by large corporations which could afford it. And these same small businesses now have a new arsenal of tools which can help them compete against their larger, more established counterparts. It is an exciting time in the industry.

If anyone has an interest in virtualization, but doesn’t know where to start, the best thing to do would be to download a copy of VMware Server, or VMware ESXi. Both are free to download and use, and include the latest features and capabilities built into the enterprise (ESX/vSphere) hypervisors.

Double Shot of Tequila

I woke up early this morning with a mission on my mind: to finally organize my server rack the way I’ve always been meaning to, but for some reason (*cough*laziness*cough*), never got around to it.  I had recently bought some new hardware to re-build a system which I thought was dead, but which turned out not to be.  I didn’t really feel like returning the hardware, because this was the chance to build an up-to-date server to migrate all my VMs over to, which is something else I’ve been meaning to do for quite some time.

In any case, I finally got around to re-organizing my server rack today, and I’m proud of how it turned out.  With that accomplishment in hand, I decided to install our living room air conditioner (it’s starting to get a tad warm, especially for computer systems). I headed out to Home Depot and purchased some wire mesh, or “screen” as one of their reps called it. Last year we found that we had a lot of mosquitoes and small flies coming in through the air conditioner. Considering it was a fairly inexpensive one, I figured that I got what I paid for. I decided to turn my $100 air conditioner into a $300 air conditioner by adding on some custom filters in order to block any debris which it may collect through its many open vents. The roll of mesh cost around $15, and was easy enough to cut and shape. The end result turned out better than I had expected, and so this year I expect we will have a lot fewer bugs getting in.

And so the air conditioner was installed – this too had been completed.  I was on a roll and feeling good.  I decided then to try my hand at building my new server from scratch.

I had an old rack-mount server case ((solid steel, heavy beast)) which I gutted, and started building the new server in there.  The new components included a new motherboard – the Asus M3N78-VM, an AMD Athlon 1640 CPU, and 4GB of OCZ Dual Channel SLI Ready RAM. The Micro-ATX form factor of the motherboard made it super easy to fit into the monster rack-mount case. With a few simple connections, I was ready to test boot-up, and things should have been smooth from there. It wasn’t.

The system wouldn’t power on – at all. My first mistake was that I plugged the front panel connectors into the wrong pins on the motherboard. No sweat, figured that out, and moved forward. Switched it on again, saw the motherboard’s “SB Power” LED come on (which was a good sign), fans started spinning, thought I was getting close, but nothing. I couldn’t get it to POST anything – no errors, warnings, or beeps at all. I decided to rip out all the peripherals and go bare-bones in order to isolate the problem. Still nothing!! Removed RAM, nothing.. Removed the CPU, nothing. So at this point, aside from being frustrated, I’ve been able to narrow it down to one of two things: it’s either the motherboard, or the power supply. The power supply should be fine, because it worked with the old hardware that I had in the case originally. However, there is a chance that the power supply isn’t compatible with this motherboard in some way.

If it’s not the power supply, then I’ve received a motherboard that was DOA. I’m hoping this is the case! I’d hate to take this thing back to Tiger Direct tomorrow, have them test it out, and find out that it’s just fine. That would be both embarrassing and frustrating.

Anyway, after all these triumphs and frustrations, I decided to finish off the night with a double shot of Tequila, and damn did it go down smooth :)

If this blog post seems at all incoherent, it probably has to do with the fact that it’s late, and I’m tired.  Oh, and maybe just a little to do with that double shot of Tequila.

I’m not crazy

At work these days, because I’m the only developer on my “team”, I’ve been in the situation where I’m extending (which includes extensive, and oftentimes ridiculous, rounds of debugging) other people’s code.  Many of the projects I’ve inherited weren’t written to be maintained by anyone other than the original developers.  I’ve long ago come to accept that most programmers are not passionate about simplicity and elegance, and therefore write endless reams of code that over-complicate simple problems.

Now at VMware, I do work around some severely intelligent people, but unfortunately they are not developers, so I don’t work with them.  Because of this I oftentimes rant to them about the ridiculousness of a given situation; and they’re smart, so they understand the problem technically, but because they aren’t working with me, it would be hard for them to empathize with my frustrations.

I love reading Paul Graham‘s essays every once in a while, because he seems to be able to understand and articulate my frustrations so well.  One in particular that I’ve been re-reading is Great Hackers, which always makes me breathe a sigh of relief because he reminds me that I’m not crazy.

If you are a manager and have to manage a group of experienced programmers, I urge you to read that essay.  You just may prevent one of your developers from committing heinous acts of insanity.

WebPIM: A Custom, Web-based, Personal Information Manager

I’ve always wanted a web-based application to help me manage all my stuff. “WebPIM” (as I’ve nick-named it for now) is currently one of the main personal projects that I have been working on.  I started this project back in 2003 as a simple web-based file manager, and have been slowly hacking away at it in my spare time ever since. “WebPIM” can act as a central reference point for all personal or project information. The way I’ve implemented my custom PIM is purely based on the way I work, so it may not be to everyone’s liking. However, I think it could really help individuals who need a way to organize tasks, projects, documents, and general files in a free-form, yet hierarchical and accessible way. Much of the thinking behind the way WebPIM is being developed relates to GTD ((Getting Things Done – David Allen)), and how to get “stuff” off your mind, and into a system.

Here’s the general idea – you have a lot of “stuff” – stuff that’s just sitting around on scraps of paper, on your hard drive, in your e-mail, and every other place you can’t seem to remember. This may be un-important stuff, or it may be severely important stuff – but none of it is organized into any kind of easily reference-able and “trusted system” ((GTD terminology)).

You have several options; the first of which is to do nothing. Unfortunately, ignoring the problem and hoping it will go away won’t solve the problem. Lets assume you want to change your situation, and we’ll use my experiences as a baseline for discussion.

I have tried many personal information managers over the years, and all of them have been incomplete in one way or another. Also, with the new wave of hosted applications like Google’s GMail, Calendar, and Google Docs, I am becoming more and more uncomfortable storing all my stuff on a remote, corporate server over which I have no control ((This has become more and more of a concern for me, having accounts on Google, Facebook, and others. Maybe I’m just paranoid.)).

My solution to this dilemma has been to write my own PIM, and so far, I’ve been happy with the results.

The way WebPIM currently works is by operating as a front-end to a Linux-based file-system. From WebPIM, I can create directories, create text files, upload files from my local hard drive, and move files around from one directory structure to another. This is the simple stuff that I think any web-based file manager should be capable of. More than this, however, WebPIM provides the following features (a tiny code sketch of the basic idea follows the list):

  • Move multiple files from one directory to another (batch move)
  • Text-dialog editing of all files (you can edit HTML and XML files in the interface)
  • Full path display when traversing directories, which allows you to go directly to any directory within your current absolute path via a hyper-link
  • Web-download functionality allowing you to download a copy of your favourite web page or web-accessible file into your current directory.
  • Recursive web-download, so that you can download an entire website for later reference (implemented using HTTrack ((www.httrack.com/)) in the back-end).
  • Project short-cuts, so that you can create short-cut groups to access multiple directory structures on the same interface. This allows you to access general reference information, along with specific project information all within the same interface, and without disrupting your overall PIM hierarchy.
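
To make the “front-end to the file-system” idea concrete, here is a deliberately tiny sketch of the core concept. This is not WebPIM’s actual code, just a toy CGI listing under an assumed data root; the real application layers uploads, editing, short-cuts, and the rest on top of the same basic idea.

#!/usr/bin/perl
use strict;
use warnings;
use CGI qw(:standard);

# Toy illustration only (not WebPIM itself): list the contents of a
# directory under a fixed data root, linking into sub-directories.
my $root = '/home/jbl/webpim-data';    # hypothetical data root
my $dir  = param('dir') || '';
$dir =~ s/\.\.//g;                     # crude path-traversal guard for the demo

sub esc { my $s = shift; $s =~ s/&/&amp;/g; $s =~ s/</&lt;/g; $s =~ s/>/&gt;/g; return $s; }

print header('text/html');
print '<h1>Listing: /', esc($dir), "</h1>\n<ul>\n";
opendir my $dh, "$root/$dir" or die "cannot open $root/$dir: $!";
foreach my $entry ( sort grep { !/^\./ } readdir $dh )
{
    if ( -d "$root/$dir/$entry" )
    {
        print '<li><a href="?dir=', esc("$dir/$entry"), '">', esc($entry), "/</a></li>\n";
    }
    else
    {
        print '<li>', esc($entry), "</li>\n";
    }
}
closedir $dh;
print "</ul>\n";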

I think the idea can be better explained with a screenshot of the main interface:

WebPIM Interface
– WebPIM Interface (Click on the image for a larger view) –

 

Obviously there is still a lot of polish required before this becomes useful to the general public, but I really do believe there is a market for it.  If anyone is interested in trying this out, leave a comment and let me know.  I can probably set up a demo, or provide the source code as-is so that you can give it a shot on your own system.

Back to Basics – Very Simple Log Monitoring with Perl

multitail_apache

There are many many tools out there which allow you to monitor and view your system or networking logs in several different ways.  Sometimes though, you may find yourself looking for a specific feature that none of these tools currently provide.  Whenever your goals are very specific, and you don’t want to use a big feature-full program to accomplish a simple task, you may want to consider writing your own tool.

Below is a simple Perl program I wrote which does just that. All the requirements of the program are within the script itself (using the __DATA__ handle at the bottom of the file). The only thing you may need to install on your system to get this to work is the File::Tail CPAN package.

#!/usr/bin/perl -w

use strict;
use File::Tail;

my @patterns = <DATA>;
my $file = File::Tail->new ("/var/log/syslog");
while ( defined(my $line=$file->read) )
{
    my $match = &filter($line);
    if ( $match eq "no" )
    {
        print $line;
    }
}

sub filter ()
{
    my $line = $_[0];
    my $match = "no";
    foreach my $test (@patterns)
    {
        chomp($test);
        if ( $line =~ m/$test/ )
        {
            $match = "yes";
        }
    }
    return $match;
}

__DATA__
PROTO=UDP SPT=67 DPT=68
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.111.*PROTO=TCP.*DPT=80
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.102.*PROTO=TCP.*DPT=80
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.111.*PROTO=TCP.*DPT=443
ACCEPT IN=br0 OUT=vlan1 src=192.168.0.102.*PROTO=TCP.*DPT=443
ACCEPT IN=vlan1 OUT=br0.*DST=192.168.0.101.*PROTO=UDP.*DPT=1755
ACCEPT IN=vlan1 OUT=br0.*DST=192.168.0.101.*PROTO=TCP.*DPT=1755
ACCEPT IN=br0 OUT=vlan1 src=10.100.0.1.*PROTO=UDP.*DPT=53
JBLLNXWKS dhclient

__END__

This program will monitor the end of the file (like the Unix ‘tail’ command) and check for new log entries. When it detects new lines in the log, it will filter those lines against the patterns defined at the end of the script (under __DATA__), and display anything that shows up in the logs except lines matching those filter patterns.

You’ll probably notice that the filter lines are regular expressions, which makes this script more powerful than doing filtering by simple full-string comparison.

Aside from simply printing the output to STDOUT, you could use regular expressions to pop pieces of each line into an array or hash, in order to do calculations, such as how many entries had a source IP of X, or destination port of Y, etc.
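
For example, swapping the body of the main loop above for something like the following would keep a running tally per source IP (a sketch only, assuming netfilter-style 'SRC=x.x.x.x' fields in your log lines; adjust the capture to match your own format):

my %src_count;
while ( defined(my $line = $file->read) )
{
    # count only lines that carry a source IP field
    next unless $line =~ m/SRC=(\d+\.\d+\.\d+\.\d+)/;
    $src_count{$1}++;
    print "$1 has appeared $src_count{$1} time(s) so far\n";
}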

It’s definitely a good thing to keep in mind that whatever software you could possibly need is already out there on the internet, and possibly open source. However, it’s also good to keep in mind that YOU can create a tool yourself to accomplish your specific task; all it takes is a little self-confidence, effort, and patience.

Configuring X.Org Display Resolutions Under Ubuntu 8.04

Problems with High Resolution Monitors and X.Org

I recently bought a 28″ LCD monitor, the I-INC iF281D. I bought the monitor to increase my usable desktop workspace on my Ubuntu 8.04 (Hardy) Linux workstation. I do a lot of programming, systems administration, log analysis and monitoring – and I usually have three or more X terminals open at any one time (with some mixture of “top”, “vim”, “tail”, and “screen” open), so the more desktop space I have, the better.

When I first attempted to configure my new monitor using the ‘nvidia-settings’ GUI application that comes with NVIDIA’s Linux driver, I noticed that it didn’t present all the resolutions that the monitor supported.  NVIDIA’s settings GUI was limiting the available resolutions to a maximum of 1280×800. This was odd, because the monitor supported resolutions up to 1920×1200, and supported DDC, so the GUI should have been able to pick up the monitor specifications and provide configuration options for all supported features (resolutions, refresh rates, etc.).

At first I thought that the problem was with the NVIDIA driver I was using, so I downloaded the latest driver from NVIDIA’s website, recompiled, and tried again.  However, I was still not able to get the resolutions that the monitor supported.


My 28 Inch Desktop @ 1920x1200

Solving The Problem

After much more digging and research, I found that the problem was not with the monitor, or the NVIDIA driver, but with the mode settings in ‘xorg.conf’. What I wasn’t aware of was that the Horizontal Sync rate defined within my ‘xorg.conf’ directly determines which resolutions will be available to me.

For example, the ‘nvidia-settings’ application set the Horizontal Sync and Vertical Refresh to low (safe) values by default, so the available resolutions were limited. Once I figured out what my monitor’s maximum Horizontal Sync and Vertical Refresh were, I was able to achieve the resolutions that it was capable of.

There are XFree86 Video Timings HOWTOs available if you want to get into the gory details of how to calculate the correct xorg.conf settings for your specific monitor. However, if you’re a programmer like me, then you’ll want to skip this step, expecting that someone else out there must have already been through this, and has likely created a tool to make our lives easier.

Lo and behold! Xtiming is a great web tool which helps you calculate your Horizontal Sync and Vertical Refresh Rate settings. Simply enter the resolution that you are trying to achieve, and Xtiming will tell you the settings you’ll need in your ‘xorg.conf’ file to get it.

For example, if you leave empty all the other values that Xtiming asks you for, and simply enter “1600×1200” for Visible Resolution, and “60” for Refresh Rate, then click “Calculate Modeline”; you’ll see that Xtiming returns the Mode Line that you should use, along with (and most importantly!) the Horizontal Sync rate you will need to achieve that resolution at the specified refresh rate:

Modeline "1600x1200@60" 176.70 1600 1632 2296 2328 1200 1224 1236 1261
Horizontal sync frequency: 75.9 kHz

I personally found that I didn’t need to use the “Modeline”, but the Horizontal Sync Frequency was essential. Here’s an excerpt of my ‘xorg.conf’ file using the above settings:

(...)

Section "Monitor"

    # HorizSync source: xconfig, VertRefresh source: xconfig
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "AUO"
    HorizSync       30.0 - 75.9
    VertRefresh     60.0
    Option         "DPMS"
EndSection

(...)

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinViewXineramaInfoOrder" "DFP-0"
    Option         "TwinView" "1"
    Option         "metamodes" "DFP-0: 1440x900 +1600+0, DFP-1: nvidia-auto-select +0+0"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

I’m using TwinView because I have a dual monitor setup. However, now that I have a 28″ display, dual displays don’t really seem to be a requirement any more :)

Dev Tools: Recent Updates to Bazaar and Loggerhead

I’ve been using the Bazaar Version Control System, along with its Loggerhead repository viewer, for only a short while, but I am very happy with how my development environments are coming together.  If you are not happy with your current version control system, then these tools are definitely worth a look.

Bazaar

For those who are using the Bazaar Version Control System (a.k.a BZR), version 1.11 was released on January 19th, which fixes a number of bugs, adds performance enhancements, and tightens up a few GUI integration features for Windows users.

Loggerhead

In step with the Bazaar project, Loggerhead, the web based BZR repository viewer has been updated to version 1.10 which includes GUI updates and improvements to Loggerhead’s repository browsing and caching features.

Download Bazaar and Loggerhead using APT

Both Loggerhead and Bazaar can be downloaded via Debian style APT repositories using these directions. Make certain to use the correct repositories for your version of Ubuntu, and add the appropriate APT repository key to your APT key ring.

For example, if you are using “hardy” (Ubuntu 8.04), you would add the following repositories to your /etc/apt/sources.list:

deb http://ppa.launchpad.net/bzr/ppa/ubuntu hardy main
deb-src http://ppa.launchpad.net/bzr/ppa/ubuntu hardy main

Then, you would add the repository public key to your key-ring, like so:

gpg --no-default-keyring --keyring /tmp/bzr.keyring --keyserver keyserver.ubuntu.com --recv   ECE2800BACF028B31EE3657CD702BF6B8C6C1EFD
gpg --no-default-keyring --keyring /tmp/bzr.keyring --export --armor  ECE2800BACF028B31EE3657CD702BF6B8C6C1EFD | sudo apt-key add -
rm /tmp/bzr.keyring

Then just update your local package listing, and install BZR and Loggerhead as follows:

sudo apt-get update
sudo apt-get install bzr loggerhead

And voila, you’re good to go!

High Praise for Cogeco Cable

Since moving out to Burlington, I’ve had to migrate my internet connectivity services from Rogers Cable in Toronto, to Cogeco.  I’ve been using Cogeco for a little more than a year, and I have so far been impressed with their services.  The connection is stable (except when there is construction going on in the neighbourhood), the bandwidth is high and consistent, and I rarely have my dynamic IP address change on me.. rarely.

Sometimes, though, it does happen, and it is annoying.  I finally decided that I needed a static IP address.  Conveniently enough, Cogeco offers a “Reserved IP” SOHO package at $99/month, or $85/month with a 12 month contract.  So I decided to go for it, and now I have a connection with 16Mbit download, 1Mbit upload, and 3 IP addresses (1 reserved, which essentially means “static”), and it all took me literally 5 minutes to set up, with an extra 20-30 minute wait for everything to take effect.  So far, Cogeco is making me very happy :)

Thanks go to Adam Bray for giving me the heads up on this package. I think it’s definitely worth it, especially if you’re a high-bandwidth junkie.