Category Archives: Knowledge & Reference

Category dedicated to knowledge and reference materials spanning all topics, including books, essays, and news articles. Reviews, comments, or opinion pieces may also be included.

Tiny linux disk usage reporting script – dux.bash

I decided to write a very small disk usage reporting script that provides more information than using 'du' directly.  The script of course uses du, among other command-line tools and parsing commands, to generate the little report.  It suits my needs for the moment. Hopefully others out there will also find it useful.

Firstly, here is the script output to give you an idea of how to execute it, what it does, and how it looks:

$ ./dux.bash /home/jbl /home/jbl
Building extended du reports for /home/jbl in /home/jbl ...
[du extended report]:
13G     ./Personal
4.7G    ./DATA
2.4G    ./Pictures
1.4G    ./Downloads
Total (GB): 21.5
 
373M    ./core
260M    ./tmp
73M     ./game-saves
37M     ./new-music
33M     ./new-books
32M     ./vim-env
24M     ./random-tools
15M     ./stuff
Total (MB): 847

The script takes two arguments: the directory you want to analyze, and the directory where you want to store the reports.

As you will see, the script provides me an at-a-glance look at the following characteristics of a particular directory structure:

  • Grouping and separation of larger (GB sized) directories from smaller (MB sized) directories.
  • Directories sorted by size in each group.
  • Total sizes for each group of files and directories.

With this output I can clearly see which directories are consuming the most space, and how much of an impact they have compared to other directories.

The script is very crude, and probably needs some work and error correction (accounting for files directly off of root, etc.).  It also creates some temporary text files (used to construct the report), which is the reason for the second argument to the script.  However, for now it's doing what I need, so I figure it's worth sharing.  Here it is:

#!/bin/bash
# Usage: dux.bash <directory-to-analyze> <directory-for-report-files>
echo "Building extended du reports for $1 in $2 ...";
cd "$1" || exit 1
du -sh "$1"/* > "$2"/du-output.txt
# Split the du output into MB-sized and GB-sized entries
egrep '^[0-9.]+M' "$2"/du-output.txt > "$2"/du-output-MB.txt
egrep '^[0-9.]+G' "$2"/du-output.txt > "$2"/du-output-GB.txt
sort -hr "$2"/du-output-MB.txt > "$2"/du-output-MB-sorted.txt
sort -hr "$2"/du-output-GB.txt > "$2"/du-output-GB-sorted.txt
echo '[du extended report]:';
cat "$2"/du-output-GB-sorted.txt
# Strip each line down to its leading number, then sum the column with bc
echo -ne "Total (GB): " && perl -pe 's/^(\d+\.\d+|\d+)\w*.*/$1/' "$2"/du-output-GB-sorted.txt | paste -sd+ | bc
echo ""
cat "$2"/du-output-MB-sorted.txt
echo -ne "Total (MB): " && perl -pe 's/^(\d+\.\d+|\d+)\w*.*/$1/' "$2"/du-output-MB-sorted.txt | paste -sd+ | bc

I’m not sure what more I will need from it going forward, so it may not get much love in the way of improvements.  Since it would be nice to have it on each system I’m using, I may convert it to a single script that has no need to generate temporary text files.  
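For what it's worth, here is a rough, untested sketch of what that temp-file-free version might look like: hold du's output in shell variables instead of report files, and let awk replace the perl/paste/bc pipeline for the totals (the script name is hypothetical):

#!/bin/bash
# dux2.bash (hypothetical) - same report, no temporary files
target="${1:?usage: dux2.bash <directory>}"
output=$(du -sh "$target"/* 2>/dev/null)

# Split and sort the GB-sized and MB-sized entries in memory
gb=$(echo "$output" | grep -E '^[0-9.]+G' | sort -hr)
mb=$(echo "$output" | grep -E '^[0-9.]+M' | sort -hr)

echo '[du extended report]:'
echo "$gb"
echo -n "Total (GB): "
echo "$gb" | awk '{ total += $1 } END { print total + 0 }'
echo ""
echo "$mb"
echo -n "Total (MB): "
echo "$mb" | awk '{ total += $1 } END { print total + 0 }'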

Nuff said!  Hopefully you find either the script, or portions of it useful in some way!

For convenience, I’ve also added this script as a public gist so you can download dux.bash from github.

Cheers!

Our Galaxy is a Seed that Will Eventually Grow into its Own Universe

The universe is accelerating away from the center of the Big Bang.

The universe is cooling down, because galaxies are moving away from each other.
The number of stars in the sky will diminish over time, there will eventually be a few, then there will be none. This is the current theory (paraphrased) held by many scientists today, typically referred to as heat death.

I’m no scientist, but I like to visualize.  Read this article: Speculative Sunday: Can a Black Hole Explode?

I was inspired, in particular by this image:

This artist’s impression shows the remains of a star that came too close to a supermassive black hole. Extremely sharp observations of the event Swift J1644+57 with the radio telescope network EVN (European VLBI Network) have revealed a remarkably compact jet, shown here in yellow. – ESA/S. Komossa/Beabudai Design

 

The above image is an artist's rendition of the results of the data received from an “earth-sized radio telescope”. The detail is specific, even if interpreted. What I’m seeing here is a pattern: spiral falling / contraction (gravity), with a projection of material out of the north and south poles. This projection from the black hole is likely directly related to the consumption of the star, which we see visualised as the star being smeared in a spiral around the singularity.

This is the pattern. Gravity pulls things in on one “plane” and creates a jet stream at the north and south poles of the black hole.  The jet stream is comprised of particles of matter that have been deflected, or have narrowly escaped being captured by the black hole, only to be accelerated away at high speed again.  Now this particular aspect of how black holes function is very interesting, because the process heats up space, to the point where it could potentially create or influence the creation of stars within a galaxy.  Think about that for a moment.

To create a star, or star-system, you don’t need THE Big Bang.  You don’t need super-galaxies, or galaxies, or star-systems.  What you need is a black hole.  Every star that dies turns into a black hole (or a neutron star, then a black hole).  You just need a black hole to create a star, and planets, and there, I suggest, life?

My hypothesis is this: even if all our galaxies are moving away from each other over billions of years, and even though light and heat will diminish, new stars, new galaxies, and new universes will be created, just as the “first” one was.  And this dimension will continue on for other new life forms to grow and learn and figure this all out all over again.

Watch Cosmos: A Spacetime Odyssey if you have no idea what I’m talking about, then come back to this article.

http://www.space.com/18893-black-hole-jets-similarities.html
http://www.thephysicsmill.com/2015/06/14/speculative-sunday-can-a-black-hole-explode/
https://en.wikipedia.org/wiki/Neutron_star

Note: After writing this, I read up on Hawking Radiation, and found that black holes do die if they don’t feed (on other stars); they will eventually evaporate.  This is kind of poetic.

rbenv and multiple local ruby version installs (like perlbrew)

When I was using Perl as my primary development language, I had a platform of tools in place to make my Perl development fun and productive. This included tools like Perl::Dancer, DBIx::Class, cpanm, and perlbrew. Perlbrew was a tool I used to maintain multiple versions of Perl in my local development environment, so that I could test my code against multiple Perl and module versions to ensure that it worked on the largest range of platforms (and to avoid dependency-related bugs).

This allowed me to run my code against Perl 5.10, 5.12, 5.14, and so on, each with their own module-base, fully isolated from each other.

Now I’m working with many different tools these days, and haven’t had the opportunity to work with other languages to the extent that I’ve worked with Perl, but I have been playing with Ruby and Golang. Using Ruby, I immediately thought that I would like to play with multiple versions of Ruby without altering the ‘system’ ruby on my workstation. A quick search for ‘perlbrew for ruby’ led me to rbenv, which seems to be exactly what I was looking for.

Some examples of how rbenv works:

# list all available versions:
$ rbenv install -l
 
# install a Ruby version:
$ rbenv install 2.0.0-p247
 
# Sets a local application-specific Ruby version by writing the version name to a .ruby-version file in the current directory.
$ rbenv local 1.9.3-p327
 
# Sets the global version of Ruby to be used in all shells by writing the version name to the ~/.rbenv/version file.
$ rbenv global 1.8.7-p352
 
# Sets a shell-specific Ruby version by setting the RBENV_VERSION environment variable in your shell
$ rbenv shell jruby-1.7.1
 
# Lists all Ruby versions known to rbenv, and shows an asterisk next to the currently active version.
$ rbenv versions
  1.8.7-p352
  1.9.2-p290
* 1.9.3-p327 (set by /Users/sam/.rbenv/version)
  jruby-1.7.1
  rbx-1.2.4
  ree-1.8.7-2011.03
 
# Displays the currently active Ruby version, along with information on how it was set.
$ rbenv version
1.9.3-p327 (set by /Users/sam/.rbenv/version)
 
# Displays the full path to the executable that rbenv will invoke when you run the given command.
$ rbenv which irb
/Users/sam/.rbenv/versions/1.9.3-p327/bin/irb
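To tie these together, here is roughly what per-project version switching looks like in practice (a sketch only: the project path is made up, and it assumes the version in question has already been installed with rbenv install):

$ cd ~/projects/myapp
$ rbenv local 1.9.3-p327   # writes .ruby-version into this directory
$ cat .ruby-version
1.9.3-p327
$ ruby -v                  # the rbenv shim now resolves to the local version
ruby 1.9.3p327 ...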

How to Backup an Ubuntu Desktop (12.04, 14.04)

Source: http://askubuntu.com/questions/9135/how-to-backup-settings-and-list-of-installed-packages

Warning: Read about caveats in the link above before use

#-------------------------------------------------------

## The backup script
 dpkg --get-selections > ~/Package.list
 sudo cp -R /etc/apt/sources.list* ~/
 sudo apt-key exportall > ~/Repo.keys
 rsync --progress /home/`whoami` /path/to/user/profile/backup/here
## The Restore Script
 rsync --progress /path/to/user/profile/backup/here /home/`whoami`
 sudo apt-key add ~/Repo.keys
 sudo cp -R ~/sources.list* /etc/apt/
 sudo apt-get update
 sudo apt-get install dselect
 sudo dpkg --set-selections < ~/Package.list
 sudo dselect

#-------------------------------------------------------

Who this is for: users who make normal, regular use of their computer, have done minimal or no configuration outside their home folder, and have not messed up startup scripts or services; users who want their software restored to how it was when they installed it, with all customizations kept in their home folder.

Who this will not fit: server geeks, power users with software installed from source (restoring the package list might break your system), and users who have changed the startup scripts of applications to better fit their needs. Caution: there is a big chance any modifications outside your home folder will be overwritten.

Expressing Your Authority May Be Working Against You

It doesn’t matter whether you are a senior engineer, a team lead, or an IT manager – eventually you will encounter the situation.  A meeting or discussion that becomes slightly more animated than usual.  Opinions are strong, and it is clear that consensus will not be found on this particular contentious issue today.   As a senior engineer, team lead, or manager, it is fair and understood that sometimes you will have to make a call one way or the other.   This article is not about whether or not you should make that call.  This article is about how to make that call.

Let’s say, for example, that you are in a meeting with many of your direct reports, and these direct reports may be working on different aspects of the same project – or they may be on different teams, still working toward the successful completion of a specific project.  There is a contentious concern, perhaps on the complexity around a specific problem where deadlines need to be set.  Opinions are being vocalized, and the volumes of those voices are getting louder.  There doesn’t seem to be a clear way to reason out the differences of opinion at the moment.  People are being blamed, fingers are being pointed.  You are the team lead/manager.  What do you do?

Well, let’s look at what you should not do, with some suggestions on how you might handle these situations differently:

  1. Do Not Swear
    • It may seem to you that swearing at a meeting to get the attention of your team is either hip, cool, contemporary, or resonant with authority, but you would be dead wrong.
    • Anyone who really wants to succeed, and wants their teams and their company to succeed, will always want to bring positivity to the table.  By swearing (and I mean anything that is obviously vulgar, saying something like “what the fuck”), you are tarnishing the respect that your direct reports may have had for you.
    • With you being in a senior position, your direct reports look up to you, and will often try to mimic your mannerisms and the method by which you work (without full context of course), and they will replicate these mannerisms upon interactions with other teams and team members.
    • If you are swearing because you are highly frustrated, and simply lost control, then that is another matter that you need to address, immediately.
    • Apologize – If you do swear, communicate to your team that you are indeed frustrated, and did not mean to offend anyone.  Apologize sincerely to the whole team, and this will immediately regain any respect you may have lost, since you are showing the team that you are responsible for your actions, and are willing to concede when you’ve made a mistake.  This takes courage, and is a great example to set for your team.
  2. Do Not Raise Your Voice
    • There are many situations where raising your voice might be appropriate, for example to get everyone’s attention so that a meeting can begin.  Context is very important.
    • However, raising your voice for the sake of making a point (or to invalidate a point being made by someone else), or to express your authority will only back-fire, as you will lose the respect of those to whom you are trying to make your point.
    • Silence is golden – if you need to visibly show your disappointment or disagreement with an individual or a decision being made at a meeting, then the best thing to do is to be quiet.  Stand up, and hold your hand out as if you are pushing something away from you (think Neo in the Matrix).  Make it visible that you have something to say, or that you disagree, or would like to take the discussion off-line.  Your teams will respect you even more if you are able to command the attention of a room with silence.  Any fool can get attention by being loud and abrasive.
    • Again, by raising your voice, you are setting an example for others to do the same as well.  Your team members will take your cue and start to build a paradigm around how they see you acting and reacting, and they will do the same – believing either that this is what it takes to be successful, or that this is how YOU would rather interact.  They may even raise their voice against you in the very same meeting, with the misguided belief that you would see this as a positive characteristic in them.  Do not perpetuate this line of thinking.  If you are able to command a room with silence, then everyone else will follow suit and become silent, at which point a real and valuable conversation can once again be had.
  3. Do Not Perpetuate Fact-less Finger-pointing
    • Just because someone on your team makes a claim against another, doesn’t mean it is true.  If one team member claims that they are in a bad situation, or that they “are blocked” by another team or individual, do not simply jump on that finger-pointing train.  This is the equivalent of joining a pitch-fork mob against a monster which you didn’t know existed only a few minutes ago.  As a leader, you should be critical of all information coming your way, especially the hearsay that tends to happen when a second party is criticising a third.  It is a purely reactive method of dealing with people and situations, and it does more harm than good.
    • Ask questions – but from the perspective of information-gathering, not finger-pointing.  What this means is that you are taking ‘people’ out of the picture, and instead are looking at ‘facts’ (current status and configuration, time-stamps, and corroborating evidence), rather than just taking those who claim that the ‘sky is falling’ at their word.
    • If you are going to address someone who is to be the defendant of a particular criticism, don’t ask them “Did you do (or not do) x?”.  Instead of encouraging them to be open about the obstacles which have prevented them from completing a certain task, this puts people on the defensive.  Try instead to be on their side.  If you are sincerely interested in achieving success for all teams, and for the entire company, and not just for yourself or your team, then show this by being helpful.  Make statements like “What can I do to help move x along?”, or “Can we spend a few moments to break down this objective into smaller tasks?  Perhaps I or someone from my team can assist with moving this along?”.  This kind of questioning puts the person being criticised in a position to ask for, and accept, help if they need it.  If it is simply a matter of prioritization, something the person hadn’t gotten around to just yet, or if they simply lost sight of the task – they will once again be aware that the task needs attention.  They may even be embarrassed that you are offering to assist them with such a simple task, and will openly concede that they’ve simply lost sight of it, and would likely resolve the situation right away to avoid further embarrassment.
    • Bring people together.  Be an example to the person raising the issue or making the criticism by bringing together the parties involved so that there can be a quick and constructive dialogue about current obstacles or perceived road-blocks.  Show people how to solve problems without escalation, so that they can perpetuate a positive methodology around people-handling, and so that they themselves can become positive role-models that others can aspire to.
    • If you instead believe that perpetuating unfounded criticism and finger-pointing is a good thing, and that is all you believe you can or should do; then all you will end up doing is to make people feel alienated.  Those who are being criticised will go on the defensive, and they will likely want to avoid interacting with you (or anyone else on the finger-pointing bandwagon) going forward.  This does nothing to improve collaboration within or between teams.  Your organization and your company will suffer because of it.

Getting upset at your direct reports, raising your voice in order to re-claim a conversation, or simply ignoring input from specific people is a sure-fire way to diminish your reputation and earned respect across your entire team.  For the most part, private sector IT including software development, systems administration, and project management, is all thought-work.  It is important to be aware of and to understand how much psychology plays a part in the success of a team or organization.  Positivity breeds positivity, and the inverse is true as well.

Ansible Playbooks – Externalization and Deduplication


Externalization and Deduplication

Developers who understand the concepts of modularity and deduplication should immediately recognize the power behind being able to include settings and commands from external files.   It is seriously counter-productive to maintain multiple scripts or playbooks that have large blocks of code or settings that are exactly the same.   This is an anti-pattern.

Ansible is a wonderful tool, however it can often be implemented in counter-productive ways.  Let’s take variables, for example.

Instead of maintaining a list of the same variables across multiple playbooks, it is better to use Variable File Separation.

The Ansible documentation provides an excellent example of how to do this.  However I feel that the reasoning behind why you would want to do it falls short in describing the most common use-case, deduplication.

The documentation discusses the possible needs around security or information sensitivity.  I also believe that deduplication should be added to that list.  Productivity around how playbooks are managed can be significantly increased if implemented in a modular fashion using Variable File Separation, or vars_files.   This, by the way, also goes for use of the include_vars module.
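To illustrate, here is a minimal sketch (the file names, group name, and variables below are invented for this example, not taken from any real playbook):

# common_vars.yml - shared values, maintained in exactly one place
---
app_user: deploy
server_1_ip: 192.168.1.10

# webservers.yml - one of several playbooks pulling in the shared file
---
- hosts: webservers
  vars_files:
    - common_vars.yml
  tasks:
    - name: Render the app config using the shared variables
      template:
        src: app.conf.j2
        dest: "/home/{{ app_user }}/app.conf"

Any playbook that needs app_user or server_1_ip simply lists common_vars.yml under vars_files, and the value only ever has to change in one file.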

Here is a list of reasons why you should immediately consider a deduplication project around your Ansible playbook variables:

Save Time Updating Multiple Files

This may seem like a no-brainer, but depending on the skills and experience of the person writing the playbook, this can become a significant hindrance to productivity.   Because of Ansible’s agent-less and decentralized manner, playbooks can be written by anyone who wants to get started with systems automation.  Often, these can be folks without significant proficiencies in programmer-oriented text editors such as Vim, Emacs, or Eclipse – or with bash scripting experience around command-line tools like awk, sed, and grep.

It is easy to imagine a Java developer without significant Linux command-line experience opening up one playbook at a time, and modifying the value for the same variable, over and over… and over again.

The best way for folks without ninja text-editing skills to stay productive is to deduplicate, and store common variables and tasks in external files that are referenced by multiple playbooks.

Prevent Bugs and Inconsistent Naming Conventions

In a perfect world, everyone would understand what a naming convention is.  All our variables would be small enough to type quickly, clear enough to convey their purpose, and simple enough that there would never be a misspelling or typo.  This is rarely the case.

If left unchecked, SERVER_1_IP can also become SERVER1_IP, Server_1_IP, and server_1_ip – all different variable names across multiple files, referencing the same value for the exact same purpose.

This mess can be avoided by externalizing this information in a shared file.

Delegate Maintenance and Updates to Variables That Change Frequently

In some environments, there may be playbook variables that need to change frequently.  If these variables are part of some large all-encompassing playbook that only some key administrators have access to be able to modify, your teams could be left waiting for your administrator to have free cycles available just to make a simple change.  Again, deduplication and externalization to the rescue!  Have these often-changing variables externalized so that users who need these changes immediately can go ahead and commit these changes to very specific, isolated files within your version control system that they have special rights to modify.

Cleaner Version Control History (and therefore Audit History)

If you have the same variables referenced by multiple files, and you make changes to each of those files before you commit them to version control, then your version control history can become a complete mess.  Your version control history will show a change to a single value affecting multiple files.  If you come from a software development background, and are familiar with the concept of code reviews, then you can appreciate being able to look at a simple change to a hard-coded value (or a constant), and see that it only affects one or two files.

I hope the reasons above convince some of you to start browsing your playbook repositories for possible candidates for deduplication.  I really believe that such refactoring projects can boost productivity and execution speed for individuals and teams looking to push changes faster while minimizing obstacles around configurations shared by multiple systems.  Send me a note if this inspires you to start your own deduplication project!

Examples of recursion, in Perl, Ruby, and Bash

This article is in response to the following question posted in the Perl community group on  LinkedIn:

I’m new to PERL and trying to understand recursive subroutines. Can someone please explain with an example (other than the factorial ;) ) step by step, how it works? Thanks in Advance.

Below, are some very simplified code examples in Perl, Ruby, and Bash.

A listing of the files used in these examples:

blopez@blopez-K56CM ~/hello_scripts 
$ tree .
 ├── hello.pl
 ├── hello.rb
 └── hello.sh
0 directories, 3 files
blopez@blopez-K56CM ~/hello_scripts $


Recursion example using Perl:

– How the Perl script is executed, and its output:

blopez@blopez-K56CM ~/hello_scripts $ perl hello.pl "How's it going!"
How's it going!
How's it going!
How's it going!
^C
blopez@blopez-K56CM ~/hello_scripts $

– The Perl recursion code:

#!/usr/bin/env perl
use Modern::Perl;
my $status_update = $ARGV[0]; # get script argument

sub hello_world
{
    my $status_update = shift; # get function argument
    say "$status_update";
    sleep 1; # sleep, or eventually crash your system
    &hello_world( $status_update ); # execute myself with argument
}

&hello_world( $status_update ); # execute function with argument


Recursion example using Ruby:

– How the Ruby script is executed:

blopez@blopez-K56CM ~/hello_scripts 
$ ruby hello.rb "Doing great!"
Doing great!
Doing great!
Doing great!
^Chello.rb:7:in `sleep': Interrupt
    from hello.rb:7:in `hello_world'
    from hello.rb:8:in `hello_world'
    from hello.rb:8:in `hello_world'
    from hello.rb:11:in `<main>'
blopez@blopez-K56CM ~/hello_scripts $

Note: In Ruby’s case, stopping the script with CTRL-C returns a bit more debugging information.

– The Ruby recursion code:

#!/usr/bin/env ruby
status = ARGV[0] # get script argument

def hello_world( status ) # define function, and get script argument
    puts status
    sleep 1 # sleep, or potentially crash your system
    return hello_world status # execute myself with argument
end

hello_world status # execute function with argument

Recursion example using Bash:

– How the Bash script is executed:

blopez@blopez-K56CM ~/hello_scripts $ bash hello.sh "..nice talking to you."
..nice talking to you.
..nice talking to you.
..nice talking to you.
^C
blopez@blopez-K56CM ~/hello_scripts $

– The Bash recursion code:

#!/usr/bin/env bash

mystatus=$1 # get script argument

hello_world() {
    mystatus=$1 # get function argument
    echo "${mystatus}"
    sleep 1 # breathe between executions, or crash your system
    hello_world "${mystatus}" # execute myself with argument
}

hello_world "${mystatus}" # execute function with argument
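One thing worth adding: the three examples above recurse forever by design, which is why each run ends with CTRL-C. A real-world recursive subroutine normally includes a base case, a condition under which it stops calling itself. Here is a minimal Perl sketch of that idea (the countdown example is mine, not from the original question):

#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';

# countdown: a recursive subroutine WITH a base case
sub countdown
{
    my $n = shift;
    return if $n < 1;    # base case: stop recursing
    say $n;
    countdown( $n - 1 ); # recursive case: call myself with a smaller argument
}

countdown(3); # prints 3, 2, 1 and then returns normally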

Managing Your SMTP Relay With Postfix – Correctly Rejecting Mail for Non-local Users


I manage a few personal mail relays that I use for relaying my own mail and for experimentation purposes (mail logs are a great source of unique and continuously flowing data that you can use to try out different ideas in GUI, database, or parser development).  One of them was acting up recently.  I got a message from my upstream mail-queue host saying that they’ve queued up quite a bit of mail for me over the last few weeks, and that I should investigate, as they do want to avoid purging the queue of valid mail.

Clearly I wanted to avoid queuing up mail on a remote server that is intended for my domain, and so I set out about understanding the problem.

What I found was that there was a setting in my /etc/postfix/main.cf that, although technically valid, was incorrect for the role this mail server was playing.  Specifically, the mail server was supposed to be rejecting email completely for non-local users, instead of just deferring it with a “try again later” message.

In this case, I’m using Postfix v2.5.5. The settings that control this configuration in /etc/postfix/main.cf are as follows:

  • unknown_local_recipient_reject_code
  • local_recipient_maps

local_recipient_maps

local_recipient_maps defines the accounts that this mail server will accept and relay mail for. All other accounts will be “rejected” by the mail server.

However, how rejected mail is treated by Postfix depends on how it is configured, and this was the problem with this particular server.

For Postfix, it is possible to mark a message as “rejected”, but actually have it mean “rejected right now, but maybe not permanently, so try again later”. This “try again later” will cause the e-mail message to be queued on the upstream server, until it reaches some kind of retry time-out and delivery is once again attempted. Of course this will fail again, and again.

This kind of configuration is great for testing purposes, because it allows you to test the same messages over and over again without losing them, or to queue them up so that they can be reviewed to ensure they are indeed invalid e-mail messages. However this is not the state you want your mail server to be in permanently. At some point once things are ready for long-term (production) use, you want your mail server to actually reject messages permanently.

unknown_local_recipient_reject_code

That is where unknown_local_recipient_reject_code comes in. This configuration property controls what the server means when it “rejects” a message. Does it mean right now, or permanently?

The SMTP server response code to reject mail permanently is 550, and the code to reject mail only temporarily is 450.

Here is how you would configure Postfix to reject mail only temporarily:

unknown_local_recipient_reject_code = 450

And here is how you set Postfix to reject mail permanently:

unknown_local_recipient_reject_code = 550

In my case, changing the unknown_local_recipient_reject_code from 450 to 550 is what solved the problem.

In summary, if you ever run into an issue with your Postfix mail server where you believe mail is set to be REJECTED but it still seems to be queuing up on your up-stream mail relay, double-check the unknown_local_recipient_reject_code.

# Local recipients defined by local unix accounts and aliases only
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
 
# 450 (try again later), 550 (reject mail)
unknown_local_recipient_reject_code = 550
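If you want to verify or change this on a live server, the postconf utility is handy (this assumes a standard Postfix install, where postconf -e edits main.cf in place):

# Show the current value
$ postconf unknown_local_recipient_reject_code
unknown_local_recipient_reject_code = 550

# Set it in main.cf, then tell Postfix to pick up the change
$ sudo postconf -e 'unknown_local_recipient_reject_code = 550'
$ sudo postfix reload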

References
http://www.postfix.org/LOCAL_RECIPIENT_README.html
http://www.postfix.org/postconf.5.html#unknown_local_recipient_reject_code

Bad advice on “free advice”

Cross-post from LinkedIn, in response to How Seeking ‘Free’ Works Against Our Career Success:

I cannot completely agree here. There are many who offer free advice that also happens to be good advice. Alternatively, it is important for advice seekers to learn how to distinguish between good and bad advice by learning to think critically about the information they are receiving – by asking deeper, probing questions. Every answer received should lead to further questions. While I do agree that it is important to learn how to be independent and make your own way in this world (as in the example of parents encouraging children to pay for their own education), I do not see how this directly relates to giving or receiving free advice, or how free advice (as suggested in this article) can be considered to be bad advice without further inquiry. With regard to the job seeker asking for his/her resume to be reviewed, that was simply a lazy request. You cannot help those who are not willing to put in the effort to help themselves, regardless of whether or not your advice is free.

Stephen Colbert Interviews Neil deGrasse Tyson at Montclair Kimberley Academy – 2010-Jan-29

Cross-post from LinkedIn, in response to Stephen Hawking: Black Holes May Not Have ‘Event Horizons’ After All:

So relevant: http://www.youtube.com/watch?v=YXh9RQCvxmg Stephen Colbert interviews Dr. Neil deGrasse Tyson. The entire interview (starts about 6 mins in) is just a wholly wonderful discussion. I wish more people would watch it, over and over again. Dr. Tyson tries to elaborate on the very same topic (current understanding of black holes). Simply engrossing and inspiring. The interview is long, but the elaboration of black holes starts about 1hr 6 mins into the video. Enjoy!

Beautiful people do not just happen.

“The most beautiful people we have known are those who have known defeat, known suffering, known struggle, known loss, and have found their way out of the depths. These persons have an appreciation, a sensitivity, and an understanding of life that fills them with compassion, gentleness, and a deep loving concern. Beautiful people do not just happen.”
― Elisabeth Kübler-Ross

My Little Angel

We took our daughter, Phoebe, to a photo shoot recently and I was just in awe at the results.  So many photos of her that looked almost surreal!  The photographer did an excellent job!

Eventually I’ll post the rest of them, but for now here are a few of my favourites.  Yes my dear friends, I made this :)

[Photos: phoebe1, phoebe2, phoebe3]

My Fun With Necrolinguaphilia

Last night I attended a talk given by Dr. Damian Conway (of Perl Best Practices fame) titled “Fun With Dead Languages”.  Although this is a talk that Damian had given previously, it is the first time that I heard it, and I’m so glad I did!

I was the first to arrive at the Mozilla office building at 366 Adelaide, and so was able to score a sweet parking spot right across the street (no small feat in downtown Toronto).

I arrived and introduced myself to Damian as he was preparing for his delivery shortly before a herd of approximately 70 hackers (according to Mozilla) from all language and computing backgrounds started pouring through the meeting room doors to be seated.

Damian has a very energetic style of presentation, and was able to hold our attention while covering everything from the virtual extinction of the Gros Michel Banana, to the benefits and efficiencies of stack-based programming (using PostScript as an example).  He compares many, very different languages including Befunge, Brainfuck, Lisp, and Piet, and suggests that a great place to look for new ideas is what he calls the “Language Morgue”, where he includes languages such as Awk, Prolog, Cobol… and even C++ as examples of dead languages and language paradigms.

Mr. Conway also dived into excruciating detail on how the Latin natural language can be used as an effective computer programming language, and has even gone so far as to write a module called Lingua::Romana::Perligata, which he has made available on the CPAN.

I also had the special treat of sitting right behind Sacha Chua who brilliantly sketched notes of the entire talk in real-time.  I haven’t had the pleasure of formally meeting Sacha just yet (didn’t even say “hello”, my bad!) as I didn’t want to distract her.  Aside from having my mind blown by Damian’s talk, I was also being mesmerized by Sacha’s artistic skills, and so I do feel somewhat justified in keeping my mouth shut just to absorb everything that was going on right in front of me (front-row seats FTW!).

[Sketch notes: “20130806 Fun with Dead Languages – Damian Conway”, by Sacha Chua]

Sacha has made her “Fun With Dead Languages” sketch notes publicly available on her blog for everyone to review and enjoy, and has placed it under a Creative Commons license, so please share freely (and drop her a note to say “thanks!”).

Overall, I learned a lot from this talk and appreciate it immensely.  The energy of the audience made the discussion that much more enjoyable.  If you are interested in programming languages or language theory in general, I suggest you attend this talk the next time Damian decides to deliver it (or find a recording if one happens to be available?).  Damian Conway’s insights and humorous delivery are well worth the brainfuck ;)

Maybe Big Brother Isn’t As Bad as You Think..

Cross-post from LinkedIn, in response to Maybe Big Brother Isn’t As Bad as You Think:

“This is a future Orwell could not have predicted. And Big Brother may turn out to be a pretty nice guy.” I respectfully disagree. As others have noted, there is (and always will be) a huge asymmetry in the information being shared and consumed as far as “Big Brother” and state surveillance is concerned. The “sharing” in this case is one-way. Only those in power would have the ability to view and make sense of the data.

Your argument that we “choose to share data” because we get something in return is flawed. Most people do not choose to share the kind of data that we are referring to here, otherwise it would be done freely and intentionally, and the secretive information gathering we are witnessing would not be taking place. Even the information we do share “intentionally” is shared, for the most part, by many of us who do not pay attention to, or truly consider, the ramifications of the many disclaimers, license agreements, and privacy policies that we agree to on a daily basis. What we get in return, as you suggest, is far from a fair compromise.

This one-way “sharing” means that those who are in power have not only the ability to collect this information, but also the tools and the ability to analyse this data and generate statistics that the rest of us have no choice but to consume as facts. Aside from the ability to collect and “make sense of” the data, on our behalf – those in power also have the ability to limit and restrict infrastructure and resources in order to manipulate the “facts” at the source. For example, the ability to manipulate DNS or shut down ISPs to prevent the dissemination of data – effective censorship. Many people have been detained or persecuted (or worse) simply for “sharing” their thoughts and beliefs.

How can you make an anti-Orwellian argument, a case *for* “Big Brother”, and suggest that this kind of sharing can be good and benefit us all equally, when the vast amount of information we are talking about can be controlled from source to audience by such a small percentage of the population? I suggest you pay attention to the thoughts and many works of notable individuals such as Noam Chomsky, Glenn Greenwald, and Lawrence Lessig, and perhaps reconsider your position on this matter. I am currently reading Greenwald’s latest book “With Liberty and Justice for Some: How the Law Is Used to Destroy Equality and Protect the Powerful”. I am sure you would find it most enlightening.

For those more visually/audibly inclined: “Noam Chomsky & Glenn Greenwald – With Liberty and Justice For Some”

http://www.youtube.com/watch?v=v1nlRFbZvXI

Canadian government is ‘muzzling its scientists’

An article talking about the issue of climate change research, and how governments may be actively preventing the findings from reaching the general public.

“I suspect the federal government would prefer that its scientists don’t discuss research that points out just how serious the climate change challenge is.”

This reminds me of the pseudo documentary “The Age of Stupid” that discusses the many ways that humanity has been warned about the quickly approaching dangers of climate change.

http://www.youtube.com/watch?v=DZjsJdokC0s

 

New Acer Iconia Tab Acquired

Picked up an Android Iconia Tab over the weekend. Now to load up the latest draft of Modern Perl. Installed an SSH client and TweetDeck, both of which seem to work nicely. Still getting used to the touch-based on-screen keyboard, but that will take some time (I'm typing this post on the tablet right now, so forgive the lack of attention to detail).

I came across a very cool keyboard alternative called 8pen, which I’ll be looking into shortly. The primary reason I bought the tablet was for reading books; I was originally planning to get the Kobo Vox, but after playing with it in person, it didn’t impress me that much.

Now Listening: Deadmau5 – Faxing Berlin (Grifta Dubstep Remix)

Now that I’ve updated my Arch Linux 64 desktop with some newer packages, I’ve come across a few surprises. For one, XBMC works! I was so peeved when I couldn’t get it to work before. XBMC is an awesome media player for Linux (and the original Xbox).

The latest releases of XBMC include a visualization plugin that I’ve been trying to get my hands on for the longest time, called ProjectM. No other media player that I was comfortable installing had a ProjectM plugin, including VLC and Totem.

Anyway, I now finally have XBMC running (and I can even run it in a window under Xmonad, which looks freaking awesome).

Happy Friday!

The Movie Review: Limitless

So I watched Limitless the other day on the advice of a good friend.  (Spoiler alert) It was awesome.

Limitless, starring Bradley Cooper as Edward “Eddie” Morra, explores a world where a pharmaceutical company has manufactured pills that, when ingested, allow the imbiber to use the full capabilities of his or her brain.  If we currently only use 20% of our brains, then just one of these pills would allow you to use 100% of your brain power within 30 seconds of swallowing it, and the effects last for almost a day.

What makes this movie exciting for me is the believability of the powers that Eddie is endowed with, not to mention a great performance by the cast.  When Eddie takes his first pill, he essentially gains instant access to every single memory that he has.  Everything that he has learned, overheard, or glanced at briefly, becomes a strength and an intuition that he quickly begins to realize and exploit.  Not only does Eddie have access to all of his memories, including all of the Bruce Lee movies he has ever watched, but he also instantly gains the muscle-memory required to accomplish these feats which he has never trained for.

Eddie’s senses become sharper, and some type of time-dilation occurs which allows him to instantly absorb the smallest details about the environment around him.  He becomes intensely motivated to challenge himself by doing seemingly impossible things, like learning to fluently speak new languages in a matter of days.  He can mentally profile people he’s just met with great accuracy in a matter of seconds, allowing him to masterfully manipulate people and conversations to his advantage.  But the most important “ability” that he gains, in my opinion, is the drive to go out into the world and do something.

You see, before Eddie took the pill, he was an unmotivated, unsuccessful writer with one failed relationship behind him, and another crumbling right in front of him.  He was always behind on his rent.  He had perpetual writer’s block, which prevented him from even starting to write the novel he’d been promising his publisher for several weeks; and even though he was a reasonably healthy guy, with a place to live (for the time being), he always looked homeless whenever he went out in public.  Eddie was a mess before the pill.  However, after taking the pill, and given the ability to master his own mind, Eddie was no longer afraid of anything.  He could quickly visualize possible solutions to any unexpected situation he was facing, and address the situation with ease.  Eddie completely turned his life around and began to accomplish things he would never have dreamed of.

I expect that fear is an element of life that can hold us all back from reaching our full potential, but only if we let it.

Eventually, Eddie figures out how to maintain his newfound mental mastery without the need to take the pills, and realizes that he has the entire world at his finger-tips.  However the only thing Eddie wants to do, more than anything else, is share his abilities with the rest of the world so that everyone can experience what it feels like, to be limitless.

It should be obvious that I think Limitless is an awesome movie, and I believe many of you will enjoy it as well.

 

 

Understanding the Concepts of Classes, Objects, and Data-types in Computer Programming


Every once in a while I get into discussions with various people about computer programming concepts such as objects, classes, data-types, methods, and functions.

Sometimes these discussions result in questions about the definition of these terms from those who have some programming experience, and sometimes these questions come from those who have no background in computer science or familiarity with programming terminology whatsoever.

I usually attempt to answer these questions using various metaphors and concepts that I feel the individual or group may be able to relate to.

It is quite possible that even my own understanding of these concepts may be incorrect or incomplete in some way. For the sake of reference and consistency, I am writing this brief article to explore these concepts in the hope that it will provide clarity into their meaning and purpose.

 

So, what is a data-type?

Some languages, like C, have a strict set of built-in data-types. Other languages, like C++ and Java, offer the developer the ability to create their own data-types that have the same privileges as the data-types built into the language itself.

A data-type is a strict set of rules that govern how a variable can be used.

A variable with a specific data-type can be thought of in the same way as material things in the real world. Things have attributes that make them what they are. For example, a book has pages made of paper that can be read. A car is generally understood to be an automobile that has four wheels and can be used for transport. You cannot drive a book to the grocery store, in the same sense that you cannot turn the pages of a car.


 

Data-types in computer programming may include examples such as:
Object    Type
======    ====
Int       number, no decimal places
Float     large number with decimal places
Char      a plain-text character

 

More familiar, real-world examples may include:
Object    Type
======    ====
Bucket    strong, can hold water, has a handle
Balloon   fragile, can hold a variable amount of air, elastic, portable
Wheel     round, metal, rubber, rolls

 

In languages like C++, there are core data-types such as the ones found in C. However, C++ also offers developers the ability to create their own data-types, which makes the language much more flexible. We more commonly refer to a user-defined data-type by the more popular term: class.

In C++, a class is a user-defined data-type [1]. That’s all it is. It provides the developer the ability to create a variable (or object) with specific attributes and restrictions, in the same way that writing “int dollars = 5;” creates an object called “dollars” whose attribute is to hold a value which is strictly an integer. In the real world, a five-dollar bill cannot be eaten (technically), and it cannot be driven like a car to a grocery store (even though that’s where it will likely end up).

An object is a variable that has been defined with a specific data-type. A variable is an object when it is used as an instance of a class, or when it contains more than just data. An object in computer programming is like an object in the real world, such as a car or a book. There are specific rules that govern how an object can be used, which are inferred by the very nature of the object itself.

The nature of computer programming means that developers have the ability to redefine objects, for example making the object “book” something that can be driven. In the real world however, we know that you can call a car a book, but it’s still a car. The core understanding of what a car is has been ingrained within us. Although “car” is simply a three-letter word (a symbol, or label), there are too many people and things in the world that depend on the word “car” having a specific definition. Therefore, objects in the real world cannot be as easily redefined as their counterparts in computer programming (however, it is still possible [2]).


So what is a method?

In computer programming, we have things called “functions”. A function is an enclosed set of instructions which are executed in order to generate (or “return”) a specific result or set of results. You can think of a function as a mini program. Computer programs are often created by piecing together multiple functions in interesting and creative ways.

Functions have many names, and can also be referred to as subroutines, blocks, and methods. A method is a function which is specifically part of a class, or user-defined data-type, which makes a method an attribute of an object – something that the object is capable of doing.  Just like in the real world, methods can be manipulated and redefined for an object, but not for that object’s base class.  A book can be used to prop up a coffee table, but that does not mean that books are, by definition, meant to be used in this way.  The short sketch below ties these ideas together.
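To make all of this concrete, here is a minimal C++ sketch (the Book class and its read() method are invented purely for illustration):

#include <iostream>
#include <string>

// A user-defined data-type: a class with an attribute and a method.
class Book {
public:
    Book(const std::string& title) : title_(title) {}

    // A method: a function belonging to the class -
    // something a Book object is capable of doing.
    void read() const {
        std::cout << "Reading \"" << title_ << "\"..." << std::endl;
    }

private:
    std::string title_; // an attribute of the object
};

int main() {
    Book moby("Moby Dick"); // an object: a variable whose data-type is Book
    moby.read();            // fine: books can be read
    // moby.drive();        // compile error: Book has no drive() method,
                            // just as you cannot drive a book
    return 0;
}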


Enlightenment achieved!

I’m not really sure where I was going with all of this, but the above should be sufficiently lucid.   I was motivated to write this after recently referencing Bjarne Stroustrup’s “The C++ Programming Language”.  If you’ve ever asked yourself the question “what is an object?” or “what is a class?”, then the above descriptions should serve as a useful reference.

[1] “The C++ Programming Language – Special Edition”, page 224.

[2] For example, the definition of “phone” has been redefined several times in recent history, from the concept of a dial-based phone, to cell phones, to modern smart-phones.

 

New drug could cure nearly any viral infection

Most bacterial infections can be treated with antibiotics such as penicillin, discovered decades ago. However, such drugs are useless against viral infections, including influenza, the common cold, and deadly hemorrhagic fevers such as Ebola.

Now, in a development that could transform how viral infections are treated, a team of researchers at MIT’s Lincoln Laboratory has designed a drug that can identify cells that have been infected by any type of virus, then kill those cells to terminate the infection.

In a paper published July 27 in the journal PLoS One, the researchers tested their drug against 15 viruses, and found it was effective against all of them — including rhinoviruses that cause the common cold, H1N1 influenza, a stomach virus, a polio virus, dengue fever and several other types of hemorrhagic fever.

The drug works by targeting a type of RNA produced only in cells that have been infected by viruses. “In theory, it should work against all viruses,” says Todd Rider, a senior staff scientist in Lincoln Laboratory’s Chemical, Biological, and Nanoscale Technologies Group who invented the new technology.

Because the technology is so broad-spectrum, it could potentially also be used to combat outbreaks of new viruses, such as the 2003 SARS (severe acute respiratory syndrome) outbreak, Rider says.

Other members of the research team are Lincoln Lab staff members Scott Wick, Christina Zook, Tara Boettcher, Jennifer Pancoast and Benjamin Zusman…[ Full Article ]

Trac and Reverse Proxies Using Apache 2

So today I was working on setting up a trac instance behind a reverse proxy, and found that it could be done quite easily under Apache 2. Apache 2 allows you to set up your reverse proxy in a variety of different ways.

The main thing to note here is that you should never try to ProxyPass/ProxyPassReverse from port 443 to port 443. Instead, pass from port 443 to port 80 on the back-end.  Allow the proxy to handle the SSL authentication for the browser.
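Here is roughly what that looks like as an Apache 2 virtual host (a sketch only: the hostname, certificate paths, and back-end address are placeholders, and it assumes mod_proxy, mod_proxy_http, and mod_ssl are enabled):

<VirtualHost *:443>
    ServerName trac.example.com

    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/trac.example.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/trac.example.com.key

    # Terminate SSL here at the proxy, and talk plain HTTP to the back-end
    ProxyPreserveHost On
    ProxyPass        / http://trac-backend.internal:80/
    ProxyPassReverse / http://trac-backend.internal:80/
</VirtualHost>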

If you require a secure connection from the back-end web server to the front-end proxy server, then utilize a VPN, or an SSH tunnel to pass the data; but don’t waste your time trying to make the web server behind your proxy handle the SSL authentication with the browser.  The amount of time you’ll spend trying to figure it out, you would have saved to go do something else more fun, like set up a Cacti instance to monitor bandwidth consumption within your home network :)

So Whats New?

Looks like the Canada Post strike is over, for now.

I don’t know if it would be wise for Canada Post to strike again.  It’s crossed my mind several times, as I’m sure it has crossed the minds of others, that we have enough technology in place, including wireless technologies and the Internet, that we could essentially do away with Canada Post and do everything digitally. It may seem like Canada Post offers a unique and relatively inexpensive service; but with them out of the picture, new solutions would start springing up in no time.

Document imaging would become a hot topic again (whatever happened to the popularity of personal document scanners?). Encryption would once again become an active topic of discussion.  Companies like Purolator and Fedex simply cost too much, and so more “do-it-yourself” type solutions would begin to flourish quickly.

I’ve been listening to some of my favorite tunes lately that I haven’t heard in a while, such as INXS, Portishead, Red Hot Chili Peppers, and so on.  What I’ve found very curious is that many of these great bands and their awesome songs are ones I have been listening to ever since I was a clueless child.  Songs by Counting Crows, Seal, REM, and Evanescence were songs that really spoke to me, and validated, for the most part, that this world we live in truly is a crazy place.  These songs were just the universe’s way of telling me that it totally agrees with me.

I think some of the best lyrics in any song have come from Evanescence.  They were a band that was so ahead of its time, it isn’t even funny.  There are only a few other bands that I would place in that category, such as The Red Hot Chili Peppers and Sneaker Pimps.  Hell, I’d even put No Doubt in that list; their “Tragic Kingdom” album was absolutely freaking awesome!  But there simply aren’t that many bands today that I’d put in that category... but maybe I just don’t listen to enough new bands to know?

What bands do you like to listen to that you’d rank up there with a title of “One of the Greatest Bands of All Time”?

Coke Zero taste in my mouth, and my legs are rather numb

So I’m sitting here in front of my 28-inch I-INC monitor again, after the longest while. A couple of months ago we decided to mount the I-INC on the wall in our bedroom, and move the 32-inch Samsung LCD TV into the basement to serve as my replacement monitor. This was a big mistake. The reasons were relatively justified. We recently bought a 55-inch Samsung plasma to replace our now puny 32-inch Samsung in the living-room -and- I wanted to start using my original XBOX again because of another personal quest to start playing DDR again with my Red-Octane dance pad.

The Red-Octane dance pad didn’t work with the 360, so I had to use the original XBOX. However, the original XBOX did not support HDMI connectors, so I had to find a way to connect it to my monitor in the basement. Since this didn’t seem feasible, I decided that replacing my 28-inch I-INC LCD monitor with my 32-inch TV would be a good idea. Again, bad idea.

Anyway, after several weeks of utilizing an awkward 1366×768 resolution with 75 DPI fonts... I decided it was time to admit I was wrong, and revert to using the 28-inch monitor (I put the 32-inch in the bedroom, like I should have done all along).

Stop The Meter On Your Internet Use

Bell Canada and other big telecom companies can now freely impose usage-based billing on independent Internet Service Providers (indie ISPs) and YOU.

This means we’re looking at a future where ISPs will charge per byte, the way they do with smart phones. If we allow this to happen Canadians will have no choice but to pay more for less Internet. Big Telecom companies are obviously trying to gouge consumers, control the Internet market, and ensure that consumers continue to subscribe to their television services.

This will crush innovative services, Canada’s digital competitiveness, and your wallet.

We need to stand up for the Internet.

Sign the Stop The Meter petition!

Visit http://openmedia.ca/meter

Thoughts on Religion and The World Today

I’m starting to think that more and more people are falling back on religious beliefs and traditions because they begin to believe that once you become secular or atheist, the world has no meaning, that things are as they are, and there is no mystical purpose to life or the universe, and this makes them feel uncomfortable.

I think it’s comforting to know that the universe, our galaxy, our solar system, our world, and yes, all of us are travelling down a singular path which we cannot break away from; that all events, actions, and reactions which take place in the physical world, that have taken place in history, are simply things that we must experience in order to move forward.

It is inevitable that we will all die one day, and that the generations that come after us will all make the same mistakes (not necessarily in the same way, but in the same vein). The world population is continually growing, and it is already way past Earth’s capacity to support human life without the assistance of war and fear (or so it would seem, the way our world leaders have been acting and reacting lately). These are all things people are starting to realize in this age of free information and global connectivity. The reason why the majority of the world cannot cope with this realization is that our actions are still heavily influenced by emotions, faith, and romanticism.

Although we have so much technology at our disposal, we are still not a species that allows logic and reason to dictate our choices. Instead, emotions, faith, greed, and misinformation are still the primary movers of the world.

Perhaps we will one day become an enlightened species, and finally refer to ourselves as "the Human race" rather than as white or black, or by where we were born or raised.  Maybe one day we will become a single nation that finally takes direction from logic, reason, and innocent curiosity; but that time seems very far away, and it's quite likely we will destroy ourselves before we ever get there.. but I guess it's still a possibility that we will overcome all our hatred and prejudices, and finally become a peaceful and free world.   Likely you and I won't be around to see that day, but you never know.  Have faith.

Someone Hacked My Web Server

So I just found out that someone hacked into my web server recently. I'm not sure when they started poking around, but I saw some significant activity around December 17th.

I say "hacked" instead of "cracked" or defaced/damaged because I haven't seen any actual malicious activity, just a lot of WordPress PHP scripts with some eval code appended to the top.

I've backed up the hacked PHP scripts and will try to decipher them later. The scripts are basically a bunch of PHP evals of statements encoded in base64. I could probably decode them quickly via some Perl scripts that change all the evals to print statements, and then use the equivalent of perltidy to make the output readable, in order to find out exactly what they were trying to do.
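As a first pass, something like the following rough sketch could pull the payloads out for inspection. It assumes the injected code follows the common eval(base64_decode('...')) pattern, and that GNU grep (with PCRE support) and coreutils base64 are available; none of this is taken from the actual scripts I backed up.

# Extract each base64 payload from eval(base64_decode('...')) calls
# and decode it, printing a separator between payloads for readability.
grep -oP "base64_decode\('\K[^']+" style.css.php | while read -r payload; do
    printf '%s\n' "$payload" | base64 -d
    echo
    echo '--- next payload ---'
done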

In any event, it's likely they still have some backdoor set up, since it seems they got root access, or at least the ability to write a file with root permissions into the DocumentRoot, so I'll have to keep an eye out.

I've upgraded the system to Lenny (it was running Debian Etch, so yeah, I'm at fault there) and upgraded WordPress from 2.3.x to the latest 3.0.4. I blew away the hacked WordPress instance and installed WordPress from scratch, along with some other things that will hopefully alert me when something like this happens again.

To the person responsible: I'm not running this web server as some sort of proof of my skill set. It's simply a personal web server which I host myself, because I don't much like being pushed into cloud computing and hosting my stuff on Blogspot, etc. I think it's good to be able to host your own applications and services, and not be tied down to services provided by Big Corp.

My message to you is this: use your head. It was probably fun to break in, but actions like this are what's driving people to embrace cloud computing with open arms. Eventually Big Corp will be hosting everyone's data, and the freedom you have to learn how to manipulate PHP will be non-existent, because we'll all be stuck in AOL hell.

If you want to do something cool and interesting, why not try using your skills to help people?

If anyone's interested in taking a look at the encoded PHP, here's what looks to be one of the primary sources: style.css.php.  Note that the script is basically all on a single, really long line, so some text editors may have trouble reading it.

Work on CPAN-API and Perl Modules Indexing

Since the last TPM meeting in October, some of the TPM members have been working diligently to improve the CPAN search experience by re-architecting CPAN search from the bottom up. I've joined the design team in the hope of providing the Perl community with a much-improved CPAN experience.

As most Perl developers are aware, search.cpan.org is great for finding useful libraries and modules, but horrible at providing any significant information relating modules to each other, or useful meta-information and statistics that could be used to make better decisions about which modules to use, let alone deploy in a production environment.

If you are interested in taking part in the CPAN-API community project, please contact me, or visit the CPAN-API project site on GitHub.

CPAN-API: https://github.com/CPAN-API/cpan-api/wiki/
Toronto Perl Mongers: http://to.pm.org/

Jolicloud is of the Awesome

So if you haven't heard of Jolicloud http://www.jolicloud.com/, then you need to download and install it now. It's an Ubuntu-based OS (a self-proclaimed "Cloud OS") specifically designed for netbooks, and it rocks. I have Jolicloud installed on my Samsung N110 netbook, and I use it for everything from e-mail to games (snes9x) to work (Perl/Vim/Screen). Now what makes Jolicloud super-awesome is that it treats web applications no differently from desktop applications. Each application gets its own icon on the "Home screen". It's also socially aware: it can connect to Facebook and lets you search for applications and/or people who've used those applications, so you can ask them questions and get guidance on the tools you're trying to use.

The interface is very slick: big icons and a clean method of navigating to the lesser-used functions of a standard Gnome/Ubuntu desktop. The most-awesomest part is that once you load up a terminal, you have full access to the command line and all Ubuntu apt repositories.

Jolicloud isn't just for netbooks! I've also installed it on my Acer Veriton (similar to the Acer Revo), and am using it as a media-center OS. Jolicloud also comes in an "express" edition, which allows you to install it under Windows, where it shows up as a secondary OS option in the Windows boot loader.

If you have a netbook, nettop, or any light-weight PC, then install Jolicloud. Highly recommended.

Diving in with Arch Linux

The Problem

The time had come for me to "invest" in some new equipment. The only workstation I had up until recently was a company laptop, which I toted back and forth between VMware and my home office. I keep my personal documents on removable storage, but that doesn't really help when you don't have a workstation at home, so lugging the laptop around with me was a must.

Don't get me wrong, I have systems, but they're mostly systems running as file servers or VM servers doing various little things automagically, and they're not sitting in or around my actual desk at home. Also, my printers/scanner at home relied on my laptop to be of any use. It was time to fix all of these unnecessary grievances.

The Dilemma

For the past couple of weeks I had been thinking hard about what kind of system I should buy. Should it be a powerful, modern desktop with lots of RAM and a screaming CPU/video card? Or a powerful laptop that would serve as a desktop replacement? Should I go for the i3, i5, or i7 processor? ATI or Nvidia? What kind of budget was I looking at?

All of these questions plagued me for quite some time (okay, not that long.. I admit I'm a bit of an impulse buyer). I spent long enough thinking about it, though, that I realized a lot about myself. For one, I'm not a gamer. I was once one of those people who would have been ecstatic about getting next-gen hardware to play the latest power-hungry games. Not any more.. and not for quite some time. The last time I seriously played a PC game was about 3 years ago. When I say "seriously", I mean played it regularly, at least once a week. The last game I was really into was X2, one of the X-Series space combat simulators.

Since then, I've touched a game or two on and off, but the fascination is no longer there. I'm more interested in hacking around with open source programs and becoming a better developer.

The Solution

Since I wasn't going to focus on gaming and media for my new system purchase, this opened the door to a lot of possibilities I hadn't considered, and some unexpected disappointments. First off, since I wasn't going to plop $1,000.00 down on a single system, I could theoretically buy two lower-powered systems. And that's exactly what I did. Instead of going with a full-fledged desktop or powerhouse laptop, I ended up buying an Acer Aspire Revo net-top unit as my primary workstation, and a Samsung N110 netbook as my portable. This Revo is awesome! It has 2GB of RAM (upgradable to 4), an Nvidia ION chipset, and a dual-core Intel Atom processor. I didn't need much more than this for my purposes; it was perfect. The Samsung N110 was also a nice little beauty. It has an Atom processor with integrated graphics, but it's light, pretty, and has a 6-cell battery, which means it lasts about 8 hours even during heavy use. I quickly installed Jolicloud Express on the netbook, and have been very happy with it ever since.

The Disappointment (In myself)

The disappointment I experienced was not in the purchase or the hardware, but in the fact that I hesitated for a long time before wiping away the Revo's bundled OS to install Linux. The OS the Revo came with was Windows 7 Home edition (the Samsung netbook had Windows XP). I haven't used Windows as my primary OS in years, and have always been proud to say so. For the last four years or so I've been using Ubuntu (severely customized), and before that I was using Debian. When I initially started up the Revo, I was impressed by the Windows 7 user interface, the nice colors, the clean lines, and the fact that it picked up all my hardware. It was pretty simple, and I have to admit, somewhat alluring. I'm definitely not the little hacker I was 10 years ago. I don't have time to spend hours hacking away into the wee morning just on my OS configuration. At least that's what I keep telling myself :)

But then it dawned on me: that's how I got where I am today, by embracing curiosity and defying conformity. That's where life becomes interesting and liberating, and that's where I feel at home. All these thoughts of nostalgia hit me shortly after I hard-reset the Revo, and Windows 7 came up saying "system wasn't shut down correctly – use safe mode", or something to that effect. There was no way for me to tell it to disregard the unclean boot-up; it persisted in asking me to go into safe mode, with no specific explanation. That's when I wished I had a grub prompt or command line handy.

Diving in with Arch Linux

After coming to my senses, I realized that I definitely didn't want to go back to using Ubuntu for my primary workstation. For a while I've been feeling like Ubuntu has lost much of its luster, especially for someone like me who loves simplicity and minimalism over fancy GUIs and extra features. I wanted a distribution that tried to stay on the cutting edge with its packages, but didn't screw with the basics of Linux so much that you're forced to use GUIs to configure your OS. Debian didn't fit the bill here. It's great for servers, rock solid, but it's not that great if you want a cutting-edge workstation without having to compile things from source.

After a little bit of reading and browsing distrowatch.com, I came across Arch Linux (which I'd previously known of only in passing), and decided that this was the OS for me. The Arch Linux community is small enough that I could make some significant contributions without much effort. The distribution itself is awesome, very clean, and very minimal. And most importantly, all of the system configuration is done by editing text files!

The Arch Way

Installing Arch was relatively straightforward (IMO). It wasn't as easy as installing, say, Linux Mint, but it also wasn't as hard as installing Debian 3.0. The installation dialogs were ncurses-based, but they were descriptive, linear, and logical. When it came time to supply arguments for the initial configuration of the packages I selected, they were all text files (very well documented) which I could edit with vim! I think at that point I knew I was about to embrace a distribution that was very special indeed. This distro was going back to basics; instead of flooding its users with fancy splash screens and progress meters, it was doing what was needed, and doing it well.

I still have a lot more to learn about Arch, as I've only scratched the surface so far. I've been able to set up sound (with ALSA) and video using the latest Nvidia drivers. I've configured Xmonad as my window manager, and have gotten a handle on how to query and install packages with "pacman", the Arch package manager. The only real problem I've run into is setting up CUPS for my printers. After some research, it seems the version of CUPS (1.4.3-2) in the Arch packages is the latest available from the CUPS source repository, and I may have to downgrade it (to 1.3.9) in order to get my printers working.
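For anyone new to pacman, these are roughly the commands involved. The first three are standard usage; the downgrade line is just a sketch, and the exact cached package filename is hypothetical (it will vary by system and architecture):

pacman -Ss cups        # search the repositories for "cups"
pacman -Qi cups        # show details of the installed cups package
pacman -S xmonad       # install a package (run as root)
# Downgrading generally means installing an older package file directly,
# for example from the local package cache:
pacman -U /var/cache/pacman/pkg/cups-1.3.9-1-i686.pkg.tar.gz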

Overall, I like what I see so far with Arch. I expect to post more on my experiences with it as I learn.

Synchronizing Xymon's 'bb-hosts' Configurations

I’ve been using Xymon (formerly known as “Hobbit”) for a long time.  In most situations, I have Xymon running in a redundant configuration, with two or more instances of Xymon working together to monitor a network.

Even though Xymon works very well, a single change to the primary server's configuration file (the "bb-hosts" file) means you have to make the same change to the 'bb-hosts' files of all the other Xymon instances.

There are some creative ways to eliminate the drudgery of updating all these files whenever a change to the primary file is necessary.  One method, for example, would be to export the master file via NFS to all the other Xymon server instances, and have each of those instances sym-link to the primary 'bb-hosts' file from its local mount of that NFS export.

I don’t like the NFS export idea, because if the primary server has a problem, and the NFS export is no longer available, all instances of Xymon would break – badly.

Instead, I've opted to automatically synchronize the 'bb-hosts' file across all Xymon instances using Apache, cron, a sym-link, and a simple bash script.

Here's how it works:

  • On the primary Xymon instance, sym-link '/home/xymon/server/etc/bb-hosts' to '/var/www/bb-hosts' (a one-liner; see the sketch after this list).
  • On the other Xymon instances, run a bash script which grabs the primary server's 'bb-hosts' via HTTP, does some simple comparisons, and over-writes the local 'bb-hosts' if changes are detected.
  • Automate this script with cron.
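For the first step, assuming Apache is serving files out of /var/www (an assumption; adjust to your DocumentRoot, and note that Apache's FollowSymLinks option must be enabled), the sym-link on the primary instance is simply:

ln -s /home/xymon/server/etc/bb-hosts /var/www/bb-hosts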

Perhaps the trickiest part is the actual script used to grab, compare, and over-write the 'bb-hosts' file on the other Xymon instances.  The script below grabs the primary 'bb-hosts' file and does a simple comparison with md5sum; if it detects a change, it installs the new file and sends an e-mail notifying me that the change has occurred, along with details on what changed.

Here’s the script:

#!/bin/bash

# Fetch the primary Xymon server's bb-hosts file over HTTP, compare it
# against the local copy, and install it (with an e-mail alert) if changed.

REMOTE_BB_HOSTS="/tmp/bb-hosts"
LOCAL_BB_HOSTS="/home/xymon/server/etc/bb-hosts"
BB_HOSTS_DIFFS="/tmp/bb-hosts-diffs"

# Bail out if the download fails, so we never clobber the local file
# with an empty or partial copy.
wget http://somewebhost.domain.com/bb-hosts -qO "$REMOTE_BB_HOSTS" || exit 1

LOCAL_MD5=$(md5sum "$LOCAL_BB_HOSTS" | cut -d " " -f 1)
REMOTE_MD5=$(md5sum "$REMOTE_BB_HOSTS" | cut -d " " -f 1)

if [ "$LOCAL_MD5" != "$REMOTE_MD5" ]; then
        # Record what changed, install the new copy, and e-mail the diff.
        echo "Generated by $0" > "$BB_HOSTS_DIFFS"
        diff "$LOCAL_BB_HOSTS" "$REMOTE_BB_HOSTS" >> "$BB_HOSTS_DIFFS"
        cp "$REMOTE_BB_HOSTS" "$LOCAL_BB_HOSTS"
        mail -s "Xymon: monitor-02 bb-hosts updated" alertme@email.com < "$BB_HOSTS_DIFFS"
fi
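To automate the final step, a crontab entry along these lines on each secondary instance will do. The script path and the five-minute schedule below are placeholders, so adjust them to taste:

*/5 * * * * /home/xymon/server/ext/sync-bb-hosts.sh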

If you need a way to keep your Xymon 'bb-hosts' files in sync, something along the lines of the above script may be just what you're looking for. If you're currently accomplishing the same thing in an interesting way, please post a comment and let me know!