Category Archives: Knowledge & Reference

Category dedicated to knowledge and reference materials spanning all topics, including books, essays, and news articles. Reviews, comments, or opinion pieces may also be included.

Tiny linux disk usage reporting script – dux.bash

I decided to write a very small disk usage reporting script that provides a bit more information than using ‘du’ directly.  The script of course uses du, along with a few other command-line tools and parsing commands, to generate the little report.  It suits my needs for the moment; hopefully others out there will also find it useful.

Firstly, here is the script output to give you an idea of how to execute it, what it does, and how it looks:

$ ./dux.bash /home/jbl /home/jbl
Building extended du reports for /home/jbl in /home/jbl ...
[du extended report]:
13G     ./Personal
4.7G    ./DATA
2.4G    ./Pictures
1.4G    ./Downloads
Total (GB): 21.5
 
373M    ./core
260M    ./tmp
73M     ./game-saves
37M     ./new-music
33M     ./new-books
32M     ./vim-env
24M     ./random-tools
15M     ./stuff
Total (MB): 847

The script takes two arguments: the directory you want to analyze, and the directory where you want to store the reports.

As you will see, the script provides me with an at-a-glance look at the following characteristics of a particular directory structure:

  • Grouping and separation of larger (GB sized) directories from smaller (MB sized) directories.
  • Directories sorted by size in each group.
  • Total sizes for each group of files and directories.

With this output I can see clearly which directories are causing the most contention, and how much of an impact they have compared to other directories.

The script is very crude, and probably needs some work and error correction (accounting for files off of the root directory, etc.).  It also creates some temporary text files (used to construct the report), which is the reason for the second argument to the script.  However, for now it’s doing what I need, so I figure it’s worth sharing.  Here it is:

#!/bin/bash
echo "Building extended du reports for $1 in $2 ..."
cd "$1" || exit 1
du -sh "$1"/* > "$2"/du-output.txt
# Split the raw du output into MB-sized and GB-sized entries
grep -E '[0-9][0-9]M' "$2"/du-output.txt > "$2"/du-output-MB.txt
grep -E '[0-9]G' "$2"/du-output.txt > "$2"/du-output-GB.txt
sort -hr "$2"/du-output-MB.txt > "$2"/du-output-MB-sorted.txt
sort -hr "$2"/du-output-GB.txt > "$2"/du-output-GB-sorted.txt
echo '[du extended report]:'
cat "$2"/du-output-GB-sorted.txt
echo -ne "Total (GB): " && perl -pe 's/^(\d+\.\d+|\d+)\w*.*/$1/g' "$2"/du-output-GB-sorted.txt | paste -sd+ | bc
echo ""
cat "$2"/du-output-MB-sorted.txt
echo -ne "Total (MB): " && perl -pe 's/^(\d+\.\d+|\d+)\w*.*/$1/g' "$2"/du-output-MB-sorted.txt | paste -sd+ | bc

I’m not sure what more I will need from it going forward, so it may not get much love in the way of improvements.  Since it would be nice to have it on each system I’m using, I may convert it to a single script that has no need to generate temporary text files (a rough sketch of that idea follows below).
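Here is a rough, untested sketch of what that temp-file-free version might look like, holding the du output in shell variables instead of report files and reusing the same filters as the original. The script name and single-argument usage here are just placeholders for illustration, not the final thing:

#!/bin/bash
# dux-notmp.bash (hypothetical name) -- same style of report, no temporary files
# Usage: ./dux-notmp.bash /directory/to/analyze
target="$1"

echo "Building extended du report for ${target} ..."
echo '[du extended report]:'

# Capture the raw du output once, then filter it in memory
output="$(du -sh "${target}"/* 2>/dev/null)"
gb="$(printf '%s\n' "${output}" | grep -E '[0-9]G' | sort -hr)"
mb="$(printf '%s\n' "${output}" | grep -E '[0-9][0-9]M' | sort -hr)"

printf '%s\n' "${gb}"
echo -n "Total (GB): "
printf '%s\n' "${gb}" | perl -pe 's/^(\d+\.\d+|\d+)\w*.*/$1/g' | paste -sd+ | bc

echo ""
printf '%s\n' "${mb}"
echo -n "Total (MB): "
printf '%s\n' "${mb}" | perl -pe 's/^(\d+\.\d+|\d+)\w*.*/$1/g' | paste -sd+ | bc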

Nuff said!  Hopefully you find the script, or portions of it, useful in some way!

For convenience, I’ve also added this script as a public gist so you can download dux.bash from github.

Cheers!

Our Galaxy is a Seed that Will Eventually Grow into its Own Universe

The universe is accelerating away from the center of the Big Bang.

The universe is cooling down because galaxies are moving away from each other.
The number of stars in the sky will diminish over time; eventually there will be a few, and then there will be none. This is the current theory (paraphrased) held by many scientists today, typically referred to as heat death.

I’m no scientist, but I like to visualize.  Read this article: Speculative Sunday: Can a Black Hole Explode?

I was inspired, in particular by this image:

This artist’s impression shows the remains of a star that came too close to a supermassive black hole. Extremely sharp observations of the event Swift J1644+57 with the radio telescope network EVN (European VLBI Network) have revealed a remarkably compact jet, shown here in yellow. – ESA/S. Komossa/Beabudai Design

 

The above image is an artist’s rendition of the results of the data received from an “earth-sized radio telescope”. The detail is specific, even if interpreted. What I’m seeing here is a pattern: spiral falling / contraction (gravity), with a projection of material out of the north and south poles. This projection from the black hole is likely directly related to the consumption of the star, which we see visualised as the star being smeared in a spiral around the singularity.

This is the pattern. Gravity pulls things in on one “plane” and creates a jet stream at the north and south poles of the black hole.  The jet stream is made up of particles of matter that have been deflected, or have narrowly escaped being captured by the black hole, only to be accelerated away at high speed again.  Now this particular aspect of how black holes function is very interesting, because the process heats up space to the point where it could potentially create or influence the creation of stars within a galaxy.  Think about that for a moment.

To create a star or star-system, you don’t need THE Big Bang.  You don’t need super-galaxies, galaxies, or star-systems.  What you need is a black hole.  Every sufficiently massive star that dies collapses into a black hole (or a neutron star).  You just need a black hole to create a star, and planets, and, I suggest, life?

My hypothesis is this.  Even if all our galaxies are moving away from each other over billions of years, and even though light and heat will diminish – new stars, new galaxies, and new universes will be created, just as the “first” one was.  And this dimension will continue on for other new life forms to grow and learn and figure this all out all over again.

Watch Cosmos: A Spacetime Odyssey if you have no idea what I’m talking about, then come back to this article.

http://www.space.com/18893-black-hole-jets-similarities.html
http://www.thephysicsmill.com/2015/06/14/speculative-sunday-can-a-black-hole-explode/
https://en.wikipedia.org/wiki/Neutron_star

Note: After writing this, I read up on Hawking Radiation, and found that black holes do die if they don’t feed (on other stars); they will eventually evaporate.  This is kind of poetic.

Soft Skills: The Software Developer’s Life Manual – A Review [Audiobook]

I’m almost through the book Soft Skills: The Software Developer’s Life Manual, listening to the audiobook version.  I like it; it’s pretty good. Along with the benefit of having the author, John Sonmez, narrate his own book, he also provides a lot of commentary, discussion, and elaboration.  At first I thought it was annoying that the author would go off on a tangent every once in a while, then say “back to the book” and continue the verbatim reading.

However, later I realized that the commentary and discussion were worth the tangents.  There are several very valuable tidbits of information in this book, such as references to and discussion of the Pomodoro Technique and KanbanFlow. The book touches on a very broad range of topics, from software development methodologies to personal finance management tips.  It tries to help its readers see the habits and actions (or lack thereof) that are required to achieve a high degree of quality, consistency, and professionalism in their careers.

One of the things to keep in mind is that this book discusses a lot of tools and techniques that are documented external to the book itself.  The author frequently references his company’s website where the reader can find more information.

This book and the topics it discusses are very relevant to the success of an aspiring software developer.  Worth a read!

rbenv and multiple local ruby version installs (like perlbrew)

When I was using Perl as my primary development language, I had a platform of tools in place to make my Perl development fun and productive. This included tools like Perl::Dancer, DBIx::Class, cpanm, and perlbrew. Perlbrew was a tool I used to maintain multiple versions of Perl in my local development environment, so that I could test my code against multiple Perl and module versions to ensure that it worked on the largest range of platforms (and to avoid dependency-related bugs).

This allowed me to run my code against Perl 5.10, 5.12, 5.14, and so on, each with its own module base, fully isolated from the others.

I’m working with many different tools these days, and haven’t had the opportunity to work with other languages to the extent that I’ve worked with Perl, but I have been playing with Ruby and Golang. Using Ruby, I immediately thought that I would like to play with multiple versions of Ruby without altering the ‘system’ ruby on my workstation. A quick search for ‘perlbrew for ruby’ led me to rbenv, which seems to be exactly what I was looking for.

Some examples of how rbenv works:

# list all available versions:
$ rbenv install -l
 
# install a Ruby version:
$ rbenv install 2.0.0-p247
 
# Sets a local application-specific Ruby version by writing the version name to a .ruby-version file in the current directory.
$ rbenv local 1.9.3-p327
 
# Sets the global version of Ruby to be used in all shells by writing the version name to the ~/.rbenv/version file.
$ rbenv global 1.8.7-p352
 
# Sets a shell-specific Ruby version by setting the RBENV_VERSION environment variable in your shell
$ rbenv shell jruby-1.7.1
 
# Lists all Ruby versions known to rbenv, and shows an asterisk next to the currently active version.
$ rbenv versions
  1.8.7-p352
  1.9.2-p290
* 1.9.3-p327 (set by /Users/sam/.rbenv/version)
  jruby-1.7.1
  rbx-1.2.4
  ree-1.8.7-2011.03
 
# Displays the currently active Ruby version, along with information on how it was set.
$ rbenv version
1.9.3-p327 (set by /Users/sam/.rbenv/version)
 
# Displays the full path to the executable that rbenv will invoke when you run the given command.
$ rbenv which irb
/Users/sam/.rbenv/versions/1.9.3-p327/bin/irb
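As a quick illustration of how the pieces fit together, here is a minimal, hypothetical per-project example. It assumes one of the versions shown above has already been installed, that rbenv’s shims are set up in your shell, and the project directory name is just a placeholder:

# Pin a Ruby version for a single project (directory name is hypothetical)
$ cd ~/my_ruby_project
$ rbenv local 2.0.0-p247   # writes "2.0.0-p247" to ./.ruby-version
$ cat .ruby-version
2.0.0-p247
$ rbenv version            # rbenv now selects this version inside the project
2.0.0-p247 (set by /Users/sam/my_ruby_project/.ruby-version)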

How to Backup an Ubuntu Desktop (12.04, 14.04)

Source: http://askubuntu.com/questions/9135/how-to-backup-settings-and-list-of-installed-packages

Warning: Read about caveats in the link above before use

# -------------------------------------------------------

## The backup script
 dpkg --get-selections > ~/Package.list
 sudo cp -R /etc/apt/sources.list* ~/
 sudo apt-key exportall > ~/Repo.keys
 rsync --progress /home/`whoami` /path/to/user/profile/backup/here
## The Restore Script
 rsync --progress /path/to/user/profile/backup/here /home/`whoami`
 sudo apt-key add ~/Repo.keys
 sudo cp -R ~/sources.list* /etc/apt/
 sudo apt-get update
 sudo apt-get install dselect
 sudo dpkg --set-selections < ~/Package.list
 sudo dselect

# -------------------------------------------------------
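One caveat worth noting before relying on the home-directory part of this: rsync does not descend into directories unless you pass -r (or -a, which implies it), so the rsync lines above may skip most of your home folder as written.  A hedged variant of those two lines (the backup destination is still a placeholder you need to replace) might look like this:

 rsync -a --progress /home/`whoami` /path/to/user/profile/backup/here
 rsync -a --progress /path/to/user/profile/backup/here/`whoami`/ /home/`whoami`/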

Who this is for: users who make normal, regular use of their computer, have done minimal or no configuration outside their home folder, and have not messed up startup scripts or services; users who want their software restored to how it was when they installed it, with all customizations kept in their home folder.

Who this will not fit: server geeks, power users with software installed from source (restoring the package list might break your system), and users who have changed the startup scripts of some applications to better fit their needs. Caution: there is a big chance that any modifications outside home will be overwritten.

Expressing Your Authority May Be Working Against You

It doesn’t matter whether you are a senior engineer, a team lead, or an IT manager: eventually you will encounter the situation.  A meeting or discussion becomes slightly more animated than usual.  Opinions are strong, and it is clear that consensus will not be found on this particular contentious issue today.  As a senior engineer, team lead, or manager, it is fair and understood that sometimes you will have to make a call one way or the other.  This article is not about whether or not you should make that call.  This article is about how to make that call.

Let’s say, for example, that you are in a meeting with many of your direct reports. These direct reports may be working on different aspects of the same project, or they may be on different teams, still working toward the successful completion of a specific project.  There is a contentious concern, perhaps around the complexity of a specific problem for which deadlines need to be set.  Opinions are being vocalized, and the volumes of those voices are getting louder.  There doesn’t seem to be a clear way to reason out the differences of opinion at the moment. People are being blamed, fingers are being pointed.  You are the team lead/manager.  What do you do?

Well, let’s look at what you should not do, with some suggestions on how you might handle these situations differently:

  1. Do Not Swear
    • It may seem to you that swearing at a meeting to get the attention of your team is either hip, cool, contemporary, or resonant with authority, but you would be dead wrong.
    • Anyone who really wants to succeed, and wants their teams and their company to succeed, will always want to bring positivity to the table.  By swearing (and I mean anything that is obviously vulgar, saying something like “what the fuck”), you are tarnishing the respect that your direct reports may have had for you.
    • With you being in a senior position, your direct reports look up to you, and will often try to mimic your mannerisms and the method by which you work (without full context of course), and they will replicate these mannerisms upon interactions with other teams and team members.
    • If you are swearing because you are highly frustrated, and simply lost control, then that is another matter that you need to address, immediately.
    • Apologize – If you do swear, communicate to your team that you are indeed frustrated, and did not mean to offend anyone.  Apologize sincerely to the whole team, and this will immediately help you regain any respect you may have lost, since you are showing the team that you are responsible for your actions, and are willing to concede when you’ve made a mistake.  This takes courage, and is a great example to set for your team.
  2. Do Not Raise Your Voice
    • There are many situations where raising your voice might be appropriate, for example to get everyone’s attention so that a meeting can begin.  Context is very important.
    • However, raising your voice for the sake of making a point (or to invalidate a point being made by someone else), or to express your authority, will only backfire, as you will lose the respect of those to whom you are trying to make your point.
    • Silence is golden – if you need to visibly show your disappointment or disagreement with an individual or a decision being made at a meeting, then the best thing to do is to be quiet.  Stand up, and hold your hand out as if you are pushing something away from you (think Neo in the Matrix).  Make it visible that you have something to say, or that you disagree, or would like to take the discussion off-line.  Your teams will respect you even more if you are able to command the attention of a room with silence.  Any fool can get attention by being loud and abrasive.
    • Again, by raising your voice, you are setting an example for others to do the same.  Your team members will take your cue and start to build a paradigm around how they see you acting and reacting, and they will do the same – believing either that this is what it takes to be successful, or that this is how YOU would rather interact.  They may even raise their voice against you in the very same meeting, with the misguided belief that you would see this as a positive characteristic in them.  Do not perpetuate this line of thinking.  If you are able to command a room with silence, then everyone else will follow suit and become silent, at which point a real and valuable conversation can once again be had.
  3. Do Not Perpetuate Fact-less Finger-pointing
    • Just because someone on your team makes a claim against another, doesn’t mean it is true.  If one team member claims that they are in a bad situation, or that they “are blocked” by another team or individual, do not simply jump on that finger-pointing train.  This is the equivalent of joining a pitch-fork mob against a monster which you didn’t know existed only a few minutes ago.  As a leader, you should be critical of all information coming your way, especially the hearsay that tends to happen when a second party is criticising a third.  It is a purely reactive method of dealing with people and situations, and it does more harm than good.
    • Ask questions, but from the perspective of information-gathering, not finger-pointing.  What this means is that you are taking ‘people’ out of the picture and looking at ‘facts’ (current status and configuration, time-stamps, and corroborating evidence), rather than just taking those who claim that the ‘sky is falling’ at their word.
    • If you are going to address someone who is the target of a particular criticism, don’t ask them “Did you do (or not do) x?”.  This puts people on the defensive, rather than encouraging them to be open about the obstacles which have prevented them from completing a certain task.  Try instead to be on their side.  If you are sincerely interested in achieving success for all teams, and for the entire company, and not just for yourself or your team, then show this by being helpful.  Make statements like “What can I do to help move x along?”, or “Can we spend a few moments to break down this objective into smaller tasks?  Perhaps I or someone from my team can assist with moving this along?”.  This kind of questioning puts the person being criticised in a position to ask for, and accept, help if they need it.  If it is simply a matter of prioritization, something the person hadn’t gotten around to just yet, or if they simply lost sight of the task, they will once again be aware that it needs attention.  They may even be so embarrassed that you are offering to assist with such a simple task that they will openly concede they’ve simply lost sight of it, and will likely resolve the situation right away to avoid further embarrassment.
    • Bring people together.  Be an example to the person raising the issue or making the criticism by bringing together the parties involved so that there can be a quick and constructive dialogue about current obstacles or perceived road-blocks.  Show people how to solve problems without escalation, so that they can perpetuate a positive methodology around people-handling, and so that they themselves can become positive role-models that others can aspire to.
    • If you instead believe that perpetuating unfounded criticism and finger-pointing is a good thing, and that it is all you can or should do, then all you will end up doing is making people feel alienated.  Those who are being criticised will go on the defensive, and they will likely want to avoid interacting with you (or anyone else on the finger-pointing bandwagon) going forward.  This does nothing to improve collaboration within or between teams.  Your organization and your company will suffer because of it.

Getting upset at your direct reports, raising your voice in order to re-claim a conversation, or simply ignoring input from specific people is a sure-fire way to diminish your reputation and earned respect across your entire team.  For the most part, private sector IT including software development, systems administration, and project management, is all thought-work.  It is important to be aware of and to understand how much psychology plays a part in the success of a team or organization.  Positivity breeds positivity, and the inverse is true as well.

Ansible Playbooks – Externalization and Deduplication


Externalization and Deduplication

Developers who understand the concepts of modularity and deduplication should immediately recognize the power behind being able to include settings and commands from external files.   It is seriously counter-productive to maintain multiple scripts or playbooks that have large blocks of code or settings that are exactly the same.   This is an anti-pattern.

Ansible is a wonderful tool; however, it can often be implemented in counter-productive ways.  Let’s take variables, for example.

Instead of maintaining a list of the same variables across multiple playbooks, it is better to use Variable File Separation.

The Ansible documentation provides an excellent example of how to do this.  However, I feel that the reasoning behind why you would want to do it falls short of describing the most common use-case: deduplication.

The documentation discusses the possible needs around security or information sensitivity.  I believe that deduplication should be added to that list.  Productivity around how playbooks are managed can be significantly increased if they are implemented in a modular fashion using Variable File Separation, or vars_files.  This, by the way, also goes for use of the include_vars module.
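As a minimal sketch of what this looks like (the file names and variable values below are placeholders for illustration, not from a real project), a shared variables file can be referenced from any number of playbooks via vars_files:

# vars/common.yml -- shared variables, referenced by multiple playbooks
ntp_server: ntp1.example.com
deploy_user: deploy

# webservers.yml -- one of several playbooks pulling in the shared file
- hosts: webservers
  vars_files:
    - vars/common.yml
  tasks:
    - name: Show a shared variable
      debug:
        msg: "NTP server is {{ ntp_server }}"

Any playbook that includes vars/common.yml picks up a change to ntp_server automatically, which is exactly the deduplication win described below.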

Here is a list of reasons why you should immediately consider a deduplication project around your Ansible playbook variables:

Save Time Updating Multiple Files

This may seem like a no-brainer, but depending on the skills and experience of the person writing the playbook, this can become a significant hindrance to productivity.   Because of Ansible’s agent-less and decentralized manner, playbooks can be written by anyone who wants to get started with systems automation.  Often, these can be folks without significant proficiencies in programmer-oriented text editors such as Vim, Emacs, or Eclipse – or with bash scripting experience around command-line tools like awk, sed, and grep.

It is easy to imagine a Java developer without significant Linux command-line experience opening up one playbook at a time, and modifying the value for the same variable, over and over… and over again.

The best way for folks without ninja text-editing skills to stay productive is to deduplicate, and store common variables and tasks in external files that are referenced by multiple playbooks.

Prevent Bugs and Inconsistent Naming Conventions

In a perfect world, everyone would understand what a naming convention is.  All our variables would be short enough to type quickly, clear enough to convey their purpose, and simple enough that there would never be a misspelling or typo.  This is rarely the case.

If left unchecked, SERVER_1_IP can also become SERVER1_IP, Server_1_IP, and server_1_ip: all different variable names across multiple files, referencing the same value for the exact same purpose.

This mess can be avoided by externalizing this information in a shared file.

Delegate Maintenance and Updates to Variables That Change Frequently

In some environments, there may be playbook variables that need to change frequently.  If these variables are part of some large all-encompassing playbook that only some key administrators have access to be able to modify, your teams could be left waiting for your administrator to have free cycles available just to make a simple change.  Again, deduplication and externalization to the rescue!  Have these often-changing variables externalized so that users who need these changes immediately can go ahead and commit these changes to very specific, isolated files within your version control system that they have special rights to modify.

Cleaner Version Control History (and therefore Audit History)

If you have the same variables referenced by multiple files, and you make changes to each of those files before you commit them to version control, then your version control history can become a complete mess.  Your version control history will show a change to a single value affecting multiple files.  If you come from a software development background, and are familiar with the concept of code reviews, then you can appreciate being able to look at a simple change to a hard-coded value (or a constant), and see that it only affects one or two files.

I hope the reasons above convince some of you to start browsing your playbook repositories for possible candidates for deduplication.  I really believe that such refactoring projects can boost productivity and execution speed for individuals and teams looking to push changes faster while minimizing obstacles around configurations shared by multiple systems.  Send me a note if this inspires you to start your own deduplication project!

Examples of recursion, in Perl, Ruby, and Bash


This article is in response to the following question posted in the Perl community group on  LinkedIn:

I’m new to PERL and trying to understand recursive subroutines. Can someone please explain with an example (other than the factorial ;) ) step by step, how it works? Thanks in Advance.

Below, are some very simplified code examples in Perl, Ruby, and Bash.

A listing of the files used in these examples:

blopez@blopez-K56CM ~/hello_scripts 
$ tree .
 ├── hello.pl
 ├── hello.rb
 └── hello.sh
0 directories, 3 files
blopez@blopez-K56CM ~/hello_scripts $


Recursion example using Perl:

– How the Perl script is executed, and its output:

blopez@blopez-K56CM ~/hello_scripts $ perl hello.pl "How's it going!"
How's it going!
How's it going!
How's it going!
^C
blopez@blopez-K56CM ~/hello_scripts $

– The Perl recursion code:

#!/usr/bin/env perl
use Modern::Perl;
my $status_update = $ARGV[0]; # get script argument
 
sub hello_world
{
    my $status_update = shift; # get function argument
    say "$status_update";
    sleep 1; # sleep, or eventually crash your system
    &hello_world( $status_update ); # execute myself with argument
}

&hello_world( $status_update ); # execute function with argument


Recursion example using Ruby:

– How the Ruby script is executed:

blopez@blopez-K56CM ~/hello_scripts 
$ ruby hello.rb "Doing great!"
Doing great!
Doing great!
Doing great!
^Chello.rb:7:in `sleep': Interrupt
    from hello.rb:7:in `hello_world'
    from hello.rb:8:in `hello_world'
    from hello.rb:8:in `hello_world'
    from hello.rb:11:in `<main>'
blopez@blopez-K56CM ~/hello_scripts $

Note: In Ruby’s case, stopping the script with CTRL-C returns a bit more debugging information.

– The Ruby recursion code:

#!/usr/bin/env ruby
status = ARGV[0] # get script argument
 
def hello_world( status ) # define function, and get script argument
  puts status
  sleep 1 # sleep, or potentially crash your system
  return hello_world status # execute myself with argument
end
 
hello_world status # execute function with argument

Recursion example using Bash:

– How the Bash script is executed:

blopez@blopez-K56CM ~/hello_scripts $ bash hello.sh "..nice talking to you."
..nice talking to you.
..nice talking to you.
..nice talking to you.
^C
blopez@blopez-K56CM ~/hello_scripts $

– The Bash recursion code:

#!/usr/bin/env bash
 
mystatus=$1 # get script argument
 
hello_world() {
    mystatus=$1 # get function argument
    echo "${mystatus}"
    sleep 1 # breathe between executions, or crash your system
    hello_world "${mystatus}" # execute myself with argument
}
 
hello_world "${mystatus}" # execute function with argument

Managing Your SMTP Relay With Postfix – Correctly Rejecting Mail for Non-local Users


I manage a few personal mail relays that I use for relaying my own mail and for experimentation purposes (mail logs are a great source of unique and continuously flowing data that you can use to try out different ideas in GUI, database, or parser development).  One of them was acting up recently.  I got a message from my upstream mail-queue host saying that they’ve queued up quite a bit of mail for me over the last few weeks, and that I should investigate, as they do want to avoid purging the queue of valid mail.

Clearly I wanted to avoid queuing up mail on a remote server that is intended for my domain, and so I set out about understanding the problem.

What I found was that there was a setting in my /etc/postfix/main.cf that, although technically valid, was incorrect for the role this mail server was playing.  Specifically, the mail server was supposed to be rejecting email completely for non-local users, instead of just deferring it with a “try again later” message.

In this case, I’m using Postfix v2.5.5. The settings that control this configuration in /etc/postfix/main.cf are as follows:

  • unknown_local_recipient_reject_code
  • local_recipient_maps

local_recipient_maps

local_recipient_maps defines the accounts that this mail server will accept and relay mail for. All other accounts would be “rejected” by the mail server.

However, how rejected mail is treated by Postfix depends on how it is configured, and this was the problem with this particular server.

For Postfix, it is possible to mark a message as “rejected”, but actually have it mean “rejected right now, but maybe not permanently, so try again later”. This “try again later” will cause the e-mail message to be queued on the upstream server, until it reaches some kind of retry time-out and delivery is once again attempted. Of course this will fail again, and again.

This kind of configuration is great for testing purposes, because it allows you to test the same messages over and over again without losing them, or to queue them up so that they can be reviewed to ensure they are indeed invalid e-mail messages. However this is not the state you want your mail server to be in permanently. At some point once things are ready for long-term (production) use, you want your mail server to actually reject messages permanently.

unknown_local_recipient_reject_code

That is where unknown_local_recipient_reject_code comes in. This configuration property controls what the server means when it “rejects” a message. Does it mean right now, or permanently?

The SMTP server response code to reject mail permanently is 550, and the code to reject mail only temporarily is 450.

Here is how you would configure Postfix to reject mail only temporarily:

unknown_local_recipient_reject_code = 450

And here is how you set Postfix to reject mail permanently:

unknown_local_recipient_reject_code = 550

In my case, changing the unknown_local_recipient_reject_code from 450 to 550 is what solved the problem.

In summary, if you ever run into an issue with your Postfix mail server where you believe mail is set to be REJECTED but it still seems to be queuing up on your up-stream mail relay, double-check the unknown_local_recipient_reject_code.

# Local recipients defined by local unix accounts and aliases only
local_recipient_maps = proxy:unix:passwd.byname $alias_maps
 
# 450 (try again later), 550 (reject mail)
unknown_local_recipient_reject_code = 550
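If you want to confirm what a running Postfix instance is actually using, postconf will print the current value of a parameter, and Postfix needs a reload to pick up changes to main.cf.  A quick check might look like this (the values shown are simply the ones from the configuration above):

# Display the current values of the two settings discussed above
$ postconf unknown_local_recipient_reject_code local_recipient_maps
unknown_local_recipient_reject_code = 550
local_recipient_maps = proxy:unix:passwd.byname $alias_maps

# After editing /etc/postfix/main.cf, tell Postfix to re-read its configuration
$ sudo postfix reload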

References
http://www.postfix.org/LOCAL_RECIPIENT_README.html
http://www.postfix.org/postconf.5.html#unknown_local_recipient_reject_code

Bad advice on “free advice”

Cross-post from LinkedIn, in response to How Seeking ‘Free’ Works Against Our Career Success:

I cannot completely agree here. There are many who offer free advice that also happens to be good advice. Alternatively, it is important for advice seekers to learn how to distinguish between good and bad advice by learning to think critically about the information they are receiving – by asking deeper, probing questions. Every answer received should lead to further questions. While I do agree that it is important to learn how to be independent and make your own way in this world (as in the example of parents encouraging children to pay for their own education), I do not see how this directly relates to giving or receiving free advice, or how free advice (as suggested in this article) can be considered to be bad advice without further inquiry. With regard to the job seeker asking for his/her resume to be reviewed, that was simply a lazy request. You cannot help those who are not willing to put in the effort to help themselves, regardless of whether or not your advice is free.

Stephen Colbert Interviews Neil deGrasse Tyson at Montclair Kimberley Academy – 2010-Jan-29

Cross-post from LinkedIn, in response to Stephen Hawking: Black Holes May Not Have ‘Event Horizons’ After All:

So relevant: http://www.youtube.com/watch?v=YXh9RQCvxmg Stephen Colbert interviews Dr. Neil deGrasse Tyson. The entire interview (starts about 6 mins in) is just a wholly wonderful discussion. I wish more people would watch it, over and over again. Dr. Tyson tries to elaborate on the very same topic (current understanding of black holes). Simply engrossing and inspiring. The interview is long, but the elaboration of black holes starts about 1hr 6 mins into the video. Enjoy!

Beautiful people do not just happen.

“The most beautiful people we have known are those who have known defeat, known suffering, known struggle, known loss, and have found their way out of the depths. These persons have an appreciation, a sensitivity, and an understanding of life that fills them with compassion, gentleness, and a deep loving concern. Beautiful people do not just happen.”
― Elisabeth Kübler-Ross

My Little Angel

We took our daughter, Phoebe to a photo shoot recently and I was just in awe at the results.  So many photos of her that looked almost surreal!  The photographer did an excellent job!

Eventually I’ll post the rest of them, but for now here are a few of my favourites.  Yes my dear friends, I made this :)

[Photos: phoebe1, phoebe2, phoebe3]

My Fun With Necrolinguaphilia

Last night I attended a talk given by Dr. Damian Conway (of Perl Best Practices fame) titled “Fun With Dead Languages“.  Although this is a talk that Damian had given previously, it is the first time that I heard it, and I’m so glad I did!

I was the first to arrive at the Mozilla office building at 366 Adelaide, and so was able to score a sweet parking spot right across the street (no small feat in downtown Toronto).

I arrived and introduced myself to Damian as he was preparing for his delivery shortly before a herd of approximately 70 hackers (according to Mozilla) from all language and computing backgrounds started pouring through the meeting room doors to be seated.

Damian has a very energetic style of presentation, and was able to hold our attention while covering everything from the virtual extinction of the Gros Michel Banana, to the benefits and efficiencies of stack-based programming (using PostScript as an example).  He compares many, very different languages including Befunge, Brainfuck, Lisp, and Piet, and suggests that a great place to look for new ideas is what he calls the “Language Morgue”, where he includes languages such as Awk, Prolog, Cobol… and even C++ as examples of dead languages and language paradigms.

Mr. Conway also dived into excruciating detail on how the Latin natural language can be used as an effective computer programming language, and has even gone so far as to write a module called Lingua::Romana::Perligata, which he has made available on the CPAN.

I also had the special treat of sitting right behind Sacha Chua who brilliantly sketched notes of the entire talk in real-time.  I haven’t had the pleasure of formally meeting Sacha just yet (didn’t even say “hello”, my bad!) as I didn’t want to distract her.  Aside from having my mind blown by Damian’s talk, I was also being mesmerized by Sacha’s artistic skills, and so I do feel somewhat justified in keeping my mouth shut just to absorb everything that was going on right in front of me (front-row seats FTW!).

20130806 Fun with Dead Languages - Damian Conway

Sacha has made her “Fun With Dead Languages” sketch notes publicly available on her blog for everyone to review and enjoy, and has placed it under a Creative Commons license, so please share freely (and drop her a note to say “thanks!”).

Overall, I learned a lot from this talk and appreciate it immensely.  The energy of the audience made the discussion that much more enjoyable.  If you are interested in programming languages or language theory in general, I suggest you attend this talk the next time Damian decides to deliver it (or find a recording if one happens to be available?).  Damian Conway’s insights and humorous delivery are well worth the brainfuck ;)

Maybe Big Brother Isn’t As Bad as You Think..

Cross-post from LinkedIn, in response to Maybe Big Brother Isn’t As Bad as You Think:

“This is a future Orwell could not have predicted. And Big Brother may turn out to be a pretty nice guy.” I respectfully disagree. As others have noted, there is (and always will be) a huge asymmetry in the information being shared and consumed as far as “Big Brother” and state surveillance is concerned. The “sharing” in this case is one-way. Only those in power would have the ability to view and make sense of the data.

Your argument that we “choose to share data” because we get something in return, is flawed. Most people do not choose to share the kind of data that we are referring to in this regard, otherwise it would be done freely and intentionally, and the secretive information gathering we are witnessing here would not be taking place. Even the information we do share “intentionally”, is done so for the most part by many of us who do not pay attention to, and truly consider the ramifications of the many disclaimers, license agreements, and privacy policies that we agree to on a daily basis. What we get in return, as you suggest, is far from a fair compromise.

This one-way “sharing” means that those who are in power have not only the ability to collect this information, but also the tools and the ability to analyse this data and generate statistics that the rest of us have no choice but to consume as facts. Aside from the ability to collect and “make sense of” the data, on our behalf – those in power also have the ability to limit and restrict infrastructure and resources in order to manipulate the “facts” at the source. For example, the ability to manipulate DNS or shut down ISPs to prevent the dissemination of data – effective censorship. Many people have been detained or persecuted (or worse) simply for “sharing” their thoughts and beliefs.

How can you make an anti-Orwellian argument, a case *for* “Big Brother”, and suggest that this kind of sharing can be good and benefit us all equally, when the vast amount of information we are talking about can be controlled from source to audience by such a small percentage of the population? I suggest you pay attention to the thoughts and many works of notable individuals such as Noam Chomsky, Glenn Greenwald, and Lawrence Lessig, and perhaps reconsider your position on this matter. I am currently reading Greenwald’s latest book “With Liberty and Justice for Some: How the Law Is Used to Destroy Equality and Protect the Powerful”. I am sure you would find it most enlightening.

For those more visually/audibly inclined: “Noam Chomsky & Glenn Greenwald – With Liberty and Justice For Some”

http://www.youtube.com/watch?v=v1nlRFbZvXI

J. Bobby Lopez – Personal Blog

This is my personal blog. It includes articles I’ve personally written, along with interesting articles I have come across or have commented on at various points in time on the web.  The articles I’ve personally written are mostly about technology and software, but often about other things as well.  I am always eager to hear different perspectives on the topics I am interested in, so please feel free to comment and share your own opinions!  If you do register for an account on this site, please also send me a separate note by e-mail, since I do get a lot of spam. Thanks!